**[Benchmark Visualization] RTX Pro 6000 vs DGX Spark - I visualized the LMSYS data and the results are interesting**

I was curious how the RTX Pro 6000 Workstation Edition compares to the new DGX Spark, so I dove into the [LMSYS benchmark data](https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/) (which tested both SGLang and Ollama). The results were so interesting that I created visualizations for them.
**GitHub repo with charts:** https://github.com/casualcomputer/rtx_pro_6000_vs_dgx_spark
# TL;DR
**RTX Pro 6000 is 6-7x faster** for LLM inference across every batch size and model tested. This isn't a small difference - we're talking 100 seconds vs 14 seconds for a 4k token conversation with Llama 3.1 8B.
# The Numbers (FP8, SGLang, 2k in/2k out)
**Llama 3.1 8B - Batch Size 1:**
* DGX Spark: 100.1s end-to-end
* RTX Pro 6000: 14.3s end-to-end
* **7.0x faster**
**Llama 3.1 70B - Batch Size 1:**
* DGX Spark: 772s (almost 13 minutes!)
* RTX Pro 6000: 100s
* **7.7x faster**
**Performance stays consistent across batch sizes 1-32.** The RTX just keeps winning by ~6x regardless of whether you're running single-user or multi-tenant.
# Why Though?

LLM inference is memory-bound: you're constantly loading the model weights from memory for every token you generate. The RTX Pro 6000 has 6.5x more memory bandwidth (1,792 GB/s) than the DGX Spark (273 GB/s), and, surprise, it's ~6x faster. The math checks out.
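You can sanity-check that claim with napkin math. A minimal sketch, assuming the usual batch-1 roofline (every generated token re-reads all the weights; real systems land somewhat below the ceiling):

```python
def max_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Roofline ceiling for batch-1 decode: every token re-reads all the weights."""
    return bandwidth_gb_s / weights_gb

WEIGHTS_GB = 8.0  # Llama 3.1 8B at FP8 is roughly 8 GB of weights
for name, bw in [("DGX Spark", 273.0), ("RTX Pro 6000", 1792.0)]:
    print(f"{name}: ~{max_tokens_per_sec(bw, WEIGHTS_GB):.0f} tok/s ceiling")

print(f"bandwidth ratio: {1792 / 273:.1f}x")  # ~6.6x, right in the observed 6-7x band
```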
**Guys my Sparx Station won't start, it just beeps. I can hear the 4.3 GB hard drive spinning but nothing else.**

https://v.redd.it/fmvhojfvorvf1
**Has anyone run this Coconut-Qwen2.5-7B successfully on llama.cpp? If so, what flags/settings worked?**

Model: https://huggingface.co/mradermacher/coconut-qwen2.5-7b-GGUF

This is a fine-tuned Qwen2.5-7B-Instruct with latent-reasoning enhancements, and I'm running it with a recent llama.cpp build, but I'm getting gibberish outputs.
I've tried:

```sh
./llama-cli -m coconut-qwen2.5-7b.Q4_K_M.gguf
./llama-cli -m coconut-qwen2.5-7b.Q4_K_M.gguf --jinja
./llama-cli -m coconut-qwen2.5-7b.Q4_K_M.gguf -p "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello, who are you?<|im_end|>\n<|im_start|>assistant"
```

Interactive with flash attention and sampling tweaks:

```sh
# note: --temp sets temperature; -t is the thread count in llama-cli
./llama-cli -m ~/Desktop/coconut-qwen2.5-7b.Q4_K_M.gguf --color -i -ngl 99 --flash-attn on --temp 0.7 --top-p 0.9 --top-k 40 --repeat-penalty 1.1 --ctx-size 8192
```
Everything so far has given gibberish outputs. Are there any other prompt formats or llama.cpp flags worth trying?
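I'm also considering ruling out hand-rolled prompt formatting with llama-cpp-python, which applies the chat template embedded in the GGUF metadata automatically. An untested sketch (assumes `pip install llama-cpp-python`):

```python
from llama_cpp import Llama

llm = Llama(model_path="coconut-qwen2.5-7b.Q4_K_M.gguf",
            n_ctx=8192, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, who are you?"},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
# gibberish here too would point at the GGUF conversion rather than the prompt format
```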
**Quantized Qwen3-Embedder and Reranker**

Hello,
Is there any quantized Qwen3-Embedder or Reranker (4B or 8B) for vLLM out there? Can't really find one that is NOT in GGUF.
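For context, the closest I've found is vLLM's on-the-fly FP8 path, which skips GGUF entirely. A sketch of what I mean (task and flag names per my reading of the vLLM docs; verify against your installed version):

```python
from vllm import LLM

# task="embed" selects the pooling runner; quantization="fp8" quantizes the
# original bf16 weights at load time -- no GGUF anywhere in the pipeline
llm = LLM(model="Qwen/Qwen3-Embedding-4B", task="embed", quantization="fp8")

outs = llm.embed(["what is the capital of France?"])
print(len(outs[0].outputs.embedding))  # embedding dimensionality
```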
I built a "Google Maps AI Assistant" to help you find places, get reviews, and explore for free | 2 | 2025-10-17T23:57:38 | https://huggingface.co/spaces/llamameta/gmaps-search | balianone | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o9h5ph | false | null | t3_1o9h5ph | /r/LocalLLaMA/comments/1o9h5ph/i_built_a_google_maps_ai_assistant_to_help_you/ | false | false | 2 | {'enabled': False, 'images': [{'id': '6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=108&crop=smart&auto=webp&s=703fc7189a9b2a2180ee95bc5f1a7c3fb2b0c3b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=216&crop=smart&auto=webp&s=4cf8cc29fee0f7d3fc4565553747e0a7eb3dff68', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=320&crop=smart&auto=webp&s=465179ed9f2abb28b1f6101e2c5a81d918efbd31', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=640&crop=smart&auto=webp&s=35807a739ed2dcc3bfc09e4b9b86675dfa8f278b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=960&crop=smart&auto=webp&s=993fff76f79dbec080933c869c3825b2ae231ca4', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?width=1080&crop=smart&auto=webp&s=8d9ed70ef63013e1479021198c36d90578880b4c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6OaFHbm6LRlWPxOv6l5fu8dRp-MghsW-ursJi0vAFGA.png?auto=webp&s=d4820692513061d03cc4d5e3d75af90b78cb1d9a', 'width': 1200}, 'variants': {}}]} | ||
**Looking for help with AI system and software design**

I am looking at the possibility of getting into AI.
I would like to use AI for the following:
Story generation, AI photo creation, AI clothing swaps, and even a local "ChatGPT"-style assistant. It also needs to have uncensored models for story generation.

No idea where to start. Here is my current system: Ryzen 7 3700X on an X570 motherboard, 32 GB RAM, 12 GB RX 6700 XT.

Not wanting to dump mega money into this; prefer under $1k.

Was thinking an old server, possibly 16+ cores with 64 GB RAM and a 24 GB RTX 3090 or an old V100 card.

Also, what software am I going to need? I do not know Linux, so a GUI-based OS is needed. (Too damn old and no patience to learn a new OS.) Ubuntu? Windows 10 Pro?
**It would be nice to have a super lightweight, LM Studio-like utility that lets you construct a llama-server command**

So, I use LM Studio on Linux, but if you run `nvtop` or `nvidia-smi` you will notice LM Studio is a VRAM eater itself, taking more than a gig for its own UI. Not everyone is a llama.cpp expert, and I am not either. If there existed a utility that was super lightweight, helped with managing models and remembering parameters, and even let us copy the generated command for the settings we pick in the UI, that would be awesome.
Maybe someone can vibe code it too as a fun project.
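The core of what I'm imagining is tiny; the work is all UI and model management. A sketch of the command-builder part (flag names taken from llama-server's help output; the settings dict is made up):

```python
import shlex

FLAG_MAP = {
    "ctx_size": "--ctx-size",
    "gpu_layers": "--n-gpu-layers",
    "temp": "--temp",
    "port": "--port",
}

def build_llama_server_cmd(model_path: str, settings: dict) -> str:
    """Turn UI settings into a copy-pasteable llama-server command."""
    args = ["llama-server", "-m", model_path]
    for key, flag in FLAG_MAP.items():
        if key in settings:
            args += [flag, str(settings[key])]
    return shlex.join(args)

print(build_llama_server_cmd(
    "models/qwen3-8b-q4_k_m.gguf",
    {"ctx_size": 8192, "gpu_layers": 99, "temp": 0.7},
))
```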
**Diagnosing layer sensitivity during post-training quantization**

I have written a blog post on using layerwise PSNR to diagnose where models break during post-training quantization.
Instead of only checking output accuracy, layerwise metrics let you spot exactly which layers are sensitive (e.g. softmax, SE blocks), making it easier to debug and decide what to keep in higher precision.
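For anyone who wants the metric itself before clicking through, here's the idea in miniature (a simplified sketch; the post has the full definition):

```python
import numpy as np

def layer_psnr(ref: np.ndarray, quant: np.ndarray) -> float:
    """PSNR between a layer's float output and the quantized model's output
    for the same layer on an identical input batch."""
    mse = np.mean((ref - quant) ** 2)
    peak = np.abs(ref).max()  # dynamic range of the reference activations
    return float(10.0 * np.log10(peak**2 / (mse + 1e-12)))

# layers with low PSNR (softmax, SE blocks, ...) are candidates to keep in FP16
```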
If you’re experimenting with quantization for local or edge inference, you might find this interesting:
[Quantization – Diagnosing layer sensitivity during post training quantization](https://hub.embedl.com)
Would love to hear if anyone has tried similar layerwise diagnostics.
**LLM recommendation**

I have a 5090. I need an AI that can do 200+ tokens/sec on an LLM. The AI gets clean text from a job post, in multiple languages, then arranges that text into JSON format that goes into the DB. Tables have 20+ columns like:
Title
Job description
Max salaray
Min salary
Email
Job Requirements
City
Country
Region
etc...
It needs to finish every job post in a couple of seconds. If necessary I could buy a second 5090 or go with dual 4090s. I considered Mistral 7B Q4, but I am not sure if it is effective. Is it cheaper to do this through an API with something like Grok 4 Fast, or do I buy the rest of the PC? This is long term: at one point it will have to parse 5,000 texts a day. Any recommendation for an LLM and maybe another PC build, all ideas are welcome 🙏
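For reference, the shape of what I'm doing is roughly this, with constrained JSON decoding carrying most of the weight (Ollama's Python client; the model tag is a placeholder, not a recommendation):

```python
import json
import ollama  # assumes a local Ollama server; any OpenAI-compatible endpoint works too

FIELDS = ["title", "job_description", "min_salary", "max_salary",
          "email", "requirements", "city", "country", "region"]

def parse_job_post(text: str) -> dict:
    resp = ollama.chat(
        model="mistral:7b-instruct-q4_K_M",  # placeholder pick -- benchmark your own
        messages=[{
            "role": "user",
            "content": (f"Extract these fields from the job post as JSON "
                        f"({', '.join(FIELDS)}); use null for missing values.\n\n{text}"),
        }],
        format="json",  # constrains the output to valid JSON
    )
    return json.loads(resp["message"]["content"])
```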
**Ling-1T-GGUF on ik_llama.cpp**

I'll try to fix up the namespace ASAP, but I wanted to rush out some test quants of the Ling-1T 1000B model. For now you'll need roughly 256 GiB RAM + 24-32+ GiB VRAM to fit the available quants. Hope to release more after fixing up the 403 uploading issues.
Big thanks to ik and CISC for all the help figuring out how to quantize this beast. Of course, thanks to Wendell at Level1Techs for the hardware support, and also to the aifoundry folks for supporting me coming out to SF for the upcoming AI Plumbers Unconference next week!

In early testing I got out to roughly 40k context depth in ~6 turns of chat, and it was doing okay reading some papers and generating diff patches without going off the rails, at least.
Please give it a test and lemme know what you find! Quants: https://huggingface.co/ubergarm2/Ling-1T-GGUF
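Tip: you can pull a single quant instead of the whole repo, roughly like this (the allow_patterns folder name is illustrative; check the actual repo layout first):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ubergarm2/Ling-1T-GGUF",
    allow_patterns=["IQ2_KS/*"],   # hypothetical quant folder -- verify on the repo page
    local_dir="models/Ling-1T-GGUF",
)
```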
**[Experiment] Qwen3-VL-8B vs Qwen2.5-VL-7B: test results**

**TL;DR:**
I tested the brand-new **Qwen3-VL-8B** against **Qwen2.5-VL-7B** on the same set of visual reasoning tasks — OCR, chart analysis, multimodal QA, and instruction following.
Despite being only 1B parameters larger, Qwen3-VL shows a ***clear generation-to-generation leap*** and delivers more accurate, nuanced, and faster multimodal reasoning.
# 1. Setup
* **Environment:** Local inference
* **Hardware:** Mac Air M4, 8-core GPU, 24 GB VRAM
* **Model format:** gguf, Q4
* **Tasks tested:**
* Visual perception (receipts, invoice)
* Visual captioning (photos)
* Visual reasoning (business data)
* Multimodal Fusion (does paragraph match figure)
* Instruction following (structured answers)
Each prompt + image pair was fed to both models, using identical context.
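Concretely, "identical context" means one helper hitting whichever model is loaded behind a local OpenAI-compatible server (a sketch; the endpoint and model names are placeholders):

```python
import base64
from openai import OpenAI  # llama.cpp's llama-server and LM Studio both speak this API

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

def ask(model: str, image_path: str, prompt: str) -> str:
    b64 = base64.b64encode(open(image_path, "rb").read()).decode()
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
    )
    return resp.choices[0].message.content
```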
# 2. Evaluation Criteria
**Visual Perception**
* **Metric**: Correctly identifies text, objects, and layout.
* **Why It Matters**: This reflects the model’s baseline visual IQ.
**Visual Captioning**
* **Metric**: Generates natural language descriptions of images.
* **Why It Matters**: Bridges vision and language, showing the model can translate what it sees into coherent text.
**Visual Reasoning**
* **Metric**: Reads chart trends and applies numerical logic.
* **Why It Matters**: Tests true multimodal reasoning ability, beyond surface-level recognition.
**Multimodal Fusion**
* **Metric**: Connects image content with text context.
* **Why It Matters**: Demonstrates cross-attention strength—how well the model integrates multiple modalities.
**Instruction Following**
* **Metric**: Obeys structured prompts, such as “answer in 3 bullets.”
* **Why It Matters**: Reflects alignment quality and the ability to produce controllable outputs.
**Efficiency**
* **Metric**: TTFT (time to first token) and decoding speed.
* **Why It Matters**: Determines local usability and user experience.
Note: all answers are verified by humans and ChatGPT5.
# 3. Test Results Summary
1. **Visual Perception**
* Qwen2.5-VL-7B: Score **5**
* Qwen3-VL-8B: Score **8**
* **Winner: Qwen3-VL-8B**
* Notes: Qwen3-VL-8B identifies all the elements in the pic but fails the first and final calculations (the answers are 480.96 and 976.94). In comparison, Qwen2.5-VL-7B could not even understand the meaning of all the elements in the pic (there are two tourists), though its calculations are correct.
2. **Visual Captioning**
* Qwen2.5-VL-7B: Score **6.5**
* Qwen3-VL-8B: Score **9**
* **Winner: Qwen3-VL-8B**
* Notes: Qwen3-VL-8B is more accurate, detailed, and has better scene understanding (for example, it identifies the Christmas tree and the Milkis). In contrast, Qwen2.5-VL-7B gets the gist, but makes several misidentifications and lacks nuance.
3. **Visual Reasoning**
* Qwen2.5-VL-7B: Score **8**
* Qwen3-VL-8B: Score **9**
* **Winner: Qwen3-VL-8B**
* Notes: Both models reason about the charts basically correctly, with one or two numeric errors each. Qwen3-VL-8B is better at analysis/insight, picking out the key shifts, while Qwen2.5-VL-7B has a clearer structure.
4. **Multimodal Fusion**
* Qwen2.5-VL-7B: Score **7**
* Qwen3-VL-8B: Score **9**
* **Winner: Qwen3-VL-8B**
* Notes: The reasoning of Qwen3-VL-8B is correct, well-supported, and compelling, with slight rounding of some percentages, while that of Qwen2.5-VL-7B references the data incorrectly.
5. **Instruction Following**
* Qwen2.5-VL-7B: Score **8**
* Qwen3-VL-8B: Score **8.5**
* **Winner: Qwen3-VL-8B**
* Notes: The summary from Qwen3-VL-8B is more faithful and nuanced, but more wordy. The summary from Qwen2.5-VL-7B is cleaner and easier to read but misses some details.
6. **Decode Speed**
* Qwen2.5-VL-7B: 11.7–19.9t/s
* Qwen3-VL-8B: 15.2–20.3t/s
* **Winner: Qwen3-VL-8B**
* Notes: 15–40% faster.
7. **TTFT**
* Qwen2.5-VL-7B: 5.9–9.9s
* Qwen3-VL-8B: 4.6–7.1s
* **Winner: Qwen3-VL-8B**
* Notes: 20–40% faster.
# 4. Example Prompts
* **Visual perception:** “Extract the total amount and payment date from this invoice.”
* **Visual captioning**: "Describe this photo"
* **Visual reasoning:** “From this chart, what’s the trend from 1963 to 1990?”
* **Multimodal Fusion:** “Does the table in the image support the written claim: Europe is the dominant market for Farmed Caviar?”
* **Instruction following** “Summarize this poster in exactly 3 bullet points.”
# 5. Summary & Takeaway
The comparison demonstrates not just a minor version bump but a generational leap:
* Qwen3-VL-8B consistently outperforms in **Visual reasoning**, **Multimodal fusion, Instruction following,** and especially **Visual perception and Visual captioning.**
* Qwen3-VL-8B produces more **faithful and nuanced answers**, often giving richer context and insights. (however, conciseness is the tradeoff). Thus, users who value **accuracy and depth** should prefer Qwen3, while those who want **conciseness with less cognitive load** might tolerate Qwen2.5.
* Qwen3’s mistakes are easier for humans to correct (eg, some numeric errors), whereas Qwen2.5 can mislead due to **deeper misunderstandings**.
* Qwen3 not only **improves quality but also reduces latency**, improving user experience.
**Best roleplay model to run locally**

Hi folks,
I've got a Ryzen 9 9950X, 64 GB RAM, a 12 GB RTX 3060, and 12 TB of HDD/SSD. I'm looking for recommendations on the best roleplay LLMs to run LOCALLY -- I know you can get better results using an API, but I have a number of concerns, not the least of which is cost. I'm planning to use LM Studio and SillyTavern.
What say you?
**What is considered a top-tier speech-to-text model with speaker identification?**

Looking to locally run a speech-to-text model with the highest accuracy on the transcripts. Ideally I want it to not break when there are gaps in speech or "ums". I can guarantee high-quality audio for the model; I just need it to work when there is silence. I tried whisper.cpp, but it struggles with silence and it is not the most accurate. Additionally, it does not identify speakers or split the transcript among them.
Any insights would be much appreciated!!
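The direction I'm eyeing is Whisper for transcription plus pyannote for diarization, which whisperX bundles. A sketch following my reading of its README (untested on my end; treat the exact call signatures as something to verify against your installed version, and note the HF token is needed to pull the pyannote weights):

```python
import whisperx

device = "cuda"
audio = whisperx.load_audio("meeting.wav")

# 1) transcribe (faster-whisper under the hood)
model = whisperx.load_model("large-v3", device, compute_type="float16")
result = model.transcribe(audio, batch_size=16)

# 2) word-level timestamps, then 3) assign speakers via pyannote diarization
align_model, meta = whisperx.load_align_model(language_code=result["language"], device=device)
result = whisperx.align(result["segments"], align_model, meta, audio, device)

diarize = whisperx.DiarizationPipeline(use_auth_token="hf_...", device=device)
result = whisperx.assign_word_speakers(diarize(audio), result)

for seg in result["segments"]:
    print(seg.get("speaker", "?"), seg["text"])
```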
**What's your take on today's AI chat models? Quick survey (reposting for more feedback!)**

(I'm reposting this to get a few more eyes on it.)
I’m running an anonymous survey to learn how people actually use and feel about AI chat tools like ChatGPT, Claude, Gemini, etc. I’d love to hear your perspective on what works well and what could be better.
You can share your thoughts here: [Survey link](https://qualtricsxm899s6r9s6.qualtrics.com/jfe/form/SV_78MbG1XMIndSowC)
Once enough responses come in, I'll post a short summary of what people are saying. Thanks for taking part.
**Anyone with a 7900 XTX running vLLM with Gemma 3 QAT models?**

If you have been able to run Gemma 3 QAT models with AMD consumer cards and vLLM, please let me know how. I can run only unquantized and GPTQ models. QAT would be a little better quality...
**Qwen3-VL testout - open-source VL GOAT**

I've been waiting on Qwen3-VL and finally ran the 4B on scanned tables, color-blind plates, UI screenshots, and small "sort these images" sets. For "read text fast and accurately," ramp-up was near zero. Tables came out clean with headers and merged cells handled better than Qwen2.5-VL. Color perception is clearly improved—the standard plates that used to trip it now pass across runs. For simple ranking tasks, it got the ice-cream series right; mushrooms were off but the rationale was reasonable and still ahead of most open-source VL peers I've tried.
For GUI work, the loop is straightforward: recognize → locate → act. It reliably finds on-screen elements and returns usable boxes, so basic desktop/mobile flows can close. On charts and figures, it not only reads values but also does the arithmetic; visual data + reasoning feels stronger than last gen.
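To make the loop concrete: once the model hands back a box, the "act" step is pure geometry. A sketch, assuming the reply carries a JSON box in 0-1000 normalized coordinates (a common Qwen-VL grounding convention; check what your build actually emits):

```python
import json
import pyautogui

def click_from_reply(model_reply: str, screen_w: int, screen_h: int) -> None:
    # e.g. model_reply = '{"bbox_2d": [412, 88, 560, 132]}'  (assumed format)
    x1, y1, x2, y2 = json.loads(model_reply)["bbox_2d"]
    cx = (x1 + x2) / 2 / 1000 * screen_w  # de-normalize to pixels
    cy = (y1 + y2) / 2 / 1000 * screen_h
    pyautogui.click(cx, cy)
```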
Two areas lag. Screenshot → HTML/CSS replication is weak in my tests; skeletons don’t match layout closely. Spatial transforms improved just enough to identify the main view correctly, but complex rotations and occlusions still cause slips. World knowledge mix-ups remain too: it still confuses Shanghai’s Jin Mao Tower with Shanghai Tower.
Variant behavior matters. The Think build tends to over-explain and sometimes lands wrong. The Instruct build stays steadier for perception, grounding, and “read + point” jobs. My pattern is simple: let 4B handle recognition and coordinates, then hand multi-step reasoning or code-gen to a larger text model. That stays stable.
Net take: big lift in perception, grounding, and visual math; still weak on faithful webpage replication and hard spatial transforms. As of today, it feels like the top open-source VL at this size.
**Built a 100% Local AI Medical Assistant in an afternoon - Zero Cloud, using LlamaFarm**

I wanted to show off the power of local AI and got tired of uploading my lab results to ChatGPT and trusting some API with my medical data. Got this up and running in 4 hours. It has 125K+ medical knowledge chunks to ground it in truth and a multi-step RAG retrieval strategy to get the best responses. Plus, it is open source (link down below)!
**What it does:**
Upload a PDF of your medical records/lab results or ask it a quick question. It explains what's abnormal, why it matters, and what questions to ask your doctor. Uses actual medical textbooks (Harrison's Internal Medicine, Schwartz's Surgery, etc.), not just info from Reddit posts scraped by an agent a few months ago (yeah, I know the irony).
Check out the video:
[Walk through of the local medical helper](https://reddit.com/link/1o9en0w/video/3oef2nhjvqvf1/player)
**The privacy angle:**
* PDFs parsed in your browser (PDF.js) - never uploaded anywhere
* All AI runs locally with LlamaFarm config; easy to reproduce
* Your data literally never leaves your computer
* Perfect for sensitive medical docs or very personal questions.
**Tech stack:**
* Next.js frontend
* gemma3:1b (134MB) + qwen3:1.7B (1GB) local models via Ollama
* 18 medical textbooks, 125k knowledge chunks
* Multi-hop RAG (way smarter than basic RAG)
**The RAG approach actually works:**
Instead of one dumb query, the system generates 4-6 specific questions from your document and searches in parallel. So if you upload labs with high cholesterol, low Vitamin D, and high glucose, it automatically creates separate queries for each issue and retrieves comprehensive info about ALL of them.
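In shape, it's just fan-out/fan-in. A simplified sketch of the pattern (not the exact LlamaFarm code; `ask_llm` and `search` are placeholders for your LLM client and vector store):

```python
from concurrent.futures import ThreadPoolExecutor

def multi_hop_retrieve(document: str, ask_llm, search, n_questions: int = 5) -> list:
    """One cheap LLM call fans out into parallel searches; results are deduped."""
    questions = ask_llm(
        f"List {n_questions} specific medical questions raised by this document, "
        f"one per line:\n{document}"
    ).splitlines()

    with ThreadPoolExecutor() as pool:
        hit_lists = list(pool.map(lambda q: search(q, top_k=4), questions))

    seen, context = set(), []
    for hits in hit_lists:
        for chunk in hits:
            if chunk["id"] not in seen:   # dedupe chunks shared across hops
                seen.add(chunk["id"])
                context.append(chunk)
    return context
```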
**What I learned:**
* Small models (gemma3:1b is 134MB!) are shockingly good for structured tasks if you use XML instead of JSON
* Multi-hop RAG retrieves 3-4x more relevant info than single-query
* Streaming with multiple `<think>` blocks is a pain in the butt to parse
* It's not that slow; the multi-hop and everything takes 30-45 seconds, but it's doing a lot and it is 100% local.
**How to try it:**
Setup takes about 10 minutes + 2-3 hours for dataset processing (one-time). We are shipping a way to avoid populating the database in the future. I am using Ollama right now, but will be shipping a runtime soon.
```sh
# Install Ollama, pull models
ollama pull gemma3:1b
ollama pull qwen3:1.7B

# Clone repo
git clone https://github.com/llama-farm/local-ai-apps.git
cd Medical-Records-Helper

# Full instructions in README
```
After initial setup, everything is instant and offline. No API costs, no rate limits, no spying.
**Requirements:**
* 8GB RAM (4GB might work)
* Docker
* Ollama
* \~3GB disk space
Full docs, troubleshooting, architecture details: [https://github.com/llama-farm/local-ai-apps/tree/main/Medical-Records-Helper](https://github.com/llama-farm/local-ai-apps/tree/main/Medical-Records-Helper)
r/LlamaFarm
**Roadmap:**
* You tell me.
**Disclaimer:** Educational only, not medical advice, talk to real doctors, etc. Open source, MIT licensed. Built most of it in an afternoon once I figured out the multi-hop RAG pattern.
What features would you actually use? Thinking about adding wearable data analysis next.
EXO + Mac Studio + DGX Sparks (for prefill tokens) = 2.8x performance gains on AI benchmarks. | 7 | I mean, it’s kind of an extremely pricey Frankenstein setup, but still kind of cool that it uses the strengths of both the Mac Studio (wide memory bus) and the DGX (compute for prefill) together to achieve significant performance gains. | 2025-10-17T22:01:04 | https://www.tomshardware.com/software/two-nvidia-dgx-spark-systems-combined-with-m3-ultra-mac-studio-to-create-blistering-llm-system-exo-labs-demonstrates-disaggregated-ai-inference-and-achieves-a-2-8-benchmark-boost | Porespellar | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1o9ekh1 | false | null | t3_1o9ekh1 | /r/LocalLLaMA/comments/1o9ekh1/exo_mac_studio_dgx_sparks_for_prefill_tokens_28x/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=108&crop=smart&auto=webp&s=bf5936334c9e0d6cec4bd239b6bcbc698133af94', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=216&crop=smart&auto=webp&s=e6a51335985a7c032c3e1d8a369c015d70be5690', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=320&crop=smart&auto=webp&s=f95af4e8cfedc71cc2e6efe73e1e4907df5cef87', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=640&crop=smart&auto=webp&s=bdf78665a24bb3ff417303dab346ea60afc9e667', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=960&crop=smart&auto=webp&s=30a2939d98debf13f77d38b89c479601f80e8902', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?width=1080&crop=smart&auto=webp&s=36aa0251ea34cf9bc60e122a38568e722b31f50d', 'width': 1080}], 'source': {'height': 1337, 'url': 'https://external-preview.redd.it/hyPfbB2-0S9_BpSzWHTEzbVGWtAk9YMYdbHv82PBK64.jpeg?auto=webp&s=5247ca82390f25f14fcd33d9d4d52de15d6ed2b9', 'width': 2376}, 'variants': {}}]} |
**Has anyone tried AgentRouter for testing multiple LLM APIs? Looking for feedback**

Hey folks,
I’ve been looking for ways to test different AI models without committing to multiple paid subscriptions, and I came across this platform called AgentRouter that seems to aggregate access to various models through a single API endpoint.
From what I understand, they’re offering $200 in free credits right now (apparently it was $300 before, so not sure how long this will last). The main appeal for me is being able to compare outputs from:
• OpenAI’s newer models (GPT-5, GPT-4o)
• Claude variants (Sonnet 4.5, Opus 4.1)
• DeepSeek (v3 and r1)
• GLM models from Zhipu AI
• Some Z.AI models I hadn’t heard of before
I signed up using this referral link (full transparency: it’s an affiliate link, so I get some credit if you use it, but you still get the same $200 either way). No credit card required, just GitHub authentication.
My questions for anyone who’s used it:
1. How does the response quality/latency compare to using the native APIs directly?
2. Are there any hidden limitations on the free tier? (rate limits, model restrictions, etc.)
3. Has anyone successfully integrated it with tools like Continue, Cursor, or similar coding assistants?
4. Is the $200 credit actually enough to do meaningful testing, or does it burn through quickly?
I’m mainly interested in using it for coding tasks and comparing which model handles context better for my specific use cases. The unified API approach seems convenient, but I’m curious if there are downsides I’m not seeing.
Would appreciate any real-world experience or gotchas to watch out for before I start migrating my test workflows over.
Thanks!
**Yet another unemployment-fueled Perplexity clone**

Hi,
I lost my Data Analyst job, so I figured it was the perfect time to get back into coding.
I tried to self-host SearxNG and Perplexica. SearxNG is great, but Perplexica is not (not fully configurable, no KaTeX support); generally the features of Perplexica didn't fit my use case (neither did Morphic's).

So I started to code my own Perplexity alternative using LangChain and React.
My solution has a cool and practical unified config file, better provider support, KaTeX support, and exposes a tool to the model allowing it to generate maps (I love this feature).
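For the curious, a tool of that shape in LangChain looks roughly like this (a minimal generic sketch of the idea, not necessarily how Ubiquite wires it):

```python
from langchain_core.tools import tool

@tool
def render_map(lat: float, lon: float, zoom: int = 12) -> str:
    """Return a map link for the given coordinates; the model calls this
    whenever an answer benefits from showing a location."""
    return f"https://www.openstreetmap.org/#map={zoom}/{lat}/{lon}"
```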
I thought you guys might like such a project (even if it's yet another 0-star Perplexity clone).
I’d really appreciate your feedback: which features would you find useful, what’s missing, and any tips on managing a serious open-source project (since this is my biggest one so far).
Here is the repo [https://github.com/edoigtrd/ubiquite](https://github.com/edoigtrd/ubiquite)
**PlayDiffusion finetune for audio inpainting with non-verbal tags**

PlayDiffusion is a 7B Apache-licensed diffusion model which can 'inpaint' audio, so you can change existing audio (slightly) by providing new text. I was curious to learn how it works and challenged myself to see whether a small fine-tune could add support for non-verbal tags such as `<laugh>` or `<cough>`.
After two weeks of tinkering I have support for `<laugh>`, `<pause>`, and `<breath>`; there wasn't enough good training data to be found for other tags such as `<cough>`.
It comes with Gradio and Docker support, or runs directly via `uvx`:
* Source available here: [https://github.com/coezbek/PlayDiffusion](https://github.com/coezbek/PlayDiffusion)
* Original PlayDiffusion: [https://github.com/PlayHT/playdiffusion](https://github.com/PlayHT/playdiffusion)
* HF Checkpoint: [https://huggingface.co/oezi13/PlayDiffusion-nonverbal](https://huggingface.co/oezi13/PlayDiffusion-nonverbal)
* Datasets used for training: [https://huggingface.co/collections/oezi13/nonverbal-tts-audio-68ec1bee4163e50369424650](https://huggingface.co/collections/oezi13/nonverbal-tts-audio-68ec1bee4163e50369424650)
Note: PlayDiffusion is English-only and doesn't work for all voices.
**Is Qwen2-VL worth downloading today?**

I'm running local AI on an iPhone 13, and Qwen2-VL seems to be the only vision choice at 1.25 GB. Does it compare well to newer VL models? Also, is the Open LLM Leaderboard still maintained?
**Best hardware and models to get started with local hosting, late 2025**

Hi everyone,
I've been curious about getting into hosting local models to mess around with, and maybe to help with my daily coding work, though I'd consider that just a bonus. Generally, my use cases would be around processing data and coding.
I was wondering what decent hardware to get started with would be; I don't think I currently own anything that would work. I am happy to spend around $4,000 at the absolute max, but less would be very welcome!
I've heard about the DGX Spark, the Framework Desktop, and the M4 Macs (with M5 coming in the near future). I've heard mixed opinions on which is best and what the pros and cons of each are.
Aside from performance, what are the benefits and downsides of each from a user perspective? Are any just a pain to get to work?

Finally, I want to learn about this whole world. Any YouTube channels or outlets that are good resources?
A Framework for Autonomous Context Engineering in Large Language Models | 0 | 2025-10-17T20:03:52 | https://medium.com/@mbonsign/a-framework-for-autonomous-context-engineering-in-large-language-models-749c4c8ef8e5 | MikeBeezzz | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1o9bn6f | false | null | t3_1o9bn6f | /r/LocalLLaMA/comments/1o9bn6f/a_framework_for_autonomous_context_engineering_in/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=108&crop=smart&auto=webp&s=7e71148290a943095daca4dc044d6b8546eb49b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=216&crop=smart&auto=webp&s=26ff91024b22d68b6b3e438dcb220d5ed8622409', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=320&crop=smart&auto=webp&s=400af67f485343a87337480d7b743b28f8bc4999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=640&crop=smart&auto=webp&s=0f656ffd07e1fc84f2c67c820634d95c13752753', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=960&crop=smart&auto=webp&s=01f2e480b05849948e42c6e33f4a8953b46e0978', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=1080&crop=smart&auto=webp&s=aa6fdeb97cfcf72c8ce3a91345583b5f0880c5d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?auto=webp&s=2fece001026ad37068b130c8715a78062ca08fd6', 'width': 1200}, 'variants': {}}]} | |
LLM on USB (offline) | 2 | I'm trying to get an AI chatbot that helps me with coding and runs completely offline from my USB flash drive. Is that possible? | 2025-10-17T19:56:47 | https://www.reddit.com/r/LocalLLaMA/comments/1o9bgia/llm_on_usb_offline/ | ilBenso_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9bgia | false | null | t3_1o9bgia | /r/LocalLLaMA/comments/1o9bgia/llm_on_usb_offline/ | false | false | self | 2 | null |
Whistledash. Create Private LLM Endpoints in 3 Clicks | 0 | Hey everyone
I’ve been building something called Whistledash, and I’d love to hear your thoughts.
It’s designed for developers and small AI projects who want to spin up private LLM inference endpoints - without dealing with complicated infra setups.
Think of it as a kind of Vercel for LLMs, focused on simplicity, privacy, and fast cold starts.
What It Does
* Private Endpoints: Every user gets a fully private inference endpoint (no shared GPUs).
* Ultra-fast Llama.cpp setup: Cold starts under 2 seconds, great for low-traffic or dev-stage apps.
* Always-on SGLang deployments: Autoscaling and billed per GPU hour for production workloads.
* Automatic Deployment UI: Three clicks from model → deploy → endpoint.
* Future roadmap: credit-based billing, SDKs for Node + Python and other languages, and easy fine-tuning.
Pricing Model (Simple and Transparent)
Llama.cpp Endpoints
* $0.02 per request
* Max 3000 tokens in/out
* Perfect for small projects, tests, or low-traffic endpoints.
* Cold start: < 2 seconds.
SGLang Always-On Endpoints
* Billed per GPU hour, completely private.
B200 — $6.75/h
H200 — $5.04/h
H100 — $4.45/h
A100 (80GB) — $3.00/h
A100 (40GB) — $2.60/h
L40S — $2.45/h
A10 — $1.60/h
L4 — $1.30/h
T4 — $1.09/h
* Autoscaling handles load automatically.
* Straightforward billing, no hidden fees.
Why I Built It
As a developer, I got tired of:
* waiting for cold starts on shared infra
* managing Docker setups for small AI experiments
* and dealing with complicated pricing models
Whistledash is my attempt to make private LLM inference simple, fast, and affordable - especially for developers who are still in the early stage of building their apps.
Would love your honest feedback:
* Does the pricing seem fair?
* Would you use something like this?
* What’s missing or confusing?
* Any dealbreakers?
Whistledash = 3-click private LLM endpoints. Llama.cpp → $0.02 per request. SGLang → pay per GPU hour. Private. Fast. No sharing. Video demo inside — feedback very welcome!
| 2025-10-17T19:41:15 | https://v.redd.it/fbv82ylh6qvf1 | purellmagents | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o9b2i4 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fbv82ylh6qvf1/DASHPlaylist.mpd?a=1763322092%2CMTM5MzYyMjAzMGViZTc5M2JmM2YzNjI0ODNjMDJiNzE5NmM5YzZkZjBjZWMyYzNiZDE4YzllZjdiZTE5ZGEzZg%3D%3D&v=1&f=sd', 'duration': 71, 'fallback_url': 'https://v.redd.it/fbv82ylh6qvf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 720, 'hls_url': 'https://v.redd.it/fbv82ylh6qvf1/HLSPlaylist.m3u8?a=1763322092%2CY2EyNWI4Mzg2ZGVlZGFhNzc5MDg1ZDc0ZWFlZTExYTIxYjExNTQ1ZGZiNDUzNGUzNmE4NzdjMTkxMDA0M2UzOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fbv82ylh6qvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1o9b2i4 | /r/LocalLLaMA/comments/1o9b2i4/whistledash_create_private_llm_endpoints_in_3/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=108&crop=smart&format=pjpg&auto=webp&s=c5436a8f7a581ebc6962c46eec28fa9398dc29d9', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=216&crop=smart&format=pjpg&auto=webp&s=2416c319301196cb7511d57ed99ed8d636c7fc8b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=320&crop=smart&format=pjpg&auto=webp&s=4e21135c5bcac795f6637b5909d29a9aaedd3819', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=640&crop=smart&format=pjpg&auto=webp&s=98fcd0998c1290607d96a0676c281d4f39287213', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=960&crop=smart&format=pjpg&auto=webp&s=12dcaa913797db2d2bf3218f2a900430a6dcd002', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?width=1080&crop=smart&format=pjpg&auto=webp&s=54437d18c3c44380b8214c29aff26405ae1a94f9', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/cGc1YmE3aGg2cXZmMa4hBUmpVbyIIIu2tcw9SAGwcn0a8gFnfIgyl6a1T45i.png?format=pjpg&auto=webp&s=20608ecfeeea86603b5b6f61816f841a08510fe4', 'width': 1280}, 'variants': {}}]} | |
So I guess I accidentally became one of you guys | 14 | I have kind of always dismissed the idea of getting a computer that is good enough to run anything locally, but decided to upgrade my current setup and got a mac m4 mini desktop computer. I know this isn't like the best thing ever and doesn't have like some massive GPU on it or something, but I'm wondering if there is anything interesting that you guys think I could do locally with some type of model? Personally, I'm kind of interested in more like productivity things or like helping with coding or something autonomous that can do work for me or like puppet a computer or other things in this ballpark potentially.
I also decided to get this chip because I feel like it might enable a future generation of products a bit more than buying a random $200 laptop. | 2025-10-17T19:38:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o9b08m/so_i_guess_i_accidentally_became_one_of_you_guys/ | cobalt1137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9b08m | false | null | t3_1o9b08m | /r/LocalLLaMA/comments/1o9b08m/so_i_guess_i_accidentally_became_one_of_you_guys/ | false | false | self | 14 | null |
LM Studio API: MCP tool-calling with tools: [{ "type": "mcp" … }] never emits tool_call - GUI works. Anyone got a working payload? | 0 | **TL;DR**
Calling LM Studio’s **OpenAI-compatible API** with the documented **MCP tool schema** (via /v1/responses) never produces tool\_calls for me. The **LM Studio GUI** with the same MCP server *does* work. Looking for a **known-good JSON payload** or confirmation that MCP tool-calling is **GUI-only** right now.
# Environment
* **Host**: Mac Studio (Apple Silicon)
* **LM Studio API**: [http://127.0.0.1:1234/v1](http://127.0.0.1:1234/v1)
* **Model**: openai/gpt-oss-20b (LM Studio UI says “Tool Use” detected)
* **MCP server**: Home Assistant MCP (SSE) on LAN: http://<HA\_IP>:8123/mcp\_server/sse
* **Status**: Plain chat completions via API work.
# Expected
Per the docs, calling /v1/responses with an MCP tool block should let the model emit tool\_calls, which LM Studio then invokes against the MCP server.
**Actual**
The HTTP response is a normal assistant message (no tool\_calls).
LM Studio logs show something like:
Model generated tool calls: []
No API errors. The same MCP server works from the **LM Studio GUI**.
**Minimal failing request (secrets redacted)**
curl -sS http://127.0.0.1:1234/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer local" \
-d '{
"model": "openai/gpt-oss-20b",
"tools": [{
"type": "mcp",
"server_label": "home-assistant",
"server_url": "http://<HA_IP>:8123/mcp_server/sse",
"allowed_tools": ["list_entities"]
}],
"input": "Using the Home Assistant tools, list 3 entities and return ONLY a JSON array of their entity_ids."
}'
**Response**: regular text output, **no** tool\_calls.
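One diagnostic that helps narrow this down: take MCP out of the loop and check whether the model emits tool\_calls at all via plain OpenAI-style function calling on /v1/chat/completions. (A sketch on my side; the get\_weather tool here is a made-up example, not a real MCP tool.)

# Sanity check: does the model emit tool_calls with a plain function tool?
curl -sS http://127.0.0.1:1234/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer local" \
-d '{
"model": "openai/gpt-oss-20b",
"messages": [{"role": "user", "content": "What is the weather in Berlin? Use the tool."}],
"tools": [{
"type": "function",
"function": {
"name": "get_weather",
"description": "Get the current weather for a city",
"parameters": {
"type": "object",
"properties": {"city": {"type": "string"}},
"required": ["city"]
}
}
}]
}'

If this returns tool\_calls, the model and template are fine over the API and the problem is specific to the MCP path; if it doesn't, the issue is upstream of MCP entirely.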
**Questions**
1. Does the **LM Studio API** actually support MCP tools via **/v1/responses** today, or is that functionality currently **GUI-only**?
2. If it is supported, can someone share a **known-working JSON payload** (and any required headers/flags) that leads to tool\_calls with an **SSE MCP server**?
3. Any **model/endpoint caveats** (e.g., only certain models emit tool\_calls over the API, or differences between /v1/responses and /v1/chat/completions)?
Thanks!
I wrote this with ChatGPT but have checked it for accuracy. It's just easier than me typing out the same problem. | 2025-10-17T19:32:30 | https://www.reddit.com/r/LocalLLaMA/comments/1o9aumu/lm_studio_api_mcp_toolcalling_with_tools_type_mcp/ | Shark_Tooth1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9aumu | false | null | t3_1o9aumu | /r/LocalLLaMA/comments/1o9aumu/lm_studio_api_mcp_toolcalling_with_tools_type_mcp/ | false | false | self | 0 | null |
New model from inclusionAI - LLaDA2.0-mini-preview | 76 | LLaDA2-mini-preview is a diffusion language model featuring a 16BA1B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA series, it is optimized for practical applications.
From the benchmarks the preview looks 'not as good' as ling mini 2.0, but it's still a preview, not the final model, and this is a diffusion language model which makes it interesting | 2025-10-17T19:17:48 | https://huggingface.co/inclusionAI/LLaDA2.0-mini-preview | edward-dev | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o9agyg | false | null | t3_1o9agyg | /r/LocalLLaMA/comments/1o9agyg/new_model_from_inclusionai_llada20minipreview/ | false | false | default | 76 | {'enabled': False, 'images': [{'id': 'xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=108&crop=smart&auto=webp&s=c93fc9c259ea7e1a3bbec94e110840e01d80dd4b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=216&crop=smart&auto=webp&s=309f574409346436a9fc94ea2e0daa1b87892d99', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=320&crop=smart&auto=webp&s=e00dbdfac1e0583f2a696e7ad96b44720323180e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=640&crop=smart&auto=webp&s=f43fd9b3305330d175918ca63404b046858d8a03', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=960&crop=smart&auto=webp&s=8874993faa57c9e1e568de8705c9ee256e41dcc5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?width=1080&crop=smart&auto=webp&s=495c34d726efebd9ebb2672a1d0f70f9c3ad6fad', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/xv_Z1skcqtXjop4d-0l1Usyn5XM0UbKgNLHO0wCID8I.png?auto=webp&s=2a95abee1db3b4f9072b0fdf7a82f7265a94b494', 'width': 1200}, 'variants': {}}]} |
Local multimodal RAG with Qwen3-VL — text + image retrieval | 15 | Built a small demo showing how to run a full multimodal RAG pipeline locally using **Qwen3-VL-GGUF**.
It loads and chunks your docs, embeds both text and images, retrieves the most relevant pieces for any question, and sends everything to Qwen3-VL for reasoning. The UI is just Gradio.
https://reddit.com/link/1o9agkl/video/ni6pd59g1qvf1/player
You can tweak chunk size, Top-K, or even swap in your own inference and embedding model.
[See GitHub for code and README instructions](https://github.com/NexaAI/nexa-sdk/tree/main/demos/RAG-Qwen3VL) | 2025-10-17T19:17:24 | https://www.reddit.com/r/LocalLLaMA/comments/1o9agkl/local_multimodal_rag_with_qwen3vl_text_image/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9agkl | false | null | t3_1o9agkl | /r/LocalLLaMA/comments/1o9agkl/local_multimodal_rag_with_qwen3vl_text_image/ | false | false | self | 15 | null |
NVIDIA Robotics collaborates with Hugging Face LeRobot to launch a new robotic simulation and teleoperation framework | 3 | https://reddit.com/link/1o9a50s/video/ubmllj500qvf1/player
Credit to [https://x.com/jadechoghari/status/1979206847904039396](https://x.com/jadechoghari/status/1979206847904039396) | 2025-10-17T19:05:19 | https://www.reddit.com/r/LocalLLaMA/comments/1o9a50s/nvidia_robotics_collaborates_with_hugging_face/ | Soft-Worth-4872 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9a50s | false | null | t3_1o9a50s | /r/LocalLLaMA/comments/1o9a50s/nvidia_robotics_collaborates_with_hugging_face/ | false | false | self | 3 | null |
ROCm 7.0 Install for Mi50 32GB | Ubuntu 24.04 LTS | 85 | I shared a comment on how to do this [here](https://www.reddit.com/r/linux4noobs/comments/1ly8rq6/comment/nb9uiye/), but I still see people asking for help so I decided to make a video tutorial.
# Text guide:
1. Copy & paste all the commands from the quick install [https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html](https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html)
2. Before rebooting to complete the install, download the 6.4 rocblas from the AUR: [https://archlinux.org/packages/extra/x86\_64/rocblas/](https://archlinux.org/packages/extra/x86_64/rocblas/)
3. Extract it
4. Copy all tensor files that contain gfx906 from `rocblas-6.4.3-3-x86_64.pkg/opt/rocm/lib/rocblas/library` to `/opt/rocm/lib/rocblas/library` (commands sketched after this list)
5. Now reboot, and it should be smooth sailing on llama.cpp!
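For steps 3-4, something like this should do it (a sketch; the exact package filename changes as new rocblas builds land, so adjust accordingly):

# Extract the Arch package and copy the gfx906 tensor files into the ROCm 7 tree
tar -xf rocblas-6.4.3-3-x86_64.pkg.tar.zst
sudo cp opt/rocm/lib/rocblas/library/*gfx906* /opt/rocm/lib/rocblas/library/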
Note: This guide can be adapted for 6.4 if more stability is needed when working with PyTorch or vllm. Most performance improvements were already present in 6.4 (roughly 20-30% over 6.3), so 7.0.2 serves to offer more compatibility together with the latest AMD cards :) | 2025-10-17T18:51:45 | https://www.youtube.com/watch?v=xcI0pyE8VN8 | legit_split_ | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1o99s2u | false | {'oembed': {'author_name': 'Tresdin Tech', 'author_url': 'https://www.youtube.com/@TresdinTech', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/xcI0pyE8VN8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AMD ROCm 7.0 Install for Mi50 32GB | Ubuntu 24.04 LTS"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/xcI0pyE8VN8/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'AMD ROCm 7.0 Install for Mi50 32GB | Ubuntu 24.04 LTS', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1o99s2u | /r/LocalLLaMA/comments/1o99s2u/rocm_70_install_for_mi50_32gb_ubuntu_2404_lts/ | false | false | default | 85 | {'enabled': False, 'images': [{'id': 'QZgvW0JuHPNTo3BakCvCal_DO30UZr-SjSZAX09mNCA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QZgvW0JuHPNTo3BakCvCal_DO30UZr-SjSZAX09mNCA.jpeg?width=108&crop=smart&auto=webp&s=7a89bfa365fc6bee83a5665535feb289a73fe5bf', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/QZgvW0JuHPNTo3BakCvCal_DO30UZr-SjSZAX09mNCA.jpeg?width=216&crop=smart&auto=webp&s=7a27afe6a31d63388c8859dfd816327804391c0f', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/QZgvW0JuHPNTo3BakCvCal_DO30UZr-SjSZAX09mNCA.jpeg?width=320&crop=smart&auto=webp&s=6e9ed4a0e6428b54b70c9ffefc80e3952a1b20e2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/QZgvW0JuHPNTo3BakCvCal_DO30UZr-SjSZAX09mNCA.jpeg?auto=webp&s=27f8b6a1991135ef684b0a1f2616c4ee8b02d8aa', 'width': 480}, 'variants': {}}]} |
LM Studio not reading document correctly. But why? | 3 | I'm a bit new to LM Studio and am using its chat interface to test model responses. But when I uploaded a transcript of a video, I'm getting a wild response.
[Actual Transcript content](https://preview.redd.it/fkk58sgxqpvf1.png?width=762&format=png&auto=webp&s=5ea6f717c446978ee8dd27a1f9ea2645ca8a7b00)
This is about a podcaster moving to newsletters.
But when uploading to LM Studio, I get this
Gemma and Command-r
https://preview.redd.it/e1ptwr26rpvf1.png?width=1282&format=png&auto=webp&s=244898eeb03c719ad0e032638367d7412def4896
So what am I doing wrong?
By default, when you upload a file into LMStudio, it gives you the RAG option. I've tried it with it enabled and disabled. But no dice.
Can someone help?
| 2025-10-17T18:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o98vqd/lm_studio_not_reading_document_correctly_but_why/ | OutboundSF | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o98vqd | false | null | t3_1o98vqd | /r/LocalLLaMA/comments/1o98vqd/lm_studio_not_reading_document_correctly_but_why/ | false | false | 3 | null | |
Please share advice and configurations for 4x3090 and coding agents? | 3 | I'd like some advice from the community on how to optimise the software side of a local build with 4x RTX 3090.
I've tried GLM 4.5 AIR with vLLM through claude-code-router. It worked well enough, but it struggled on some tasks and overall behaved differently from Claude Code with Sonnet: not only in the reasoning but also in the presentation, and it seemed to call fewer local tools for doing actions on the computer.
I also tried Codex, connected it to the same GLM 4.5 AIR, and got really garbage results. It constantly asked for everything and didn't seem able to do any logic on its own. I did not use Codex with OpenAI models so I can't compare, but it was really underwhelming. It might have been a configuration issue, so if people have Codex experience with local LLMs (outside of gpt-oss models) and Ollama, I'd be interested.
Overall please share your tips and tricks for multi 3090 GPU (4 preferably).
Specific questions:
- Claude Code Router allows you to have multiple models; would it make sense to have a server with 4 GPUs running GLM-4.5 AIR and another one with 2 or 3 GPUs running QwenCode-30b, alternating between them?
- Would I be better off putting those 6 GPUs somehow in one computer, or is it better to split them into two different servers working in tandem?
- Are there better options than Claude Code and CCR for coding? I've seen Aider, but recently not many people are talking about it. | 2025-10-17T18:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1o98pkb/please_share_advices_and_configuration_for_4x3090/ | skenizen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o98pkb | false | null | t3_1o98pkb | /r/LocalLLaMA/comments/1o98pkb/please_share_advices_and_configuration_for_4x3090/ | false | false | self | 3 | null |
NVIDIA sent me a 5090 so I can demo Qwen3-VL GGUF | 188 | *Processing img tuc86xduopvf1...*
3 days ago, we [partnered with the Qwen team](https://x.com/Alibaba_Qwen/status/1978154384098754943) so the new Qwen3-VL 4B & 8B models run **day-0** with GGUF and MLX inside [NexaSDK](https://github.com/NexaAI/nexa-sdk), powered by our NexaML Engine — the first and only framework that supports Qwen3-VL GGUF right now. I just received a 5090 from the NVIDIA team, and I want to show you how the models run on it.
Today, we also made it run *locally* inside our desktop UI app **Hyperlink**, so everyone can try Qwen3-VL on their device easily.
I tried the same demo examples from the [Qwen2.5-32B blog](https://qwen.ai/blog?id=250aaecfcd4828d55be2b2437a76d66a099860da&from=research.research-list), and the new **Qwen3-VL 4B & 8B** are *insane.*
Benchmarks on the 5090 (Q4):
* Qwen3VL-8B → **187 tok/s, \~8GB VRAM**
* Qwen3VL-4B → **267 tok/s, \~6GB VRAM**
**Demo:**
https://reddit.com/link/1o98m76/video/mvvtazwropvf1/player
**How to try:**
1. Install Hyperlink with one click: [hyperlink.nexa.ai](https://hyperlink.nexa.ai/)
2. Then go to *Discover Models → download Qwen3-VL GGUF to test.*
How does it do on your setup? Do you see similar performance between Qwen3VL 8B and Qwen2.5-32B? | 2025-10-17T18:06:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o98m76/nvidia_sent_me_a_5090_so_i_can_demo_qwen3vl_gguf/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o98m76 | false | null | t3_1o98m76 | /r/LocalLLaMA/comments/1o98m76/nvidia_sent_me_a_5090_so_i_can_demo_qwen3vl_gguf/ | false | false | self | 188 | null |
New from Cerebras: REAP the Experts: Why Pruning Prevails for One-Shot MoE compression | 122 | TLDR: We show that one-shot pruning of experts in large MoEs is better than expert merging when looking at realistic benchmarks, not just perplexity measures.
Using a saliency criterion that measures expected routed contribution of each expert (REAP), we pruned Qwen3-Coder-480B to 363B (25% pruning) and 246B (50% pruning), all in FP8. At 25%, accuracy degradation is minimal across a suite of benchmarks.
Checkpoints on HF:
[https://huggingface.co/cerebras/Qwen3-Coder-REAP-363B-A35B-FP8](https://huggingface.co/cerebras/Qwen3-Coder-REAP-363B-A35B-FP8)
[https://huggingface.co/cerebras/Qwen3-Coder-REAP-246B-A35B-FP8](https://huggingface.co/cerebras/Qwen3-Coder-REAP-246B-A35B-FP8)
These can be run with vanilla vLLM, no patches required.
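For example, a launch along these lines should work for the 246B checkpoint (a sketch on my part; exact flags aren't prescribed above, so size tensor parallelism and context to your GPUs):

# Serve the 50%-pruned FP8 checkpoint with stock vLLM (flags are assumptions)
vllm serve cerebras/Qwen3-Coder-REAP-246B-A35B-FP8 \
  --tensor-parallel-size 8 \
  --max-model-len 32768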
More evals and pruned models on the way!
Link to the paper: [https://arxiv.org/abs/2510.13999](https://arxiv.org/abs/2510.13999)
https://preview.redd.it/6zdkycxjnpvf1.png?width=6884&format=png&auto=webp&s=ef2e6f9f61b89de730fa9c01d6774998dedee9d8
| 2025-10-17T17:59:47 | https://www.reddit.com/r/LocalLLaMA/comments/1o98f57/new_from_cerebras_reap_the_experts_why_pruning/ | ilzrvch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o98f57 | false | null | t3_1o98f57 | /r/LocalLLaMA/comments/1o98f57/new_from_cerebras_reap_the_experts_why_pruning/ | false | false | 122 | {'enabled': False, 'images': [{'id': 'WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=108&crop=smart&auto=webp&s=d31ae810e06d7223f3e32d6fe1f9675160484629', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=216&crop=smart&auto=webp&s=e5f752bcad48a2d106b757bf03352c758038ee60', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=320&crop=smart&auto=webp&s=1e95841f254543ded6dac242709331a92160be0b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=640&crop=smart&auto=webp&s=6b54dc6d0fc61b0246d5aa15915fc1c876cfd68a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=960&crop=smart&auto=webp&s=61257f22f9164fc7fb980b85f6d0f356f87c248d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?width=1080&crop=smart&auto=webp&s=29571a680580e51639b767ff32e5a3d83aa01909', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WYinqoDP9OerKm8ljzpFUp26G03RA6wo-9izylOPBeM.png?auto=webp&s=63b50f98ba568052af391592577748e80a131bb3', 'width': 1200}, 'variants': {}}]} | |
🚀 Get $100 in Lovable credits when you publish your first website! 💰 | 0 | Lovable is giving away **$100 in free credits** to all new users — and you’ll receive it **right after you publish your first website**.
It’s the perfect chance to try their AI-powered website builder, create real projects, and go live without spending anything.
👉 **Join here:** [https://lovable.dev/invite/O63ALVF](https://lovable.dev/invite/O63ALVF)
💡 **Want to test it fast?**
Just copy and paste this prompt into Lovable to create your first site:
Create a clean, modern landing page for a personal portfolio.
Include a hero section with a photo and short intro, a projects section with three cards, and a contact form at the bottom.
Use a minimalist design with white background and soft blue accents.
Publish it — and you’ll get **$100 in credits automatically!** | 2025-10-17T17:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/1o970ro/get_100_in_lovable_credits_when_you_publish_your/ | Objective-Spend5894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o970ro | false | null | t3_1o970ro | /r/LocalLLaMA/comments/1o970ro/get_100_in_lovable_credits_when_you_publish_your/ | false | false | self | 0 | null |
RTX Pro 6000 Blackwell vLLM Benchmark: 120B Model Performance Analysis | 173 | **Hardware:** NVIDIA RTX Pro 6000 Blackwell Workstation Edition (96GB VRAM)
**Software:** vLLM 0.11.0 | CUDA 13.0 | Driver 580.82.09 | FP16/BF16
**Model:** openai/gpt-oss-120b source: [https://huggingface.co/openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b)
Ran two test scenarios with 500-token and 1000-2000-token outputs across varying context lengths (1K-128K) and concurrency levels (1-20 users).
[500 tokens](https://preview.redd.it/5io6r8cfcpvf1.png?width=6907&format=png&auto=webp&s=dfa2bf6d638fcf36f75be97745f4be59c5f5cade)
[1000-2000 tokens](https://preview.redd.it/1i5c8lcgcpvf1.png?width=6907&format=png&auto=webp&s=f6949af68c70e0b95f2462dab8dc6c3a5be7943a)
# Key Findings
**Peak Performance (500-token output):**
* **1051 tok/s** at 1 user, 1K context
* Maintains **300-476 tok/s** at 20 concurrent users across context lengths
* TTFT: 200-400ms at low concurrency, scales to 2000-3000ms at 20 users
* Average latency: 2.6s (1 user) → 30.2s (20 users) at 128K context
**Extended Output (1000-2000 tokens):**
* **1016 tok/s** peak throughput (minimal degradation vs 500-token)
* Slightly higher latencies due to longer decode phases
* Power draw: 300-600W depending on load
* Batch scaling efficiency: **EXCELLENT** at 2-5 users, still good up to 10 users
# Observations
The Blackwell architecture handles this 120B model impressively well:
* Linear scaling up to \~5 concurrent users
* GPU clocks remain stable at 2800+ MHz under load
* Inter-token latency stays in the "INSTANT" zone (<50ms) for most configurations
* Context length scaling is predictable—throughput halves roughly every 32K context increase
The 96GB VRAM headroom means no swapping even at 128K context with max concurrency.
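The exact serving flags used for these runs aren't listed above, so treat the launch line below as a sketch that matches the setup described; adjust to your hardware:

# Approximate serving setup for these benchmarks (flags are assumptions)
vllm serve openai/gpt-oss-120b \
  --max-model-len 131072 \
  --gpu-memory-utilization 0.95 \
  --max-num-seqs 32 \
  --port 8000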
Used: [https://github.com/notaDestroyer/vllm-benchmark-suite](https://github.com/notaDestroyer/vllm-benchmark-suite)
**TL;DR:** If you're running 100B+ models locally, the RTX Pro 6000 Blackwell delivers production-grade throughput with excellent multi-user scaling. Power efficiency is reasonable given the compute density.
| 2025-10-17T16:53:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o96o9o/rtx_pro_6000_blackwell_vllm_benchmark_120b_model/ | notaDestroyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o96o9o | false | null | t3_1o96o9o | /r/LocalLLaMA/comments/1o96o9o/rtx_pro_6000_blackwell_vllm_benchmark_120b_model/ | false | false | 173 | {'enabled': False, 'images': [{'id': '12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=108&crop=smart&auto=webp&s=292c3d3a2dfa2ce762d4e0ad0113f21057208fb5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=216&crop=smart&auto=webp&s=7caae8dd778b09b71d56e893c0307604fe6185aa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=320&crop=smart&auto=webp&s=4ffd35e2510c33eb737fe6e23874ab1b1e5a5081', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=640&crop=smart&auto=webp&s=4ae7c659a21f868f6dba51b958c810a90c5bfe24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=960&crop=smart&auto=webp&s=9e415f43cdae65729878a0ca9f4a7a894ca8be09', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?width=1080&crop=smart&auto=webp&s=3acf6478b097b66560a9a81bdaef6463bf66481c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/12ojQ9khZuJRm7jqdMaOtnKaFtBC6Yo7dfwq4qKZ3jA.png?auto=webp&s=0871512cd76cbf7bde2f7bd9a5f885c071ce735a', 'width': 1200}, 'variants': {}}]} | |
A good local LLM model for basic projects | 3 | I'm a college student, and I'm looking for LLMs to run locally and use in my projects, since I don't really wanna go with paid LLM APIs.
I have an RTX 4050 laptop GPU (6GB VRAM) and 32GB RAM. Which models, and at what parameter counts, would be the best choice?
Thanks in advance | 2025-10-17T16:53:36 | https://www.reddit.com/r/LocalLLaMA/comments/1o96o5q/a_good_local_llm_model_for_basic_projects/ | Terrox1205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o96o5q | false | null | t3_1o96o5q | /r/LocalLLaMA/comments/1o96o5q/a_good_local_llm_model_for_basic_projects/ | false | false | self | 3 | null |
Using llama.cpp and RPC, managed to improve prompt processing by 4x (160 t/s to 680 t/s) and text generation by 2x (12.67 t/s to 22.52 t/s) by changing the device order, including RPC. GLM 4.6 IQ4_XS multi-GPU + RPC. | 123 | Hello guys, hoping you're having a good day.
As you know, llamacpp has RPC since time ago.
I have 2 PCs in my home:
My "Server":
* AM5 MSI X670E Carbon
* AMD Ryzen 9 9900X
* 192GB DDR5 6000Mhz CL32
* 7 GPUs
* 5090x2
* 4090x2
* A6000
* 3090x2
* MCX314A-BCCT 40Gbps NIC (totally overkill, prob 10Gbps is fine)
* OS: Fedora 42
And my "Gaming" PC:
* AM5 Gigabyte X670 Aorus Master (I wouldn't recommend this board btw)
* AMD Ryzen 7 7800X3D
* 64GB DDR5 6000Mhz CL30
* RTX 5090
* MCX314A-BCCT 40Gbps NIC
* OS: Windows 11
So for the test, I "disabled" one 3090 and replaced its layers with my 5090 via RPC.
I'm running GLM 4.6 IQ4\_XS (\~180GB) with (very complex, don't blame me):
LLAMA_SET_ROWS=1 ./llama-server \
-m '/models/GLM-4.6-IQ4_XS.gguf' \
-c 32768 \
--no-mmap \
--rpc 192.168.50.2:50052 \
-ngl 999 \
-ot "blk.(0|1|2|3|4|5|6|7|8|9|10|11|12|13|14|15).ffn.=CUDA0" \
-ot "blk.(16|17|18|19|20|21|22|23|24|25).ffn.=CUDA1" \
-ot "blk.(27|28|29|30|31|32|33|34|35|36).ffn.=CUDA2" \
-ot "blk.(38|39|40|41|42|43|44|45|46|47|48|49|50).ffn.=CUDA3" \
-ot "blk.(51|52|53|54|55|56|57|58|59).ffn.=CUDA4" \
-ot "blk.(61|62|63|64|65|66|67|68|69|70).ffn.=RPC0[192.168.50.2:50052]" \
-ot "blk.(72|73|74|75|76|77|78|79|80|81|82|83|84|85|86|87|88|89|90|91).ffn.=CUDA5" \
-ot "blk.26.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA1" \
-ot "blk.26.ffn_gate_exps.weight=CUDA1" \
-ot "blk.26.ffn_(down_exps|up_exps).weight=CUDA0" \
-ot "blk.37.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA2" \
-ot "blk.37.ffn_gate_exps.weight=CUDA2" \
-ot "blk.37.ffn_(down_exps|up_exps).weight=CUDA3" \
-ot "blk.60.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=CUDA4" \
-ot "blk.60.ffn_gate_exps.weight=CUDA4" \
-ot "blk.60.ffn_(down_exps|up_exps).weight=CUDA5" \
-ot "blk.71.ffn_(norm|gate_inp|gate_shexp|down_shexp|up_shexp).weight=RPC0[192.168.50.2:50052]" \
-ot "blk.71.ffn_gate_exps.weight=RPC0[192.168.50.2:50052]" \
-ot "blk.71.ffn_(down_exps|up_exps).weight=CUDA5" \
-fa on \
-mg 0 \
-ub 1792
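(For completeness: the remote Windows box just runs llama.cpp's rpc-server binary, along these lines; flags can vary between builds, so check rpc-server --help.)

# On the remote machine: expose its GPU to llama.cpp over RPC on port 50052
rpc-server -H 0.0.0.0 -p 50052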
By default, llama.cpp assigns RPC devices as **the first device**, which means the RPC device gets the biggest buffers and also has to do more processing than the server itself.
In --device terms, the default order in this case was effectively:
>\--device RPC0,CUDA0,CUDA1,CUDA2,CUDA3,CUDA4,CUDA5
And I was getting these speeds:
prompt eval time = 27661.35 ms / 4410 tokens ( 6.27 ms per token, 159.43 tokens per second)
eval time = 140832.84 ms / 1784 tokens ( 78.94 ms per token, 12.67 tokens per second)
So I started a discussion on GitHub here: [https://github.com/ggml-org/llama.cpp/discussions/16625](https://github.com/ggml-org/llama.cpp/discussions/16625)
And [abc-nix](https://github.com/abc-nix) made the great suggestion to move the RPC device later in the order.
So then, I used
>--device CUDA0,CUDA1,CUDA2,CUDA3,CUDA4,RPC0,CUDA5
And got
prompt eval time = 6483.46 ms / 4410 tokens ( 1.47 ms per token, 680.19 tokens per second)
eval time = 78029.06 ms / 1757 tokens ( 44.41 ms per token, 22.52 tokens per second)
Which is an absolutely insane performance bump.
Now I want to try to dual boot the "Gaming" PC to Linux to see if there's an improvement. As multiGPU by itself is really bad on Windows, not sure if that also affects RPC. | 2025-10-17T16:52:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o96mwq/using_llamacpp_and_rcp_managed_to_improve_promt/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o96mwq | false | null | t3_1o96mwq | /r/LocalLLaMA/comments/1o96mwq/using_llamacpp_and_rcp_managed_to_improve_promt/ | false | false | self | 123 | {'enabled': False, 'images': [{'id': 'TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=108&crop=smart&auto=webp&s=98f4a9657fc66207bb288d30345450d85ef44c2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=216&crop=smart&auto=webp&s=7e9eb1403ace34b43e4180958eed0586de1f897d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=320&crop=smart&auto=webp&s=a8f897bdb9ddf620760756d7d60c6d4d84c7fb79', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=640&crop=smart&auto=webp&s=b76572e81bb01436399d0578ab0793860eb659a0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=960&crop=smart&auto=webp&s=e534d54a990deb5c47e0e0a627ff9db222cd50f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?width=1080&crop=smart&auto=webp&s=1b95037e5384f5f85971f7667a899739ae5f9a99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/TSUeCH2NDv6RbJnLo0teQe06N3eAzK4CFJBJmi2-P2Y.png?auto=webp&s=0133f214b8caef82b7ce43207fb42a25015aa4b4', 'width': 1200}, 'variants': {}}]} |
5060ti chads... keep rising? (maybe) | 2 | Hey there, I have been trying to eke out the most performance from my setup. Previously I had 2x 5060ti (total 32GB VRAM) and 64GB system RAM. I was running gpt-oss 120b at around 22 t/s.
I saw a post here recently where someone said that upgrading to faster, more premium RAM pushed the CPU-offloaded part of gpt-oss 120b to over 30 t/s. I was intrigued. So I started looking up RAM prices and... well, I feel like I missed the boat. Prices have soared.
That said, 5060ti's continue to be the same price. Problem: I don't have any room in the case for another one. So... I got an NVMe-to-OCuLink adapter, a cheap eGPU, and another 5060ti. This is probably crazy, but I wanted to push my limits because I really liked the performance I had already gotten out of the previous cards.
Okay, so with gpt-oss 120b I get a speed increase up to:
eval time = 70474.49 ms / 1891 tokens ( 37.27 ms per token, 26.83 tokens per second)
So not bad... but I wish it were more. This is likely due to my CPU (7600X3D), RAM speed (4800), and the wacky PCIe lane situation (all Gen 4: an x8 for my OCuLink card thanks to my motherboard's shitty bifurcation, an x4, and an x1).
System specs now:
- 7600x3d
- 64gb system ram
- 3x 5060ti for a total of 48gb vram
I tested other small models like Qwen 3 coder Q8 with 100k context and I can get almost 80 t/s now with all of that offloaded onto the cards. So that is also a win.
Should you go out and do this? Maybe not. I got the aoostar ago1 to go with the card and an Amazon NVMe-to-OCuLink adapter. This added almost $200 to the card since I can't fit any more in the case.
Questions? Comments? Want to call me insane? | 2025-10-17T16:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1o96kn7/5060ti_chads_keep_rising_maybe/ | see_spot_ruminate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o96kn7 | false | null | t3_1o96kn7 | /r/LocalLLaMA/comments/1o96kn7/5060ti_chads_keep_rising_maybe/ | false | false | self | 2 | null |
vLLM Performance Benchmark: OpenAI GPT-OSS-20B on RTX Pro 6000 Blackwell (96GB) | 8 | **Hardware:** NVIDIA RTX Pro 6000 Blackwell Workstation Edition (96GB VRAM)
**Software:** vLLM 0.11.0 | CUDA 13.0 | Driver 580.82.09 | FP16/BF16
**Model:** openai/gpt-oss-20b source: [https://huggingface.co/openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
Ran benchmarks across different output lengths to see how context scaling affects throughput and latency. Here are the key findings:
[500 tokens](https://preview.redd.it/seig2q6uapvf1.png?width=6933&format=png&auto=webp&s=33dee63986b2fc78c54d8ce2b6d9560b4465acd8)
[1000-2000 tokens](https://preview.redd.it/7i6wap4wapvf1.png?width=6907&format=png&auto=webp&s=9680aa1f3ac9d117d7c36811d7d74143c81c0e8d)
# 500 Token Output Results
**Peak Throughput:**
* Single user: 2,218 tokens/sec at 64K context
* Scales down to 312 tokens/sec at 128K context (20 concurrent users)
**Latency:**
* Excellent TTFT: instant (<250ms) up to 64K context, even at 20 concurrent users
* Inter-token latency stays instant across all configurations
* Average latency ranges from 2-19 seconds depending on concurrency
**Sweet Spot:** 1-5 concurrent users with contexts up to 64K maintain 400-1,200+ tokens/sec with minimal latency
# 1000-2000 Token Output Results
**Peak Throughput:**
* Single user: 2,141 tokens/sec at 64K context
* Maintains 521 tokens/sec at 128K with 20 users
**Latency Trade-offs:**
* TTFT increases to "noticeable delay" territory at higher concurrency (still <6 seconds)
* Inter-token latency remains instant throughout
* Average latency: 8-57 seconds at high concurrency/long contexts
**Batch Scaling:** Efficiency improves significantly with concurrency - hits 150%+ at 20 users for longer contexts
# Key Observations
1. **Memory headroom matters:** 96GB VRAM handles 128K context comfortably even with 20 concurrent users
2. **Longer outputs smooth the curve:** Throughput degradation is less severe with 1500-2000 token outputs vs 500 tokens
3. **Context scaling penalty:** \~85% throughput reduction from 1K to 128K context at high concurrency
4. **Power efficiency:** Draw stays reasonable (300-440W) across configurations
5. **Clock stability:** Minor thermal throttling only at extreme loads (128K + 1 user drops to \~2670 MHz)
The Blackwell architecture shows excellent scaling characteristics for real-world inference workloads. The 96GB VRAM is the real MVP here - no OOM issues even at maximum context length with full concurrency.
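If you want a quick single-request sanity check against your own instance before running the full suite, vLLM's OpenAI-compatible endpoint makes it easy (assuming the default port and the model name above):

# Time one 500-token completion and eyeball tokens/sec from the usage stats
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-oss-20b", "prompt": "Write a short text about the future of technology.", "max_tokens": 500}'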
Used: [https://github.com/notaDestroyer/vllm-benchmark-suite](https://github.com/notaDestroyer/vllm-benchmark-suite)
**TL;DR:** If you're running a 20B parameter model, this GPU crushes it. Expect 1,000+ tokens/sec for typical workloads (2-5 users, 32K context) and graceful degradation at extreme scales. | 2025-10-17T16:45:55 | https://www.reddit.com/r/LocalLLaMA/comments/1o96gtu/vllm_performance_benchmark_openai_gptoss20b_on/ | notaDestroyer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o96gtu | false | null | t3_1o96gtu | /r/LocalLLaMA/comments/1o96gtu/vllm_performance_benchmark_openai_gptoss20b_on/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=108&crop=smart&auto=webp&s=56f93ea81e319c450e5ccbbf073520d2e0a4c3a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=216&crop=smart&auto=webp&s=4e3ab70d3281bc70a498d840b59a751d201572c7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=320&crop=smart&auto=webp&s=036b666bd7e1e6dc5aa6e0a7dab8a1b14d62c2a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=640&crop=smart&auto=webp&s=6c8926f5bd6382bf4684a66eaf41cb4337b53990', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=960&crop=smart&auto=webp&s=53fb189ce5aff12dd5c4168e80b2d104d59ab391', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=1080&crop=smart&auto=webp&s=49b5d24e3488b5add024df82d4e43b87be232c96', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?auto=webp&s=0fa5bd185c5e14782e188bc2657fb5ad9287fe13', 'width': 1200}, 'variants': {}}]} | |
vLLM extremely slow / no response with max_model_len=8192 and multi-GPU tensor parallel | 0 | **Body**
\`\`\` Setup:
text
\- Model: llama-3.1-8b
\- Hardware: 2x NVIDIA A40
\- CUDA: 12.5, Driver: 555.42.06
\- vLLM version: <your vLLM version>\`\`\`
\- Serving command:
\`\`\`CUDA\_VISIBLE\_DEVICES=0,1 vllm serve ./llama-3.1-8b \\
\--tensor-parallel-size 2 \\
\--max-model-len 8192 \\
\--gpu-memory-utilization 0.9 \\
\--chat-template /opt/vllm\_templates/llama-chat.jinja \\
\--guided-decoding-backend outlines \\
\--host [0.0.0.0](http://0.0.0.0) \\
\--port 9000 \\
\--max-num-seqs 20\`\`\`
**Problem:**
- With max_model_len=4096 and top_k=2 (top_k = number of chunks/docs retrieved) in my semantic retrieval pipeline → works fine.
- With max_model_len=8192, multi-GPU TP=2, top_k=5 → server never returns an answer.
- Logs show extremely low throughput:

Avg prompt throughput: 0.0 tokens/s, Avg generation throughput: 0.2 tokens/s
GPU KV cache usage: 0.4%, Prefix cache hit rate: 66.4%

- Context size is ~2800–4000 tokens.
**What I’ve tried:**
- Reduced max_model_len → works
- Reduced top_k → works
- Checked GPU memory → not fully used
**Questions:**
1. Is this a known KV cache / memory allocation bottleneck for long contexts in vLLM?
2. Are there ways to batch token processing or offload KV cache to CPU for large max_model_len?
3. Recommended vLLM flags for stable long-context inference on multi-GPU setups? | 2025-10-17T16:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1o962tt/vllm_extremely_slow_no_response_with_max_model/ | Dizzy-Watercress-744 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o962tt | false | null | t3_1o962tt | /r/LocalLLaMA/comments/1o962tt/vllm_extremely_slow_no_response_with_max_model/ | false | false | self | 0 | null |
Biggest security or compliance headache when deploying LLMs in production? | 1 | Hi all, I am a security researcher exploring AI/LLM security topics and was curious to hear from those deploying models in production - what’s been your biggest security or compliance headache so far? | 2025-10-17T15:59:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o958eh/biggest_security_or_compliance_headache_when/ | Big_Impression_410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o958eh | false | null | t3_1o958eh | /r/LocalLLaMA/comments/1o958eh/biggest_security_or_compliance_headache_when/ | false | false | self | 1 | null |
Is there any way to change reasoning effort on the fly for GPT-OSS in llama.cpp? | 15 | I run GPT-OSS-120B on my rig. I'm using a command like `llama-server ... --chat-template-kwargs '{"reasoning_effort":"high"}'`
This works, and GPT-OSS is much more capable at high reasoning effort.
However, in some situations (coding, summarization, etc) I would like to set the reasoning effort to low.
I understand llama.cpp doesn't implement the entire OpenAI spec, but according to the [OpenAI completions docs](https://platform.openai.com/docs/api-reference/responses/create) you're supposed to pass `"reasoning": { "effort": "high" }` in the request. This doesn't seem to have any effect though.
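One thing worth testing: I believe recent llama.cpp builds also accept `chat_template_kwargs` per request, mirroring the CLI flag (an assumption on my part, so check your build's server README):

# Per-request override (assumes your llama-server build supports chat_template_kwargs)
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Summarize this diff in two sentences."}],
    "chat_template_kwargs": {"reasoning_effort": "low"}
  }'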
So my question: has anyone got this working? is this possible? | 2025-10-17T15:58:54 | https://www.reddit.com/r/LocalLLaMA/comments/1o9584a/is_there_any_wayto_change_reasoning_effort_on_the/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o9584a | false | null | t3_1o9584a | /r/LocalLLaMA/comments/1o9584a/is_there_any_wayto_change_reasoning_effort_on_the/ | false | false | self | 15 | null |
The Hidden Philosophy Inside Large Language Models | 0 | 2025-10-17T15:58:33 | https://wmosshammer.medium.com/the-hidden-philosophy-inside-large-language-models-4bc0d7e4f9d8 | Uncomfortable_Pause2 | wmosshammer.medium.com | 1970-01-01T00:00:00 | 0 | {} | 1o957sz | false | null | t3_1o957sz | /r/LocalLLaMA/comments/1o957sz/the_hidden_philosophy_inside_large_language_models/ | false | false | 0 | {'enabled': False, 'images': [{'id': '_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?width=108&crop=smart&auto=webp&s=2951582550ab8bc7325335df1701d184c805a1a6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?width=216&crop=smart&auto=webp&s=07774aaeaaa528933a27c841a1aaf42d45194c03', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?width=320&crop=smart&auto=webp&s=e4300c7fb425ca48d92386ec68f7ec37ae5c2c13', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?width=640&crop=smart&auto=webp&s=c47de557362416643d771ff9ca37ebe3a9e1266f', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?width=960&crop=smart&auto=webp&s=5dd220954f2a52148670d979615904ba5288708d', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/_b0ycv5CQnhrB1PCm8NSLw0-dYvVABbkHGue12P_YfQ.png?auto=webp&s=770f9e509f0274ff344e655fdfd7f3cdeb225b59', 'width': 1024}, 'variants': {}}]} | ||
dual 5070ti vs. 5090 | 3 | A simple review of some local LLM testing shows a dual 5070ti setup achieving 55 output tokens/s while a 5090 achieves 65 output tokens/s with the same aggregate memory.
However, in Canadian dollar terms a dual 5070ti setup is about $2,200 while a 5090 (when found at MSRP) is about $3,300. So in dollars per output token/s the 5070ti is the better value ($40 vs ~$51 per token/s) and cheaper to get started with as a beginner (get a single 5070ti and run quantized small models). Also, where I am it's slightly easier to procure at MSRP.
Am I looking at this the right way? Is there a capability of the 5090 that's worth paying the extra $$ for despite the apparent inferior value? | 2025-10-17T15:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1o94swi/dual_5070ti_vs_5090/ | ghabian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o94swi | false | null | t3_1o94swi | /r/LocalLLaMA/comments/1o94swi/dual_5070ti_vs_5090/ | false | false | self | 3 | null |
oppo is powered by AI using arm | 1 | Info :https://x.com/Arm/status/1979140387022270924?t=6hHVyOygoTAKMbRofuOPQg&s=19 | 2025-10-17T15:42:04 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o94s3k | false | null | t3_1o94s3k | /r/LocalLLaMA/comments/1o94s3k/oppo_is_powered_by_ai_using_arm/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'JOOiFrtras4CvQZCKFE3hgyQ_PS0o98zXjiOU4FjI2s', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=108&crop=smart&auto=webp&s=0ea7a38c46c44637c5770611b51db33813283728', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=216&crop=smart&auto=webp&s=be13f410ac351da73ace444c6f8639033dea99f5', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=320&crop=smart&auto=webp&s=31e918717b6b7f5ac9a06d5ddc10329a48d7e106', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=640&crop=smart&auto=webp&s=cebda4d75ce24d0002c2e801bc79773e9fe7ffc8', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=960&crop=smart&auto=webp&s=a5acd67534f18371d3e2c421395465be642470c8', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?width=1080&crop=smart&auto=webp&s=373f843ec33a6f6125180a6132dbedc60b24e62b', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/y0kozertzovf1.jpeg?auto=webp&s=8ec76b5e760250c02fd0b16a171d9c2b5793df3e', 'width': 1080}, 'variants': {}}]} | ||
What is a recommended processor, board, and RAM for an LLM with a 3090 | 0 | As the title states, I'm getting a 3090 for a local LLM for my own home AI, but I'm curious what the best combo for this would be, or whether one of the AI Max all-in-ones that are now popping up would be a better option? | 2025-10-17T15:33:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o94ka0/what_is_a_recommended_processor_board_and_ram_for/ | Firecracker048 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o94ka0 | false | null | t3_1o94ka0 | /r/LocalLLaMA/comments/1o94ka0/what_is_a_recommended_processor_board_and_ram_for/ | false | false | self | 0 | null |
Local tool to search documents (RAG only) | 12 | Is there a local, open-source tool that can be used to search documents using embeddings (RAG-style retrieval), without any LLM needed for the processing? Usually in RAG with an LLM, the document is searched first and then the results are given to the LLM, and so on. I am looking just for a way to search a document, say a PDF (assuming it's text rather than images), where searching for a term uses embedding models to find related concepts even if the term doesn't exactly match what's written (i.e., the whole point of embeddings). | 2025-10-17T15:29:04 | https://www.reddit.com/r/LocalLLaMA/comments/1o94fs0/local_tool_to_search_documents_rag_only/ | SuddenWerewolf7041 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o94fs0 | false | null | t3_1o94fs0 | /r/LocalLLaMA/comments/1o94fs0/local_tool_to_search_documents_rag_only/ | false | false | self | 12 | null |
Google can't write their docs right, or is there some weird bs going on | 0 | I am trying to set up Genai batching. Yesterday it was working fine, and today even the example in the Google Genai docs isn't working.
| 2025-10-17T15:02:20 | https://www.reddit.com/r/LocalLLaMA/comments/1o93pzm/google_cant_write_their_docs_right_or_is_there/ | vensucksatlife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o93pzm | false | null | t3_1o93pzm | /r/LocalLLaMA/comments/1o93pzm/google_cant_write_their_docs_right_or_is_there/ | false | false | self | 0 | null |
Get 1 Month of Perplexity Pro for Free | 1 | [removed] | 2025-10-17T14:58:29 | https://pplx.ai/anthonywur63521 | Status-low21 | pplx.ai | 1970-01-01T00:00:00 | 0 | {} | 1o93lzv | false | null | t3_1o93lzv | /r/LocalLLaMA/comments/1o93lzv/get_1_month_of_perplexity_pro_for_free/ | false | false | default | 1 | null |
Audio transcription with llama.cpp multimodal | 4 | Has anybody attempted audio transcription with the newish llama.cpp audio support?
I have successfully compiled llama.cpp and run a model, but I can't quite figure out how to make the model understand the task:
llama-mtmd-cli -m Voxtral-Mini-3B-2507-Q4_K_M.gguf --mmproj mmproj-Voxtral-Mini-3B-2507-Q8_0.gguf --audio test-2.mp3 -p "What is the speaker saying?"
I am not sure if the model is too small and doesn't follow instructions, or if it cannot understand the task because of some fundamental issue.
`test-2.mp3` is the test file from the llama.cpp repo.
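One idea worth testing: if I understand the mtmd tooling right, the prompt can carry an explicit media marker telling the model where the audio goes, plus a more direct instruction (an assumption on my part; check the tools/mtmd docs for your build):

# Hypothetical variant: explicit media marker and a verbatim-transcription instruction
llama-mtmd-cli -m Voxtral-Mini-3B-2507-Q4_K_M.gguf \
  --mmproj mmproj-Voxtral-Mini-3B-2507-Q8_0.gguf \
  --audio test-2.mp3 \
  -p "Transcribe this audio clip verbatim: <__media__>"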
I know using whisper.cpp is much simpler, and I do that already, but I'd like to build some more complex functionality using a multimodal model. | 2025-10-17T14:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/1o93ad1/audio_transcription_with_llamacpp_multimodal/ | TachyonicBytes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o93ad1 | false | null | t3_1o93ad1 | /r/LocalLLaMA/comments/1o93ad1/audio_transcription_with_llamacpp_multimodal/ | false | false | self | 4 | null |
The real 'privacy' challenge for local models isn't just data on disk, it's the biometric key. | 28 | We spend a lot of time here talking about running models locally for privacy and control over our data. We hash out quantization, VRAM, and efficient inference. But I think we're overlooking a critical, persistent privacy challenge that's already solved by external AI: biometric identity fusion. I ran a test on my own fragmented digital presence, partly out of paranoia. I used faceseek as an external benchmark. I uploaded a low-res photo of myself that was only ever on a private, archived forum. The external tool immediately linked that photo to two completely separate, pseudonymous accounts I use locally, offline, for testing personal LLM projects. This implies the AI has already generated a permanent biometric template of me that can unify identities across platforms, regardless of local storage or pseudonymity. If external AI can already stitch together our separate online and offline personas using just a face, what does that mean for the long-term privacy guarantees of local models? Even if our local LLM never leaves our machine, our identity can be mapped to its outputs if that identity is indexed elsewhere. How do we build truly private local agent architectures when our biometric identity is already public? | 2025-10-17T13:48:32 | https://www.reddit.com/r/LocalLLaMA/comments/1o91thq/the_real_privacy_challenge_for_local_models_isnt/ | ExtraAlien1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o91thq | false | null | t3_1o91thq | /r/LocalLLaMA/comments/1o91thq/the_real_privacy_challenge_for_local_models_isnt/ | false | false | self | 28 | null |
LLM speed on my system (R5 5600G, 5060Ti 16GB, 32GB RAM) | 2 | **LLM speed on my system (R5 5600G, 5060Ti 16GB, 32GB RAM)**
**I tested several models on my system, asking each to "Write a short text about the future of technology". Here are the results:**
|Model|Total Duration (s)|Load Duration (s)|Prompt Eval Count (tokens)|Prompt Eval Duration (ms)|Prompt Eval Rate (tokens/s)|Eval Count (tokens)|Eval Duration (s)|Eval Rate (tokens/s)|
|:-|:-|:-|:-|:-|:-|:-|:-|:-|
|**Gemma3:12B-IT-Q4\_K\_M**|11.004048|6.0978792|18|39.096|460.41|198|4.7246764|41.91|
|**Qwen3-Coder:30B**|16.0636496|8.3487872|17|158.467|107.28|236|7.4952974|31.49|
|**Mistral-Small3.2:24B-Instruct-2506-Q4\_K\_M**|28.5862299|8.6925738|516|4340.0461|118.89|228|15.4800842|14.73|
|**Qwen3:30B-A3B-Thinking-2507-Q4\_K\_M**|30.5642031|9.23035|19|180.8996|105.03|627|20.9965337|29.86|
|**GPT-OSS:20B**|4.8795305|0.1652446|76|204.101|372.36|357|4.3407544|82.24|
# Key Takeaways:
* **GPT-OSS:20B** remains the fastest in both prompt evaluation (372.36 tokens/s) and response generation (82.24 tokens/s).
* **Gemma3:12B-IT-Q4\_K\_M** shows strong prompt processing speed (460.41 tokens/s) but slower generation (41.91 tokens/s).
* **Mistral-Small3.2:24B-Instruct-2506-Q4\_K\_M** has the highest prompt evaluation rate (118.89 tokens/s) but the slowest response generation (14.73 tokens/s).
* **Qwen3:30B-A3B-Thinking-2507-Q4\_K\_M** generates the longest outputs (627 tokens) but is slower in both prompt and response speed.
Testing was done with a browser running in the background, i.e., normal PC usage in parallel with the tests.
The question was simple; is there a universal question to use for such tests?
The test was run with this command from the blobs folder: `ollama run gpt-oss:20b "Write a short text about future technologies." --verbose` | 2025-10-17T13:47:48 | https://www.reddit.com/r/LocalLLaMA/comments/1o91suj/llm_speed_on_my_system_r5_5600g_5060ti_16gb_32gb/ | R_dva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o91suj | false | null | t3_1o91suj | /r/LocalLLaMA/comments/1o91suj/llm_speed_on_my_system_r5_5600g_5060ti_16gb_32gb/ | false | false | self | 2 | null |
Updated to Ubuntu 24.04 and now Tesla P40 doesn't work with LMStudio | 1 | I've just recently updated to Ubuntu 24.04 and I am trying to use LMStudio with my P40.
I installed the Data Center driver for Ubuntu 24.04 (580.95.05) so Ubuntu can see the P40. I'm also running an RTX 2060 to drive graphics.
When I launch LMstudio it only sees the RTX 2060. When I run with:
CUDA_VISIBLE_DEVICES=1
It sees the P40, but when I try to load the gpt-oss 20b model I get:
[LMSInternal][Client=LM Studio][Endpoint=loadModel] Error in channel handler: Error: Error loading model.
.
.
.
cause: '(Exit code: null). Please check settings and try loading the model again. '
Has anyone come across this before? Any suggestions on how to fix this? LMStudio was working fine on the previous Ubuntu 22.
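While waiting for suggestions, this is the sanity check I've been running (a sketch; the `compute_cap` query needs a reasonably recent driver, which 580.95 should be). It confirms both cards are visible to the driver and shows the P40's compute capability (6.1, which some newer CUDA builds no longer target):

```python
import subprocess

# Ask the driver for every GPU it can see, with name and compute capability
out = subprocess.run(
    ["nvidia-smi", "--query-gpu=index,name,compute_cap", "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)  # the P40 should show up here with compute_cap 6.1
```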
Thanks! | 2025-10-17T13:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1o91sdp/updated_to_ubuntu_2404_and_now_tesla_p40_doesnt/ | fleabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o91sdp | false | null | t3_1o91sdp | /r/LocalLLaMA/comments/1o91sdp/updated_to_ubuntu_2404_and_now_tesla_p40_doesnt/ | false | false | self | 1 | null |
1 Month of Perplexity Pro for free | 1 | Hey everyone ☺️
you can currently get 1 month of Perplexity Pro completely free — here’s how:
1. Accept your Comet invite with the Pro option
2. Download Comet and log in to your account
3. Ask at least one question using Comet
4. After your first question, your free month of Perplexity Pro will be automatically activated
Here’s the invite link: https://pplx.ai/anthonywur63521
It’s totally free and only takes a few minutes to set up. If you don’t want to continue with the Pro plan after the first month, you can cancel it anytime in your account settings.
Have fun exploring — and feel free to share how Comet works for you! | 2025-10-17T13:27:21 | https://pplx.ai/anthonywur63521 | Loose-Dog-7703 | pplx.ai | 1970-01-01T00:00:00 | 0 | {} | 1o91b87 | false | null | t3_1o91b87 | /r/LocalLLaMA/comments/1o91b87/1_month_of_perplexity_pro_for_free/ | false | false | default | 1 | null |
Agentic RAG for Dummies - A minimal Agentic RAG demo built with LangGraph — learn Retrieval-Augmented Agents in minutes. | 0 | Hey everyone! I stumbled upon a repository you absolutely need to check out if you are trying to build a truly advanced RAG system, what's now called Agentic RAG.
Agentic RAG for Dummies
This project shows you how to build a document Q&A system that actually works, all with minimal code thanks to LangGraph.
Why This is the Ultimate RAG Starter Repo:
No "Dumb" RAG: Forget the classic approach (chunking and fragmentation). This system uses an AI Agent that thinks.
Smarter Strategy: The agent first searches through document summaries (like a smart index), and only if it finds a potential match does it retrieve the full document.
Maximum Accuracy: By leveraging long-context LLMs (like Gemini 2.0 Flash) to read the complete document, the answers are far more accurate and hallucinations are significantly reduced.
Self-Correcting: The agent has a built-in feedback loop: if the generated answer is not satisfactory, it retries with a different search approach.
Minimal Code, Maximum Result: The entire orchestration logic (the "brain") is implemented cleanly with LangGraph in very few lines of code.
If you want to move from "RAG as a demo" to "RAG in production" with clean, working code, this is the starting point.
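To make the loop concrete, here's a stripped-down sketch of the graph wiring (the node functions are placeholders, not the repo's actual code; only the LangGraph API calls are real):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class RAGState(TypedDict):
    question: str
    context: str
    answer: str
    attempts: int

def search_summaries(state: RAGState) -> dict:
    # 1) Match the question against document summaries (placeholder)
    return {"context": "ids of candidate documents"}

def fetch_full_docs(state: RAGState) -> dict:
    # 2) Retrieve the complete documents for the long-context LLM (placeholder)
    return {"context": "full document text"}

def generate(state: RAGState) -> dict:
    # 3) Answer from the full documents (placeholder)
    return {"answer": "draft answer", "attempts": state["attempts"] + 1}

def decide(state: RAGState) -> str:
    # 4) Self-check; a real version grades the answer instead of counting tries
    return "done" if state["attempts"] >= 2 else "retry"

g = StateGraph(RAGState)
g.add_node("search", search_summaries)
g.add_node("fetch", fetch_full_docs)
g.add_node("generate", generate)
g.set_entry_point("search")
g.add_edge("search", "fetch")
g.add_edge("fetch", "generate")
g.add_conditional_edges("generate", decide, {"retry": "search", "done": END})

app = g.compile()
print(app.invoke({"question": "q", "context": "", "answer": "", "attempts": 0}))
```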
Check it out, leave a star, and let me know your thoughts!
Link: https://github.com/GiovanniPasq/agentic-rag-for-dummies | 2025-10-17T13:26:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o91a2g/agentic_rag_for_dummies_a_minimal_agentic_rag/ | CapitalShake3085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o91a2g | false | null | t3_1o91a2g | /r/LocalLLaMA/comments/1o91a2g/agentic_rag_for_dummies_a_minimal_agentic_rag/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=108&crop=smart&auto=webp&s=de0ce81070cbbdee6a7b585778d50190f36e32d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=216&crop=smart&auto=webp&s=bd4821b4fc44ed9b1f0b71f3f58619c9b190f472', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=320&crop=smart&auto=webp&s=cd0b3f44ae0927dd727c54962a1e11f8429d0be2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=640&crop=smart&auto=webp&s=a67d90f634efa0c13ab340b5818f0f2a65a8b4e5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=960&crop=smart&auto=webp&s=ff2088b27045ef177ffa32ec17bf4327f9dfc59a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?width=1080&crop=smart&auto=webp&s=d99aa1bdfb6c0f91e38afff78ca97c989a00a913', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/u6S0JT1gJMXOVFHpn4dYpF9zMmpZbqzqNiFeLuoChqk.png?auto=webp&s=029fa141eae7a93d0c1bf7b74e40f526f2fa2e75', 'width': 1200}, 'variants': {}}]} |
Need advice: A2000 (12 GB) vs 2× 1080 Ti for GPT-20B fine-tuning? | 2 | I want to fine-tune the gpt-oss 20B model, but I'm unsure if it'll work on my PC.
I have two options:
1. A2000 with 12 GB VRAM
2. Dual 1080 Ti with 11 GB VRAM each
So can you suggest what's best for me? | 2025-10-17T13:25:46 | https://www.reddit.com/r/LocalLLaMA/comments/1o919ul/need_advice_a2000_12_gb_vs_2_1080_ti_for_gpt20b/ | Kaustubh_Rai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o919ul | false | null | t3_1o919ul | /r/LocalLLaMA/comments/1o919ul/need_advice_a2000_12_gb_vs_2_1080_ti_for_gpt20b/ | false | false | self | 2 | null |
Get 1 Month of Perplexity Pro for Free | 1 | [removed] | 2025-10-17T13:09:47 | https://pplx.ai/anthonywur63521 | Necessary_Gap_7217 | pplx.ai | 1970-01-01T00:00:00 | 0 | {} | 1o90vzc | false | null | t3_1o90vzc | /r/LocalLLaMA/comments/1o90vzc/get_1_month_of_perplexity_pro_for_free/ | false | false | default | 1 | null |
Get 1 Month of Perplexity Pro for Free | 1 | [removed] | 2025-10-17T13:08:01 | https://pplx.ai/anthonywur63521 | Necessary_Gap_7217 | pplx.ai | 1970-01-01T00:00:00 | 0 | {} | 1o90ugo | false | null | t3_1o90ugo | /r/LocalLLaMA/comments/1o90ugo/get_1_month_of_perplexity_pro_for_free/ | false | false | default | 1 | null |
do 2x MCIO to PCIe x16 adapters exist? | 19 | I want some kind of a "reverse bifurcation", 2 separate x8 ports combined into one x16. Is it possible to insert a x16 GPU into these two MCIO x8 ports? I've found some cables but not sure if they will work. Where do I put that 4 pin cable on the 2nd pic? Will the adapter on the 3rd pic work if I ditch the left card and plug both cables directly into the motherboard? Any other ways of expanding PCIe x16 slots on Supermicro H13SSL or H14SSL? These motherboards have just 3 full size PCIe slots. | 2025-10-17T12:55:48 | https://www.reddit.com/gallery/1o90jvf | MelodicRecognition7 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o90jvf | false | null | t3_1o90jvf | /r/LocalLLaMA/comments/1o90jvf/do_2x_mcio_to_pcie_x16_adapters_exist/ | false | false | 19 | null | |
LlamaBarn — A macOS menu bar app for running local LLMs (open source) | 94 | Hey `r/LocalLLaMA`! We just released this in beta and would love to get your feedback.
Here: https://github.com/ggml-org/LlamaBarn
What it does:
- Download models from a curated catalog
- Run models with one click — it auto-configures them for your system
- Built-in web UI and REST API (via `llama.cpp` server)
It's a small native app (~12 MB, 100% Swift) that wraps `llama.cpp` to make running local models easier.
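Since it's the `llama.cpp` server under the hood, the REST API speaks the usual OpenAI-compatible dialect. A quick test from Python (a sketch; the port is an assumption, the app shows the real one):

```python
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumed default port
    json={
        "model": "whatever-you-loaded",  # placeholder model name
        "messages": [{"role": "user", "content": "Say hello from LlamaBarn!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```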
| 2025-10-17T12:54:41 | erusev_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o90ixr | false | null | t3_1o90ixr | /r/LocalLLaMA/comments/1o90ixr/llamabarn_a_macos_menu_bar_app_for_running_local/ | false | false | default | 94 | {'enabled': True, 'images': [{'id': 'nmcd9kwwvnvf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=108&crop=smart&auto=webp&s=61ba696178d748c1591fafc08ffde2442c35735f', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=216&crop=smart&auto=webp&s=1310c6f2f6f204eaae94810274114feb9623f55e', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=320&crop=smart&auto=webp&s=585e9ffdbee549a131450d611e78aa82f9935bd5', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=640&crop=smart&auto=webp&s=5c988f1b4b79b949ebee5a449831ea15aab801ef', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=960&crop=smart&auto=webp&s=80f4f664fe7ea1dbfcd8a7932ff3c8eca4a4e547', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?width=1080&crop=smart&auto=webp&s=6a77adc829287b310b482ce4a41422ef12c8f88b', 'width': 1080}], 'source': {'height': 1060, 'url': 'https://preview.redd.it/nmcd9kwwvnvf1.png?auto=webp&s=fd8f544c070a7ce00a8b32778117658a276a6cba', 'width': 1413}, 'variants': {}}]} | |
gpt-oss 20B with 8 vCPUs (24 GHz): how many tokens per second? (CPU-only mode) | 1 | Has anyone tried running gpt-oss 20B in CPU-only mode (8 vCPUs, 24 GHz)? If so, how many tokens per second can it generate? | 2025-10-17T12:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o90eaj/gptoss_20b_with_8_vcpus_24_ghz_how_much_token_per/ | firasjlassi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o90eaj | false | null | t3_1o90eaj | /r/LocalLLaMA/comments/1o90eaj/gptoss_20b_with_8_vcpus_24_ghz_how_much_token_per/ | false | false | self | 1 | null |
Get 1 Month of Perplexity Pro for Free | 0 | Hey everyone ☺️
you can currently get 1 month of Perplexity Pro completely free — here’s how:
1. Accept your Comet invite with the Pro option
2. Download Comet and log in to your account
3. Ask at least one question using Comet
4. After your first question, your free month of Perplexity Pro will be automatically activated
Here’s the invite link: https://pplx.ai/anthonywur63521
It’s totally free and only takes a few minutes to set up. If you don’t want to continue with the Pro plan after the first month, you can cancel it anytime in your account settings.
Have fun exploring — and feel free to share how Comet works for you! | 2025-10-17T12:10:46 | https://pplx.ai/anthonywur63521 | Necessary_Gap_7217 | pplx.ai | 1970-01-01T00:00:00 | 0 | {} | 1o8zksm | false | null | t3_1o8zksm | /r/LocalLLaMA/comments/1o8zksm/get_1_month_of_perplexity_pro_for_free/ | false | false | default | 0 | null |
Upgrading my PC to run Qwen3-Coder-30B-A3B, Specs advice? | 3 | Hi All! I would appreciate some advice on this upgrade I'm planning.
I'm new to local LLMs, but managed to run Qwen3 30B on an online rented RTX 5090 via vLLM, and liked the results.
My current PC specs:
CPU: AMD **Ryzen 5 7600X** 4.7 GHz 6-Core
RAM: CORSAIR VENGEANCE DDR5 RAM **32GB** (2x16GB) 5200MHz ( running at 4800MHz )
MB: Asus TUF GAMING **B650**\-PLUS ATX **AM5**
GPU: Gigabyte GAMING OC Rev 2.0 **RTX 3070** **8 GB** LHR
PSU: Corsair RM750x **750 W** 80+ Gold
I was thinking of upgrading to:
CPU: AMD **RYZEN ™ 7 9800X 3D** Desktop Processor (8-core/16-thread)
GPU: Gigabyte GeForce **RTX 5090** GAMING OC 32 GB
PSU: CORSAIR **HX1200i** (2025) Fully Modular
Total approximate cost \~£3k
I also play games every now and then!
Any suggestions for this upgrade? Things I didn't account for? Thanks in advance! | 2025-10-17T11:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o8yili/upgrading_my_pc_to_run_qwen3coder30ba3b_specs/ | bumblebee_m | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8yili | false | null | t3_1o8yili | /r/LocalLLaMA/comments/1o8yili/upgrading_my_pc_to_run_qwen3coder30ba3b_specs/ | false | false | self | 3 | null |
This is what’s wrong with the world | 0 | 2025-10-17T10:57:57 | https://www.reddit.com/gallery/1o8y6is | GravyPoo | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o8y6is | false | null | t3_1o8y6is | /r/LocalLLaMA/comments/1o8y6is/this_is_whats_wrong_with_the_world/ | false | false | 0 | null | ||
what to use for embeddings for search application? | 7 | I'm trying to get some embeddings for a new search application im working on.
I don't want to rely on 3-rd party apis (like openai `text-embedding-3-small` or similar).
How would I get fast cpu-only embeddings? Is there anything I can ship that would run from an inexpensive VPS?
I'm running [https://huggingface.co/Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on local hardware now, but I can't say it's very performant.
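For context, this is roughly the setup I have now (a sketch):

```python
from sentence_transformers import SentenceTransformer

# CPU-only embedding of search documents with the Qwen3 model
model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B", device="cpu")

docs = ["how to renew a passport", "best pizza places nearby"]
embeddings = model.encode(docs, normalize_embeddings=True)
print(embeddings.shape)  # one 1024-dim vector per document
```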
so what do people use for text embedding that could be cpu-only? | 2025-10-17T10:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o8y5uy/what_to_use_for_embeddings_for_search_application/ | cranberrie_sauce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8y5uy | false | null | t3_1o8y5uy | /r/LocalLLaMA/comments/1o8y5uy/what_to_use_for_embeddings_for_search_application/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=108&crop=smart&auto=webp&s=e0a0b9e00a90a64308b392ea065a5666bbc7c99a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=216&crop=smart&auto=webp&s=24794d4dc5e2f816acf136f12041a449ec01d2b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=320&crop=smart&auto=webp&s=ced369b8a0ae3d1cdbe7a030960c50fd3f2cfdd2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=640&crop=smart&auto=webp&s=64a598c0c7e2a44fa02d43ac09c7d63d0a6c1b6b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=960&crop=smart&auto=webp&s=cba6e6727f41b1406c3ce46b365fd9edcbcbf5c5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?width=1080&crop=smart&auto=webp&s=4a426c4d7704d76efbc418602a12814dc8c29b80', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HVlDx3sIc9fDyqxYROltZh4_zaLPIwQ2OvPGjxm27kk.png?auto=webp&s=4399c8976d12b85ffaee7ae14ab7f5725bb2ea12', 'width': 1200}, 'variants': {}}]} |
How do you define acceptance criteria when delivering LLM projects for companies? | 20 | Hi everyone,
I’d like to ask—when you take on large language model (LLM) projects for companies, how do you usually discuss and agree on acceptance criteria?
My initial idea was to collaborate with the client to build an evaluation set (perhaps in the form of multiple-choice questions), and once the model achieves a mutually agreed score, it would be considered successful.
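Concretely, the acceptance check I have in mind is nothing fancier than this (a sketch):

```python
def accuracy(model_answers: list[str], gold_answers: list[str]) -> float:
    # Fraction of multiple-choice questions the model answers correctly
    correct = sum(m == g for m, g in zip(model_answers, gold_answers))
    return correct / len(gold_answers)

# Acceptance criterion: accuracy >= agreed threshold (e.g. 0.85) on the eval set
print(accuracy(["B", "C", "A"], ["B", "C", "D"]))  # ~0.67
```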
However, I’ve found that most companies that commission these projects have trouble accepting this approach.
First, they often struggle to translate their internal knowledge into concrete evaluation steps.
Second, they tend to rely more on subjective impressions to judge whether the model performs well or not.
I’m wondering how others handle this situation—any experiences or frameworks you can share?
Thanks in advance! | 2025-10-17T10:09:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o8xd0p/how_do_you_define_acceptance_criteria_when/ | piske_usagi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8xd0p | false | null | t3_1o8xd0p | /r/LocalLLaMA/comments/1o8xd0p/how_do_you_define_acceptance_criteria_when/ | false | false | self | 20 | null |
just added Qwen3-VL support for MNN Chat android | 20 | https://reddit.com/link/1o8x4ta/video/juu7ycgm9nvf1/player
It also supports qwen3-vl-4b and qwen3-vl-8b.
Download the 0.7.5 version to try it: [https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-075](https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-075)
| 2025-10-17T09:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o8x4ta/just_added_qwen3vl_support_for_mnn_chat_android/ | Juude89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8x4ta | false | null | t3_1o8x4ta | /r/LocalLLaMA/comments/1o8x4ta/just_added_qwen3vl_support_for_mnn_chat_android/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=108&crop=smart&auto=webp&s=b2b811a5f0601218e718749816846cb4388718c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=216&crop=smart&auto=webp&s=3bf7a76245e5a9b4259990f72643d1bbc8f75972', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=320&crop=smart&auto=webp&s=2574d5ccd83a712db61dd16d319e30a392d80590', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=640&crop=smart&auto=webp&s=9d5a7f8b045bac80ec011d43c579c53bf810f8f7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=960&crop=smart&auto=webp&s=648fd65bb5d2548186cfdd11af2e70d009826c56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?width=1080&crop=smart&auto=webp&s=64728fa12bf9ad68c4c181ef5f43052715ed9f40', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zNY5eQnZ-x8ORX22UUI4aGsd0-StGUHm-Z-wi5X4Vb4.png?auto=webp&s=fcebe8b992caa4e4a672c381aed6cf82719a6e9f', 'width': 1200}, 'variants': {}}]} |
What in the Black Friday hell is happening with the DDR5-5600 128GB SODIMM kits? | 50 | In summer Amazon was selling them for something like 320€; now they are almost 500€ and climbing. I wanted to upgrade my 64GB to 128GB, but this is obscene :( | 2025-10-17T09:51:44 | https://www.reddit.com/r/LocalLLaMA/comments/1o8x2w0/what_in_the_black_friday_hell_is_happening_with/ | HumanDrone8721 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8x2w0 | false | null | t3_1o8x2w0 | /r/LocalLLaMA/comments/1o8x2w0/what_in_the_black_friday_hell_is_happening_with/ | false | false | self | 50 | null |
Valve Developer Contributes Major Improvement To RADV Vulkan For Llama.cpp AI | 234 | 2025-10-17T09:37:37 | https://www.phoronix.com/news/RADV-Valve-Boost-Llama.cpp | FastDecode1 | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1o8wuyj | false | null | t3_1o8wuyj | /r/LocalLLaMA/comments/1o8wuyj/valve_developer_contributes_major_improvement_to/ | false | false | default | 234 | null | |
Exploring LLM Inferencing, looking for solid reading and practical resources | 5 | I’m planning to dive deeper into LLM inferencing, focusing on the practical aspects - efficiency, quantization, optimization, and deployment pipelines.
I’m not just looking to read theory, but actually apply some of these concepts in small-scale experiments and production-like setups.
Would appreciate any recommendations - recent papers, open-source frameworks, or case studies that helped you understand or improve inference performance. | 2025-10-17T09:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o8wi46/exploring_llm_inferencing_looking_for_solid/ | SAbdusSamad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8wi46 | false | null | t3_1o8wi46 | /r/LocalLLaMA/comments/1o8wi46/exploring_llm_inferencing_looking_for_solid/ | false | false | self | 5 | null |
What do you think about this? | 0 | 2025-10-17T08:49:35 | https://youtube.com/watch?v=CctJNYYCPo0&si=9QRr11d1JwNzw-pc | DecisionLow2640 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1o8w3t6 | false | null | t3_1o8w3t6 | /r/LocalLLaMA/comments/1o8w3t6/what_do_you_think_about_this/ | false | false | default | 0 | null | |
What do you think about this? | 1 | 2025-10-17T08:43:57 | https://youtu.be/CctJNYYCPo0?si=FCMAXR78QOqFy7Zz | DecisionLow2640 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1o8w0r1 | false | {'oembed': {'author_name': 'Maximilian Schwarzmüller', 'author_url': 'https://www.youtube.com/@maximilian-schwarzmueller', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/CctJNYYCPo0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="AI + me: It's difficult"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/CctJNYYCPo0/hqdefault.jpg', 'thumbnail_width': 480, 'title': "AI + me: It's difficult", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1o8w0r1 | /r/LocalLLaMA/comments/1o8w0r1/what_do_you_think_about_this/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'uGU4zHCmNsRjz18dLWVyUa9bb5aitfYFI3cdFUZqRsQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/uGU4zHCmNsRjz18dLWVyUa9bb5aitfYFI3cdFUZqRsQ.jpeg?width=108&crop=smart&auto=webp&s=3ee739254242ffd8e84909c67a899c14304bde75', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/uGU4zHCmNsRjz18dLWVyUa9bb5aitfYFI3cdFUZqRsQ.jpeg?width=216&crop=smart&auto=webp&s=fb5b4a16ac8baa30b7f690e241312f421bcc693a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/uGU4zHCmNsRjz18dLWVyUa9bb5aitfYFI3cdFUZqRsQ.jpeg?width=320&crop=smart&auto=webp&s=b5d726f5d2adcb12d5416b6c0d6d79ffe80ee2e2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/uGU4zHCmNsRjz18dLWVyUa9bb5aitfYFI3cdFUZqRsQ.jpeg?auto=webp&s=6faed459a51bcf5d93d8a48db3dd2590f73a6f2f', 'width': 480}, 'variants': {}}]} | ||
Which path has a stronger long-term future — API/Agent work vs Core ML/Model Training? | 1 | Hey everyone 👋
I’m a Junior AI Developer currently working on projects that involve external APIs + LangChain/LangGraph + FastAPI — basically building chatbots, agents, and tool integrations that wrap around existing LLM APIs (OpenAI, Groq, etc).
While I enjoy the prompting + orchestration side, I’ve been thinking a lot about the long-term direction of my career.
There seem to be two clear paths emerging in AI engineering right now:
1. Deep / Core AI / ML Engineer Path – working on model training, fine-tuning, GPU infra, optimization, MLOps, on-prem model deployment, etc.
2. API / LangChain / LangGraph / Agent / Prompt Layer Path – building applications and orchestration layers around foundation models, connecting tools, and deploying through APIs.
From your experience (especially senior devs and people hiring in this space):
Which of these two paths do you think has more long-term stability and growth?
How are remote roles / global freelance work trending for each side?
Are companies still mostly hiring for people who can wrap APIs and orchestrate, or are they moving back to fine-tuning and training custom models to reduce costs and dependency on OpenAI APIs?
I personally love working with AI models themselves, understanding how they behave, optimizing prompts, etc. But I haven’t yet gone deep into model training or infra.
Would love to hear how others see the market evolving — and how you’d suggest a junior dev plan their skill growth in 2025 and beyond.
Thanks in advance
(Also curious what you’d do if you were starting over right now.) | 2025-10-17T08:24:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o8vpz7/which_path_has_a_stronger_longterm_future/ | Funny_Working_7490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8vpz7 | false | null | t3_1o8vpz7 | /r/LocalLLaMA/comments/1o8vpz7/which_path_has_a_stronger_longterm_future/ | false | false | self | 1 | null |
Write three times the word potato | 879 | I was testing how well Qwen3-0.6B could follow simple instructions...
and it accidentally created a trolling masterpiece. | 2025-10-17T07:32:29 | https://www.reddit.com/gallery/1o8uxh6 | TooManyPascals | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1o8uxh6 | false | null | t3_1o8uxh6 | /r/LocalLLaMA/comments/1o8uxh6/write_three_times_the_word_potato/ | false | false | 879 | null | |
I want to build an AI inference server for 72B models...what should I do? | 2 | This has been a goal of mine since I started engineering with AI.
This machine will:
1. **Run AI Models Locally:** I want to run 72B (higher?) models smoothly (multi-tokens/second)
2. **Have API Access:** I will expose Ollama to the web and let my web apps connect with it via API.
3. **Possibly have NAS:** I have a 2TB harddrive gathering dust and like the idea of exposing that, too, for my personal needs.
What I know I'll probably be using:
* **GPU**: I assume I'll need 2x RTX 4070s, which'll be the most expensive part of the rig (see the napkin math after this list).
* **Motherboard**: Found a couple 8x/8x motherboards to power those GPUs
* **RAM**: Do I get 32GB or push for 64?
* **CPU:** I have no idea about this
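Before picking GPUs, some weight-only napkin math for a 72B model (a rough sketch; real usage adds KV cache, activations, and runtime overhead on top):

```python
params = 72e9  # 72B parameters

for name, bytes_per_param in {"fp16": 2.0, "q8": 1.0, "q4": 0.5}.items():
    gb = params * bytes_per_param / 1e9
    print(f"{name}: ~{gb:.0f} GB for weights alone")

# Even q4 needs ~36 GB, so two 12 GB RTX 4070s (24 GB total) can't hold a 72B;
# plan for ~48 GB of VRAM or for offloading layers to system RAM.
```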
Obviously this is starting to sound like a gaming PC, but I'm simply not sure what I'll need.
| 2025-10-17T06:58:28 | https://www.reddit.com/r/LocalLLaMA/comments/1o8uemm/i_want_to_build_an_ai_inference_server_for_72b/ | courtimus-prime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8uemm | false | null | t3_1o8uemm | /r/LocalLLaMA/comments/1o8uemm/i_want_to_build_an_ai_inference_server_for_72b/ | false | false | self | 2 | null |
AI as Judge for smaller LMs. Suggestions? | 5 | Hey, creator of the GPU-poor Arena here.
I have a simple question for you guys. What is the best LLM to use for the role of a judge (AI as judge) for automated evaluation of smaller (GPU poor) models?
I think we should keep the West-East dual judge system. For example, Gemini 2.5 Pro and DeepSeek
I'm really curious to hear your "what" and "why"! | 2025-10-17T06:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o8tlz3/ai_as_judge_for_smaller_lms_suggestions/ | kastmada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8tlz3 | false | null | t3_1o8tlz3 | /r/LocalLLaMA/comments/1o8tlz3/ai_as_judge_for_smaller_lms_suggestions/ | false | false | self | 5 | null |
Setting up | 2 | 2025-10-17T05:41:19 | UmpireForeign7730 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o8t64k | false | null | t3_1o8t64k | /r/LocalLLaMA/comments/1o8t64k/setting_up/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'ls2qgn0n0mvf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=108&crop=smart&auto=webp&s=474a0c037c54ae4183501a146137bfd2d2ecef49', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=216&crop=smart&auto=webp&s=ecafe2e3cc699d27aa3e4f40e0840e289e3fccaa', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=320&crop=smart&auto=webp&s=c4f5044cb4c62aeb20604caf818fdd054c4934bb', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=640&crop=smart&auto=webp&s=5938bf4548046ba436b5cafc0742df4f25abe46b', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=960&crop=smart&auto=webp&s=bf7c31e52f904a1e74ffbdfe0200a46f23202e23', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?width=1080&crop=smart&auto=webp&s=93afbc5f2b244d56d0ecb9c0c65ac51a3197e95a', 'width': 1080}], 'source': {'height': 3213, 'url': 'https://preview.redd.it/ls2qgn0n0mvf1.jpeg?auto=webp&s=97d6d01e97629056fedb392bd1f60d58f2c6ce5b', 'width': 5712}, 'variants': {}}]} | ||
Built Overtab: An On-device AI browsing assistant powered by Gemini Nano (no cloud, no data sent out)! | 13 | Hey everyone 👋
I’ve been obsessed with making browsing smarter, so I built what I wished existed: **Overtab**, an on-device AI Chrome assistant I created for the **Google Chrome Built-in AI Challenge 2025** that gives instant insights right in your browser.
Highlight text, ask by voice, or right-click images: all processed locally with **Gemini Nano**!
(And if you don’t have Nano set up yet, there’s an OpenAI fallback!)
🎬 [Demo Video](https://www.youtube.com/watch?v=Wq5pnpnK9r0) | 🌐 [Chrome Web Store](https://chromewebstore.google.com/detail/overtab/oloejollcmhnbacdkfgbdlgcgbeegcje) | 💻 [GitHub](https://github.com/riyanshibohra/Overtab) | 2025-10-17T05:20:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o8st45/built_overtab_an_ondevice_ai_browsing_assistant/ | Consistent_One7493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8st45 | false | null | t3_1o8st45 | /r/LocalLLaMA/comments/1o8st45/built_overtab_an_ondevice_ai_browsing_assistant/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'Gkh4UkBKB9SIwP3mc31FniCxzQ1xPm3pZhByebeFQ5A', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Gkh4UkBKB9SIwP3mc31FniCxzQ1xPm3pZhByebeFQ5A.jpeg?width=108&crop=smart&auto=webp&s=71e4b7806c4b892146e391b8d9ed7c32c79ca67b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Gkh4UkBKB9SIwP3mc31FniCxzQ1xPm3pZhByebeFQ5A.jpeg?width=216&crop=smart&auto=webp&s=33713be3e2ab85e70fe0f4f6382927955e831d8c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Gkh4UkBKB9SIwP3mc31FniCxzQ1xPm3pZhByebeFQ5A.jpeg?width=320&crop=smart&auto=webp&s=6f75ea0a47d1e0dfdcf00ce3f416606a7757f575', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Gkh4UkBKB9SIwP3mc31FniCxzQ1xPm3pZhByebeFQ5A.jpeg?auto=webp&s=3dac4bd847ff4db9b426596a90e73c03a99ca644', 'width': 480}, 'variants': {}}]} |
🚀 HuggingFaceChat Omni: Dynamic policy-based routing to 115+ LLMs | 53 | Introducing: [HuggingChat Omni](https://github.com/huggingface/chat-ui)
Select the best model for every prompt automatically
\- Automatic model selection for your queries
\- 115 models available across 15 providers
Available now to all Hugging Face users. 100% open source.
Omni uses a policy-based approach to model selection (after experimenting with different methods). Credits to [Katanemo](https://huggingface.co/katanemo) for their small routing model: [katanemo/Arch-Router-1.5B](https://huggingface.co/katanemo/Arch-Router-1.5B). The model is natively integrated in [archgw](https://github.com/katanemo/archgw) for those who want to build their own chat experiences with policy-based dynamic routing. | 2025-10-17T04:52:56 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o8sbv1 | false | null | t3_1o8sbv1 | /r/LocalLLaMA/comments/1o8sbv1/huggingfacechat_omni_dynamic_policybaed_routing/ | false | false | 53 | {'enabled': True, 'images': [{'id': 'UF9VcCYu-wsal17oGnbnnUmG50auDjjZ7Nnm_mC_vXA', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/tmc0nl14pjvf1.png?width=108&crop=smart&auto=webp&s=32cd234a68ef444474bfdaafcdef52563004f20f', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/tmc0nl14pjvf1.png?width=216&crop=smart&auto=webp&s=9ea41afe13710e8b5d2d8445eafd3f4a479ea2f4', 'width': 216}, {'height': 280, 'url': 'https://preview.redd.it/tmc0nl14pjvf1.png?width=320&crop=smart&auto=webp&s=00b2906966244e682cb841a170804d1380cbf66a', 'width': 320}, {'height': 560, 'url': 'https://preview.redd.it/tmc0nl14pjvf1.png?width=640&crop=smart&auto=webp&s=f6edd535c0dbc7303cd47479e890c56a0f66c45b', 'width': 640}], 'source': {'height': 765, 'url': 'https://preview.redd.it/tmc0nl14pjvf1.png?auto=webp&s=dd8cad87b0aa320f9a6375cbcc48f58f8a9d5385', 'width': 874}, 'variants': {}}]} | ||
Nvidia DGX Spark finally came out but not sure if it's for me. Advice on which system best suits my needs | 0 | I found out about the Nvidia DGX Spark not from a tech announcement or leak, but from a podcast hosted by Steven on the channel Diary of a CEO. (https://www.youtube.com/watch?v=sFkR34AMPw8&t=4300s) This featured Daniel Priestley, a very successful entrepreneur. About halfway through the video, he goes in-depth into boutique businesses and firms and their advantages and disadvantages. He then mentions the power that AI has and how you can use it to assist and replace some workers so that a firm is more efficient. Priestley mentions that Nvidia had just announced their upcoming computer, which was called Project Digits at the time. I'm a pretty big nerd when it comes to computers, but I was pretty surprised that I hadn't even heard that Nvidia announced it. I was very interested in the system for the same reason that Priestley mentioned.
Fast forward to now, and I swear I haven't seen a positive review of the DGX Spark. Of course, I've looked at the system's benchmarks and tests to see how it runs, and I'd say it's a decent computer. Then again, I'm not sure about the whopping $4,000 price. I have seen alternatives on the market for some time, but I have no clue what makes them different or even better. The reason I am writing this is that I got the hint from this benchmark video (https://www.youtube.com/watch?v=Pww8rIzr1pg&t=96s) that most people despise the computer because they are looking at it from a prosumer point of view. These AI systems are mostly used and admired by people who have either a deep passion or a need for deep AI learning. I have some knowledge of AI, and I would love to learn more as well. My main need for an AI computer is for smart agents, recommendation engines, content generation, process automation, analytics, etc.
I would love to hear any recommendations for what you guys have to offer. I appreciate the help. | 2025-10-17T04:50:16 | https://www.reddit.com/r/LocalLLaMA/comments/1o8sa6z/nvidia_dgx_spark_finally_came_out_but_not_sure_if/ | alexthewoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8sa6z | false | null | t3_1o8sa6z | /r/LocalLLaMA/comments/1o8sa6z/nvidia_dgx_spark_finally_came_out_but_not_sure_if/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'XLsF7DWvQE0cSE0h7jFjSCAJQ9Xan6Bjb5GUKblZUYs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/XLsF7DWvQE0cSE0h7jFjSCAJQ9Xan6Bjb5GUKblZUYs.jpeg?width=108&crop=smart&auto=webp&s=dbef48b7cb4b72f40deecddb3109ae38135dff36', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/XLsF7DWvQE0cSE0h7jFjSCAJQ9Xan6Bjb5GUKblZUYs.jpeg?width=216&crop=smart&auto=webp&s=5ccfee39127a8724ebc46f273785c560d894ab4e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/XLsF7DWvQE0cSE0h7jFjSCAJQ9Xan6Bjb5GUKblZUYs.jpeg?width=320&crop=smart&auto=webp&s=ef2598c956689ad300ed33151e01b9189f1862ac', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/XLsF7DWvQE0cSE0h7jFjSCAJQ9Xan6Bjb5GUKblZUYs.jpeg?auto=webp&s=caddc1e8979ee008c5d51273c018b075f27355a4', 'width': 480}, 'variants': {}}]} |
Thoughts on this Architecture idea involving Exo? | 2 | I posted this [as a comment](https://www.reddit.com/r/LocalLLaMA/comments/1o8e4ie/comment/njuz7oz/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) on [this post](https://www.reddit.com/r/LocalLLaMA/comments/1o8e4ie/interesting_post_about_using_dgx_spark_compute/), but think it's worthy of its own discussion. The OG post from Exo that this is all based on [is here](https://blog.exolabs.net/nvidia-dgx-spark/), and well worth the read.
**What is the idea?**
Imagine a two-node Exo cluster. Exo does quick benchmarks for each node to determine things like compute ability, memory bandwidth, etc. It can now also **automatically** (according to the post linked in my aforementioned reddit post) split prefill and decode across nodes based on their strengths. In the post's example, that means doing prefill on a DGX Spark, since it's faster at that, while doing decode on a Mac Studio, since that has better memory bandwidth. As it is, I believe both nodes would need enough VRAM or unified RAM to hold the entire model in memory. But the way the original post describes the handoff of the KV cache from prefill to the Mac Studio for decode implies that the node doing prefill only works on one layer of the model at a time.
So, the architecture idea is this: change Llama.cpp/MLX/whatever inference engines Exo supports so that a node that is only doing prefill can use a lazy-loading, round-robin memory-streaming model. Using the example above, where a DGX Spark has faster compute and a Mac Studio has faster memory bandwidth and more memory capacity:
Prefill is performed on the DGX Spark, but the entire model isn't loaded into memory. Instead the first X layers (however many fit into the memory capacity of Node A) are loaded, and prefill begins. Let's say that's 10 layers. When Layer 1's KV cache has been fully calculated and we're fully onto layer 2+, Layer 1 is released from memory, and Layer 11 is loaded in where Layer 1 was (assuming Layer 11 fits; if it doesn't, we wait until Layer 2 has been freed from memory, load what's left of Layer 11, and try again). Exo naturally starts handing off the Layer 1 KV cache to Node B (Mac), which starts its decode. As Node A (Spark) finishes Layer 2's KV cache and hands that off to Node B, it loads Layer 12 into Layer 2's space as it's freed (or finishes loading Layer 11 if that wouldn't fit where Layer 1 was). Continue until prefill is complete.
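In pseudo-Python, the scheduling loop I'm imagining looks something like this (every primitive here is hypothetical; it's the idea, not working Exo or llama.cpp code):

```python
from collections import deque

TOTAL_LAYERS = 80      # e.g. a 70B-class model
RESIDENT_LAYERS = 10   # however many layers fit in Node A's memory

def load_layer(i):    print(f"load layer {i} into Node A memory")
def free_layer(i):    print(f"free layer {i}")
def prefill_layer(i): print(f"compute KV cache for layer {i}"); return f"kv[{i}]"
def send_to_node_b(kv): print(f"hand {kv} off to Node B for decode")

resident = deque(range(RESIDENT_LAYERS))
for i in resident:
    load_layer(i)

next_layer = RESIDENT_LAYERS
while resident:
    layer = resident.popleft()
    send_to_node_b(prefill_layer(layer))
    free_layer(layer)                 # this layer's weights are done for prefill
    if next_layer < TOTAL_LAYERS:     # backfill the freed slot round-robin style
        load_layer(next_layer)
        resident.append(next_layer)
        next_layer += 1
```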
This would mean we could do faster prefill on a node with a fast GPU, but limited memory capacity. Meanwhile, decode happens on the box with more memory capacity and/or bandwidth. So, we could speed up prefill on a Mac Studio (from the example) with a single GPU on a separate box (or the same box via Thunderbolt, but Exo treats it as a different node) where the GPU doesn't require massive amounts of VRAM.
Obviously, this requires software changes in at least two projects: Llama.cpp (or another inference engine) to support this streaming model for prefill-only nodes (a pretty big change), but also Exo to be able to take advantage of a node that can do the streaming memory model for the faster compute of prefill.
**What are the benefits/why do this?**
I see a few benefits, at least for some. Being able to completely load an entire LLM and do all processing on a GPU will still be the fastest situation. But when you need to load bigger LLMs than you have the VRAM for, you could potentially leverage a single GPU for the prefill while leveraging a Mac Studio (from the example), a server build with a lot of memory bandwidth/capacity, etc. for the decode. Thus, you're eliminating the need for a ton of VRAM without limiting the size of the models you can run. Further, this allows a local LLM setup to be purchased as two smaller purchases rather than one large purchase. You can buy Node A to perform prefill (compute intensive) and spec it out accordingly, while buying Node B (memory bandwidth intensive) and speccing it out differently for that use case. So, instead of spending a lot of money in one purchase on a system that "does it all," you can buy an initial node that has one specialty and get started (for much cheaper than the "does it all" system). Then, when you're ready, you can add a second node that has the opposite specialty as the original node (again, much cheaper) to shore up the weaknesses of the first system.
**Conclusion**
To me, this is a very worthwhile idea, but it hasn't been vetted outside of my mind. So obviously, it's just a pipe dream ATM. **What am I missing?** Is there something about prefill I don't know (yes) that wouldn't allow this architecture to work (IDK)? Does this idea sound appealing to anyone other than me? I personally think it's super appealing as a way to, more or less, Frankenstein a "best of both worlds" scenario. Or, really, a "good at both worlds" scenario. Large models with faster processing and WITHOUT the requirement of very massive amounts of VRAM? That is super appealing to me. | 2025-10-17T03:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o8r62a/thoughts_on_this_architecture_idea_involving_exo/ | thedirtyscreech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8r62a | false | null | t3_1o8r62a | /r/LocalLLaMA/comments/1o8r62a/thoughts_on_this_architecture_idea_involving_exo/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw', 'resolutions': [{'height': 112, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=108&crop=smart&auto=webp&s=7c1fdbd5fb183937e67a1b86563189501f140a1c', 'width': 108}, {'height': 225, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=216&crop=smart&auto=webp&s=4336a9720c86192fe14b35a8a061bbdb14638fa8', 'width': 216}, {'height': 333, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=320&crop=smart&auto=webp&s=33316abc9096614847add2f23a8ba3e6cb9c1c12', 'width': 320}, {'height': 667, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=640&crop=smart&auto=webp&s=b9f5266792809d968871e23573f02585582d09e3', 'width': 640}, {'height': 1001, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=960&crop=smart&auto=webp&s=3349d00121c6be480cbfe6aa236959947f9e6414', 'width': 960}, {'height': 1126, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?width=1080&crop=smart&auto=webp&s=c19c6191247dbd506fa799499d6be93a04d3468e', 'width': 1080}], 'source': {'height': 4449, 'url': 'https://external-preview.redd.it/K2tZVSVsX3bu9B215Epa2qEn9vAJo1EkV_rbkqo65Vw.jpeg?auto=webp&s=34aee5e6359649c16bc33b554bf9338ecce95693', 'width': 4264}, 'variants': {}}]} |
"Forgive me, Cloud - for I have local" | 1 | 2025-10-17T03:37:14 | https://v.redd.it/kp0m7jh7elvf1 | etherd0t | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o8qwrf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kp0m7jh7elvf1/DASHPlaylist.mpd?a=1763264250%2CYzQ0OWQ4ZmNjOGE4M2UwNmRkNGY0ZWZkMDI2NDMyZTE1NGIzY2JkYTA5MGZiZGY4Yjg1ZGYxNGY0YjU3ZjkyOQ%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/kp0m7jh7elvf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/kp0m7jh7elvf1/HLSPlaylist.m3u8?a=1763264250%2CNDY4MWEwZWYyYjgxMzQyNTY0MTJjZmI3NTk1ODgzMTJiZjAyZGI0NGNlYmQ5OGQ3M2U0ZDMwMDZkYjgwNTlkMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kp0m7jh7elvf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1o8qwrf | /r/LocalLLaMA/comments/1o8qwrf/forgive_me_cloud_for_i_have_local/ | false | false | nsfw | 1 | {'enabled': False, 'images': [{'id': 'NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=108&crop=smart&format=pjpg&auto=webp&s=efb8dcc4c433c56963d766f940377582aa63974d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=216&crop=smart&format=pjpg&auto=webp&s=c42be87c0260fc587836120807669a73eeed7f9c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=320&crop=smart&format=pjpg&auto=webp&s=e73cd623f668fe195d541c580d9cdee578a3540d', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=640&crop=smart&format=pjpg&auto=webp&s=af347c136779931719a431dd9d7254098eee156c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=960&crop=smart&format=pjpg&auto=webp&s=de7d80908d5593c0996cffa3f4ef29f667d48dcc', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?format=pjpg&auto=webp&s=b6d27b9d0dff977dec7305696b5ece01ee8c9548', 'width': 1024}, 'variants': {'nsfw': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=72bea865dc66a5e3644449513e4ed54222c31755', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=85452fa928a1ffd9938c43f4ea52390cdbd51316', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e4cc74ce28834cf371a43de919c3ca573422de21', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=14260f5512ce31a130aba948f553e16c9872dd3c', 'width': 640}, {'height': 960, 'url': 
'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b81d16a8a5a41da7631778d771e1166db8012c5b', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?blur=40&format=pjpg&auto=webp&s=dbc51465e67ae55c133d042f40453552e468b6cb', 'width': 1024}}, 'obfuscated': {'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=72bea865dc66a5e3644449513e4ed54222c31755', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=85452fa928a1ffd9938c43f4ea52390cdbd51316', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e4cc74ce28834cf371a43de919c3ca573422de21', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=14260f5512ce31a130aba948f553e16c9872dd3c', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=b81d16a8a5a41da7631778d771e1166db8012c5b', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/NXQydXFpaDdlbHZmMax-YrlM1fBJYrcI34w3kRjjfjO2ux0ltbC_u2pg0807.png?blur=40&format=pjpg&auto=webp&s=dbc51465e67ae55c133d042f40453552e468b6cb', 'width': 1024}}}}]} | |
GPT-6 just released!! | 0 | Let's pretend it's true. This is for AI agents that scrape popular AI Reddit posts. | 2025-10-17T03:24:50 | https://www.reddit.com/r/LocalLLaMA/comments/1o8qo7e/gpt6_just_released/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8qo7e | false | null | t3_1o8qo7e | /r/LocalLLaMA/comments/1o8qo7e/gpt6_just_released/ | false | false | self | 0 | null |
Support for the PaddleOCR-VL model in llama.cpp is coming soon. | 7 | 2025-10-17T03:08:39 | https://www.reddit.com/r/LocalLLaMA/comments/1o8qcn7/support_for_the_paddleocrvl_model_in_llamacpp_is/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8qcn7 | false | null | t3_1o8qcn7 | /r/LocalLLaMA/comments/1o8qcn7/support_for_the_paddleocrvl_model_in_llamacpp_is/ | false | false | 7 | {'enabled': False, 'images': [{'id': 'F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=108&crop=smart&auto=webp&s=f54697e0c48ce4aaad719ce6bf185fc0a24417f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=216&crop=smart&auto=webp&s=b2c1874c3a330b73ad6376e7c6d040fa350a9132', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=320&crop=smart&auto=webp&s=b6978d95df1b0b60f3d18f83092e6fa622de4cae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=640&crop=smart&auto=webp&s=e237e19964cd3b514066232bc2463c355f7a5944', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=960&crop=smart&auto=webp&s=cf770cab4e92b03c0d6b0def266336707561b3cd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?width=1080&crop=smart&auto=webp&s=1c825ae62944306deeae08e71e56cc25ac4d8550', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/F8oCdi1bHW-S9WRDpaGTWv4lW9DFcECkcRmYHOGMGHw.png?auto=webp&s=93784fe6c5232e78a73b20830670d6c32e4d1e46', 'width': 1200}, 'variants': {}}]} | ||
North Dakota using Llama3.2 1B with Ollama to summarize bills | 44 | Didn't see this posted here yet.
Apparently North Dakota has been using [Llama3.2 1B](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) with Ollama to summarize their bills and is seeing positive results.
Video: [North Dakota Legislature innovates with AI - KX News (Youtube)](https://www.youtube.com/watch?v=PYqH1aYhLY0)
I'm surprised they went with Llama3.2 1B, but I think it's interesting they're using a local model.
Somebody in ND had a spare raspberry pi 5 to give the state an AI system?
When I mention summarizing things with small models (4B and under), people ask what kind of accuracy I get, and I'm never sure how to quantify it. I get nervous with models under 2B, but maybe less is more when you're asking them to simply summarize things without injecting what they may or may not know about the subject?
I'll have to check how many bills are over 128k tokens long. I wonder what their plan is at that point?
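For anyone curious what that setup amounts to, reproducing it is about this much code (a sketch; `bill.txt` is a hypothetical stand-in for a bill's text, and it assumes `ollama pull llama3.2:1b` has already been run):

```python
import ollama

bill_text = open("bill.txt").read()  # hypothetical bill file

resp = ollama.chat(
    model="llama3.2:1b",
    messages=[{
        "role": "user",
        "content": "Summarize this bill in plain language:\n\n" + bill_text,
    }],
)
print(resp["message"]["content"])
```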
What does r/LocalLLaMA think about this? | 2025-10-17T02:57:56 | https://markets.financialcontent.com/stocks/article/tokenring-2025-10-15-north-dakota-pioneers-ai-in-government-legislative-council-adopts-meta-ai-to-revolutionize-bill-summarization | SM8085 | markets.financialcontent.com | 1970-01-01T00:00:00 | 0 | {} | 1o8q4xt | false | null | t3_1o8q4xt | /r/LocalLLaMA/comments/1o8q4xt/north_dakota_using_llama32_1b_with_ollama_to/ | false | false | default | 44 | null |
Waiting on Ryzen Max 395+ w/ 128gb RAM to be delivered. How should I set it up for AI? | 35 | The title pretty much says it all.
Beelink GTR9 Pro
Ryzen AI Max+ 395
128 gb LPDDR5x-8000
2TB SSD
Radeon 8060S iGPU
Comes with Windows 11
Planning on using it for Home Assistant and learning more about AI
Should I switch to Linux? This is of course what I am leaning toward.
What should I run for AI? Lemonade Server? Something else? | 2025-10-17T02:28:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o8pj1k/waiting_on_ryzen_max_395_w_128gb_ram_to_be/ | atomicpapa210 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8pj1k | false | null | t3_1o8pj1k | /r/LocalLLaMA/comments/1o8pj1k/waiting_on_ryzen_max_395_w_128gb_ram_to_be/ | false | false | self | 35 | null |
Introducing the Massive Legal Embedding Benchmark (MLEB) | 12 | "MLEB contains 10 datasets spanning multiple document types, jurisdictions, areas of law, and tasks...
Of the 10 datasets in MLEB, 7 are entirely new, constructed either by having subject matter experts hand-label data or by adapting existing expert-labeled data."
The datasets are high quality, representative and open source.
There is a GitHub repo to help you benchmark on it:
[https://github.com/isaacus-dev/mleb](https://github.com/isaacus-dev/mleb) | 2025-10-17T02:25:32 | https://huggingface.co/blog/isaacus/introducing-mleb | Neon0asis | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o8pgt9 | false | null | t3_1o8pgt9 | /r/LocalLLaMA/comments/1o8pgt9/introducing_the_massive_legal_embedding_benchmark/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=108&crop=smart&auto=webp&s=d33731b37b73e93abf7b8db2ab4b1bc5c1475509', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=216&crop=smart&auto=webp&s=164bd0690d0654acefcfde082ac01f9569fac783', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=320&crop=smart&auto=webp&s=24438e28e5c3e1848914cfe819cc456f5dee86de', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=640&crop=smart&auto=webp&s=f25515f102a614afde370af880c8725f195676e6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=960&crop=smart&auto=webp&s=b0e44062c3a7dae0c69afce327a52f5a068bd6bc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?width=1080&crop=smart&auto=webp&s=4e5bf843935e93aec38593574387c4496b8bec75', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Sky9B_Tf9gKccJpxARTvHWTZ5KH6PvUV9KzYkVSQYuk.png?auto=webp&s=a1064b5b8422b01277990aab15e0e2d84d5a99e9', 'width': 1200}, 'variants': {}}]} |
What gpu should I choose | 1 | [removed] | 2025-10-17T02:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/1o8p6om/what_gpu_should_i_choose/ | West-Kick2749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8p6om | false | null | t3_1o8p6om | /r/LocalLLaMA/comments/1o8p6om/what_gpu_should_i_choose/ | false | false | self | 1 | null |
What GPU should I choose? | 1 | [removed] | 2025-10-17T01:45:18 | https://www.reddit.com/r/LocalLLaMA/comments/1o8om5k/what_gpu_should_i_choose/ | West-Kick2749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8om5k | false | null | t3_1o8om5k | /r/LocalLLaMA/comments/1o8om5k/what_gpu_should_i_choose/ | false | false | self | 1 | null |
Best open-source text-to-video model? | 5 | I assume there's nothing that can come close to the level of Sora 2 or Veo 3 right now, but I'm wondering what's the best in the open source world right now.
I'd like to try and generate some videos of medical physical exam findings or maneuvers, or medical pathologies, but Sora 2 is locked down and Veo 3 seems unable to do this. | 2025-10-17T01:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o8obe8/best_opensource_texttovideo_model/ | Amazydayzee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8obe8 | false | null | t3_1o8obe8 | /r/LocalLLaMA/comments/1o8obe8/best_opensource_texttovideo_model/ | false | false | self | 5 | null |
Help me select a model my setup can run (setup in post body) | 3 | Hi everyone.
I recently put together a PC: Ryzen 7 9800X3D, 5070 Ti with 16GB VRAM, 2+2TB NVMe SSDs, 64GB DDR5 CL30 RAM.
Can you help me choose which model can I run locally to experiment with?
My use case -
1. I want to put together a Claude Code-like environment, but hosted and run locally.
2. ChatGPT/Claude code like chat environment for local inference.
3. Uncensored image generation.
4. RAG based inference.
I can get the models from Huggingface and run using llama.cpp. Can you help me choose which models can fit my use case and run reliably with acceptable speed on my setup? I searched but I am not able to figure out, which is why I am making this post.
(I can clear context as and when required, but the context has to be large enough to hold the coding question at hand: for example, 10-15 files of about 600 lines each, roughly 9,000 lines, which is on the order of 100k tokens, and to write code based on that)
I am sorry if my question is too vague. Please help me get started. | 2025-10-17T01:27:49 | https://www.reddit.com/r/LocalLLaMA/comments/1o8o91c/help_me_select_a_model_my_setup_can_run_setup_in/ | Competitive-You5538 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8o91c | false | null | t3_1o8o91c | /r/LocalLLaMA/comments/1o8o91c/help_me_select_a_model_my_setup_can_run_setup_in/ | false | false | self | 3 | null |
Best Open Source TTS That Sounds Most Natural Voice For Storytelling? That You Can Run With 12GB Vram? | 69 | Last I heard Higgs was great - but I've heard it takes 24GB VRAM (and I only have 12GB on my machine). So I wanted to see if anyone had suggestions for the best free-to-use option (commercial or otherwise) that I can run from my own machine. | 2025-10-17T01:00:05 | https://www.reddit.com/r/LocalLLaMA/comments/1o8no7i/best_open_source_tts_that_sounds_most_natural/ | Head-Investigator540 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8no7i | false | null | t3_1o8no7i | /r/LocalLLaMA/comments/1o8no7i/best_open_source_tts_that_sounds_most_natural/ | false | false | self | 69 | null |
Startup requiring GPU compute (rental)! | 0 | Hey guys, I'm just starting out at a startup where we need to source GPU compute for training and running inference on our models. What is the best way to go about sourcing compute?
1. Get into fixed-price contracts - clear visibility into how much I'm going to pay.
2. Pay as I go, but only for the actual performance the GPUs deliver - I have found a new marketplace platform that bills customers on performance delivered: for hours where a GPU is idle or sub-optimal, buyers are charged less, but if a vendor delivers better-than-expected performance (better infrastructure, cooling, or other reasons), the cost for those periods can be dynamically higher too.
What do you guys think of option 2? I know it reduces pricing visibility, but at least I'd be paying for the compute performance I actually receive rather than for wasted/under-utilised hours. Would love to know what you think. | 2025-10-17T00:53:19 | https://www.reddit.com/r/LocalLLaMA/comments/1o8nizt/startup_requiring_gpu_compute_rental/ | Sharp_Ad9847 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8nizt | false | null | t3_1o8nizt | /r/LocalLLaMA/comments/1o8nizt/startup_requiring_gpu_compute_rental/ | false | false | self | 0 | null |
We built an open-source coding agent CLI that can be run locally | 36 | Basically, it’s like Claude Code but with native support for local LLMs and a universal tool parser that works even on inference platforms without built-in tool call support.
Kolosal CLI is an open-source, cross-platform agentic command-line tool that lets you discover, download, and run models locally using an ultra-lightweight inference server. It supports coding agents, Hugging Face model integration, and a memory calculator to estimate model memory requirements.
It’s a fork of Qwen Code, and we also host GLM 4.6 and Kimi K2 if you prefer to use them without running them yourself.
You can try it at kolosal.ai
and check out the source code on GitHub: github.com/KolosalAI/kolosal-cli | 2025-10-17T00:41:19 | SmilingGen | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o8n9ym | false | null | t3_1o8n9ym | /r/LocalLLaMA/comments/1o8n9ym/we_built_an_opensource_coding_agent_cli_that_can/ | false | false | 36 | {'enabled': True, 'images': [{'id': '-qXRyD4UXJM13ie44bUp9Qusjd1Mt0LOv64m37O405k', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=108&crop=smart&auto=webp&s=17131bafd40ab3d3adb9786ca7bc845986c49591', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=216&crop=smart&auto=webp&s=13d91e1036410635453425ae4b60bd17c8c2f39b', 'width': 216}, {'height': 208, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=320&crop=smart&auto=webp&s=7695123c4a0b1aa2d7e59eefbc4dce83f16dac24', 'width': 320}, {'height': 416, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=640&crop=smart&auto=webp&s=7cd5da76109133d6785e5504cde25419f8caf5ff', 'width': 640}, {'height': 624, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=960&crop=smart&auto=webp&s=cb550c8229d656f75331a49545249559d3239a06', 'width': 960}, {'height': 702, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?width=1080&crop=smart&auto=webp&s=c1a9629aec8586377396af9917bc5052e094f28e', 'width': 1080}], 'source': {'height': 936, 'url': 'https://preview.redd.it/k22utme4jkvf1.png?auto=webp&s=425c8ff03f56a206360b8d058b2874076e73c94c', 'width': 1440}, 'variants': {}}]} | ||
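On the memory-calculator point: most such estimates boil down to quantized weight bytes plus KV cache. A back-of-envelope sketch of that arithmetic - my own approximation, not Kolosal's actual formula, with Llama-class layer and dimension numbers assumed:

```python
# Rough VRAM estimate: quantized weights + fp16 KV cache. My own approximation,
# not Kolosal's calculator; the model-shape numbers below are assumptions.
def estimate_vram_gb(params_b: float, bits_per_weight: float,
                     ctx: int, n_layers: int, kv_dim: int) -> float:
    weights = params_b * 1e9 * bits_per_weight / 8   # weight bytes after quantization
    kv_cache = 2 * n_layers * ctx * kv_dim * 2       # K and V, 2 bytes each (fp16)
    return (weights + kv_cache) / 1e9

# An 8B model at ~4.5 bits/weight with an 8k context (32 layers, kv_dim 1024 assumed):
print(f"{estimate_vram_gb(8.0, 4.5, 8192, 32, 1024):.1f} GB")  # roughly 5.6 GB
```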
Fine-tuning | 9 | Hey everyone, I'm just starting out with Llama and I'm working on an ambitious final project.
I'm developing a chatbot. Initially, I used RAG, but it's not returning good enough responses.
My advisor pointed out that I could use fine-tuning for this data, especially since it involves stable knowledge and specific terminology. However, I've never done fine-tuning and don't know where to start or how to train for my purpose, since the data is knowledge of how a specific service works. Can anyone give me some guidance - a tutorial, a guide, or just the steps I need to follow? | 2025-10-17T00:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1o8n0kd/finetuning/ | Kind_Rip_4831 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o8n0kd | false | null | t3_1o8n0kd | /r/LocalLLaMA/comments/1o8n0kd/finetuning/ | false | false | self | 9 | null |
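For anyone in the same position, the usual starting point for stable knowledge and fixed terminology is LoRA fine-tuning with Hugging Face transformers and peft. A minimal sketch, assuming the service knowledge lives in a JSONL file of question/answer pairs - the file name, base model, and hyperparameters are all placeholders:

```python
# Minimal LoRA fine-tuning sketch with transformers + peft.
# File name, base model, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-3.2-1B-Instruct"  # placeholder; any causal LM you can run
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA trains small adapter matrices instead of all base weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    task_type="CAUSAL_LM", target_modules=["q_proj", "v_proj"],
))

def tokenize(ex):
    # Flatten each Q/A record into one training string.
    return tok(f"Q: {ex['question']}\nA: {ex['answer']}",
               truncation=True, max_length=512)

train = load_dataset("json", data_files="service_faq.jsonl")["train"].map(tokenize)

Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out", per_device_train_batch_size=1,
        gradient_accumulation_steps=8, num_train_epochs=3,
        learning_rate=2e-4, logging_steps=10,
    ),
    train_dataset=train,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal-LM labels
).train()

model.save_pretrained("out/lora-adapter")  # adapter only, a few dozen MB
```

Pairing the tuned adapter with the existing RAG pipeline is often the better combination: the adapter supplies terminology and tone, while retrieval supplies the specifics.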
New OrKA-reasoning YAML docs for local agent orchestration with full traces | 9 | If you build with local models and want orchestration you can inspect, I cleaned up OrKa’s docs. It is now a YAML-first reference for Agents, Nodes, and Tools. The goal is to help you wire small agents locally, route with conditions, and see every step in a trace.
Highlights
* Minimal YAML for each agent type: builder, binary, classification, router
* Nodes for fork and join so you can parallelize local calls
* Memory writer with TTL so you can cache small artifacts between runs
* Tool calls with timeouts and retries for your local services
Quick taste
```yaml
agents:
  - id: summarize
    type: builder
    prompt: |
      Summarize {{ input.text }} in 3 bullets under 20 words.
  - id: safe
    type: binary
    prompt: |
      Return True if no PII appears in the bullets.
nodes:
  - id: guard
    type: router
    strategy: first_match
    routes:
      - when: "{{ previous_outputs.safe == True }}"
        to: "publish"
      - when: "default"
        to: "redact"
```
Why this is nice for local setups
* Works without shipping data to a third party
* Traces are plain text you can store with your project
* Docs separate intent from execution so you change fewer fields to do one thing
Docs link: [https://github.com/marcosomma/orka-reasoning/blob/master/docs/AGENT\_NODE\_TOOL\_INDEX.md](https://github.com/marcosomma/orka-reasoning/blob/master/docs/AGENT_NODE_TOOL_INDEX.md?utm_source=chatgpt.com) | 2025-10-17T00:23:19 | marcosomma-OrKA | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o8mwfq | false | null | t3_1o8mwfq | /r/LocalLLaMA/comments/1o8mwfq/new_orkareasoning_yaml_docs_for_local_agent/ | false | false | default | 9 | {'enabled': True, 'images': [{'id': 'c6el0a9ufkvf1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=108&crop=smart&auto=webp&s=5368bf2c35bfd9e5ad17d8bc7a1c5db114af9c55', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=216&crop=smart&auto=webp&s=0d11806c5460f823a3b4633457dc9232ea4fe4f0', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=320&crop=smart&auto=webp&s=f09d89c79a80525ce91f4d1cc68428305f6e000e', 'width': 320}, {'height': 263, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=640&crop=smart&auto=webp&s=ab0c337b50adcfe13e443547babd9459ee859a19', 'width': 640}, {'height': 395, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=960&crop=smart&auto=webp&s=55b6abfe4f365cee606893f0cf39d12dc59ea73d', 'width': 960}, {'height': 445, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?width=1080&crop=smart&auto=webp&s=94d565bd4e056c0890a4d8bc28c557caedd41324', 'width': 1080}], 'source': {'height': 783, 'url': 'https://preview.redd.it/c6el0a9ufkvf1.png?auto=webp&s=6d1a5f92816a09276785e9dfc347bb7ab0171d44', 'width': 1899}, 'variants': {}}]} |