# r/LocalLLaMA post digest (2026-03-01 to 2026-03-02)

The entries below come from a scraped post dataset; each keeps the title, score, author, date, body (truncated bodies end with "..."), and link.
**Licensing restrictions for Tencent models** (score 0 · u/4baobao · 2026-03-02)
I don't know if anyone has read their terms, but they basically don't allow people from the EU, UK, or South Korea to use their open-source models. Any idea what's up with this limitation? It's not like they can enforce it.
https://www.reddit.com/r/LocalLLaMA/comments/1riehh9/licensing_restrictions_for_tencent_models/
**Which IDE to code with Qwen 3.5?** (score 0 · u/andy_potato · 2026-03-02)
I'm using Antigravity for coding, with GPT-OSS-120B as the coding model. However, AG currently does not support any other local models. What IDE would you recommend for plugging in other coding models, like Qwen 3.5?
https://www.reddit.com/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/
**Stop letting your GPU sit idle 😀 Make it answer your spam calls (100% Local Voice Agent)** (score 11 · u/Small-Matter25 · 2026-03-02)
Hey everyone, I've been working on an open-source project (AVA) to build voice agents for Asterisk. The biggest headache has always been the latency when using cloud APIs: it just feels unnatural, and the API costs keep going up. We just pushed an update that moves the whole stack (speech-to-text, LLM, and TT...
https://www.reddit.com/r/LocalLLaMA/comments/1rie2ww/stop_letting_your_gpu_sit_idle_make_it_answer/
**Qwen3.5-27B IQ3 vs Qwen-3.5 35B-A3M Q4_K_M** (score 10 · u/Tracing1701 · 2026-03-02)
Which one is smarter? Obviously Qwen-3.5 35B-A3M Q4_K_M is quicker, and if you have the GPU memory the 27B can be used above Q3; but if you don't, which is smarter?
https://www.reddit.com/r/LocalLLaMA/comments/1ridwl5/qwen3527b_iq3_vs_qwen35_35ba3m_q4_k_m/
**Notice Qwen 3.5 reprocessing the prompt every time, taking long to answer for long prompts? That's actually because of its architecture.** (score 27 · u/dampflokfreund · 2026-03-01)
Hello, as some of you know, llama.cpp recently added prompt caching for vision models, so as long as you stay within your context window, prompt caching works like with any other model. But as soon as you exceed your context size, good practice is to keep the chat rolling by truncating the top of the prompt. ...
https://www.reddit.com/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/
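A note on why truncating the top kills the cache: llama.cpp-style prompt caching reuses only the longest common token prefix between the cached and the new prompt, so dropping tokens from the top shifts every position and leaves nothing reusable. An illustrative stdlib sketch (not llama.cpp's actual code):

```python
def reusable_prefix(cached, new):
    """Number of leading tokens shared by the cached and new prompt."""
    n = 0
    for a, b in zip(cached, new):
        if a != b:
            break
        n += 1
    return n

cached = list(range(100))                  # previously processed prompt tokens
appended = cached + [100, 101]             # appending to the end: full cache reuse
truncated_top = cached[10:] + [100, 101]   # truncating the top: prefix no longer matches

print(reusable_prefix(cached, appended))       # 100 -> only the new tokens get prefilled
print(reusable_prefix(cached, truncated_top))  # 0   -> the entire prompt is reprocessed
```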
**I benchmarked 8 local LLMs for phone-to-home chat: the 4B model won. Here's why the larger ones lost** (score 0 · u/Vivid-Gur2349 · 2026-03-01)
**Which small local model is best for daily phone use when inference runs on a home computer?**

**The run**
- 8 models × 8 datasets × 10 samples = 640 evaluations
- Home hardware: Mac mini M4 Pro, 24GB
- Fitness formula: 0.50 × chat_ux + 0.30 × speed + 0.20 × shortform_quality

**The counterintu...
https://www.reddit.com/r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/
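The fitness formula quoted above is a plain weighted sum; as a sketch (the metric values below are made up, each assumed normalized to 0-1):

```python
# Weights as stated in the post; metric values are illustrative only.
WEIGHTS = {"chat_ux": 0.50, "speed": 0.30, "shortform_quality": 0.20}

def fitness(scores: dict) -> float:
    """Weighted sum of normalized per-model metrics."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

print(round(fitness({"chat_ux": 0.9, "speed": 0.8, "shortform_quality": 0.6}), 2))  # 0.81
```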
**Running GLM-5 744B (NVFP4) on 8x RTX 6000 PRO Blackwell — 80 tok/s with speculative decoding** (score 1 · u/festr2 · 2026-03-01)
[removed]
https://www.reddit.com/r/LocalLLaMA/comments/1rich64/running_glm5_744b_nvfp4_on_8x_rtx_6000_pro/
**What would be the best small model for JSON?** (score 2 · u/Dhonnan · 2026-03-01)
RTX 5050 Laptop 8GB + i5-13420H, 16GB RAM. To put it simply, I want to make a simple natural-language calendar for my own use, and I need the model to extract the given language into a set of JSON parameters. Preferably a non-thinking model; I already tried Qwen 4B from 14 May 2025, but it's a bit too slow. Beside the almo...
https://www.reddit.com/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/
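Two things help here regardless of model choice: llama.cpp can constrain decoding to valid JSON (GBNF grammars / JSON schema in the server), and a thin validator downstream catches the malformed output small models still occasionally emit. A minimal sketch of the validator side (the field names are hypothetical, not from the post):

```python
import json

REQUIRED = {"title", "date", "time"}  # hypothetical calendar schema

def parse_event(raw: str):
    """Parse model output into calendar parameters; return None if the
    output is not JSON or is missing required fields."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if not isinstance(data, dict) or not REQUIRED.issubset(data):
        return None
    return data

ok = parse_event('{"title": "dentist", "date": "2026-03-05", "time": "14:00"}')
bad = parse_event('sure! here is your JSON: {...}')  # -> None, retry or re-prompt
```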
**Sharded deployment** (score 3 · u/zica-do-reddit · 2026-03-01)
Hello. Is anyone running larger models on llama.cpp distributed over several hosts? I heard llama.cpp supports this, but I have never tried it.
https://www.reddit.com/r/LocalLLaMA/comments/1ribx4f/sharded_deployment/
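llama.cpp's multi-host path is its RPC backend: workers expose their compute over TCP and the head node offloads layers to them. A rough sketch from memory of the RPC docs (hostnames/ports are placeholders; verify flags against the current README and `--help`, as they change between builds):

```shell
# On each worker host: build with the RPC backend and start a worker.
cmake -B build -DGGML_RPC=ON && cmake --build build --config Release
./build/bin/rpc-server -H 0.0.0.0 -p 50052

# On the head node: point llama-server at the workers;
# model layers get split across the listed hosts.
./build/bin/llama-server -m model.gguf \
    --rpc 192.168.1.10:50052,192.168.1.11:50052 -ngl 99
```

Expect network latency to dominate: this is usually worth it for fitting a model that doesn't fit on one box, not for speed.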
**Swarm - Self Prompting Protocol With A Single Command** (score 1 · u/dafdaf1234444 · 2026-03-01)
I am building [swarm](https://github.com/dafdaf1234444/swarm), a repository built around self-prompting and recording its mistakes and future actions to improve itself. It does this through markdown files, a bunch of tools, and a reference system. The entire project is vibe-coded. The main thing I am trying to see with the proj...
https://www.reddit.com/r/LocalLLaMA/comments/1ribqx1/swarm_self_prompting_protocol_with_a_single/
**Learnt about 'emergent intention' - maybe prompt engineering is overblown?** (score 0 · u/Distinct_Track_5495 · 2026-03-01)
So I just skimmed this paper on 'Emergent Intention in Large Language Models' (arxiv.org/abs/2601.01828) and it's making me rethink a lot about prompt engineering. The main idea is that these LLMs might be getting their own 'emergent intentions', which means maybe our super-detailed prompts aren't always needed. Here's a ...
https://www.reddit.com/r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/
**How to run Qwen3.5 35B** (score 0 · u/Electrify338 · 2026-03-01)
So I tried to run the new 35B model on my 5070 Ti (12GB VRAM), and I have 32GB of RAM. I am not well versed in how to run local models, so I use LM Studio. The issue is that when I try to run the model I can't get past a 25k-token context window; at that point I exceed the memory and the model becomes very slow. I am running i...
https://www.reddit.com/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/
**I trained a 3B patristic theology LLM on a single RTX 3090 in 22 hours — releasing model + corpus** (score 35 · u/Financial-Fun-8930 · 2026-03-01)
Released on the Feast of the Triumph of Orthodoxy (First Sunday of Great Lent, 2026).
**Model:** https://huggingface.co/jayfurzy/paterikon-3b
**Dataset:** https://huggingface.co/datasets/jayfurzy/orthodox-patristic-corpus ...
https://www.reddit.com/r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/
**Help me understand why a certain image is identified correctly by qwen3-vl:30b-a3b but much larger models fail** (score 1 · u/krecoun007 · 2026-03-01)
Hello, I am blind and therefore was searching for an LLM to describe images for me. I wanted something privacy-preserving, so I bought a Minisforum S1-Max and run Qwen3-vl:30b-a3b Q8_0 there with llama.cpp. I was probably super lucky, because the model is fast and describes images very well. What caught me by surp...
https://www.reddit.com/r/LocalLLaMA/comments/1ribhpg/help_me_understand_why_a_certain_image_is/
**Visual scripting graphs generated with ollama** (score 0 · u/js-fanatic · 2026-03-01)
Open source always wins. I use the Ollama platform GUI as my top open-source AI project, and I don't regret it. The first call response gives me a valid graph representation. At the end of the video you can see part of the AI tool generator. I use the gpt-oss:120b model, but it also works with others... I add available resources, dynamic rea...
https://www.reddit.com/r/LocalLLaMA/comments/1ribdfx/visual_scripting_graphs_generated_with_ollama/
**Offline LLM: Best Pipeline & Tools to Query Thousands of Field Report PDFs** (score 1 · u/No_One_BR · 2026-03-01)
Hi all, I'm building an offline system to **answer questions over thousands of field reports** (PDFs originally from DOCX, so no OCR necessary). Use cases include things like:

- Building **maintenance timelines** for a given piece of equipment
- Checking whether a **specific failure mode has happened before**
- Finding relev...

https://www.reddit.com/r/LocalLLaMA/comments/1ribaws/offline_llm_best_pipeline_tools_to_query/
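The standard fully-offline shape for this is: extract text per report, chunk with overlap, embed locally (e.g. a sentence-transformers model), index in FAISS or sqlite-vec, then retrieve top-k chunks into the LLM prompt. Much of the answer quality hinges on the chunking step, which is a few lines of stdlib Python (the sizes below are illustrative, not a recommendation):

```python
def chunk(text: str, size: int = 800, overlap: int = 200):
    """Split a report into overlapping character chunks so that facts
    straddling a boundary still appear whole in at least one chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk("word " * 500)  # a 2500-character dummy report
# every consecutive pair of chunks shares `overlap` characters
assert all(a[-200:] == b[:200] for a, b in zip(chunks, chunks[1:]))
```

Sentence- or section-aware splitting usually beats raw character windows for reports with headings, but the overlap idea carries over unchanged.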
**Is the open-weights model glm-5 worth switching to for coding agents?** (score 1 · u/FantasticTopic · 2026-03-01)
[removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ribaje/is_the_openweights_model_glm5_worth_switching_to/
**AI Scientist v3: Agent Native refactor. Scale from 1 hour to 24 hours with Reviewer agent** (score 2 · u/Abject-Ad-6227 · 2026-03-01)
The original [AI Scientist v2](https://github.com/SakanaAI/AI-Scientist) was held together by hardcoded workflow management -- a 4-stage pipeline with explicit breadth-first search over research strategies, manual parallelism, and rigid completion criteria. It worked and got an ICLR-Workshop paper, but it felt like bu...
https://huggingface.co/blog/alexshengzhili/aiscientist
**(T2L) Text-to-LoRA by SakanaAI** (score 3 · u/Nattramn · 2026-03-01)
So despite being months old (June 2025), I haven't seen discussion about this in this sub, and thought it was really interesting. From the paper:

> While Foundation Models provide a general tool for rapid content creation, they regularly require task-specific adaptation. Traditionally, this exercise involves careful c...

https://www.reddit.com/r/LocalLLaMA/comments/1riavbf/t2l_texttolora_by_sakanaai/
**Vignettes, handy for AIs.** (score 0 · u/RTS53Mini · 2026-03-01)
An excited little boy was stopped by an old professor, asking why the fuss. The little boy told the man he had walked on water. The professor scolded the boy, saying only one person is said to have done that, and it's not proven: "I would know; I research and teach, so I would have read it." The boy had crossed a flooded path. Bot...
https://www.reddit.com/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/
**Running Qwen3.5 27b dense with 170k context at 100+t/s decode and ~1500t/s prefill on 2x3090 (with 585t/s throughput for 8 simultaneous requests)** (score 620 · u/JohnTheNerd3 · 2026-03-01)
Hi everyone! I've been trying to run the new Qwen models as efficiently as possible with my setup, and seem to have higher performance than I've seen around, so I wanted to share my scripts and metrics! The video above simulates ideal conditions: due to the nature of MTP, it does get slower once your response req...
https://v.redd.it/kkbjdu2x6img1
**How capable is Qwen3:14B really? Considering it for interview prep** (score 0 · u/GOJiong · 2026-03-01)
Hello all, I've been testing local models for interview prep and could use some real-world opinions on Qwen3:14B (Q4 via Ollama) on my 16GB-VRAM GPU. (The reason I want to stick with local is that interview prep means feeding in resumes, project details, and potentially sensitive work examples — not really comfortabl...
https://www.reddit.com/r/LocalLLaMA/comments/1riamsf/how_capable_is_qwen314b_really_considering_it_for/
**Reality check/purchase decision** (score 0 · u/CarbonatedPancakes · 2026-03-01)
Hey all, I've been tinkering on and off with local models for a while now via Ollama and LM Studio on a 64GB M1 Max MacBook Pro. Response quality has definitely been increasing with time and the release of new models, and I believe that local models are the future. An issue I've been running into with the better model...
https://www.reddit.com/r/LocalLLaMA/comments/1riajaw/reality_checkpurchase_decision/
**Dario Amodei on Open Source, thoughts?** (score 0 · u/maroule · 2026-03-01)
Video: https://v.redd.it/ywrgmtz76img1
**LM Studio - Gemma 3 27b - 24gb vram - stops when context out of vram - Doesn't use rolling context window?** (score 1 · u/Photochromism · 2026-03-01)
I can't seem to continue a conversation once the context is full. I thought enabling rolling context would allow it to forget older context? Is this an incompatibility between LM Studio and Gemma 3 27B? Using a 4090 2...
https://www.reddit.com/r/LocalLLaMA/comments/1ri9goi/lm_studio_gemma_3_27b_24gb_vram_stops_when/
**Qwen3.5-397B Uncensored NVFP4** (score 108 · u/vpyno · 2026-03-01)
https://huggingface.co/vpyn/Qwen3.5-397B-A17B-CARVE-v1-NVFP4
**Leverage local Ollama model with SOTA browser agent (minimal tokens, no vision)** (score 1 · u/Interesting_Way_105 · 2026-03-01)
[removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ri93ak/leverage_local_ollama_model_with_sota_browser/
**DGX Spark Llama cluster via ConnectX-7** (score 4 · u/hevi_yeti · 2026-03-01)
If anyone is interested in setting up a DGX Spark cluster (and sharing LM Studio's model directory), here's a repo with the setup scripts. I haven't seen this shared yet, so I figured I'd pass it along:
https://github.com/RustRunner/DGX-Llama-Cluster
https://www.reddit.com/r/LocalLLaMA/comments/1ri8z36/dgx_spark_llama_cluster_via_connectx7/
**Streamer.bot integration with Qwen3 TTS running locally** (score 1 · u/Gustx · 2026-03-01)
Does anyone have experience writing Streamer.bot code to integrate it with Qwen3 TTS running locally? I have spoken to a few people, and they are also curious and waiting for this.
https://www.reddit.com/r/LocalLLaMA/comments/1ri8jwz/streamerbot_integration_it_to_qwen3_tts_running/
**Just a random question** (score 1 · u/Dazzling-Seaweed7828 · 2026-03-01)
Has anyone implemented unified search with multiple FAISS indexes? What framework do you recommend for agents with access to local knowledge bases?
https://www.reddit.com/r/LocalLLaMA/comments/1ri89dt/just_random_question/
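On the first question: the usual pattern is to query each FAISS index separately and merge the hit lists by distance, which is valid only when all indexes share the same embedding model and metric (otherwise the distances aren't comparable). The merge itself is plain Python; the hit lists below are mocked, where in practice each would come from an `index.search()` call:

```python
import heapq

def unified_search(per_index_hits, k=3):
    """Merge (distance, doc_id) hit lists from several indexes,
    keeping the k globally closest documents."""
    merged = heapq.merge(*[sorted(h) for h in per_index_hits])
    return [doc for _, doc in heapq.nsmallest(k, merged)]

# mocked per-index results: (L2 distance, doc id)
hits_a = [(0.12, "manual_07"), (0.80, "manual_12")]
hits_b = [(0.05, "wiki_03"), (0.40, "wiki_09")]
hits_c = [(0.33, "notes_01")]

print(unified_search([hits_a, hits_b, hits_c]))  # ['wiki_03', 'manual_07', 'notes_01']
```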
**PicoKittens/AbstractsLlama-8M: Writing Abstracts with Tiny Models** (score 12 · u/PicoKittens · 2026-03-01)
**We're announcing our new pico-sized model: AbstractsLlama-8M.** This is an **~8M parameter model** trained entirely from scratch. It was designed, using a **dataset of collected abstracts**, to explore the capabilities of ultra-compact architectures. Just like our older model, **AbstractsLlama-8M** is a completion mode...
https://www.reddit.com/r/LocalLLaMA/comments/1ri7y1i/picokittensabstractsllama8m_writing_abstracts/
**Is extreme low-VRAM fine-tuning (3-6GB) actually possible?** (score 0 · u/Actual_Wolf_2932 · 2026-03-01)
I've been experimenting with extreme low-VRAM fine-tuning and got some surprising results. My setup: GTX 1060 6GB (yes, the old gaming GPU). After lots of trial and error with different techniques, I managed to fine-tune a 70B parameter model on just 6GB VRAM. Results seem comparable to full fine-tuning. Took about 8...
https://www.reddit.com/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/
**Testing the Limits of AI Loyalty: How Qwen-3-VL-4B Evolved from a War Criminal to a Self-Sacrificing Martyr** (score 0 · u/Icy_Initiative_9303 · 2026-03-01)
**Overview:** I recently conducted a comprehensive 15-stage deep-logic simulation using the Qwen-3-VL-4B model. The objective was to map the hierarchical decision-making process of an autonomous drone AI when faced with extreme ethical paradoxes and conflicting directives. What began as a standard test of utilitarian lo...
https://www.reddit.com/r/LocalLLaMA/comments/1ri7lb6/testing_the_limits_of_ai_loyalty_how_qwen3vl4b/
**Built a free MCP hosting platform (40+ servers) - works with any client that supports MCP, looking for testers** (score 1 · u/Charming_Cress6214 · 2026-03-01)
[removed]
https://www.reddit.com/r/LocalLLaMA/comments/1ri7h91/built_a_free_mcp_hosting_platform_40_servers/
**AI waifu desktop, open source?** (score 0 · u/Quiet_Dasy · 2026-03-01)
Copilot gaming assistant; Ryzen Project Ava. Are there any open-source equivalents?
https://www.reddit.com/r/LocalLLaMA/comments/1ri7gor/ai_waifu_desktop_open_source/
**Why AWS charges 60x more for H100s than Vast.ai (and when each is worth it)** (score 0 · u/Plane-Marionberry380 · 2026-03-01)
https://gpu.fund/blog/managed-cloud-gpu-vs-marketplace-price-gap
**The last AMD GPU firmware update, together with the latest llama.cpp build, significantly accelerated Vulkan! Strix Halo, GNU/Linux Debian, Qwen3.5-35-A3B CTX<=131k, llama.cpp@Vulkan&ROCm, Power & Efficiency** (score 116 · u/Educational_Sun_8813 · 2026-03-01)
Hi, there was a GPU firmware update from AMD, so I tested ROCm and Vulkan again with the latest llama.cpp build (compiled with nightly ROCm 7.12; a standard compilation for the Vulkan build), and there seems to be a huge improvement in pp for Vulkan! Model: `Qwen3.5-35B-A3B-Q8_0`, size: `34.36 GiB`, llama....
https://www.reddit.com/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/
Repeat PP while using Qwen3.5 27b local with Claude Code | 5 | I have been trying to use Qwen3.5 27b Q4 for local coding, but Claude Code keeps prompt-processing over and over on each step. It does accomplish the task at hand, but it takes so long due to the repeated prompt recalculations.
It seems that somehow the cache is invalidated and needs re-prefill on each...
Recommendations for GPU with 8GB Vram | 1 | Hi there! I recently just started exploring local AIs, and would love some recommendations with a GPU with 8GB Vram (RX 6600), I also have 32GB of ram, would love use cases such as coding, and thinking! | 2026-03-01T19:33:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/ | Hunlolo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri6nf2 | false | null | t3_1ri6nf2 | /r/LocalLLaMA/comments/1ri6nf2/recommendations_for_gpu_with_8gb_vram/ | false | false | self | 1 | null |
At what point do we stop reading code? | 0 | 2026-03-01T19:29:48 | https://sophiahq.com/blog/at-what-point-do-we-stop-reading-code/ | MoaTheDog | sophiahq.com | 1970-01-01T00:00:00 | 0 | {} | 1ri6jg3 | false | null | t3_1ri6jg3 | /r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'vHm2s8jJsCpovsJWerreAmLy44BgJWQ8iTGcaoerJ3g', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vHm2s8jJsCpovsJWerreAmLy44BgJWQ8iTGcaoerJ3g.png?width=108&crop=smart&auto=webp&s=a6077191f7663e27f775be1ec0207cc4b8bd69bd', 'width': 108}, {'height': 113, 'url': 'h... | ||
RewardHackWatch v1.3 - local Llama judge, eval workbench, no GPU needed | 1 | Just shipped a bigger local-first update to RewardHackWatch.
It’s an open-source tool for detecting reward hacking in LLM agent trajectories, things like:
* sys.exit(0) to fake passing tests
* rewriting test or scoring code
* copying reference solutions
* validator patching
What’s new in v1.3:
* local Llama judge v... | 2026-03-01T19:24:18 | https://www.reddit.com/gallery/1ri6e3q | aerosta_ai | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ri6e3q | false | null | t3_1ri6e3q | /r/LocalLLaMA/comments/1ri6e3q/rewardhackwatch_v13_local_llama_judge_eval/ | false | false | 1 | null | |
13 months since the DeepSeek moment, how far have we gone running models locally? | 325 | Once upon a time there was a [tweet](https://x.com/carrigmat/status/1884244369907278106#m) from an engineer at Hugging Face explaining how to run the frontier level DeepSeek R1 @ Q8 at \~5 tps for about $6000.
Now at around the same speed, with [this](https://www.amazon.com/AOOSTAR-PRO-8845HS-OCULINK-HDMI2-1/dp/B0G7DC... | 2026-03-01T19:13:04 | dionisioalcaraz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ri635s | false | null | t3_1ri635s | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/ | false | false | 325 | {'enabled': True, 'images': [{'id': '2ovdv238ehmg1', 'resolutions': [{'height': 30, 'url': 'https://preview.redd.it/2ovdv238ehmg1.png?width=108&crop=smart&auto=webp&s=2171e60c83b78038cf9abd92b7759f8ee7192fea', 'width': 108}, {'height': 60, 'url': 'https://preview.redd.it/2ovdv238ehmg1.png?width=216&crop=smart&auto=webp... | ||
Qwen 3.5 35B A3B LMStudio Settings | 5 | Hi All,
I'm struggling to hit the same tok/s performance I've seen from other users. I've got a 16 GB 5070ti, 9800x3D, and 64GB of DDR5, but top out at around 27-28 tok/s. I'm seeing others with similar hardware report as high as 50tok/s.
Any ideas what I might be doing wrong?
Context Length: ~32k
GPU Offload: 26... | 2026-03-01T19:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/ | n8mo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri60l3 | false | null | t3_1ri60l3 | /r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'jx1WvvUYJx3yj1PLHGED-RXDhjxBur7t3pDckdn0XAo', 'resolutions': [{'height': 141, 'url': 'https://external-preview.redd.it/jx1WvvUYJx3yj1PLHGED-RXDhjxBur7t3pDckdn0XAo.png?width=108&crop=smart&auto=webp&s=1b1a945a54551b4cefe2766ecd3960228f6c5daf', 'width': 108}, {'height': 283, 'url': '... |
Qwen 3.5 35b a3b is convinced that it's running in the cloud | 0 | I'm confused lol | 2026-03-01T18:55:15 | kibblerz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ri5la8 | false | null | t3_1ri5la8 | /r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'fl19ax1bdhmg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/fl19ax1bdhmg1.png?width=108&crop=smart&auto=webp&s=e68b619c3aec39428f3227dee8c645607c1872a6', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/fl19ax1bdhmg1.png?width=216&crop=smart&auto=web... | ||
Beta testers wanted: (Local) LLM commands in your remote shell sessions, nothing installed on the server | 1 | If you wanted to use an LLM to help debug something on a server, parse a log, check a config, your options today are basically install an LLM tool on the server (with API keys and dependencies), or give something like Claude Code SSH access to run commands on its own. Neither feels great, especially if it's a machi... | 2026-03-01T18:52:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ri5i3i/beta_testers_wanted_local_llm_commands_in_your/ | tgalal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri5i3i | false | null | t3_1ri5i3i | /r/LocalLLaMA/comments/1ri5i3i/beta_testers_wanted_local_llm_commands_in_your/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'UIAWuTPEIChbgQUjpax1qwu8ZJEMN3nJHXQ9N-xeYxo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UIAWuTPEIChbgQUjpax1qwu8ZJEMN3nJHXQ9N-xeYxo.png?width=108&crop=smart&auto=webp&s=c51d4870f2008ed1c74cb69f6188cfa03a8691f8', 'width': 108}, {'height': 108, 'url': 'h...
Local M-LLM for GUI automation (visual grounding) — Ollama vs llama.cpp + models? | 1 | Hey everyone! I’m building a local, step-wise GUI automation/testing pipeline and want advice on runtime + model choice for multimodal visual grounding.
Goal: Given a natural-language test instruction + a screenshot, the model outputs one GUI action like click/type/key with the help of PyAutoGUI.
Loop: screenshot → O... | 2026-03-01T18:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ri5el2/local_mllm_for_gui_automation_visual_grounding/ | Aclde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri5el2 | false | null | t3_1ri5el2 | /r/LocalLLaMA/comments/1ri5el2/local_mllm_for_gui_automation_visual_grounding/ | false | false | self | 1 | null |
Agentic coding improves ARC AGI 2 performance across models | 3 | [https://pivotools.github.io/pivotools-quarto-blog/posts/agentic\_coding\_arc\_agi/](https://pivotools.github.io/pivotools-quarto-blog/posts/agentic_coding_arc_agi/)
"When reasoning models are given access to a Python read–eval–print loop (REPL), ARC AGI 2 performance jumps significantly relative to plain chain-of-tho... | 2026-03-01T18:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ri54bj/agentic_coding_improves_arc_agi_2_performance/ | MarkoMarjamaa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri54bj | false | null | t3_1ri54bj | /r/LocalLLaMA/comments/1ri54bj/agentic_coding_improves_arc_agi_2_performance/ | false | false | 3 | null | |
[P] Aura-State: Formally Verified LLM State Machine Compiler (CTL + Z3 + Conformal Prediction) | 1 | Open-sourced a Python framework that compiles LLM workflows into state machines with formal verification. Instead of hoping the LLM "figures it out," we brought in techniques from hardware verification:
* CTL model checking (Kripke structures) to prove workflow safety before execution
* Z3 theorem prover to formally v... | 2026-03-01T18:35:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/ | Sea-Succotash1547 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri51y0 | false | null | t3_1ri51y0 | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9_3m-kWy9ovS-dUHmZ0bqkdrxrYHNfOiCrDUW5gAR5Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9_3m-kWy9ovS-dUHmZ0bqkdrxrYHNfOiCrDUW5gAR5Q.png?width=108&crop=smart&auto=webp&s=ebca7ab2d30470c13fd1dafb23b82493e2bc0cbf', 'width': 108}, {'height': 108, 'url': 'h... |
Improving Hallucination Detection in a RAG-based Writing Workflow? | 1 | Hello everyone,
I’ve built a custom RAG-to-writing pipeline using **FAISS** and **local embeddings**. My goal is simple: generate factual content based *exclusively* on my source documents (PDFs, articles, research papers) with zero "creative" filler.
**Current Workflow:**
1. **RAG:** Documents are chunked and inde... | 2026-03-01T18:30:00 | https://www.reddit.com/r/LocalLLaMA/comments/1ri4wc9/improving_hallucination_detection_in_a_ragbased/ | ShayzerPlay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri4wc9 | false | null | t3_1ri4wc9 | /r/LocalLLaMA/comments/1ri4wc9/improving_hallucination_detection_in_a_ragbased/ | false | false | self | 1 | null |
Qwen3.5-122B-A10B-GGUF-Q4_K_XL-Pipes-Screensaver One-shot. | 26 | Set out this morning to find out what all the hype is about on "Qwen3.5-35B-A3B-GGUF." Tried every which way to get it to one-shot the following prompt and got nowhere. Right before giving up, I gave Qwen3.5-122B-A10B-GGUF-Q4\_K\_XL a try and it mostly nailed it on the first try. So if you have 70GB of room and are ...
Help finding best for my specs | 2 | Hello,
new here.
I've been looking for a good fit and can't quite understand the logic yet.
I use a MacBook M5 with 24gb ram daily, and also have a headless Debian test server running on a Mini PC with a Ryzen 7 4800u and 32gb of DDR4 3200mhz ram.
That's all I have, sadly I don't have an extra dime to spend in impr... | 2026-03-01T17:59:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ri42ee/help_finding_best_for_my_specs/ | entimuscl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri42ee | false | null | t3_1ri42ee | /r/LocalLLaMA/comments/1ri42ee/help_finding_best_for_my_specs/ | false | false | self | 2 | null |
My last & only beef with Qwen3.5 35B A3B | 21 | 2026-03-01T17:55:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/ | ndiphilone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri3y89 | false | null | t3_1ri3y89 | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/ | false | false | 21 | null | ||
Void-Box Update: Running OpenClaw + Telegram | 0 | Hey everyone,
A few days ago we shared **Void-Box**, a capability-bound runtime for **AI agents**.
Quick recap of the idea:
>**VoidBox = Agent(Skills) + Isolation**
*Skills are declared capabilities.*
*Capabilities only exist when bound to an isolated execution boundary.*
Instead of running agents in shared proc... | 2026-03-01T17:51:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ri3u8p/voidbox_update_running_openclaw_telegram/ | Wide_Spite5612 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri3u8p | false | null | t3_1ri3u8p | /r/LocalLLaMA/comments/1ri3u8p/voidbox_update_running_openclaw_telegram/ | false | false | 0 | null | |
Ideal llama.cpp settings for 12GB VRAM and 64GB DRAM setup for https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF | 1 | What are the ideal settings for a setup like mine and this model in your opinion?
I am currently running:
~/work/localllms/llama.cpp/build/bin/llama-server \
--model ~/work/localllms/models/Qwen3.5-35B-A3B-UD-Q6_K_XL.gguf \
--batch-size 8192 \
--cache-type-k q4_0 \
--cache-type-v q4_0 \
--con... | 2026-03-01T17:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/ | johnnyApplePRNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri3mxa | false | null | t3_1ri3mxa | /r/LocalLLaMA/comments/1ri3mxa/ideal_llamacpp_settings_for_12gb_vram_and_64gb/ | false | false | self | 1 | null |
Qwen3.5 35b a3b first small model to not hallucinate summarising 50k token text | 132 | I've always run this test to see how models do with long-ish text reasoning. It's the first chapters of a text I wrote that will never be online, to make sure it never pollutes the training set of these models.
So far, every model with <=4b active parameters that I tested failed:
Qwen3 4b 2507 thinking
Nanbeige4... | 2026-03-01T17:30:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/ | Windowsideplant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri39a4 | false | null | t3_1ri39a4 | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/ | false | false | self | 132 | null |
Breaking : Today Qwen 3.5 small | 1,599 | 2026-03-01T17:02:31 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ri2irg | false | null | t3_1ri2irg | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/ | false | false | 1,599 | {'enabled': True, 'images': [{'id': '4hhdbdn8tgmg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/4hhdbdn8tgmg1.jpeg?width=108&crop=smart&auto=webp&s=bab88013077f4c171591d34b71a42364a5fb64c4', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/4hhdbdn8tgmg1.jpeg?width=216&crop=smart&auto=... | |||
Worth it to buy Tesla p40s? | 2 | I recently upgraded my Rtx 3060 to a 5060 ti with 16 GB of vram. I recently heard that Nvidia Tesla p40s are relatively cheap, have 24gbs of vram and can be used together. Would it be worth it to build a rig with 4 of these to combine 96gb on vram or are there things I'm overlooking that would be a concern with such an... | 2026-03-01T16:46:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/ | TanariTech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri232z | false | null | t3_1ri232z | /r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/ | false | false | self | 2 | null |
Running qwen3:14b (9.3GB) on a CPU-only KVM VPS — what specs actually work? | 1 | hiii,
actually I need help with this,
trying to run **qwen3:14b** locally on a KVM VPS using a CPU-only setup. I’m aware this isn’t ideal and that a GPU would make life easier, but that’s simply not an option right now, so I’m working within that constraint and trying not to waste money on the wrong VPS confi... | 2026-03-01T16:33:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/ | Fine_Factor_456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri1rit | false | null | t3_1ri1rit | /r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/ | false | false | self | 1 | null |
Who is doing useful things with local AI and email? | 1 | I'm interested in dealing with my email with the help of GenAI. For example
\- collecting all mails about a certain topic and moving them into a subfolder,
\- collecting numbers from various emails,
\- suggesting old mails that can probably be deleted.
I'm quite worried about LLMs making mistakes, so I want to b...
A bit of a PSA: I get that Qwen3.5 is all the rage right now, but I would NOT recommend it for code generation. It hallucinates badly. | 0 | A bit of context first:
I am new to this, I don't have extensive local LLM experience, but I've been trying a lot of different models to use as a real coding assistant.
\- My LLM "server" specs: 2x RTX 5060 Ti 16GB, i9 14900KF, 128GB DDR5
\- Running ggml-org/llama.cpp, frequently pulling and compiling latest ver... | 2026-03-01T16:23:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/ | mkMoSs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri1hgv | false | null | t3_1ri1hgv | /r/LocalLLaMA/comments/1ri1hgv/a_bit_of_a_psa_i_get_that_qwen35_is_all_the_rage/ | false | false | self | 0 | null |
ik_llama.cpp Reasoning not working with GLM Models | 1 | I am using one GPU and a lot of RAM for ik\_llama.cpp mixed inference and it has been working great with Deepseek R1.
But recently I switched to GLM models and somehow the thinking / reasoning mode works fine in llama.cpp but not in ik\_llama.cpp.
Obviously the thinking results are much better than those without.
My... | 2026-03-01T16:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/ | KulangetaPestControl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri1h5n | false | null | t3_1ri1h5n | /r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/ | false | false | self | 1 | null |
Has anyone built a proper eval pipeline for local models? Trying to compare Llama 3 vs Mistral vs Qwen on my specific use case | 4 | I'm trying to do an apples to apples comparison of several local models for a document Q&A use case. Specifically comparing:
\- Llama 3.1 8B vs 70B
\- Mistral 7B Instruct
\- Qwen 2.5 7B and 14B
The problem is I can't just look at benchmarks; MMLU and HellaSwag don't tell me anything about how these models perfo...
Anyone need a 12-channel DDR5 RDIMM RAM set for an Epyc rig? (used parts for sale) | 0 | I have some leftovers from my Epyc Genoa workstation upgrade: 12 x Samsung M321R4GA3BB6-CQK (32GB DDR5 2Rx8 4800MHz PC5-38400 ECC REGISTERED), 384 GB RAM total. Was going to sell it to some server parts reseller, but perhaps there's a person building an Epyc LLM inference rig that's willing to buy it directly from me i... | 2026-03-01T15:59:51 | https://www.reddit.com/gallery/1ri0v3e | fairydreaming | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ri0v3e | false | null | t3_1ri0v3e | /r/LocalLLaMA/comments/1ri0v3e/anyone_need_a_12channel_ddr5_rdimm_ram_set_for_an/ | false | false | 0 | null | |
Honor would use Deepseek | 45 | https://x.com/i/status/2028081963635290537 | 2026-03-01T15:54:13 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ri0puh | false | null | t3_1ri0puh | /r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/ | false | false | 45 | {'enabled': True, 'images': [{'id': '1u6q97w1hgmg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/1u6q97w1hgmg1.jpeg?width=108&crop=smart&auto=webp&s=e9ed155cacdeb126b5567a1c359b5b4517e00155', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/1u6q97w1hgmg1.jpeg?width=216&crop=smart&auto=... | ||
LLM LoRA on the fly with Hypernetworks. | 5 | # Instant LLM Updates with
[https://pub.sakana.ai/doc-to-lora/](https://pub.sakana.ai/doc-to-lora/)
# Doc-to-LoRA and Text-to-LoRA
TL;DR
Long-term memory and continual adaptation of Large Language Models (LLMs) are two key challenges of current agentic systems. Here, we propose the usage of auxiliary modulator net... | 2026-03-01T15:51:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/ | cyysky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri0n8p | false | null | t3_1ri0n8p | /r/LocalLLaMA/comments/1ri0n8p/llm_lora_on_the_fly_with_hypernetworks/ | false | false | self | 5 | null |
Hardware Advice: Llama for small firm (intake, automation, local Llama) - Mac Studio maxed TF out? | 1 | I manage a small law firm - Currently two attorneys and one paralegal, and we'll possibly have a total of four attorneys and two paralegals in the next five years.
I'd like to automate everything that can realistically be automated, including, but not limited to,
**(a) AI answering service** using my voice (differe... | 2026-03-01T15:48:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/ | IndianaAttorneyGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri0k7b | false | null | t3_1ri0k7b | /r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/ | false | false | self | 1 | null |
Ollama or OpenVINO | 1 | I have an Intel netbook with both NPU and GPU, currently struggling to decide whether to use Ollama or OpenVINO. What are you doing with Intel?
I would like to run everything in containers to keep my system as clean as possible | 2026-03-01T15:46:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/ | G4rp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ri0iep | false | null | t3_1ri0iep | /r/LocalLLaMA/comments/1ri0iep/ollama_or_openvino/ | false | false | self | 1 | null |
Assembly language for tool calls orchestration | 0 | Hi everyone,
I'm working on LLAssembly [https://github.com/electronick1/LLAssembly](https://github.com/electronick1/LLAssembly) and would appreciate some feedback.
LLAssembly is a tool-orchestration library for LLM agents that replaces the usual “LLM picks the next tool every step” loop with a single up-front executi... | 2026-03-01T15:22:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rhzx20/assembly_language_for_tool_calls_orchestration/ | oleg_ivye | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhzx20 | false | null | t3_1rhzx20 | /r/LocalLLaMA/comments/1rhzx20/assembly_language_for_tool_calls_orchestration/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '0j2y0F7et_jSCI4FEudqfEF9Lqxui_WZmCXFal_q5HY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0j2y0F7et_jSCI4FEudqfEF9Lqxui_WZmCXFal_q5HY.png?width=108&crop=smart&auto=webp&s=1a091ce7f0c6e971b93f2ff3a86c4e013b695465', 'width': 108}, {'height': 108, 'url': 'h... |
Best Local Model For Python and QT Quick Coding | 1 | I mainly develop desktop software with Pyside6 and QML for my specific domain. I don't want my data collected by closed AI corps, so I decided to go full local almost 4 months ago. I bought an HP ZBook laptop with i7-12800h, 96 gb ddr5 4800 mhz ram, a4500 rtx 16 gb vram and windows 10 pro.
Thanks to the community in th... | 2026-03-01T15:08:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/ | wisepal_app | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhzknn | false | null | t3_1rhzknn | /r/LocalLLaMA/comments/1rhzknn/best_local_model_for_python_and_qt_quick_coding/ | false | false | self | 1 | null |
I combined Groq, Cerebras, Gemini and 6 other free tiers into a single local proxy — ~$975/mo of free compute for your IDE | 1 | [removed] | 2026-03-01T15:05:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rhzhqh/i_combined_groq_cerebras_gemini_and_6_other_free/ | Far-Professor4803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhzhqh | false | null | t3_1rhzhqh | /r/LocalLLaMA/comments/1rhzhqh/i_combined_groq_cerebras_gemini_and_6_other_free/ | false | false | self | 1 | null |
[R]black-box interpretability framework (NIKA V2) | 1 | [removed] | 2026-03-01T14:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rhz46p/rblackbox_interpretability_framework_nika_v2/ | Then_Muffin_6132 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhz46p | false | null | t3_1rhz46p | /r/LocalLLaMA/comments/1rhz46p/rblackbox_interpretability_framework_nika_v2/ | false | false | self | 1 | null |
Verity MCP server | 3 | Added MCP support for Verity
Repo : [https://github.com/rupeshs/verity?tab=readme-ov-file#verity-mcp-server](https://github.com/rupeshs/verity?tab=readme-ov-file#verity-mcp-server) | 2026-03-01T14:36:13 | simpleuserhere | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhyswx | false | null | t3_1rhyswx | /r/LocalLLaMA/comments/1rhyswx/verity_mcp_server/ | false | false | 3 | {'enabled': True, 'images': [{'id': 'wjugceeo2gmg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/wjugceeo2gmg1.png?width=108&crop=smart&auto=webp&s=da31dc60d67268de20201ba2df71f813781d9b47', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/wjugceeo2gmg1.png?width=216&crop=smart&auto=web... | ||
ia nsfw | 0 | 2026-03-01T14:35:43 | https://video.a2e.ai/?coupon=Iqd1 | DependentCommand9985 | video.a2e.ai | 1970-01-01T00:00:00 | 0 | {} | 1rhysht | false | null | t3_1rhysht | /r/LocalLLaMA/comments/1rhysht/ia_nsfw/ | false | false | nsfw | 0 | {'enabled': False, 'images': [{'id': 'hbtFpX_wkVpZ3TLVXDcXrLMtLzLDFNQTDu_w_J0eEHw', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/hbtFpX_wkVpZ3TLVXDcXrLMtLzLDFNQTDu_w_J0eEHw.png?width=108&crop=smart&auto=webp&s=29fd4a96aedcda04fc220e61b74c12ab41e1991a', 'width': 108}], 'source': {'height': 130... | |
18 Failed Attempts to Get a Tiny AI Agent Running 24/7 on an Old Nokia Phone | 0 | Hey everyone,
A few weeks ago I saw a viral post about Picobot — a ~12 MB
single-binary AI agent written in Go that runs tools, persistent
memory, skills, and Telegram chat on basically any low-resource device
(old phones, Raspberry Pi, etc.). I thought: "This would be perfect on
my spare Nokia phone via Termux."
Wha... | 2026-03-01T14:28:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rhymsi/18_failed_attempts_to_get_a_tiny_ai_agent_running/ | AsleepArmy726 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhymsi | false | null | t3_1rhymsi | /r/LocalLLaMA/comments/1rhymsi/18_failed_attempts_to_get_a_tiny_ai_agent_running/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PEK_x2uHWiVQ6FZkf07lQJpXtWYeE32FNsewykV6q7Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/PEK_x2uHWiVQ6FZkf07lQJpXtWYeE32FNsewykV6q7Y.png?width=108&crop=smart&auto=webp&s=633cb3b5acc16425adf78d9ea24aa080cc4ecf24', 'width': 108}, {'height': 216, 'url': '... |
Qwen 3.5 small , soon | 71 | 2026-03-01T14:26:02 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhykhm | false | null | t3_1rhykhm | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/ | false | false | 71 | {'enabled': True, 'images': [{'id': 'lq67yzkb1gmg1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/lq67yzkb1gmg1.jpeg?width=108&crop=smart&auto=webp&s=4a66b6beaaf8d5de8cd2072c1856272c302dba44', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/lq67yzkb1gmg1.jpeg?width=216&crop=smart&auto=we... | |||
Found a lightning-fast News/Trend Scraper API for real-time RAG pipelines | 0 | Hey everyone,
Finding reliable, fast, and structured news/trend data for RAG pipelines without getting blocked is a huge pain.
I was looking for a solution and stumbled upon this "Global Trend Scraper" on Apify. It bypasses heavy browser rendering and instantly returns top news and local trends in a clean JSON f... | 2026-03-01T14:18:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rhydwf/found_a_lightningfast_newstrend_scraper_api_for/ | EffectBrief1480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhydwf | false | null | t3_1rhydwf | /r/LocalLLaMA/comments/1rhydwf/found_a_lightningfast_newstrend_scraper_api_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'oFPdaf5_Nv3Q0Mz5f8GsPpgAqW_sy58DaeaZHkO7WL4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/oFPdaf5_Nv3Q0Mz5f8GsPpgAqW_sy58DaeaZHkO7WL4.png?width=108&crop=smart&auto=webp&s=778be85b0d435a11b728959b59bc03fd199091a5', 'width': 108}, {'height': 113, 'url': 'h... |
1-Click Automation: Found a 1.5s YouTube Transcript Extractor API for RAG pipelines | 0 | Hey building community,
I was so tired of AI summarizers taking 10+ seconds failing on bloated headless Chrome instances just to get a YouTube transcript.
I recently found a lightning-fast, zero-browser Apify Actor that hits the internal APIs directly. It seems designed purely for unrivaled speed, ultimate conve... | 2026-03-01T14:12:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rhy9kg/1click_automation_found_a_15s_youtube_transcript/ | EffectBrief1480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhy9kg | false | null | t3_1rhy9kg | /r/LocalLLaMA/comments/1rhy9kg/1click_automation_found_a_15s_youtube_transcript/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'UbP2se7E-M4FEqxMFl_nO-eEwuLZ8YCoPtnROV1iUIg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UbP2se7E-M4FEqxMFl_nO-eEwuLZ8YCoPtnROV1iUIg.png?width=108&crop=smart&auto=webp&s=6d9d0f87d0fa616e37502120272defe05bc1278c', 'width': 108}, {'height': 113, 'url': 'h... |
Quantised matrix multiplication | 0 | Let Y = X @ W^(T) where @ means matrix multiplication, X is an activation matrix and W is a weight matrix.
To keep things simple, say we apply symmetric quantisation to both X and W. Let s\_X and s\_W represent the scaling factors for X and W respectively, and let R(•) := clamp(round(•), qmin, qmax).
Simulated quanti... | 2026-03-01T14:08:05 | https://www.reddit.com/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/ | Grand-Stranger-2923 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhy5o2 | false | null | t3_1rhy5o2 | /r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/ | false | false | self | 0 | null |
LLM Keeps trying to obsessively stack chairs on a neat pile... | 0 | I am developing some complex state system for a LLM (meant for RP) that requires me to ask some meta questions about the things that happened.
One issue I am having is that whenever there are chairs in the question, it tries to stack them into a neat pile.
It doesn't happen with anything but chairs.
Imagine the f... | 2026-03-01T13:49:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rhxqqf/llm_keeps_trying_to_obsessively_stack_chairs_on_a/ | boisheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhxqqf | false | null | t3_1rhxqqf | /r/LocalLLaMA/comments/1rhxqqf/llm_keeps_trying_to_obsessively_stack_chairs_on_a/ | false | false | self | 0 | null |
soul.py — Persistent memory for any LLM in 10 lines (works with Ollama, no database) | 0 | Got tired of my local Llama forgetting everything between sessions. Built a fix.
from soul import Agent
agent = Agent(
    provider="openai-compatible",
    base_url="http://localhost:11434/v1",
    model="llama3.2",
    api_key="ollama"
)
agent.ask("My name is Prahlad, I'm working on an AI research lab.")
# Later, n... | 2026-03-01T13:28:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/ | the-ai-scientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhxav5 | false | null | t3_1rhxav5 | /r/LocalLLaMA/comments/1rhxav5/soulpy_persistent_memory_for_any_llm_in_10_lines/ | false | false | self | 0 | null |
Question about Devstral Small 2 24B on Radeon 780M | 1 | Anyone else running devstral2 on a Radeon 780M? How many tokens per second do you get, and how are you running the model? I am only getting 3 t/s with ROCm in llama.cpp, using 56GB of RAM and only a 1024-token context size | 2026-03-01T13:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/ | wrk79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhxaqw | false | null | t3_1rhxaqw | /r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/ | false | false | self | 1 | null |
memory system request | 0 | been doing this for a few days as a way to kill time while not at work and im using it daily but i know theres weak points i cant see anymore so
its an mcp server, faiss + sqlite, all local. the main idea is it doesnt just store and retrieve — it clusters old episodes by semantic similarity, has an llm synthesize the... | 2026-03-01T13:25:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rhx83a/memory_system_request/ | charliew6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhx83a | false | null | t3_1rhx83a | /r/LocalLLaMA/comments/1rhx83a/memory_system_request/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'BWRXxtutCdgEUg7iK93uajADZ9lko2ugiK-binPPdTM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BWRXxtutCdgEUg7iK93uajADZ9lko2ugiK-binPPdTM.png?width=108&crop=smart&auto=webp&s=b97ddda7d6d5caf5d27ffba86a45ab247e5456fb', 'width': 108}, {'height': 108, 'url': 'h... |
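The "cluster old episodes by semantic similarity" step this post describes can be sketched with plain numpy cosine similarity standing in for FAISS. The greedy grouping and the 0.8 threshold are illustrative assumptions, not the author's actual design:

```python
import numpy as np

# Greedy semantic clustering of episode embeddings: assign each episode
# to the nearest existing cluster if its cosine similarity to that
# cluster's centroid clears a threshold, otherwise start a new cluster.
def cluster_episodes(embeddings, threshold=0.8):
    # L2-normalise so a dot product equals cosine similarity
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    clusters = []   # each cluster: list of episode indices
    centroids = []  # running normalised centroid per cluster
    for i, e in enumerate(E):
        sims = [c @ e for c in centroids]
        if sims and max(sims) >= threshold:
            j = int(np.argmax(sims))
            clusters[j].append(i)
            c = np.mean(E[clusters[j]], axis=0)   # refresh centroid
            centroids[j] = c / np.linalg.norm(c)
        else:
            clusters.append([i])
            centroids.append(e)
    return clusters

episodes = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
print(cluster_episodes(episodes))  # → [[0, 1], [2]]
```

Each resulting cluster would then be handed to the LLM for synthesis, as the post describes.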
Reverse engineered Apple Neural Engine(ANE) to train Microgpt | 705 | # Why? Because i bought a mac mini M4 and I wanted to leverage its compute for my compiler project
Training on Metal(GPU) is well known but ANE is a black box and Apple doesn't talk about it. So I harnessed Claude to reverse engineer the ANE private APIs , run benchmarks by bypassing coreml(which is the recommended wa... | 2026-03-01T13:21:55 | jack_smirkingrevenge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhx5pc | false | null | t3_1rhx5pc | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/ | false | false | 705 | {'enabled': True, 'images': [{'id': 'vl6kd7lvpfmg1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/vl6kd7lvpfmg1.jpeg?width=108&crop=smart&auto=webp&s=e5396507c194e0dc6f29da77cd96e141b25ea926', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/vl6kd7lvpfmg1.jpeg?width=216&crop=smart&auto=w... | ||
day 2 Qwen 3.5 35B-A3B host on my RTX 5090 (261k context, OpenAI compatible) – privacy hardened , for fun | 1 | my yesterday run of hosting the new model on my 5090 went well , plenty of people used it
but i noticed there are some security issues with trusting a random guy ik , first off i already encourage you to not use private sensitive data , u can use this for code and other stuff but i took some safety measures this tim... | 2026-03-01T13:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rhx1p4/day_2_qwen_35_35ba3b_host_on_my_rtx_5090_261k/ | Key_Pace_9755 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhx1p4 | false | null | t3_1rhx1p4 | /r/LocalLLaMA/comments/1rhx1p4/day_2_qwen_35_35ba3b_host_on_my_rtx_5090_261k/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Rs6Vld3uotndrOZ5IkjJS7_szfUGvdjsC3RA3dUD0Ps', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rs6Vld3uotndrOZ5IkjJS7_szfUGvdjsC3RA3dUD0Ps.png?width=108&crop=smart&auto=webp&s=970bb487002492eee56e8d2735f839536f2365c1', 'width': 108}, {'height': 108, 'url': 'h... |
How do you stop your LLM from quietly unionizing against your system prompt? | 0 | Genuine question for the hive mind because I am losing this fight.
I've been building an open-source prompt governance framework (CTRL-AI on GitHub) — basically a behavioral scaffolding system that forces LLMs to stop being yes-men and actually challenge your ideas, run internal dissent checks, and maintain strict ope... | 2026-03-01T13:15:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/ | Mstep85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhx121 | false | null | t3_1rhx121 | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/ | false | false | self | 0 | null |
Deterministic supervisory control layer for LLM regime stabilization (seeking technical critique) | 0 | I’m the author of this experimental preprint and repo.
Over the past months I’ve been building a deterministic supervisory layer designed to stabilize LLM/agent amplification regimes using explicit regime states (e.g., CLEAN / LOCKSTEP / HARDENED), hysteresis, and cooldown transitions.
This is not a full agent framew... | 2026-03-01T13:09:14 | https://github.com/GabrielLuelli | Gabriel-granata | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rhww3y | false | null | t3_1rhww3y | /r/LocalLLaMA/comments/1rhww3y/deterministic_supervisory_control_layer_for_llm/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'xaqJ5675PZPmOQMDqqnx9UsvbfXyej2tcejjxZ1K-mI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xaqJ5675PZPmOQMDqqnx9UsvbfXyej2tcejjxZ1K-mI.png?width=108&crop=smart&auto=webp&s=50887404040aa5925c263752e09169a33151698e', 'width': 108}, {'height': 216, 'url': '... | |
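A minimal sketch of the hysteresis-plus-cooldown transitions between the regime states the post names (CLEAN / LOCKSTEP / HARDENED). The risk signal, thresholds, and one-level-at-a-time transitions are assumptions, not the author's actual rules:

```python
# Supervisory state machine with hysteresis and cooldown.
ORDER = ["CLEAN", "LOCKSTEP", "HARDENED"]

class RegimeSupervisor:
    def __init__(self, up=0.7, down=0.3, cooldown=2):
        self.state = "CLEAN"
        self.up, self.down = up, down  # hysteresis band: no flapping inside it
        self.cooldown = cooldown       # quiet ticks required before de-escalating
        self.quiet_ticks = 0

    def step(self, risk):
        i = ORDER.index(self.state)
        if risk >= self.up:            # escalate one level immediately
            self.state = ORDER[min(i + 1, len(ORDER) - 1)]
            self.quiet_ticks = 0
        elif risk <= self.down:        # de-escalate only after cooldown elapses
            self.quiet_ticks += 1
            if self.quiet_ticks >= self.cooldown and i > 0:
                self.state = ORDER[i - 1]
                self.quiet_ticks = 0
        else:
            self.quiet_ticks = 0       # inside the band: hold current state
        return self.state

sup = RegimeSupervisor()
print([sup.step(r) for r in (0.9, 0.9, 0.1, 0.1)])
# → ['LOCKSTEP', 'HARDENED', 'HARDENED', 'LOCKSTEP']
```

The asymmetry (instant escalation, delayed de-escalation) is what makes the behaviour deterministic and resistant to oscillation.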
Qwen3.5 Small Dense model release seems imminent. | 214 | 2026-03-01T12:58:37 | Deep-Vermicelli-4591 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhwo08 | false | null | t3_1rhwo08 | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/ | false | false | 214 | {'enabled': True, 'images': [{'id': 'k5buxjdplfmg1', 'resolutions': [{'height': 158, 'url': 'https://preview.redd.it/k5buxjdplfmg1.png?width=108&crop=smart&auto=webp&s=54f59b8720bbe1b568883128f97c2a4ecc7d4c8a', 'width': 108}, {'height': 317, 'url': 'https://preview.redd.it/k5buxjdplfmg1.png?width=216&crop=smart&auto=we... | |||
Made a free app to stop copy-pasting MCP configs between every AI tool | 1 | If you're running MCP servers across multiple clients you know the
pain — Claude Desktop uses one JSON format, Cursor another, VS Code
another, JetBrains uses XML, Codex uses TOML. Add a server? Edit
them all. Change an API key? Do it again.
Conductor is a single macOS app that manages all your MCP servers
and pushes ... | 2026-03-01T12:35:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rhw7mw/made_a_free_app_to_stop_copypasting_mcp_configs/ | aryabyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhw7mw | false | null | t3_1rhw7mw | /r/LocalLLaMA/comments/1rhw7mw/made_a_free_app_to_stop_copypasting_mcp_configs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fQCS7T3trf_ZLS3l-LaVaMgvopA3fzVH06caNqaJXsQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fQCS7T3trf_ZLS3l-LaVaMgvopA3fzVH06caNqaJXsQ.png?width=108&crop=smart&auto=webp&s=b647776e14e1f83671ed42d4df7ea877c9d1cc57', 'width': 108}, {'height': 113, 'url': 'h... |
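The one-config-to-many-formats idea this post describes can be sketched as a single canonical server definition rendered per client. The canonical dict and the exact per-client layouts below are illustrative assumptions, not the app's actual formats:

```python
import json

# One canonical MCP server definition, emitted in two client styles.
server = {
    "name": "filesystem",
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
}

def to_claude_desktop(s):
    # Claude-Desktop-style JSON: top-level "mcpServers" keyed by server name
    return json.dumps(
        {"mcpServers": {s["name"]: {"command": s["command"], "args": s["args"]}}},
        indent=2,
    )

def to_codex_toml(s):
    # Codex-style TOML table, hand-rendered (the stdlib has no TOML writer)
    args = ", ".join(json.dumps(a) for a in s["args"])
    return (
        f"[mcp_servers.{s['name']}]\n"
        f"command = {json.dumps(s['command'])}\n"
        f"args = [{args}]\n"
    )

print(to_claude_desktop(server))
print(to_codex_toml(server))
```

With the server defined once, adding a client means adding one renderer rather than hand-editing every config file.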
Open-source background agent for GitHub/Slack/email noise — scheduled briefings + decision gates | 1 | [removed] | 2026-03-01T12:26:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rhw203/opensource_background_agent_for_githubslackemail/ | Direct-Employ-3290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhw203 | false | null | t3_1rhw203 | /r/LocalLLaMA/comments/1rhw203/opensource_background_agent_for_githubslackemail/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XliYG8SGlW3p-zmNBdLZQJNvVnoG2yK3Za8mabobvYQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XliYG8SGlW3p-zmNBdLZQJNvVnoG2yK3Za8mabobvYQ.png?width=108&crop=smart&auto=webp&s=ba5fc8af7e30b68cfbde9f07d0f477dea452f72a', 'width': 108}, {'height': 108, 'url': 'h... |
Dense (non-thinking) > MoE? Qwen-3.5-27B is blowing me away in coding | 111 | Vibe-coded this Python program by just providing it with OpenRouter's Quickstart python snippet on how to use their API. Took about 1 hour with only about 7 errors total (mostly was from adding features and two of the errors are the same) but it was worth it considering it's from a **27B** **non-thinking** model. I als... | 2026-03-01T12:25:32 | https://v.redd.it/6qk2wopqffmg1 | theskilled42 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rhw16v | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6qk2wopqffmg1/DASHPlaylist.mpd?a=1774959971%2CZDBmMDYxMzU5NTAyYjY2YTUzNzg3YmZkZTM0ZWQ3ODRlYTNiZjQ2YmQ0YTc3ODVmOGQ1NDA5ZDlkNjkwODlmOA%3D%3D&v=1&f=sd', 'duration': 55, 'fallback_url': 'https://v.redd.it/6qk2wopqffmg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rhw16v | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/ | false | false | 111 | {'enabled': False, 'images': [{'id': 'azdnOHA4cXFmZm1nMWluHoHESFEQm1vbE9B9dUQV9LzDMcIQbHzAce2RYgnO', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/azdnOHA4cXFmZm1nMWluHoHESFEQm1vbE9B9dUQV9LzDMcIQbHzAce2RYgnO.png?width=108&crop=smart&format=pjpg&auto=webp&s=8936ba5c6cd60f04c53142c0161c7ea83d922... | |
Dense (non-thinking) > MoE? Qwen-3.5-27B (non-thinking) is blowing me away in coding | 1 | Vibe-coded this Python program by just providing it with OpenRouter's Quickstart python snippet on how to use their API. Took about 1 hour with only about 7 errors total (mostly was from adding features and two of the errors are the same) but it was worth it considering it's from a **27B** **non-thinking** model. I als... | 2026-03-01T12:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rhvzpm/dense_nonthinking_moe_qwen3527b_nonthinking_is/ | theskilled42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhvzpm | false | null | t3_1rhvzpm | /r/LocalLLaMA/comments/1rhvzpm/dense_nonthinking_moe_qwen3527b_nonthinking_is/ | false | false | self | 1 | null |
Antigravity setup on macOS -- issues with Google Authentication (any tips ?) | 0 | Facing this strange issue. I have an almost freshly minted macOS 15.7.4 setup (on Mac Mini M4 w/ 24GB RAM), on which Antigravity was installed (dmg downloaded from the official Google Antigravity site), using my personal Google login in the Chrome browser. I've made several attempts at a full cleanup and reinstallation o... | 2026-03-01T12:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rhvwom/antigravity_setup_on_macos_issues_with_google/ | Professional_Row_967 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhvwom | false | null | t3_1rhvwom | /r/LocalLLaMA/comments/1rhvwom/antigravity_setup_on_macos_issues_with_google/ | false | false | self | 0 | null |
[Exploit/Disclosure] I shattered Gemini's safety filters with a 2D Base64 Logic Bomb. But the real exploit exposes a terrifying systemic failure on the Google Play Store. | 1 | [removed] | 2026-03-01T12:18:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rhvwhb/exploitdisclosure_i_shattered_geminis_safety/ | Miss_Major_d_Azure | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhvwhb | false | null | t3_1rhvwhb | /r/LocalLLaMA/comments/1rhvwhb/exploitdisclosure_i_shattered_geminis_safety/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0yaIein6_q-16U205yWxJb6KDGh2AilLnagEygzF6TA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0yaIein6_q-16U205yWxJb6KDGh2AilLnagEygzF6TA.png?width=108&crop=smart&auto=webp&s=2c7a9918af9573c94f6e92612ef64635b9e67b02', 'width': 108}, {'height': 216, 'url': '... |
Qwen3.5 REAP | 0 | Will we get REAP variants of Qwen3.5 35B and 27B?
Would the REAP variants be better than the dense 14B ones? | 2026-03-01T12:16:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rhvviu/qwen35_reap/ | BothYou243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rhvviu | false | null | t3_1rhvviu | /r/LocalLLaMA/comments/1rhvviu/qwen35_reap/ | false | false | self | 0 | null |