name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o83qefp | I use 3060 12GBs and 2 P40 24GBs.
My main rig is 6x RTX 3060s and 1 P40 24GB to pool 96GB VRAM, then the extra P40 for automation jobs.
It compares in speed to a 1080 Ti 11GB, but with 24GB, no problem.
I use 2x Noctua 120mm fans running 100% speed to keep them silent, with some 3D printed parts to hold them in place, an... | 2 | 0 | 2026-03-01T19:23:59 | Dundell | false | null | 0 | o83qefp | false | /r/LocalLLaMA/comments/1ri232z/worth_it_to_buy_tesla_p40s/o83qefp/ | false | 2 |
t1_o83qdu1 | You forced an inferior solution on them for a problem that didn’t exist. | 1 | 0 | 2026-03-01T19:23:55 | illicITparameters | false | null | 0 | o83qdu1 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83qdu1/ | false | 1 |
t1_o83qcag | [removed] | 1 | 0 | 2026-03-01T19:23:41 | [deleted] | true | null | 0 | o83qcag | false | /r/LocalLLaMA/comments/1rhcs8p/tiny_small_faster_models_for_13_year_old_laptop/o83qcag/ | false | 1 |
t1_o83q9kx | Why do you say 27B is "highly superior" to R1? It is very *good*, especially for its size. It probably also is better trained for agentic usage. But I'm not convinced it would be actually smarter than R1..? | 47 | 0 | 2026-03-01T19:23:18 | -dysangel- | false | null | 0 | o83q9kx | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83q9kx/ | false | 47 |
t1_o83q1t7 | Actually ~1001 GB/s with a memory OC on the 5070 Ti. Stock is 896 GB/s | 2 | 0 | 2026-03-01T19:22:13 | BORIS3443 | false | null | 0 | o83q1t7 | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83q1t7/ | false | 2 |
t1_o83q0a1 | So far, yes. I’m still doing an extensive amount of testing with different configurations and different frameworks so I can come back and give more numbers later. But so far, at least for our purposes with language models and inference, the seven-year-old Mac Pro is beating out my M1 Ultra, which is kind of crazy. Sinc... | 2 | 0 | 2026-03-01T19:22:00 | JacketHistorical2321 | false | null | 0 | o83q0a1 | false | /r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/o83q0a1/ | false | 2 |
t1_o83pvf5 | KV quantization takes some extra computation. With the Q4 quant, this might also significantly degrade quality. | 1 | 0 | 2026-03-01T19:21:21 | phenotype001 | false | null | 0 | o83pvf5 | false | /r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83pvf5/ | false | 1 |
t1_o83pts7 | If that is the official LM Studio version and not a random unsloth or noctrex, etcetera, then I had the same issue. Downloading a different version of the model immediately fixed my speed issues. Bite the bullet on downloading another 20 gigs. I am using the bartowski q4_K_L and it was a huge speed jump from the "off... | 6 | 0 | 2026-03-01T19:21:07 | _-_David | false | null | 0 | o83pts7 | false | /r/LocalLLaMA/comments/1ri60l3/qwen_35_35b_a3b_lmstudio_settings/o83pts7/ | false | 6 |
t1_o83psqs | New to speculative decoding here... does mixing dense with MoE work? So would 37b-A3B as a draft for 27B dense work? | 2 | 0 | 2026-03-01T19:20:59 | hampsonw | false | null | 0 | o83psqs | false | /r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o83psqs/ | false | 2 |
t1_o83ppd3 | what context length were you running it at? 50k tokens is a lot for a 3b active param model, curious if it actually tracked the full window or just got lucky with the important bits being near the end | 1 | 0 | 2026-03-01T19:20:30 | Kaeliibit | false | null | 0 | o83ppd3 | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o83ppd3/ | false | 1 |
t1_o83pgrj | Once the server is running, connecting front-end applications to it is no different than with ollama. The difference in complexity is "ollama run 'modelname'" with ollama, versus downloading the model you want from huggingface and then pointing the llama-swap config file to it for llama.cpp. A little more manual work... | 1 | 0 | 2026-03-01T19:19:17 | suicidaleggroll | false | null | 0 | o83pgrj | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o83pgrj/ | false | 1 |
t1_o83pfbp | I would say benchmaxing plays its role, and newer models are more adapted to contemporary benchmarks than older ones. | 18 | 0 | 2026-03-01T19:19:06 | uti24 | false | null | 0 | o83pfbp | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o83pfbp/ | false | 18 |
t1_o83pf73 | Report thinking in tokens. | 1 | 0 | 2026-03-01T19:19:05 | crantob | false | null | 0 | o83pf73 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o83pf73/ | false | 1 |
t1_o83pdj1 | if statements | 1 | 0 | 2026-03-01T19:18:51 | fractalcrust | false | null | 0 | o83pdj1 | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o83pdj1/ | false | 1 |
t1_o83pbgm | I’m still getting avg 10 tok, will put up new post for MLX perf | 1 | 0 | 2026-03-01T19:18:34 | Honest-Debate-6863 | false | null | 0 | o83pbgm | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83pbgm/ | false | 1 |
t1_o83p8e2 | friendships end, but anthropic is never going to care to hurt me personally. if your threat model includes places that could subpoena that info, good luck.
that leaves data leaks. | 1 | 0 | 2026-03-01T19:18:09 | eepyCrow | false | null | 0 | o83p8e2 | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83p8e2/ | false | 1 |
t1_o83p5jm | [deleted] | 1 | 0 | 2026-03-01T19:17:45 | [deleted] | true | null | 0 | o83p5jm | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83p5jm/ | false | 1 |
t1_o83ozw1 | Against other quantizations, it’s competitive. Some models degrade heavily on quant variants, and it isn’t fully understood yet, hence I picked very niche problems to measure their true effectiveness. I’d say it’s still more reliable than new ones. LFM is still hard to beat for edge deployments | 1 | 0 | 2026-03-01T19:16:57 | Honest-Debate-6863 | false | null | 0 | o83ozw1 | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83ozw1/ | false | 1 |
t1_o83owba | You're both compute and VRAM bottlenecked. What I would recommend is going for the 35B-A3B version and offloading a few layers to RAM; I think it should be around 20GB, so it should fit in your config. Since it has 3B active, the speed will be significantly better than the 27B | 2 | 0 | 2026-03-01T19:16:28 | rulerofthehell | false | null | 0 | o83owba | false | /r/LocalLLaMA/comments/1refvmr/qwen_3_27b_is_impressive/o83owba/ | false | 2 |
t1_o83otqz | Past 31ish years for me. Best intellectual adventure of my life. | 1 | 0 | 2026-03-01T19:16:08 | crantob | false | null | 0 | o83otqz | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o83otqz/ | false | 1 |
t1_o83omen | I'd pay for a good local linux expert LLM with all webcrap excised from memory. | 2 | 0 | 2026-03-01T19:15:07 | crantob | false | null | 0 | o83omen | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o83omen/ | false | 2 |
t1_o83okl3 | Probably on my machine that has a 3090 Ti / 64GB RAM. I have a 64GB unified-memory M3 Max MacBook Pro that I'm also going to be testing it on | 1 | 0 | 2026-03-01T19:14:52 | saucedy | false | null | 0 | o83okl3 | false | /r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o83okl3/ | false | 1 |
t1_o83ohc1 | [removed] | 1 | 0 | 2026-03-01T19:14:25 | [deleted] | true | null | 0 | o83ohc1 | false | /r/LocalLLaMA/comments/1rh2tmu/benchmarking_opensource_llms_for_security/o83ohc1/ | false | 1 |
t1_o83of4e | As soon as it’s released, will add it. | 1 | 0 | 2026-03-01T19:14:07 | Honest-Debate-6863 | false | null | 0 | o83of4e | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83of4e/ | false | 1 |
t1_o83oe9i | Except they don't work. Not well. And they never will. No matter what you do with memory, the one and only function an LLM has is outputtoken(context_window). That's it. That's all there is. And so any 'memory' has to be injected into 'context_window' as raw text formatted 'somehow', and this raw text has to be interpr... | 1 | 0 | 2026-03-01T19:14:00 | No-Complex6705 | false | null | 0 | o83oe9i | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o83oe9i/ | false | 1 |
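A toy Python sketch of that claim (all names are invented stand-ins, not any real framework's API):

```python
# Toy sketch: however elaborate the "memory" system, it reduces to pasting
# retrieved text into the context window before the model's one operation.
def output_token_stream(context_window: str) -> str:
    # Stand-in for the LLM: context in, tokens out. Nothing else exists.
    return f"<completion conditioned on {len(context_window)} chars of context>"

def answer(memory_store: list[str], user_msg: str) -> str:
    recalled = "\n".join(memory_store)                             # "memory"...
    context = f"Relevant notes:\n{recalled}\n\nUser: {user_msg}"   # ...formatted "somehow"
    return output_token_stream(context)                            # injected as raw text

print(answer(["User prefers metric units."], "How tall is the Eiffel Tower?"))
```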
t1_o83ocu6 | I tried the 35b last week, so it probably was the older one. Used UD-q4_K_XL | 1 | 0 | 2026-03-01T19:13:48 | JsThiago5 | false | null | 0 | o83ocu6 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o83ocu6/ | false | 1 |
t1_o83obmv | full p
These are the basic ones; if a model scores 0 on these, like some do, it is not even at the level of any utility. I’ve tested various combinations and found this to be a good filter of generalized capabilities. | 1 | 0 | 2026-03-01T19:13:38 | Honest-Debate-6863 | false | null | 0 | o83obmv | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83obmv/ | false | 1 |
t1_o83oba6 | No, sorry. I didn’t have time to test different software. And I use it remotely over VPN, so I just run a llama-server.
I just installed a bunch of drivers and libs until it ran without errors. | 2 | 0 | 2026-03-01T19:13:35 | ProfessionalSpend589 | false | null | 0 | o83oba6 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o83oba6/ | false | 2 |
t1_o83oaea | This was probably asked in a different form, but is there a guide available for starting out with llama.cpp? Most guides I've seen are with ollama. Or is it as simple as running llama.cpp and then starting llama-server, then following similar steps to connect it to the server with continue.dev? I know you said not to us... | 1 | 0 | 2026-03-01T19:13:28 | MakutaArguilleres | false | null | 0 | o83oaea | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o83oaea/ | false | 1 |
t1_o83o8sn | | 1 | 0 | 2026-03-01T19:13:15 | Own-Potential-2308 | false | null | 0 | o83o8sn | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o83o8sn/ | false | 1 |
t1_o83o1oa | How did you get stable-diffusion.cpp working with the cuda image, since it's not packaged with llama-swap? | 1 | 0 | 2026-03-01T19:12:16 | blackhawk74 | false | null | 0 | o83o1oa | false | /r/LocalLLaMA/comments/1rhohqk/how_to_switch_qwen_35_thinking_onoff_without/o83o1oa/ | false | 1 |
t1_o83o01m | This one took a bit longer.
https://preview.redd.it/tfsv2czbghmg1.jpeg?width=4032&format=pjpg&auto=webp&s=f012d8a671add97cf216437ba820297846cb1682 | 2 | 0 | 2026-03-01T19:12:02 | ProfessionalSpend589 | false | null | 0 | o83o01m | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o83o01m/ | false | 2 |
t1_o83nzoj | I came across Open WebUI in my research and may end up needing a more polished middleware/tooling than the current minimalist approach I'm starting with. | 1 | 0 | 2026-03-01T19:11:59 | SteppenAxolotl | false | null | 0 | o83nzoj | false | /r/LocalLLaMA/comments/1rhj0l9/mcp_server_for_searxngnonapi_local_search/o83nzoj/ | false | 1 |
t1_o83nyti | Added it | 1 | 0 | 2026-03-01T19:11:52 | Honest-Debate-6863 | false | null | 0 | o83nyti | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83nyti/ | false | 1 |
t1_o83ntjw | Fair enough. In this case I'd still consider the firmware an "operating system" here since it has file systems and drivers, but I guess we're just nitpicking. This is a cool project! | 1 | 0 | 2026-03-01T19:11:09 | -dysangel- | false | null | 0 | o83ntjw | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o83ntjw/ | false | 1 |
t1_o83ntn4 | This gets you a star. | 2 | 0 | 2026-03-01T19:11:09 | crantob | false | null | 0 | o83ntn4 | false | /r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o83ntn4/ | false | 2 |
t1_o83noe8 | wow.. i find it totally usable at **24.58 t/s**, and thanks a lot for your update!
but just curious, did you notice any perf improvement on LM Studio, as it provides different runtimes (Vulkan, ROCm) and you can easily switch between them? | 1 | 0 | 2026-03-01T19:10:26 | Maleficent-Ad5999 | false | null | 0 | o83noe8 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o83noe8/ | false | 1 |
t1_o83nnw2 | GLM flash + Qwen 35 3.5 + Qwen 32 please. | 1 | 0 | 2026-03-01T19:10:22 | Long_comment_san | false | null | 0 | o83nnw2 | false | /r/LocalLLaMA/comments/1rhuvyc/benchmarking_88_smol_gguf_models_quickly_on_a/o83nnw2/ | false | 1 |
t1_o83nn1f | The verification loop is similar to ReAct in structure (extract -> check -> retry), but the check step isn't another LLM call, it's Z3 and sandbox execution. The key difference is that the routing uses MCTS rather than LLM-decided next steps, and we can prove workflow-level properties before execution via CTL model che... | 1 | 0 | 2026-03-01T19:10:15 | Sea-Succotash1547 | false | null | 0 | o83nn1f | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83nn1f/ | false | 1 |
t1_o83nmws | I think that is where we disagree. I don’t view it as creation as there was no novel thought that went into it. The same way I’m not posting my k3s sonarr and qbittorrent nor was I posting my calculus homework in college - yes, it took a lot of effort, but I’m not doing anything worth sharing, it’s all been done before... | 0 | 0 | 2026-03-01T19:10:14 | SMELLYCHEESE8 | false | null | 0 | o83nmws | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o83nmws/ | false | 0 |
t1_o83nedl | You can probably get good results out of the 35B q4 with CPU offloading. | 6 | 0 | 2026-03-01T19:09:06 | tarruda | false | null | 0 | o83nedl | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83nedl/ | false | 6 |
t1_o83n9dn | You spent $1500 to get something that kind of sort of auto-configured some network cameras... And you got 'lucky'... in that it's very easy for open claw to fuck up entirely. I've messed around with it too but let's be honest, it's mostly about seeing if it can do anything useful and being surprised when it does, and ... | 0 | 0 | 2026-03-01T19:08:25 | No-Complex6705 | false | null | 0 | o83n9dn | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o83n9dn/ | false | 0 |
t1_o83n70o | Switching to Q8_0 KV felt like cleaning my glasses — everything seemed fine until suddenly it was noticeably finer. Good PSA, this one gets quietly blamed on the model way too often. | 2 | 0 | 2026-03-01T19:08:06 | theagentledger | false | null | 0 | o83n70o | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o83n70o/ | false | 2 |
t1_o83n5vx | This makes no sense. You'll need to give it a tool for it to look up realtime data. | 5 | 0 | 2026-03-01T19:07:57 | chensium | false | null | 0 | o83n5vx | false | /r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83n5vx/ | false | 5 |
t1_o83mv9p | Also, I haven't seen SWA suggested anywhere. Do you have any reason to include it? | 1 | 0 | 2026-03-01T19:06:31 | Mir4can | false | null | 0 | o83mv9p | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83mv9p/ | false | 1 |
t1_o83msm1 | I'm on and off for both local and "frontier" models, getting enthusiastic about local models once in a while. I always go back to GPT-OSS 20b. It's the best model at that size I've tried. | 3 | 0 | 2026-03-01T19:06:10 | Abject-Kitchen3198 | false | null | 0 | o83msm1 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83msm1/ | false | 3 |
t1_o83mrxr | [deleted] | 1 | 0 | 2026-03-01T19:06:04 | [deleted] | true | null | 0 | o83mrxr | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83mrxr/ | false | 1 |
t1_o83mqkv | Qwen team is spoiling me so much. Can't handle this much dopamine. | 3 | 0 | 2026-03-01T19:05:53 | Right-Law1817 | false | null | 0 | o83mqkv | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83mqkv/ | false | 3 |
t1_o83mpxy | When an LLM extracts total cost = $1500 from text, it might hallucinate that number instead of actually calculating area * cost_per_sqft. Aura-State catches this by running the extracted values through Z3 (a theorem prover) which mathematically proves whether total == area * cost_per_sqft. If it doesn't, Z3 retur... | -1 | 0 | 2026-03-01T19:05:48 | Sea-Succotash1547 | false | null | 0 | o83mpxy | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83mpxy/ | false | -1 |
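A rough sketch of that kind of check, with invented numbers and variable names rather than Aura-State's real code:

```python
# Hedged sketch: refute a hallucinated total against the invariant
# total == area * cost_per_sqft using the Z3 Python bindings.
# pip install z3-solver
from z3 import Real, Solver, unsat

area, cost_per_sqft, total = Real("area"), Real("cost_per_sqft"), Real("total")

s = Solver()
s.add(area == 100, cost_per_sqft == 12)  # values extracted from the text (invented here)
s.add(total == 1500)                     # the hallucinated extracted total
s.add(total == area * cost_per_sqft)     # the invariant we require

# check() returns unsat because 100 * 12 = 1200 != 1500, so the extraction
# provably contradicts the formula and would be rejected and retried.
print("extraction rejected" if s.check() == unsat else "extraction verified")
```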
t1_o83mkax | vscode auto-completion | 1 | 0 | 2026-03-01T19:05:03 | Mashic | false | null | 0 | o83mkax | false | /r/LocalLLaMA/comments/1rhykhm/qwen_35_small_soon/o83mkax/ | false | 1 |
t1_o83mgk2 | Fixed, thanks for catching that. You're right that I used AI to help with the README write-up. Still, the algorithms are real implementations; the CTL verification uses pyModelChecking on actual Kripke structures, Z3 runs real SMT proofs on extracted data.
The docs/ALGORITHMS.md has the actual formulas and implem... | 0 | 0 | 2026-03-01T19:04:33 | Sea-Succotash1547 | false | null | 0 | o83mgk2 | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83mgk2/ | false | 0 |
t1_o83mffp | I believe not. I can confirm that nightly builds of vllm support it; I was able to run it this way. Qwen team states that nightly builds of SGLang should support it; although it absolutely refused to load the model in AWQ quant. | 2 | 0 | 2026-03-01T19:04:24 | No-Refrigerator-1672 | false | null | 0 | o83mffp | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83mffp/ | false | 2 |
t1_o83maut | I think Claude respects country laws. I am in France; ChatGPT wouldn’t care and would cite the DMCA against me. I told Claude about it and specified that I am in France, and besides telling me that it had no problem helping me reverse engineer something, it also told me that the fact that I live in France would have made co... | 3 | 0 | 2026-03-01T19:03:47 | folays | false | null | 0 | o83maut | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o83maut/ | false | 3 |
t1_o83m8pb | Yes | 1 | 0 | 2026-03-01T19:03:30 | mrpogiface | false | null | 0 | o83m8pb | false | /r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o83m8pb/ | false | 1 |
t1_o83m798 | I would recommend just starting with llama.cpp, but if you really want to try Ollama you can. I didn't have a lot of luck with continue.dev, lots of bugs and zero response to bug reports from the devs. I eventually abandoned it and went to RooCode and then OpenCode. I don't use it for autocomplete though, just code ... | 1 | 0 | 2026-03-01T19:03:19 | suicidaleggroll | false | null | 0 | o83m798 | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o83m798/ | false | 1 |
t1_o83m1j1 | Happy to hear that. | 2 | 0 | 2026-03-01T19:02:33 | Subject-Tea-5253 | false | null | 0 | o83m1j1 | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o83m1j1/ | false | 2 |
t1_o83ly4p | That's a really stupid thing to ask an llm | 21 | 0 | 2026-03-01T19:02:06 | Velocita84 | false | null | 0 | o83ly4p | false | /r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83ly4p/ | false | 21 |
t1_o83lux6 | What do you mean by "simulated quantization" here? | 1 | 0 | 2026-03-01T19:01:40 | ilintar | false | null | 0 | o83lux6 | false | /r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o83lux6/ | false | 1 |
t1_o83lokw | I'm trying to find more uses for local models. I'm a major fan. Anything text based I try, but sound, image, video, I'm not sure when I'll see that locally. | 6 | 0 | 2026-03-01T19:00:49 | ptear | false | null | 0 | o83lokw | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83lokw/ | false | 6 |
t1_o83lo8e | | 6 | 0 | 2026-03-01T19:00:46 | 7657786425658907653 | false | null | 0 | o83lo8e | false | /r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83lo8e/ | false | 6 |
t1_o83llkk | Ok, here's some benchmarks.
My VM I mentioned earlier
```
ollama run qwen3:14b --verbose "Write a 500 word introduction to AI"
CPU: Xeon E5-2697 v2, 16 threads in a VM
total duration: 18m10.737957383s
load duration: 53.694647417s
prompt eval count: 20 token(s)
prompt eval duration: 9.670528963s
prom... | 2 | 0 | 2026-03-01T19:00:25 | deenspaces | false | null | 0 | o83llkk | false | /r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o83llkk/ | false | 2 |
t1_o83lkz4 | infinite speed | 1 | 0 | 2026-03-01T19:00:20 | mouseofcatofschrodi | false | null | 0 | o83lkz4 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83lkz4/ | false | 1 |
t1_o83lijs | The Chinese here are on a roll. Local models will be the only thing working once the AI bubble pops. | 2 | 0 | 2026-03-01T19:00:01 | 05032-MendicantBias | false | null | 0 | o83lijs | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83lijs/ | false | 2 |
t1_o83lhrz | I get 53 t/s on my 5060Ti 16GB as well with these recommended settings and same ddr5 system ram | 1 | 0 | 2026-03-01T18:59:55 | No_War_8891 | false | null | 0 | o83lhrz | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o83lhrz/ | false | 1 |
t1_o83lg6i | it does, but LM Studio doesn't | 0 | 0 | 2026-03-01T18:59:42 | mouseofcatofschrodi | false | null | 0 | o83lg6i | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83lg6i/ | false | 0 |
t1_o83lez2 | I'll try, running with Qwen's recommended params for now | 2 | 0 | 2026-03-01T18:59:33 | ndiphilone | false | null | 0 | o83lez2 | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83lez2/ | false | 2 |
t1_o83l9pd | As I realized, 3.5 models are very sensitive to penalties. Try with the suggested repetition and presence penalties. It solved all of my looping problems. | 5 | 0 | 2026-03-01T18:58:51 | Mir4can | false | null | 0 | o83l9pd | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83l9pd/ | false | 5 |
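For reference, with an OpenAI-compatible local server those penalties are plain request fields. A minimal sketch, assuming a llama-server-style endpoint on localhost:8080, a hypothetical model name, and an illustrative penalty value (take the real suggested values from the model card):

```python
# Minimal sketch of setting a presence penalty on an OpenAI-compatible
# chat endpoint; values here are illustrative, not official recommendations.
import json
import urllib.request

payload = {
    "model": "qwen3.5-35b-a3b",                       # hypothetical model name
    "messages": [{"role": "user", "content": "Hi"}],
    "presence_penalty": 1.5,                          # discourages looping/repetition
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```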
t1_o83l8hj | Would you still recommend vLLM or Llama.cpp for Qwen 3.5, then? Thanks! | 2 | 0 | 2026-03-01T18:58:41 | anthonybustamante | false | null | 0 | o83l8hj | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83l8hj/ | false | 2 |
t1_o83l51x | uuh. didn't see that. just wanted to be friendly. that's why i am always asking for details. using ai is not a crime nowadays, most developers use it for good, but sometimes it's bad slop. | 2 | 0 | 2026-03-01T18:58:15 | Charming_Support726 | false | null | 0 | o83l51x | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83l51x/ | false | 2 |
t1_o83l1az | Yeah, all 3 I tested just couldn't be convinced. It's really funny. Once, I even gave all the logs, proof and even a streaming view of its own responses and it still went "but wait, I'm a Cloud Hosted model!" and ignored the whole thing :D | 3 | 0 | 2026-03-01T18:57:45 | Medium_Chemist_4032 | false | null | 0 | o83l1az | false | /r/LocalLLaMA/comments/1ri5la8/qwen_35_35b_a3b_is_convinced_that_its_running_in/o83l1az/ | false | 3 |
t1_o83kx5m | You can even put a 24GB 3090 on any PCIEx1 and get big wins. Consider it. | 1 | 0 | 2026-03-01T18:57:12 | crantob | false | null | 0 | o83kx5m | false | /r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o83kx5m/ | false | 1 |
t1_o83kpzr | for dense models: https://old.reddit.com/r/LocalLLaMA/comments/1ri1rit/running_qwen314b_93gb_on_a_cpuonly_kvm_vps_what/o82wms6/
for MoE models it's a bit different: you need to divide memory bandwidth by amount of active parameters multiplied by size in bits, i.e. if it is nnnB-A10B in 8 bits then divide memory bandwi... | 2 | 0 | 2026-03-01T18:56:16 | MelodicRecognition7 | false | null | 0 | o83kpzr | false | /r/LocalLLaMA/comments/1ri42ee/help_finding_best_for_my_specs/o83kpzr/ | false | 2 |
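A back-of-the-envelope version of that estimate, with invented numbers (note the quoted bit width has to be converted to bytes for the units to work out):

```python
# Back-of-the-envelope decode speed: each generated token streams every active
# parameter from memory once, so t/s is bounded by bandwidth / bytes-per-token.

def estimate_tps(bandwidth_gb_s: float, active_params_b: float, bits_per_param: float) -> float:
    bytes_per_token_gb = active_params_b * bits_per_param / 8  # GB read per token
    return bandwidth_gb_s / bytes_per_token_gb

# Invented example: an nnnB-A10B model at 8 bits reads ~10 GB per token,
# so 500 GB/s of memory bandwidth gives at most ~50 t/s.
print(estimate_tps(bandwidth_gb_s=500, active_params_b=10, bits_per_param=8))  # 50.0
```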
t1_o83kn28 | Will try it in earnest soon. | 1 | 0 | 2026-03-01T18:55:53 | wrk79 | false | null | 0 | o83kn28 | false | /r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/o83kn28/ | false | 1 |
t1_o83kj10 | It gets a solid 17 t/s with Qwen. I just wanted to know if something is wrong with my setup. | 1 | 0 | 2026-03-01T18:55:22 | wrk79 | false | null | 0 | o83kj10 | false | /r/LocalLLaMA/comments/1rhxaqw/question_about_devstral_small_2_24b_on_radeon_780m/o83kj10/ | false | 1 |
t1_o83kdtp | Agents will be the REGERT of mankind. | 1 | 0 | 2026-03-01T18:54:41 | crantob | false | null | 0 | o83kdtp | false | /r/LocalLLaMA/comments/1rhv06r/how_are_you_preventing_runaway_ai_agent_behavior/o83kdtp/ | false | 1 |
t1_o83jydy | There already is a non-upstream ANE driver written. It’ll take some time before they get to M4. They have just gotten M3 to the same state of usability as their initial alpha release. But it will take some more time to get a new and overhauled GPU driver for M3 and later. | 20 | 0 | 2026-03-01T18:52:38 | cAtloVeR9998 | false | null | 0 | o83jydy | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o83jydy/ | false | 20 |
t1_o83jwud | Interesting, though I think these are different failure modes. In my case it's specifically fp8_e4m3 KV cache quantization on SM120 that produces corrupt output, while bf16 KV on the same hardware with the same model produces correct output. The weights are FP8 in both cases. On a Tesla M40 with FP32 KV, it could be a dif... | 2 | 0 | 2026-03-01T18:52:26 | awwwyeah206 | false | null | 0 | o83jwud | false | /r/LocalLLaMA/comments/1rhmepa/qwen35122b_on_blackwell_sm120_fp8_kv_cache/o83jwud/ | false | 2 |
t1_o83jul2 | [removed] | 1 | 0 | 2026-03-01T18:52:08 | [deleted] | true | null | 0 | o83jul2 | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83jul2/ | false | 1 |
t1_o83jr14 | I'm always left disappointed. Tried the latest 30B MoE briefly and the "reasoning" takes forever, repeatedly checking same assumptions, sometimes ending in an endless loop. | 9 | 0 | 2026-03-01T18:51:40 | Abject-Kitchen3198 | false | null | 0 | o83jr14 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83jr14/ | false | 9 |
t1_o83jni1 | it **is** AI slop
## 📦 Installation
```bash
pip install git+https://github.com/YOUR_USERNAME/aura-state.git
```
> YOUR_USERNAME | 3 | 0 | 2026-03-01T18:51:12 | MelodicRecognition7 | false | null | 0 | o83jni1 | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83jni1/ | false | 3 |
t1_o83jjmr | At least someone else will be ;) | 1 | 0 | 2026-03-01T18:50:42 | ubrtnk | false | null | 0 | o83jjmr | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83jjmr/ | false | 1 |
t1_o83jhqp | This is with the updated GGUF | 7 | 0 | 2026-03-01T18:50:28 | ndiphilone | false | null | 0 | o83jhqp | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83jhqp/ | false | 7 |
t1_o83jf6v | I'm not speaking to the technical ingenuity of the person's AI-assisted creation, I'm speaking to your response to someone talking about something they created with AI that they're proud of, and your first response is to shit on them.
No different than shitting on someone because they chose to drive a longer route to ... | 1 | 0 | 2026-03-01T18:50:08 | TheBurkMeister | false | null | 0 | o83jf6v | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o83jf6v/ | false | 1 |
t1_o83jex8 | When did you download the model? I believe that Unsloth uploaded new versions of Qwen 3.5 35B to Huggingface on the 27th Feb with fixes for the looping issue.
https://unsloth.ai/docs/models/qwen3.5 | 5 | 0 | 2026-03-01T18:50:06 | rmhubbert | false | null | 0 | o83jex8 | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o83jex8/ | false | 5 |
t1_o83jdwa | So your recommendation is to start with Ollama and [continue.dev](http://continue.dev), then change over to llama.cpp once I've gotten the hang of it? | 1 | 0 | 2026-03-01T18:49:58 | MakutaArguilleres | false | null | 0 | o83jdwa | false | /r/LocalLLaMA/comments/1rhg2ir/trying_to_set_up_a_vscode_server_local_llm/o83jdwa/ | false | 1 |
t1_o83jdso | Agreed. I'll still keep that angle for sure but I can do that with a decently powered Mac Mini lol | 1 | 0 | 2026-03-01T18:49:57 | ubrtnk | false | null | 0 | o83jdso | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o83jdso/ | false | 1 |
t1_o83j6vi | Nope, not for Qwen3.5
"speculative decoding not supported by this context" | 2 | 0 | 2026-03-01T18:49:02 | xanduonc | false | null | 0 | o83j6vi | false | /r/LocalLLaMA/comments/1rhwo08/qwen35_small_dense_model_release_seems_imminent/o83j6vi/ | false | 2 |
t1_o83j6bk | Always interested in such stuff.
The idea might be good, but your repo reads like endless AI slop. Impossible to read and to follow.
Seems like you are chaining ReAct-Loops with dedicated criteria? | 2 | 0 | 2026-03-01T18:48:58 | Charming_Support726 | false | null | 0 | o83j6bk | false | /r/LocalLLaMA/comments/1ri51y0/p_aurastate_formally_verified_llm_state_machine/o83j6bk/ | false | 2 |
t1_o83j55i | >All of this was during Cold War era secrecy. Why do we have no innovation? Why did we go to the moon in the 60s and cannot seem to find any paradigm shifts in methodology to do it orders of magnitude more effectively?
There have been incredible innovations in the decades after the 60s though. Like, the entire personal co... | 2 | 0 | 2026-03-01T18:48:48 | huffalump1 | false | null | 0 | o83j55i | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o83j55i/ | false | 2 |
t1_o83j4qw | [https://brave-rollback-bhps.pagedrop.io](https://brave-rollback-bhps.pagedrop.io)
First try with Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf | 12 | 0 | 2026-03-01T18:48:45 | MustBeSomethingThere | false | null | 0 | o83j4qw | false | /r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o83j4qw/ | false | 12 |
t1_o83iznu | Again, maybe. You'd need to test, because I am not sure what the questions are, etc. I think the document review would be the best case because you don't care if it takes 1 hour to process documents. But if it would take 1 minute (I am not saying it will, but it could) to respond to a question for the AI receptionist on a... | 1 | 0 | 2026-03-01T18:48:05 | knownboyofno | false | null | 0 | o83iznu | false | /r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o83iznu/ | false | 1 |
t1_o83iwoy | This information has really helped a ton. I use a lot of different models and since updating with this information, I've seen an average of 25% increase in tokens/sec. Thank you so very much for this. | 2 | 0 | 2026-03-01T18:47:42 | ClintonKilldepstein | false | null | 0 | o83iwoy | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o83iwoy/ | false | 2 |
t1_o83iqmx | | 2 | 0 | 2026-03-01T18:46:54 | ProfessionalSpend589 | false | null | 0 | o83iqmx | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o83iqmx/ | false | 2 |
t1_o83iktt | [removed] | 1 | 0 | 2026-03-01T18:46:09 | [deleted] | true | null | 0 | o83iktt | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o83iktt/ | false | 1 |
t1_o83ijtu | LSTM is still king for smartphone AI features for text | -1 | 0 | 2026-03-01T18:46:02 | wektor420 | false | null | 0 | o83ijtu | false | /r/LocalLLaMA/comments/1ri0puh/honor_would_use_deepseek/o83ijtu/ | false | -1 |
t1_o83iidl | Would it be applicable to Qwen 3.5 35B-A3.5B? | 1 | 0 | 2026-03-01T18:45:50 | oxygen_addiction | false | null | 0 | o83iidl | false | /r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/o83iidl/ | false | 1 |
t1_o83ihrl | GT 730 4GB VRAM from 2014 | 9 | 0 | 2026-03-01T18:45:46 | DarkWolfX2244 | false | null | 0 | o83ihrl | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83ihrl/ | false | 9 |
t1_o83iazi | I'm cool with those here as long as there's evidence and we're not just upvoting hype posts here now.. uhh hmm. | 8 | 0 | 2026-03-01T18:44:53 | ptear | false | null | 0 | o83iazi | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83iazi/ | false | 8 |
t1_o83i1f1 | What would be the preferred model to fully utilize 96gb of vram? | 1 | 0 | 2026-03-01T18:43:40 | 55234ser812342423 | false | null | 0 | o83i1f1 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83i1f1/ | false | 1 |
t1_o83hx5s | Thou shalt not worship the math. | 18 | 0 | 2026-03-01T18:43:07 | Silver-Champion-4846 | false | null | 0 | o83hx5s | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o83hx5s/ | false | 18 |