name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o86tp7e | The community patches show very little KL divergence between the original and heretic models in Qwen 3.5. Is KL divergence a good enough metric for the heretic product to be good, or do we need to check other parameters (like PPL or BPW)? | 1 | 0 | 2026-03-02T06:41:41 | RickyRickC137 | false | null | 0 | o86tp7e | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86tp7e/ | false | 1 |
t1_o86tomb | Yes this actually seems correct (ie use BF16 KV cache), but OP's original premise is incorrect, since I'm unsure why it's related to our quants / Unsloth. | 29 | 0 | 2026-03-02T06:41:32 | danielhanchen | false | null | 0 | o86tomb | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86tomb/ | false | 29 |
t1_o86tlje | I'm glad it helps! I haven't tested non-unsloth models. Sadly, I also don't know anybody else that owns a similar setup or that is interested in local inference | 2 | 0 | 2026-03-02T06:40:47 | DarkTechnophile | false | null | 0 | o86tlje | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o86tlje/ | false | 2 |
t1_o86tlas | Ugh. Look at the actual attention kernel, that will be where the kv cache is actually consumed and you'll see what precision it needs / expects. | 2 | 0 | 2026-03-02T06:40:43 | dsanft | false | null | 0 | o86tlas | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86tlas/ | false | 2 |
t1_o86te79 | Use the GGUF version of GLM 4.6v Flash. It is a 9B model and a 5-6 bit quant would work pretty well for most tasks, and take up 5-5.5 GB of vRAM. | 2 | 0 | 2026-03-02T06:38:58 | Away-Albatross2113 | false | null | 0 | o86te79 | false | /r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86te79/ | false | 2 |
t1_o86tcsq | Hella wild to think about. I built my first local setup about a year ago, right after that DeepSeek moment. Was running a pretty modest rig, 2x3090s, and it felt like magic just getting anything usable at all. Now running Qwen3.5-35B-A3B on the same setup and it is straight up competitive with API mode... | 3 | 0 | 2026-03-02T06:38:38 | ElectricalOpinion639 | false | null | 0 | o86tcsq | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86tcsq/ | false | 3 |
t1_o86taiw | Doesn't matter, it's a MoE model, so it doesn't have to fit entirely within VRAM, only the active experts need to. | 1 | 0 | 2026-03-02T06:38:05 | Eternal_Ohm | false | null | 0 | o86taiw | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86taiw/ | false | 1 |
t1_o86t6ln | There's none at the moment. Only 122b available, which is good! | 3 | 0 | 2026-03-02T06:37:08 | RickyRickC137 | false | null | 0 | o86t6ln | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86t6ln/ | false | 3 |
t1_o86t5nu | ok | 0 | 0 | 2026-03-02T06:36:55 | jwpbe | false | null | 0 | o86t5nu | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86t5nu/ | false | 0 |
t1_o86t1az | Will also try changing the port. I left it at default and llama.cpp chose 8080. | 1 | 0 | 2026-03-02T06:35:52 | vharishankar | false | null | 0 | o86t1az | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86t1az/ | false | 1 |
t1_o86su2t | men are fragile emotionally and signal to each other like this when they don't have the vocabulary to express themselves. when you point this out they whine that it's 'just jokes' or something along those lines | -2 | 0 | 2026-03-02T06:34:07 | jwpbe | false | null | 0 | o86su2t | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86su2t/ | false | -2 |
t1_o86sp90 | You're weird. | 5 | 0 | 2026-03-02T06:32:58 | IamNetworkNinja | false | null | 0 | o86sp90 | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86sp90/ | false | 5 |
t1_o86slsu | Will give it a shot | 0 | 0 | 2026-03-02T06:32:06 | vpyno | false | null | 0 | o86slsu | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86slsu/ | false | 0 |
t1_o86sl45 | Thanks. I added --jinja and --chat-template because searching for this issue led me to it. I was not even sure if I entered the parameter correctly. | 1 | 0 | 2026-03-02T06:31:56 | vharishankar | false | null | 0 | o86sl45 | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86sl45/ | false | 1 |
t1_o86siuw | | 3 | 0 | 2026-03-02T06:31:21 | waescher | false | null | 0 | o86siuw | false | /r/LocalLLaMA/comments/1rij4sj/what_is_the_most_ridiculously_good_goto_llm_for/o86siuw/ | false | 3 |
t1_o86si3t | Yeah, sad to not have many models in this size.
There's longcat flash lite, but 2 months later, still not supported by llama.cpp yet :/ | 1 | 0 | 2026-03-02T06:31:10 | mr_zerolith | false | null | 0 | o86si3t | false | /r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o86si3t/ | false | 1 |
t1_o86sdwi | [https://huggingface.co/datasets/jaymunshi/open-swara/blob/main/ATTRIBUTION.md](https://huggingface.co/datasets/jaymunshi/open-swara/blob/main/ATTRIBUTION.md) it's already there. | 3 | 0 | 2026-03-02T06:30:10 | Tasty-Ad-5172 | false | null | 0 | o86sdwi | false | /r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/o86sdwi/ | false | 3 |
t1_o86sdw5 | Thanks a lot. I was getting confused by a lot of conflicting information when I searched the web. On top of that, AI-generated answers on this topic are fairly outdated (ironic). | 2 | 0 | 2026-03-02T06:30:09 | vharishankar | false | null | 0 | o86sdw5 | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86sdw5/ | false | 2 |
t1_o86s9jb | I recently saw multiple people reporting issues with f16 cache in Qwen3.5 models, while confirming that bf16 works fine; one of the most detailed reports that I saw so far, with multiple cache quantizations tested, was this one: [https://www.reddit.com/r/LocalLLaMA/comments/1rii2pd/comment/o865qxw/](https://www.reddit.co... | 27 | 0 | 2026-03-02T06:29:06 | Lissanro | false | null | 0 | o86s9jb | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86s9jb/ | false | 27 |
t1_o86s867 | "cringe alt right slang" 😂some people man | 3 | 0 | 2026-03-02T06:28:46 | Odd-Ordinary-5922 | false | null | 0 | o86s867 | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86s867/ | false | 3 |
t1_o86s5tl | Good Point. Will do. | 3 | 0 | 2026-03-02T06:28:13 | Tasty-Ad-5172 | false | null | 0 | o86s5tl | false | /r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/o86s5tl/ | false | 3 |
t1_o86s4pu | My weird clarification is that their CoT and "default personality" for the job would be totally different. | 1 | 0 | 2026-03-02T06:27:56 | TomLucidor | false | null | 0 | o86s4pu | false | /r/LocalLLaMA/comments/1rik3ge/what_is_the_personality_of_a_chinese_llm_when/o86s4pu/ | false | 1 |
t1_o86s449 | this isn't true at all. you seem to have no idea how OSes work.
an OS is not „slow“. it might be „heavy“ but there is a very negligible penalty in speed.
when it comes to speed you are faster when you use the ultra-optimized libraries of clever people or the manufacturers' drivers. that's why those exist.
as y... | 1 | 0 | 2026-03-02T06:27:48 | howardhus | false | null | 0 | o86s449 | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o86s449/ | false | 1 |
t1_o86s12z | for me that only happens at like 60k context | 1 | 0 | 2026-03-02T06:27:04 | Odd-Ordinary-5922 | false | null | 0 | o86s12z | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86s12z/ | false | 1 |
t1_o86rzlm | At least put this on huggingface with attributions where these voices came from. | 8 | 0 | 2026-03-02T06:26:42 | Kahvana | false | null | 0 | o86rzlm | false | /r/LocalLLaMA/comments/1riiwtp/open_swara_4065_humanized_voice_samples_across_44/o86rzlm/ | false | 8 |
t1_o86rv3s | mogception! | 5 | 0 | 2026-03-02T06:25:36 | Rheumi | false | null | 0 | o86rv3s | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86rv3s/ | false | 5 |
t1_o86rt3c | Could you do the 122B variant using the same method? | 0 | 0 | 2026-03-02T06:25:07 | theinvisibleman_ | false | null | 0 | o86rt3c | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86rt3c/ | false | 0 |
t1_o86rsrv | On 3 Mi50 16gb I'm getting about 25 tk/sec at 6000 tokens with Q8 with the 35B and 10 tk/sec at about the same prompt size for 27B. | 5 | 0 | 2026-03-02T06:25:03 | MotokoAGI | false | null | 0 | o86rsrv | false | /r/LocalLLaMA/comments/1rikb4w/qwen_35_amd_mi50_32gb_benchmarks/o86rsrv/ | false | 5 |
t1_o86rrc6 | I’m just taking a guess here and don’t hesitate to stop me if I’m reaching, but it might be because the image in OP is entirely centered around it. | 5 | 0 | 2026-03-02T06:24:42 | redditscraperbot2 | false | null | 0 | o86rrc6 | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86rrc6/ | false | 5 |
t1_o86rqxf | <thinking>
The user made a profound observation that nobody cares.
But wait - they might mean that nobody is NOT a single body. That would mean that they ACTUALLY want me to understand that everybody cares.
Let me think about this...
Actually wait - the user is frustrated that I'm not listening to them. It's clear... | 17 | 0 | 2026-03-02T06:24:36 | brownman19 | false | null | 0 | o86rqxf | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86rqxf/ | false | 17 |
t1_o86rnth | The model might be hiking with different shoes where the terrain at (bf16) makes for excellent grip to get out of holes and not slip into one prematurely. | -11 | 0 | 2026-03-02T06:23:52 | ThisWillPass | false | null | 0 | o86rnth | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86rnth/ | false | -11 |
t1_o86rnrn | Then you must have improperly set context size in OpenCode, because OpenCode is doing compaction of the context. Just truncating is horrible, unless we speak of a Waifu. | 1 | 0 | 2026-03-02T06:23:51 | debackerl | false | null | 0 | o86rnrn | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o86rnrn/ | false | 1 |
t1_o86rm85 | it’s still too slow for me. | 1 | 0 | 2026-03-02T06:23:28 | bobaburger | false | null | 0 | o86rm85 | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86rm85/ | false | 1 |
t1_o86rioj | Or you could just use llama-vscode
[https://marketplace.visualstudio.com/items?itemName=ggml-org.llama-vscode](https://marketplace.visualstudio.com/items?itemName=ggml-org.llama-vscode)
[https://github.com/ggml-org/llama.vscode?tab=readme-ov-file](https://github.com/ggml-org/llama.vscode?tab=readme-ov-file) | 1 | 0 | 2026-03-02T06:22:38 | MotokoAGI | false | null | 0 | o86rioj | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86rioj/ | false | 1 |
t1_o86ric2 | The evidence here is pretty weak. The f32 result matching f16 identically is actually a pretty damning result, paradoxically. f32 is a strict superset of both f16 and bf16’s representable values. If f16’s narrower dynamic range were genuinely misrepresenting attention values that bf16 handles correctly, f32 should mat... | 42 | 0 | 2026-03-02T06:22:33 | claythearc | false | null | 0 | o86ric2 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86ric2/ | false | 42 |
t1_o86rfrz | mogged op but then got framemogged by losing aura | 19 | 0 | 2026-03-02T06:21:57 | Odd-Ordinary-5922 | false | null | 0 | o86rfrz | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86rfrz/ | false | 19 |
t1_o86rdeo | that i’m not sure about. i haven’t used ollama for quite a long time. llama.cpp was easier to tune for me | 6 | 0 | 2026-03-02T06:21:23 | bobaburger | false | null | 0 | o86rdeo | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86rdeo/ | false | 6 |
t1_o86rbab | Give me a recipe for a nice Friday meal with my friends. | 1 | 0 | 2026-03-02T06:20:54 | Total_Activity_7550 | false | null | 0 | o86rbab | false | /r/LocalLLaMA/comments/1rfc7d3/ollama_dons_support_qwen3535b_yet/o86rbab/ | false | 1 |
t1_o86r93s | You didn’t hold frame. | 4 | 0 | 2026-03-02T06:20:23 | redditscraperbot2 | false | null | 0 | o86r93s | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86r93s/ | false | 4 |
t1_o86r7zz | Why is this cringe alt-right slang making its way into the space? Are you guys just trolling or do you really watch clav | -5 | 0 | 2026-03-02T06:20:08 | cool_fox | false | null | 0 | o86r7zz | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86r7zz/ | false | -5 |
t1_o86r6w5 | It speaks to models adapting to the precision they’re trained on. Those rounding errors are the noise it needs to inference comfortably. Just spit balling. | -3 | 0 | 2026-03-02T06:19:51 | ThisWillPass | false | null | 0 | o86r6w5 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86r6w5/ | false | -3 |
t1_o86r166 | Here’s the problem, you just spiked your cortisol. | 16 | 0 | 2026-03-02T06:18:30 | DryWeb3875 | false | null | 0 | o86r166 | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86r166/ | false | 16 |
t1_o86r0og | apiBase should be like ```http://127.0.0.1:5000/v1/``` with 5000 being the port llama-server is listening on.
Not sure your 8080 port is a good choice, as it may interfere with a web server running on the same machine. | 1 | 0 | 2026-03-02T06:18:23 | ali0une | false | null | 0 | o86r0og | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86r0og/ | false | 1 |
t1_o86qscu | [deleted] | 1 | 0 | 2026-03-02T06:16:26 | [deleted] | true | null | 0 | o86qscu | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o86qscu/ | false | 1 |
t1_o86qfzu | i'm not participating in a misogynist dick waving contest | -26 | 0 | 2026-03-02T06:13:28 | jwpbe | false | null | 0 | o86qfzu | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86qfzu/ | false | -26 |
t1_o86qe9n | First, test and see what is actually the output form llama-server, with something like this:
```
curl --request POST \
--url http://localhost:8080/completion \
--header "Content-Type: application/json" \
--data '{"prompt": "Building a website can be done in 10 simple steps:","n_predict": 128}'
```
Second,... | 1 | 0 | 2026-03-02T06:13:03 | Ill-Fishing-1451 | false | null | 0 | o86qe9n | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86qe9n/ | false | 1 |
t1_o86qbgk | Daniel just replied in a comment, doesn't seem to be an issue or affect output: [https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/](https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/) | 2 | 0 | 2026-03-02T06:12:23 | yoracale | false | null | 0 | o86qbgk | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86qbgk/ | false | 2 |
t1_o86q9df | You’ve probably got a config mismatch more than a llama.cpp issue.
A few things jump out:
* `api-base` should be `apiBase`
* `tabAutocompleteModel` is old config style and shouldn’t be set up like that in the current YAML
* your config looks duplicated / malformed at the end, which can break parsing
* if your server ... | 1 | 0 | 2026-03-02T06:11:54 | tallen0913 | false | null | 0 | o86q9df | false | /r/LocalLLaMA/comments/1rikga6/running_vs_code_continue_and_llamacpp_in/o86q9df/ | false | 1 |
t1_o86q6b3 | Daniel just replied in a comment, doesn't seem to be an issue: [https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/](https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/) | 4 | 0 | 2026-03-02T06:11:11 | yoracale | false | null | 0 | o86q6b3 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86q6b3/ | false | 4 |
t1_o86q44d | Daniel just replied, doesn't seem to be an issue: [https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/](https://www.reddit.com/r/LocalLLaMA/comments/1rik253/comment/o86ooix/) | 4 | 0 | 2026-03-02T06:10:41 | yoracale | false | null | 0 | o86q44d | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86q44d/ | false | 4 |
t1_o86q1l3 | The domain specific eval is so much more valuable than public benchmarks. I built a 150-question eval set for my use case using Confident AI's dataset tools, and the model rankings were completely different from what the leaderboards suggested. Saved me from deploying a model that looked great on paper but sucked for m... | 1 | 0 | 2026-03-02T06:10:06 | Parking-Concern9575 | false | null | 0 | o86q1l3 | false | /r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o86q1l3/ | false | 1 |
t1_o86q032 | My quadro K4200 is going to be sweating | 2 | 0 | 2026-03-02T06:09:46 | NotTJButCJ | false | null | 0 | o86q032 | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86q032/ | false | 2 |
t1_o86puhd | [removed] | 1 | 0 | 2026-03-02T06:08:27 | [deleted] | true | null | 0 | o86puhd | false | /r/LocalLLaMA/comments/1ri14x0/has_anyone_built_a_proper_eval_pipeline_for_local/o86puhd/ | false | 1 |
t1_o86ps4m | Oh-my-pi | 1 | 0 | 2026-03-02T06:07:55 | Queasy_Asparagus69 | false | null | 0 | o86ps4m | false | /r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o86ps4m/ | false | 1 |
t1_o86pryl | [removed] | 1 | 0 | 2026-03-02T06:07:53 | [deleted] | true | null | 0 | o86pryl | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o86pryl/ | false | 1 |
t1_o86pldg | Why not use a 3-bit model? | 1 | 0 | 2026-03-02T06:06:21 | moahmo88 | false | null | 0 | o86pldg | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86pldg/ | false | 1 |
t1_o86phka | Casual Flex | 1 | 0 | 2026-03-02T06:05:28 | Model_Mafia | false | null | 0 | o86phka | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o86phka/ | false | 1 |
t1_o86pgi1 | Well, cloud translator tools like tori or manju are way more convenient | 1 | 0 | 2026-03-02T06:05:13 | Specialist-Yak4023 | false | null | 0 | o86pgi1 | false | /r/LocalLLaMA/comments/1p77tf2/comic_manga_translation/o86pgi1/ | false | 1 |
t1_o86pd9g | The DeepSeek moment fundamentally shifted the conversation from 'only big labs can do this' to 'efficient architecture matters more than raw compute.' In 13 months we've gone from thinking you need billions in GPU clusters to people running competitive models on a single high-end consumer GPU. The democratization of AI... | 1 | 0 | 2026-03-02T06:04:27 | Soft-Analyst-9452 | false | null | 0 | o86pd9g | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86pd9g/ | false | 1 |
t1_o86pa5j | 100+ t/s decode on a 27B dense model is wild. A year ago we were celebrating getting 20 t/s on models half that size. The combination of better quantization, optimized inference engines, and actual hardware improvements is compounding faster than most people expected. At this rate, local models running at production-qu... | 1 | 0 | 2026-03-02T06:03:44 | Soft-Analyst-9452 | false | null | 0 | o86pa5j | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86pa5j/ | false | 1 |
t1_o86p9kq | but.. isn't that just within measurement error/range of uncertainty? (note the +/- 0.04170)
PPL = 6.5497 +/- 0.04170 | 64 | 0 | 2026-03-02T06:03:35 | bfroemel | false | null | 0 | o86p9kq | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86p9kq/ | false | 64 |
t1_o86p1ss | for text it works for me if I paste a json like this into th input window
{
"type": "text",
"source\_lang\_code": "cs",
"target\_lang\_code": "de-DE",
"text": "V nejhorším případě i k prasknutí čočky.",
} | 1 | 0 | 2026-03-02T06:01:46 | DocWolle | false | null | 0 | o86p1ss | false | /r/LocalLLaMA/comments/1rhvlfp/llamacpptranslategemma_how_to_translate_text_from/o86p1ss/ | false | 1 |
t1_o86p11z | Will any of them work reliably with ollama in Docker? I'm on 0.17.4 and nothing works. | -2 | 1 | 2026-03-02T06:01:35 | captainrv | false | null | 0 | o86p11z | false | /r/LocalLLaMA/comments/1riir6o/lots_of_new_qwen35_27b_imaxtrix_quants_from/o86p11z/ | false | -2 |
t1_o86p0rg | You just brvtally mogged op | 22 | 0 | 2026-03-02T06:01:31 | redditscraperbot2 | false | null | 0 | o86p0rg | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86p0rg/ | false | 22 |
t1_o86p0da | Uhm, I'm not an expert in those benchmarks specifically, but a statistician would say that it doesn't prove anything if the two means are within each other's standard deviations. You have a 68% chance that the real PPL is within +/- 1 standard deviation if the results are normally distributed.
If the improvement was du... | 25 | 0 | 2026-03-02T06:01:26 | debackerl | false | null | 0 | o86p0da | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86p0da/ | false | 25 |
t1_o86opxj | nobody cares | 27 | 0 | 2026-03-02T05:59:02 | jwpbe | false | null | 0 | o86opxj | false | /r/LocalLLaMA/comments/1rikvi8/no_way/o86opxj/ | false | 27 |
t1_o86ooix | Nice investigation! BF16 or FP16 might make a difference as shown in your tests, but note:
1. The baseline logits at https://huggingface.co/unsloth/Qwen3.5-35B-A3B-Experiments-GGUF are computed with `--batch-size 16384 --ubatch-size 16384` and ctx-size 512 (comparable to bartowski, AesSedai, Ubergarm etc). We also use... | 108 | 0 | 2026-03-02T05:58:42 | danielhanchen | false | null | 0 | o86ooix | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86ooix/ | false | 108 |
t1_o86on6h | Qwen3-Coder-Next in Q6_K_XL is already very capable if you’re prompting it in English and don’t use niche programming languages like Visual Basic or COBOL.
You also don’t need 600GB RAM for this. An AMD Threadripper 9965WX with 128GB DDR5 RAM is fine. If you want to save some money because of the RAM crisis, go for no... | 1 | 0 | 2026-03-02T05:58:23 | Hurricane31337 | false | null | 0 | o86on6h | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o86on6h/ | false | 1 |
t1_o86om1t | Can you ELI5? The numbers you posted show an improvement (-0.0014) that's lower than the test's error margin (± 0.04170). If this measurement is the only datapoint you're working with then you're basically tracking noise.
Llama.cpp defaults to f16 because bf16 performance varies among supported platforms, and f16 is ... | 93 | 0 | 2026-03-02T05:58:07 | 666666thats6sixes | false | null | 0 | o86om1t | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86om1t/ | false | 93 |
t1_o86okat | [removed] | 1 | 0 | 2026-03-02T05:57:43 | [deleted] | true | null | 0 | o86okat | false | /r/LocalLLaMA/comments/1riic5m/running_llamaserver_as_a_persistent_systemd/o86okat/ | false | 1 |
t1_o86ok9h | Only know some paid tools like manju.pro | 1 | 0 | 2026-03-02T05:57:42 | Specialist-Yak4023 | false | null | 0 | o86ok9h | false | /r/LocalLLaMA/comments/1j22nt0/tool_for_manga_translation/o86ok9h/ | false | 1 |
t1_o86ohdv | given the state of hr these days, you never know ... | 1 | 0 | 2026-03-02T05:57:03 | Frizzoux | false | null | 0 | o86ohdv | false | /r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o86ohdv/ | false | 1 |
t1_o86oe02 | Why would perplexity with fp32 be higher than bf16 ? | 4 | 0 | 2026-03-02T05:56:16 | MammayKaiseHain | false | null | 0 | o86oe02 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86oe02/ | false | 4 |
t1_o86odov | I will be messaging you in 2 days on [**2026-03-04 05:55:14 UTC**](http://www.wolframalpha.com/input/?i=2026-03-04%2005:55:14%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o86o9ky/?context=3)
[**CLICK THIS LINK**... | 1 | 0 | 2026-03-02T05:56:12 | RemindMeBot | false | null | 0 | o86odov | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o86odov/ | false | 1 |
t1_o86o9ky | RemindMe! 2 days | 1 | 0 | 2026-03-02T05:55:14 | IrisColt | false | null | 0 | o86o9ky | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o86o9ky/ | false | 1 |
t1_o86o21b | This is a real eye-opener, thanks!!! | 1 | 0 | 2026-03-02T05:53:29 | IrisColt | false | null | 0 | o86o21b | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o86o21b/ | false | 1 |
t1_o86o1z5 | f32 should work (at a cost) for systems without native bf16 right? | 7 | 0 | 2026-03-02T05:53:28 | gofiend | false | null | 0 | o86o1z5 | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86o1z5/ | false | 7 |
t1_o86o0vk | [deleted] | 1 | 0 | 2026-03-02T05:53:13 | [deleted] | true | null | 0 | o86o0vk | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86o0vk/ | false | 1 |
t1_o86nwvw | old gpu does not support bf16 acceleration | 8 | 0 | 2026-03-02T05:52:18 | Conscious_Chef_3233 | false | null | 0 | o86nwvw | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86nwvw/ | false | 8 |
t1_o86nr9b | \- Write a persona for the coding agent (CLAUDE.md, SOUL.md, or for whatever coding harness you're using)
\- Give it a scene to operate in, office space, your bedroom, basement, sex dungeon or whatever
\- Talk to it.
\- Reward it if it did a good job, i.e. "I touch your pp. Good work, buddy. Want some more? Then ... | 1 | 0 | 2026-03-02T05:51:01 | gripntear | false | null | 0 | o86nr9b | false | /r/LocalLLaMA/comments/1riboy2/learnt_about_emergent_intention_maybe_prompt/o86nr9b/ | false | 1 |
t1_o86np4j | > quantized KV cache forces a lot of the dequantization overhead onto the CPU
Wouldn't dequantizing undo the point of quantizing? Genuinely asking. | 0 | 0 | 2026-03-02T05:50:32 | IrisColt | false | null | 0 | o86np4j | false | /r/LocalLLaMA/comments/1rhvi09/psa_if_your_local_coding_agent_feels_dumb_at_30k/o86np4j/ | false | 0 |
t1_o86nlmd | livin the dream | 1 | 0 | 2026-03-02T05:49:43 | ab2377 | false | null | 0 | o86nlmd | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86nlmd/ | false | 1 |
t1_o86njpf | A GPU for these models? They'd run on my phone... | 2 | 0 | 2026-03-02T05:49:17 | droptableadventures | false | null | 0 | o86njpf | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86njpf/ | false | 2 |
t1_o86nfl9 | For me manju.pro is the best | 1 | 0 | 2026-03-02T05:48:19 | Specialist-Yak4023 | false | null | 0 | o86nfl9 | false | /r/LocalLLaMA/comments/1nm8bvz/automated_high_quality_manga_translations/o86nfl9/ | false | 1 |
t1_o86n3k0 | This. | 0 | 0 | 2026-03-02T05:45:35 | IrisColt | false | null | 0 | o86n3k0 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o86n3k0/ | false | 0 |
t1_o86n2h2 | any advice on which of the new Qwen models is best suited to my 8gb 3060ti, 32gb ddr4? | 1 | 0 | 2026-03-02T05:45:21 | NonStopArseGas | false | null | 0 | o86n2h2 | false | /r/LocalLLaMA/comments/1rifxfe/whats_the_best_local_model_i_can_run_with_16_gb/o86n2h2/ | false | 1 |
t1_o86n1je | Is that much of a surprise?
Graphics cards are neat, but GPGPU isn't particularly fast or efficient. They're just cheap and versatile. Dedicated hardware has traditionally been a bit faster on that front (Google's Tensor Processors, the various NPUs, etc.). Similar to how cryptocurrency mining moved from GPUs to dedic... | 3 | 0 | 2026-03-02T05:45:09 | techno156 | false | null | 0 | o86n1je | false | /r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o86n1je/ | false | 3 |
t1_o86n1it | thanks | 2 | 0 | 2026-03-02T05:45:08 | gondoravenis | false | null | 0 | o86n1it | false | /r/LocalLLaMA/comments/1rij9k1/whats_the_best_local_model_i_can_run_with_8gb/o86n1it/ | false | 2 |
t1_o86n01l | Hi, I am using an NVIDIA RTX 5000 series GPU for my story narration projects. I also use multiple software applications. An RTX 3000 series GPU would also be a good laptop option to begin with. | 2 | 0 | 2026-03-02T05:44:49 | archadigi | false | null | 0 | o86n01l | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o86n01l/ | false | 2 |
t1_o86mv74 | To be fair, that image is not obviously a snow angel. It can be interpreted as a snow angel, but it can just as easily be interpreted differently. It's kind of like seeing shapes in clouds.
You might try asking your LLM to give you 5 or 10 possible shapes the snow resembles. That might get you better results. | 1 | 0 | 2026-03-02T05:43:44 | chensium | false | null | 0 | o86mv74 | false | /r/LocalLLaMA/comments/1ribhpg/help_me_understand_why_a_certain_image_is/o86mv74/ | false | 1 |
t1_o86mtr4 | RTX 6000 96GB costs 8K, so 600 GB RAM should cost 48k right? The rest of the system shouldn't cost more than 10k I believe? Please help me understand this | 1 | 0 | 2026-03-02T05:43:24 | pauljeba | false | null | 0 | o86mtr4 | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o86mtr4/ | false | 1 |
t1_o86mtrk | There is a solution: truncate middle; then it never reprocesses the system prompt. Or here’s a trick I’ve learned: make the evaluation batch 4096, which makes it read faster | 1 | 0 | 2026-03-02T05:43:24 | Savantskie1 | false | null | 0 | o86mtrk | false | /r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o86mtrk/ | false | 1 |
t1_o86mr1k | None of the small models are MoE? | 2 | 0 | 2026-03-02T05:42:47 | vulkany | false | null | 0 | o86mr1k | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o86mr1k/ | false | 2 |
t1_o86moa1 | RTX 6000 96GB costs 8K, so 600 GB RAM should cost 48k right? The rest of the system shouldn't cost more than 10k I believe? Please help me understand this | 1 | 0 | 2026-03-02T05:42:10 | pauljeba | false | null | 0 | o86moa1 | false | /r/LocalLLaMA/comments/1rbvbzt/best_opensource_coder_model_for_replacing_claude/o86moa1/ | false | 1 |
t1_o86mo86 | I just want to say: I agree with your assessment.
I hope the walls of text were AI generated, because they don't make much sense. One would hope that a therapist is involved somewhere. | 1 | 0 | 2026-03-02T05:42:10 | _bones__ | false | null | 0 | o86mo86 | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o86mo86/ | false | 1 |
t1_o86mmma | Does the code in the example video... work at all? Genuinely asking. | 1 | 0 | 2026-03-02T05:41:49 | IrisColt | false | null | 0 | o86mmma | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o86mmma/ | false | 1 |
t1_o86m9sm | Perfect. I have that one currently 50% of the way through loading into the GPU.
What kind of output tok/s are you getting? | 2 | 0 | 2026-03-02T05:38:57 | Laabc123 | false | null | 0 | o86m9sm | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o86m9sm/ | false | 2 |
t1_o86lzbd | the corpus curation is what jumps out -- 786K passages from 123 Church Fathers is serious data archaeology. did you do any deduplication across Russian translations? patristic texts get re-translated a lot and near-duplicate passages in CPT can skew the model toward certain stylistic registers.
also curious about toke... | 3 | 0 | 2026-03-02T05:36:36 | BP041 | false | null | 0 | o86lzbd | false | /r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o86lzbd/ | false | 3 |
t1_o86lnrs | a) Not a good idea.
There are problem classes where AI simply cannot function well. Communication is one of those areas where there is an uncanny valley between the words, and the actual meaning. AI runs into loops, can't circumlocute, can't halt, and there are aspects of communication where if you distort reflected a... | 1 | 0 | 2026-03-02T05:34:03 | MostlyVerdant-101 | false | null | 0 | o86lnrs | false | /r/LocalLLaMA/comments/1ri0k7b/hardware_advice_llama_for_small_firm_intake/o86lnrs/ | false | 1 |
t1_o86llbi | Can something like this affect qwen3-coder-next? 🤔 | 5 | 0 | 2026-03-02T05:33:30 | oginome | false | null | 0 | o86llbi | false | /r/LocalLLaMA/comments/1rik253/psa_qwen_35_requires_bf16_kv_cache_not_f16/o86llbi/ | false | 5 |
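A practical note on the bf16-vs-f16 KV cache thread above: llama.cpp selects KV cache precision with the `--cache-type-k` / `--cache-type-v` flags (short forms `-ctk` / `-ctv`), which default to f16. Below is a minimal sketch of an A/B comparison between the two cache types; the model filename is a placeholder, and this assumes a build and GPU with bf16 support (several comments above note that older GPUs lack bf16 acceleration).

```
# Run 1: llama.cpp's default f16 KV cache
llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --ctx-size 32768 \
  --cache-type-k f16 --cache-type-v f16 \
  --port 8080

# Run 2: bf16 KV cache, as recommended in the PSA thread
llama-server -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  --ctx-size 32768 \
  --cache-type-k bf16 --cache-type-v bf16 \
  --port 8081
```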
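For the PPL figures being debated above (6.5497 +/- 0.0417, with a ~0.0014 difference between cache types): such numbers come from llama.cpp's perplexity tool, and the +/- term is the standard error that the statistics-minded replies are pointing at. A hedged sketch of reproducing that comparison with the batch and context settings Daniel mentions; model and corpus paths are placeholders.

```
# Perplexity with the default f16 KV cache
llama-perplexity -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -f wiki.test.raw --ctx-size 512 \
  --batch-size 16384 --ubatch-size 16384 \
  --cache-type-k f16 --cache-type-v f16

# Same corpus with a bf16 KV cache; compare the "Final estimate: PPL = X +/- E"
# lines. A mean difference (~0.0014) much smaller than the error term (~0.0417)
# is indistinguishable from noise, as several comments above argue.
llama-perplexity -m ./Qwen3.5-35B-A3B-Q4_K_M.gguf \
  -f wiki.test.raw --ctx-size 512 \
  --batch-size 16384 --ubatch-size 16384 \
  --cache-type-k bf16 --cache-type-v bf16
```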
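On the VS Code Continue + llama-server thread: the corrected `apiBase` needs the `//` after the scheme (e.g. `http://127.0.0.1:8080/v1/` for the default port), and Continue talks to llama-server's OpenAI-compatible route rather than the bare `/completion` endpoint tested in the curl example above. A minimal sanity check of that route, assuming llama-server is running on its default port:

```
# Verify the OpenAI-compatible endpoint that Continue's apiBase points at
curl --request POST \
  --url http://127.0.0.1:8080/v1/chat/completions \
  --header "Content-Type: application/json" \
  --data '{
    "messages": [{"role": "user", "content": "Reply with one word."}],
    "max_tokens": 8
  }'
```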