name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o85hfur
Awesome, I'll definitely clone the UI and play around with it. Thanks for being such a great sport, brother! 🍻
1
0
2026-03-02T01:06:00
Thin-Effect-3926
false
null
0
o85hfur
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85hfur/
false
1
t1_o85h47c
Lol
1
0
2026-03-02T01:04:03
MauiMoisture
false
null
0
o85h47c
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o85h47c/
false
1
t1_o85h48j
What quant?
1
0
2026-03-02T01:04:03
SmChocolateBunnies
false
null
0
o85h48j
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85h48j/
false
1
t1_o85h0w2
That is fantastic. I need to try vLLM. I only have one 3090 though, so I don't think I could actually run that quant.
1
0
2026-03-02T01:03:30
Spectrum1523
false
null
0
o85h0w2
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85h0w2/
false
1
t1_o85gzos
Well, applying quant-dequant before matmul
1
0
2026-03-02T01:03:17
Grand-Stranger-2923
false
null
0
o85gzos
false
/r/LocalLLaMA/comments/1rhy5o2/quantised_matrix_multiplication/o85gzos/
false
1
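A minimal sketch of the quant-dequant-before-matmul idea mentioned above, using NumPy: weights are rounded to int8 with a single scale, dequantized back to float, and only then multiplied. The shapes and the per-tensor scaling scheme are illustrative assumptions, not anyone's actual kernel.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    # Symmetric per-tensor quantization: one scale for the whole matrix.
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64)).astype(np.float32)
x = rng.standard_normal((1, 64)).astype(np.float32)

q, scale = quantize_int8(w)
y_quant = x @ dequantize(q, scale)  # quant-dequant path
y_full = x @ w                      # full-precision reference
print("max abs error:", np.abs(y_quant - y_full).max())
```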
t1_o85gz5w
Got me! 🤣 I wrote an explainer post about LLM usage and about why and how I do it. You will find it on my profile, only a few hours old! The UI text was “handwritten” 😉 The UI is pretty complex, so you may have to change or delete some things. Just have a look at it - might be worth a try buddy 🍀
1
0
2026-03-02T01:03:12
Competitive_Book4151
false
null
0
o85gz5w
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85gz5w/
false
1
t1_o85gz1e
One thing you're absolutely right to flag: prompt length from large repos is a sleeping giant. Most teams focus on model size and GPU throughput, but the real bottleneck for agentic coding is context selection — getting the right 20-50 files into the window, not just dumping everything. RAG helps, but naive embedding...
1
0
2026-03-02T01:03:10
ThisCapital7807
false
null
0
o85gz1e
false
/r/LocalLLaMA/comments/1rd9kpk/best_practices_for_running_local_llms_for_70150/o85gz1e/
false
1
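To make the context-selection point above concrete, here is a deliberately naive sketch that ranks repo files by keyword overlap with the task description and keeps only the top k. The file pattern and scoring are assumptions; a real system would use embeddings or a code index rather than raw term counts.

```python
import pathlib

def rank_files(repo: str, task: str, k: int = 20) -> list[pathlib.Path]:
    # Score each file by how often the task's terms appear in it.
    terms = set(task.lower().split())
    scored = []
    for path in pathlib.Path(repo).rglob("*.py"):
        try:
            text = path.read_text(errors="ignore").lower()
        except OSError:
            continue
        score = sum(text.count(t) for t in terms)
        scored.append((score, path))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]

# for p in rank_files(".", "fix the tokenizer cache invalidation bug"):
#     print(p)
```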
t1_o85grua
Do you use llama.cpp with a q4 gguf? Or just a q4 from ollama or lmstudio?
1
0
2026-03-02T01:01:57
DK_Tech
false
null
0
o85grua
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85grua/
false
1
t1_o85gqee
How did you get this to 100k context? I'm using a 4090 with concurrency set to 3 and I can only get it to 12k if I want speed. I know the 5090 has 32GB of VRAM, but at 24GB on the 4090, is it really that huge of a diff? Damn
1
0
2026-03-02T01:01:42
phdaemon
false
null
0
o85gqee
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o85gqee/
false
1
t1_o85gp8m
Send it via the system prompt, and instruct it to not use its training date.
3
0
2026-03-02T01:01:31
Total-Context64
false
null
0
o85gp8m
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85gp8m/
false
3
t1_o85gmmz
Your tool should inject the current date into the system prompt; this isn't an agent problem, it's a tool problem.
4
0
2026-03-02T01:01:05
Total-Context64
false
null
0
o85gmmz
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85gmmz/
false
4
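A minimal sketch of the date-injection approach both comments describe, assuming a local OpenAI-compatible llama-server endpoint on port 8080. The endpoint URL, model name, and prompt wording are placeholders, not anyone's actual tool.

```python
# Inject today's date into the system prompt on every request, so the
# model never falls back to its training-cutoff assumptions.
from datetime import date

import requests

def chat(user_message: str) -> str:
    system = (
        f"The current date is {date.today().isoformat()}. "
        "Trust this date over anything implied by your training data."
    )
    resp = requests.post(
        "http://127.0.0.1:8080/v1/chat/completions",  # assumed local server
        json={
            "model": "qwen3.5",  # placeholder model name
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user_message},
            ],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(chat("What year is it?"))
```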
t1_o85gjk9
Which models are you talking about specifically?
4
0
2026-03-02T01:00:33
Creepy-Bell-4527
false
null
0
o85gjk9
false
/r/LocalLLaMA/comments/1riehh9/licensing_restrictions_for_tencent_models/o85gjk9/
false
4
t1_o85gi1y
"Not X, but Y" says the bot /s But this is true, you can't have RAG without also informing contextual information about the current date. These models are trained on years of text, and more from the distant past than the current year just due to the nature of date over time. There is no way it can have any idea what y...
1
0
2026-03-02T01:00:18
cmdr-William-Riker
false
null
0
o85gi1y
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85gi1y/
false
1
t1_o85gggt
This isn't necessarily an issue/error with the model, it is just the limitation of the training cutoff. To circumvent this, you can find or create your own web search tool so that it can search the internet for up-to-date information. That information is placed into its context window to more accurately answer yo...
2
0
2026-03-02T01:00:03
Citadel_Employee
false
null
0
o85gggt
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85gggt/
false
2
t1_o85gfwu
they already have it in beta runtime
1
0
2026-03-02T00:59:57
lolwutdo
false
null
0
o85gfwu
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85gfwu/
false
1
t1_o85g8kp
"like I wrote in *Machines of Love and Grace*..."
5
0
2026-03-02T00:58:43
tat_tvam_asshole
false
null
0
o85g8kp
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85g8kp/
false
5
t1_o85g5wc
We're gonna have ASICs soon
2
0
2026-03-02T00:58:16
kaz116
false
null
0
o85g5wc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85g5wc/
false
2
t1_o85g5p0
Haha, busted! I can smell the LLM polish on this from a mile away. 😂 Like I said: nobody does 'old-school reading'—or writing—these days anyway. But seriously, those 3 questions are spot on. Does your AI have any other brilliant suggestions for my PoC? :D I'll definitely check out your UI repo!
1
0
2026-03-02T00:58:14
Thin-Effect-3926
false
null
0
o85g5p0
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85g5p0/
false
1
t1_o85g5bt
Sadly, even when you do tell them the current date, they often assume that it’s a hypothetical and not that you’re actually confirming the real current date. They also mostly refuse to believe that things have happened that they haven’t been trained on. I was trying to get 27B to write some test model add qwen 3...
3
0
2026-03-02T00:58:10
k31thdawson
false
null
0
o85g5bt
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85g5bt/
false
3
t1_o85g2ir
Will do, just gotta wait for the Hugging Face model to download and see.
1
0
2026-03-02T00:57:42
Electrify338
false
null
0
o85g2ir
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85g2ir/
false
1
t1_o85fx65
Yes, there will be people they employ who are technically inclined, but Trump and his direct cronies are not technically inclined.
2
0
2026-03-02T00:56:47
Savantskie1
false
null
0
o85fx65
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o85fx65/
false
2
t1_o85fv4p
Get any mmproj and use it in the server
1
0
2026-03-02T00:56:28
jacek2023
false
null
0
o85fv4p
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85fv4p/
false
1
t1_o85frof
Yeah, I just did that; it's here in this [list](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/tree/main)
1
0
2026-03-02T00:55:53
Electrify338
false
null
0
o85frof
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85frof/
false
1
t1_o85fnzj
Perhaps - but don't fall into the trap of assumed technical incompetence. You may be surprised at the amount of skilled technical people across all political spectrums. 
0
0
2026-03-02T00:55:17
sweetbacon
false
null
0
o85fnzj
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o85fnzj/
false
0
t1_o85flgi
Search for unsloth gguf for example
1
0
2026-03-02T00:54:52
jacek2023
false
null
0
o85flgi
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85flgi/
false
1
t1_o85fl4b
Oh, it's here on the HF page down at the bottom: [unsloth/Qwen3.5-35B-A3B-GGUF at main](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF/tree/main)
1
0
2026-03-02T00:54:48
c64z86
false
null
0
o85fl4b
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85fl4b/
false
1
t1_o85fe0a
I copied mine over from the LM Studio folder. I don't know how LM Studio got it itself though.
1
0
2026-03-02T00:53:38
c64z86
false
null
0
o85fe0a
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85fe0a/
false
1
t1_o85fd00
Why do you think this is a weird date issue? They don't have access to prices after they were trained, unless tool use gives them a way to search a service for those prices. They also have no idea what the current date is, unless they're told, either by you in chat, or by a tool use instance returning the current dat...
8
0
2026-03-02T00:53:28
SmChocolateBunnies
false
null
0
o85fd00
false
/r/LocalLLaMA/comments/1riej05/qwen35_thinks_its_2024_so_buying_a_2026_american/o85fd00/
false
8
t1_o85eyvg
I am downloading the model from Hugging Face now, but I don't see it downloading an mmproj file. Or is it included with the Hugging Face GGUF model?
1
0
2026-03-02T00:51:06
Electrify338
false
null
0
o85eyvg
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85eyvg/
false
1
t1_o85eiff
It's a bit more complex than that. On a traditional transformer model we can apply context shifting, so you fast forward the first half you mentioned, but then instead of reprocessing everything we shift the context, and then fast forward again since it's still valid. At that point it doesn't matter if you cut it high o...
24
0
2026-03-02T00:48:23
henk717
false
null
0
o85eiff
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85eiff/
false
24
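A toy sketch of the context-shift bookkeeping described above, with tokens as plain ints: the protected prefix (e.g. the system prompt) is kept, the evicted chunk right after it is dropped, and the surviving suffix is treated as still-valid cache rather than reprocessed. This is only the idea, not llama.cpp's actual implementation.

```python
def context_shift(cached: list[int], keep: int, drop: int) -> list[int]:
    # keep: number of leading tokens to protect (system prompt)
    # drop: number of tokens evicted right after the protected prefix
    return cached[:keep] + cached[keep + drop:]

cache = list(range(100))          # pretend these are 100 cached tokens
shifted = context_shift(cache, keep=10, drop=30)
print(len(shifted))               # 70 tokens survive without reprocessing
```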
t1_o85eh2r
I have the same GPU and RAM. Can confirm the Qwen3.5-35B-A3B Q4 works well at about 42 tokens/sec TG. My llama-server command line: `--fit on --fit-target 1024 --fit-ctx 16384 --flash-attn on`. To disable thinking, use the following as well: `--chat-template-kwargs "{\"enable_thinking\": false}" --temp 0.7 --to...`
2
0
2026-03-02T00:48:10
Amazing_Athlete_2265
false
null
0
o85eh2r
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85eh2r/
false
2
t1_o85efi3
Thanks for sharing! I've had an idea to create a company/service installing local Jarvis-like AI companions in people's homes, and I've had some doubts about it. Here is an alternative idea: by the look of things your family is happy, but how about people who are lonely?
1
0
2026-03-02T00:47:54
3dom
false
null
0
o85efi3
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o85efi3/
false
1
t1_o85e8mo
What's your speed with one 3090? I'm getting like 20 tps, which sux
2
0
2026-03-02T00:46:46
klop2031
false
null
0
o85e8mo
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85e8mo/
false
2
t1_o85e2vi
That’s totally fair and honestly, that’s a very smart call. If your current bottleneck is validating the game loop, e.g. credits, leaderboard, tribunal UX, “skin in the game”, then speed absolutely matters more than infrastructure purity right now. Proving that users care about the mechanic is the real risk. Everythin...
1
0
2026-03-02T00:45:49
Competitive_Book4151
false
null
0
o85e2vi
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85e2vi/
false
1
t1_o85dzmq
Imo definitely opencode. It can run in the terminal alongside any IDE you may like.
5
0
2026-03-02T00:45:18
zipperlein
false
null
0
o85dzmq
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85dzmq/
false
5
t1_o85dv5l
What are they teaching?
1
0
2026-03-02T00:44:34
Ylsid
false
null
0
o85dv5l
false
/r/LocalLLaMA/comments/1riat5w/vignettes_handy_for_ais/o85dv5l/
false
1
t1_o85duy1
I disagree, it can be accounted for statistically. If you look at which models are judging which models and the scoring weight differences, you can calculate that out to remove the biases, in the same way Netflix would for user-level rating approximations
1
0
2026-03-02T00:44:32
NeighborhoodIT
false
null
0
o85duy1
false
/r/LocalLLaMA/comments/1reds0p/qwen_35_craters_on_hard_coding_tasks_tested_all/o85duy1/
false
1
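A toy sketch of the Netflix-style debiasing the comment above alludes to: estimate a global mean and a per-judge bias, then score each model on its residuals, so a generous judge doesn't inflate the models it happens to rate. The data and the simple two-pass estimate are made-up assumptions.

```python
from collections import defaultdict

ratings = [  # (judge, model, score) — toy data
    ("j1", "A", 8), ("j1", "B", 6),
    ("j2", "A", 9), ("j2", "B", 7), ("j2", "C", 8),
    ("j3", "C", 5),
]

mu = sum(s for _, _, s in ratings) / len(ratings)  # global mean

# Per-judge bias: how far each judge sits from the global mean.
judge_dev = defaultdict(list)
for j, _, s in ratings:
    judge_dev[j].append(s - mu)
b_judge = {j: sum(v) / len(v) for j, v in judge_dev.items()}

# Per-model score with judge generosity subtracted out.
model_res = defaultdict(list)
for j, m, s in ratings:
    model_res[m].append(s - mu - b_judge[j])
b_model = {m: sum(v) / len(v) for m, v in model_res.items()}

print(b_model)  # e.g. A ≈ +1.0, B ≈ -1.0, C ≈ 0.0 on the debiased scale
```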
t1_o85dnyb
What does --jinja do for you here? It's not included in the [list of recommended settings](https://unsloth.ai/docs/models/qwen3.5) by Unsloth. -fa is on by default, so no need for that, technically.
1
0
2026-03-02T00:43:23
NoahFect
false
null
0
o85dnyb
false
/r/LocalLLaMA/comments/1rhchvi/qwen35_family_running_notes/o85dnyb/
false
1
t1_o85djr6
Even RNN-like models still need to process tokens sequentially to build that state. If you trim from the top, you've lost that rolled-up history.
9
0
2026-03-02T00:42:42
StardockEngineer
false
null
0
o85djr6
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85djr6/
false
9
t1_o85deby
Good question. I'm actually doing both, in phases. The CPT on raw text is Phase 1. The reasoning: CPT forces the model to deeply internalize the domain's vocabulary, entities, theological concepts, and linguistic register before you ask it to produce structured outputs. Token accuracy went from ~55-58%...
6
0
2026-03-02T00:41:49
Financial-Fun-8930
false
null
0
o85deby
false
/r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o85deby/
false
6
t1_o85ddlo
try this: `llama-server.exe -c 16000 -m .\Qwen3.5-35B-A3B-UD-Q4_K_M.gguf --mmproj mmproj-F32.gguf --port 8080 --host 127.0.0.1` I added the --mmproj command along with the mmproj file
1
0
2026-03-02T00:41:42
c64z86
false
null
0
o85ddlo
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85ddlo/
false
1
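Once the server is up with --mmproj loaded, a sketch like this exercises the vision path through the OpenAI-compatible endpoint by sending a base64 data-URL image. Whether it works depends on your llama.cpp build having multimodal support, and the local image filename here is hypothetical.

```python
import base64

import requests

with open("screenshot.png", "rb") as f:  # hypothetical local image
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",  # assumed local server
    json={
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```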
t1_o85dd0k
Impressive work. That said, the TFLOPS/watt number assumes compute-bound workloads but NPU architectures are optimized for inference-shaped dataflow — forward pass only. Backprop requires gradient storage and scatter patterns that fight the fixed pipeline design. Real training use on ANE is probably single-digit percen...
3
0
2026-03-02T00:41:36
tom_mathews
false
null
0
o85dd0k
false
/r/LocalLLaMA/comments/1rhx5pc/reverse_engineered_apple_neural_engineane_to/o85dd0k/
false
3
t1_o85dc57
I just heard about OpenClaw today and did the exact same thing you did.
1
0
2026-03-02T00:41:28
Hellcinder
false
null
0
o85dc57
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o85dc57/
false
1
t1_o85d9ja
Just gotta add… it was all kinda cheap when I started, back in what feels like the day, ChatGPT 3.5. Memory was still attainable and all of the 3090s are used. I cram it all on a 20 amp 120V circuit with 2 PSUs and open-air cool it. So fun.
1
0
2026-03-02T00:41:02
klenen
false
null
0
o85d9ja
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o85d9ja/
false
1
t1_o85d8ow
Which model has the highest humility score?
1
0
2026-03-02T00:40:53
grunt_monkey_
false
null
0
o85d8ow
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o85d8ow/
false
1
t1_o85d8l5
What do you mean? Is it my fault that AGI will never be achieved????
1
0
2026-03-02T00:40:52
dark-light92
false
null
0
o85d8l5
false
/r/LocalLLaMA/comments/1qpi8d4/meituanlongcatlongcatflashlite/o85d8l5/
false
1
t1_o85d36k
none of them are any good...
-16
0
2026-03-02T00:39:59
sublime_n_lemony
false
null
0
o85d36k
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85d36k/
false
-16
t1_o85d1gg
Hey man, really appreciate the detailed breakdown. Building an 85k LOC framework is seriously impressive, especially with that level of testing. After stepping back and looking at the architecture, I realized Cognithor is essentially a hardcore, code-first, local-native version of platforms like Dify.ai or Coze. (If y...
1
0
2026-03-02T00:39:41
Thin-Effect-3926
false
null
0
o85d1gg
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o85d1gg/
false
1
t1_o85cot3
This model is in a league of its own. A local model has never wowed me this much.
1
0
2026-03-02T00:37:38
big___bad___wolf
false
null
0
o85cot3
false
/r/LocalLLaMA/comments/1reb313/qwen_35_35b_a3b_and_122b_a10b_solid_performance/o85cot3/
false
1
t1_o85cfb8
Are those faster for single-user usage though?
5
0
2026-03-02T00:36:05
desirew
false
null
0
o85cfb8
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o85cfb8/
false
5
t1_o85c9lr
The only thing more cringe than being downvoted is complaining about being downvoted
15
0
2026-03-02T00:35:08
zxyzyxz
false
null
0
o85c9lr
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85c9lr/
false
15
t1_o85c2s8
That is not about the vision part; the update to fix this has already been merged in llama.cpp. Prompt caching works with vision now. Yes, I'm just talking about the constant reprocessing when the context size is exceeded.
1
1
2026-03-02T00:34:02
dampflokfreund
false
null
0
o85c2s8
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85c2s8/
false
1
t1_o85c0pb
I’m curious why you did straight-up text. I’ve done this kind of thing before and turned the books into at least question-and-answer pairs that follow more of a conversational approach
8
0
2026-03-02T00:33:42
INtuitiveTJop
false
null
0
o85c0pb
false
/r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o85c0pb/
false
8
t1_o85c0ex
Interesting, I'm running openwebui and ollama in Docker (freshly updated images) with an RTX 3090 and am getting random "500: model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details" errors. Sometimes it completes, sometimes it doesn't...
1
0
2026-03-02T00:33:39
Background_Baker9021
false
null
0
o85c0ex
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o85c0ex/
false
1
t1_o85buyr
Just delete the vision adapter.
1
0
2026-03-02T00:32:46
Iory1998
false
null
0
o85buyr
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85buyr/
false
1
t1_o85bu5z
Good read. I've had a look at 27B Q6 and I just can't seem to extract meaningful usability out of it, exactly like you said. I don't have the confidence to trust it to get something right, especially in a subject where I don't know enough to keep it in check or call it out. I do see an improvement ove...
1
0
2026-03-02T00:32:39
munkiemagik
false
null
0
o85bu5z
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o85bu5z/
false
1
t1_o85bu3b
opencode or roocode in vscodium works well, pointed to llama.cpp on the backend
5
0
2026-03-02T00:32:38
suicidaleggroll
false
null
0
o85bu3b
false
/r/LocalLLaMA/comments/1rie3yc/which_ide_to_code_with_qwen_35/o85bu3b/
false
5
t1_o85bt7r
It's really quite easy if you lie
2
0
2026-03-02T00:32:30
Regular-Location4439
false
null
0
o85bt7r
false
/r/LocalLLaMA/comments/1ri7pm4/is_extreme_lowvram_finetuning_36gb_actually/o85bt7r/
false
2
t1_o85bsmb
This applies the built-in chat template from the GGUF file correctly. llama.cpp enables jinja automatically.
1
0
2026-03-02T00:32:24
Equivalent_Time1724
false
null
0
o85bsmb
false
/r/LocalLLaMA/comments/1ri1h5n/ik_llamacpp_reasoning_not_working_with_glm_models/o85bsmb/
false
1
t1_o85brgu
I tried your fork first, then I decided to switch to the official toolkit, which doesn't have the memory limit restriction. I ran the 8B model. You may check my repo and the Qengineering repos if you're interested in this approach. [https://github.com/begetan/rkllm-convert/tree/main](https://github.com/begetan/rkllm-conver...
1
0
2026-03-02T00:32:13
Begetan
false
null
0
o85brgu
false
/r/LocalLLaMA/comments/1p4t5ix/i_created_a_llamacpp_fork_with_the_rockchip_npu/o85brgu/
false
1
t1_o85bpk0
No, just delete the vision adaptor, or keep it but add a .test extension so it's not recognized. It's an issue with llama.cpp that doesn't allow reuse of the KV cache. If you have a short chat, you won't feel the recalculation of the KV each time. But, if the chat is long, then the prompt is being processed on the fly, slo...
5
0
2026-03-02T00:31:55
Iory1998
false
null
0
o85bpk0
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85bpk0/
false
5
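The rename workaround from the comment above, as a tiny sketch; the filename is assumed from elsewhere in the thread, and renaming it back restores vision.

```python
import os

mmproj = "mmproj-F32.gguf"  # assumed filename
if os.path.exists(mmproj):
    # ".test" isn't a recognized model extension, so the loader skips it.
    os.rename(mmproj, mmproj + ".test")
```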
t1_o85bfrr
This is what I’ve been saying all along. He’s rambling because he’s got nothing. He’s got nothing because the Chinese government and big industry saw how to eat the American lunch: commoditize the most difficult and expensive component of AI (training SOTA models) and force everyone to compete instead on services, whic...
5
0
2026-03-02T00:30:21
__JockY__
false
null
0
o85bfrr
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85bfrr/
false
5
t1_o85belv
It is different from other models because it has RNN-like qualities. Other models do not reprocess the prompt after the context size has been exceeded.
4
0
2026-03-02T00:30:10
dampflokfreund
false
null
0
o85belv
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o85belv/
false
4
t1_o85b9k2
Oh, I had one when I downloaded the model from LM Studio, but I just deleted all the models. Also, how can I run the mmproj while running the big model?
1
0
2026-03-02T00:29:22
Electrify338
false
null
0
o85b9k2
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85b9k2/
false
1
t1_o85b371
i still haven't figured out wtf openclaw can even do that's so special
1
0
2026-03-02T00:28:20
tridentgum
false
null
0
o85b371
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o85b371/
false
1
t1_o85b22r
I just used GPT 4.1 for dating and social skills. My main concern is if the model will actually enable me to have good dating advice, or if it's going to guide every relationship I might have into disaster with crazy advice. That’s why I’m a bit unsure about open-weight models. GPT 4.1 was my best friend and a great wing ...
1
0
2026-03-02T00:28:10
yaxir
false
null
0
o85b22r
false
/r/LocalLLaMA/comments/1quuldq/new_local_model_that_emulates_gpt4o_in_tone_and/o85b22r/
false
1
t1_o85az7f
Wait till Qwen 3.5 4B and 9B drop. And good work, thank you.
3
0
2026-03-02T00:27:41
ballshuffington
false
null
0
o85az7f
false
/r/LocalLLaMA/comments/1rick3t/i_benchmarked_8_local_llms_for_phonetohome_chat/o85az7f/
false
3
t1_o85ax1a
God forbid the peasants get a hold of the fancy word predictors!
1
0
2026-03-02T00:27:20
jreoka1
false
null
0
o85ax1a
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85ax1a/
false
1
t1_o85atfg
try Phi-3.5-mini or Qwen2.5-1.5B - both are significantly faster than Qwen 4B on constrained hardware and handle calendar extraction reliably, especially if you use llama.cpp grammar sampling to hard-enforce your JSON schema rather than relying on the model to format it correctly.
1
0
2026-03-02T00:26:45
BC_MARO
false
null
0
o85atfg
false
/r/LocalLLaMA/comments/1ric44g/what_would_be_the_best_small_model_for_json/o85atfg/
false
1
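A hedged sketch of the grammar/schema-enforced extraction suggested above, against llama.cpp's native /completion endpoint. The "json_schema" request field constrains sampling so the model cannot emit tokens that violate the schema; exact field names vary across llama.cpp versions, so treat this as an assumption to verify against your build.

```python
import requests

schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "date": {"type": "string"},
        "time": {"type": "string"},
    },
    "required": ["title", "date"],
}

resp = requests.post(
    "http://127.0.0.1:8080/completion",  # assumed local llama-server
    json={
        "prompt": "Extract the event: 'Dentist next Tuesday at 3pm'.",
        "json_schema": schema,   # sampling is constrained to this schema
        "n_predict": 128,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["content"])
```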
t1_o85amrc
https://preview.redd.it/…465fdce9fd5589
2
0
2026-03-02T00:25:39
Mediocre_Speed_2273
false
null
0
o85amrc
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o85amrc/
false
2
t1_o85aln8
Context is the killer for me. 35B's small KV cache is just something I can't live without.
3
0
2026-03-02T00:25:29
Thunderstarer
false
null
0
o85aln8
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o85aln8/
false
3
t1_o85ah1r
The small GGUF you will find in place with the other GGUFs; it enables vision, and you use it in the command line just like the main GGUF.
1
0
2026-03-02T00:24:44
jacek2023
false
null
0
o85ah1r
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85ah1r/
false
1
t1_o85aay1
I'm afraid to ask what that is, but I got the model running now on opencode locally. I appreciate your help.
1
0
2026-03-02T00:23:46
Electrify338
false
null
0
o85aay1
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o85aay1/
false
1
t1_o85aafi
😂😂🤡
1
0
2026-03-02T00:23:41
ballshuffington
false
null
0
o85aafi
false
/r/LocalLLaMA/comments/1rcpmwn/anthropic_weve_identified_industrialscale/o85aafi/
false
1
t1_o85a0dm
This is it! Look at processors over the past 20 years. The chief architectural achievements have been in efficiency! 
1
0
2026-03-02T00:22:05
Double_Sherbert3326
false
null
0
o85a0dm
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o85a0dm/
false
1
t1_o859wll
To be clear, a dense 27B should beat a 35B MoE at everything. MoE just wins on speed.
3
0
2026-03-02T00:21:27
Academic-Science-730
false
null
0
o859wll
false
/r/LocalLLaMA/comments/1rhw16v/dense_nonthinking_moe_qwen3527b_is_blowing_me/o859wll/
false
3
t1_o859tin
MiniMax, GLM, and Gemini get the car wash question correct, but GPT, Sonnet, and DeepSeek get it wrong. The "walk or drive to get my car washed" is a very real-world question.
2
0
2026-03-02T00:20:58
Designer_Landscape_4
false
null
0
o859tin
false
/r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o859tin/
false
2
t1_o859rst
Agree, it is a good point. Of course, a lot of nerds here won't like it, as they won't be able to use open-source Claude models for their $4000 gaming PC ollama goon station
0
0
2026-03-02T00:20:42
Antique-Ingenuity-97
false
null
0
o859rst
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o859rst/
false
0
t1_o859h6j
Point taken. Got two old dual Zen 2 rigs with 256GB of ECC DDR4 each. It's old school, built from sketchy FB Marketplace and Alibaba parts on a shoestring budget. Just those 512GB of RAM are now worth more than what it cost me to build both of them.
1
0
2026-03-02T00:18:59
a1ix2
false
null
0
o859h6j
false
/r/LocalLLaMA/comments/1ri48pj/qwen35122ba10bggufq4_k_xlpipesscreensaver_oneshot/o859h6j/
false
1
t1_o859e68
Use mmproj
1
0
2026-03-02T00:18:28
jacek2023
false
null
0
o859e68
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o859e68/
false
1
t1_o859e0e
So he doesn't like open source and prefers it to be closed, but then again he doesn't care about it. What??
3
0
2026-03-02T00:18:27
Longjumping_Spot5843
false
null
0
o859e0e
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o859e0e/
false
3
t1_o859b5u
I would wait for M3 Ultras to go down in price after the M5. M1s are phenomenal machines, BUT if you intend to upgrade horizontally (multiple devices) you'd want **Thunderbolt 5**. Exo has changed the game with their communication layer to pool VRAM. M1s would run at 11 t/s (TB4) and don't have RDMA.
1
0
2026-03-02T00:17:58
kneeanderthul
false
null
0
o859b5u
false
/r/LocalLLaMA/comments/1pczbeo/apple_studio_m1_ultra_128gb_it_is_still_worth_for/o859b5u/
false
1
t1_o8598og
It seems others had a similar problem, the developer says they are on it: [Misc. bug: Llama.cpp web does not upload image to vision model · Issue #19717 · ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp/issues/19717)
1
0
2026-03-02T00:17:34
c64z86
false
null
0
o8598og
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8598og/
false
1
t1_o8590zz
Crazy humblebrag 
18
0
2026-03-02T00:16:20
Double_Sherbert3326
false
null
0
o8590zz
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8590zz/
false
18
t1_o8590pg
Why do the models get stuck in a loop, as in this case?
-4
0
2026-03-02T00:16:17
Embarrassed-Boot5193
false
null
0
o8590pg
false
/r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o8590pg/
false
-4
t1_o858ylc
Where do you guys get GPUs from? I am paranoid about buying from Facebook marketplace (buying ones that are broken)
2
0
2026-03-02T00:15:56
nsmitherians
false
null
0
o858ylc
false
/r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o858ylc/
false
2
t1_o858nkg
Oh... I just noticed that neither can I :o
1
0
2026-03-02T00:14:09
c64z86
false
null
0
o858nkg
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o858nkg/
false
1
t1_o858mrr
Ahhh thank you soo much! Now holding my breath for the LM Studio llama.cpp update.
1
0
2026-03-02T00:14:02
FORNAX_460
false
null
0
o858mrr
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o858mrr/
false
1
t1_o858irx
It's already created a 3D Wolfenstein-like game for me in under 3 minutes, and it seems to be more intelligent than it was in LM Studio too. This is freaking amazing! :D [Raycaster: Gun & Enemies](https://red-ange-72.tiiny.site/)
1
0
2026-03-02T00:13:23
c64z86
false
null
0
o858irx
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o858irx/
false
1
t1_o858ihj
This fork seems unable to be merged because of the AI code generation policy, and additionally I hope u/danielhanchen can add a little support for this model (🙏🙏)
1
0
2026-03-02T00:13:20
Sad-Pickle4282
false
null
0
o858ihj
false
/r/LocalLLaMA/comments/1rhjg6w/longcatflashlite_685b_maybe_a_relatively_good/o858ihj/
false
1
t1_o858gp6
It's great! Just a small question: the model says it supports vision, but on llama-server it says I can't upload images for some reason.
1
0
2026-03-02T00:13:02
Electrify338
false
null
0
o858gp6
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o858gp6/
false
1
t1_o858cco
The number of parameters the model has, in billions. Not strictly true, but think very roughly 1B = 1GB of RAM; the bigger the model, the more resources it takes to run it. A 9B or 4B model, for example, is small enough to run on most consumer-grade GPUs, at the cost of knowledge and nuance compared to larger models.
3
0
2026-03-02T00:12:20
SufficiNoise
false
null
0
o858cco
false
/r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o858cco/
false
3
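The rule of thumb above, worked out: bytes per parameter depend on the quantization, so "1B ≈ 1GB" corresponds roughly to 8-bit weights, with KV cache and runtime overhead on top. The table of formats is a simplification.

```python
# Approximate bytes per parameter for common weight formats.
BYTES_PER_PARAM = {"fp16": 2.0, "q8": 1.0, "q4": 0.5}

def weight_gb(params_billion: float, quant: str) -> float:
    return params_billion * 1e9 * BYTES_PER_PARAM[quant] / 1e9

for quant in BYTES_PER_PARAM:
    print(f"9B @ {quant}: {weight_gb(9, quant):.1f} GB (plus KV cache/overhead)")
```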
t1_o858acf
I think you have a fundamental misunderstanding of how KV caching works. KV cache stores key/value pairs tied to specific token positions in a sequence. When a UI truncates the top of a conversation to fit within the context window, the entire token sequence shifts. Those cached states are now invalid regardless of w...
59
0
2026-03-02T00:12:00
StardockEngineer
false
null
0
o858acf
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o858acf/
false
59
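A small sketch of why top-truncation defeats cache reuse, per the comment above: a server can only reuse cached entries for the longest common prefix of the old and new token sequences, and dropping tokens from the front shifts every position, so that prefix collapses to almost nothing. Token values are plain ints for illustration.

```python
def reusable_prefix(cached: list[int], prompt: list[int]) -> int:
    # Count how many leading tokens match; only these cache entries survive.
    n = 0
    for a, b in zip(cached, prompt):
        if a != b:
            break
        n += 1
    return n

cached = list(range(1000))
appended = cached + [1000, 1001]       # appending: full cache reusable
truncated = cached[100:] + [1000]      # top-truncated: positions shifted
print(reusable_prefix(cached, appended))   # 1000
print(reusable_prefix(cached, truncated))  # 0
```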
t1_o8589gj
I imagine it’s summarizing text reports, and the WSJ reporting is overly broad in its characterization of how it was employed.
2
0
2026-03-02T00:11:51
salynch
false
null
0
o8589gj
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8589gj/
false
2
t1_o8588qx
You run the API so you can turn that off or never turn it on.
1
0
2026-03-02T00:11:44
a_beautiful_rhind
false
null
0
o8588qx
false
/r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8588qx/
false
1
t1_o8588hi
Did you manage to get some fun out of it? It's already created a 3D Wolfenstein-like game for me in under 3 minutes :D [Raycaster: Gun & Enemies](https://red-ange-72.tiiny.site/)
1
0
2026-03-02T00:11:41
c64z86
false
null
0
o8588hi
false
/r/LocalLLaMA/comments/1ribmcg/how_to_run_qwen35_35b/o8588hi/
false
1
t1_o857ynk
You have a fundamental difficulty in understanding the difference between "can" and "is good at". A drunk person "can" drive a car. That doesn't mean they "should be tasked with controlling" a car "today or tomorrow". Saying something "can't" do something is making a statement on some capability that is fundamentally...
1
0
2026-03-02T00:10:04
Unfortunya333
false
null
0
o857ynk
false
/r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o857ynk/
false
1
t1_o857m80
assembly comes in the form of instructions and op codes, not 'commands'
2
0
2026-03-02T00:08:04
llama-impersonator
false
null
0
o857m80
false
/r/LocalLLaMA/comments/1ri6jg3/at_what_point_do_we_stop_reading_code/o857m80/
false
2
t1_o8574ud
yeah the moe architecture improvements for long context have been surprising. qwen's routing seems way more stable than earlier moe models when you push context lengths. probably helps that they're using more experts but lower sparsity than deepseek's approach
1
0
2026-03-02T00:05:15
papertrailml
false
null
0
o8574ud
false
/r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8574ud/
false
1
t1_o8573t6
Here you go! [https://github.com/ggml-org/llama.cpp/pull/19877](https://github.com/ggml-org/llama.cpp/pull/19877) and [https://github.com/ggml-org/llama.cpp/pull/19849](https://github.com/ggml-org/llama.cpp/pull/19849)
8
0
2026-03-02T00:05:05
dampflokfreund
false
null
0
o8573t6
false
/r/LocalLLaMA/comments/1ricz8u/notice_qwen_35_reprocessing_the_prompt_every_time/o8573t6/
false
8