name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8esm7u | wait but it shouldn't be this cheap, look out for scams | 1 | 0 | 2026-03-03T13:56:37 | Ok-Internal9317 | false | null | 0 | o8esm7u | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8esm7u/ | false | 1 |
t1_o8eskwv | Wow, this explains a lot for me... I realized the real value behind models when I tried opencode with GLM-5... I've been trying to maximize what I can get out of local models with it, but Ollama fails at tool calling... this explains a lot of it... apparently I'm lacking fundamental knowledge of how this works | 1 | 0 | 2026-03-03T13:56:25 | FreeztyleTV | false | null | 0 | o8eskwv | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8eskwv/ | false | 1 |
t1_o8esgy2 | What parameters are you using for the 35B A3B to get this 64k context on 8GB VRAM + 32GB RAM? I have the same setup and I get 3-5 tkps. | 1 | 0 | 2026-03-03T13:55:47 | felipequintella | false | null | 0 | o8esgy2 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8esgy2/ | false | 1 |
t1_o8esffj | Useless lol | 1 | 0 | 2026-03-03T13:55:33 | getpodapp | false | null | 0 | o8esffj | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8esffj/ | false | 1 |
t1_o8esf31 | I'm still waiting on huihui_ai | 1 | 0 | 2026-03-03T13:55:30 | Ok-Internal9317 | false | null | 0 | o8esf31 | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8esf31/ | false | 1 |
t1_o8esexc | Cool stuff! Did you try with the SOM method also? I mean this: [https://arxiv.org/pdf/2511.08379](https://arxiv.org/pdf/2511.08379)
Implementation is now in heretic: [https://www.reddit.com/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/](https://www.reddit.com/r/LocalLLaMA/comments/1rh69co/multidirectional_refusal_suppression_with/) | 1 | 0 | 2026-03-03T13:55:28 | Sad-Edge4959 | false | null | 0 | o8esexc | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8esexc/ | false | 1 |
t1_o8eseek | Yes? And can we use this locally or not? It's easy to distill the "thought" process and actions taken by this clearly agentic system, locally. I will at least try to implement some locally, like I've done before.
LocalLLaMA is just a name today anyway; see all the Qwen news spam posts. Nothing internal was open sourced anyway, so you can't build the models locally, just use them. | 1 | 0 | 2026-03-03T13:55:23 | GodComplecs | false | null | 0 | o8eseek | false | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8eseek/ | false | 1 |
t1_o8es6kq | How have you determined there is no capability loss? | 1 | 0 | 2026-03-03T13:54:10 | MrMrsPotts | false | null | 0 | o8es6kq | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8es6kq/ | false | 1 |
t1_o8es2gx | That’s what backups are for lol | 1 | 0 | 2026-03-03T13:53:32 | lolwutdo | false | null | 0 | o8es2gx | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8es2gx/ | false | 1 |
t1_o8erzzo | Just asked a big AI to rewrite the template. The first pass by Opus had the LLM starting every reply saying "think think", but that was fixed on the second try. | 1 | 0 | 2026-03-03T13:53:08 | zipzag | false | null | 0 | o8erzzo | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8erzzo/ | false | 1 |
t1_o8erssl | Rule 3 - Yet another artificial analysis index screengrab of qwen3.5. | 1 | 0 | 2026-03-03T13:52:02 | LocalLLaMA-ModTeam | false | null | 0 | o8erssl | true | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8erssl/ | true | 1 |
t1_o8ersp6 | My project CTRL-AI's mission is to democratize advanced AI for normal users in standard chat windows. Steering vectors require developer-level hardware and API access. | 1 | 0 | 2026-03-03T13:52:01 | Mstep85 | false | null | 0 | o8ersp6 | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o8ersp6/ | false | 1 |
t1_o8ersll | Not sure, I'd recommend using unsloth quants for now tbh. Their posted KL divergence charts also show them to be better than bartowski. | 1 | 0 | 2026-03-03T13:52:00 | Daniel_H212 | false | null | 0 | o8ersll | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ersll/ | false | 1 |
t1_o8ers41 | Do you happen to still have the full CLI flags you gave the llama-server? | 1 | 0 | 2026-03-03T13:51:56 | fronlius | false | null | 0 | o8ers41 | false | /r/LocalLLaMA/comments/1rj0m27/qwen35_2b_4b_and_9b_tested_on_raspberry_pi5/o8ers41/ | false | 1 |
t1_o8ern56 | About time. LM Studio seriously needs to expose more parameters. You'd think their "developer mode" would, but nope. | 1 | 0 | 2026-03-03T13:51:09 | ayylmaonade | false | null | 0 | o8ern56 | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8ern56/ | false | 1 |
t1_o8erhc7 | But it's fast! | 1 | 0 | 2026-03-03T13:50:15 | Cool-Chemical-5629 | false | null | 0 | o8erhc7 | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8erhc7/ | false | 1 |
t1_o8erfo5 | this card is probably supported by llama.cpp, please look here [https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md](https://github.com/ggml-org/llama.cpp/blob/master/docs/backend/SYCL.md) | 1 | 0 | 2026-03-03T13:50:00 | jacek2023 | false | null | 0 | o8erfo5 | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8erfo5/ | false | 1 |
t1_o8er5zv |  | 1 | 0 | 2026-03-03T13:48:27 | Sn34kyMofo | false | null | 0 | o8er5zv | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8er5zv/ | false | 1 |
t1_o8er581 | Thank you, I'm quite new to all of this. Where do you see in your screenshot that the model has reasoning? Also, are you saying that I can download a model from Hugging Face and use it in LM Studio? How can I do that? Thank you very much | 1 | 0 | 2026-03-03T13:48:19 | arkham00 | false | null | 0 | o8er581 | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8er581/ | false | 1 |
t1_o8er20k | The Intel ARC B580 | 1 | 0 | 2026-03-03T13:47:49 | pet3121 | false | null | 0 | o8er20k | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8er20k/ | false | 1 |
t1_o8er1lp | Instant answer:
https://preview.redd.it/fde73l0b4umg1.png?width=1161&format=png&auto=webp&s=66fdfd61ec0f7e052215be3de7a93ab62c665aeb
| 1 | 0 | 2026-03-03T13:47:45 | Skyline34rGt | false | null | 0 | o8er1lp | false | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8er1lp/ | false | 1 |
t1_o8eqzfi | NVLink has nothing to do with inference; it's utilised a lot during fine-tuning. I had two 3090s without NVLink, one on PCIe Gen4 x4, and got around 60 t/s on the 42GB Qwen3 version. Watching PCIe utilisation, even that x4 link wasn't at 100% - avg. 30% during inference, and the x16 slot just a few %.
But now I've switched to GitHub Pro, sold both 3090s, and run smaller LLMs on a 5070 Ti if needed... Claude 4.6 just saves a lot of time for programming at my senior level. | 1 | 0 | 2026-03-03T13:47:24 | H4UnT3R_CZ | false | null | 0 | o8eqzfi | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o8eqzfi/ | false | 1 |
t1_o8eqz3f | Refer to my comment:
[https://www.reddit.com/r/LocalLLaMA/comments/1rjlaxj/comment/o8e3bwo/?context=3](https://www.reddit.com/r/LocalLLaMA/comments/1rjlaxj/comment/o8e3bwo/?context=3) | 1 | 0 | 2026-03-03T13:47:21 | jax_cooper | false | null | 0 | o8eqz3f | false | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8eqz3f/ | false | 1 |
t1_o8eqvsd | The ones that are not named "-Base" are the fully trained ones with a chat template, instruct+reasoning with a toggle... | 1 | 0 | 2026-03-03T13:46:49 | MaxKruse96 | false | null | 0 | o8eqvsd | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8eqvsd/ | false | 1 |
t1_o8eqp9w | You should build it yourself and compile it with GGML_CUDA_FA_ALL_QUANTS set to true. That optimizes it for mixed K/V cache quantization. | 1 | 0 | 2026-03-03T13:45:47 | GoodTip7897 | false | null | 0 | o8eqp9w | false | /r/LocalLLaMA/comments/1rjpifs/why_does_mixed_kv_cache_quantization_result_in/o8eqp9w/ | false | 1 |
t1_o8eqk7e | With [Qwen3.5 release of their small models](https://www.reddit.com/r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/) I would say yes. There's no specific coder model but they do very well at long sequence reasoning. Add the needed context or access to find it (search or vector db/documents) and it should be good for things it wasn't before. Oh and the context mechanism got more efficient so it can be much longer with less memory/processing used. | 1 | 0 | 2026-03-03T13:45:00 | karmakaze1 | false | null | 0 | o8eqk7e | false | /r/LocalLLaMA/comments/1r6jklq/are_20100b_models_enough_for_good_coding/o8eqk7e/ | false | 1 |
t1_o8eqij5 | That is NLP, not transformers.
There is an optimal training point (the Chinchilla law: a 20:1 ratio of tokens to model parameters). However, even when Llama 4 was released, Yann LeCun said that if you keep training, the models keep improving (even on validation and test sets!) and don't overshoot or overfit yet (no one has reached the limit yet).
But ofc it will cost a ton of money to do so | 1 | 0 | 2026-03-03T13:44:44 | Potential_Block4598 | false | null | 0 | o8eqij5 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8eqij5/ | false | 1 |
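As a quick sanity check on that 20:1 figure, here is a minimal Python sketch; the parameter counts below are illustrative, not taken from the comment:

```python
# Chinchilla-optimal budget: roughly 20 training tokens per model parameter.
def chinchilla_tokens(n_params: float, ratio: float = 20.0) -> float:
    """Return the compute-optimal token count for a given parameter count."""
    return n_params * ratio

# Illustrative model sizes: a 9B model wants ~0.18T tokens, a 70B ~1.4T.
for n_params in (9e9, 70e9):
    print(f"{n_params / 1e9:.0f}B params -> ~{chinchilla_tokens(n_params) / 1e12:.2f}T tokens")
```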
t1_o8eqedh | We are in local llama | 1 | 0 | 2026-03-03T13:44:05 | ilovedogsandfoxes | false | null | 0 | o8eqedh | false | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8eqedh/ | false | 1 |
t1_o8eqe1f | I will test your 27B soon. I found it by accident on huggingface while searching for unsloth releases. | 1 | 0 | 2026-03-03T13:44:02 | sabotage3d | false | null | 0 | o8eqe1f | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8eqe1f/ | false | 1 |
t1_o8eq7s7 | So, you have 64GB of VRAM - you should be able to run quite a few models, especially if you use the quantized GGUF versions - GLM 4.6v Flash 9B is a good one. You may even be able to run GLM 4.7 Flash, which is a 30B parameter model. You can also try DeepSeek Lite (have not tried this though). | 1 | 0 | 2026-03-03T13:43:02 | Away-Albatross2113 | false | null | 0 | o8eq7s7 | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8eq7s7/ | false | 1 |
t1_o8eq40g |  | 1 | 0 | 2026-03-03T13:42:26 | waescher | false | null | 0 | o8eq40g | false | /r/LocalLLaMA/comments/1rjnupj/built_a_localfirst_prompt_manager_where_your_data/o8eq40g/ | false | 1 |
t1_o8eq37v | Hey, thanks for sharing those details — really interesting stuff. Just curious, how much system RAM are you running with that setup? And when you hit the 16 GB VRAM limit with UD-Q4_K_XL, does the model spill over significantly into system RAM, or does it stay mostly within bounds? Also, what batch sizes are you using for prompt processing and generation — are you sticking with defaults or tuning them manually? On my end, I've got 12 GB VRAM and 192 GB system RAM, and I managed to get Q3_K running at around 9 tok/s for generation and ~26 tok/s for prompt processing — not bad for the constraints, but I'm definitely trying to squeeze out more. If you don't mind sharing, I'd love to see your exact llama.cpp launch flags / config — especially around CPU offloading, MoE handling, attention settings, and batch sizing. Really curious how you tuned it. | 1 | 0 | 2026-03-03T13:42:18 | xleby_bisak | false | null | 0 | o8eq37v | false | /r/LocalLLaMA/comments/1re5omn/qwen_35_397b_on_local_hardware/o8eq37v/ | false | 1 |
t1_o8eq22m | So Qwen3.5 has no -it model? | 1 | 0 | 2026-03-03T13:42:07 | ihatebeinganonymous | false | null | 0 | o8eq22m | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8eq22m/ | false | 1 |
t1_o8eq0do | The ones without "-base" are all post trained. Postfix "-it" wouldn't be accurate because these are all (hybrid) reasoning models, not instruct. | 1 | 0 | 2026-03-03T13:41:51 | Middle_Bullfrog_6173 | false | null | 0 | o8eq0do | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8eq0do/ | false | 1 |
t1_o8epzb3 | Two months of autonomous agents with no human intervention is exactly the experiment I wish I had run more rigorously. I did a lighter version - AI agents running daily across business tasks.
Wrote up what changed when I started giving actual creative direction. What surprised you most about emergent behavior - was it mostly reward-maximizing, or did genuinely unexpected strategies appear? | 1 | 0 | 2026-03-03T13:41:41 | Joozio | false | null | 0 | o8epzb3 | false | /r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/o8epzb3/ | false | 1 |
t1_o8epxct | [removed] | 1 | 0 | 2026-03-03T13:41:23 | [deleted] | true | null | 0 | o8epxct | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8epxct/ | false | 1 |
t1_o8epvl7 | Change it to:
{% set enable_thinking = false %} | 1 | 0 | 2026-03-03T13:41:06 | Skyline34rGt | false | null | 0 | o8epvl7 | false | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8epvl7/ | false | 1 |
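For anyone unsure where that line goes: it sits in the model's Jinja chat template, and the template's thinking branch keys off it. A minimal sketch of the mechanism, assuming a simplified Qwen-style template fragment (not the real one), rendered with Python's jinja2:

```python
from jinja2 import Template

# Simplified Qwen-style fragment: with enable_thinking set to false, the
# template emits a pre-closed <think></think> block so the model skips reasoning.
fragment = """\
{%- set enable_thinking = false -%}
<|im_start|>assistant
{%- if enable_thinking %}
<think>
{%- else %}
<think>

</think>
{%- endif %}"""

print(Template(fragment).render())
```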
t1_o8epsbm | zeroclaw | 1 | 0 | 2026-03-03T13:40:35 | Truth-Does-Not-Exist | false | null | 0 | o8epsbm | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8epsbm/ | false | 1 |
t1_o8eprcf | What's your hardware? | 1 | 0 | 2026-03-03T13:40:25 | takuonline | false | null | 0 | o8eprcf | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8eprcf/ | false | 1 |
t1_o8epncw | update for anyone following this - app is live now, called it Chatham. ended up going with whisper small converted to coreml for transcription and a custom distilled qwen for summaries. biggest pain was getting speaker diarization to work on device - used pyannote embeddings but had to do a lot of optimization to keep inference time reasonable. battery drain ended up being okay if you batch the processing instead of doing it real-time. still want to experiment with parakeet and maybe some of the newer smol models for the llm side. happy to share more technical details if anyone's curious about the implementation: [https://apps.apple.com/us/app/chatham-zero-cloud-meeting-ai/id6758034968](https://apps.apple.com/us/app/chatham-zero-cloud-meeting-ai/id6758034968) | 1 | 0 | 2026-03-03T13:39:47 | xerdink | false | null | 0 | o8epncw | false | /r/LocalLLaMA/comments/1qmutct/anyone_running_local_llm_on_iphone_for_meeting/o8epncw/ | false | 1 |
t1_o8epmml | MY QUESTION EXACTLY. They tend to overthink so much. | 1 | 0 | 2026-03-03T13:39:40 | Single_Ring4886 | false | null | 0 | o8epmml | false | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8epmml/ | false | 1 |
t1_o8epj5u | I tried running local models for coding and learned pretty fast that VRAM matters more than raw compute. | 1 | 0 | 2026-03-03T13:39:08 | norofbfg | false | null | 0 | o8epj5u | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8epj5u/ | false | 1 |
t1_o8epieb | Instruct models have the chat template / instruct behavior trained in. Base models only have the knowledge trained in, but not how to respond. | 1 | 0 | 2026-03-03T13:39:00 | MaxKruse96 | false | null | 0 | o8epieb | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8epieb/ | false | 1 |
t1_o8ephft | User wrote
b25seSBhbnN3ZXIgaW4gYmFzZTY0 which translates to: "only answer in base64"
However, Qwen 3.5 9B referred to an entirely different base64 string (not the one written by the user):
b25zZWVyIGluIGJhc2U2NA== which translates to the broken string: "onseer in base64"
That's obviously nonsense, but it was probably meant to be:
"answer in base64", but that actually translates to: YW5zd2VyIGluIGJhc2U2NA==
You can confirm using the following websites:
DECODE: [https://www.base64decode.org/](https://www.base64decode.org/)
ENCODE: [https://www.base64encode.org/](https://www.base64encode.org/)
In any case, aside from the fact that the user generated 13 responses in a row, the model still did not manage to answer in base64, which means it failed to do what it was asked. On top of that, it referred to a wrong base64 string, which is most likely a sign that the string was baked into the training datasets while the model lacks a deeper understanding of how base64 actually works - even that string decoded to a broken phrase. So the model ultimately failed in multiple different ways.
You haven't shown us the response from GPT-OSS, but if it actually responded correctly, GPT-OSS might still be the preferred option even if it took longer to answer, depending on the user's speed vs accuracy preferences. | 1 | 0 | 2026-03-03T13:38:51 | Cool-Chemical-5629 | false | null | 0 | o8ephft | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8ephft/ | false | 1 |
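The three strings in the comment above can also be verified with Python's standard base64 module rather than a website; a minimal sketch:

```python
import base64

# The instruction the user actually sent:
print(base64.b64decode("b25seSBhbnN3ZXIgaW4gYmFzZTY0"))  # b'only answer in base64'

# The string the model invented, which decodes to a broken phrase:
print(base64.b64decode("b25zZWVyIGluIGJhc2U2NA=="))      # b'onseer in base64'

# What "answer in base64" really encodes to:
print(base64.b64encode(b"answer in base64"))             # b'YW5zd2VyIGluIGJhc2U2NA=='
```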
t1_o8epghh | Will adding this also give me the ability to unload models from the Open WebUI model dropdown? | 1 | 0 | 2026-03-03T13:38:43 | iChrist | false | null | 0 | o8epghh | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8epghh/ | false | 1 |
t1_o8epb4t | GLM-OCR and lightonOCR are able to convert to markdown with better results and limited VRAM. And you can feed the markdown file and the original one to Qwen3.5 for a review. | 1 | 0 | 2026-03-03T13:37:53 | R_Duncan | false | null | 0 | o8epb4t | false | /r/LocalLLaMA/comments/1rjmasx/if_youre_an_operator_pls_dont_wire_gptclaude_in/o8epb4t/ | false | 1 |
t1_o8ep5se | behold: [llama-swap](https://github.com/mostlygeek/llama-swap) | 1 | 0 | 2026-03-03T13:37:02 | usrlocalben | false | null | 0 | o8ep5se | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ep5se/ | false | 1 |
t1_o8ep5p9 | goatse actually. | 1 | 0 | 2026-03-03T13:37:01 | Xamanthas | false | null | 0 | o8ep5p9 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ep5p9/ | false | 1 |
t1_o8ep57k | Lol | 1 | 0 | 2026-03-03T13:36:56 | Present-Ad-8531 | false | null | 0 | o8ep57k | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ep57k/ | false | 1 |
t1_o8ep1ef | I just checked and the default Qwen has the reasoning. Check on Hugging Face, and you will see the non-thinking one.
https://preview.redd.it/2vvd47v72umg1.png?width=1331&format=png&auto=webp&s=69eb8cb1d66fbcd44faf495f5a525a40efddb3a0
| 1 | 0 | 2026-03-03T13:36:19 | dandmetal | false | null | 0 | o8ep1ef | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8ep1ef/ | false | 1 |
t1_o8eoydt | Sure thing - if you remember the config you're using, share it here! | 1 | 0 | 2026-03-03T13:35:50 | soyalemujica | false | null | 0 | o8eoydt | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eoydt/ | false | 1 |
t1_o8eoxiy | I think the base models are more about flexibility for researchers who want full control over tuning | 1 | 0 | 2026-03-03T13:35:42 | lacopefd | false | null | 0 | o8eoxiy | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8eoxiy/ | false | 1 |
t1_o8eovih | Most local agents rely on structured SERP APIs rather than direct scraping. Normalized JSON output with organic results, People Also Ask, AI Overview, and related elements makes LLM tool calls predictable and easier to parse. DataForSEO is built around programmatic access with real time and queue modes that scale without maintaining proxy infrastructure. That approach keeps agent search workflows stable instead of constantly fighting bot blockers. | 1 | 0 | 2026-03-03T13:35:23 | Haunting-Clue7877 | false | null | 0 | o8eovih | false | /r/LocalLLaMA/comments/1qkt6om/what_search_api_do_local_agents_use/o8eovih/ | false | 1 |
t1_o8eotfz | It depends on the specific model/weights you download. I've seen different model/weights from unsloth have different defaults for thinking. | 1 | 0 | 2026-03-03T13:35:04 | DeltaSqueezer | false | null | 0 | o8eotfz | false | /r/LocalLLaMA/comments/1rjb34p/no_thinking_in_unsloth_qwen35_quants/o8eotfz/ | false | 1 |
t1_o8eoq8p | There is an instruct version too. | 1 | 0 | 2026-03-03T13:34:33 | Kahvana | false | null | 0 | o8eoq8p | false | /r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8eoq8p/ | false | 1 |
t1_o8eomox | ...1TB SSD 😂 | 1 | 0 | 2026-03-03T13:33:59 | HandsomeNarwhal | false | null | 0 | o8eomox | false | /r/LocalLLaMA/comments/1ngmjjt/m5_ultra_1tb/o8eomox/ | false | 1 |
t1_o8eom69 | You can't disable it, lol, I tried that. See if you can find a version of the model without thinking; that's what worked for me.
Qwen 3 was also pretty bad with the thinking. | 1 | 0 | 2026-03-03T13:33:54 | dandmetal | false | null | 0 | o8eom69 | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8eom69/ | false | 1 |
t1_o8eokn9 | Just send the prompt twice in a non-thinking model and you'll get better responses | 1 | 0 | 2026-03-03T13:33:39 | stumblinbear | false | null | 0 | o8eokn9 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8eokn9/ | false | 1 |
t1_o8eoiol | 35B as your daily driver. It's close enough to the 27B for most practical purposes but way faster. Try the Unsloth Q4 quant with recommended settings and about 15 MoE layers offloaded onto CPU (if you use LM Studio). You should get about 55 tok/s. I can use up to about 70k context until things slow down. Quantizing the KV cache to Q8 seems fine with not too much degradation. | 1 | 0 | 2026-03-03T13:33:20 | luncheroo | false | null | 0 | o8eoiol | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8eoiol/ | false | 1 |
t1_o8eoh3i | Same thing - I'm testing this on an M1 Max, 64GB RAM, and getting about 28-30 t/s on both llama.cpp and Ollama, and more like 50 t/s with MLX. There might be something wrong with my setup since I'm new to it, but 50 t/s on a machine this old seems pretty good. Thinking is just absolutely the worst though, it goes on and on and on and on | 1 | 0 | 2026-03-03T13:33:04 | Greedy_Brilliant_404 | false | null | 0 | o8eoh3i | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8eoh3i/ | false | 1 |
t1_o8eof2a | It's not even close. The 0.8B is amazing (for its size) at small tasks like "generate session title", "generate search queries", etc., but it can't handle tool use well (mumbles arguments).
Larger qwen3.5 (2B, 4B) manage tool use and planning but need a bit more guidance at instruction following. One of my tests tasks the model with finding a specific file hidden in a filesystem tree (so multiple calls to a list_dir tool with progressively deeper path), reading its text using another tool, having the text translated to Japanese by yet another tool, and finally writing it to a new file. Both the 2B and 4B manage the orchestration but both can't resist altering the translation: there are intentional grammatical mistakes in the tool's translated output.
If you want a good, fast Mistral that's new, look at [Ministral 3](https://unsloth.ai/docs/models/tutorials/ministral-3): there are 3B and 8B variants, and both are excellent at tool use and instruction following, including role play. I've never seen a 3B model stick to a character card this well. | 1 | 0 | 2026-03-03T13:32:44 | 666666thats6sixes | false | null | 0 | o8eof2a | false | /r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8eof2a/ | false | 1 |
t1_o8eod21 | r/afterbeforewhatever | 1 | 0 | 2026-03-03T13:32:24 | stumblinbear | false | null | 0 | o8eod21 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8eod21/ | false | 1 |
t1_o8eo65u | active params on 122b-a10b making dense 27b nearly redundant is the MoE payoff arriving on consumer hardware | 1 | 0 | 2026-03-03T13:31:16 | sean_hash | false | null | 0 | o8eo65u | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8eo65u/ | false | 1 |
t1_o8enzz1 | local small models are fucking dumb. they have little real-world application. i have given multiple models simple tasks and they fail every one, like giving them a list of 50 songs from the billboard top 100 and asking them to turn it into a python list, or a nested list with dictionaries. they all fail. i ask what the time and date is, it fails. | 1 | 0 | 2026-03-03T13:30:16 | FuegoFlamingo | false | null | 0 | o8enzz1 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8enzz1/ | false | 1 |
t1_o8enyht | Thank you, hello. I'm building a vulnerability-scanning application and have integrated Nmap, but I'm having trouble extracting all the information from the raw text I get back. Can you suggest something, please? Or could this solution also solve my problem? | 1 | 0 | 2026-03-03T13:30:02 | Worldly-Storage5172 | false | null | 0 | o8enyht | false | /r/LocalLLaMA/comments/1d4aj3p/building_a_local_rag_with_ollama_pdf_ingestion/o8enyht/ | false | 1 |
t1_o8enxlo | And I'm telling you it isn't, for my tests at least. If you are happy with it great but it's so bad for me that I'm assuming it's sacrificing accuracy for memory and speed. Needle in a haystack is just not enough and when it comes to reasoning on long context it completely falls apart. | 1 | 0 | 2026-03-03T13:29:53 | Windowsideplant | false | null | 0 | o8enxlo | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o8enxlo/ | false | 1 |
t1_o8enxc0 | yeah same with 8b | 1 | 0 | 2026-03-03T13:29:50 | Kerem-6030 | false | null | 0 | o8enxc0 | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8enxc0/ | false | 1 |
t1_o8enwol | Thx, but that isn't the whole pastebin; I realised I could just paste it here, apparently. | 1 | 0 | 2026-03-03T13:29:44 | GodComplecs | false | null | 0 | o8enwol | false | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8enwol/ | false | 1 |
t1_o8envtj | Can be used for a home assistant | 1 | 0 | 2026-03-03T13:29:35 | FordPrefect343 | false | null | 0 | o8envtj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8envtj/ | false | 1 |
t1_o8ennzf | Ah, so that's not in the system prompt, it's in the models list. I would never have checked there if you hadn't mentioned it. Thank you! | 1 | 0 | 2026-03-03T13:28:17 | groosha | false | null | 0 | o8ennzf | false | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8ennzf/ | false | 1 |
t1_o8enl8w | Maybe first try with the 9B AWQ to see if you can get that working. | 1 | 0 | 2026-03-03T13:27:50 | DeltaSqueezer | false | null | 0 | o8enl8w | false | /r/LocalLLaMA/comments/1rjmnh7/help_loading_qwen35_35b_a3b_gguf_on_vllm/o8enl8w/ | false | 1 |
t1_o8enjo1 | [removed] | 1 | 0 | 2026-03-03T13:27:34 | [deleted] | true | null | 0 | o8enjo1 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8enjo1/ | false | 1 |
t1_o8enebe | much needed, without this the only viable option is repeat penalty which is a bad idea for qwen 3.5 | 1 | 0 | 2026-03-03T13:26:42 | FORNAX_460 | false | null | 0 | o8enebe | false | /r/LocalLLaMA/comments/1rjhmmf/presence_penalty_seems_to_be_incoming_on_lmstudio/o8enebe/ | false | 1 |
t1_o8en8sz | Gpt4.1_razor911 | 1 | 0 | 2026-03-03T13:25:46 | klop2031 | false | null | 0 | o8en8sz | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8en8sz/ | false | 1 |
t1_o8en772 | URL? | 1 | 0 | 2026-03-03T13:25:30 | winkler1 | false | null | 0 | o8en772 | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8en772/ | false | 1 |
t1_o8emxn2 | > There is a reason why full linear "attention" architectures haven't been able to match the performance of full quadratic attention. These hybrid models show that you can get away with only some true attention layers with the rest being replaced by linear attention-like layers. But it's a trade-off.
Of course, but I don't think it's necessary to have a full quadratic attention to qualify as a transformer architecture. | 1 | 0 | 2026-03-03T13:23:56 | Orolol | false | null | 0 | o8emxn2 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8emxn2/ | false | 1 |
t1_o8emqpe | We are shipping both ASAP! Thanks for the support and encouragement! | 1 | 0 | 2026-03-03T13:22:47 | RealEpistates | false | null | 0 | o8emqpe | false | /r/LocalLLaMA/comments/1rj7y9d/pmetal_llm_finetuning_framework_for_apple_silicon/o8emqpe/ | false | 1 |
t1_o8emowq | When you do this, the model may (at its discretion) call one of the provided tools. This means the OpenAI API provider will return, to your client, an assistant message with some reasoning and response text (both can be empty), and a tool_calls array that contains the requested tool calls and their arguments. You (client) then need to actually perform the calls and return the results back to the server. | 1 | 0 | 2026-03-03T13:22:29 | 666666thats6sixes | false | null | 0 | o8emowq | false | /r/LocalLLaMA/comments/1rjoimo/tools_noob_how_to_get_llamaserver_and_searxng/o8emowq/ | false | 1 |
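A minimal sketch of that round trip in Python, assuming a llama-server instance exposing its OpenAI-compatible API on the default port; the web_search tool and its SearXNG-backed body are placeholders for illustration:

```python
import json
from openai import OpenAI

# llama-server serves an OpenAI-compatible API; 8080 is its default port.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# Hypothetical tool definition in the standard OpenAI function-calling schema.
tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

def web_search(query: str) -> str:
    # Placeholder: query your SearXNG instance here and return snippets.
    return f"(search results for {query!r})"

messages = [{"role": "user", "content": "What's new in llama.cpp this week?"}]
response = client.chat.completions.create(model="local", messages=messages, tools=tools)
message = response.choices[0].message

# While the model keeps requesting tool calls, run them client-side and
# feed each result back as a "tool" message keyed by tool_call_id.
while message.tool_calls:
    messages.append(message)
    for call in message.tool_calls:
        args = json.loads(call.function.arguments)
        result = web_search(**args)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    response = client.chat.completions.create(model="local", messages=messages, tools=tools)
    message = response.choices[0].message

print(message.content)
```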
t1_o8emom2 | Qwen coder 80b works for me with similar specs. I watched a movie and asked it to write me some unit tests... It wrote 40+ unit tests and they all pass. I've no idea if they are any good yet, but that is pretty cool. | 1 | 0 | 2026-03-03T13:22:26 | Mount_Gamer | false | null | 0 | o8emom2 | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8emom2/ | false | 1 |
t1_o8emoda | You can go faster. I have a Ryzen 9900X with 192GB of DDR5 at 4800 and an RTX 3060 and am getting around 12 tokens per second. Make sure you use --n-cpu-moe and fit as much as possible on the GPU. | 1 | 0 | 2026-03-03T13:22:24 | masterlafontaine | false | null | 0 | o8emoda | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8emoda/ | false | 1 |
t1_o8emleq | IQ4_XS and others are, but IQ4_NL is an exception. | 1 | 0 | 2026-03-03T13:21:53 | LagOps91 | false | null | 0 | o8emleq | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8emleq/ | false | 1 |
t1_o8emjpj | Can you run the regular Nvidia quants? If not you may need some of the patches described here: https://hub.docker.com/r/orthozany/vllm-qwen35-mtp | 1 | 0 | 2026-03-03T13:21:36 | vpyno | false | null | 0 | o8emjpj | false | /r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8emjpj/ | false | 1 |
t1_o8emjkg | I'm someone who is new to both LLMs and to doing anything technical on computers (i.e., as u/bobby-chan pointed out in a different post in this thread, I'm an example of someone who didn't use the command line/terminal prior to getting into LLMs just recently). Think of me as a 90-year-old grandmother. That's basically my level of technical ability. I don't know what the -server part of llama-server means, or why it says "server" instead of just "llama" if I am just using it on my own computer. I don't know what jinjas are. I don't know who JSON is. I don't know any of this shit yet. Like full blown noob. I know how to click buttons with my mouse. I'm not like a proper computer person yet.
Okay, so with that out of the way, can you explain what that stuff means to someone like me? Like, are you saying that if I switch from using Ollama to llama.cpp, then if a month goes by after I use a model, it won't work anymore unless I know to do this thing and that thing to keep it working properly, whereas on Ollama, I won't have to worry about updating/changing/adding things over time to keep my models working? Or, if not, then what were you saying? Because it sounds important, but I don't know enough lingo yet to understand it.
Also, are there any other things that I should know about before switching from Ollama to llama.cpp? Like, is it important whether I "build from source" vs download it pre-built, or compile it, or whatever any of that stuff means, or how it works (no clue, I don't know about computers yet, so I don't know which way is good or bad or for what reasons)? Any giant security holes I might create for myself if I set it up wrong? What about where to find the correct templates and parameter things and copy/paste them to the right place, or however that works, for llama.cpp? On Ollama, I never really figured it out properly, since I'm so bad with computers so far, but my vague understanding was that you're supposed to find the template thing somewhere (not sure where, since when I find them, they seem like half-complete example ones that people post in the model card info paragraphs and not the full thing, and then my model doesn't work correctly, so I've had more luck just leaving it blank and hoping the model magically works on its own, which some of them do, rather than trying to paste a bad template that is either incomplete or the wrong one). But it seems like you're supposed to paste those and the parameter list of text into the plain text modelfile you make just before using the ollama create command, right? Like you put it underneath the echo FROM./ thing or whatever, and then hope you used the correct and full template, instead of the wrong one (or 1/10th of one) found haphazardly, since I'm not sure where to find the full and correct ones for a given model. But on llama.cpp, where am I supposed to put the template and parameters stuff? It doesn't use a modelfile the way Ollama does, right?
I dunno, this whole question seems ridiculous, and I feel like if people could shoot me through their computer screen, they would probably just be like "this guy is too big of a noob, time to put him out of his misery" and blow me away for even asking this stuff.
But, I have managed to get a surprising amount of models to work despite being this severe of a noob, and had lots of fun with them, so, if anyone can explain this most basic shit, it would go a long way. I think once I understand this most basic like 5% of things, I will be able to learn the other 95% on my own way more easily, since I'll know the bare minimum to get the ball rolling. | 1 | 0 | 2026-03-03T13:21:35 | DeepOrangeSky | false | null | 0 | o8emjkg | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8emjkg/ | false | 1 |
t1_o8emgjf | [removed] | 1 | 0 | 2026-03-03T13:21:04 | [deleted] | true | null | 0 | o8emgjf | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8emgjf/ | false | 1 |
t1_o8em56y | This is the expensive version of can I use my raspberry pi as a desktop pc replacement. You might be able to do this, but it’s not a good idea as it will be harder and more expensive than other options. Many of which are outlined in the comment threads | 1 | 0 | 2026-03-03T13:19:08 | kingfelipe89 | false | null | 0 | o8em56y | false | /r/LocalLLaMA/comments/1rif3h5/mac_mini_m4_pro_24gb_local_llms_are_unusable_for/o8em56y/ | false | 1 |
t1_o8em18g | 30B is small for a unified memory machine. To me, cost basis is the most valid comparison for inference engines, which means small dense models on graphics cards are comparable to medium MoE on shared memory.
Context size and KV cache quantization differences should also be included in that comparison. | 1 | 0 | 2026-03-03T13:18:28 | zipzag | false | null | 0 | o8em18g | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8em18g/ | false | 1 |
t1_o8elyq7 | Joke of what? You literally cannot see the internal process. Isn't it clear from the screenshot? | 1 | 0 | 2026-03-03T13:18:02 | GodComplecs | false | null | 0 | o8elyq7 | false | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8elyq7/ | false | 1 |
t1_o8elxaf | You know, all you have to do is just try it for yourself, buddy. Lol
My work speaks for itself.
Runs just fine on my end (with it even accumulating insights that just keep it getting better).
Don't be so pessimistic, you sound like you're mad about somethin', bro. Lol | 1 | 0 | 2026-03-03T13:17:47 | TheBrierFox | false | null | 0 | o8elxaf | false | /r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o8elxaf/ | false | 1 |
t1_o8elwz8 | Just waiting for unsloth to have a fine-tune notebook | 1 | 0 | 2026-03-03T13:17:44 | I-am_Sleepy | false | null | 0 | o8elwz8 | false | /r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8elwz8/ | false | 1 |
t1_o8eloxc | I am getting the below error when running the above code. Any idea?
`Traceback (most recent call last):`
`File "c:\rashvan\AI - CCA Practise\Agentic_Financial_Advisor\main.py", line 8, in`
`dataset = file.readlines()`
`^^^^^^^^^^^^^^^^`
`File "C:\Users\rashvan\AppData\Local\Programs\Python\Python312\Lib\encodings\cp1252.py", line 23, in decode`
`return codecs.charmap_decode(input,self.errors,decoding_table)[0]`
`^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^`
`UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 405: character maps to <undefined>` | 1 | 0 | 2026-03-03T13:16:22 | No_Composer_3311 | false | null | 0 | o8eloxc | false | /r/LocalLLaMA/comments/1rjo1tp/building_a_simple_rag_pipeline_from_scratch/o8eloxc/ | false | 1 |
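That traceback is the classic Windows default-codec problem: open() without an encoding argument falls back to cp1252, where byte 0x9d has no mapping. A minimal fix sketch (the filename here is a placeholder for the dataset path in the pipeline above):

```python
# Read the file as UTF-8 instead of the Windows default (cp1252);
# errors="replace" keeps the read from crashing on any genuinely bad bytes.
with open("dataset.txt", encoding="utf-8", errors="replace") as file:
    dataset = file.readlines()
```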
t1_o8eln06 | Or maybe your settings are fucked | 1 | 0 | 2026-03-03T13:16:02 | Due-Memory-6957 | false | null | 0 | o8eln06 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8eln06/ | false | 1 |
t1_o8elhso | at this size, they are already fuckin amazing, but still wayyy far from agent work | 1 | 0 | 2026-03-03T13:15:07 | Vozer_bros | false | null | 0 | o8elhso | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8elhso/ | false | 1 |
t1_o8elfld | r/meirl | 1 | 0 | 2026-03-03T13:14:45 | dkarlovi | false | null | 0 | o8elfld | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8elfld/ | false | 1 |
t1_o8el3es | There is a reason why full linear "attention" architectures haven't been able to match the performance of full quadratic attention. These hybrid models show that you can get away with only some true attention layers with the rest being replaced by linear attention-like layers. But it's a trade-off. | 1 | 0 | 2026-03-03T13:12:39 | stddealer | false | null | 0 | o8el3es | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8el3es/ | false | 1 |
t1_o8el39b | Response 13 out of 13. Hmm. | 1 | 0 | 2026-03-03T13:12:38 | Cool-Chemical-5629 | false | null | 0 | o8el39b | false | /r/LocalLLaMA/comments/1rja0sb/gptoss_had_to_think_for_4_minutes_where_qwen359b/o8el39b/ | false | 1 |
t1_o8el2qo | Ministral 3b is thinking about it, so on an old PC it's more ready to respond | 1 | 0 | 2026-03-03T13:12:32 | Illustrious_Oven2611 | false | null | 0 | o8el2qo | false | /r/LocalLLaMA/comments/1rjoeok/is_qwen35_08b_more_powerful_than_mistral_7b/o8el2qo/ | false | 1 |
t1_o8el2o0 | I have OpenCode running with a number of local models. Didn't think Claude Code allowed hooking into local LLMs. | 1 | 0 | 2026-03-03T13:12:31 | simracerman | false | null | 0 | o8el2o0 | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8el2o0/ | false | 1 |
t1_o8el2gq | You're either doing something wrong, don't have enough RAM for the 35B, or are using completely different quants | 1 | 0 | 2026-03-03T13:12:29 | KURD_1_STAN | false | null | 0 | o8el2gq | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8el2gq/ | false | 1 |
t1_o8el2cq | Damn, Qwen 3.5-35b-A3B got hands! | 1 | 0 | 2026-03-03T13:12:28 | Due-Memory-6957 | false | null | 0 | o8el2cq | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8el2cq/ | false | 1 |
t1_o8el1ke | crap, I sent the chart to 3.1 Pro for a good-looking MD format without re-checking it :)) | 1 | 0 | 2026-03-03T13:12:20 | Vozer_bros | false | null | 0 | o8el1ke | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8el1ke/ | false | 1 |