name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7wfs44 | They’re referring to the active parameter count. | 0 | 0 | 2026-02-28T16:23:32 | JermMX5 | false | null | 0 | o7wfs44 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wfs44/ | false | 0 |
t1_o7wfrwd | Ohh! ...I got to know! See, first thing brother, I'm using AI for coding purposes... as I want good quality coding, and also, is running 24B at Q4 a problem..? I mean, it is loading and running okay okay! ...but anyways, here is the thing: I have GitHub Copilot Pro (student developer pack), I came here just ... | 1 | 0 | 2026-02-28T16:23:30 | Less_Strain7577 | false | null | 0 | o7wfrwd | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7wfrwd/ | false | 1 |
t1_o7wfosy | For Finetuning:
The support in finetuning libraries is stable for older models. I am having all kinds of problems with Unsloth and Mistral 3.2, Ministral, Devstral, and Qwen MoEs, but Codestral, Llama 3, Qwen3 4B, and Mistral Nemo all just work.
Certain dataset-generation techniques can be tailored to specific models, t... | 31 | 0 | 2026-02-28T16:23:04 | aaronr_90 | false | null | 0 | o7wfosy | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wfosy/ | false | 31 |
t1_o7wfnxi | but you use ollama? | 1 | 0 | 2026-02-28T16:22:57 | Altruistic_Heat_9531 | false | null | 0 | o7wfnxi | false | /r/LocalLLaMA/comments/1rh3xey/seeking_help_improving_ocr_in_my_rag_pipeline/o7wfnxi/ | false | 1 |
t1_o7wfnn5 | What is your goal? If it’s to get useful work done, like classification, and that is going well, why reengineer every few months? If your goal is to have great chats, that is something else. | 1 | 0 | 2026-02-28T16:22:54 | Intelligent-Gas-2840 | false | null | 0 | o7wfnn5 | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wfnn5/ | false | 1 |
t1_o7wfm6s | [removed] | 1 | 0 | 2026-02-28T16:22:42 | [deleted] | true | null | 0 | o7wfm6s | false | /r/LocalLLaMA/comments/1mtjfhr/olla_v0016_lightweight_llm_proxy_for_homelab/o7wfm6s/ | false | 1 |
t1_o7wfk4n | Because money. Idk why everyone thinks the employees are good people. They are all just there in Silicon Valley for the money. They don't care about any of you. | 55 | 0 | 2026-02-28T16:22:25 | redditsublurker | false | null | 0 | o7wfk4n | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wfk4n/ | false | 55 |
t1_o7wfh7q | Which is better, 3.5-35B-A3B or simply 3.5-35B? | 0 | 1 | 2026-02-28T16:22:01 | elswamp | false | null | 0 | o7wfh7q | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wfh7q/ | false | 0 |
t1_o7wfdfh | Stack? | 1 | 0 | 2026-02-28T16:21:31 | alphatrad | false | null | 0 | o7wfdfh | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7wfdfh/ | false | 1 |
t1_o7wfau3 | RAGFlow loves to make a lot of requests that my poor 12 GB 4070 Ti can't get through at a reasonable speed when I'm trying to ingest tens of datasheets, technical documents, etc.
Add RAPTOR and GraphRAG, and yeah, the hardware needed gets real, really fast | 2 | 0 | 2026-02-28T16:21:09 | MaverickPT | false | null | 0 | o7wfau3 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wfau3/ | false | 2 |
t1_o7wf8j7 | I would have loved to do that, but I only have access to my iGPU, and the eGPU responds with errors:
> [ 584.821074] xgpu_nv_mailbox_trans_msg: **2489 callbacks suppressed**
> [ 584.821086] amdgpu 0000:64:00.0: amdgpu: trn=2 ACK should not assert! wait again !
> [ 584.823213] amdgpu 0000:64:00.0: amdgpu: trn=2 ... | 6 | 0 | 2026-02-28T16:20:51 | ProfessionalSpend589 | false | null | 0 | o7wf8j7 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wf8j7/ | false | 6 |
t1_o7wf6qc | Probably this: https://www.reddit.com/r/LocalLLaMA/comments/1qpi8d4/meituanlongcatlongcatflashlite/
But I didn't test it myself, and I don't know if llama.cpp properly supports this. | 1 | 0 | 2026-02-28T16:20:36 | Several-Tax31 | false | null | 0 | o7wf6qc | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wf6qc/ | false | 1 |
t1_o7wf33d | Your post is getting popular and we just featured it on our Discord! [Come check it out!](https://discord.gg/PgFhZ8cnWW)
You've also been given a special flair for your contribution. We appreciate your post!
*I am a bot and this action was performed automatically.* | 1 | 0 | 2026-02-28T16:20:06 | WithoutReason1729 | false | null | 0 | o7wf33d | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wf33d/ | true | 1 |
t1_o7wf1jg | I have this exact problem with the Qwen3.5-27B using the Unsloth listed params:
`temperature=1.0, top_p=1.0, top_k=40, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0`
and setting
`reasoning-budget = 0`
It's putting vast amounts of "thinking" into inline comments during generation. | 11 | 0 | 2026-02-28T16:19:53 | ariagloris | false | null | 0 | o7wf1jg | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wf1jg/ | false | 11 |
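For anyone trying to reproduce this, a minimal sketch of the zero-budget setup being described, assuming llama.cpp's llama-server (the GGUF path and port are placeholders, not a specific known-good quant):

```bash
# Start the server with the reasoning budget forced to zero.
llama-server \
  -m Qwen3.5-27B-Q4_K_M.gguf \
  --reasoning-budget 0 \
  --port 8080
```

Per the comment above, the budget flag suppresses the think block but not the thinking: the model just routes its chain-of-thought into inline code comments instead.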
t1_o7wf17p | The improvement is you skip the first “thinking” bit of text in the response, it doesn’t impact t/s | 1 | 0 | 2026-02-28T16:19:51 | ianlpaterson | false | null | 0 | o7wf17p | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o7wf17p/ | false | 1 |
t1_o7wf11n | 27b thinks so much!! But the thinking quality is really good, and it's worth the wait if I don't have to keep redirecting the model.
After running MoE models like q3 30b a3b @ ~55t/s since last summer, it's a return to Earth to be running 27b @ ~8.5t/s! (8-bit MLX on a binned M4 Pro MBP/48GB). | 2 | 0 | 2026-02-28T16:19:49 | MrPecunius | false | null | 0 | o7wf11n | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wf11n/ | false | 2 |
t1_o7wezrt | Why would you want it to "generate" smells? Audio is needed just like video, image and text, but smells are just... I don't know what to say, maybe to enrich the embeddings and increase the model's relational awareness? | -7 | 0 | 2026-02-28T16:19:38 | Silver-Champion-4846 | false | null | 0 | o7wezrt | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wezrt/ | false | -7 |
t1_o7wet36 | General knowledge Q&A, providing two Excel sheets and having it use data from both to give me the info I need, and generic text copy | 4 | 0 | 2026-02-28T16:18:43 | SocialDinamo | false | null | 0 | o7wet36 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wet36/ | false | 4 |
t1_o7werlv | Why an API provider and not local? The embedding models aren't that big | 2 | 0 | 2026-02-28T16:18:31 | meganoob1337 | false | null | 0 | o7werlv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7werlv/ | false | 2 |
t1_o7werb9 | `ollama launch codex --config` | 2 | 0 | 2026-02-28T16:18:28 | shuravi108 | false | null | 0 | o7werb9 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7werb9/ | false | 2 |
t1_o7wemna | I'm on a weak gaming rig, so the 8B model. | 2 | 0 | 2026-02-28T16:17:49 | PlainBread | false | null | 0 | o7wemna | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7wemna/ | false | 2 |
t1_o7welqv | I have trust issues with quants, so since I can, I use the q8 | 27 | 0 | 2026-02-28T16:17:42 | SocialDinamo | false | null | 0 | o7welqv | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7welqv/ | false | 27 |
t1_o7weclc | A single 3090 is much faster. First step is to lose Ollama. Next is to ditch Windows if you're using more than 2 GPUs. I did a lot of testing on my PCs, and Linux was 3-6x faster with 3 GPUs. | 1 | 0 | 2026-02-28T16:16:26 | lemondrops9 | false | null | 0 | o7weclc | false | /r/LocalLLaMA/comments/1rgynmf/dual_3060_and_single_3090_whats_the_point_of_the/o7weclc/ | false | 1 |
t1_o7wecd6 | I doubt it too, but if true it will be a big step forward in multi-modal models. It would also give a lot of real world intuition | 1 | 0 | 2026-02-28T16:16:24 | -dysangel- | false | null | 0 | o7wecd6 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7wecd6/ | false | 1 |
t1_o7we5iq | We only ask for the Google sign-up once the app is booted up. Nothing else! | 1 | 0 | 2026-02-28T16:15:27 | EmbarrassedAsk2887 | false | null | 0 | o7we5iq | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7we5iq/ | false | 1 |
t1_o7we4pm | plus unless it can generate smells, is it really multimodal? | 21 | 0 | 2026-02-28T16:15:20 | -dysangel- | false | null | 0 | o7we4pm | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7we4pm/ | false | 21 |
t1_o7we431 | https://www.reddit.com/r/LocalLLaMA/s/18uIZHXu2z and then the same guy did a follow up with some experiments requested in the comments | 7 | 0 | 2026-02-28T16:15:15 | JumboShock | false | null | 0 | o7we431 | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7we431/ | false | 7 |
t1_o7we0s9 | according to people familiar with the matter | 1 | 0 | 2026-02-28T16:14:47 | -dysangel- | false | null | 0 | o7we0s9 | false | /r/LocalLLaMA/comments/1rh095c/deepseek_v4_will_be_released_next_week_and_will/o7we0s9/ | false | 1 |
t1_o7we05n | Mistral never fine-tuned DeepSeek - I don't know where you are getting your information from | 6 | 0 | 2026-02-28T16:14:42 | pvp239 | false | null | 0 | o7we05n | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7we05n/ | false | 6 |
t1_o7wduie | Thought of giving more context: I just wanna make use of these AIs to access one of the trading platform APIs I have and do an analysis on the data once every 5 mins, another to fetch the news from any stock-market news-sharing API, and another AI to do the calculation using TA-Lib on whether to buy or sell a stock, and if so when to... | 1 | 0 | 2026-02-28T16:13:55 | Network-Zealousideal | false | null | 0 | o7wduie | false | /r/LocalLLaMA/comments/1rh6e38/how_to_make_ai_collaborate_to_get_my_work_done/o7wduie/ | false | 1 |
t1_o7wdl8w | [removed] | 1 | 0 | 2026-02-28T16:12:38 | [deleted] | true | null | 0 | o7wdl8w | false | /r/LocalLLaMA/comments/1q5lf5p/top_open_llm_for_consumers_start_of_2026_bookmark/o7wdl8w/ | false | 1 |
t1_o7wdkvn | They aren't going down, and you guys are being dramatic.
All the government can do is order its agencies and the military not to use the technology. That's it. They aren't banning it at the consumer level; that would quite literally take an act of Congress.
Relax. | 4 | 0 | 2026-02-28T16:12:35 | Orpheusly | false | null | 0 | o7wdkvn | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wdkvn/ | false | 4 |
t1_o7wdkm8 | No worries! It's a real open-source library, not a promo post — but I get the skepticism, there's a lot of noise out there. | 1 | 0 | 2026-02-28T16:12:33 | MotorAlternative8045 | false | null | 0 | o7wdkm8 | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wdkm8/ | false | 1 |
t1_o7wdjjf | Sorry, it's more correct to say they used V3's architecture with their own datasets. The result came out poor nonetheless | 1 | 0 | 2026-02-28T16:12:24 | ForsookComparison | false | null | 0 | o7wdjjf | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wdjjf/ | false | 1 |
t1_o7wdipt | I sent a concerned email to Anthropic to make sure they stick to this, and move on to solving the environmental issues | 1 | 0 | 2026-02-28T16:12:17 | Talamae-Laeraxius | false | null | 0 | o7wdipt | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wdipt/ | false | 1 |
t1_o7wdirc | [removed] | 1 | 0 | 2026-02-28T16:12:17 | [deleted] | true | null | 0 | o7wdirc | false | /r/LocalLLaMA/comments/1q5lf5p/top_open_llm_for_consumers_start_of_2026_bookmark/o7wdirc/ | false | 1 |
t1_o7wdid6 | Does anyone actually have it? I purchased one and will be testing today, but could do with guidance | 1 | 0 | 2026-02-28T16:12:14 | CharmingUsual7267 | false | null | 0 | o7wdid6 | false | /r/LocalLLaMA/comments/1khqyds/gmk_evox2_ai_max_395_minipc_review/o7wdid6/ | false | 1 |
t1_o7wdhjm | [removed] | 1 | 0 | 2026-02-28T16:12:07 | [deleted] | true | null | 0 | o7wdhjm | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wdhjm/ | false | 1 |
t1_o7wdh0a | I'm building for this. Essentially a suite of OSS pre-configs you can refresh and update on known-good builds. It's a pain because it's a lot of moving pieces, but it's really sweet hitting a button and having all your stuff deployed and just working. | 1 | 0 | 2026-02-28T16:12:02 | Signal_Ad657 | false | null | 0 | o7wdh0a | false | /r/LocalLLaMA/comments/1rh52t9/config_drift_is_the_silent_killer_of_local_model/o7wdh0a/ | false | 1 |
t1_o7wdg95 | Too many to mention. I am waiting for a version of qwen 3.5 that is small enough to fit on my machine. | 0 | 0 | 2026-02-28T16:11:56 | HigherConfusion | false | null | 0 | o7wdg95 | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wdg95/ | false | 0 |
t1_o7wddtc | A GPU is typically hundreds of times faster, but it does depend on your use case. | 1 | 0 | 2026-02-28T16:11:36 | Protopia | false | null | 0 | o7wddtc | false | /r/LocalLLaMA/comments/1rgixk7/accuracy_vs_speed_my_top_5/o7wddtc/ | false | 1 |
t1_o7wd9va | Mistral never fine-tuned Deepseek - where does this come from? | 2 | 0 | 2026-02-28T16:11:02 | pvp239 | false | null | 0 | o7wd9va | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7wd9va/ | false | 2 |
t1_o7wd6jv | I'm sure I could ask ChatGPT how to connect my local model, but if you wouldn't mind, what commands are you launching it with, or how are you setting up the local model connection? I use LM Studio to host a local server with 27b. | 1 | 0 | 2026-02-28T16:10:35 | _-_David | false | null | 0 | o7wd6jv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wd6jv/ | false | 1 |
t1_o7wd5r0 | I ran q8 when I tested locally. I also ran with OpenRouter to rule out any problem with my local setup. I think they serve the unquantized version? | 2 | 0 | 2026-02-28T16:10:28 | chibop1 | false | null | 0 | o7wd5r0 | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wd5r0/ | false | 2 |
t1_o7wd4z4 | Like the other poster said, make sure you are using the correct settings for the model.
We recommend using the following set of sampling parameters for generation:
* Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for pre... | 2 | 0 | 2026-02-28T16:10:21 | thegr8anand | false | null | 0 | o7wd4z4 | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7wd4z4/ | false | 2 |
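As a hedged sketch of passing those samplers per request to a llama.cpp-style OpenAI-compatible server (top_k and min_p are extensions beyond the strict OpenAI schema; the endpoint and prompt are placeholders):

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Explain GQA briefly."}],
    "temperature": 1.0,
    "top_p": 0.95,
    "top_k": 20,
    "min_p": 0.0,
    "presence_penalty": 1.5
  }'
```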
t1_o7wd0fh | Why does this sub even allow it? I'm glad it's called out so fast at least. | 1 | 0 | 2026-02-28T16:09:43 | LickMyTicker | false | null | 0 | o7wd0fh | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wd0fh/ | false | 1 |
t1_o7wd03c | What kind of office tasks? | 3 | 0 | 2026-02-28T16:09:40 | mzinz | false | null | 0 | o7wd03c | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wd03c/ | false | 3 |
t1_o7wcyuw | So, why not put it against a model of the same caliber?
Qwen3.3-122B-A10B is in the same "size" category. I wonder if that is just miles better. | 4 | 0 | 2026-02-28T16:09:30 | guesdo | false | null | 0 | o7wcyuw | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wcyuw/ | false | 4 |
t1_o7wcxba | [removed] | 1 | 0 | 2026-02-28T16:09:18 | [deleted] | true | null | 0 | o7wcxba | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wcxba/ | false | 1 |
t1_o7wcwlq | lol the model finding loopholes to think anyway is both hilarious and kind of unsettling. like it knows it needs to reason but you told it not to... so it just does it somewhere else | 121 | 0 | 2026-02-28T16:09:11 | RobertLigthart | false | null | 0 | o7wcwlq | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wcwlq/ | false | 121 |
t1_o7wcu1z | Qdrant and pgvector are awesome — if you want to run a server.
LokulMem runs **in the browser tab itself**. No deployment, no infra, zero data leaving the device. Think less "vector database" and more "your users' AI actually remembers them — privately, locally, no backend required."
Different tool for a different pro... | -1 | 0 | 2026-02-28T16:08:50 | MotorAlternative8045 | false | null | 0 | o7wcu1z | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wcu1z/ | false | -1 |
t1_o7wct3m | How is it 1/3 the size if gpt-oss-120b is literally the same size as Qwen-3-30b?
Considering OSS-120B is only available in MXFP4 and they've optimized the KV caches pretty aggressively via SWA/SA, I believe Qwen-3-30b may even be a bit harder to run due to GQA and larger cache sizes.
Qwen-3.5-35B has gated delta-net l... | -5 | 0 | 2026-02-28T16:08:41 | netikas | false | null | 0 | o7wct3m | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wct3m/ | false | -5 |
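For scale, a back-of-envelope for dense-attention KV-cache growth; every number below is an illustrative assumption, not either model's actual config:

```
kv_bytes_per_token ≈ 2 (K and V) × n_layers × n_kv_heads × d_head × bytes_per_elem
e.g. 2 × 48 × 8 × 128 × 2 B ≈ 192 KiB/token, so roughly 6 GiB at a 32k context
```

SWA caps that growth for the sliding-window layers, which is the asymmetry the comment is pointing at.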
t1_o7wcsef | I didn't delete Qwen3 30b, but I can't fathom why I'd ever load it up ever again. 35b is simply better by a lot. It replaced 30b in my processes perfectly. Qwen3.5 has a data cutoff of January 2026, though according to the model, it thinks it knows about July 2026 things. This is literally frontier for models. But if I were to... | 0 | 0 | 2026-02-28T16:08:36 | sleepingsysadmin | false | null | 0 | o7wcsef | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wcsef/ | false | 0 |
t1_o7wcrxu | I have 2x3090 (with NVLink) and a 2080Ti, along with 256GB DDR4-3200, which would you recommend? | 3 | 0 | 2026-02-28T16:08:32 | decrement-- | false | null | 0 | o7wcrxu | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wcrxu/ | false | 3 |
t1_o7wcqvf | I am horny for the 9B one. | 11 | 0 | 2026-02-28T16:08:23 | dryadofelysium | false | null | 0 | o7wcqvf | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wcqvf/ | false | 11 |
t1_o7wcppk | lol | 5 | 0 | 2026-02-28T16:08:13 | rusl1 | false | null | 0 | o7wcppk | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wcppk/ | false | 5 |
t1_o7wco1h | Russia has democracy, open elections with thousands of Western official observers, and they never reported any issues with the election process.
Who told you that Russia doesn't have democracy? Ahh, the same people who brainwashed you into taking the CV19 poison? | 1 | 0 | 2026-02-28T16:07:59 | ImportancePitiful795 | false | null | 0 | o7wco1h | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7wco1h/ | false | 1 |
t1_o7wcnmw | Fair point — there are plenty of self-hosted vector DBs. LokulMem isn't one of them.
It runs **entirely in the browser** , no server to host, no Docker container, no infra to maintain. Storage is IndexedDB, embeddings run via a quantised MiniLM in a SharedWorker using Transformers.js. Zero backend, zero API key.
The ... | -2 | 0 | 2026-02-28T16:07:55 | MotorAlternative8045 | false | null | 0 | o7wcnmw | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wcnmw/ | false | -2 |
t1_o7wcgns | KLD is the base metric, the "you must be this tall to ride the ride." Nvidia uses KLD and only KLD when they make their NVFP4s. | 1 | 0 | 2026-02-28T16:06:57 | Phaelon74 | false | null | 0 | o7wcgns | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7wcgns/ | false | 1 |
t1_o7wcdzf | It does not necessarily make them better for a particular purpose. And if it does, there may be no need for it whatsoever. Implement, test, and run as is until there is an actual need for change. | 12 | 0 | 2026-02-28T16:06:34 | Prudent-Ad4509 | false | null | 0 | o7wcdzf | false | /r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7wcdzf/ | false | 12 |
t1_o7wcdks | Which ones have you already tested? | 1 | 0 | 2026-02-28T16:06:31 | AppealThink1733 | false | null | 0 | o7wcdks | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wcdks/ | false | 1 |
t1_o7wcdco | What size memory is required? Could I run it on a Mac mini maxed out? Or Mac Studio? | 1 | 0 | 2026-02-28T16:06:29 | Lastb0isct | false | null | 0 | o7wcdco | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7wcdco/ | false | 1 |
t1_o7wccxc | I don't know about everyone else - I was using GPT4all because it was very easy to install and run, but I couldn't get any newer models to run on it. Recently I switched to LM Studio because I wanted to run one of the latest Mistral models, and now that it's working and it does what I want it to do, I'm not testing any... | 3 | 0 | 2026-02-28T16:06:26 | Sr4f | false | null | 0 | o7wccxc | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7wccxc/ | false | 3 |
t1_o7wcbz6 | "Clone repo" where, on GitHub or in my PC? I'm very new to this, a complete beginner, and I'm not a programmer, just a fiction writer ;-) THX | 1 | 0 | 2026-02-28T16:06:18 | timeshifter24 | false | null | 0 | o7wcbz6 | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7wcbz6/ | false | 1 |
t1_o7wcao4 | A lot of people are very happy with the 35B model, and it is much faster. So, just something that has to be tested out. | 1 | 0 | 2026-02-28T16:06:07 | coder543 | false | null | 0 | o7wcao4 | false | /r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7wcao4/ | false | 1 |
t1_o7wc6ge | Ah. Report back with how it goes. I’m assuming you’re going to offload some of it to RAM? | 1 | 0 | 2026-02-28T16:05:31 | Laabc123 | false | null | 0 | o7wc6ge | false | /r/LocalLLaMA/comments/1rh1ec9/swapping_gptoss120b_for_qwen35122b_on_a_128gb/o7wc6ge/ | false | 1 |
t1_o7wc67x | Anyone got an M4 and could comment on performance? | 1 | 0 | 2026-02-28T16:05:29 | ChickenShieeeeeet | false | null | 0 | o7wc67x | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wc67x/ | false | 1 |
t1_o7wc5yc | Remember, we ONLY do MXFP4 on models that were done with QAT. If you do MXFP4 on a model that is not QAT, it will be worse. NVFP4 is also only good if they do QAD and/or QAT. | 1 | 0 | 2026-02-28T16:05:27 | Phaelon74 | false | null | 0 | o7wc5yc | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7wc5yc/ | false | 1 |
t1_o7wc5ja | New release 1.1.0 of RabbitLLM just came out. | 1 | 0 | 2026-02-28T16:05:23 | Protopia | false | null | 0 | o7wc5ja | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7wc5ja/ | false | 1 |
t1_o7wc4i6 | I wonder what effect quantization has on these. I've had great results with the fp8 version of Qwen3-Coder-Next in my orchestrated agent workflows. Same with the unquantized version of GLM-4.7-Flash. Both manage to maintain state and purpose over long running tasks. | 1 | 0 | 2026-02-28T16:05:14 | rmhubbert | false | null | 0 | o7wc4i6 | false | /r/LocalLLaMA/comments/1rh12xz/qwen3535b_nailed_my_simple_multiagent_workflow/o7wc4i6/ | false | 1 |
t1_o7wc2i3 | Came here to say this | 3 | 0 | 2026-02-28T16:04:58 | Tyme4Trouble | false | null | 0 | o7wc2i3 | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wc2i3/ | false | 3 |
t1_o7wc1sw | I thought "local" in /r/LocalLLaMA stood for not having a subscription in the first place? | 13 | 0 | 2026-02-28T16:04:52 | Protheu5 | false | null | 0 | o7wc1sw | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wc1sw/ | false | 13 |
t1_o7wc1l7 | Qdrant, Postgres vector? | 3 | 0 | 2026-02-28T16:04:50 | DedsPhil | false | null | 0 | o7wc1l7 | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wc1l7/ | false | 3 |
t1_o7wc084 | Not as fluent. I still haven't found a model I can run on my machine that is as good at Danish as Gemma 12B | 3 | 0 | 2026-02-28T16:04:39 | HigherConfusion | false | null | 0 | o7wc084 | false | /r/LocalLLaMA/comments/1rh3thm/rip_gemma_leave_your_memories_here/o7wc084/ | false | 3 |
t1_o7wbwks | I will check that | 1 | 0 | 2026-02-28T16:04:09 | Mrdeadbuddy | false | null | 0 | o7wbwks | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7wbwks/ | false | 1 |
t1_o7wbsu4 | That’s encouraging. I will have to play with it this weekend. gpt-oss-120b has been my go-to for good tool use, accurate summarization, and modest world knowledge since release, particularly once I converted the derestricted versions back to MXFP4.
Thanks for the suggestion! | 4 | 0 | 2026-02-28T16:03:37 | txgsync | false | null | 0 | o7wbsu4 | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wbsu4/ | false | 4 |
t1_o7wbs7l | Unless they do QAD or QAT, NVFP4 is worse than W4A16. | 1 | 0 | 2026-02-28T16:03:32 | Phaelon74 | false | null | 0 | o7wbs7l | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7wbs7l/ | false | 1 |
t1_o7wbrwa | Yup, should be runnable. You need MoE offload, but it should still be usably fast. | 2 | 0 | 2026-02-28T16:03:29 | Refefer | false | null | 0 | o7wbrwa | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7wbrwa/ | false | 2 |
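Concretely, a hedged sketch of that MoE offload with llama.cpp (flag availability varies by build; the model path and expert-layer count are placeholders):

```bash
# Keep all layers' attention on the GPU (-ngl 99) but push the expert
# (MoE) tensors of the first 24 layers to CPU RAM.
llama-server -m Qwen3.5-35B-A3B-Q4_K_M.gguf -ngl 99 --n-cpu-moe 24
```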
t1_o7wbqjq | 16 | 1 | 0 | 2026-02-28T16:03:18 | Mrdeadbuddy | false | null | 0 | o7wbqjq | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7wbqjq/ | false | 1 |
t1_o7wboao | That's hyperbole. It would blacklist contracted companies from using Claude *to work on military contracts.* It wouldn't blacklist AWS from serving up Claude to consumers who have nothing to do with military contracts. | 1 | 1 | 2026-02-28T16:02:59 | Bite_It_You_Scum | false | null | 0 | o7wboao | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wboao/ | false | 1 |
t1_o7wbmq9 | Maybe opening up R1 model parameters for Roo Code or Kilo Code. That one works for Qwen3.5 deployed with vLLM or llama.cpp.
https://preview.redd.it/z6u8bksmd9mg1.png?width=1135&format=png&auto=webp&s=4487240fc875e1250af5b4af522aa2853415526b
| 1 | 0 | 2026-02-28T16:02:46 | lly0571 | false | null | 0 | o7wbmq9 | false | /r/LocalLLaMA/comments/1rh0yim/how_to_use_qwen_35_35b_with_any_agentic_coding/o7wbmq9/ | false | 1 |
t1_o7wbjyk | slop post. there are lots of self-hosted vector dbs.
| 6 | 0 | 2026-02-28T16:02:22 | StewedAngelSkins | false | null | 0 | o7wbjyk | false | /r/LocalLLaMA/comments/1rh639o/your_ollama_setup_is_private_your_memory_layer/o7wbjyk/ | false | 6 |
t1_o7wbj5l | ok that's what I was wondering. So a 35B doesn't perform as well as a 27B? I have one Mac Mini M4 base model, which is 16GB, and from what I'm reading, I could find other Mac Minis to add to it. My only fear with running it all on the server itself is the lack of ability to monitor it if something starts going wrong. An exte... | 1 | 0 | 2026-02-28T16:02:16 | MartiniCommander | false | null | 0 | o7wbj5l | false | /r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7wbj5l/ | false | 1 |
t1_o7wbdcu | I've a 4090 48G, and unluckily resizable BAR doesn't work out of the box.
And I can't risk flashing the BIOS on a memory-modified card. | 1 | 0 | 2026-02-28T16:01:27 | XForceForbidden | false | null | 0 | o7wbdcu | false | /r/LocalLLaMA/comments/1r66jyp/vllm_maximum_performance_on_multi3090/o7wbdcu/ | false | 1 |
t1_o7wb9x6 | >Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.
[https://x.com/sama/status/2027... | 4 | 0 | 2026-02-28T16:00:57 | GarbanzoBenne | false | null | 0 | o7wb9x6 | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wb9x6/ | false | 4 |
t1_o7wb72m | Geez, that is so hilarious and insane!
| 18 | 0 | 2026-02-28T16:00:33 | natufian | false | null | 0 | o7wb72m | false | /r/LocalLLaMA/comments/1rh5luv/qwen35_35ba3b_evaded_the_zeroreasoning_budget_by/o7wb72m/ | false | 18 |
t1_o7wazcr | [removed] | 1 | 0 | 2026-02-28T15:59:28 | [deleted] | true | null | 0 | o7wazcr | false | /r/LocalLLaMA/comments/1inmu01/best_llm_router_comparison/o7wazcr/ | false | 1 |
t1_o7waxju | That’s helpful. | 1 | 0 | 2026-02-28T15:59:13 | silenceimpaired | false | null | 0 | o7waxju | false | /r/LocalLLaMA/comments/1rh4p8i/okay_im_overthinking_yes_yes_you_are_qwen_35_27b/o7waxju/ | false | 1 |
t1_o7wam9p | At what speed does it run for you before and after disabling/enabling thinking? | 1 | 0 | 2026-02-28T15:57:39 | ChickenShieeeeeet | false | null | 0 | o7wam9p | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o7wam9p/ | false | 1 |
t1_o7wajvl | I'm not sure about your application, but I've found that the standard software development strategy applies:
- Break it into smaller pieces.
- Write better tests | 3 | 0 | 2026-02-28T15:57:20 | Tai9ch | false | null | 0 | o7wajvl | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7wajvl/ | false | 3 |
t1_o7waeys | Because they didn't build it; the LLM assembled it together. Should a damn calendar require an RTX 3090 and an extraordinary amount of RAM and high-speed storage? Goofy as hell | 2 | 0 | 2026-02-28T15:56:40 | SMELLYCHEESE8 | false | null | 0 | o7waeys | false | /r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7waeys/ | false | 2 |
t1_o7waaku | I plan on doing that this week. What front end are you using for chat? | 1 | 0 | 2026-02-28T15:56:03 | Wildnimal | false | null | 0 | o7waaku | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7waaku/ | false | 1 |
t1_o7wa9uv | Cries in RTX 2060 with 6GB VRAM :-/
2-3tps on llama-server
llama-server \
  -hf bartowski/Qwen_Qwen3.5-35B-A3B-GGUF:IQ2_XS \
  -ngl 15 \
  -c 24000 \
  -b 2048 \
  --port 8129 \
  --host 0.0.0.0 \
  --chat-template-kwargs '{"enable_thinking": false}' \
  --reasoning-budget... | 4 | 0 | 2026-02-28T15:55:58 | AppealSame4367 | false | null | 0 | o7wa9uv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wa9uv/ | false | 4 |
t1_o7wa9xl | Can you link them, please? | 1 | 0 | 2026-02-28T15:55:58 | r00rback | false | null | 0 | o7wa9xl | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wa9xl/ | false | 1 |
t1_o7wa8r5 | They've released 0 local models. Fuck 'em. | 10 | 0 | 2026-02-28T15:55:48 | Retnik | false | null | 0 | o7wa8r5 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7wa8r5/ | false | 10 |
t1_o7wa4xm | Yea? Source it.
If that was the sticking point, then OpenAI wouldn't have been allowed to sign either. I call BS. | 5 | 0 | 2026-02-28T15:55:17 | quantgorithm | false | null | 0 | o7wa4xm | false | /r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7wa4xm/ | false | 5 |
t1_o7wa1x8 | That makes sense, especially if your taxonomy is evolving.
Intent + entity extraction is usually more flexible than rigid multi-layer classification, particularly if categories are going to change over time.
One thing I'd watch for, though: if the LLM is doing all the heavy lifting, latency and cost can creep up quick... | 1 | 0 | 2026-02-28T15:54:53 | Individual_Round7690 | false | null | 0 | o7wa1x8 | false | /r/LocalLLaMA/comments/1nvre5c/ticket_categorization_classifying_tickets_into/o7wa1x8/ | false | 1 |
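As a sketch of the single-call intent + entity shape being described here; the endpoint, output schema, and ticket text are all illustrative assumptions:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "Extract intent and entities from the support ticket. Reply only with JSON: {\"intent\": \"...\", \"entities\": [\"...\"]}"},
      {"role": "user", "content": "My invoice #4412 was charged twice this month."}
    ],
    "temperature": 0
  }'
```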
t1_o7wa1yv | Waited for it for days! Yes! | 3 | 0 | 2026-02-28T15:54:53 | AppealSame4367 | false | null | 0 | o7wa1yv | false | /r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7wa1yv/ | false | 3 |
t1_o7w9zix | Isn't GLM 5 comparable to Opus? Their benchmarks show they are somewhere close | 1 | 0 | 2026-02-28T15:54:33 | CommercialGuitar1104 | false | null | 0 | o7w9zix | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7w9zix/ | false | 1 |
t1_o7w9y9r | Try the kwargs chat template with no thinking if you haven't already. There's an example in the Unsloth docs for Qwen 3.5. | 1 | 0 | 2026-02-28T15:54:23 | Opposite-Station-337 | false | null | 0 | o7w9y9r | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7w9y9r/ | false | 1 |