name (string) | body (string) | score (int64) | controversiality (int64) | created (timestamp[us]) | author (string) | collapsed (bool) | edited (timestamp[us]) | gilded (int64) | id (string) | locked (bool) | permalink (string) | stickied (bool) | ups (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7uf6ry | Yes, but will it be WAY better than qwen3.5 9b? | 5 | 0 | 2026-02-28T07:37:00 | _-_David | false | null | 0 | o7uf6ry | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7uf6ry/ | false | 5 |
t1_o7uf6po | I've been using Devstral Small 2 to great effect as of late. Especially in combination with Mistral Vibe. And no I don't vibe code whole projects, but what I use it as a personal quality gate, after "finishing" a project I run mistral vibe over it with the instruction of making sure my code is ready for opening a pull ... | 3 | 0 | 2026-02-28T07:36:59 | WildDogOne | false | null | 0 | o7uf6po | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uf6po/ | false | 3 |
t1_o7uf3te | > chatting with Claude Opus about picking up a USB drive at Target.<br>> Simple stuff<br>if only you knew how bad things really are... | 1 | 0 | 2026-02-28T07:36:14 | MelodicRecognition7 | false | null | 0 | o7uf3te | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7uf3te/ | false | 1 |
t1_o7uf0tz | The 120b once I redownload it. Looks like mine is corrupted or something. | 1 | 0 | 2026-02-28T07:35:29 | Ok-Measurement-1575 | false | null | 0 | o7uf0tz | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uf0tz/ | false | 1 |
t1_o7uexjy | it is an underrated model for sure. | 2 | 0 | 2026-02-28T07:34:40 | llama-impersonator | false | null | 0 | o7uexjy | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uexjy/ | false | 2 |
t1_o7uexds | That never happened | 20 | 0 | 2026-02-28T07:34:38 | Material_Policy6327 | false | null | 0 | o7uexds | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uexds/ | false | 20 |
t1_o7uewwf | At least they probably will be bought and integrated, e.g. Neither Amazon, Microsoft nor Apple has their own solution. | 1 | 0 | 2026-02-28T07:34:30 | LevianMcBirdo | false | null | 0 | o7uewwf | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uewwf/ | false | 1 |
t1_o7uewuc | Do you find that setup noticeably beneficial over, say, arguing with Claude in plan mode for a bit? | 1 | 0 | 2026-02-28T07:34:29 | slvrsmth | false | null | 0 | o7uewuc | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uewuc/ | false | 1 |
t1_o7uewco | I wonder if there’s a way to deeply embed this into an IDE like you can with Claude and Xcode.<br>https://developer.apple.com/videos/play/tech-talks/111428/ | 1 | 0 | 2026-02-28T07:34:23 | BahnMe | false | null | 0 | o7uewco | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uewco/ | false | 1 |
t1_o7ueucn | i don't think your youtube title being A in panic mode is correct. They made a very conscious decision not to engage into supporting a fascist regime. Plain simple. They don't want to be called the Siemens family after the facts.... | 13 | 0 | 2026-02-28T07:33:52 | arousedsquirel | false | null | 0 | o7ueucn | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ueucn/ | false | 13 |
t1_o7uerpq | normally you use like 4B models no? | 2 | 0 | 2026-02-28T07:33:11 | FPham | false | null | 0 | o7uerpq | false | /r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7uerpq/ | false | 2 |
t1_o7ueqx1 | https://www.anthropic.com/news/statement-department-of-war | 4 | 0 | 2026-02-28T07:32:58 | Deep90 | false | null | 0 | o7ueqx1 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ueqx1/ | false | 4 |
t1_o7uemlo | I've personally really liked Step 3.5 Flash. It runs well on my system, doesn’t crater with larger contexts and it has pretty good quality output and versatility. | 4 | 0 | 2026-02-28T07:31:51 | spaceman_ | false | null | 0 | o7uemlo | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uemlo/ | false | 4 |
t1_o7uemg2 | I got this gem out when I asked it to analyze a photo... It had a whole existential crisis on me.<br>(Wait, I need to check if I should mention the pose)<br>Standing, smiling.<br>Okay.<br>(Wait, I need to check if I should mention the setting)<br>Outdoors, porch.<br>Okay.<br>(Wait, I need to check if I should mention the weat... | 2 | 0 | 2026-02-28T07:31:49 | _-_David | false | null | 0 | o7uemg2 | false | /r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7uemg2/ | false | 2 |
t1_o7ueln6 | no drama, I was curious just about how to run it continuously without interruption to get a result. | 3 | 0 | 2026-02-28T07:31:36 | Steus_au | false | null | 0 | o7ueln6 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ueln6/ | false | 3 |
t1_o7ueknj | provide sources | -1 | 0 | 2026-02-28T07:31:21 | thejacer | false | null | 0 | o7ueknj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ueknj/ | false | -1 |
t1_o7uehbe | > 2025<br>and is still in 2026, at least in Russia the only source of bleeding edge AI-related information is 2ch - Russian 4chan, all other "tech" sources are simply paid/advertisement theads of tech companies. | 13 | 0 | 2026-02-28T07:30:30 | MelodicRecognition7 | false | null | 0 | o7uehbe | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7uehbe/ | false | 13 |
t1_o7uefw8 | 35B MoE for out of the box inference. But waiting for smaller models to finetune. | 1 | 0 | 2026-02-28T07:30:09 | Middle_Bullfrog_6173 | false | null | 0 | o7uefw8 | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uefw8/ | false | 1 |
t1_o7uee5e | Im not sure that's a very valuable heuristic there. Where did you find it?<br>Just a quick look at major MoEs and dense models of the same period/company:<br>Mixtral 8x22B (Apr 2024) - 141B total, 39B active<br>- GM prediction: ~74B dense equivalent<br>- MMLU: ~77% / GSM8K: 90.8% (instruct, maj@8)<br>- Beats: Llama 2 70B decisively... | 2 | 0 | 2026-02-28T07:29:42 | MmmmMorphine | false | null | 0 | o7uee5e | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7uee5e/ | false | 2 |
t1_o7uedak | Indeed. | 1 | 0 | 2026-02-28T07:29:29 | Ok-Measurement-1575 | false | null | 0 | o7uedak | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7uedak/ | false | 1 |
t1_o7ue94c | llm greenhorn myself, so keep this in mind.<br>My setup is llama.cpp + opencode and I use [unsloth/Qwen3.5-35B-A3B-GGUF](https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF).<br>```<br>when is your cutoff date?<br>Thinking: The user is asking about my cutoff date, which is straightforward factual information I should provide wi... | 3 | 0 | 2026-02-28T07:28:26 | unnamed_one1 | false | null | 0 | o7ue94c | false | /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/o7ue94c/ | false | 3 |
t1_o7ue8ur | 5090 + 5060ti 16gb. I've been thinking about buying several more 5060's. But I might just wait to see what Gemma 4 looks like before I commit any further. 80gb would be nice though.. | 1 | 0 | 2026-02-28T07:28:22 | _-_David | false | null | 0 | o7ue8ur | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7ue8ur/ | false | 1 |
t1_o7ue5n4 | I was using Q4_0 until now for the RAM savings on some models. (On others I have picked correctly by accident).<br>But today I’m picking my GPU with int4 hardware acceleration and I read my choice until now doesn’t benefit from this, so I’ll move to the newer format too. | 1 | 0 | 2026-02-28T07:27:34 | ProfessionalSpend589 | false | null | 0 | o7ue5n4 | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7ue5n4/ | false | 1 |
t1_o7udwia | I get 10-11 toks/second with 8gb vram and 33gb ddr5 5600mhz ram.<br>Using LM Studio | 2 | 0 | 2026-02-28T07:25:15 | Wildnimal | false | null | 0 | o7udwia | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7udwia/ | false | 2 |
t1_o7udvs3 | Not only that, Chinese posted all papers on that. | 3 | 0 | 2026-02-28T07:25:04 | FPham | false | null | 0 | o7udvs3 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7udvs3/ | false | 3 |
t1_o7udv0z | This makes no sense. | 3 | 0 | 2026-02-28T07:24:53 | ttkciar | false | null | 0 | o7udv0z | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7udv0z/ | false | 3 |
t1_o7udt9t | 27b benchmarks as a gpt-5-mini replacement. And the agentic benchmark scores are insane. Anything that is tool-use heavy, or I/O bound, and doesn't require the absolute fastest inference speed is going to get this beauty assigned to it. Now to get a 4b(or smaller) model for speculative decoding! | 9 | 0 | 2026-02-28T07:24:26 | _-_David | false | null | 0 | o7udt9t | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7udt9t/ | false | 9 |
t1_o7udp72 | I don’t get it either when FemtoClaw only uses 1MB of RAM | 1 | 0 | 2026-02-28T07:23:26 | kediacorp | false | null | 0 | o7udp72 | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7udp72/ | false | 1 |
t1_o7udlr9 | 35BA3 since I only have ddr5, no gpu. It is two times slower than the old 30A3, but could be useful for a quick chat or small changes and questions, feels much more capable from my early tests. | 1 | 0 | 2026-02-28T07:22:35 | Thomas-Lore | false | null | 0 | o7udlr9 | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7udlr9/ | false | 1 |
t1_o7udj13 | That is why I use Gemma 3 as my image prompt generator. | 3 | 0 | 2026-02-28T07:21:55 | arbv | false | null | 0 | o7udj13 | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7udj13/ | false | 3 |
t1_o7udfc6 | the small card could be a bottleneck, have you tried to use 32GB VRAM - 2x5060Ti only?<br>make sure to disable Hyperthreading/SMT and enable Turbo Boost in the BIOS. | 1 | 0 | 2026-02-28T07:21:00 | MelodicRecognition7 | false | null | 0 | o7udfc6 | false | /r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o7udfc6/ | false | 1 |
t1_o7ude4c | 27b punches at deepseek so the 27b ... Why choose anything? Go try it | 6 | 0 | 2026-02-28T07:20:42 | Ok_Technology_5962 | false | null | 0 | o7ude4c | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ude4c/ | false | 6 |
t1_o7udc6l | I guess the people who took out second mortgages to buy RTX 6000 Pro's are the real winners here. | 3 | 0 | 2026-02-28T07:20:14 | laterbreh | false | null | 0 | o7udc6l | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7udc6l/ | false | 3 |
t1_o7ud7do | There was similar discussions of SpaceX being nationalized by Biden admin. | -37 | 0 | 2026-02-28T07:19:01 | KeikakuAccelerator | false | null | 0 | o7ud7do | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ud7do/ | false | -37 |
t1_o7ud4nh | 35B A3B, 16gb vram (5080) plus 64gb ddr4 3200mhz. getting 11-13 toks/second. but damn... the 'thinking' part can be slow.<br>PS it also passes the car wash test...😁 | 3 | 0 | 2026-02-28T07:18:19 | yuhjulio | false | null | 0 | o7ud4nh | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ud4nh/ | false | 3 |
t1_o7uczxt | lold at chinese bots downvoting legit questions. | 2 | 0 | 2026-02-28T07:17:09 | MelodicRecognition7 | false | null | 0 | o7uczxt | false | /r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/o7uczxt/ | false | 2 |
t1_o7ucyid | Imagine if some random guy with contacts just finetunes a chinese model and sells it to the military. | 4 | 0 | 2026-02-28T07:16:47 | True_Requirement_891 | false | null | 0 | o7ucyid | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ucyid/ | false | 4 |
t1_o7ucyb4 | As someone that has used all of the open source models locally on my multi rtx pro system, I can safely tell you youre not evaluating minimax 2.5 properly. | 1 | 0 | 2026-02-28T07:16:44 | laterbreh | false | null | 0 | o7ucyb4 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ucyb4/ | false | 1 |
t1_o7ucx6b | yes cause llama cpp does not support kv cache reuse for qwen 3.5 35b a3b multimodal models yet, but its a work in process!!<br>[https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1563](https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/1563) | 3 | 0 | 2026-02-28T07:16:26 | FORNAX_460 | false | null | 0 | o7ucx6b | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7ucx6b/ | false | 3 |
t1_o7ucvke | 397b but i'm too lazy to move a few large models to hdd to clear up room for it right now. eventually i will banish step-3.5-flash to the underworld and see if the 3 bit big qwen is superior to the 4 bit GLM 4.7. | 2 | 0 | 2026-02-28T07:16:02 | llama-impersonator | false | null | 0 | o7ucvke | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ucvke/ | false | 2 |
t1_o7ucoc0 | I'm mainly interested in the 27B dense, because when all other factors are equal a dense model gives you the most smarts for a given inference memory budget. It fits nicely in my 32GB MI60 at Q4_K_M with space for context (K and V caches).<br>I do plan on evaluating the 35B eventually too, just to see how it stacks up a... | 25 | 0 | 2026-02-28T07:14:14 | ttkciar | false | null | 0 | o7ucoc0 | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ucoc0/ | false | 25 |
t1_o7uco7n | I think 27B for multi card as it's dense. 35B or 122B if have a single card or unified system memory setup | 1 | 0 | 2026-02-28T07:14:12 | AloneSYD | false | null | 0 | o7uco7n | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uco7n/ | false | 1 |
t1_o7ucnxe | I got it thanks...I'm also using quantized 7B models | 1 | 0 | 2026-02-28T07:14:08 | Less_Strain7577 | false | null | 0 | o7ucnxe | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7ucnxe/ | false | 1 |
t1_o7uckgd | I don't have crazy hardware, so i haven't \*thoroughly\* tested it, but this is the vibe i get from my testing. If I have to go below q4 to run it, I'm better going down a tier of model and getting the q4 | 3 | 0 | 2026-02-28T07:13:18 | National_Meeting_749 | false | null | 0 | o7uckgd | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7uckgd/ | false | 3 |
t1_o7ucjzy | Successful community -> community grows -> loses the magic. The vicious circle of internet life (and maybe IRL?) | 13 | 0 | 2026-02-28T07:13:11 | BlobbyMcBlobber | false | null | 0 | o7ucjzy | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ucjzy/ | false | 13 |
t1_o7uchgo | Running 35B @ Q6_K | 6 | 0 | 2026-02-28T07:12:33 | BumblebeeParty6389 | false | null | 0 | o7uchgo | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uchgo/ | false | 6 |
t1_o7ucgfn | January 2024 I was waxing poetic about deploying finetuned small models for enterprise business bullshit and I had to stop to cite localllama..."they all just use AI for porn, but they're really good at it." | 4 | 0 | 2026-02-28T07:12:18 | thejacer | false | null | 0 | o7ucgfn | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ucgfn/ | false | 4 |
t1_o7ucbni | does that mean you only have to wait for a maximum of an hour, not the whole day like other AI apps? that's great. thank you for the info man😊 | 1 | 0 | 2026-02-28T07:11:08 | mmooncake | false | null | 0 | o7ucbni | false | /r/LocalLLaMA/comments/1r9p1zu/what_are_the_rate_limits_for_arena_lmarena/o7ucbni/ | false | 1 |
t1_o7uc7cb | AWS isnt barred from business with Anthropic.<br>Data centers also exist in European countries btw. | 0 | 0 | 2026-02-28T07:10:03 | randombsname1 | false | null | 0 | o7uc7cb | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uc7cb/ | false | 0 |
t1_o7uc4i8 | I'm not going to believe they just discovered now that the chinese are distilling claude. They are doing it since 2023 at least. | 0 | 0 | 2026-02-28T07:09:21 | ortegaalfredo | false | null | 0 | o7uc4i8 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uc4i8/ | false | 0 |
t1_o7uc3hz | But what would be their backbone? Where is the cloud provider that does not work with US military? | 1 | 0 | 2026-02-28T07:09:06 | FPham | false | null | 0 | o7uc3hz | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uc3hz/ | false | 1 |
t1_o7uc0g2 | Like AWS or any cloud provider carrying Anthropic.... how they going to serve claude? On AOL disks in mail? | -1 | 0 | 2026-02-28T07:08:21 | FPham | false | null | 0 | o7uc0g2 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uc0g2/ | false | -1 |
t1_o7ubz7h | I'm curious if giving a few GB of VRAM to the 4B draft model could help folks that offload layers of 122B onto CPU. | 1 | 0 | 2026-02-28T07:08:03 | ForsookComparison | false | null | 0 | o7ubz7h | false | /r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7ubz7h/ | false | 1 |
t1_o7ubon0 | They don't have opinions so they just pick what's closest to what they'd generate.<br>Claude really likes Qwen outputs.<br>Gemini and Llama models are pretty tight.<br>ChatGPT and Deepseek hate each other.<br>Lots of fun patterns that don't mean much. | 3 | 0 | 2026-02-28T07:05:27 | ForsookComparison | false | null | 0 | o7ubon0 | false | /r/LocalLLaMA/comments/1rgt43l/using_a_third_llm_as_a_judge_to_evaluate_two/o7ubon0/ | false | 3 |
t1_o7ubmsg | Which model? | 1 | 0 | 2026-02-28T07:05:02 | wiltors42 | false | null | 0 | o7ubmsg | false | /r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7ubmsg/ | false | 1 |
t1_o7ubko8 | > I have 48gb of VRAM in my system<br>So you multiple 5060 ti 16GBs on yr machine ? or am i not getting you ? | 1 | 0 | 2026-02-28T07:04:27 | gmmarcus | false | null | 0 | o7ubko8 | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7ubko8/ | false | 1 |
t1_o7ubkh7 | Yeah but you have no idea why. It's clear, but won't argue that here. | -4 | 0 | 2026-02-28T07:04:23 | AcePilot01 | false | null | 0 | o7ubkh7 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ubkh7/ | false | -4 |
t1_o7ubhfc | [removed] | 1 | 0 | 2026-02-28T07:03:37 | [deleted] | true | null | 0 | o7ubhfc | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ubhfc/ | false | 1 |
t1_o7ubhae | You have 10 seconds to comply.<br>You now have 5 seconds to comply.<br>pew pew pew pew pew. lol | 2 | 0 | 2026-02-28T07:03:35 | AcePilot01 | false | null | 0 | o7ubhae | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ubhae/ | false | 2 |
t1_o7ubgbr | To me the fact that Dario and friends didn't cave to requests to make a kill-friendly Claude means that they think there's more money in pushing for safety regulations than there is in getting US military contracts.<br>*THAT* is scary for the future of self hosted LLMs if true. | 12 | 0 | 2026-02-28T07:03:21 | ForsookComparison | false | null | 0 | o7ubgbr | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ubgbr/ | false | 12 |
t1_o7ube15 | negotiating tactic, read the art of war. I bet you within a week a deal will be made. If not, a court action. | 0 | 0 | 2026-02-28T07:02:48 | AcePilot01 | false | null | 0 | o7ube15 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ube15/ | false | 0 |
t1_o7ubbrx | 122B NVFP4 once someone figures out how to get it stable on DGX spark 😬😥 | 6 | 0 | 2026-02-28T07:02:14 | lenjet | false | null | 0 | o7ubbrx | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ubbrx/ | false | 6 |
t1_o7ub9c1 | This is relatable. From thinking, "Wow! Qwen3-30b-a3b is actually decent! Maybe there is something to this local stuff", to buying a 5090 and saying, "Okay, but what is the actual use-case for this though" and never turning it on. I tried out opencode after GLM-4.7-Flash came out, but the finnicky looping behavior put ... | 6 | 0 | 2026-02-28T07:01:37 | _-_David | false | null | 0 | o7ub9c1 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ub9c1/ | false | 6 |
t1_o7ub7wa | Bytedance? | -12 | 0 | 2026-02-28T07:01:17 | FPham | false | null | 0 | o7ub7wa | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7ub7wa/ | false | -12 |
t1_o7ub7ub | A legalistic argument means Anthropic already lost. Most military stuff simply doesn't play by normal rules. They will never have another government contract. At this point they might be seized on suspicion of trying to alter military operations or intelligence. Especially since Anthropic is already enmeshed in classif... | -2 | 0 | 2026-02-28T07:01:16 | fervoredweb | false | null | 0 | o7ub7ub | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ub7ub/ | false | -2 |
t1_o7ub7ez | Are you training it with STDP only? Why there is a line scaler.scale(loss).backward()? | 1 | 0 | 2026-02-28T07:01:10 | cloudhan | false | null | 0 | o7ub7ez | false | /r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/o7ub7ez/ | false | 1 |
t1_o7ub6ce | Reaped down the 397b 22% for mlx. Posted 4bit and 2bit online on hf. | 4 | 0 | 2026-02-28T07:00:54 | HealthyCommunicat | false | null | 0 | o7ub6ce | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ub6ce/ | false | 4 |
t1_o7ub45x | What stack are you running? | 1 | 0 | 2026-02-28T07:00:23 | deadly_sin_666 | false | null | 0 | o7ub45x | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7ub45x/ | false | 1 |
t1_o7ub41w | Yes you should because of tool calling fixes | 1 | 0 | 2026-02-28T07:00:22 | yoracale | false | null | 0 | o7ub41w | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ub41w/ | false | 1 |
t1_o7ub03u | Need to download some cudas. | 2 | 0 | 2026-02-28T06:59:25 | No-Consequence-1779 | false | null | 0 | o7ub03u | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7ub03u/ | false | 2 |
t1_o7uazma | Thanks. I need to read up to understand 'small quant', ''MOE', etc.<br>> I find LLMs are more useful trained on a specific knowledge<br>So u train yr LLMs as well ?<br>p.s I an a heavy Claude Code - Opus 4.6 user. Looking to setup a local LLM for the fam. | 0 | 0 | 2026-02-28T06:59:17 | gmmarcus | false | null | 0 | o7uazma | false | /r/LocalLLaMA/comments/1rgcosw/trained_and_quantized_an_llm_on_a_gtx_1650_4gb/o7uazma/ | false | 0 |
t1_o7uax22 | Thanks I will try that! | 1 | 0 | 2026-02-28T06:58:40 | NaiRogers | false | null | 0 | o7uax22 | false | /r/LocalLLaMA/comments/1rgiait/switched_to_qwen35122ba10bi1gguf/o7uax22/ | false | 1 |
t1_o7uaw1c | 2 things that are already being done, fyi. Not reiterating it all here, but there is a valid point to both sides. Leaning towards anthropic, for now, but we still want to be the best technically and in military.<br>BUT things like the surv, or full control isn't really ready for that, and the surveillance might be the ... | -10 | 0 | 2026-02-28T06:58:25 | AcePilot01 | false | null | 0 | o7uaw1c | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uaw1c/ | false | -10 |
t1_o7uasly | lol “trying to squeeze the 35B into Mac Studios” if you can’t get that to fit then I don’t know what you’re doing wrong. | 23 | 0 | 2026-02-28T06:57:36 | And-Bee | false | null | 0 | o7uasly | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uasly/ | false | 23 |
t1_o7uarzh | the 35b I only have a 4gb gpu, but 32gb of ddr5 | 12 | 0 | 2026-02-28T06:57:27 | thebadslime | false | null | 0 | o7uarzh | false | /r/LocalLLaMA/comments/1rgvma8/which_size_of_qwen35_are_you_planning_to_run/o7uarzh/ | false | 12 |
t1_o7uap8w | **Update: Just reached 24.0% accuracy.**<br>I noticed many of you are lurking—probably skeptical that a solo student on a MacBook can outperform $10B clusters in efficiency. I get it.<br>To be clear: This isn't a simple script. The engine has grown to **127,700 lines of Python code**, featuring a massive library of symboli... | 1 | 0 | 2026-02-28T06:56:47 | Other_Train9419 | false | null | 0 | o7uap8w | false | /r/LocalLLaMA/comments/1rgmcw3/verantyx_235_on_arcagi2_on_a_macbook_06s_per_task/o7uap8w/ | false | 1 |
t1_o7uak9u | Wait, what? Why? | 2 | 0 | 2026-02-28T06:55:33 | Big_Mix_4044 | false | null | 0 | o7uak9u | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7uak9u/ | false | 2 |
t1_o7uak6q | “I dont have an argument. You mentioned Trump which activated my bot response conditions. I hide my comment history.” | 17 | 0 | 2026-02-28T06:55:32 | mspaintshoops | false | null | 0 | o7uak6q | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uak6q/ | false | 17 |
t1_o7uaja5 | made sense to me, any commercial activity is anything commercial. Not hard to understand lol. | 1 | 0 | 2026-02-28T06:55:19 | AcePilot01 | false | null | 0 | o7uaja5 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uaja5/ | false | 1 |
t1_o7uaj0j | Where did you hear about its cutoff being 2026? | 1 | 0 | 2026-02-28T06:55:14 | CluelessOuphe | false | null | 0 | o7uaj0j | false | /r/LocalLLaMA/comments/1rguzz2/qwen_35_cutoff_date_is_2024/o7uaj0j/ | false | 1 |
t1_o7uai6e | I am almost sure they are gonna move to EU | 2 | 0 | 2026-02-28T06:55:02 | Feeling-Currency-360 | false | null | 0 | o7uai6e | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uai6e/ | false | 2 |
t1_o7uahl4 | No not in lm studio I use llama.cpp | 1 | 0 | 2026-02-28T06:54:54 | Potential_Block4598 | false | null | 0 | o7uahl4 | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7uahl4/ | false | 1 |
t1_o7uah9g | For spec decoding all that matters is active param count. If your model has a10b, you need something that has less that, a3b models so that it has any effect | 1 | 0 | 2026-02-28T06:54:49 | HealthyCommunicat | false | null | 0 | o7uah9g | false | /r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7uah9g/ | false | 1 |
t1_o7uago6 | If you need intelligence, there is Nanbeige4.1-3B (more like 4B minus a bit).<br>It probably won’t be good, but you can try. | 2 | 0 | 2026-02-28T06:54:40 | HenkPoley | false | null | 0 | o7uago6 | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7uago6/ | false | 2 |
t1_o7uaf4w | It's been the dark ages for Mistral a while now.<br>The latest Magistral and Ministral are quite good but just sadly outclassed right now.<br>Their Deepseek tune is awful and I shutter to think at how much money they burnt on it.<br>Some people claim to have a good Devstral experience but I can't get there personally. Too mu... | 9 | 0 | 2026-02-28T06:54:18 | ForsookComparison | false | null | 0 | o7uaf4w | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7uaf4w/ | false | 9 |
t1_o7uadgk | I think them bitching about Chinese models "stealing" a few days ago has something to do with this, just don;t know what. | 3 | 0 | 2026-02-28T06:53:54 | FPham | false | null | 0 | o7uadgk | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uadgk/ | false | 3 |
t1_o7uacfs | You clearly have no clue on how DoD business works. Just don't bother adding more. They have to follow the directives if they want to do business with the government.<br>If you work at Boeing, you need a clearance (depending on what parts) and if they do government work, they have to have certain policies that comply ... | 2 | 0 | 2026-02-28T06:53:40 | AcePilot01 | false | null | 0 | o7uacfs | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7uacfs/ | false | 2 |
t1_o7uacdc | Isn’t anthropic against open source | 33 | 0 | 2026-02-28T06:53:39 | stutteringdog | false | null | 0 | o7uacdc | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7uacdc/ | false | 33 |
t1_o7ua6h2 | In my case, no. Actually 122b is a lot better, for coding and general use, even in Q3. | 2 | 0 | 2026-02-28T06:52:13 | Key_Papaya2972 | false | null | 0 | o7ua6h2 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7ua6h2/ | false | 2 |
t1_o7ua2ns | Mixtral gang in the house. It was the first local LLM I could run on my potato that was useful. Before I sold a kidney to buy a GPU. | 6 | 0 | 2026-02-28T06:51:18 | RegularRecipe6175 | false | null | 0 | o7ua2ns | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ua2ns/ | false | 6 |
t1_o7ua0i5 | With your hardware, why don't you run 27B at Q8 (not the KV cache, the model quant!) ?<br>It is expected to be one level above 35B-A3B. | 5 | 0 | 2026-02-28T06:50:47 | PhilippeEiffel | false | null | 0 | o7ua0i5 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ua0i5/ | false | 5 |
t1_o7ua0bm | Thanks. | 1 | 0 | 2026-02-28T06:50:44 | gmmarcus | false | null | 0 | o7ua0bm | false | /r/LocalLLaMA/comments/1rg5uee/best_way_to_run_qwen3535ba3b_on_mac/o7ua0bm/ | false | 1 |
t1_o7u9xgy | They tuned Deepseek (ended up pretty awful) and released some smaller dense models that are decent at translation (ended up decent).<br>If you really need a Western model and don't like gpt oss then Magistral is pretty okay. | 1 | 0 | 2026-02-28T06:50:02 | ForsookComparison | false | null | 0 | o7u9xgy | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u9xgy/ | false | 1 |
t1_o7u9vpj | Anthropic, the company known for zealously lobbying to centralize control over AI while bashing open source, just [abandoned their key "safety pledge"](https://edition.cnn.com/2026/02/25/tech/anthropic-safety-policy-change) two days ago. If they're willing to compromise on "AI safety", the core of their sanctimonious b... | 1 | 0 | 2026-02-28T06:49:37 | graifall | false | null | 0 | o7u9vpj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u9vpj/ | false | 1 |
t1_o7u9v73 | yeah but it hasn't really been trained to do well in that scenario. qwen instruct models have. | 1 | 0 | 2026-02-28T06:49:29 | llama-impersonator | false | null | 0 | o7u9v73 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7u9v73/ | false | 1 |
t1_o7u9uu5 | Change the name of tianamen square and describe similar events, but altered slightly<br>See what the LLMs say | 3 | 0 | 2026-02-28T06:49:24 | Vusiwe | false | null | 0 | o7u9uu5 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u9uu5/ | false | 3 |
t1_o7u9o72 | I think you underestimate how much money the government pays for all this. The grants, the contracts etc. I would argue that, in support of it from the OTHER companies, which leads to that business with the company to company... Nvidia to anthropic in support of both nvidia and anthropic to the government, I think you... | 1 | 0 | 2026-02-28T06:47:47 | AcePilot01 | false | null | 0 | o7u9o72 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u9o72/ | false | 1 |
t1_o7u9m9d | oof. What's Mistral up to these days? Will they be able to bounce back with all the heavy Chinese competition? | 3 | 0 | 2026-02-28T06:47:19 | tengo_harambe | false | null | 0 | o7u9m9d | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u9m9d/ | false | 3 |
t1_o7u9ly1 | What quant do you use? | 1 | 0 | 2026-02-28T06:47:15 | Voxandr | false | null | 0 | o7u9ly1 | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7u9ly1/ | false | 1 |
t1_o7u9g9f | I love being downvoted by bots | -4 | 1 | 2026-02-28T06:45:51 | DinoAmino | false | null | 0 | o7u9g9f | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7u9g9f/ | false | -4 |