name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7tghlo | lol- 5yrs - I've been a software engineer for 20yrs for a company you know. Shipping apps in production. | -2 | 0 | 2026-02-28T03:10:02 | alphatrad | false | null | 0 | o7tghlo | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7tghlo/ | false | -2 |
t1_o7tge88 | If you're into corporate jargon, LLMs are already part of the OODA loop for business decisions. It doesn't take much to use them in a military domain. | 2 | 0 | 2026-02-28T03:09:27 | SkyFeistyLlama8 | false | null | 0 | o7tge88 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tge88/ | false | 2 |
t1_o7tgdvx | Claude Cowork can use the browser, any agent + Playwright CLI can use the browser. But also it sounds like you are trying to shoehorn the agent into the somewhat inefficient ways you do it instead of letting it write code or use APIs to do ultimately what you are trying to do 10x more efficiently. | 2 | 0 | 2026-02-28T03:09:23 | productif | false | null | 0 | o7tgdvx | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7tgdvx/ | false | 2 |
t1_o7tgazr | I don't believe Microsoft has the ability to sever ties with Anthropic, because they rely on Linux and other open source software for running Azure and those suppliers will be utilizing Anthropic whether Microsoft cares or not. Microsoft will not even have the ability to verify their compliance with any order to aband... | -5 | 0 | 2026-02-28T03:08:52 | Similar_Director6322 | false | null | 0 | o7tgazr | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tgazr/ | false | -5 |
t1_o7tg78t | Yeah that's up to you | 1 | 0 | 2026-02-28T03:08:12 | RhubarbSimilar1683 | false | null | 0 | o7tg78t | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7tg78t/ | false | 1 |
t1_o7tg760 | I have the EXACT same rankings lol! | 2 | 0 | 2026-02-28T03:08:11 | KvAk_AKPlaysYT | false | null | 0 | o7tg760 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tg760/ | false | 2 |
t1_o7tg5hs | LM Studio has many unresolved bugs and lags behind llama.cpp. However, llama.cpp has just as many bugs on Windows. You can get better accuracy and speed on Linux. | 1 | 0 | 2026-02-28T03:07:54 | RhubarbSimilar1683 | false | null | 0 | o7tg5hs | false | /r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/o7tg5hs/ | false | 1 |
t1_o7tg45i | I appreciate that you're not pushing an agenda. The concurrency point is well taken and something I hadn't fully thought through. You're right that research has a way of scaling in directions you don't anticipate.
That said my workflow by its nature tends to be sequential and I don't see that changing dramatically giv... | 1 | 0 | 2026-02-28T03:07:40 | TelevisionGlass4258 | false | null | 0 | o7tg45i | false | /r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7tg45i/ | false | 1 |
t1_o7tg2q4 | There's preliminary NPU support in llama.cpp for the last few Snapdragon generations but you'll have to build it yourself. I don't know if it works for the 35B-A3B.
I can get the old 30B-A3B running on Adreno OpenCL GPU on a Snapdragon laptop but the 35B-A3B fails to run. | 1 | 0 | 2026-02-28T03:07:25 | SkyFeistyLlama8 | false | null | 0 | o7tg2q4 | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7tg2q4/ | false | 1 |
t1_o7tg054 | Confused by your snark really.
I'm not following all the details but I do know now that you'll have to redownload everything. | 0 | 0 | 2026-02-28T03:06:58 | mister2d | false | null | 0 | o7tg054 | false | /r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/o7tg054/ | false | 0 |
t1_o7tfzmp | **Looking to get a custom PC built mainly for deepfake video and motion AI work. I’m new to the AI scene, so I’d really appreciate any feedback on this build. It comes in just under £2,000 and aims to balance performance and cost. Would this setup be good for running live deepfake apps and similar AI tasks?**
|Compone... | 1 | 0 | 2026-02-28T03:06:53 | Loud-Boysenberry8234 | false | null | 0 | o7tfzmp | false | /r/LocalLLaMA/comments/1n89ryn/most_affordable_ai_computer_with_gpu_gputer_you/o7tfzmp/ | false | 1 |
t1_o7tftbx | edited | 3 | 0 | 2026-02-28T03:05:44 | onceagainsilent | false | null | 0 | o7tftbx | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tftbx/ | false | 3 |
t1_o7tfnk3 | Curious, if u swapped the second v100 into the first v100 that booted successfully, will it boot? | 1 | 0 | 2026-02-28T03:04:41 | jikilan_ | false | null | 0 | o7tfnk3 | false | /r/LocalLLaMA/comments/1rgfude/computer_wont_boot_with_2_tesla_v100s/o7tfnk3/ | false | 1 |
t1_o7tffr4 | There are ARM64 optimizations in llama.cpp that should speed up CPU inference on those platforms. You need very fast RAM though.
On Snapdragon X Elite, 135 GB/s DDR5x RAM, I'm getting around 10 t/s for Qwen 35B-A3B. | 2 | 0 | 2026-02-28T03:03:19 | SkyFeistyLlama8 | false | null | 0 | o7tffr4 | false | /r/LocalLLaMA/comments/1rg87bj/qwen3535ba3b_running_on_a_raspberry_pi_5_16gb_and/o7tffr4/ | false | 2 |
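The 135 GB/s figure above invites a back-of-envelope check (my own rough math, not from the comment): CPU decoding is roughly memory-bandwidth-bound, so an upper bound on tokens/s is bandwidth divided by the bytes of active weights streamed per token.

```python
def decode_ceiling_tps(bandwidth_gbs: float, active_params_b: float, bytes_per_param: float) -> float:
    """Rough upper bound on decode tokens/s when memory-bandwidth-bound:
    each generated token must read all active weights from RAM once."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

# Assumed numbers: 135 GB/s bandwidth, ~3B active params (A3B), ~0.56 bytes/param at Q4.
ceiling = decode_ceiling_tps(135, 3.0, 0.56)  # ~80 t/s theoretical ceiling
```

The observed ~10 t/s sits well below that ceiling, which is typical: attention compute, KV cache reads, and imperfect bandwidth utilization consume most of the headroom.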
t1_o7tfdvq | ROCm is solid. I have run it for days straight without error. | 1 | 0 | 2026-02-28T03:03:00 | coreyfro | false | null | 0 | o7tfdvq | false | /r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/o7tfdvq/ | false | 1 |
t1_o7tfd9u | >LLMs could be used to summarize information in a kill chain, allowing the human in the loop to make the final live/die decision.
i was going to say this sounds like a terrible way to convey such information, but given that the "human in the loop" is literally just there to have someone to lynch when the robot does wa... | 1 | 0 | 2026-02-28T03:02:54 | StewedAngelSkins | false | null | 0 | o7tfd9u | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tfd9u/ | false | 1 |
t1_o7tfb07 | The unspoken standard is to quantize models to 4-bit these days. And that works out to approximately
Number of B's / 2 = Around the number of gigabytes the model will be on disk
That's a rough estimate that gives you an idea of what you might be able to run. In reality all of the q4 versions of the 35b model I have ... | 2 | 0 | 2026-02-28T03:02:29 | _-_David | false | null | 0 | o7tfb07 | false | /r/LocalLLaMA/comments/1rg8gkx/qwen35_35b_a3b_45_ts_128k_ctx_on_single_16gb_5060/o7tfb07/ | false | 2 |
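The rule of thumb above can be written down directly. A sketch, with the caveat that real "4-bit" GGUF quants average closer to 4.5 bits/weight once block scales are counted:

```python
def quant_disk_gb(params_b: float, bits_per_weight: float = 4.5) -> float:
    """Approximate on-disk size of a quantized model:
    params (billions) x bits per weight / 8 = gigabytes."""
    return params_b * bits_per_weight / 8

# The comment's "B's / 2" rule is this formula at exactly 4 bits/weight:
# 35B -> 17.5 GB by the rule, ~19.7 GB at a more realistic 4.5 bits/weight.
```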
t1_o7tf967 | LongCat-Flash-Lite is attractive for its size (69B A3B) and new architecture. llama.cpp support is pending; here are 2 draft PRs for this model.
[https://github.com/ggml-org/llama.cpp/pull/19182](https://github.com/ggml-org/llama.cpp/pull/19182)
[https://github.com/ggml-org/llama.cpp/pull/19167](https://github.com/ggml-org/... | 3 | 0 | 2026-02-28T03:02:10 | pmttyji | false | null | 0 | o7tf967 | false | /r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/o7tf967/ | false | 3 |
t1_o7tf60z | reading this comment 2 years later is painful | 1 | 0 | 2026-02-28T03:01:36 | Alternative_Will5974 | false | null | 0 | o7tf60z | false | /r/LocalLLaMA/comments/17rb4rd/looking_for_cpu_inference_hardware_8_channel_ram/o7tf60z/ | false | 1 |
t1_o7tf61g | > But just mucking around with Qwen2.5-14B in LMStudio with only one 50-page board pack is giving me uselessly incomplete answers at 3tk/s
simple, use a smaller model. opus' answer:
> Qwen3-8B (Q4_K_M) via Ollama and AnythingLLM's desktop app to manage document chunking, workspace isolation, and local embedding (usin... | 1 | 0 | 2026-02-28T03:01:36 | 9gxa05s8fa8sh | false | null | 0 | o7tf61g | false | /r/LocalLLaMA/comments/1rgr249/newbie_question_best_achievable_fullylocal_llm/o7tf61g/ | false | 1 |
t1_o7tezaj | It could even go as far as chasing an identified target who hides in a car or a building. Larger models would be able to understand the target is inside another target.
A more chilling use would be identify soldiers and security personnel vs civilians. Dump a truckload of autonomous killer UAVs into a conflict zone, l... | 2 | 0 | 2026-02-28T03:00:24 | SkyFeistyLlama8 | false | null | 0 | o7tezaj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tezaj/ | false | 2 |
t1_o7tesji | Made a decision to shift to a different ticketing system. Using intent and entity.
Intents and products are generated from the knowledge base, while some other entities are extracted during the runtime of pipeline. Intent and product entity list generation to make it easier for the people who will do analysis on the d... | 1 | 0 | 2026-02-28T02:59:13 | Important-Novel1546 | false | null | 0 | o7tesji | false | /r/LocalLLaMA/comments/1nvre5c/ticket_categorization_classifying_tickets_into/o7tesji/ | false | 1 |
t1_o7tell9 | AllenAI and LLM360 release fully open-source models -- they publish not only weights, but also their training datasets, training software, and technical papers describing their theory and practice.
So, **most** LLM labs are not open source, but saying "none" is inaccurate. | 5 | 0 | 2026-02-28T02:58:00 | ttkciar | false | null | 0 | o7tell9 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tell9/ | false | 5 |
t1_o7tejza | It’s probably that it’s q4_0 vs q4_K_M or some other more modern quantization… not that it’s 4 bit. | 7 | 0 | 2026-02-28T02:57:43 | silenceimpaired | false | null | 0 | o7tejza | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7tejza/ | false | 7 |
t1_o7tecaq | I hope not! Federal thugs depriving themselves of the best technology in the industry, and thereby limiting their power to commit evil acts, is both entertaining and genuinely good for the country. | 6 | 0 | 2026-02-28T02:56:21 | ttkciar | false | null | 0 | o7tecaq | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tecaq/ | false | 6 |
t1_o7te7pd | Everythings computtor, buy now a tessler. | 1 | 0 | 2026-02-28T02:55:32 | Denial_Jackson | false | null | 0 | o7te7pd | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7te7pd/ | false | 1 |
t1_o7te5x7 | It's too slow on my hardware, but it's better than both | 9 | 0 | 2026-02-28T02:55:14 | JsThiago5 | false | null | 0 | o7te5x7 | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7te5x7/ | false | 9 |
t1_o7tdymw | You can edit the jinja template. Change the following lines at the bottom:
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '<think>\n\n</think>\n\n' }}
{%- else %}
{{- '<think>\n' ... | 3 | 0 | 2026-02-28T02:53:57 | iz-Moff | false | null | 0 | o7tdymw | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7tdymw/ | false | 3 |
t1_o7tdycn | Sure, they've had image recognition for years. The fact that they want to use Claude to do the dirty work suggests they need something more granular that understands complex instructions. | 5 | 0 | 2026-02-28T02:53:54 | eposnix | false | null | 0 | o7tdycn | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tdycn/ | false | 5 |
t1_o7tdubt | I had the same issue then saw a post on here about system prompt. I used the system prompt from here : https://github.com/asgeirtj/system_prompts_leaks/blob/main/Google%2Fgemini_in_chrome.md
Now it’s thinking reasonably | 6 | 0 | 2026-02-28T02:53:12 | Benderbboson | false | null | 0 | o7tdubt | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7tdubt/ | false | 6 |
t1_o7tdnms | I'd recommend trying Qwen 3.5 27B at a medium quant like Q5, and with partial offloading, Qwen 3.5 35B which should be very fast | 1 | 0 | 2026-02-28T02:52:02 | ArsNeph | false | null | 0 | o7tdnms | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tdnms/ | false | 1 |
t1_o7tdmg0 | I don't know if this is true, but apparently they are fine with OpenAI taking the mantle under the existing contract. If so, it does seem like a fuck you to woke Anthropic in particular. And of course, OpenAI donated more money to Trump's campaign than Anthropic did I imagine. | 6 | 0 | 2026-02-28T02:51:49 | nullmove | false | null | 0 | o7tdmg0 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tdmg0/ | false | 6 |
t1_o7tdkmd | I love how many people are (re?)discovering that dense models have advantages over MoE :-) | 135 | 0 | 2026-02-28T02:51:30 | ttkciar | false | null | 0 | o7tdkmd | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tdkmd/ | false | 135 |
t1_o7tdhdr | Zuckerberg will happily provide that. OSS-120B-kill Q8_0. | 3 | 0 | 2026-02-28T02:50:57 | SkyFeistyLlama8 | false | null | 0 | o7tdhdr | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tdhdr/ | false | 3 |
t1_o7tdgkq | I wouldn't be surprised if this is rushed to court because of the precedent of listing an American company as a supply chain risk to national security.
My guess is this will be resolved in Anthropic’s favor with regard to this specific order. They’ll lose the contract but gain much more than the initial $200M. | 22 | 0 | 2026-02-28T02:50:48 | jeremiah256 | false | null | 0 | o7tdgkq | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tdgkq/ | false | 22 |
t1_o7tddbm | Interesting — I’m not seeing the same results. I’m a software engineer with about 5 years of experience, and I regularly work in fairly large, complex codebases. That’s probably why I don’t feel it’s on par with Sonnet 4 for my use cases. It’s possible my llama.cpp setup isn’t fully optimized, but even accounting for t... | 2 | 0 | 2026-02-28T02:50:14 | Virtual-Listen4507 | false | null | 0 | o7tddbm | false | /r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7tddbm/ | false | 2 |
t1_o7td9w7 | You don't even need something that fancy.
Use a small model that can run on a smartphone or laptop NPU to identify humans in a free fire zone. If it looks like a human and it's carrying a weapon or on a motorized vehicle, it's fair game. Add a tiny LLM to classify the kind of target you're aiming at: a human, some anc... | 4 | 0 | 2026-02-28T02:49:38 | SkyFeistyLlama8 | false | null | 0 | o7td9w7 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7td9w7/ | false | 4 |
t1_o7td4gj | Thank you! Did you use Qwen 3.5 35B for making this? And what's the reason you think non-thinking may perform better? | 1 | 0 | 2026-02-28T02:48:41 | 9r4n4y | false | null | 0 | o7td4gj | false | /r/LocalLLaMA/comments/1rgawnq/help_qwen_35_35b_cant_able_to_create_this_html/o7td4gj/ | false | 1 |
t1_o7tcx0s | twice as fast as mine. I wish.
P520 dual P102-100, W-2135, 128GB DDR4 Quad channel. Getting 420PP and 26TG.
Very usable at my speed. Running workflows on your setup must make you feel like...
https://preview.redd.it/ggb7c29le5mg1.jpeg?width=640&format=pjpg&auto=webp&s=367f07c1deb5a41e72be1d651b53d18f7c9cb52d
| 1 | 0 | 2026-02-28T02:47:23 | Boricua-vet | false | null | 0 | o7tcx0s | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7tcx0s/ | false | 1 |
t1_o7tcvtn | I prefer ZeroClaw | 1 | 0 | 2026-02-28T02:47:11 | Regular_Ad_5615 | false | null | 0 | o7tcvtn | false | /r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/o7tcvtn/ | false | 1 |
t1_o7tcu76 | Love ZeroClaw, they are better than IronClaw since Iron is focused on Near Coin.. and it's secure unlike OpenClaw. | 1 | 0 | 2026-02-28T02:46:54 | Regular_Ad_5615 | false | null | 0 | o7tcu76 | false | /r/LocalLLaMA/comments/1rd980h/zeroclaw_or_should_i_go_full_ironclaw/o7tcu76/ | false | 1 |
t1_o7tcnw2 | It's definitely more finicky but so worth it. | 1 | 0 | 2026-02-28T02:45:48 | ImaginaryBluejay0 | false | null | 0 | o7tcnw2 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tcnw2/ | false | 1 |
t1_o7tckch | Finetuned vision models probably, for target ID and guidance. These can already run on smartphone and car infotainment hardware.
LLMs could be used to summarize information in a kill chain, allowing the human in the loop to make the final live/die decision.
Skynet isn't a robotic arm or a scary skull robot, it's a bu... | -1 | 0 | 2026-02-28T02:45:11 | SkyFeistyLlama8 | false | null | 0 | o7tckch | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tckch/ | false | -1 |
t1_o7tck4k | All research leads to r/LocalLLaMA, actually xD | -3 | 1 | 2026-02-28T02:45:09 | Adventurous-Paper566 | false | null | 0 | o7tck4k | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tck4k/ | false | -3 |
t1_o7tcjbm | If Nvidia is on the list, you should also add LiquidAI. At least at the useless rank :D | 3 | 0 | 2026-02-28T02:45:01 | Ill-Fishing-1451 | false | null | 0 | o7tcjbm | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tcjbm/ | false | 3 |
t1_o7tcfzm | While these seem like reasonable guidelines, can you imagine Lockheed Martin or General Dynamics putting restrictions on the weapons they sell to the military?
Or imagine Oppenheimer telling Truman how he was allowed to use the atomic bomb.
The more fundamental principle here is the military wanting to have operation... | -17 | 0 | 2026-02-28T02:44:27 | StarMNF | false | null | 0 | o7tcfzm | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tcfzm/ | false | -17 |
t1_o7tcebc | I'm not sure if this is helpful, but the official LM Studio versions of Qwen3.5 models allow you to set the server reasoning to enabled/disabled in the developer view under the inference tab. But the quants I have used all lack support for this configuration variable, and the toggle switch disappears in LM Studio while... | 3 | 0 | 2026-02-28T02:44:10 | _-_David | false | null | 0 | o7tcebc | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7tcebc/ | false | 3 |
t1_o7tca32 | I want to try .cpp, if I can only get it working. | 1 | 0 | 2026-02-28T02:43:27 | SoMuchLasagna | false | null | 0 | o7tca32 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tca32/ | false | 1 |
t1_o7tc6bz | None of that contradicts what I said or supports your original assertion. | 3 | 0 | 2026-02-28T02:42:49 | NNN_Throwaway2 | false | null | 0 | o7tc6bz | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7tc6bz/ | false | 3 |
t1_o7tc456 | Well, 3.5 35B is very new, nemotron is ancient by now. I had a choice between 30B A3B and OSS 20B when I started my current project, and I'm bored of both so I randomly picked nemotron to see how good the mamba hybrid architecture is. I have zero hope that it would do agentic assistant, but I was shocked that it does. ... | 1 | 0 | 2026-02-28T02:42:26 | o0genesis0o | false | null | 0 | o7tc456 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tc456/ | false | 1 |
t1_o7tc0cy | I think Trump will TACO after industry leaders talk to him. He underestimates how intertwined Anthropic is with the tech industry. | 7 | 0 | 2026-02-28T02:41:46 | Ok_Warning2146 | false | null | 0 | o7tc0cy | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tc0cy/ | false | 7 |
t1_o7tbwn1 | Microsoft is covering all bases by doing business with the .gov and with all the major AI model providers, including Anthropic. There could be a national security angle here: if Microsoft has to choose between dumping all Azure and Microsoft 365 government contracts or severing ties with Anthropic, guess who'll get the... | 5 | 0 | 2026-02-28T02:41:08 | SkyFeistyLlama8 | false | null | 0 | o7tbwn1 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tbwn1/ | false | 5 |
t1_o7tbw1n | Imagine if Pewdiepie becomes a top AI researcher | 1 | 0 | 2026-02-28T02:41:02 | AntoineMacron | false | null | 0 | o7tbw1n | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7tbw1n/ | false | 1 |
t1_o7tbsj6 | Let’s not shoot this guy down like that. At least he is sharing to the community in his own way | 1 | 0 | 2026-02-28T02:40:24 | jikilan_ | false | null | 0 | o7tbsj6 | false | /r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7tbsj6/ | false | 1 |
t1_o7tbrdk | Hope it's performing okay for you. When I did ollama I got about 1/10th the performance and pretty much none of the useful models were working for me. | 1 | 0 | 2026-02-28T02:40:12 | ImaginaryBluejay0 | false | null | 0 | o7tbrdk | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tbrdk/ | false | 1 |
t1_o7tbq71 | If we are only counting regular 5.2 I agree, but 5.3 Codex is the real job stealer for coders | 3 | 0 | 2026-02-28T02:40:00 | itsjase | false | null | 0 | o7tbq71 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tbq71/ | false | 3 |
t1_o7tbp5k | Bijan made a [video](https://www.youtube.com/watch?v=ase1Qmyo4Wg) on it the other day, referring to that story. It demonstrates the process on a very small scale. | 1 | 0 | 2026-02-28T02:39:49 | lisploli | false | null | 0 | o7tbp5k | false | /r/LocalLLaMA/comments/1rgips0/how_does_training_an_ai_on_another_ai_actually/o7tbp5k/ | false | 1 |
t1_o7tbjif | It would be funny if it weren't also dangerous as hell.
You could finetune a local model to conduct disinformation campaigns and to do a RAG flow on which citizens should be apprehended and/or executed. | 1 | 0 | 2026-02-28T02:38:50 | SkyFeistyLlama8 | false | null | 0 | o7tbjif | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tbjif/ | false | 1 |
t1_o7tbicm | I'm glad you responded to my feedback more graciously than some. It really goes a long way toward encouraging dialogue with mutual effort toward understanding and respect.
I'm not any kind of expert when it comes to the difficulties in translating between MLX and development on Apple silicon to other platforms. Is exp... | 1 | 0 | 2026-02-28T02:38:38 | _-_David | false | null | 0 | o7tbicm | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7tbicm/ | false | 1 |
t1_o7tbf7c | It’s not China vs America
It’s open source vs closed | 1 | 0 | 2026-02-28T02:38:05 | Abject-Tomorrow-652 | false | null | 0 | o7tbf7c | false | /r/LocalLLaMA/comments/1r9zt8m/the_top_3_models_on_openrouter_this_week_chinese/o7tbf7c/ | false | 1 |
t1_o7tbf1q | Even the little Qwen3.5 27B easily matches that, yes: https://artificialanalysis.ai/?models=gpt-oss-120b%2Cdeepseek-v3-2-reasoning%2Cgrok-4-1-fast-reasoning%2Cgrok-4%2Cminimax-m2-5%2Cglm-5%2Cqwen3-5-27b | 2 | 0 | 2026-02-28T02:38:04 | coder543 | false | null | 0 | o7tbf1q | false | /r/LocalLLaMA/comments/1rgq0vc/can_a_local_hosted_llm_keep_up_with_grok_41_fast/o7tbf1q/ | false | 2 |
t1_o7tbaxo | In an age where AI companies lie to their teeth to secure funding. It's refreshing to see companies be honest about what they can provide. | 1 | 0 | 2026-02-28T02:37:21 | AntoineMacron | false | null | 0 | o7tbaxo | false | /r/LocalLLaMA/comments/1r26zsg/zai_said_they_are_gpu_starved_openly/o7tbaxo/ | false | 1 |
t1_o7tbaoo | I tried going the llama.cpp route (using ChatGPT 5.2 to assist) and could not seem to get my GPU connected. Eventually cracked and deployed ollama via Docker. Wanted to do llama.cpp, though. | 1 | 0 | 2026-02-28T02:37:18 | SoMuchLasagna | false | null | 0 | o7tbaoo | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tbaoo/ | false | 1 |
t1_o7tba4y | I'm getting 10 t/s purely on CPU inference on Snapdragon X ARM64 at Q4_0, about 20 GB unified RAM. It looks like a config problem. | 1 | 0 | 2026-02-28T02:37:13 | SkyFeistyLlama8 | false | null | 0 | o7tba4y | false | /r/LocalLLaMA/comments/1rg4apu/qwen_35_architecture_analysis_parameter/o7tba4y/ | false | 1 |
t1_o7tb5th | I love the sound of this, and what you wrote about other things, so I installed it. It's asking me for a login into your website on booting once it's disconnected from the Internet. If I have to log into your website from it, after it's installed, how is it actually local? Why would you need me to login to your webs... | 1 | 0 | 2026-02-28T02:36:28 | SmChocolateBunnies | false | null | 0 | o7tb5th | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7tb5th/ | false | 1 |
t1_o7tb4iv | I am actually starved just to afford to buy a gpu | 1 | 0 | 2026-02-28T02:36:15 | AntoineMacron | false | null | 0 | o7tb4iv | false | /r/LocalLLaMA/comments/1r26zsg/zai_said_they_are_gpu_starved_openly/o7tb4iv/ | false | 1 |
t1_o7taxe2 | insane monster | 2 | 1 | 2026-02-28T02:35:01 | 9gxa05s8fa8sh | false | null | 0 | o7taxe2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7taxe2/ | false | 2 |
t1_o7tatsh | > Any US company powerful enough
*ahem* Front page of the internet? | 4 | 0 | 2026-02-28T02:34:23 | Consumerbot37427 | false | null | 0 | o7tatsh | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tatsh/ | false | 4 |
t1_o7tatl9 | Good question. In my experience, most pipelines naturally settle into sequential flow — model reasons, hands off, your code runs. If that's where your workflow stays, Apple's unified memory is fine and you won't notice the shared bandwidth.
Where it breaks down is when your research scales. Running multiple models simu... | 2 | 0 | 2026-02-28T02:34:21 | melanov85 | false | null | 0 | o7tatl9 | false | /r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/o7tatl9/ | false | 2 |
t1_o7taqf7 | How do you feel about q3.5 35B A3B?
| 1 | 0 | 2026-02-28T02:33:48 | s1mplyme | false | null | 0 | o7taqf7 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7taqf7/ | false | 1 |
t1_o7taq6c | yes, it will. | 1 | 0 | 2026-02-28T02:33:46 | pfn0 | false | null | 0 | o7taq6c | false | /r/LocalLLaMA/comments/1rg68e6/starting_a_phd_in_ml_what_is_the_best_infra_i_can/o7taq6c/ | false | 1 |
t1_o7taopx | Thank you for this very detailed analysis!
As for MXFP4, yes, it doesn't behave well on many tensor types; it's best used only on ffn\_(up|down|gate)\_exps tensors, like the official MXFP4\_MOE quant. Everywhere else I've tried it, it did not go well.
and omg not gonna lie, my puny quants being mentioned by the quant overlor... | 2 | 0 | 2026-02-28T02:33:30 | noctrex | false | null | 0 | o7taopx | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7taopx/ | false | 2 |
t1_o7tam6s | I'm not going to lie, at least for me, the current GPT (closed) models feel very poorly made. In practice I find them to be pretty dumb. | 4 | 0 | 2026-02-28T02:33:04 | StanPlayZ804 | false | null | 0 | o7tam6s | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7tam6s/ | false | 4 |
t1_o7talbf | Might want to try the Vulkan implementation. In my experience, the ROCm implementation is finicky and slower. | 1 | 0 | 2026-02-28T02:32:54 | tankman35 | false | null | 0 | o7talbf | false | /r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/o7talbf/ | false | 1 |
t1_o7tafso | None of it is open source. It’s freeware. There’s a difference. Freeware is not open. It is just free. | 2 | 1 | 2026-02-28T02:31:57 | onceagainsilent | false | null | 0 | o7tafso | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tafso/ | false | 2 |
t1_o7ta9ys | It's 2026, bro. People don't magically get better hardware | 8 | 0 | 2026-02-28T02:30:58 | Available-Craft-5795 | false | null | 0 | o7ta9ys | false | /r/LocalLLaMA/comments/1rgl42y/qwen_35_122b_hallucinates_horribly/o7ta9ys/ | false | 8 |
t1_o7ta3sc | Don't quote me on this, as I have absolutely no experience with multi-GPU setups or with models >30B, but you can offload KV to your GPU, increase the eval batch size, and quantize the KV cache to 8 bits. | 1 | 0 | 2026-02-28T02:29:54 | FORNAX_460 | false | null | 0 | o7ta3sc | false | /r/LocalLLaMA/comments/1rgkmd7/ways_to_improve_prompt_processing_when_offloading/o7ta3sc/ | false | 1 |
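The three suggestions in that comment map to llama.cpp server flags roughly like this. A sketch only: the model path and values are placeholders, and the flag names are as found in recent llama.cpp builds, so verify against your build's `--help`.

```python
# Hypothetical llama-server invocation illustrating the suggestions above.
cmd = [
    "llama-server",
    "-m", "model.gguf",
    "-ngl", "99",               # offload all layers (the KV cache follows them) to GPU
    "-b", "2048",               # larger eval batch for faster prompt processing
    "--cache-type-k", "q8_0",   # quantize KV cache keys to 8-bit...
    "--cache-type-v", "q8_0",   # ...and values, roughly halving KV memory vs fp16
]
print(" ".join(cmd))
```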
t1_o7ta3k7 | GPT-OSS 20B is your best bet imo. It will need to offload some to CPU and RAM. Big caveat: I got shit performance with ollama when I tried this because it didn't offload to CPU well. I ended up compiling Llama.cpp and running it with the Claude Code and Codex plug-ins in VS code.
I don't have a 3090 but I have an A4... | 2 | 0 | 2026-02-28T02:29:52 | ImaginaryBluejay0 | false | null | 0 | o7ta3k7 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ta3k7/ | false | 2 |
t1_o7ta0me | Lucky you! I get 10-11 tps with two 4060 Ti 16GB and 65k context, so you'll maybe get 20 tps. | 1 | 0 | 2026-02-28T02:29:21 | Adventurous-Paper566 | false | null | 0 | o7ta0me | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7ta0me/ | false | 1 |
t1_o7t9x42 | I cant think of it happening before either, even when they find foreign spys in government contracted companies they just call it a security failure. However I dont think this is something the U.S. Trump or not can walk back their decision on now, right? You cant just say that then a week later be nah anthropic is ok n... | 1 | 0 | 2026-02-28T02:28:44 | Lesser-than | false | null | 0 | o7t9x42 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7t9x42/ | false | 1 |
t1_o7t9u9l | It *can*, depending on how you define both "small" and "let you run". KV cache adds up.
A system that can load up to a 120B model, onto which you slap a 4B model, is probably not going to be able to set context so high that the 4B overflows--models have built-in maximums they can accept.
But if your system is on the bubble, yo... | 4 | 0 | 2026-02-28T02:28:15 | Late-Assignment8482 | false | null | 0 | o7t9u9l | false | /r/LocalLLaMA/comments/1rgqfne/does_setting_a_small_context_size_let_you_run_a/o7t9u9l/ | false | 4 |
t1_o7t9i02 | Qwen3.5-27B
The model to dethrone Llama 3.3 70B though was Qwen3-VL-32B | 5 | 0 | 2026-02-28T02:26:08 | ForsookComparison | false | null | 0 | o7t9i02 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7t9i02/ | false | 5 |
t1_o7t9hvg | China doing way more for the open source LLM community here | 29 | 0 | 2026-02-28T02:26:07 | random_phantom | false | null | 0 | o7t9hvg | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7t9hvg/ | false | 29 |
t1_o7t9bu9 | So the messed up accuracy across the board is due to a bug? Did it also happen on previous Qwen? I used vLLM before | 1 | 0 | 2026-02-28T02:25:05 | Voxandr | false | null | 0 | o7t9bu9 | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7t9bu9/ | false | 1 |
t1_o7t9bb8 | thanks OP! great project | 1 | 0 | 2026-02-28T02:24:59 | Impossible_Ground_15 | false | null | 0 | o7t9bb8 | false | /r/LocalLLaMA/comments/1rgiw5c/seline_is_back_your_os_goto_agent_framework_w_gui/o7t9bb8/ | false | 1 |
t1_o7t9b52 | That honestly blows my mind. Are you only counting output tokens and thinking tokens? Like I said I've had the opposite experience where Nemotron is fast and Qwen is fast but not nearly and also has to reprocess all of the context each time. At 30k you'd feel it. I'll upgrade my llama.cpp tonight to make sure I get the... | 1 | 0 | 2026-02-28T02:24:57 | nicholas_the_furious | false | null | 0 | o7t9b52 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7t9b52/ | false | 1 |
t1_o7t96aq | What do you use now? | 1 | 0 | 2026-02-28T02:24:07 | s1mplyme | false | null | 0 | o7t96aq | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7t96aq/ | false | 1 |
t1_o7t95nm | Getting 95 t/s on an R9700. Windows LM Studio with bad settings. | 1 | 0 | 2026-02-28T02:24:00 | No-Consequence-1779 | false | null | 0 | o7t95nm | false | /r/LocalLLaMA/comments/1rglgma/qwen_35_llamacpp_turn_of_reasoning_and_performance/o7t95nm/ | false | 1 |
t1_o7t92oe | They should release a GLM 5 Air | 1 | 0 | 2026-02-28T02:23:28 | Altruistic_Plate1090 | false | null | 0 | o7t92oe | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7t92oe/ | false | 1 |
t1_o7t8zxb | Hadn't tried any of GPT-OSS till today | 1 | 0 | 2026-02-28T02:23:00 | Voxandr | false | null | 0 | o7t8zxb | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7t8zxb/ | false | 1 |
t1_o7t8uqp | One challenge is: how do you know when to remove something? If a user's opinions / context changes, it's hard to even know because the user won't explicitly tell you to remove something. | 1 | 0 | 2026-02-28T02:22:07 | Open-Marionberry-943 | false | null | 0 | o7t8uqp | false | /r/LocalLLaMA/comments/1rg489b/what_are_some_edge_cases_that_break_ai_memory/o7t8uqp/ | false | 1 |
t1_o7t8u31 | I will try it today | 1 | 0 | 2026-02-28T02:22:00 | Voxandr | false | null | 0 | o7t8u31 | false | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/o7t8u31/ | false | 1 |
t1_o7t8tc2 | I've been running a semi-complex noodle shop simulation on GLM 4.7 Flash and Nemotron 30B, then swapped Nemotron for Qwen 3.5 35B
On an RTX 4060 + 24GB DDR5
All at Q4 GGUF, 40k ctx window, Q8 KV cache, recommended settings for all
After 30k both nemotron and glm 4.7 flash really slowed down compared to qwen 3.5 35B, but... | 2 | 0 | 2026-02-28T02:21:53 | Acceptable_Home_ | false | null | 0 | o7t8tc2 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7t8tc2/ | false | 2 |
t1_o7t8rpr | I mostly use Qwen3.5-27b right now, but I don't see me deleting my collection of Mistral finetunes any time soon.
Not using much from the rest of your list as most seem either bigger or moe blobs. | 10 | 0 | 2026-02-28T02:21:37 | lisploli | false | null | 0 | o7t8rpr | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7t8rpr/ | false | 10 |
t1_o7t8r74 | Beautiful UI as well! Thanks. Consider trying GLM 4.6V Flash which is a 9B dense model for quick vision tasks. It runs at 30+ t/s for dual 5060 ti at Q8\_0. | 2 | 0 | 2026-02-28T02:21:31 | Tccybo | false | null | 0 | o7t8r74 | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7t8r74/ | false | 2 |
t1_o7t8qic | Slower? As in tokens per second? You might be doing something wrong. It is supposed to be quite fast. | 6 | 0 | 2026-02-28T02:21:24 | Snoo_28140 | false | null | 0 | o7t8qic | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7t8qic/ | false | 6 |
t1_o7t8p8n | It hasn’t | 3 | 0 | 2026-02-28T02:21:11 | onceagainsilent | false | null | 0 | o7t8p8n | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7t8p8n/ | false | 3 |
t1_o7t8ou0 | I was expecting Anthropic to cave. Good on them. Also, it seems like a massive security risk to feed secrets to a company you don't control.
Also in my head canon heggy was given a wedgie on the way out. | 3 | 0 | 2026-02-28T02:21:07 | honato | false | null | 0 | o7t8ou0 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7t8ou0/ | false | 3 |
t1_o7t8od1 | I find the most annoying change to be the sycophancy that’s built into these newer models, everything is “That’s a great question!” , “You are right!”, “Absolutely!” | 14 | 0 | 2026-02-28T02:21:02 | Torodaddy | false | null | 0 | o7t8od1 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7t8od1/ | false | 14 |
t1_o7t8jm5 | [removed] | 1 | 0 | 2026-02-28T02:20:13 | [deleted] | true | null | 0 | o7t8jm5 | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7t8jm5/ | false | 1 |