name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o7tw49b
From another experiment I'm doing, what has worked is programming homeostasis and letting the system just think on its own - the reverie you discuss - and observing the results. Very interesting.
2
0
2026-02-28T04:57:06
vbaranov
false
null
0
o7tw49b
false
/r/LocalLLaMA/comments/1rewz9p/we_build_sleep_for_local_llms_model_learns_facts/o7tw49b/
false
2
t1_o7tw2jq
Hmm I used Gemini 3.1 flash in antigravity and did it in one shot and connected it to openwebui as well.... Just asked and when I came back it was running in the background
2
0
2026-02-28T04:56:45
quiteconfused1
false
null
0
o7tw2jq
false
/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/o7tw2jq/
false
2
t1_o7tvyw2
Yeah it's the first time that I've posted about it, but definitely not the first time I've experienced this. I think Gemini models are strong on the UI front, not really on the infra side.
1
0
2026-02-28T04:56:01
CarsonBuilds
false
null
0
o7tvyw2
false
/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/o7tvyw2/
false
1
t1_o7tvvt0
I'm waiting for the updated quants as well. The UD Q2KXL was pretty good already imho.
1
0
2026-02-28T04:55:25
My_Unbiased_Opinion
false
null
0
o7tvvt0
false
/r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/o7tvvt0/
false
1
t1_o7tvv4i
You’re talking subsets of a larger concept.
1
0
2026-02-28T04:55:17
StardockEngineer
false
null
0
o7tvv4i
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tvv4i/
false
1
t1_o7tvrn3
You can so easily prompt around that. You couldn't prompt around the model being ass, you just had to deal with it.
4
0
2026-02-28T04:54:35
PunnyPandora
false
null
0
o7tvrn3
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tvrn3/
false
4
t1_o7tvqrv
Holy shit, so I'm not the only one lol. I've been trying 3.1 for Docker setup for various apps and the model straight up puts the wrong commands in the config file.
2
0
2026-02-28T04:54:24
My_Unbiased_Opinion
false
null
0
o7tvqrv
false
/r/LocalLLaMA/comments/1rgthzm/gemini_pro_31_couldnt_solve_a_docker_ollama/o7tvqrv/
false
2
t1_o7tvp7z
This exact development is why local llms are so important. Big corporations cannot be trusted to stand up to regimes. I will 100% only give my money to Anthropic now, and cancel our remaining ChatGPT sub. Our money has power, just look at Netflix. It didn't get there by going all in on defense spending.
1
0
2026-02-28T04:54:06
lechatsportif
false
null
0
o7tvp7z
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tvp7z/
false
1
t1_o7tvn2z
It's one single json file 😭, 100% vibe coded: [https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/data/hf_models.json](https://github.com/AlexsJones/llmfit/blob/main/llmfit-core/data/hf_models.json)
9
0
2026-02-28T04:53:40
Beano09
false
null
0
o7tvn2z
false
/r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7tvn2z/
false
9
t1_o7tvk5m
All models do that buddy... You changed, not the models
1
0
2026-02-28T04:53:05
PunnyPandora
false
null
0
o7tvk5m
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tvk5m/
false
1
t1_o7tvih1
Nostalgic for something that was objectively worse, classic
1
0
2026-02-28T04:52:46
PunnyPandora
false
null
0
o7tvih1
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tvih1/
false
1
t1_o7tv8rs
llamacpp supports self speculative decoding which doesn't require an additional model. The typical setup is something like: `--spec-type ngram-mod --spec-ngram-size-n 24 --draft-min 48 --draft-max 64` It likely doesn't hit as often as a real model but it has effectively 0 overhead. You can read more about it her...
8
0
2026-02-28T04:50:54
Betadoggo_
false
null
0
o7tv8rs
false
/r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o7tv8rs/
false
8
t1_o7tv7jd
Pony was shit 
1
0
2026-02-28T04:50:39
PunnyPandora
false
null
0
o7tv7jd
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tv7jd/
false
1
t1_o7tv20h
Mm might just be because mxfp4 is slightly bigger. That seems normal
1
0
2026-02-28T04:49:35
danielhanchen
false
null
0
o7tv20h
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tv20h/
false
1
t1_o7tuzix
So as you iterate over the test set you iterate over the prompt? Why not iterate over the code that is generated and improve that as passed-test numbers improve? What am I missing?
1
0
2026-02-28T04:49:05
street_melody
false
null
0
o7tuzix
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tuzix/
false
1
t1_o7tur0d
Same. I don't think the geometric mean estimation has been accurate for a long while. I have a 14B model (Ministral 3) on my spare RX 480 as a parallel task model for use in open-webui, and it is not _nearly_ as good as either Qwen3-30b-a3b or Qwen3.5-35b-a3b. It's still "good enough" for things like making search qu...
17
0
2026-02-28T04:47:25
Thunderstarer
false
null
0
o7tur0d
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7tur0d/
false
17
t1_o7tuqgq
They’re using them in CI/CD pipelines, which download the model on every run
18
0
2026-02-28T04:47:19
am17an
false
null
0
o7tuqgq
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tuqgq/
false
18
t1_o7tuq2o
That's basically Coder model vs non-coder, so it's probably not fair. At this time, we should just compare against Qwen3 Next 80B A3B instead. I have high hopes for Qwen3.5 Coder 35B A3B :D
0
0
2026-02-28T04:47:14
bobaburger
false
null
0
o7tuq2o
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tuq2o/
false
0
t1_o7tun0o
I haven't actually measured it myself, just an estimate I'd read somewhere but it's probably wrong. Regardless still slow
1
0
2026-02-28T04:46:37
do_u_think_im_spooky
false
null
0
o7tun0o
false
/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7tun0o/
false
1
t1_o7tujem
I do something similar, except I write most of it, and then we rewrite it along the way depending on the vibes
2
0
2026-02-28T04:45:53
PunnyPandora
false
null
0
o7tujem
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tujem/
false
2
t1_o7tug2y
That’s not my understanding of what happened. Anthropic just makes an LLM. It’s being used indirectly by companies that contract with the military like Palantir. Like every AI company, Anthropic has added safe guards to their LLM, probably via system prompts where it refuses certain requests. The military essentiall...
-7
0
2026-02-28T04:45:13
StarMNF
false
null
0
o7tug2y
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tug2y/
false
-7
t1_o7tuciz
Anthropic has been labeled a supply chain risk. The company is effectively dead. Major funding will run dry. Credit won't be extended. Talent will scramble to keep security clearance. Now major developers will be poached by rival corps. Anthropic can't even sell off its assets outside its circle of US competitors b...
-4
0
2026-02-28T04:44:29
fervoredweb
false
null
0
o7tuciz
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tuciz/
false
-4
t1_o7tub6q
Do or did they ever offer updates? Or was its knowledge frozen at a certain date?
3
0
2026-02-28T04:44:13
randylush
false
null
0
o7tub6q
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tub6q/
false
3
t1_o7tu8jo
I disagree with 1 - human approval may not be possible in wartime due to jamming or other connection issues. We all know that some other countries will use AI and will not care much about this. So we're going to have semi-autonomous weapons, while others have fully autonomous ones. I think human approval should be a must only in t...
-8
0
2026-02-28T04:43:41
MatchaFlatWhite
false
null
0
o7tu8jo
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tu8jo/
false
-8
t1_o7ttymd
4070. Not too much difference, 10%-15% though.
1
0
2026-02-28T04:41:43
Conscious_Chef_3233
false
null
0
o7ttymd
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ttymd/
false
1
t1_o7ttxeb
you create a few test prompts for whatever your use case is, like maybe it's a system prompt for a harness, or a tool description. for example The test is usually a success or a fail, so for a coding harness, you give it a coding task, give it the test system prompt, run test cases where you have a coding task and yo...
3
0
2026-02-28T04:41:28
Far-Low-4705
false
null
0
o7ttxeb
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ttxeb/
false
3
t1_o7ttt7r
The cloud SOTA models are full of shit. Their information is super out of date. If you really force them to do searches and use up-to-date information they give more plausible results, but I wouldn't trust them for anything at this point. I can't personally attest to whether what overand says is feasible, but I certain...
1
0
2026-02-28T04:40:37
AbsolutelyStateless
false
null
0
o7ttt7r
false
/r/LocalLLaMA/comments/1r656d7/qwen35397ba17b_is_out/o7ttt7r/
false
1
t1_o7ttsg0
Well, you're in luck if you want to try this. It uses searxng as the base and brings in actual page fetching with chrome. Also has GitHub and Stack Overflow APIs with a bunch of garbage data handling for all of it. I'm pretty impressed.
1
0
2026-02-28T04:40:28
Xp_12
false
null
0
o7ttsg0
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ttsg0/
false
1
t1_o7ttlx6
Can you explain what you mean by that? I think I have higher expectations than I should given my hardware lol
2
0
2026-02-28T04:39:08
RickoT
false
null
0
o7ttlx6
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ttlx6/
false
2
t1_o7ttkvt
And this is why any and all attempts at "alignment" are bound to fail. Because we can't solve "alignment" for humans either.
1
0
2026-02-28T04:38:54
Loose_Object_8311
false
null
0
o7ttkvt
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ttkvt/
false
1
t1_o7ttiie
Depends on how long you need to keep the rented instance running. How good are P40s now compared to say a 3090? Can you still make a decent rig with these or are they too old now?
1
0
2026-02-28T04:38:25
randylush
false
null
0
o7ttiie
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ttiie/
false
1
t1_o7tticy
Will hopefully take a look and get back to you. Oh interesting, didn't know it also affected safetensors too 🤔
1
0
2026-02-28T04:38:23
danielhanchen
false
null
0
o7tticy
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tticy/
false
1
t1_o7ttel3
I didn't say anything about you.
-6
0
2026-02-28T04:37:37
MikeLPU
false
null
0
o7ttel3
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7ttel3/
false
-6
t1_o7tteik
Yes, there is a lot of development towards LLMs that will work on edge devices. You should check out Gemma 3n: https://unsloth.ai/docs/models/tutorials/gemma-3-how-to-run-and-fine-tune/gemma-3n-how-to-run-and-fine-tune
2
0
2026-02-28T04:37:36
danielhanchen
false
null
0
o7tteik
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tteik/
false
2
t1_o7tt9se
it came from the ollama site: [https://ollama.com/library/deepseek-r1](https://ollama.com/library/deepseek-r1)
0
0
2026-02-28T04:36:37
RickoT
false
null
0
o7tt9se
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tt9se/
false
0
t1_o7tt8pv
You're missing the part where they didn't just decide to use a different product, which nobody would dispute is their right. Anthropic didn't say the US shouldn't build autonomous drones, they just said they didn't want to be the ones doing it. This is the government showing up at your door and telling you what to d...
12
0
2026-02-28T04:36:24
_tresmil_
false
null
0
o7tt8pv
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tt8pv/
false
12
t1_o7tt8go
Oh that's weird, what GPU are you using? It sometimes does happen
1
0
2026-02-28T04:36:21
danielhanchen
false
null
0
o7tt8go
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tt8go/
false
1
t1_o7tt81d
They told them this would happen. Dude usually follows through.
1
0
2026-02-28T04:36:16
jeffwadsworth
false
null
0
o7tt81d
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tt81d/
false
1
t1_o7tt668
I'd still try it.
1
0
2026-02-28T04:35:53
tankman35
false
null
0
o7tt668
false
/r/LocalLLaMA/comments/1rgotwp/llamaserver_doesnt_see_rocm_device_strix_halo/o7tt668/
false
1
t1_o7tt5zs
Mass surveillance is already illegal and the military already has autonomous targeting systems. The real issue is that the federal government didn’t want a private company to dictate the terms of use, over US law. And honestly, why isn’t that a good thing? Every sci-fi dystopian novel or movie for the past 50 years ha...
-9
0
2026-02-28T04:35:51
Informal_Warning_703
false
null
0
o7tt5zs
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tt5zs/
false
-9
t1_o7tt5wb
Thank you so much Paul we really appreciate the support!! If you encounter any issues let us know! 🥰
1
0
2026-02-28T04:35:49
danielhanchen
false
null
0
o7tt5wb
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tt5wb/
false
1
t1_o7tt589
Instruct is technically one turn; multi-turn is chat. Qwen also distinguishes thinking/reasoning (multi-turn with reasoning) and instruct (multi-turn with no reasoning)
3
0
2026-02-28T04:35:41
LinkSea8324
false
null
0
o7tt589
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tt589/
false
3
t1_o7tt37a
Hey thanks for your response and for releasing your MXFP4 quants! Yes, as you can see our MXFP4 quant also shows high perplexity and KLD, so it really seems that MXFP4 just does badly on KLD and perplexity. But is it actually fine for real world performance? There needs to be more testing for that
2
0
2026-02-28T04:35:16
danielhanchen
false
null
0
o7tt37a
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tt37a/
false
2
t1_o7tt030
There is no 14b Deepseek. You really are lost.
1
0
2026-02-28T04:34:37
jeffwadsworth
false
null
0
o7tt030
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tt030/
false
1
t1_o7tsxsa
Asking us what you should do with YOUR local AI? Honestly, I've been there. Great, I've got Stable Diff, QwenTTS, Ollama, and ACE-Step all running locally on my 12G of VRAM.....now what. Well, I'm probably not going to build the next breakthrough application coding with a 7B model (although stranger things have h...
3
0
2026-02-28T04:34:09
No-Butterscotch-218
false
null
0
o7tsxsa
false
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7tsxsa/
false
3
t1_o7tsutq
MXFP4 is mostly used for GGUFs. The rest of the quantizations you mentioned are for safetensors. We haven't tested the other quant types but it will be interesting to see. AutoRound requires a bit of training to compete, maybe that's why
2
0
2026-02-28T04:33:34
danielhanchen
false
null
0
o7tsutq
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tsutq/
false
2
t1_o7tsqqw
The Wyoming Protocol is a standard, open-source communication protocol created by the Home Assistant team to allow voice assistant components—such as wake word detection, speech-to-text (STT), and text-to-speech (TTS)—to communicate with Home Assistant. Basically this allows me to use Parakeet MLX in Home Assistant.
2
0
2026-02-28T04:32:44
whysee0
false
null
0
o7tsqqw
false
/r/LocalLLaMA/comments/1rgqyhg/wyoming_parakeet_mlx/o7tsqqw/
false
2
t1_o7tsoxx
It will only slightly alleviate the issue. This fix is mainly for compatibility and fewer errors when you're using Codex, Claude Code, OpenCode etc. I will update you once 122b is done. If you see a lot of loops and overthinking, make sure you follow the inference settings: https://unsloth.ai/docs/models/qwen3.5#thinki...
2
0
2026-02-28T04:32:22
danielhanchen
false
null
0
o7tsoxx
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tsoxx/
false
2
t1_o7tsjyf
That's a slur on all porcines.
9
0
2026-02-28T04:31:24
roosterfareye
false
null
0
o7tsjyf
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tsjyf/
false
9
t1_o7tsjwl
> human in the loop See, that’s the thing the U.S. military wants to avoid. Their issue with Anthropic is *specifically* that they do **not** want a human in the loop. They want killbots that shoot first, ask questions never, and won’t remember doing it.
-1
0
2026-02-28T04:31:23
n8mo
false
null
0
o7tsjwl
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tsjwl/
false
-1
t1_o7tsj0z
I am using SearXNG for my searching, which is all I thought I needed...
1
0
2026-02-28T04:31:12
RickoT
false
null
0
o7tsj0z
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tsj0z/
false
1
t1_o7tsf61
Did you try updating LMStudio? imatrix files are very important for Q4 quants and under
2
0
2026-02-28T04:30:26
danielhanchen
false
null
0
o7tsf61
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tsf61/
false
2
t1_o7tsf6u
if I had the money to buy that, I'd put it in my gaming rig lol
1
0
2026-02-28T04:30:26
RickoT
false
null
0
o7tsf6u
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tsf6u/
false
1
t1_o7tsazv
Still converting, will update y'all once it's updated
2
0
2026-02-28T04:29:35
danielhanchen
false
null
0
o7tsazv
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tsazv/
false
2
t1_o7tsajd
Interesting, I did not know there were requirements for prompting
1
0
2026-02-28T04:29:30
RickoT
false
null
0
o7tsajd
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7tsajd/
false
1
t1_o7ts7jy
It does have web search with searxng.... was there something else I needed to do?
1
0
2026-02-28T04:28:54
RickoT
false
null
0
o7ts7jy
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ts7jy/
false
1
t1_o7ts7l1
I know. They didn’t build PDF rendering. Other engineers did. Same as with agentic AI. OpenClaw invented nothing - other engineers did. Sigh. 😔
1
0
2026-02-28T04:28:54
leo-k7v
false
null
0
o7ts7l1
false
/r/LocalLLaMA/comments/1re854d/the_reality_behind_the_openclaw_hype/o7ts7l1/
false
1
t1_o7ts5jf
Definitely sounds very important but unfortunately benchmarks like that would take 2 weeks to run so it'll likely be unfeasible. Benjamin Marie might be able to do it though and his tests do kind of test that for real world usecase performance
2
0
2026-02-28T04:28:30
danielhanchen
false
null
0
o7ts5jf
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7ts5jf/
false
2
t1_o7ts555
So here are the models I tried: * deepseek-r1:14b * lfm2:24b * Llama3.1:8B * mistral:7b * qwen2.5:14b-instruct * qwen3.5:27b -- just slow enough to be useless * sparksammy/qwen3.5-27b-unsloth:small-hotfixed -- Kept getting error 500 when trying to use it I am using SearXNG for my search engine, got it working after ...
1
0
2026-02-28T04:28:25
RickoT
false
null
0
o7ts555
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7ts555/
false
1
t1_o7ts4sm
It has never happened before, BUT, an American company has never refused to allow the military to use their tech without their ongoing permission.
-2
1
2026-02-28T04:28:21
unrulywind
false
null
0
o7ts4sm
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7ts4sm/
false
-2
t1_o7ts39p
Can you explain? I don't understand - what is being tested here?
2
0
2026-02-28T04:28:03
street_melody
false
null
0
o7ts39p
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ts39p/
false
2
t1_o7ts394
you write a script to grab the content of your email, take the content and ask an llm (using an api) to classify the email content. it's very easy. You don't need OpenClaw. You can ask an llm how to do it.
1
0
2026-02-28T04:28:02
ReadersAreRedditors
false
null
0
o7ts394
false
/r/LocalLLaMA/comments/1r9gve8/i_feel_left_behind_what_is_special_about_openclaw/o7ts394/
false
1
t1_o7trzx0
a bigger model will just overwhelm the RAM and get stuck in a loop
1
0
2026-02-28T04:27:22
Smart-Trip-8255
false
null
0
o7trzx0
false
/r/LocalLLaMA/comments/18kqge8/what_is_better_7bq4_k_m_or_13b_q2_k/o7trzx0/
false
1
t1_o7trymq
I feel like comparing a 30b MoE to a 9b model doesn't sound right. I've been using the 30b and now the 35b and they seem WAY better than any 9b. Like way better.
21
0
2026-02-28T04:27:06
Space__Whiskey
false
null
0
o7trymq
false
/r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7trymq/
false
21
t1_o7try8l
You're thinking about it from a different perspective. Try models ranging from 1.5-4 with temp 0.1111
0
0
2026-02-28T04:27:01
Hot_Inspection_9528
false
null
0
o7try8l
false
/r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7try8l/
false
0
t1_o7trxcl
Because you have a 5090, you can use Q6 as well I think
1
0
2026-02-28T04:26:51
danielhanchen
false
null
0
o7trxcl
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7trxcl/
false
1
t1_o7trxeo
In llama.cpp, Unsloth's suggestions worked pretty well for me
1
0
2026-02-28T04:26:51
Old-Sherbert-4495
false
null
0
o7trxeo
false
/r/LocalLLaMA/comments/1rgek4m/what_are_your_expectations_for_the_small_series/o7trxeo/
false
1
t1_o7trvk3
That's fine, you can copy and paste our chat template and use it in that one
2
0
2026-02-28T04:26:29
danielhanchen
false
null
0
o7trvk3
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7trvk3/
false
2
t1_o7trsf3
Thank you we appreciate it. We hope to provide results like this for the next few quants as well
2
0
2026-02-28T04:25:51
danielhanchen
false
null
0
o7trsf3
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7trsf3/
false
2
t1_o7trom3
I agree with what you are saying, my opinion is you will not see any major tech company adjusting their strategy over this and Boeing, if required, will just add some layers of plausible deniability between them and usage of Claude.
1
0
2026-02-28T04:25:04
Similar_Director6322
false
null
0
o7trom3
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7trom3/
false
1
t1_o7trlbk
I'm running it locally and can use thinking on or off flags as well as specify an exact reasoning budget. That's because I read the manual.
-1
0
2026-02-28T04:24:24
OptimizeLLM
false
null
0
o7trlbk
false
/r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7trlbk/
false
-1
t1_o7trknk
it is 2018
1
0
2026-02-28T04:24:16
MasterApplication717
false
null
0
o7trknk
false
/r/LocalLLaMA/comments/1rfdxxc/benchmarked_phi35mini_vs_qwen253b_across_10_task/o7trknk/
false
1
t1_o7trcn3
time to dust off the ol 120B Derestricted. lel
2
0
2026-02-28T04:22:43
My_Unbiased_Opinion
false
null
0
o7trcn3
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7trcn3/
false
2
t1_o7tr928
Really appreciate the architecture critique -- you nailed it. We explicitly avoided coupling the bond to individual disputes for exactly that reason. The bond is a market-entry commitment, disputes are output-level accountability. Different mechanisms for different failure modes. On the philosophical reasoning benc...
1
0
2026-02-28T04:22:02
Bourbeau
false
null
0
o7tr928
false
/r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7tr928/
false
1
t1_o7tr5ye
But will they stop letting Anthropic use their TPUs? That's a commercial transaction.
10
0
2026-02-28T04:21:25
cafedude
false
null
0
o7tr5ye
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tr5ye/
false
10
t1_o7tr4sg
Because that one shot it and its the 35b non thinking.
2
0
2026-02-28T04:21:11
l33t-Mt
false
null
0
o7tr4sg
false
/r/LocalLLaMA/comments/1rgawnq/help_qwen_35_35b_cant_able_to_create_this_html/o7tr4sg/
false
2
t1_o7tqyvo
Anthropic (and OpenAI) were already going to fall to Google/Gemini over the next 6mos-year...while this is an interesting socio-political exercise to watch live, lol, nothing IMO really has changed except Google will probably gain dominance even faster. The deck is stacked for Google. Hate me if you want, but I almost...
5
0
2026-02-28T04:20:02
aallsbury
false
null
0
o7tqyvo
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tqyvo/
false
5
t1_o7tqqyr
> Hegseth declared on X that effective immediately, "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon. From (Hegs...
14
0
2026-02-28T04:18:29
darvs7
false
null
0
o7tqqyr
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tqqyr/
false
14
t1_o7tqr02
What did you see and know then?
3
0
2026-02-28T04:18:29
nitfizz
false
null
0
o7tqr02
false
/r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7tqr02/
false
3
t1_o7tqql5
good catch, that's my bad. the token was not part of 35b's training data, so the model just ignores it or treats it as noise rather than an instruction. the /no_think approach works on the 7B and 14B variants where it was explicitly included in the training mix. for 35b you're left with the system prompt path or budge...
1
0
2026-02-28T04:18:24
Ok_Flow1232
false
null
0
o7tqql5
false
/r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7tqql5/
false
1
t1_o7tqp29
Speaking of Xiaomi, [https://huggingface.co/xiaomi-research/MiLMMT-46-4B-v0.1](https://huggingface.co/xiaomi-research/MiLMMT-46-4B-v0.1) for translation on edge devices...
2
0
2026-02-28T04:18:07
DeProgrammer99
false
null
0
o7tqp29
false
/r/LocalLLaMA/comments/1rgkxy3/list_of_models_that_you_might_have_missed/o7tqp29/
false
2
t1_o7tqm1g
the intelligence drop with /no_think is real, it's not just perception. the model was specifically trained to reason before answering, so forcing it to skip that step hits harder on tasks that actually need multi-step resolution. for purely extractive or formatting tasks it's fine, but for anything that requires plann...
2
0
2026-02-28T04:17:31
Ok_Flow1232
false
null
0
o7tqm1g
false
/r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7tqm1g/
false
2
t1_o7tqlhq
They get better over time. The good ones in that range aren't that much worse than chatgpt 2.5 turbo 
1
0
2026-02-28T04:17:25
Feztopia
false
null
0
o7tqlhq
false
/r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7tqlhq/
false
1
t1_o7tqhg9
[removed]
1
0
2026-02-28T04:16:37
[deleted]
true
null
0
o7tqhg9
false
/r/LocalLLaMA/comments/1kqbhvi/is_parquet_the_best_format_for_ai_datasets_now/o7tqhg9/
false
1
t1_o7tqg5t
Agents and smarter models have become actual handicaps. In a way they have made the entire use of an LLM a very attractive proposition for the masses. Unfortunately folks are losing that touch of what these LLMs are about and how to extract the most from them. Folks could have a problem that requires 8k tokens and burn 20...
2
0
2026-02-28T04:16:21
segmond
false
null
0
o7tqg5t
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tqg5t/
false
2
t1_o7tqfi6
1. LLMs and AI weren't as much of a hype train as they are today, hence Open Source LLM subs were generally more niche and filled with real geeks and nerds. 2. Now that more of the Open Source LLM space is Chinese, LLM subs start to get infested with Chinese bots, pro or anti.
3
0
2026-02-28T04:16:14
bene_42069
false
null
0
o7tqfi6
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tqfi6/
false
3
t1_o7tqdme
That's not what instruct means. It's the type of intended behavior you get from training and fine tuning to literally act as a conversational model that gets instructed. As opposed to something like a text model
3
0
2026-02-28T04:15:52
iMrParker
false
null
0
o7tqdme
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7tqdme/
false
3
t1_o7tqbjf
also, would you mind if I ask you some questions in the DM? I am trying to run my very fast local model and I just want some perspective from someone who has already ran one I also miss ChatGPT 4.1 and would like a similar model
1
0
2026-02-28T04:15:27
yaxir
false
null
0
o7tqbjf
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tqbjf/
false
1
t1_o7tq8ng
Flash
1
0
2026-02-28T04:14:54
Witty_Mycologist_995
false
null
0
o7tq8ng
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tq8ng/
false
1
t1_o7tq7s9
Honestly it just seems like a waste of all the compute to end up with something marginally better than one of last year's flops. Literally nobody is going to use it. Why? Because gpt-oss-120b is less than 1/3 the size of Maverick and wipes the floor with it, too. Why would anyone run Trinity when gpt-oss-120b is small...
0
0
2026-02-28T04:14:44
__JockY__
false
null
0
o7tq7s9
false
/r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7tq7s9/
false
0
t1_o7tq7gq
Which variant of GLM?
3
0
2026-02-28T04:14:40
yaxir
false
null
0
o7tq7gq
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tq7gq/
false
3
t1_o7tpyaj
Fair point — I should’ve provided more context. My first test with any model is always the same: a simple Vue 3 + Vuetify todo app. It should be straightforward. In this case, it struggled with styling consistency, wiring, and even missed basic syntax issues like unclosed script tags after several prompts. After that...
2
0
2026-02-28T04:12:52
Virtual-Listen4507
false
null
0
o7tpyaj
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7tpyaj/
false
2
t1_o7tpr9m
What framework are you using? I've never had it fail a tool use or fail to find the correct information in a web search yet.
1
0
2026-02-28T04:11:30
metigue
false
null
0
o7tpr9m
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7tpr9m/
false
1
t1_o7tpqbv
The results of this test are amazing.
3
0
2026-02-28T04:11:19
Illustrious-Can-4163
false
null
0
o7tpqbv
false
/r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7tpqbv/
false
3
t1_o7tpqcf
I hate anthropic for being so close source, but at least they have SOME principles
5
0
2026-02-28T04:11:19
Alexercer
false
null
0
o7tpqcf
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7tpqcf/
false
5
t1_o7tpq9b
How are you measuring bandwidth? I'm intrigued by your post since I have basically half your machine - Lenovo P520 single xeon w/ 96GB quad-channel DDR4 2133 with a single 5060ti and am considering adding a 2nd 5060. I tested my memory bandwidth using the Intel latency checker and got 50+ GB/s (compared to my Ryzen 7 ...
1
0
2026-02-28T04:11:18
dwkdnvr
false
null
0
o7tpq9b
false
/r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7tpq9b/
false
1
t1_o7tppz7
Not a bad idea, but the nearest Microcenter is a 7-hour drive one-way. If they could ship, that would be great. Seems a lot of the items don't ship though, so I don't typically even bother looking at their website.
1
0
2026-02-28T04:11:15
x8code
false
null
0
o7tppz7
false
/r/LocalLLaMA/comments/1r26zsg/zai_said_they_are_gpu_starved_openly/o7tppz7/
false
1
t1_o7tppd2
Wow, folks were renting instances for Q8? Why didn't you get P40s? Just 4 of them was good enough to give you good speed if you were on a budget.
1
0
2026-02-28T04:11:08
segmond
false
null
0
o7tppd2
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tppd2/
false
1
t1_o7tpoqx
On an RTX 3060 Windows laptop I didn't see any improvement whatsoever between the precompiled build and one I did myself. Maybe a stronger GPU or Linux makes a difference.
1
0
2026-02-28T04:11:00
Icy_Butterscotch6661
false
null
0
o7tpoqx
false
/r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o7tpoqx/
false
1
t1_o7tpm3p
Do tell us what system you were running 405B at. The very few of us that managed to run it were running it at 0.3tk/sec. I was feeling high and mighty to have it run at 1.5tk/sec
3
0
2026-02-28T04:10:29
segmond
false
null
0
o7tpm3p
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7tpm3p/
false
3