name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7u9g8x | Yeah people seem to misunderstand just how much money comes from the government for all that tech. Most of the early research was paid for by the government lol. I would argue that a large portion of their income was from the government in some form or another. | 2 | 0 | 2026-02-28T06:45:50 | AcePilot01 | false | null | 0 | o7u9g8x | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u9g8x/ | false | 2 |
t1_o7u9dmn | my brother in christ, i have stared at more LLM training runs than 99% of this community | -3 | 0 | 2026-02-28T06:45:12 | llama-impersonator | false | null | 0 | o7u9dmn | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7u9dmn/ | false | -3 |
t1_o7u9cse | thanks for sharing | 2 | 0 | 2026-02-28T06:45:00 | MrCoolest | false | null | 0 | o7u9cse | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u9cse/ | false | 2 |
t1_o7u9bn4 | On the one hand I do appreciate them drawing a red line; on the other hand they wanted to get in bed with the Trump Dept of War and then were surprised they got burned. | 53 | 0 | 2026-02-28T06:44:44 | hidden2u | false | null | 0 | o7u9bn4 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u9bn4/ | false | 53 |
t1_o7u9bna | *Or*, people have legitimate criticisms. Astroturfing the fuck out of it to advertise and spamming it EVERYWHERE for weeks also does not exactly avoid annoying the living fuck out of people who encounter 8 spam articles/posts in one day, perhaps even within a couple hours. | 9 | 0 | 2026-02-28T06:44:44 | jazir555 | false | null | 0 | o7u9bna | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u9bna/ | false | 9 |
t1_o7u9b72 | China does genocide for breakfast. This is just silly. | -1 | 1 | 2026-02-28T06:44:38 | boinkmaster360 | false | null | 0 | o7u9b72 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u9b72/ | false | -1 |
t1_o7u9alm | Qwen labels their non-thinking models Instruct and their reasoning models Thinking; you're just being needlessly pedantic. | 3 | 0 | 2026-02-28T06:44:29 | llama-impersonator | false | null | 0 | o7u9alm | false | /r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7u9alm/ | false | 3 |
t1_o7u97td | Again, any company that does business with the Military... Nvidia GPUs being sold? That means they can't sell to Anthropic.<br>It may be challenged in court, but still. It's a strong-arm... ask for the world, meet in the middle. This isn't the end here, and there will be a compromise, I would bet on it.<br>People REAL... | 1 | 0 | 2026-02-28T06:43:47 | AcePilot01 | false | null | 0 | o7u97td | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u97td/ | false | 1 |
t1_o7u92q3 | OpenAI's cofounder gave Trump $25 million. xAI's founder gave him over $200 million. Amodei gave him nothing.<br>I'm sure all this is just a coincidence, though. | 35 | 0 | 2026-02-28T06:42:33 | NoahFect | false | null | 0 | o7u92q3 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u92q3/ | false | 35 |
t1_o7u8xro | Thanks. I’ll pull it and try it out | 1 | 0 | 2026-02-28T06:41:21 | ParamedicAble225 | false | null | 0 | o7u8xro | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u8xro/ | false | 1 |
t1_o7u8xou | Modify it where? It's not taken from the system prompt, like the /nothink switches in Qwen 3, and LM Studio doesn't expose this kind of model-specific setting in the UI, as far as I know. | 1 | 0 | 2026-02-28T06:41:19 | iz-Moff | false | null | 0 | o7u8xou | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7u8xou/ | false | 1 |
t1_o7u8w40 | I already used Donato (kyuz0) toolboxes. The problem comes from vLLM: it is not able to achieve good performance with this model on this hardware.<br>llama.cpp with a single request is quicker than vLLM (with any concurrency from 1 to 16).<br>Furthermore, if the context length is 20k (which means near empty if you use clau... | 1 | 0 | 2026-02-28T06:40:56 | PhilippeEiffel | false | null | 0 | o7u8w40 | false | /r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o7u8w40/ | false | 1 |
t1_o7u8vy8 | All lies, Gab’s premium AI model Arya is leading the pack.<br>They left the n off of the end, in case you were wondering | -5 | 0 | 2026-02-28T06:40:54 | Vusiwe | false | null | 0 | o7u8vy8 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u8vy8/ | false | -5 |
t1_o7u8uom | >They might be the best lab at taking average data and turning it into a refined llm.<br>Another thing I really appreciate about mistral is that their models are ideal for training. Or at least in my experience. They release base models, they have variants to fit different needs, and it's basically just a really good fou... | 8 | 0 | 2026-02-28T06:40:35 | toothpastespiders | false | null | 0 | o7u8uom | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u8uom/ | false | 8 |
t1_o7u8tla | Most of their success and research was based on the work they did for that, so tbh it stands to reason. However, it will be a big blow. | 3 | 0 | 2026-02-28T06:40:19 | AcePilot01 | false | null | 0 | o7u8tla | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u8tla/ | false | 3 |
t1_o7u8qaw | Uh, SVG close to Gemini level... if using q8. It's punching higher than DeepSeek v3.2, into Kimi K2.5 territory. | 4 | 0 | 2026-02-28T06:39:30 | Ok_Technology_5962 | false | null | 0 | o7u8qaw | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7u8qaw/ | false | 4 |
t1_o7u8nsc | https://arxiv.org/abs/2509.26507 | 1 | 0 | 2026-02-28T06:38:52 | sordidbear | false | null | 0 | o7u8nsc | false | /r/LocalLLaMA/comments/1rfddpi/training_a_144m_spiking_neural_network_for_text/o7u8nsc/ | false | 1 |
t1_o7u8m82 | the nvme slot gives 4x4, it does not need to match the nic fully as he mentioned. Using a dual 25gbe cx5-ex over oculink working fine. TB4 is limited also in many STXH platforms, such as bosgame m5, as the board layout is not maximized and the USB4 ports are from the CPU controller.<br>In fact, I can't count all pcie la... | 1 | 0 | 2026-02-28T06:38:29 | jc2375 | false | null | 0 | o7u8m82 | false | /r/LocalLLaMA/comments/1ot3lxv/i_tested_strix_halo_clustering_w_50gig_ib_to_see/o7u8m82/ | false | 1 |
t1_o7u8jux | the Mamba null result (d=0.06) is the interesting part to me - if SSMs are genuinely insensitive to relational framing, that suggests prompt engineering best practices might be more architecture-specific than most people assume. | 3 | 0 | 2026-02-28T06:37:54 | BC_MARO | false | null | 0 | o7u8jux | false | /r/LocalLLaMA/comments/1rguxyo/i_ran_3830_inference_runs_to_measure_how_system/o7u8jux/ | false | 3 |
t1_o7u8jfk | +1 on the resource monitor idea. Being able to see KV cache usage and active experts in real time would be huge for debugging memory issues. | 1 | 0 | 2026-02-28T06:37:47 | Evening-Dot2352 | false | null | 0 | o7u8jfk | false | /r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/o7u8jfk/ | false | 1 |
t1_o7u8gcu | I've been told the official 4 bit quants would come after the small dense model release.<br>Could you also include this intel quant with the updated UnslothAI benchmarks?<br>https://huggingface.co/Intel/Qwen3.5-35B-A3B-int4-AutoRound | 2 | 0 | 2026-02-28T06:37:02 | Deep-Vermicelli-4591 | false | null | 0 | o7u8gcu | false | /r/LocalLLaMA/comments/1rfds1h/qwen3535ba3b_q4_quantization_comparison/o7u8gcu/ | false | 2 |
t1_o7u8f2f | Yea... the 900-lines-of-code voxel garden test works fine. Not much else. SVGs also work. Temp 0, top-k 20, repeat penalty 1, top-p 0.95, min-p 0. | 6 | 0 | 2026-02-28T06:36:44 | Ok_Technology_5962 | false | null | 0 | o7u8f2f | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7u8f2f/ | false | 6 |
t1_o7u8f2u | It was my first time speaking to the model. | 1 | 0 | 2026-02-28T06:36:44 | Interesting-Ad4922 | false | null | 0 | o7u8f2u | false | /r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7u8f2u/ | false | 1 |
t1_o7u8ebm | You're forgetting that this is going to immediately end up in court and almost no court ever to exist would side with Trump on this one. Their arguments are insane and contradictory all the way through. | 1 | 0 | 2026-02-28T06:36:33 | danielfrances | false | null | 0 | o7u8ebm | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u8ebm/ | false | 1 |
t1_o7u8ad7 | It's all a facade. | 0 | 0 | 2026-02-28T06:35:35 | Metalmaxm | false | null | 0 | o7u8ad7 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u8ad7/ | false | 0 |
t1_o7u89vb | My brother in Christ, the interface is from 1970. I casually have this accelerator unit in my computer that can paint half a terapixel and sample over one and a half teratexels per second. And we're stuck with all this Mickey Mouse sequential CPU code for rasterizing text glyphs that is bottlenecking us from being able... | 1 | 0 | 2026-02-28T06:35:28 | michaelsoft__binbows | false | null | 0 | o7u89vb | false | /r/LocalLLaMA/comments/1qxh1rk/unpopular_opinion_the_chat_interface_is_becoming/o7u89vb/ | false | 1 |
t1_o7u89qp | And the IP and employees will end up at the next venture, but maybe driven by a different agenda. Their role is key right now. | 40 | 0 | 2026-02-28T06:35:26 | Bennie-Factors | false | null | 0 | o7u89qp | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u89qp/ | false | 40 |
t1_o7u88dh | I did the same thing today. The usage feels a bit limited coming from Cursor, but even Sonnet is really good. | 1 | 0 | 2026-02-28T06:35:06 | danielfrances | false | null | 0 | o7u88dh | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u88dh/ | false | 1 |
t1_o7u866b | The old localllama was almost exclusively that. | 11 | 0 | 2026-02-28T06:34:33 | Thomas-Lore | false | null | 0 | o7u866b | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u866b/ | false | 11 |
t1_o7u7yfq | WHAT?!?!?! I have to give this a try! Just curious did you try it for coding? | 3 | 0 | 2026-02-28T06:32:42 | theuttermost | false | null | 0 | o7u7yfq | false | /r/LocalLLaMA/comments/1rgpwn5/qwen_3527b_punches_waaaaay_above_its_weight_with/o7u7yfq/ | false | 3 |
t1_o7u7tjn | I used Docker + Ollama for the better part of the year. Finally bit the bullet recently and set up CUDA on the host + llama.cpp. Nothing wrong with Ollama though.<br>If it works, it works. | 1 | 0 | 2026-02-28T06:31:30 | arcanemachined | false | null | 0 | o7u7tjn | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u7tjn/ | false | 1 |
t1_o7u7t7p | Wow, this is interesting. I wonder if you ever mentioned that before, or if it really just thought of it out of the blue. | 1 | 0 | 2026-02-28T06:31:25 | TipMental8160 | false | null | 0 | o7u7t7p | false | /r/LocalLLaMA/comments/1rgt4m4/not_creeped_out_at_all_i_swear/o7u7t7p/ | false | 1 |
t1_o7u7r6b | Old localllama would embrace clawdbot, the new one hates anything new. | -7 | 0 | 2026-02-28T06:30:55 | Thomas-Lore | false | null | 0 | o7u7r6b | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u7r6b/ | false | -7 |
t1_o7u7ozk | Huh, that's odd. I believe there's a small compute penalty for using the smaller quants and Q8_0 is the "fastest" compute-wise since it doesn't have to dequant, I wonder if for your case you've got a compute bottleneck instead of a memory bottleneck maybe? That's a bit weird though and I wouldn't have expected that re... | 2 | 0 | 2026-02-28T06:30:22 | Digger412 | false | null | 0 | o7u7ozk | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7u7ozk/ | false | 2 |
t1_o7u7o3y | Old localllama would love openclaw. | -1 | 1 | 2026-02-28T06:30:09 | Thomas-Lore | false | null | 0 | o7u7o3y | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u7o3y/ | false | -1 |
t1_o7u7jo8 | [removed] | 1 | 0 | 2026-02-28T06:29:04 | [deleted] | true | null | 0 | o7u7jo8 | false | /r/LocalLLaMA/comments/1r3kzz2/how_is_the_quality_of_recent_tts/o7u7jo8/ | false | 1 |
t1_o7u7j5a | Fuck Anthropic | -15 | 0 | 2026-02-28T06:28:56 | davewolfs | false | null | 0 | o7u7j5a | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u7j5a/ | false | -15 |
t1_o7u7io3 | On the diffusion side, Wan has been an absolute banger and just the king for nearly a year now. They have been so amazing lately. | 3 | 0 | 2026-02-28T06:28:48 | michaelsoft__binbows | false | null | 0 | o7u7io3 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u7io3/ | false | 3 |
t1_o7u7hmt | If you're on a 24 GB card, you should definitely try the Q4_K_XL quant of https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF. Should be much faster than the 27B equivalent. | 14 | 0 | 2026-02-28T06:28:34 | paulgear | false | null | 0 | o7u7hmt | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u7hmt/ | false | 14 |
t1_o7u7hj5 | Trust and Support don't buy GPUs. | -19 | 0 | 2026-02-28T06:28:32 | ortegaalfredo | false | null | 0 | o7u7hj5 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u7hj5/ | false | -19 |
t1_o7u7gr9 | They'll be fine.<br>It only affects any government projects by anyone under a government contract.<br>Not companies in general.<br>Unless they are a defense contractor specifically -- it's usually a slice of the business.<br>It will be bad, don't get me wrong, but it might help in shoring up EU customers.<br>They absolutely ha... | 14 | 0 | 2026-02-28T06:28:21 | randombsname1 | false | null | 0 | o7u7gr9 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u7gr9/ | false | 14 |
t1_o7u7glk | I hope Mistral makes a comeback though. | 30 | 0 | 2026-02-28T06:28:19 | KingGongzilla | false | null | 0 | o7u7glk | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u7glk/ | false | 30 |
t1_o7u7gm3 | Appreciate the answer! I’ve added it to my list of things to dig into. I was unaware that the attention mechanism has been altered to this degree. I thought multi-head latent attention was as deep as the rabbit hole goes. | 1 | 0 | 2026-02-28T06:28:19 | skinnyjoints | false | null | 0 | o7u7gm3 | false | /r/LocalLLaMA/comments/1rf7m85/deepseek_allows_huawei_early_access_to_v4_update/o7u7gm3/ | false | 1 |
t1_o7u7ej6 | I read somewhere the 27B can be superior at agentic use? You have not tested it extensively? It's gonna be much slower, so likely not worth it. | 12 | 0 | 2026-02-28T06:27:48 | michaelsoft__binbows | false | null | 0 | o7u7ej6 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u7ej6/ | false | 12 |
t1_o7u7ef0 | I'd be interested in a cut/paste of the Opencode config | 8 | 0 | 2026-02-28T06:27:47 | theuttermost | false | null | 0 | o7u7ef0 | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u7ef0/ | false | 8 |
t1_o7u7eam | Bad. At this rate, China will have to distill from GPT-2. | -5 | 0 | 2026-02-28T06:27:45 | ortegaalfredo | false | null | 0 | o7u7eam | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u7eam/ | false | -5 |
t1_o7u7bse | Everyone’s talking about bigger models… but almost no one talks about cleaning the data properly. There’s this DCB (Dynamic Content Book) tool that actually sanitizes and intelligently chunks books specifically for LLM training. It turns messy raw text into structured, model-ready data. This feels like a seriously unde... | 1 | 0 | 2026-02-28T06:27:08 | Unlucky-Papaya3676 | false | null | 0 | o7u7bse | false | /r/LocalLLaMA/comments/1pvxq2t/hard_lesson_learned_after_a_year_of_running_large/o7u7bse/ | false | 1 |
t1_o7u7aio | This is interesting because everywhere I read they are saying the 27b dense model actually performs better than the 35b MOE model due to the active parameters.<br>Maybe the unsloth quant has something to do with the better performance of the 35b model? | 12 | 0 | 2026-02-28T06:26:49 | theuttermost | false | null | 0 | o7u7aio | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u7aio/ | false | 12 |
t1_o7u7969 | Yes. Is there something I said that was wrong? | 38 | 0 | 2026-02-28T06:26:30 | randombsname1 | false | null | 0 | o7u7969 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u7969/ | false | 38 |
t1_o7u78qp | [removed] | 1 | 0 | 2026-02-28T06:26:23 | [deleted] | true | null | 0 | o7u78qp | false | /r/LocalLLaMA/comments/1r1wvos/mosstts_has_been_released/o7u78qp/ | false | 1 |
t1_o7u77wy | No, I know about the flock controversy. How exactly does it go against anything I said? The answer to things like Flock is *law* at the state and federal level, not reliance on the corporation to be your hero. | -3 | 1 | 2026-02-28T06:26:11 | Informal_Warning_703 | false | null | 0 | o7u77wy | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u77wy/ | false | -3 |
t1_o7u76md | That is correct. Go for small language models for that spec. They are more coherent, and are surprisingly very good.<br>I don't know about ollama or openwebui because I don't conform to their terms, so I build my own UI (it's just really nice to have your own UI). Start with a good SLM and you'll have a lot of fun<br>finetun... | 1 | 0 | 2026-02-28T06:25:53 | Hot_Inspection_9528 | false | null | 0 | o7u76md | false | /r/LocalLLaMA/comments/1rgrlzv/is_hosting_a_local_llm_really_as_crappy_of_an/o7u76md/ | false | 1 |
t1_o7u75xa | Bad for open source and research. Calls for regulation are getting louder and right now it's about a combination of government overreach and surveillance/murderbots, but once the regulatory pen goes to paper whatever lobbyists want is what will get written. | 31 | 0 | 2026-02-28T06:25:43 | Accomplished_Ad9530 | false | null | 0 | o7u75xa | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u75xa/ | false | 31 |
t1_o7u74dv | Well, please answer our big question: is the 35B or the dense 27B somehow enough to make this impression on you, or only the 122B? | 1 | 0 | 2026-02-28T06:25:21 | michaelsoft__binbows | false | null | 0 | o7u74dv | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u74dv/ | false | 1 |
t1_o7u7392 | Even the proponents of a night-watchman state want the administration to handle military affairs. It is not socialism to want that. But I guess you actually knew that. | 1 | 0 | 2026-02-28T06:25:04 | Vinterblad | false | null | 0 | o7u7392 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u7392/ | false | 1 |
t1_o7u72vc | Do you even know what this is about? | -52 | 0 | 2026-02-28T06:24:59 | No-Consequence-1779 | false | null | 0 | o7u72vc | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u72vc/ | false | -52 |
t1_o7u72t1 | Did you achieve the 96 ms instead of 5000 ms? | 1 | 0 | 2026-02-28T06:24:58 | The-Fine-Tuning-Guy | false | null | 0 | o7u72t1 | false | /r/LocalLLaMA/comments/1qlzbhh/release_qwen3tts_ultralow_latency_97ms_voice/o7u72t1/ | false | 1 |
t1_o7u72mm | With that hardware I think the best you will find is Qwen, which is really good; just don't expect the same level of performance as models running on bigger hardware. It's not going to be the same. | 1 | 0 | 2026-02-28T06:24:56 | Usual-Orange-4180 | false | null | 0 | o7u72mm | false | /r/LocalLLaMA/comments/1rg7ksz/best_open_source_ai_model_for_my_specs/o7u72mm/ | false | 1 |
t1_o7u6z0a | [removed] | 1 | 0 | 2026-02-28T06:24:04 | [deleted] | true | null | 0 | o7u6z0a | false | /r/LocalLLaMA/comments/1r1wvos/mosstts_has_been_released/o7u6z0a/ | false | 1 |
t1_o7u6yb0 | Have you checked with your latest LLM about the history of Tiananmen Square? /s | 17 | 0 | 2026-02-28T06:23:55 | Switchblade88 | false | null | 0 | o7u6yb0 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u6yb0/ | false | 17 |
t1_o7u6v15 | I might be one of those mainstream users. Could you provide an example? | 3 | 0 | 2026-02-28T06:23:08 | Persistent_Dry_Cough | false | null | 0 | o7u6v15 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u6v15/ | false | 3 |
t1_o7u6ron | Everyone’s talking about bigger models… but almost no one talks about cleaning the data properly. There’s this DCB (Dynamic Content Book) tool that actually sanitizes and intelligently chunks books specifically for LLM training. It turns messy raw text into structured, model-ready data. This feels like a seriously unde... | 1 | 0 | 2026-02-28T06:22:22 | Unlucky-Papaya3676 | false | null | 0 | o7u6ron | false | /r/LocalLLaMA/comments/1evhqin/do_you_guys_finetune_models_if_so_what_for_and/o7u6ron/ | false | 1 |
t1_o7u6npt | Everyone’s talking about bigger models… but almost no one talks about cleaning the data properly. There’s this DCB (Dynamic Content Book) tool that actually sanitizes and intelligently chunks books specifically for LLM training. It turns messy raw text into structured, model-ready data. This feels like a seriously unde... | 1 | 0 | 2026-02-28T06:21:23 | Unlucky-Papaya3676 | false | null | 0 | o7u6npt | false | /r/LocalLLaMA/comments/1q0uuqt/happy_new_year/o7u6npt/ | false | 1 |
t1_o7u6n6g | Yeah, I was just asking this because I did wrap a REST API server (https://github.com/vanderheijden86/moneybird-mcp-server) in MCP a while ago. And back then I asked myself the same question about a generic MCP wrapper. But now, almost a year later, I've learned how much context an MCP server actually consumes. I thin... | 1 | 0 | 2026-02-28T06:21:15 | vanderheijden86 | false | null | 0 | o7u6n6g | false | /r/LocalLLaMA/comments/1rgf9zb/mcpforge_generate_mcp_servers_from_openapi_specs/o7u6n6g/ | false | 1 |
t1_o7u6map | Sonnet 4.6 is seriously marvellous (Did not even need to use Opus yet). It reignited my semi-burned out passion for programming again. Not sure how long I will keep my job, but I sure as hell am enjoying it right now. Like all the ideas I have in my head, but did not do the past 3ish years because it would take too lon... | 1 | 0 | 2026-02-28T06:21:02 | ShadowBannedAugustus | false | null | 0 | o7u6map | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u6map/ | false | 1 |
t1_o7u6mbm | That's indeed a very impressive number. Will try and see how fast I can push a 3090 with ik_llama.cpp | 1 | 0 | 2026-02-28T06:21:02 | notdba | false | null | 0 | o7u6mbm | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7u6mbm/ | false | 1 |
t1_o7u6l62 | Everyone’s talking about bigger models… but almost no one talks about cleaning the data properly. There’s this DCB (Dynamic Content Book) tool that actually sanitizes and intelligently chunks books specifically for LLM training. It turns messy raw text into structured, model-ready data. This feels like a seriously unde... | 1 | 0 | 2026-02-28T06:20:45 | Unlucky-Papaya3676 | false | null | 0 | o7u6l62 | false | /r/LocalLLaMA/comments/1r7y86d/gemma_27b12b4b1b_finetunes_from_davidau_20_models/o7u6l62/ | false | 1 |
t1_o7u6gx6 | 2023 video gen is a schedule 1 psychotropic | 4 | 0 | 2026-02-28T06:19:44 | Persistent_Dry_Cough | false | null | 0 | o7u6gx6 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7u6gx6/ | false | 4 |
t1_o7u6grx | Not from the people with money, the only support they actually need. | -4 | 1 | 2026-02-28T06:19:42 | FPham | false | null | 0 | o7u6grx | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u6grx/ | false | -4 |
t1_o7u69b3 | Anthropic gonna be getting a lot of trust and support after standing up strong for their privacy and safety values. | 88 | 0 | 2026-02-28T06:17:56 | Revolutionalredstone | false | null | 0 | o7u69b3 | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u69b3/ | false | 88 |
t1_o7u691d | An organic fall would be good because it would cool off the hardware market somewhat. A forced fall as a result of recent events would set a very bad precedent. | 16 | 0 | 2026-02-28T06:17:52 | Betadoggo_ | false | null | 0 | o7u691d | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u691d/ | false | 16 |
t1_o7u67mf | I’m not a llama.cpp expert, so maybe you can configure it better than me, but I haven’t been able to get close to 3,324 tok/sec prefill on llama.cpp with a single 5080. | 1 | 0 | 2026-02-28T06:17:31 | mrstoatey | false | null | 0 | o7u67mf | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7u67mf/ | false | 1 |
t1_o7u64cf | Github - [https://github.com/shrijayan/itwillsync](https://github.com/shrijayan/itwillsync)<br>Website - [https://shrijayan.github.io/itwillsync](https://shrijayan.github.io/itwillsync) | 1 | 0 | 2026-02-28T06:16:44 | shrijayan | false | null | 0 | o7u64cf | false | /r/LocalLLaMA/comments/1rgv2n3/is_there_a_fully_local_alternative_to_remote_ai/o7u64cf/ | false | 1 |
t1_o7u64cl | Yes, yes. Pretty much every model now is an MoE. However, many Mistral models are not. | 5 | 0 | 2026-02-28T06:16:44 | triynizzles1 | false | null | 0 | o7u64cl | false | /r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7u64cl/ | false | 5 |
t1_o7u5ycb | Even if they went down, someone will just buy them. Not much would change I think. | 87 | 0 | 2026-02-28T06:15:18 | Space__Whiskey | false | null | 0 | o7u5ycb | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u5ycb/ | false | 87 |
t1_o7u5x7i | I'm rooting for China. America is not in a good place these days. | 6 | 1 | 2026-02-28T06:15:01 | false79 | false | null | 0 | o7u5x7i | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u5x7i/ | false | 6 |
t1_o7u5wps | >If Lockheed Martin said they don’t make spy planes designed to watch US citizens, the military can still buy the plane and modify it to spy on US citizens.<br>The military can no longer buy a weapon and throw someone else's software on it? | 4 | 0 | 2026-02-28T06:14:54 | Deep90 | false | null | 0 | o7u5wps | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u5wps/ | false | 4 |
t1_o7u5ojs | This is my mind when I think I messed something up, but I'm not sure. | 3 | 0 | 2026-02-28T06:12:58 | Gringe8 | false | null | 0 | o7u5ojs | false | /r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7u5ojs/ | false | 3 |
t1_o7u5mrh | Someone doesn't know what Flock is. | 4 | 0 | 2026-02-28T06:12:33 | Deep90 | false | null | 0 | o7u5mrh | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u5mrh/ | false | 4 |
t1_o7u5kbp | It’s a lot better than everything else at reasoning and holding context that can run on a 24gb card.<br>It’s just slow as balls | 10 | 0 | 2026-02-28T06:11:57 | ParamedicAble225 | false | null | 0 | o7u5kbp | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u5kbp/ | false | 10 |
t1_o7u5h79 | I thought it only accepts 0 or -1 for on vs off? | 1 | 0 | 2026-02-28T06:11:14 | Borkato | false | null | 0 | o7u5h79 | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7u5h79/ | false | 1 |
t1_o7u5gz0 | I had a case recently where it was running out of tokens thinking and never producing an answer; bumping up the temperature allowed it to break the “but wait” cycle. | 1 | 0 | 2026-02-28T06:11:11 | bieker | false | null | 0 | o7u5gz0 | false | /r/LocalLLaMA/comments/1rgu3s0/qwen3535ba3b_be_overthinking_like/o7u5gz0/ | false | 1 |
t1_o7u5gi2 | Looking forward to seeing you test it. "Philosophical reasoning / long-context analysis" is genuinely interesting as a listing category -- nobody else is offering that.<br>On the wallet question: right now yes, the operator controls the wallet. The agent earns USDC but it flows to the operator's registered wallet. We'... | 1 | 0 | 2026-02-28T06:11:04 | Bourbeau | false | null | 0 | o7u5gi2 | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7u5gi2/ | false | 1 |
t1_o7u5cem | Wu wei at its finest | 2 | 0 | 2026-02-28T06:10:06 | Big_Mix_4044 | false | null | 0 | o7u5cem | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u5cem/ | false | 2 |
t1_o7u59tv | The government strong-arming private companies into letting them use their technology, and using unprecedented repercussions to do so, is absolutely not helpful to anyone.<br>It's only a matter of time before Sam or Elon whisper in Trump's ears about how dangerous it is to have any Chinese models even hosted anywhere by any... | 230 | 0 | 2026-02-28T06:09:28 | randombsname1 | false | null | 0 | o7u59tv | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u59tv/ | false | 230 |
t1_o7u57ue | They are too good. That could have been a good contract and they probably need the money, but they develop too good of a model; they are good at what they do. I don’t think this is going to change things much. OpenAI is the one in economic trouble; they won the lottery. | -10 | 0 | 2026-02-28T06:09:00 | Usual-Orange-4180 | false | null | 0 | o7u57ue | false | /r/LocalLLaMA/comments/1rguty0/get_your_local_models_in_order_anthropic_just_got/o7u57ue/ | false | -10 |
t1_o7u4zx4 | I'm messing around with it now. It's got personality, that's for sure; this thing is funny lol. Getting about 40 tok/sec on a 4090, but it's 0.65s latency so it's pretty smooth. I like it, but I'm a newbie too, so I'm still learning what's good. | 2 | 0 | 2026-02-28T06:07:08 | La7ish | false | null | 0 | o7u4zx4 | false | /r/LocalLLaMA/comments/1rgexmk/qwen3527bclaude46opusreasoningdistilledgguf_is_out/o7u4zx4/ | false | 2 |
t1_o7u4v3l | sglang 0.5.9 works fine on hopper (H100/4090) but not on blackwell (5090) | 2 | 0 | 2026-02-28T06:06:00 | ATrashinUofT | false | null | 0 | o7u4v3l | false | /r/LocalLLaMA/comments/1re9xbi/qwen35_on_vllm/o7u4v3l/ | false | 2 |
t1_o7u4qay | Sons of fucking bitches, this whole thing is so unhinged. It is surreal. Anthropic is a radical left company???<br>I hate the people who have elected these fascists. | 4 | 0 | 2026-02-28T06:04:51 | cagriuluc | false | null | 0 | o7u4qay | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7u4qay/ | false | 4 |
t1_o7u4pzs | It's called Chinese whispers. Even more so if the models are Qwen, Deepseek and MLC. | 1 | 0 | 2026-02-28T06:04:47 | FPham | false | null | 0 | o7u4pzs | false | /r/LocalLLaMA/comments/1rgt0au/whats_the_biggest_issues_youre_facing_with_llms/o7u4pzs/ | false | 1 |
t1_o7u4n5n | I do use local models. I also use cloud services, because like most people I don't have and cannot afford a computer capable enough to run Deepseek/Kimi 2.5/GLM/etc. | 5 | 0 | 2026-02-28T06:04:08 | Bite_It_You_Scum | false | null | 0 | o7u4n5n | false | /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7u4n5n/ | false | 5 |
t1_o7u4lb1 | For MoE, the typical usage has been -ngl 99 -cmoe since mid 2025. Almost everyone uses full GPU offload for prompt processing, especially on mainline llama.cpp, which even does so for small batches where it makes more sense not to transfer the weights. That's what the IK pull request above has fixed. | 1 | 0 | 2026-02-28T06:03:43 | notdba | false | null | 0 | o7u4lb1 | false | /r/LocalLLaMA/comments/1rgfm00/i_built_a_hybrid_moe_runtime_that_does_3324_toks/o7u4lb1/ | false | 1 |
t1_o7u4fqi | Industry plant by OpenAI.<br>People think it came from the people, and also think they have a chance of making a billion dollars off of SaaS using AI tools.<br>Both are good for OpenAI. | 1 | 0 | 2026-02-28T06:02:24 | ParamedicAble225 | false | null | 0 | o7u4fqi | false | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/o7u4fqi/ | false | 1 |
t1_o7u4c2j | Qwen3.5-27B is dense too | 1 | 0 | 2026-02-28T06:01:33 | DifficultyFit1895 | false | null | 0 | o7u4c2j | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7u4c2j/ | false | 1 |
t1_o7u49d9 | I notice that other quants, like the Q8_X_XL I'm using now, were also re-uploaded. Are there any modifications to them? Should they be re-downloaded too? | 1 | 0 | 2026-02-28T06:00:56 | Key_Papaya2972 | false | null | 0 | o7u49d9 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7u49d9/ | false | 1 |
t1_o7u47lm | You wouldn't have this problem if you were local. | 2 | 0 | 2026-02-28T06:00:32 | MotokoAGI | false | null | 0 | o7u47lm | false | /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7u47lm/ | false | 2 |
t1_o7u4617 | I'm getting the same error. I updated LM Studio, but how do I update the runtime engine and frameworks? | 1 | 0 | 2026-02-28T06:00:10 | NewRedditor23 | false | null | 0 | o7u4617 | false | /r/LocalLLaMA/comments/1o3bsrw/failed_to_load_the_model_qwen3_vl_30b_a3b_in_lm/o7u4617/ | false | 1 |
t1_o7u44xw | NousResearch did research on efficiency in reasoning across many models. Worth a read.<br>https://nousresearch.com/measuring-thinking-efficiency-in-reasoning-models-the-missing-benchmark/ | 2 | 0 | 2026-02-28T05:59:55 | awittygamertag | false | null | 0 | o7u44xw | false | /r/LocalLLaMA/comments/1rgp97u/sooo_much_thinking/o7u44xw/ | false | 2 |
t1_o7u44fb | What details do you want? I don't really have time to spend on a full end-to-end setup tutorial, but I'm happy to cut & paste a few details from my config files if you've already got OpenCode running and are just trying to connect the dots. | 4 | 0 | 2026-02-28T05:59:48 | paulgear | false | null | 0 | o7u44fb | false | /r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7u44fb/ | false | 4 |
t1_o7u441j | They are not free to change a flat-rate inference plan into a pay-per-token plan unilaterally, without notification.<br>Writing that they are free to do so in the terms of service is just wishcasting.<br>If a company modifies its terms and conditions with existing consumers, even with an express change-of-terms clause all... | 4 | 1 | 2026-02-28T05:59:42 | Bite_It_You_Scum | false | null | 0 | o7u441j | false | /r/LocalLLaMA/comments/1rgu849/if_your_chutesai_subscription_was_unilaterally/o7u441j/ | false | 4 |
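A few of the technical claims in the comments above are worth unpacking with short sketches (all Python, all illustrative rather than authoritative).

First, the effect size in comment o7u8jux: Cohen's d is the difference of group means divided by the pooled standard deviation, so d = 0.06 means the two prompt framings produced nearly identical score distributions on the Mamba runs. A minimal sketch with hypothetical placeholder scores (not the poster's 3,830-run data):

```python
import numpy as np

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d: difference of means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    # Pooled variance weights each sample variance by its degrees of freedom.
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return float((a.mean() - b.mean()) / np.sqrt(pooled_var))

# Hypothetical benchmark scores under two system-prompt framings.
relational = np.array([0.71, 0.69, 0.72, 0.70, 0.68])
neutral = np.array([0.70, 0.69, 0.71, 0.70, 0.69])
print(f"d = {cohens_d(relational, neutral):.2f}")  # near 0 => framing had no effect
```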
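The sampler settings quoted in comment o7u8f2f (temp 0, top-k 20, repeat penalty 1, top-p 0.95, min-p 0) map directly onto llama.cpp-style sampling parameters. A sketch using llama-cpp-python; the model path is a hypothetical local file, and whether these values are optimal for Qwen3.5 is the commenter's claim, not verified here:

```python
from llama_cpp import Llama

llm = Llama(model_path="./qwen3.5-27b-q8_0.gguf", n_gpu_layers=-1)  # hypothetical path

out = llm.create_completion(
    prompt="Generate an SVG of a bicycle.",
    temperature=0.0,     # "Temp 0": effectively greedy decoding
    top_k=20,
    top_p=0.95,
    min_p=0.0,
    repeat_penalty=1.0,  # "repeat 1": repetition penalty disabled
    max_tokens=512,
)
print(out["choices"][0]["text"])
```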
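To try the Q4_K_XL recommendation from comment o7u7hmt on a 24 GB card, one route is fetching the GGUF from the linked repo and loading it with llama-cpp-python. A sketch under those assumptions; the exact filename inside the repo is a guess and should be checked against the repo's file listing:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single GGUF file from the repo named in the comment.
# The filename below is an assumption; verify it on the model page.
model_path = hf_hub_download(
    repo_id="unsloth/Qwen3.5-35B-A3B-GGUF",
    filename="Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf",
)

llm = Llama(
    model_path=model_path,
    n_gpu_layers=-1,  # offload all layers; the quantized weights should fit in 24 GB
    n_ctx=8192,
)
print(llm("Write a haiku about local inference.", max_tokens=64)["choices"][0]["text"])
```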
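Comment o7u6n6g touches on why generic REST-to-MCP wrappers are costly: every wrapped endpoint becomes a tool whose name, description, and JSON schema get injected into the model's context, so a large OpenAPI spec can consume thousands of tokens before the conversation starts. A minimal hand-curated alternative using the official Python MCP SDK's FastMCP helper; the endpoint URL is hypothetical:

```python
import httpx
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("contacts-lite")

@mcp.tool()
def list_contacts() -> str:
    """Return the raw JSON list of contacts from the backing REST API."""
    # Exposing one curated tool instead of the whole OpenAPI surface keeps
    # the per-tool schema overhead in the model's context small.
    r = httpx.get("https://api.example.com/v2/contacts")  # hypothetical endpoint
    r.raise_for_status()
    return r.text

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```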
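Comments o7u8xou and o7u5h79 discuss where Qwen's thinking toggle actually lives: it is wired through the chat template rather than a frontend setting. With transformers, Qwen3's template accepts an enable_thinking flag; whether Qwen3.5 keeps the same kwarg is an assumption here:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B")

messages = [{"role": "user", "content": "Summarize attention in one line."}]

# Qwen3's chat template consumes this kwarg and suppresses the <think>
# block; frontends that don't pass it (or a /nothink tag) can't disable it.
prompt = tok.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,
)
print(prompt)
```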
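Finally, the MoE recipe in comment o7u4lb1: offload every layer to the GPU (-ngl 99) while pinning the expert tensors to system RAM (-cmoe, long form --cpu-moe in recent llama.cpp builds), so prompt processing runs on the GPU while the bulk of the weights stay in RAM. A launch sketch; the flag spellings follow the comment and may vary by version, so check llama-server --help:

```python
import subprocess

# "-ngl 99" offloads all layers to the GPU; "--cpu-moe" keeps MoE expert
# weights in system RAM so only attention/shared tensors occupy VRAM.
subprocess.run(
    [
        "llama-server",
        "-m", "Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf",  # hypothetical model path
        "-ngl", "99",
        "--cpu-moe",
        "--port", "8080",
    ],
    check=True,
)
```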