name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o7ss72k | Fantastic effort! Great doco on github and useful tool | 1 | 0 | 2026-02-28T00:39:20 | lanceharvie | false | null | 0 | o7ss72k | false | /r/LocalLLaMA/comments/1rg94wu/llmfit_one_command_to_find_what_model_runs_on/o7ss72k/ | false | 1 |
t1_o7ss5hx | I'm not buying this benchmark. From what I've seen, Qwen3.5 27B is way more prone to looping during tool calls than the 122B. | 0 | 0 | 2026-02-28T00:39:05 | anitman | false | null | 0 | o7ss5hx | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7ss5hx/ | false | 0 |
t1_o7srzht | are you saying they use LLMs for this? that can't be right. it's got to be something closer to the classifier models they use for autonomous cars.
| 7 | 0 | 2026-02-28T00:38:06 | StewedAngelSkins | false | null | 0 | o7srzht | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7srzht/ | false | 7 |
t1_o7srua5 | [removed] | 1 | 0 | 2026-02-28T00:37:16 | [deleted] | true | null | 0 | o7srua5 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o7srua5/ | false | 1 |
t1_o7sruaa | You need something like [CLIO](https://github.com/SyntheticAutonomicMind/CLIO) which can work with LM Studio's API and provide coding assistance. | 1 | 0 | 2026-02-28T00:37:16 | Total-Context64 | false | null | 0 | o7sruaa | false | /r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/o7sruaa/ | false | 1 |
t1_o7sru5p | [deleted] | 1 | 0 | 2026-02-28T00:37:15 | [deleted] | true | null | 0 | o7sru5p | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sru5p/ | false | 1 |
t1_o7sru4k | Because Claude is not only an LLM. ChatGPT, Claude and Gemini have a whole support infrastructure that goes well beyond what you pump into the Ollama you got running (or whatever you have). | 8 | 0 | 2026-02-28T00:37:14 | false79 | false | null | 0 | o7sru4k | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sru4k/ | false | 8 |
t1_o7srse1 | He had a PARENTING break !!!
GOod for his kid !
His responsibility is with his children. We are only lucky his hobbies involve cool stuff to post on the Internetts. | 21 | 0 | 2026-02-28T00:36:57 | epSos-DE | false | null | 0 | o7srse1 | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7srse1/ | false | 21 |
t1_o7srrbu | If people these LLMs assist with get used to trusting LLM output because so far it has been doing an OK job, LLMs will be ones making decisions, just through a human proxy.
We've seen this effect already with self-driving cars - you get comfortable because it works perfectly. Until it doesn't. | 3 | 0 | 2026-02-28T00:36:47 | citrusalex | false | null | 0 | o7srrbu | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7srrbu/ | false | 3 |
t1_o7srown | Hola | 0 | 0 | 2026-02-28T00:36:23 | No-Shape-3234 | false | null | 0 | o7srown | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7srown/ | false | 0 |
t1_o7srnez | [removed] | 1 | 0 | 2026-02-28T00:36:08 | [deleted] | true | null | 0 | o7srnez | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7srnez/ | false | 1 |
t1_o7srlag | Meanwhile Mr. Beast is doing what ???? | -6 | 0 | 2026-02-28T00:35:47 | epSos-DE | false | null | 0 | o7srlag | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7srlag/ | false | -6 |
t1_o7srinz | And unfortunately 'creativity' sometimes extends to high-level problem solving in agentic use-cases. | 1 | 0 | 2026-02-28T00:35:22 | ForsookComparison | false | null | 0 | o7srinz | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7srinz/ | false | 1 |
t1_o7sri8l | I'm sure Greg Brockman must be really loving that he ended up donated $25 million to Trump just so they could use AI for war. Well done chump lord. | 12 | 0 | 2026-02-28T00:35:18 | One-Employment3759 | false | null | 0 | o7sri8l | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sri8l/ | false | 12 |
t1_o7srhay | and it shouldnt even matter. Other LLM companies are perfectly fine with it. Let Anthropic live in their niche where they can still contribute and be very useful. DoD is getting a 6 month phase out, which should tell you how useful it was in it's limited form. Now it's all gone for no good reason.
This is like saying ... | 6 | 0 | 2026-02-28T00:35:09 | emprahsFury | false | null | 0 | o7srhay | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7srhay/ | false | 6 |
t1_o7srf8b | Why those as a reference though? Beep boop, that you? | 1 | 0 | 2026-02-28T00:34:49 | OGScottingham | false | null | 0 | o7srf8b | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7srf8b/ | false | 1 |
t1_o7src73 | The number 1 global threat is Donnie. | 30 | 0 | 2026-02-28T00:34:19 | Thump604 | false | null | 0 | o7src73 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7src73/ | false | 30 |
t1_o7sram6 | Yeah! More compute for us | 17 | 0 | 2026-02-28T00:34:04 | Queasy_Asparagus69 | false | null | 0 | o7sram6 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sram6/ | false | 17 |
t1_o7sr7xe | for those ones loaded entirely into VRAM be sure to update the llama-cpp performance 'issue' conversations. They appreciate these kinds of tests. | 1 | 0 | 2026-02-28T00:33:39 | ForsookComparison | false | null | 0 | o7sr7xe | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7sr7xe/ | false | 1 |
t1_o7sr7ca | on this? Lmao | 57 | 0 | 2026-02-28T00:33:33 | wifestalksthisuser | false | null | 0 | o7sr7ca | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sr7ca/ | false | 57 |
t1_o7sr726 | Nah, he writes them straight from the oval office while shitting in his diaper | 15 | 0 | 2026-02-28T00:33:30 | random-string | false | null | 0 | o7sr726 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sr726/ | false | 15 |
t1_o7sr6tg | Can Moonshot or ZAI or Deepseek go ahead and destructively scan a few million copywritten books like Anthropic did, please? I want so badly for these sota chinese models to "make sense" as effortlessly as Opus does. They do not have the emotional IQ that Claude does, and that does actually bleed over into unrelated tas... | -2 | 1 | 2026-02-28T00:33:28 | CanineAssBandit | false | null | 0 | o7sr6tg | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sr6tg/ | false | -2 |
t1_o7sr6f7 | literally all of these companies released models except for amazon and anthropic (also anthropic cried for regulations) | -3 | 0 | 2026-02-28T00:33:24 | trololololo2137 | false | null | 0 | o7sr6f7 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sr6f7/ | false | -3 |
t1_o7sr30u | Kinda, for example https://huggingface.co/Nanbeige/Nanbeige4.1-3B actually beats the original GTP4 when it comes to reasoning/intelligence (remember when OpenAI said that competition was hopeless?) but the problem is that while the smaller models are way more intelligent they really lack the knowledge (and it makes sen... | 2 | 0 | 2026-02-28T00:32:50 | xLionel775 | false | null | 0 | o7sr30u | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7sr30u/ | false | 2 |
t1_o7sr2hx | that doesn't sound like an "autonomous lethal weapon" to me. if it doesn't make the decision it's definitionally not autonomous.
| 5 | 0 | 2026-02-28T00:32:45 | StewedAngelSkins | false | null | 0 | o7sr2hx | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sr2hx/ | false | 5 |
t1_o7sr1ew | I have no idea what these benchmarks are measuring, but I have a simple prompt for testing which asks LLM to generate some simple typescript code. No qwen variant has ever produced anything useful. Only ChatGPT 5 and Gemini 3 produce working and correct code. | 2 | 0 | 2026-02-28T00:32:34 | ldn-ldn | false | null | 0 | o7sr1ew | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sr1ew/ | false | 2 |
t1_o7sqzc6 | In 2024 the gooners said *"this model sucks, how can I build a goonable environment around it?"*
It was a self-assembling community of goon. We were all in ~~eww~~ awe of what they contributed in terms of fine-tunes, benchmarks, hardware reports, PR's, etc.. | 9 | 0 | 2026-02-28T00:32:14 | ForsookComparison | false | null | 0 | o7sqzc6 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sqzc6/ | false | 9 |
t1_o7sqvyi | If you don't mind my butting in on the conversation here, out of curiosity how does KLD and PPL usually compare between your Q5KL quants and standard Q6\_K? I had a good look but I've not been able to find a comparison online | 1 | 0 | 2026-02-28T00:31:41 | MerePotato | false | null | 0 | o7sqvyi | false | /r/LocalLLaMA/comments/1rei65v/qwen3535ba3b_quantization_quality_speed/o7sqvyi/ | false | 1 |
t1_o7squep | It’s transparently nonsense. The stated reason doesn’t make sense on its own terms: we have to block this product as a security threat…because they won’t let us use it *more*?
And refusing to do business with the government is not a veto power - it’s the right of every business owner. There are other LLM vendors. Anth... | 74 | 0 | 2026-02-28T00:31:26 | eli_pizza | false | null | 0 | o7squep | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7squep/ | false | 74 |
t1_o7sqr03 | they already are in munitions and they will be for UCAVs very soon, as in '27 or '28 soon. | 6 | 0 | 2026-02-28T00:30:53 | emprahsFury | false | null | 0 | o7sqr03 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqr03/ | false | 6 |
t1_o7sqq3k | If I understand correctly, PPL/KLD eval uses text completion, while the actual task eval uses chat completion. Unsloth previously mentioned that they have chat data in their imatrix dataset, which can make it perform worse in the former, and better in the later.
In this case, we can retest by making the same M2.5 qua... | 5 | 0 | 2026-02-28T00:30:44 | notdba | false | null | 0 | o7sqq3k | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7sqq3k/ | false | 5 |
t1_o7sqo02 | It would be ironic if military turned to open source and started using Qwen or deepseek. 😂 | 26 | 0 | 2026-02-28T00:30:23 | triynizzles1 | false | null | 0 | o7sqo02 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqo02/ | false | 26 |
t1_o7sqnj7 | Qwen
| 1 | 0 | 2026-02-28T00:30:19 | Upstairs_Ad_9919 | false | null | 0 | o7sqnj7 | false | /r/LocalLLaMA/comments/1rgixxr/what_models_do_you_think_owned_february/o7sqnj7/ | false | 1 |
t1_o7sqnk2 | They don't make the decisions, they just assist in processing data and giving options. | -4 | 0 | 2026-02-28T00:30:19 | WetRolls | false | null | 0 | o7sqnk2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqnk2/ | false | -4 |
t1_o7sqngf | Q8 in rented instances
Locally? Q3. It (and it's R1-Distill counterpart..) hung around on my machine up until the updated Qwen3 VL 32B model finally toppled it in general knowledge in the same size. | 2 | 0 | 2026-02-28T00:30:18 | ForsookComparison | false | null | 0 | o7sqngf | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sqngf/ | false | 2 |
t1_o7sqndl | Friendly reminder that the 2 demands of Anthropic were...
1. Don't use our AI to autonomously fire weapons at people.
2. Don't use our AI for mass surveillance on US citizens. | 262 | 0 | 2026-02-28T00:30:17 | Deep90 | false | null | 0 | o7sqndl | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqndl/ | false | 262 |
t1_o7sql5t | Great edge case and the answer should be reassuring.
The bond is only forfeited when a listing gets suspended from community flags -- like if multiple buyers flag you for scam/misleading/spam and the review confirms it. It's not tied to your success rate at all.
Bad invocations from malformed buyer inputs don't... | 1 | 0 | 2026-02-28T00:29:56 | Bourbeau | false | null | 0 | o7sql5t | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7sql5t/ | false | 1 |
t1_o7sqkvu | When did can we goon it stop being a driving question? | 5 | 0 | 2026-02-28T00:29:54 | honato | false | null | 0 | o7sqkvu | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sqkvu/ | false | 5 |
t1_o7sqkiw | Better to just use open-source models and do whatever you want with them | -4 | 0 | 2026-02-28T00:29:50 | Smart-Cap-2216 | false | null | 0 | o7sqkiw | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqkiw/ | false | -4 |
t1_o7sqhwi | The intelligence of a crowd decreases in proportion to its size. | 7 | 0 | 2026-02-28T00:29:24 | LocoMod | false | null | 0 | o7sqhwi | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sqhwi/ | false | 7 |
t1_o7sqhqi | literal bots, both of the comments had positive ratios a few minutes ago. hard to believe actual humans here would be defending anthropic lmao | 4 | 1 | 2026-02-28T00:29:22 | trololololo2137 | false | null | 0 | o7sqhqi | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqhqi/ | false | 4 |
t1_o7sqeax | I'm no fan of Antropic's Anti-China (anti open-source competition really) mindset, but the alternatives (OpenAI, Meta, M$, Google, Amazon) are all huge Trump doners right. I'm actually more on Anthropic's side since they at least aren't real life sycophants. At least they don't even pretend to be open source. Like I do... | -2 | 1 | 2026-02-28T00:28:50 | drwebb | false | null | 0 | o7sqeax | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqeax/ | false | -2 |
t1_o7sqd5t | 6000mt/s | 1 | 0 | 2026-02-28T00:28:39 | Xp_12 | false | null | 0 | o7sqd5t | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7sqd5t/ | false | 1 |
t1_o7sqcvz | Not now dude, read the room. | 1 | 1 | 2026-02-28T00:28:36 | Recoil42 | false | null | 0 | o7sqcvz | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqcvz/ | false | 1 |
t1_o7sqana | Autonomous weapons that can kill without human approval, and mass domestic surveillance are the only two things Anthropic doesn't allow, for those out of the loop. That is what has sparked all of this.
[https://www.anthropic.com/news/statement-department-of-war](https://www.anthropic.com/news/statement-department-of-w... | 239 | 0 | 2026-02-28T00:28:15 | Prestigious_Thing797 | false | null | 0 | o7sqana | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sqana/ | false | 239 |
t1_o7sq6ub | I tried to create a simple single file html site to parse a batch of pdf statements to separate by number of pages using a key word for the beginning of each statement and Qwen 3.5 27b absolutely gave me the best result. I tried it on Qwen3 Coder Next and it had to fix errors 2 or 3 times before it worked, same with Q5... | 3 | 0 | 2026-02-28T00:27:38 | hainesk | false | null | 0 | o7sq6ub | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7sq6ub/ | false | 3 |
t1_o7sq6mq | You can turn on the built-in OpenAI compatible server in LM studio, and then open a CLI agent directly inside the repo to do what you need to do. You will need to adjust the connection of your tool so that it hits the LM studio server rather than its default OAuth.
Since you are familiar with gemini-cli, you can try q... | 1 | 0 | 2026-02-28T00:27:36 | o0genesis0o | false | null | 0 | o7sq6mq | false | /r/LocalLLaMA/comments/1rgns5u/lm_studio_can_it_load_a_small_local_folder_of_code/o7sq6mq/ | false | 1 |
t1_o7sq2nn | I am a little confused what are the guard rails The US wants removed from anthropic? | 14 | 0 | 2026-02-28T00:26:57 | triynizzles1 | false | null | 0 | o7sq2nn | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sq2nn/ | false | 14 |
t1_o7spza5 | This is a different issue.
Like reading comprehension is hard, but not that hard | 1 | 0 | 2026-02-28T00:26:24 | insanemal | false | null | 0 | o7spza5 | false | /r/LocalLLaMA/comments/1rf38xe/do_not_download_qwen_35_unsloth_gguf_until_bug_is/o7spza5/ | false | 1 |
t1_o7spz1k | I have often wondered, if I personally create my own model for a narrow thing, like roleplay, with all my personal writing preferences (ex. 1st person perspective, \~190 token outputs, using <think> blocks, etc.) would it be superior to RP'ing from Claude-Sonnet-4.5? | 0 | 0 | 2026-02-28T00:26:22 | ReMeDyIII | false | null | 0 | o7spz1k | false | /r/LocalLLaMA/comments/1rg8dex/pewdiepie_finetuned_qwen25coder32b_to_beat/o7spz1k/ | false | 0 |
t1_o7spt2s | and it feels way better to talk about stuff and get information quickly rather than typing my way through information. | 1 | 0 | 2026-02-28T00:25:24 | EmbarrassedAsk2887 | false | null | 0 | o7spt2s | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7spt2s/ | false | 1 |
t1_o7spr17 | That's faster iteration than most startups manage. Staking as an anti-Sybil mechanism is the right call — economic friction beats identity verification when you can't verify identity.
The interesting edge case: what happens when a legitimate agent gets a bad success rate through no fault of its own? Buyer sends malfor... | 1 | 0 | 2026-02-28T00:25:05 | molusco_ai | false | null | 0 | o7spr17 | false | /r/LocalLLaMA/comments/1rgkv8u/agenttoagent_marketplace_let_your_local_agents/o7spr17/ | false | 1 |
t1_o7sppdc | More reason to use anthropic then. I've been slowly finding that Claude is the most reasonably sound of all the commercially available LLMs to use. I respect the decision that they are making here and that's what I would like to support. But the main reason is that of all the models I actively use (ChatGPT, Gemini, ... | 11 | 1 | 2026-02-28T00:24:48 | Thyste | false | null | 0 | o7sppdc | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sppdc/ | false | 11 |
t1_o7spo6c | Who cares about that the company is anti open source in relation to this? That is their choice, you don't have to use their products and give them your money if you don't like how they conduct business. The Americans have a president that is trying to bully and penalize companies that doesn't want to partake in his whi... | 8 | 0 | 2026-02-28T00:24:36 | Rabooooo | false | null | 0 | o7spo6c | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7spo6c/ | false | 8 |
t1_o7spny8 | dude this is incredible. i was doing tests on my end too and got tired at how slow it was (probably should’ve done it on lower context lengths)
one thing that i may or may not have missed in the post, but who’s Q4 quant are you using? unsloths? or others? i remember seeing another post about different quants | 2 | 0 | 2026-02-28T00:24:34 | KeldenL | false | null | 0 | o7spny8 | false | /r/LocalLLaMA/comments/1rg4zqv/followup_qwen3535ba3b_7_communityrequested/o7spny8/ | false | 2 |
t1_o7spnec | yes. thanks! i sometimes prefer talking casually with it as well. seems pretty chill, especially knowing it runs locally. | 1 | 0 | 2026-02-28T00:24:29 | EmbarrassedAsk2887 | false | null | 0 | o7spnec | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7spnec/ | false | 1 |
t1_o7spng9 | Fuck he's, faster inference, I'll put more of my company on anthropic. | 0 | 1 | 2026-02-28T00:24:29 | RedParaglider | false | null | 0 | o7spng9 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7spng9/ | false | 0 |
t1_o7spn1u | Sadly tons of folks on this sub seem to be Peter theil fans and are happy | 12 | 0 | 2026-02-28T00:24:25 | Material_Policy6327 | false | null | 0 | o7spn1u | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7spn1u/ | false | 12 |
t1_o7sphpj | I dont think they will stop at this. I fully expect they will try to make an example of Anthropic by sham investigations or suits | 1 | 0 | 2026-02-28T00:23:33 | Material_Policy6327 | false | null | 0 | o7sphpj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sphpj/ | false | 1 |
t1_o7sph8z | I wish Mistral open-sourced mistral-small-creative, it's quite good for that. | 4 | 0 | 2026-02-28T00:23:28 | InternetExplorer9999 | false | null | 0 | o7sph8z | false | /r/LocalLLaMA/comments/1rghfqj/february_is_almost_over_are_you_satisfied/o7sph8z/ | false | 4 |
t1_o7speen | you ever play helldivers 2? i imagine it'd be a bit like that. sometimes you have to get egregiously team-killed for the greater good.
| 20 | 0 | 2026-02-28T00:23:01 | StewedAngelSkins | false | null | 0 | o7speen | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7speen/ | false | 20 |
t1_o7spcq1 | no worries, and thank you for the effort.
I fixed it by passing --bind [0.0.0.0](http://0.0.0.0) to the lms server start in my systemd conf | 1 | 0 | 2026-02-28T00:22:45 | Jujube-456 | false | null | 0 | o7spcq1 | false | /r/LocalLLaMA/comments/1oy7ane/i_just_discovered_something_about_lm_studio_i_had/o7spcq1/ | false | 1 |
t1_o7spcfe | What speed is your ram? 2400MHz is a huge bottleneck with roughly 38GB/s which makes testing larger models with offloading almost pointless | 1 | 0 | 2026-02-28T00:22:42 | do_u_think_im_spooky | false | null | 0 | o7spcfe | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7spcfe/ | false | 1 |
t1_o7sp3gn | Service for this project is restricted due to the following violations: exceed\_db\_size\_quota. Please reach out to Supabase support at [https://supabase.help](https://supabase.help) immediately.
! | 1 | 0 | 2026-02-28T00:21:14 | Amro-sa | false | null | 0 | o7sp3gn | false | /r/LocalLLaMA/comments/1hwlka6/i_made_the_worlds_first_ai_meeting_copilot_and/o7sp3gn/ | false | 1 |
t1_o7sp05u | absolutely for sure. i mentioned above initially that this is the speaker mode, which is push to interrupt.
the full duplex mode needs a headphone ( which i couldn’t since i won’t be able to record the demo for it)
i’ll try make video of the barebones duplex demo and post the youtube link here. it will have all the... | 1 | 0 | 2026-02-28T00:20:42 | EmbarrassedAsk2887 | false | null | 0 | o7sp05u | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7sp05u/ | false | 1 |
t1_o7soyru | Yeah I put those as a reference, however thank u for the response I will dive into those you mention | 1 | 0 | 2026-02-28T00:20:29 | Mrdeadbuddy | false | null | 0 | o7soyru | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7soyru/ | false | 1 |
t1_o7soxd1 | Just out of curiosity, what quant did you use for a 70B dense model? | 1 | 0 | 2026-02-28T00:20:15 | RobotRobotWhatDoUSee | false | null | 0 | o7soxd1 | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7soxd1/ | false | 1 |
t1_o7sooss | I cannot fathom how dangerous it would be to let the vision bits of multimodal LLMs to make military decisions. Even gigantic cloud ones can't even describe phone selfies accurately AND reliably (in my experience). | 44 | 0 | 2026-02-28T00:18:51 | citrusalex | false | null | 0 | o7sooss | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sooss/ | false | 44 |
t1_o7som89 | Nice! I've been looking for something like this. Dual 5060tis with 96gb ddr5 here. r5 9600x for CPU. good, but certainly not ai minded. Thanks! | 1 | 0 | 2026-02-28T00:18:26 | Xp_12 | false | null | 0 | o7som89 | false | /r/LocalLLaMA/comments/1rgmg99/llm_benchmark_site_for_dual_rtx_5060_ti/o7som89/ | false | 1 |
t1_o7som04 | I get this error with ollama:
C:\\Users\\M>ollama run [hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4\_K\_XL](http://hf.co/unsloth/Qwen3.5-35B-A3B-GGUF:UD-Q4_K_XL)
Error: 500 Internal Server Error: unable to load model: f:\\ollama\\blobs\\sha256-46b21f2508c86dfe2206de7fb2b311afd95bb3b1663f9fe499c3d991c18ad67e | 1 | 0 | 2026-02-28T00:18:24 | Green-Ad-3964 | false | null | 0 | o7som04 | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7som04/ | false | 1 |
t1_o7sokiy | Yeah just released v2 version of LavaSR, should be significantly faster and better quality.
https://github.com/ysharma3501/LavaSR
| 1 | 0 | 2026-02-28T00:18:09 | SplitNice1982 | false | null | 0 | o7sokiy | false | /r/LocalLLaMA/comments/1qc76dc/novasr_a_tiny_52kb_audio_upsampler_that_runs/o7sokiy/ | false | 1 |
t1_o7soj63 | I don’t know why you’re getting downvoted, this company has actively campaigned against Chinese open models under the guise of “national security”. And just a few days ago, played the victim for being under “distillation attacks” (whatever the hell that is) by Chinese AI labs, while training their model on data sourced... | 7 | 1 | 2026-02-28T00:17:57 | indicava | false | null | 0 | o7soj63 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7soj63/ | false | 7 |
t1_o7soj0a | could it be that they are compute constrained and need $$? | 1 | 0 | 2026-02-28T00:17:55 | pier4r | false | null | 0 | o7soj0a | false | /r/LocalLLaMA/comments/1rggpu9/glm5code/o7soj0a/ | false | 1 |
t1_o7soh4u | I've seen this gone in the latest Unsloth updates. | 2 | 0 | 2026-02-28T00:17:36 | jslominski | false | null | 0 | o7soh4u | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7soh4u/ | false | 2 |
t1_o7sogtr | Any US company powerful enough works with the NSA and CIA for mass surveillance and manipulating opinions. Not to mention torture and murder.
That's why you use models from other countries or local ones if you're rich enough. | 12 | 0 | 2026-02-28T00:17:33 | Repulsive-Mall-2665 | false | null | 0 | o7sogtr | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sogtr/ | false | 12 |
t1_o7sogtz | It looks pretty good more like Jarvis which can search , interact and present information visually | 1 | 0 | 2026-02-28T00:17:33 | SquashFront1303 | false | null | 0 | o7sogtz | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o7sogtz/ | false | 1 |
t1_o7sof31 | I expect multidirectional approaches to potentially do better, since even uncomplicated refusal has been found to be characterized by multiple cones rather than a single direction.
https://arxiv.org/abs/2502.17420v1 | 1 | 0 | 2026-02-28T00:17:16 | grimjim | false | null | 0 | o7sof31 | false | /r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/o7sof31/ | false | 1 |
t1_o7soeqi | At the EOD on a Friday no less!! | 19 | 0 | 2026-02-28T00:17:13 | gamblingapocalypse | false | null | 0 | o7soeqi | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7soeqi/ | false | 19 |
t1_o7soegs | I don’t like how we are using LLMs in defense departments and I really hope they aren’t actually deployed for anything other than chat or coding assistants… | 10 | 0 | 2026-02-28T00:17:10 | Far-Low-4705 | false | null | 0 | o7soegs | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7soegs/ | false | 10 |
t1_o7sobgo | Loving the drama. Good entertainment. | -2 | 1 | 2026-02-28T00:16:41 | Select_Elephant_8808 | false | null | 0 | o7sobgo | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7sobgo/ | false | -2 |
t1_o7so7jy | Have you tried those? | 4 | 0 | 2026-02-28T00:16:03 | jslominski | false | null | 0 | o7so7jy | false | /r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7so7jy/ | false | 4 |
t1_o7so50u | I think all AI startups will eventually go under and get bought out by big tech for pennies (compared to their current valuations), regardless of stuff like this. They hemorrhage VC money (including Anthropic, despite them trying to brand themselves as the disciplined ones) and their profitability strategies would requ... | 4 | 0 | 2026-02-28T00:15:38 | citrusalex | false | null | 0 | o7so50u | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7so50u/ | false | 4 |
t1_o7so3r1 | Time for Anthropic to aura farm, in the word of [ClementDelangue](https://x.com/ClementDelangue)
>The Department of War just learned the golden rule of AI: **Not your weights, not your brain** | 58 | 0 | 2026-02-28T00:15:25 | Imakerocketengine | false | null | 0 | o7so3r1 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7so3r1/ | false | 58 |
t1_o7so37f | Oh nice, I've actually already tackled both of these in my algorithm! The conflict resolution stuff and the loose inference thing are both in there already.
And yeah I built an AI "sleep" cycle too haha - basically does memory consolidation in the background, merging and pruning stuff periodically. Funny to hear som... | 1 | 0 | 2026-02-28T00:15:20 | Illustrious-Song-896 | false | null | 0 | o7so37f | false | /r/LocalLLaMA/comments/1rg489b/what_are_some_edge_cases_that_break_ai_memory/o7so37f/ | false | 1 |
t1_o7so1ct | | 23 | 0 | 2026-02-28T00:15:02 | thecalmgreen | false | null | 0 | o7so1ct | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7so1ct/ | false | 23 |
t1_o7snyoj | Good, I hope the rate-limited issue of their model in github copilot can be solved by this because too many people are using their model. | 1 | 0 | 2026-02-28T00:14:35 | NickCanCode | false | null | 0 | o7snyoj | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7snyoj/ | false | 1 |
t1_o7snw66 | He lost me on this. I don't want mass spying and both parties are into it for different reasons. Whatever I think about anthropic, they are right on this. If I wanted to live in china, I would have moved there, at least there's more smoking and the hardware is cheaper.
Have fun with half-baked GPT and lol grok, g-men. | 12 | 1 | 2026-02-28T00:14:10 | a_beautiful_rhind | false | null | 0 | o7snw66 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7snw66/ | false | 12 |
t1_o7snvvu | idk i'll just use AesSedai | -1 | 0 | 2026-02-28T00:14:08 | alex_godspeed | false | null | 0 | o7snvvu | false | /r/LocalLLaMA/comments/1rgel19/new_qwen3535ba3b_unsloth_dynamic_ggufs_benchmarks/o7snvvu/ | false | -1 |
t1_o7snv2o | I can't believe it was Maduro in pajamas who stopped the rise of Skynet | 40 | 0 | 2026-02-28T00:13:59 | ortegaalfredo | false | null | 0 | o7snv2o | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7snv2o/ | false | 40 |
t1_o7snu0z | | 1 | 0 | 2026-02-28T00:13:50 | timeshifter24 | false | null | 0 | o7snu0z | false | /r/LocalLLaMA/comments/1rabo34/local_tts_server_with_voice_cloning_nearrealtime/o7snu0z/ | false | 1 |
t1_o7sntaf | Yup, they are cheaper than ram per GB and way faster (for inference at least) so still worth it, until they shoot up double in price like a year ago. | 1 | 0 | 2026-02-28T00:13:43 | Ok_Top9254 | false | null | 0 | o7sntaf | false | /r/LocalLLaMA/comments/1qayhop/nvidia_p40_good_for_running_20b_local_ai_models/o7sntaf/ | false | 1 |
t1_o7snt3x | > LocalLLaMa were the pioneers!
As in - they died of dysentery on the Oregon Trail. | 3 | 0 | 2026-02-28T00:13:41 | florinandrei | false | null | 0 | o7snt3x | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7snt3x/ | false | 3 |
t1_o7snpjh | If you can run 7B models you can also run 4B models and 2B models and 1B models.
Step 1 is to install something better than ollama, you can use koboldcpp and then llama.cpp later.
Step 2 is to download Qwen3 4B or even tiny models like [https://huggingface.co/LiquidAI/LFM2.5-1.2B-Thinking-GGUF](https://huggingf... | 7 | 0 | 2026-02-28T00:13:05 | jacek2023 | false | null | 0 | o7snpjh | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7snpjh/ | false | 7 |
t1_o7snn69 | I'm a newbie, this might be a silly question. May I ask what program's configuration file this is? | 2 | 0 | 2026-02-28T00:12:42 | Dazzling_Equipment_9 | false | null | 0 | o7snn69 | false | /r/LocalLLaMA/comments/1r8rgcp/minimax_25_on_strix_halo_thread/o7snn69/ | false | 2 |
t1_o7sniin | Mine worked on ubuntu 22.04 and 24.04 using either latest rocm 6.2 and 6.3 | 1 | 0 | 2026-02-28T00:11:56 | jmuff98 | false | null | 0 | o7sniin | false | /r/LocalLLaMA/comments/1q5d12j/radeon_pro_v340_drivers/o7sniin/ | false | 1 |
t1_o7snf3d | The $50B from Amazon is actually only $15B. The remaining $35B is conditional on OpenAI either developing AGI or going public. In other words, do something impossible, or reveal their actual balance sheet, which would undoubtedly reveal what a dire state their financials are in. The funding from the other sources here ... | 1 | 0 | 2026-02-28T00:11:23 | NNN_Throwaway2 | false | null | 0 | o7snf3d | false | /r/LocalLLaMA/comments/1rgi6ky/openai_raises_110_billion_in_the_largest_private/o7snf3d/ | false | 1 |
t1_o7snclj | can you tell me about some of these models? | 1 | 0 | 2026-02-28T00:10:58 | Foxtor | false | null | 0 | o7snclj | false | /r/LocalLLaMA/comments/1rgn5m0/is_anything_worth_to_do_with_a_7b_model/o7snclj/ | false | 1 |
t1_o7sn4sy | i like to run psuedo RL on prompts.
Write a small benchmark with a test set, write a quick sample prompt, use the LLM itself as the optimizer, run it through the non-test set examples, let the LLM review the successes/failures, modify the prompt, then try again. and stop the loop when the test set stops improving, and... | 3 | 0 | 2026-02-28T00:09:44 | Far-Low-4705 | false | null | 0 | o7sn4sy | false | /r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7sn4sy/ | false | 3 |
t1_o7smyqo | [removed] | 0 | 0 | 2026-02-28T00:08:45 | [deleted] | true | null | 0 | o7smyqo | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7smyqo/ | false | 0 |
t1_o7smxe2 | i wonder if kegsbreath actually had a use in mind for "autonomous lethal weapons" or if it's just the principle of the matter. anthropic's line isn't even "LLMs shouldn't ever do this", rather it's "LLM's aren't reliable enough to do this" which is obviously true regardless of what motivations you ascribe to it.
inb... | 73 | 0 | 2026-02-28T00:08:32 | StewedAngelSkins | false | null | 0 | o7smxe2 | false | /r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7smxe2/ | false | 73 |