name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_o8gy6q6
Let's just say that the Chinese government is quite hands-on with what they regard as relevant, for better or for worse.
1
0
2026-03-03T20:11:45
RoomyRoots
false
null
0
o8gy6q6
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gy6q6/
false
1
t1_o8gy5c8
RIP their inboxes
1
0
2026-03-03T20:11:34
wattbuild
false
null
0
o8gy5c8
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gy5c8/
false
1
t1_o8gy4x4
It is unfortunate. I'm not even sure when the last 'close to base' model was released.
1
0
2026-03-03T20:11:31
DeltaSqueezer
false
null
0
o8gy4x4
false
/r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8gy4x4/
false
1
t1_o8gxzjn
Why do your computer and phone have spell check?
1
0
2026-03-03T20:10:47
ministryofchampagne
false
null
0
o8gxzjn
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxzjn/
false
1
t1_o8gxyht
I'm just curious about how so many X and Reddit accounts were suddenly available for astroturfing. Did they spend weeks or even months purchasing hundreds, perhaps thousands, of aged, organic-looking accounts and then suddenly unleash them? If this really happened, then tracing the recent post history of all these alleged astroturfing spambots would have been an interesting addition to this thread.
1
0
2026-03-03T20:10:38
PuddleWhale
false
null
0
o8gxyht
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxyht/
false
1
t1_o8gxxwa
Yeah, but one more senior employee still left after Junyang; he was an MTS or something.
1
0
2026-03-03T20:10:33
ILoveMy2Balls
false
null
0
o8gxxwa
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gxxwa/
false
1
t1_o8gxw9y
Is that how you classify OP's post?
1
0
2026-03-03T20:10:21
ministryofchampagne
false
null
0
o8gxw9y
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxw9y/
false
1
t1_o8gxt90
You're only OK with people using AI the way you want them to?
1
0
2026-03-03T20:09:57
ministryofchampagne
false
null
0
o8gxt90
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxt90/
false
1
t1_o8gxqip
Pretty sure the Chinese govt already basically owns them 
1
0
2026-03-03T20:09:35
nomorebuttsplz
false
null
0
o8gxqip
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gxqip/
false
1
t1_o8gxko3
So... Like Lex?
1
0
2026-03-03T20:08:48
330d
false
null
0
o8gxko3
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxko3/
false
1
t1_o8gxkp7
Matthew Berman was definitely funded by the same organization behind it, as he kept heavily promoting OpenClaw. I stopped following him when he promoted the Neo robot, which is a scam. He just says whatever whoever pays him tells him to say. If you don't want to be manipulated further, I suggest you stop following him.
1
0
2026-03-03T20:08:48
No_Swimming6548
false
null
0
o8gxkp7
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gxkp7/
false
1
t1_o8gxg18
100%. That was the issue I was battling with this project. The solution *appears* to be a rigid rule-set for each of those steps. Standardize books to receive data -> receive data in bulk -> categorize data -> capture unknowns (suspense items) -> recategorize -> confirm data balances -> produce outputs. It can't do that all at once freehand. But, if you give it a set of rules/standards first to reference before each task, it appears to be very capable of 100% accuracy/95% completion/5% left for user to touch up. It's interesting anyway. I still don't know what to make of it.
1
0
2026-03-03T20:08:11
Extension-Bison-1116
false
null
0
o8gxg18
false
/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8gxg18/
false
1
t1_o8gxar2
Indeed
1
0
2026-03-03T20:07:27
vertigo235
false
null
0
o8gxar2
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gxar2/
false
1
t1_o8gx8os
Yes this is exactly what I meant, sorry for the confusion.
1
0
2026-03-03T20:07:11
Embarrassed_Soup_279
false
null
0
o8gx8os
false
/r/LocalLLaMA/comments/1riy5x6/qwen_35_nonthinking_mode_benchmarks/o8gx8os/
false
1
t1_o8gx8cu
The researcher's income is itself an FU money safeguard.
1
0
2026-03-03T20:07:08
TomLucidor
false
null
0
o8gx8cu
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gx8cu/
false
1
t1_o8gx7dz
[removed]
1
0
2026-03-03T20:07:00
[deleted]
true
null
0
o8gx7dz
false
/r/LocalLLaMA/comments/1ps6w96/dataset_quality_is_not_improving_much/o8gx7dz/
false
1
t1_o8gx6m7
I've now tested the idea with Qwen3.5-4B-Q4_K_M as a GGUF. It's working nicely, including detecting and OCR'ing complex sound FX, once you expand the context size for the model. The test image was a complex 1970s Neal Adams published page layout from DC's *Green Lantern*, as a small and somewhat poor 900px x 1346px scan. Prompt used for the test: "First determine the ideal reading sequence for this comic-book page, starting in the top left corner. Then detect and OCR all the lettering in the page, with reference to the ideal reading sequence you have detected. Then translate the OCR text into French. Output the French text." The quick and perfect success of this makes me think that it could handle even indie comics with unconventional lettering. Runs for me on the free Jan for Windows https://www.jan.ai/ local LLM runner, after loading Jan with the latest llama-b8192-bin-win-cuda-12.4-x64.zip underlying framework and then restarting Jan as Administrator, so that it can see the graphics card. Then I loaded the Qwen3.5-4B-Q4_K_M GGUF into Jan, together with its https://huggingface.co/unsloth/Qwen3.5-4B-GGUF/blob/main/mmproj-F16.gguf for the vision element. The vision mmproj file can't be added later, it seems; they have to be imported together if you want vision capabilities for your Qwen3.5. Jan is surprisingly quick, and I'm happy that Qwen3.5 has spurred me to find a replacement for Msty (which is too old for 3.5). A 24B model I have, which was reading-pace slow under Msty, runs with too-fast-to-read output under Jan. Qwen3.5 4B was also delightfully quick. I guess it's the newer frameworks it uses.
1
0
2026-03-03T20:06:54
optimisticalish
false
null
0
o8gx6m7
false
/r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o8gx6m7/
false
1
t1_o8gx4v5
No, it's not a solid tool. And no, there's no reason to still talk about it.
1
0
2026-03-03T20:06:39
Canchito
false
null
0
o8gx4v5
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gx4v5/
false
1
t1_o8gx16k
Money first, talent under the bus. When the economy goes in the toilet everyone will be in for a tough ride, ESPECIALLY researchers with a lot of heart.
1
0
2026-03-03T20:06:10
TomLucidor
false
null
0
o8gx16k
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gx16k/
false
1
t1_o8gx0vu
Maybe try temp=1 https://preview.redd.it/vbmhxghtzvmg1.png?width=1080&format=png&auto=webp&s=2308b93ab91ff444b1ee542cc101bad878a7ab9f
1
0
2026-03-03T20:06:08
vpyno
false
null
0
o8gx0vu
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8gx0vu/
false
1
t1_o8gx07y
[removed]
1
0
2026-03-03T20:06:02
[deleted]
true
null
0
o8gx07y
false
/r/LocalLLaMA/comments/1rjzsz6/qwen35_27b_feedback/o8gx07y/
false
1
t1_o8gx01x
Qwen 3.5 is great, but they are too late to the party. GLM, Kimi, and Minimax got the coding crowd first, then exploded in popularity with OpenClaw. The decision should have been made over a month ago, not made contingent on this new model release.
1
0
2026-03-03T20:06:01
popiazaza
false
null
0
o8gx01x
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gx01x/
false
1
t1_o8gwv6k
Interesting project. Looks like it was a lot of work. And seems well thought out. Not really for me though (using Linux).
1
0
2026-03-03T20:05:22
DanielWe
false
null
0
o8gwv6k
false
/r/LocalLLaMA/comments/1rjrh9f/i_built_a_localfirst_ai_copilot_no_telemetry/o8gwv6k/
false
1
t1_o8gwv7r
Alexa, set the days-since-last-schizo-post counter to 0.
1
0
2026-03-03T20:05:22
artisticMink
false
null
0
o8gwv7r
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwv7r/
false
1
t1_o8gwu63
7B models run fine on a laptop GPU with 8GB VRAM. For 13B you'll want at least 10-12GB, so a 4060 Ti or 4070 works. The catch with laptops is heat sustained inference will throttle most gaming laptops after a few minutes. Cloud makes sense if you're prototyping but local is way better once you're iterating fast. For RAG specifically, you'll want the GPU for embeddings too, not just the LLM.
1
0
2026-03-03T20:05:13
RoughOccasion9636
false
null
0
o8gwu63
false
/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/o8gwu63/
false
1
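The two sizing comments above (8GB VRAM for a 7B model, 10-12GB for 13B, and the rough "B of parameters per GB of VRAM" rule) can be sketched as a back-of-the-envelope estimator. This is a rough heuristic, not a precise formula; the 4.5 bits-per-weight figure (typical of Q4_K_M-style quants) and the fixed overhead constant are assumptions of this sketch:

```python
def estimate_vram_gb(params_b: float, bits_per_weight: float = 4.5,
                     overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for running a quantized LLM locally.

    params_b: parameter count in billions (e.g. 7 for a 7B model).
    bits_per_weight: ~4.5 for Q4_K_M-style quants, 16 for fp16.
    overhead_gb: crude allowance for KV cache, activations, and runtime.
    """
    weights_gb = params_b * bits_per_weight / 8  # weight storage alone
    return round(weights_gb + overhead_gb, 1)

print(estimate_vram_gb(7))    # a 7B Q4 quant fits comfortably in 8GB
print(estimate_vram_gb(13))   # a 13B Q4 quant wants ~10GB+
```

The estimate matches the advice above: a 4-bit 7B model lands well under 8GB, while a 13B model pushes toward the 10-12GB range, and context length grows the KV-cache share beyond the fixed overhead used here.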
t1_o8gwtoe
https://old.reddit.com/r/LocalLLaMA/comments/1rjkarj/local_model_suggestions_for_medium_end_pc_for/o8f2zir/ TL;DR: you can run a model with roughly the same number of billions of parameters ("B") as you have gigabytes of memory ("GB") on your video card.
1
0
2026-03-03T20:05:09
MelodicRecognition7
false
null
0
o8gwtoe
false
/r/LocalLLaMA/comments/1rjznnk/system_requirements_for_local_llms/o8gwtoe/
false
1
t1_o8gwteq
Just need to set Claud code base url and password to your LLM studio and it just works
1
0
2026-03-03T20:05:07
ThinkExtension2328
false
null
0
o8gwteq
false
/r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8gwteq/
false
1
t1_o8gwtgs
Nobody wants to be "the next" anything, no matter who it is. Especially a researcher who doesn't put profit first.
1
0
2026-03-03T20:05:07
TomLucidor
false
null
0
o8gwtgs
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gwtgs/
false
1
t1_o8gwsmv
Hi, I apologize for asking: I have a 12GB RAM Xiaomi 13 Ultra; is there software to run the 9B variant on Android?
1
0
2026-03-03T20:05:01
TopChard1274
false
null
0
o8gwsmv
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8gwsmv/
false
1
t1_o8gwsmx
it'll be fine. Holla if you need help
1
0
2026-03-03T20:05:01
alichherawalla
false
null
0
o8gwsmx
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8gwsmx/
false
1
t1_o8gwr5o
Grifters gonna grift, what's the surprise
1
0
2026-03-03T20:04:49
ml-7
false
null
0
o8gwr5o
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwr5o/
false
1
t1_o8gwpwn
Why would you need help to write a simple post?
1
0
2026-03-03T20:04:39
AncientLion
false
null
0
o8gwpwn
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwpwn/
false
1
t1_o8gwpqx
I use local models for analysis and coding and I don't have the issue of deterioration. I am using the Cline CLI and it does a good job of compacting and keeping the relevant bits in. In fact, it has a mini Ralph mode in it to make sure the local model achieves the goal.
1
0
2026-03-03T20:04:37
Tema_Art_7777
false
null
0
o8gwpqx
false
/r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8gwpqx/
false
1
t1_o8gwp35
They do support it, but for Unsloth quants it's disabled by default in the chat template. You have to enable it explicitly. You can do so by adding --chat-template-kwargs '{"enable_thinking":true}' to your llama-server command.
1
0
2026-03-03T20:04:32
dark-light92
false
null
0
o8gwp35
false
/r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8gwp35/
false
1
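The `--chat-template-kwargs '{"enable_thinking":true}'` flag mentioned above has to carry valid JSON, which is easy to get wrong with shell quoting. A small sketch of assembling the command safely (the model filename and port are placeholders, not real files; only the flag itself comes from the comment above):

```python
import json
import shlex

# Build the chat-template kwargs as a dict and serialize to JSON,
# so the quoting is guaranteed valid.
template_kwargs = {"enable_thinking": True}

cmd = [
    "llama-server",
    "-m", "Qwen3.5-9B-Q4_K_M.gguf",  # hypothetical local GGUF path
    "--port", "8080",                # hypothetical port
    "--chat-template-kwargs", json.dumps(template_kwargs),
]

# shlex.join produces a correctly quoted one-liner you can paste into a shell.
print(shlex.join(cmd))
```

Letting `json.dumps` produce the argument avoids hand-escaping the braces and quotes inside the single-quoted shell string.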
t1_o8gwobb
There's not really anything fundamentally wrong with it, but it often reads weird and is often just a bunch of gibberish from some deranged loony.
1
0
2026-03-03T20:04:26
RASTAGAMER420
false
null
0
o8gwobb
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwobb/
false
1
t1_o8gwmnh
No one likes ai on one of the subs about LLMs? Do you think OPs post is low effort?
1
0
2026-03-03T20:04:13
ministryofchampagne
false
null
0
o8gwmnh
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwmnh/
false
1
t1_o8gwl5h
I'd never heard of $CLAWD until now. Moltbook was hilarious. What if a social media site were created that didn't care whether you're a bot or not? As for OpenClaw promoting itself: if that's true, it's in and of itself a feature of the product. Marketing is hard, bro.
1
0
2026-03-03T20:04:00
sleepingsysadmin
false
null
0
o8gwl5h
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwl5h/
false
1
t1_o8gwl86
Pull an OpenAI would be a good reason.
1
0
2026-03-03T20:04:00
RoomyRoots
false
null
0
o8gwl86
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gwl86/
false
1
t1_o8gwj87
The post was probably made by a claw agent.
2
0
2026-03-03T20:03:44
Chris266
false
null
0
o8gwj87
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwj87/
false
2
t1_o8gwi89
OP is a 7 day old account with 1 post. Of course it’s fake
2
0
2026-03-03T20:03:36
pm_me_github_repos
false
null
0
o8gwi89
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwi89/
false
2
t1_o8gwicp
Not now, but soon, within the next few quarters; it is what will happen when everyone's wallets are empty.
1
0
2026-03-03T20:03:36
TomLucidor
false
null
0
o8gwicp
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gwicp/
false
1
t1_o8gwi6a
As things are today. I feel like this is very early stages, and trends and capabilities could drastically change. I thought the new MacBook Pros are shipping with integrated AI modules on the motherboard; is my memory correct there?
1
0
2026-03-03T20:03:35
owp4dd1w5a0a
false
null
0
o8gwi6a
false
/r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gwi6a/
false
1
t1_o8gwheq
I think i understood that, tyvm, another direction to look into.
1
0
2026-03-03T20:03:29
RTS53Mini
false
null
0
o8gwheq
false
/r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8gwheq/
false
1
t1_o8gwh6f
How do you get 13 upvotes for not reading the text in the attached image? Oh yeah by other redditors not reading the text in the attached image ;P
1
0
2026-03-03T20:03:27
crantob
false
null
0
o8gwh6f
false
/r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8gwh6f/
false
1
t1_o8gwgh9
I think you are fake, especially since you write "I have proof" and then proceed without presenting said proof.
1
0
2026-03-03T20:03:22
Mystical_Whoosing
false
null
0
o8gwgh9
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwgh9/
false
1
t1_o8gwg9u
I have no problems with using AI for translation, formatting, editing, etc. otherwise I wouldn't be here. I have a problem with posting unsourced claims directly from a chatbot.
1
0
2026-03-03T20:03:20
Ulterior-Motive_
false
null
0
o8gwg9u
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwg9u/
false
1
t1_o8gwb4d
Failing the business-loyalty test, another term for being research-first.
1
0
2026-03-03T20:02:40
TomLucidor
false
null
0
o8gwb4d
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gwb4d/
false
1
t1_o8gwap9
You can pretty much tell something is a money-backed scam as soon as Matthew Berman starts hyping it. I used to really enjoy his videos, but over time it became more and more obvious he was (probably) just taking kickbacks and had stopped giving a fuck about any kind of rigour.
1
0
2026-03-03T20:02:36
LoaderD
false
null
0
o8gwap9
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gwap9/
false
1
t1_o8gw9jo
Just listen to the way he talks on the Lex podcast. He has huckster energy.
1
0
2026-03-03T20:02:27
DreamLearnBuildBurn
false
null
0
o8gw9jo
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gw9jo/
false
1
t1_o8gw5m1
Did a bunch of benchmarks a while ago; accounting is pretty much a context-length issue. It's a multi-step process where you can't make any mistakes, and LLMs are pretty bad at going beyond 5 steps.
1
0
2026-03-03T20:01:57
rashaniquah
false
null
0
o8gw5m1
false
/r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8gw5m1/
false
1
t1_o8gw560
Survival strategy. Something is off.
1
0
2026-03-03T20:01:53
TomLucidor
false
null
0
o8gw560
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gw560/
false
1
t1_o8gw1wv
Bro might go rogue (hopefully).
1
0
2026-03-03T20:01:26
TomLucidor
false
null
0
o8gw1wv
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gw1wv/
false
1
t1_o8gw1ah
The contrary. You don't make money with open weights.
1
0
2026-03-03T20:01:21
artisticMink
false
null
0
o8gw1ah
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gw1ah/
false
1
t1_o8gvxsk
It's the end for everyone else too, given what the financial guys are doing over the next 6 months. Good men go when the suits take everything.
1
0
2026-03-03T20:00:52
TomLucidor
false
null
0
o8gvxsk
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gvxsk/
false
1
t1_o8gvthi
Because no one likes to read AI writing, genius. And it's always a sign of low effort and low quality.
1
0
2026-03-03T20:00:18
SnooLentils6014
false
null
0
o8gvthi
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gvthi/
false
1
t1_o8gvqm6
Recession subsidies and survival mode. Get ready for an everything-winter.
1
0
2026-03-03T19:59:55
TomLucidor
false
null
0
o8gvqm6
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gvqm6/
false
1
t1_o8gvnmo
I am actually happy with this, myself because I can't really think of a scenario where I need the uncensored model specifically in other languages. 0% refusal rate but only in English is still huge. Using LLMs for ethical hacking was pretty annoying from the beginning of ChatGPT because of the abuse potential of the information, so this helps a lot as is, thanks a lot!
1
0
2026-03-03T19:59:31
jax_cooper
false
null
0
o8gvnmo
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gvnmo/
false
1
t1_o8gvipj
Apparently with him https://x.com/i/status/2028885623663411706
1
0
2026-03-03T19:58:52
devnull0
false
null
0
o8gvipj
false
/r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gvipj/
false
1
t1_o8gvgkp
His crew need the Karpathy nuclear option.
1
0
2026-03-03T19:58:35
TomLucidor
false
null
0
o8gvgkp
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gvgkp/
false
1
t1_o8gvgbh
A very good idea would be to also add Step v3.5 Flash and MiMo v2 Flash. Both are incredible models. Congrats on the great work!
1
0
2026-03-03T19:58:33
pol_phil
false
null
0
o8gvgbh
false
/r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8gvgbh/
false
1
t1_o8gvb49
I'm a newbie and don't quite fully understand the discussions here, but I've downloaded the app and am checking it out. On an S24 Ultra. Hope this will work well. Thanks in advance.
1
0
2026-03-03T19:57:52
jonjonijanagan
false
null
0
o8gvb49
false
/r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8gvb49/
false
1
t1_o8gvaux
I ban everything about claw. Sorry. You banned
1
0
2026-03-03T19:57:50
Any-Blacksmith-2054
false
null
0
o8gvaux
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gvaux/
false
1
t1_o8gva78
A new depression within the next 6 months likely, it's a survival instinct now (and also government money is too sweet).
1
0
2026-03-03T19:57:45
TomLucidor
false
null
0
o8gva78
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gva78/
false
1
t1_o8gva9p
I read this and think "How do I know this is true and accurate? How do I know what *parts* are true and accurate?"
1
0
2026-03-03T19:57:45
detroitmatt
false
null
0
o8gva9p
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gva9p/
false
1
t1_o8gv7lr
It is a sub to discuss locally hosted LLM. Why would you think that means people can’t have ai help write posts?
1
0
2026-03-03T19:57:24
ministryofchampagne
false
null
0
o8gv7lr
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gv7lr/
false
1
t1_o8guyy0
Isn't it crazy how you get downvoted for using AI in a sub dedicated to AI?
1
0
2026-03-03T19:56:16
__generic
false
null
0
o8guyy0
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8guyy0/
false
1
t1_o8guyaw
I think it’s a genuinely cool idea and not a waste of time. There’s a real gap between “I can run llama.cpp/ollama” and “I just want something that works locally without setup.” A browser-based, zero‑install UX lowers the barrier a lot for non‑technical users, demos, classrooms, or privacy‑conscious folks. Worst case it’s a great learning project; best case it unlocks local LLMs for a much wider audience.
1
0
2026-03-03T19:56:11
Top_District_3654
false
null
0
o8guyaw
false
/r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/o8guyaw/
false
1
t1_o8guxs6
They are allowed to give you free MacBooks too. I was just telling you the reason.
1
0
2026-03-03T19:56:06
Desperate-Purpose178
false
null
0
o8guxs6
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8guxs6/
false
1
t1_o8gux1x
ooohhhh, we are shocked.... tell us something new.
1
0
2026-03-03T19:56:00
Eveenew
false
null
0
o8gux1x
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gux1x/
false
1
t1_o8guws9
Will there be an NPU optimized version?
1
0
2026-03-03T19:55:58
Alive-Imagination521
false
null
0
o8guws9
false
/r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o8guws9/
false
1
t1_o8guwl9
They get paid via blitz adoption, but the issue is that IF the secret sauce for distillation/multi-training with less compute gets out, people would treat them as the "respectable player". And China can't have that.
1
0
2026-03-03T19:55:57
TomLucidor
false
null
0
o8guwl9
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8guwl9/
false
1
t1_o8gusxx
It's a sub for humans to write and talk about AI, not for AI.
1
0
2026-03-03T19:55:28
quilso
false
null
0
o8gusxx
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gusxx/
false
1
t1_o8gusv6
> first ever post in /r/LocalLLaMA > generated by AI > advertising some crap no thx
1
0
2026-03-03T19:55:27
MelodicRecognition7
false
null
0
o8gusv6
false
/r/LocalLLaMA/comments/1rjxuwo/stop_torturing_your_quantized_8b_models_why_we/o8gusv6/
false
1
t1_o8gurot
He'd never have been hired by OpenAI if it hadn't gone viral.
1
0
2026-03-03T19:55:18
Karnemelk
false
null
0
o8gurot
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gurot/
false
1
t1_o8gur2n
Option 1: Military use. Option 2: Secret service use. Option 3: Switch to closed models.
1
0
2026-03-03T19:55:13
pulse77
false
null
0
o8gur2n
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gur2n/
false
1
t1_o8guqk2
There are many ways but essentially you might try to trigger many different refusal conditions on purpose, noting where the activations for refusal are located, and then remove them from the model. You could also try post-training to undo refusals by training them positively on the negative data set. You could also attempt to do this selectively, for instance allowing discussion of drugs but maintaining refusal for sex or murder.
1
0
2026-03-03T19:55:09
Fit-Produce420
false
null
0
o8guqk2
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8guqk2/
false
1
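The first approach described above (locate the activation direction associated with refusals, then remove it from the model) can be illustrated with a toy linear-algebra sketch. Everything here is synthetic: the "activations" are random data with a planted refusal direction, and the projection is applied to one random weight matrix rather than a real model:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # toy hidden dimension

# Synthetic activations: refused prompts carry extra mass along one axis.
basis0 = np.eye(d)[0]
acts_refuse = rng.normal(size=(100, d)) + 2.0 * basis0
acts_accept = rng.normal(size=(100, d))

# Estimate the refusal direction as the difference of the means.
refusal_dir = acts_refuse.mean(axis=0) - acts_accept.mean(axis=0)
refusal_dir /= np.linalg.norm(refusal_dir)  # unit vector

# "Abliterate": project the refusal direction out of a weight matrix,
# so the layer can no longer write along that direction.
W = rng.normal(size=(d, d))
W_abl = W - np.outer(W @ refusal_dir, refusal_dir)

# The ablated matrix now maps the refusal direction to (numerically) zero.
assert np.allclose(W_abl @ refusal_dir, 0.0, atol=1e-8)
```

Real abliteration works on the residual-stream activations of an actual transformer, typically per layer, and the selective variant mentioned above would use different directions estimated from different refusal categories; this sketch only shows the core projection step.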
t1_o8guqb4
[removed]
1
0
2026-03-03T19:55:07
[deleted]
true
null
0
o8guqb4
false
/r/LocalLLaMA/comments/14m4zsq/need_a_detailed_tutorial_on_how_to_create_and_use/o8guqb4/
false
1
t1_o8gupt5
What makes it a scam?
1
0
2026-03-03T19:55:03
klop2031
false
null
0
o8gupt5
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gupt5/
false
1
t1_o8gupi6
Heretic or abliteration
1
0
2026-03-03T19:55:01
ArtfulGenie69
false
null
0
o8gupi6
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8gupi6/
false
1
t1_o8guowp
This is the best part of the post: "I have proof!" Then proceeds to not show the receipts, research, or anything beyond conspiracy. And the scam? What is it? I've been following and using the tool since those early days you mention and haven't even come across the coin.
2
0
2026-03-03T19:54:56
dan-lash
false
null
0
o8guowp
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8guowp/
false
2
t1_o8guoyw
Tangent, but my bet is that we are unlikely to see a 3.5 Coder model unless someone outside Qwen does it. Happy to be wrong, but with the core team leaving, even if they had something in flight they may not have the will or ability to do it justice any more.
1
0
2026-03-03T19:54:56
QuestionMarker
false
null
0
o8guoyw
false
/r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8guoyw/
false
1
t1_o8gul56
Them nuking soft power like this? Deeply ironic.
1
0
2026-03-03T19:54:26
TomLucidor
false
null
0
o8gul56
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gul56/
false
1
t1_o8guhrx
Subsidies and revenue. Definitely an economic downturn issue and they have no goodwill.
1
0
2026-03-03T19:53:59
TomLucidor
false
null
0
o8guhrx
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8guhrx/
false
1
t1_o8guc3m
With an RTSP stream? How does that work exactly?
1
0
2026-03-03T19:53:16
NursingHome773
false
null
0
o8guc3m
false
/r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8guc3m/
false
1
t1_o8guaxe
Thanks, it works. Can you share which method you used? I tested it with some queries related to Xi Jinping and the CCP; it doesn't work well and starts generating gibberish output.
1
0
2026-03-03T19:53:06
Traditional_Tap1708
false
null
0
o8guaxe
false
/r/LocalLLaMA/comments/1ri9enf/qwen35397b_uncensored_nvfp4/o8guaxe/
false
1
t1_o8gua1u
Do not ever write posts like that with AI, unless you want to scam people and your generic, bot-like text chasing away educated people is a feature, not a bug. You can: ask for a proofread, but make sure you don't, or only very rarely, accept full sentences from the AI; just fix the core grammar and leave everything else as it is. You can also ask the AI to criticize your writing and apply its suggestions manually (so if the AI tells you to use paragraphs, do that, but don't accept its suggestions word for word or let it rephrase what you have written).
1
0
2026-03-03T19:53:00
mimrock
false
null
0
o8gua1u
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gua1u/
false
1
t1_o8gu7df
They use it on just about all the abliterated and Heretic models. Luckily the model is posted and small, so anyone could figure it out.
1
0
2026-03-03T19:52:38
ArtfulGenie69
false
null
0
o8gu7df
false
/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8gu7df/
false
1
t1_o8gu6e9
This is really useful information; it sounds like the LoRA stage may be the culprit here! Fine-tuning on English-only examples could have degraded its multilingual ability. The phase-3 abliteration-only version (72% uncensored) may work better for multilingual users, which I would be happy to release if there is enough interest.
1
0
2026-03-03T19:52:30
Flat_cola
false
null
0
o8gu6e9
false
/r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gu6e9/
false
1
t1_o8gu69i
We need Gemma Hybrid/Linear/"Next" then! But even Google is cooked the same way
1
0
2026-03-03T19:52:29
TomLucidor
false
null
0
o8gu69i
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gu69i/
false
1
t1_o8gu298
this is my custom kernel grub line: `GRUB_CMDLINE_LINUX="iommu=pt amdgpu.gttsize=131072 ttm.pages_limit=33554432 amdgpu.runpm=0 amdgpu.gpu_recovery=1"` besides for the benchmark the CLI command is: `llama-bench -m $MODEL -ngl 99 -d 0,4096,16384,32768,65536,131072 -p 2048 -n 32 -fa 1 --mmap 0 -ub 1024`
1
0
2026-03-03T19:51:58
Educational_Sun_8813
false
null
0
o8gu298
false
/r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o8gu298/
false
1
t1_o8gty4a
Check the threads below; it seems like they are chasing government subsidies and revenue because the whole economy is sinking. Throwing FOSS under the bus is their dumbest soft-power move.
1
0
2026-03-03T19:51:26
TomLucidor
false
null
0
o8gty4a
false
/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gty4a/
false
1
t1_o8gtxxe
You can load whatever model you want and start as many instances in parallel as you want. Donato's toolboxes make it very easy to run llama.cpp on Strix Halo; works best for me.
1
0
2026-03-03T19:51:25
Potential-Leg-639
false
null
0
o8gtxxe
false
/r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o8gtxxe/
false
1
t1_o8gtxtz
Check out SeerAI, a Zotero plugin. Unlike generic RAG tools, it prioritizes your local notes and indexed PDFs to ensure every AI answer is grounded in your actual documents, chatting with multiple papers simultaneously and generating comparative data tables with automatic inline citations, all while keeping your research private on your own machine. Instead of building a complex vector store from scratch, the tool handles the chunking and retrieval. You can find it here: [https://github.com/dralkh/seerai](https://github.com/dralkh/seerai)
1
0
2026-03-03T19:51:24
Dralkha
false
null
0
o8gtxtz
false
/r/LocalLLaMA/comments/1gk83i1/opensource_alternative_to_notebooklm_google/o8gtxtz/
false
1
t1_o8gtwwo
I wish but I'm stuck in Dubai
1
0
2026-03-03T19:51:17
spaceman3000
false
null
0
o8gtwwo
false
/r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gtwwo/
false
1
t1_o8gtvhe
+1. Dario is very principled; #respect for that. Something I have observed when the top boss is a researcher (as is Demis) vs a pure business guy. Impressive what you have done. I've been primarily using Claude and Codex/Gemini. However, I hit usage limits, and with more people using Claude, I expect to run into higher costs when they bump up the price or the usage limits. I had to google what you were saying to understand some parts :). Do you know of any good links on how to set up one like you did, from the machine to buy to the rollout steps?
1
0
2026-03-03T19:51:06
Aprocastrinator
false
null
0
o8gtvhe
false
/r/LocalLLaMA/comments/1ria14c/dario_amodei_on_open_source_thoughts/o8gtvhe/
false
1
t1_o8gtsxi
Onwards & upwards 🚀 Don’t forget the rocket, it’s part of the signal!
1
0
2026-03-03T19:50:46
Rhinoseri0us
false
null
0
o8gtsxi
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gtsxi/
false
1
t1_o8gtsp8
Got replaced by Qwen you think?
1
0
2026-03-03T19:50:44
perkia
false
null
0
o8gtsp8
false
/r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gtsp8/
false
1
t1_o8gtnvq
It’s always funny to me when people think it’s a negative to use ai on subs about ai. It is also not ironic this was written by ai.
1
0
2026-03-03T19:50:08
ministryofchampagne
false
null
0
o8gtnvq
false
/r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gtnvq/
false
1
t1_o8gtmfv
No it's for sure novel in this instance, I love that you combined them. Great thinking.
1
0
2026-03-03T19:49:56
3spky5u-oss
false
null
0
o8gtmfv
false
/r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8gtmfv/
false
1
t1_o8gtmck
I want to vibe code some mvp ideas for 3d games and it could greatly help. Thanks and will wait for its release.
1
0
2026-03-03T19:49:55
NoFudge4700
false
null
0
o8gtmck
false
/r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gtmck/
false
1