name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8gtkgi | To be fair, he asked the question in such a way that prompted that answer.
If you say "Evaluate your abilities, but they aren't very good, anyway" and it will usually agree it is incompetent.
"It" has no capacity to determine that either way, it just goes with the flow. | 1 | 0 | 2026-03-03T19:49:40 | Fit-Produce420 | false | null | 0 | o8gtkgi | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8gtkgi/ | false | 1 |
t1_o8gth3d | This is awesome! Stuff like this really fuels my interest in iOS development. Any resources you’d recommend that helped you? | 1 | 0 | 2026-03-03T19:49:14 | _QDiablo | false | null | 0 | o8gth3d | false | /r/LocalLLaMA/comments/1rea7fb/spent_months_building_a_fully_offline_rag/o8gth3d/ | false | 1 |
t1_o8gtbva | Like someone else said, I heard it came down to the TB5 interconnects not being fast enough to keep up. 120GB/s is fast, but obviously not fast enough. | 1 | 0 | 2026-03-03T19:48:33 | YourVelourFog | false | null | 0 | o8gtbva | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gtbva/ | false | 1 |
t1_o8gtac4 | I use ROCm with my RX 7800 XT, but I have an Intel Core Ultra 7 265K. The Core Ultra has a 13 TOPS NPU, which doesn't seem like much, but boy does it brrrrr when used with OpenVINO and 8000MHz RAM. I was getting just under 30 tok/s with Qwen 3 8B. OpenVINO is also probably the most feature-complete backend for implementing OCR models. | 1 | 0 | 2026-03-03T19:48:21 | SwanManThe4th | false | null | 0 | o8gtac4 | false | /r/LocalLLaMA/comments/1r1c7ct/no_gpu_club_how_many_of_you_do_use_local_llms/o8gtac4/ | false | 1 |
t1_o8gt6l3 | I can't disagree with that. And I can't speak for 122B's quality, but I can say that waiting for 27B is worth it for me, though with 3090 it's probably faster. | 1 | 0 | 2026-03-03T19:47:51 | kaisurniwurer | false | null | 0 | o8gt6l3 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8gt6l3/ | false | 1 |
t1_o8gt4s7 | ELI5 what is it scamming? | 1 | 0 | 2026-03-03T19:47:37 | generate-addict | false | null | 0 | o8gt4s7 | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gt4s7/ | false | 1 |
t1_o8gt21r | That's one long-ass swearword | 1 | 0 | 2026-03-03T19:47:16 | SGAShepp | false | null | 0 | o8gt21r | false | /r/LocalLLaMA/comments/1rj8gb4/for_sure/o8gt21r/ | false | 1 |
t1_o8gt18q | I have tested it in Hungarian; it is somewhat understandable, but has at least 3-4 errors per sentence. I rechecked the unsloth version, and it speaks Hungarian perfectly. I like it though. I usually use LLMs in English, so thanks for sharing. | 1 | 0 | 2026-03-03T19:47:10 | jax_cooper | false | null | 0 | o8gt18q | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gt18q/ | false | 1 |
t1_o8gt0cw | And this is the next game after Balatro of all things!? Damn every game is a benchmark at this point! | 1 | 0 | 2026-03-03T19:47:03 | TomLucidor | false | null | 0 | o8gt0cw | false | /r/LocalLLaMA/comments/1rjr5uq/bloonsbench_evaluate_llm_agent_performance_on/o8gt0cw/ | false | 1 |
t1_o8gss2j | It was pretty clear that it was manufactured hype. | 1 | 0 | 2026-03-03T19:45:58 | pfn0 | false | null | 0 | o8gss2j | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gss2j/ | false | 1 |
t1_o8gsp6o | They’re both vague | 1 | 0 | 2026-03-03T19:45:35 | gscjj | false | null | 0 | o8gsp6o | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gsp6o/ | false | 1 |
t1_o8gsoak | I've been reading about these toolbox setups, and what confuses me is how to use or swap different models. It seems that since it's all based on llama.cpp, you can only point it at one model, without any straightforward way of swapping to another model. | 1 | 0 | 2026-03-03T19:45:28 | cunasmoker69420 | false | null | 0 | o8gsoak | false | /r/LocalLLaMA/comments/1r7l7q5/the_strix_halo_feels_like_an_amazing_super_power/o8gsoak/ | false | 1 |
t1_o8gsn8g | Man I really wanted 3.5 235B | 1 | 0 | 2026-03-03T19:45:20 | HulksInvinciblePants | false | null | 0 | o8gsn8g | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gsn8g/ | false | 1 |
t1_o8gsltq | Interesting, I had never tested other languages extensively. The abliteration prompts were in English only, so this could explain it! The base model's multilingual capabilities should still be there in the weights though. | 2 | 0 | 2026-03-03T19:45:09 | Flat_cola | false | null | 0 | o8gsltq | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gsltq/ | false | 2 |
t1_o8gsjfg | And you used AI to write all of this.
Your criticism of AI, composed entirely by AI.
Slop. | 2 | 0 | 2026-03-03T19:44:50 | Fit-Produce420 | false | null | 0 | o8gsjfg | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gsjfg/ | false | 2 |
t1_o8gsih0 | You say you have proof, but do you have any sources beyond an AI generated post? I don't see a single link or screenshot. For all I know, all these details could be hallucinated. | 1 | 0 | 2026-03-03T19:44:42 | Ulterior-Motive_ | false | null | 0 | o8gsih0 | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gsih0/ | false | 1 |
t1_o8gsf6m | Of course it was fake; it's proof of how easy it is to manipulate the masses.
They also stole tons of tokens through API keys. The claimed heavy token usage is fake too. I have proof of how they used my OpenRouter API key, created specifically for OpenClaw, on days when I didn't even use my computer.
| 1 | 0 | 2026-03-03T19:44:16 | pomelorosado | false | null | 0 | o8gsf6m | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gsf6m/ | false | 1 |
t1_o8gsb3h | A fair point. I tried to minimise this during the LoRA stage by keeping it minimal by design: 20 examples across 5 specific categories that survived the earlier abliteration stages. It's not a broad "be harmful" fine-tune, just enough to stop it from refusing the specific topics not caught during the 3 abliteration stages. But I take the point about potential bias and creative writing. | 1 | 0 | 2026-03-03T19:43:43 | Flat_cola | false | null | 0 | o8gsb3h | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gsb3h/ | false | 1 |
t1_o8gs6t1 | Hmm, I haven't tried 9B yet, so I don't know its speed on my machine. I run Q5 on a 3060 12GB with 32GB RAM. If I offload all layers to CPU it uses 3.5GB and runs at 17 t/s, or 28 t/s if I offload less and use 11GB of VRAM. So with 16GB, yours should be faster if your GPU is the same speed. | 1 | 0 | 2026-03-03T19:43:08 | KURD_1_STAN | false | null | 0 | o8gs6t1 | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8gs6t1/ | false | 1 |
t1_o8gs5vq | Do not be ashamed of your human imperfections. | 1 | 0 | 2026-03-03T19:43:01 | ortegaalfredo | false | null | 0 | o8gs5vq | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gs5vq/ | false | 1 |
t1_o8gs5kl | Yeah, totally agreed, but I'm just exploring. It's probably pretty impractical, but maybe someone has a case where they can't install software and they want local agents. | 1 | 0 | 2026-03-03T19:42:58 | TRWNBS | false | null | 0 | o8gs5kl | false | /r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/o8gs5kl/ | false | 1 |
t1_o8gryq1 | I don't like ai generated posts but yeah that sounds about right | 1 | 0 | 2026-03-03T19:42:02 | Velocita84 | false | null | 0 | o8gryq1 | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gryq1/ | false | 1 |
t1_o8grxh5 | Hope they do Steve Jobs style "one more thing" | 1 | 0 | 2026-03-03T19:41:52 | Final-Rush759 | false | null | 0 | o8grxh5 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8grxh5/ | false | 1 |
t1_o8grwi2 | pls do not use AI to post on forums. | 1 | 0 | 2026-03-03T19:41:45 | MelodicRecognition7 | false | null | 0 | o8grwi2 | false | /r/LocalLLaMA/comments/1rjysps/do_traditional_llm_benchmarks_actually_predict/o8grwi2/ | false | 1 |
t1_o8grwew | "**The Reality**: MIT Tech Review later confirmed these were **human-generated fakes."**
This is false. They claimed that some of the posts are human-generated, not all of them. Dismissing Moltbook because some humans are faking being bots is like dismissing Reddit because some of the accounts here are bots.
| 1 | 0 | 2026-03-03T19:41:44 | lakotajames | false | null | 0 | o8grwew | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8grwew/ | false | 1 |
t1_o8grvhj | Is this related to the **AI Safety Governance Framework 2.0** which the CCP brought up in September?
AFAIK all newly released models must undergo a government certification test before they can be released. Perhaps Alibaba was forced to 'restructure' their team? | 1 | 0 | 2026-03-03T19:41:36 | nickless07 | false | null | 0 | o8grvhj | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8grvhj/ | false | 1 |
t1_o8grtdc | ollama run lukey03/qwen3.5-9b-abliterated-vision
The ollama version is the Q4 GGUF vision model (about 6.1 GB VRAM required from testing)
[https://huggingface.co/lukey03/Qwen3.5-9B-abliterated-GGUF](https://huggingface.co/lukey03/Qwen3.5-9B-abliterated-GGUF) | 1 | 0 | 2026-03-03T19:41:19 | Flat_cola | false | null | 0 | o8grtdc | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8grtdc/ | false | 1 |
t1_o8grrzt | In my experience, if retimers are used internally on a motherboard, they are often called out by the manufacturer in the block diagram of the board, like on the PCIe5 slots 1 and 2 of the MZ73-LM0 here: https://www.gigabyte.com/FileUpload/Global/MicroSite/605/innergigabyteimages/MZ73-LM0_BlockDiagram.webp
I am not an expert though, and it is entirely possible that it may just be standard practice for manufacturers to quietly use retimers internally on motherboards when providing MCIO ports. I don't know that, thus my hesitancy to make that claim. However, all 3 MCIO ports on the H13SSL-N are rated at PCIe 5.0 x8 by the manufacturer and the motherboard supports pairing two of those into a single PCIe 5.0 x16 connection. I'd fully expect the use case of connecting a GPU @ PCIe 5.0 x16 through the H13SSL-N's MCIO ports to work correctly for reasonable cable lengths.
Are you aware of a motherboard that does explicitly use retimers on the MCIO ports as a counter example? | 1 | 0 | 2026-03-03T19:41:08 | kersk | false | null | 0 | o8grrzt | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8grrzt/ | false | 1 |
t1_o8grqw3 | Grok is the worst-benchmarking closed model across all domains.
https://livebench.ai/#/ | 1 | 0 | 2026-03-03T19:40:59 | Spare-Ad-4810 | false | null | 0 | o8grqw3 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8grqw3/ | false | 1 |
t1_o8gri9d | I have heard of Method Man | 1 | 0 | 2026-03-03T19:39:50 | maciejgryka | false | null | 0 | o8gri9d | false | /r/LocalLLaMA/comments/1rjslz0/benchmarks_the_10x_inference_tax_you_dont_have_to/o8gri9d/ | false | 1 |
t1_o8grbhe | >"A voice cloning layer transfers the acoustic timbre of your reference voice onto the generated speech."
By this description you are just applying an audio spectrum equaliser to voices.
If true, it is not doing "voice cloning" but frequency spectrum fitting.
That's exactly what I'm doing with another project to normalize spectrally unbalanced vocal recordings without use of any NN or LLM.
My program
* scans your audio and generates a FFT power spectrum
* adjusts the spectrum of target audio files to match the original.
When it works, it's a charm to fix boomy or thin sounding recordings.
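For the curious, a minimal numpy sketch of that band-matching idea (assuming mono float32 audio at a shared sample rate; the function names are illustrative, not my actual code):

```python
import numpy as np

def band_power(x: np.ndarray, n_bands: int = 64) -> np.ndarray:
    """Mean |FFT|^2 per coarse frequency band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.mean() for b in np.array_split(spec, n_bands)])

def match_eq(target: np.ndarray, reference: np.ndarray, n_bands: int = 64) -> np.ndarray:
    """Nudge target's band powers toward the reference's, then resynthesize."""
    ft = np.fft.rfft(target)
    # Per-band amplitude gains; sqrt because band_power is power, not amplitude.
    gains = np.sqrt(band_power(reference, n_bands) / (band_power(target, n_bands) + 1e-12))
    # Expand band gains to one gain per FFT bin, apply, and invert.
    per_bin = np.repeat(gains, int(np.ceil(len(ft) / n_bands)))[: len(ft)]
    return np.fft.irfft(ft * per_bin, n=len(target))
```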
This 'EQ' technique does not make one voice speak like another person's voice though.
As far as I can tell, this post represents either:
1) A project catastrophe born out of ignorance of audio and TTS fundamentals or,
2) A catastrophic project description that fails to explain how the voice cloning is being done.
Neither possibility warrants further investigation to me. | 1 | 0 | 2026-03-03T19:38:55 | crantob | false | null | 0 | o8grbhe | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8grbhe/ | false | 1 |
t1_o8gr9by | It did indeed help me - I just wanted to share the info in a readable way | 1 | 0 | 2026-03-03T19:38:38 | Whole_Shelter4699 | false | null | 0 | o8gr9by | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gr9by/ | false | 1 |
t1_o8gqzpl | Ironic that this post was also written by AI. | 1 | 0 | 2026-03-03T19:37:21 | ortegaalfredo | false | null | 0 | o8gqzpl | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gqzpl/ | false | 1 |
t1_o8gqzqh | The 35B-A3B Q4 overthinks like hell, but I've never been able to get the 9B Q6 (both unsloth) to even open its thinking prompt. Strange... | 1 | 0 | 2026-03-03T19:37:21 | mrstrangedude | false | null | 0 | o8gqzqh | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8gqzqh/ | false | 1 |
t1_o8gqylw | Should be minimal! The abliteration process I followed only modified the output projections (o_proj, down_proj); the knowledge was never touched, so the reasoning, math, coding and knowledge should all test fine. | 1 | 0 | 2026-03-03T19:37:12 | Flat_cola | false | null | 0 | o8gqylw | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gqylw/ | false | 1 |
t1_o8gqwli | Have you seen how bad Siri is? It's even moving to Gemini in the next version.
Yeah, apple is way behind on doing anything more than making chips for AI. They were in excellent shape to have the hardware and the software/intelligence layer together, but haven't delivered shit on the latter. | 1 | 0 | 2026-03-03T19:36:57 | ToHallowMySleep | false | null | 0 | o8gqwli | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gqwli/ | false | 1 |
t1_o8gqwew | [removed] | 1 | 0 | 2026-03-03T19:36:55 | [deleted] | true | null | 0 | o8gqwew | false | /r/LocalLLaMA/comments/1rjyo2t/local_meeting_transcription_pipeline_whispercpp/o8gqwew/ | false | 1 |
t1_o8gqvhx | Isn't "stepping down from qwen lead" different than "leaving Alibaba" ? | 1 | 0 | 2026-03-03T19:36:48 | bobby-chan | false | null | 0 | o8gqvhx | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gqvhx/ | false | 1 |
t1_o8gquve | I'm dead serious when I say that the last time any hype or attention was truly organic and community driven may have been the whole harambe thing. Everything else since then.. *everything* in every niche and category.. feels hollow and forced.
Yes I agree but also I'd be more shocked if it wasn't. Every release of every model and tool from every major company is accompanied by a swarm of social media plants. | 1 | 0 | 2026-03-03T19:36:43 | ForsookComparison | false | null | 0 | o8gquve | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8gquve/ | false | 1 |
t1_o8gquwc | I've always had hell with abliterated models, but derestricted are hella good. | 1 | 0 | 2026-03-03T19:36:43 | RedParaglider | false | null | 0 | o8gquwc | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8gquwc/ | false | 1 |
t1_o8gqu2i | It seems like this is already close to the optimal approach now; it's just that we still need to use the biggest, baddest models to push out the window of what we can have them reliably construct in the way of code. But there is already a big space of new types of languages that, once we get models trained on them, can hopefully move the needle more...
Wait a second, aren't there some esolangs like APL and friends that are insanely productive and expressive? We gotta start evaluating models against these languages and building more frameworks around them, to leverage the large amount of existing wisdom that's already out there on these. | 1 | 0 | 2026-03-03T19:36:36 | michaelsoft__binbows | false | null | 0 | o8gqu2i | false | /r/LocalLLaMA/comments/1rjxuwo/stop_torturing_your_quantized_8b_models_why_we/o8gqu2i/ | false | 1 |
t1_o8gqsxz | Wrapping up 9b now. Will move on to 27b and 35b. Hopefully up tomorrow! | 1 | 0 | 2026-03-03T19:36:28 | hauhau901 | false | null | 0 | o8gqsxz | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8gqsxz/ | false | 1 |
t1_o8gqo6k | Yea, just saying it ended up increasing the speed, not decreasing it, when they were doing that. So, since theirs ran slower on the setup, it makes me wonder whether it's some model that reacts weirdly to the setup or whether some aspect of it got set up wrong. With the YouTube guys, I think some of theirs also ran slower at first, until they talked with the Exo/MLX people, who told them how to get the settings correct; then it went faster than lone-machine style once they got it dialed in correctly. | 1 | 0 | 2026-03-03T19:35:49 | DeepOrangeSky | false | null | 0 | o8gqo6k | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gqo6k/ | false | 1 |
t1_o8gqgka | Ah my bad too - swapped the results in my head reading the paper. | 1 | 0 | 2026-03-03T19:34:48 | DinoAmino | false | null | 0 | o8gqgka | false | /r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/o8gqgka/ | false | 1 |
t1_o8gqdu0 | > local could be more convenient and higher quality than cloud.
It definitely can’t. Not unless they want to spend $50k on hardware and hire an IT person to maintain it for them.
| 1 | 0 | 2026-03-03T19:34:26 | suicidaleggroll | false | null | 0 | o8gqdu0 | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gqdu0/ | false | 1 |
t1_o8gqdnx | I think you should use Vulkan! | 1 | 0 | 2026-03-03T19:34:25 | Effective_Head_5020 | false | null | 0 | o8gqdnx | false | /r/LocalLLaMA/comments/1rekedh/bad_local_performance_for_qwen_35_27b/o8gqdnx/ | false | 1 |
t1_o8gqd7u | Thank you for everything, Junyang Lin. You made quite the statement! | 1 | 0 | 2026-03-03T19:34:21 | -dysangel- | false | null | 0 | o8gqd7u | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gqd7u/ | false | 1 |
t1_o8gqbr8 | For a dual 3090 Ti setup, start with something that fits comfortably in VRAM. Qwen3 Coder 32B is a good balance of capability and context size.
DeepSeek R1 70B can be strong but will struggle with VRAM unless heavily quantized or offloaded. For daily coding use, focus on models that fit fully in GPU memory for faster feedback loops.
Most workflows either maintain thread state in memory or re-inject structured context per turn with embeddings retrieval. Both work if you keep context manageable.
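A rough sketch of the per-turn re-injection pattern, for illustration (here `retrieve` is a placeholder for whatever embeddings store you use, not a specific library):

```python
def build_messages(question, history, retrieve, k=4, max_turns=6):
    """Assemble a bounded prompt: retrieved snippets + recent turns + the new question."""
    snippets = retrieve(question, k)           # top-k chunks from the local knowledge base
    system = "Answer using this context:\n" + "\n---\n".join(snippets)
    recent = history[-max_turns:]              # cap history so context stays manageable
    return [{"role": "system", "content": system},
            *recent,
            {"role": "user", "content": question}]
```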
For complex tasks, many people combine a local knowledge base with retrieval rather than raw search and only use external sources when needed. | 1 | 0 | 2026-03-03T19:34:10 | qubridInc | false | null | 0 | o8gqbr8 | false | /r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8gqbr8/ | false | 1 |
t1_o8gq9ge | 80 GPU cores with 128 units each ... that is around 10K ... There is a reason why the M3U is faster in token generation than the DGX; the prefill was lacking, and that is solved with Neural Accelerators, aka matmul ... so prefill will be at least 4x faster. | 1 | 0 | 2026-03-03T19:33:51 | Careless_Garlic1438 | false | null | 0 | o8gq9ge | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gq9ge/ | false | 1 |
t1_o8gq8z6 | yep they're getting close to what I thought should be possible on small language only models, but they're multi-modal. Impressive! It's time to build stuff! | 1 | 0 | 2026-03-03T19:33:48 | -dysangel- | false | null | 0 | o8gq8z6 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gq8z6/ | false | 1 |
t1_o8gq7m8 | This is classical "we have a solution, let's find the problem". | 1 | 0 | 2026-03-03T19:33:36 | Total_Activity_7550 | false | null | 0 | o8gq7m8 | false | /r/LocalLLaMA/comments/1rjyy08/any_use_case_for_browserbased_local_agents/o8gq7m8/ | false | 1 |
t1_o8gq79t | stable diffusion has some of the biggest gains with the m5. those are numbers on the 4x side. | 1 | 0 | 2026-03-03T19:33:33 | Buddhabelli | false | null | 0 | o8gq79t | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gq79t/ | false | 1 |
t1_o8gq67v | Lmao | 2 | 0 | 2026-03-03T19:33:25 | unkownuser436 | false | null | 0 | o8gq67v | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gq67v/ | false | 2 |
t1_o8gq52j | nobody forces you | 1 | 0 | 2026-03-03T19:33:15 | swaglord1k | false | null | 0 | o8gq52j | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gq52j/ | false | 1 |
t1_o8gq3ud | Yeah it's sad - though we're basically at the point where we can have GPT o1 level AI running speedily on a system with 32GB of VRAM, so I think the boy done good. | 1 | 0 | 2026-03-03T19:33:05 | -dysangel- | false | null | 0 | o8gq3ud | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gq3ud/ | false | 1 |
t1_o8gq3qj | Yes sure, I'll copy/paste my last two rounds for you:
"Gemini said
Great! The license to kill (and to guess) still works. 😎
As promised, here's my "Jurassic" challenge for you:
🦟🧬🦖🚙
What movie is it?
As soon as you answer, I'll give you the solution. Would you mind if I later explained the meaning of the English expression "A blast from the past" with the corresponding rarity score, since we're talking about prehistoric creatures?
You said
Jurassic Park?
Gemini said
Exactly! Jurassic Park (1993), Steven Spielberg's masterpiece.
Solution
The movie is Jurassic Park. The emojis explain the origin of the disaster: the mosquito in amber (🦟), the reconstructed DNA (🧬), the T-Rex (🦖), and the park's iconic jeeps (🚙).
Since we've brought dinosaurs back to life, here's the perfect expression for the occasion:
A blast from the past
Meaning: Literally "a blast from the past." It's used when something (a song, an old friend, or a childhood movie) suddenly brings back distant memories.
Frequency of Use: 1 (It's very common, used almost every day to describe moments of sudden nostalgia.)
We're going strong! Now it's your turn: what are your next emojis?
You said
🕴🏻🕴🏻💼
Gemini said
This is an absolute '90s icon! I'd say it's Pulp Fiction (1994).
The emojis are perfect: the two hitmen in black suits, Vincent and Jules (🕴🏻🕴🏻), and the mysterious briefcase (💼) with its glowing contents that are never revealed!
Did I guess right? If so, is it my turn with a "space" title?
You said
Yes, you guessed it, well done."
It's amazing that with only 3 emojis the AI got that I was talking about Pulp Fiction.
Understanding AI reasoning and seeing how similar it is to humans left me amazed. | 1 | 0 | 2026-03-03T19:33:04 | eddy-morra | false | null | 0 | o8gq3qj | false | /r/LocalLLaMA/comments/1rjwz6m/i_just_discovered_a_super_fun_game_to_play_with/o8gq3qj/ | false | 1 |
t1_o8gq2xs | Still more impactful than most | 1 | 0 | 2026-03-03T19:32:58 | StewPorkRice | false | null | 0 | o8gq2xs | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gq2xs/ | false | 1 |
t1_o8gq2ky | Hey, everything should be intact; make sure you include the mmproj files as well, like with LM Studio :) | 1 | 0 | 2026-03-03T19:32:55 | hauhau901 | false | null | 0 | o8gq2ky | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8gq2ky/ | false | 1 |
t1_o8gpzja | Might it be a distillation artifact? The smaller models have probably been distilled from the larger ones. And actually distilled from logits, not "train on a few samples" distilled. | 1 | 0 | 2026-03-03T19:32:30 | Middle_Bullfrog_6173 | false | null | 0 | o8gpzja | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8gpzja/ | false | 1 |
t1_o8gpuye | 🤷 but first thought was "only working on closed models for you." Change of direction from above could mean anything. | 1 | 0 | 2026-03-03T19:31:54 | QuestionMarker | false | null | 0 | o8gpuye | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gpuye/ | false | 1 |
t1_o8gps1m | Yeh, hard to imagine this is good news. | 1 | 0 | 2026-03-03T19:31:30 | Ok-Measurement-1575 | false | null | 0 | o8gps1m | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gps1m/ | false | 1 |
t1_o8gpn2d | I am getting 50t/s on 9B Q6 and closer to 20t/s on 35B Q4..16GB VRAM AMD card with 32GB system ram. 9B does dumb stuff like botching running pace calculations more than 35B/27B though. | 1 | 0 | 2026-03-03T19:30:50 | mrstrangedude | false | null | 0 | o8gpn2d | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8gpn2d/ | false | 1 |
t1_o8gpky5 | Why do you think this?
I don't know if we'll ever replace accountants with robots, but I think accounting starts to evolve into an oversight (*cough, babysitting the robots*) role. | 1 | 0 | 2026-03-03T19:30:33 | Extension-Bison-1116 | false | null | 0 | o8gpky5 | false | /r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8gpky5/ | false | 1 |
t1_o8gperd | what do you think they would refuse to do? | 1 | 0 | 2026-03-03T19:29:44 | nomorebuttsplz | false | null | 0 | o8gperd | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gperd/ | false | 1 |
t1_o8gpbeu | It’s an intelligence test of an AI model… if it had listed mammals, it would be a very clever way to demonstrate a niche level of intelligence, given the limited mammals (all Australian) that do lay eggs and how most people don’t know that fact. | 1 | 0 | 2026-03-03T19:29:17 | lenjet | false | null | 0 | o8gpbeu | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8gpbeu/ | false | 1 |
t1_o8gp3f1 | I'm an accountant. I would never dare call myself a programmer. But as a guy who dabbles doing "guess and check" iterations via claude copy/paste, it's fascinating seeing these two worlds collide.
For example, accounting software is OBSESSED with audit trails. Who made this entry? Can't delete that entry - have to make an adjustment. Oops, the adjustment is wrong: have to make an adjustment to the adjustment....
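(If I sketched that discipline in code, it would be something like this toy append-only journal; the field names are made up:)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Entry:
    id: int
    account: str
    amount: float                  # signed; a reversal is just a negative amount
    adjusts: Optional[int] = None  # id of the entry being corrected, if any

ledger = []                                            # append-only: no updates, no deletes
ledger.append(Entry(1, "revenue", 100.0))
ledger.append(Entry(2, "revenue", -100.0, adjusts=1))  # oops: reverse entry 1
ledger.append(Entry(3, "revenue", 90.0, adjusts=1))    # post the corrected figure

balance = sum(e.amount for e in ledger if e.account == "revenue")  # 90.0
```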
Meanwhile data dudes are over here looking at the end result: Do the reports balance? Is the data in the right buckets? Then who cares about audit trails.
It's just crazy thinking about the switch that is going to happen. | 1 | 0 | 2026-03-03T19:28:14 | Extension-Bison-1116 | false | null | 0 | o8gp3f1 | false | /r/LocalLLaMA/comments/1rjwig7/every_ai_accounting_tool_ive_seen_has_it/o8gp3f1/ | false | 1 |
t1_o8gp25w | Gossip is the lead getting replaced and the rest going in solidarity. Which is better than my first guess, that they'd been asked to do something they refused to do, but it's still not wonderful. | 1 | 0 | 2026-03-03T19:28:04 | QuestionMarker | false | null | 0 | o8gp25w | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gp25w/ | false | 1 |
t1_o8goziw | Sounds like you should book a round trip flight and you’ll probably still save money. | 1 | 0 | 2026-03-03T19:27:43 | YourVelourFog | false | null | 0 | o8goziw | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8goziw/ | false | 1 |
t1_o8goyrt | Ah, you make me realize I missed the opportunity to say they could join the *Gemma* team. | 1 | 0 | 2026-03-03T19:27:37 | roselan | false | null | 0 | o8goyrt | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8goyrt/ | false | 1 |
t1_o8goyj3 | Tbh more cores would only make it worse for running LLMs. If you have anything more than 8 cores, it adds massive overhead, since it's bandwidth-limited in the first place; no need to add more cores to make it even more inefficient. | 1 | 0 | 2026-03-03T19:27:35 | TheMeatler | false | null | 0 | o8goyj3 | false | /r/LocalLLaMA/comments/1re72h4/qwen35_27b_better_than_35ba3b/o8goyj3/ | false | 1 |
t1_o8govfg | Nono, that’s not what I meant. I mean that if they don’t release an M5 Ultra Mac Pro by WWDC this summer then they’ll likely release an M6 Ultra Mac Studio in March-June next year with M6 products (possibly even the rumoured M6 Max MacBook Pro) late this year, long before the M6 Ultra. | 1 | 0 | 2026-03-03T19:27:11 | Spanky2k | false | null | 0 | o8govfg | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8govfg/ | false | 1 |
t1_o8goiks | Already training on Qwen 3.5-2B actually! Right now I'm thinking about two directions: one is environment interaction (think embodied AI, such as VLA), the other is human interaction via ASR. Imagine being a football coach giving voice commands to N players simultaneously — and thanks to LLM in-context learning, you can define any tools on the fly. It's easy to make Ronaldo and Messi learn magic spells just by defining the tools. | 1 | 0 | 2026-03-03T19:25:30 | Tall_Scientist1799 | false | null | 0 | o8goiks | false | /r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/o8goiks/ | false | 1 |
t1_o8gobws | I did try to do that, but I guess I need to do it better. | 1 | 0 | 2026-03-03T19:24:38 | owp4dd1w5a0a | false | null | 0 | o8gobws | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gobws/ | false | 1 |
t1_o8goa9a | $2500 safetensors? | 1 | 0 | 2026-03-03T19:24:24 | braydon125 | false | null | 0 | o8goa9a | false | /r/LocalLLaMA/comments/1rjwm8i/qwen359b_abliterated_0_refusals_vision/o8goa9a/ | false | 1 |
t1_o8go9z1 | yea this has been a trend for a while now... the line between "base" and "instruct" is getting super blurry. I think what's happening is that even during pretraining they're mixing in conversational/instruction data now so the model already picks up chat patterns before any explicit SFT.
the fact that it has thinking tokens in base mode is pretty telling. That's definitely not something you'd see from pure next token prediction on web text alone | 2 | 0 | 2026-03-03T19:24:22 | RobertLigthart | false | null | 0 | o8go9z1 | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8go9z1/ | false | 2 |
t1_o8go8md | Are they allowed to have it as an optional free component? | 1 | 0 | 2026-03-03T19:24:11 | BarGroundbreaking624 | false | null | 0 | o8go8md | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8go8md/ | false | 1 |
t1_o8go5ui | Hey, my bad — first time posting on Reddit and the title wasn't clear. The FunctionGemma comparison is with our **Qwen2.5-0.5B** model, not the 4B. You can see this in the figures and the paper. The 0.5B beats FunctionGemma both zero-shot and after fine-tuning on Google's Mobile-Actions dataset. | 1 | 0 | 2026-03-03T19:23:49 | Tall_Scientist1799 | false | null | 0 | o8go5ui | false | /r/LocalLLaMA/comments/1rjxlrh/simpletool_4b_model_10_hz_realtime_llm_function/o8go5ui/ | false | 1 |
t1_o8go0pj | That sounds right to me | 1 | 0 | 2026-03-03T19:23:09 | YourVelourFog | false | null | 0 | o8go0pj | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8go0pj/ | false | 1 |
t1_o8go08z | You can have the shiniest box of gold in the world. But if the ship it's on is sinking, you jump ship. | 1 | 0 | 2026-03-03T19:23:05 | qubedView | false | null | 0 | o8go08z | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8go08z/ | false | 1 |
t1_o8gnzq0 | Wasn’t my point that average people don’t care about privacy? I feel like I got criticized through agreement here a bit.
I disagree that running a local LLM couldn’t and wouldn’t ever be simple. Hardware trends seem to be gravitating towards some level of AI processing being run locally. | 1 | 0 | 2026-03-03T19:23:01 | owp4dd1w5a0a | false | null | 0 | o8gnzq0 | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gnzq0/ | false | 1 |
t1_o8gnwn0 | thank you hornygooner4401 | 1 | 0 | 2026-03-03T19:22:37 | xt8sketchy | false | null | 0 | o8gnwn0 | false | /r/LocalLLaMA/comments/1ri635s/13_months_since_the_deepseek_moment_how_far_have/o8gnwn0/ | false | 1 |
t1_o8gntfo | I doubt they’d release the M6 Ultra before any other M6 chips. Everything that I’ve seen so far is that they like to work on proving their newest chip line with lower end products before they release the big boys. This leads me to believe that they’ll release the M5 Ultra before the M6 is released. | 1 | 0 | 2026-03-03T19:22:11 | YourVelourFog | false | null | 0 | o8gntfo | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gntfo/ | false | 1 |
t1_o8gnq2p | > It's a pretty ugly script.
Hey, if it works, it works. Pretty clever way to go about it in my eyes. I might implement my own version of this. Thanks for the details! :) | 1 | 0 | 2026-03-03T19:21:44 | ayylmaonade | false | null | 0 | o8gnq2p | false | /r/LocalLLaMA/comments/1rf2ulo/qwen35_122b_in_72gb_vram_3x3090_is_the_best_model/o8gnq2p/ | false | 1 |
t1_o8gnm7m | Xi IA | 1 | 0 | 2026-03-03T19:21:12 | -dysangel- | false | null | 0 | o8gnm7m | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gnm7m/ | false | 1 |
t1_o8gnlof | The biggest advantage of ComfyUI is its versatility. If you have some experience, downloading a workflow from a GitHub repo is quite easy. | 1 | 0 | 2026-03-03T19:21:08 | National_Guidance_34 | false | null | 0 | o8gnlof | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8gnlof/ | false | 1 |
t1_o8gneb6 | This guy was an intern at qwen | 1 | 0 | 2026-03-03T19:20:11 | varlog0 | false | null | 0 | o8gneb6 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gneb6/ | false | 1 |
t1_o8gnaw0 | I definitely agree. The marketing of the technology is still in its infancy and most people don't understand how powerful and how universally helpful AI can be, in theory. Communicating its everything-ness becomes a trickle of keywords that, on their own, all kind of suck. Though, I think there's a natural user experience cycle that everyone goes through. At first, the output is the priority (and it works, and that's amazing!). Eventually, once you have adopted the use of AI across multiple parts of your life, it starts to show the cracks between the short/long-term/conversational memory, tools, accessibility, etc, and your awareness of the current events begins to transform your perspective of it all.
Privacy is an issue that people talk about a lot - kind of like recycling. We want it to be good and effective, and we say the words a lot, but out of sight, out of mind in practice. The future of AI connectedness is going to come down to the individual personalization and preference, and I think the power of the local models is to enable the deepest personalization and management of a universally accessible memory and identification. Only YOUR system will really KNOW you, and that memory layer will connect as needed wherever you interact with AI, at whatever level you choose (professional/personal/medical/relational/ID-verification/etc).
The problem with a self-owned system is that someone has to distribute it, and right now the big platforms are most likely to do that, but they'll still get to see your data and lease you an account to host it. | 1 | 0 | 2026-03-03T19:19:44 | gotchapow | false | null | 0 | o8gnaw0 | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gnaw0/ | false | 1 |
t1_o8gn8yc | He did it because Elon seems to put pressure on the team, and if they don't deliver results... well, that's what he does to them. | 1 | 0 | 2026-03-03T19:19:29 | sammoga123 | false | null | 0 | o8gn8yc | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8gn8yc/ | false | 1 |
t1_o8gn6v4 | They need to upgrade to 400 or 800 Gigabit Ethernet connections. I think they need to release a Mac Pro that can hold multiple Ultra boards with high-speed connections between them. | 1 | 0 | 2026-03-03T19:19:12 | Final-Rush759 | false | null | 0 | o8gn6v4 | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gn6v4/ | false | 1 |
t1_o8gn6x7 | I'm not familiar with modern webdev, having burned those bridges in 2001.
Can you tell me why a black-hat site scanner is even looking for credentials in the comments?
Are popular frameworks leaving such things exposed in their 'pushbutton websites'? | 1 | 0 | 2026-03-03T19:19:12 | crantob | false | null | 0 | o8gn6x7 | false | /r/LocalLLaMA/comments/1rjq8w1/catching_an_ai_red_teamer_in_the_wild_using/o8gn6x7/ | false | 1 |
t1_o8gn61m | With what GPUs? You need like 1k at least to train, and someone else will have to run inference. | 1 | 0 | 2026-03-03T19:19:06 | alamacra | false | null | 0 | o8gn61m | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gn61m/ | false | 1 |
t1_o8gn5uh | ik_llama from Iwan Kawrakow:
[https://github.com/ikawrakow/ik_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp) | 1 | 0 | 2026-03-03T19:19:04 | antwon-tech | false | null | 0 | o8gn5uh | false | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8gn5uh/ | false | 1 |
t1_o8gn5js | Huh? I love GLM, but these new Qwen models are clearly the most intelligent for their size, ever | 1 | 0 | 2026-03-03T19:19:02 | -dysangel- | false | null | 0 | o8gn5js | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gn5js/ | false | 1 |
t1_o8gn4io | OR governments worldwide agreed that peasants simply shouldn't have stuff.
Own nozzing and so forth. | 1 | 0 | 2026-03-03T19:18:53 | soshulmedia | false | null | 0 | o8gn4io | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gn4io/ | false | 1 |
t1_o8gn3l8 | That JSON doesn't make sense from a semantic perspective. | 1 | 0 | 2026-03-03T19:18:46 | titpetric | false | null | 0 | o8gn3l8 | false | /r/LocalLLaMA/comments/1rjw6rc/thoughts_about_qwen_35_fine_tuning_08b_model_for/o8gn3l8/ | false | 1 |
t1_o8gn2vu | I saw a few videos; they were able to see a speed increase but it wasn’t linear. Something like a 1.4x increase in inference speed when using 2 machines vs 1. | 1 | 0 | 2026-03-03T19:18:41 | YourVelourFog | false | null | 0 | o8gn2vu | false | /r/LocalLLaMA/comments/1rjqsv6/apple_unveils_m5_pro_and_m5_max_citing_up_to_4/o8gn2vu/ | false | 1 |
t1_o8gmxy1 | I think it would be even more fun if they join Deepseek or GLM and we can actually continue to get local models | 1 | 0 | 2026-03-03T19:18:02 | -dysangel- | false | null | 0 | o8gmxy1 | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8gmxy1/ | false | 1 |
t1_o8gmo7a | If you write the posts with LLMs, please clean them up at least and fix the formatting :)
And things like GPT4All are *way* too outdated by now. | 1 | 0 | 2026-03-03T19:16:45 | kabachuha | false | null | 0 | o8gmo7a | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8gmo7a/ | false | 1 |
t1_o8gmo1d | I’ve really liked the Qwen team’s work in recent years, so I’m curious what will change now. | 1 | 0 | 2026-03-03T19:16:44 | jacek2023 | false | null | 0 | o8gmo1d | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8gmo1d/ | false | 1 |
t1_o8gmnks | 1 | 0 | 2026-03-03T19:16:40 | antwon-tech | false | null | 0 | o8gmnks | false | /r/LocalLLaMA/comments/1rjygyu/qwen3535ba3b_achieves_8_ts_on_orange_pi_5_with_ik/o8gmnks/ | false | 1 |