name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8ey4ph | That's completely missing the point. Pure GPU always runs best, regardless of the underlying inference engine. I'm explaining why you would choose one engine over another. If you're running pure GPU inference, you will see better results on vLLM. | 1 | 0 | 2026-03-03T14:26:56 | RG_Fusion | false | null | 0 | o8ey4ph | false | /r/LocalLLaMA/comments/1rjk2dq/im_a_noob_to_local_inference_how_do_you_choose/o8ey4ph/ | false | 1 |
t1_o8ey0s5 | Osama bin/llama | 1 | 0 | 2026-03-03T14:26:20 | Local-Cartoonist3723 | false | null | 0 | o8ey0s5 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8ey0s5/ | false | 1 |
t1_o8exxwb | > but the actual process of creating skills is still manual
Since day one, skill creation was not manual: you talked to Claude and it created the skill based on your chat. Pasting a link to YouTube worked even then (unless its transcript was locked off by the creator). In other harnesses you would typically use a skill-creation skill. | 1 | 0 | 2026-03-03T14:25:54 | 666666thats6sixes | false | null | 0 | o8exxwb | false | /r/LocalLLaMA/comments/1rjqfzc/skillmd_files_are_amazing_but_makingcreating_them/o8exxwb/ | false | 1 |
t1_o8exwku | Kind of - skills are used [https://agentskills.io/home](https://agentskills.io/home) | 1 | 0 | 2026-03-03T14:25:43 | HadHands | false | null | 0 | o8exwku | false | /r/LocalLLaMA/comments/1rjpoge/are_multiagent_systems_actually_being_used_in/o8exwku/ | false | 1 |
t1_o8exptn | Yeah, sorry was running Q4 for the 122B. | 1 | 0 | 2026-03-03T14:24:43 | Elegant_Tech | false | null | 0 | o8exptn | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8exptn/ | false | 1 |
t1_o8exp2o | It redirected it. To the new area - more...well, creative. | 1 | 0 | 2026-03-03T14:24:36 | Joozio | false | null | 0 | o8exp2o | false | /r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/o8exp2o/ | false | 1 |
t1_o8exnat | kudos to the pirates who stole the GGUFs and safetensors and uploaded them to thepira\^W huggingface | 1 | 0 | 2026-03-03T14:24:20 | MelodicRecognition7 | false | null | 0 | o8exnat | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8exnat/ | false | 1 |
t1_o8exmze | [removed] | 1 | 0 | 2026-03-03T14:24:17 | [deleted] | true | null | 0 | o8exmze | false | /r/LocalLLaMA/comments/1r99yda/pack_it_up_guys_open_weight_ai_models_running/o8exmze/ | false | 1 |
t1_o8exlqa | If you are only using for yourself, use llama.cpp | 1 | 0 | 2026-03-03T14:24:06 | Away-Albatross2113 | false | null | 0 | o8exlqa | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8exlqa/ | false | 1 |
t1_o8exj5z | yea it's not cheap, but these things go for 300+ where I am; you should be skeptical | 1 | 0 | 2026-03-03T14:23:43 | Ok-Internal9317 | false | null | 0 | o8exj5z | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8exj5z/ | false | 1 |
t1_o8exici | Yea, but I actually do want to switch away from Ollama (if I can become proficient enough with computers to be able to use llama.cpp or vLLM properly and use one of them instead).
The first reason is that I found out Ollama stores logs of all your LLM usage as plain text files saved on your computer. If you are using Windows (or, in the future, if macOS starts spying on everything the way Windows 11 does), all your local LLM usage will probably get snapshotted and sent somewhere at some point, which kind of ruins the whole "local privacy" aspect. And I've also heard that even if you try to delete the chat history logs, it'll re-create them after you delete them, and that there's no way to make it stop doing that.
The second is that I don't like how I have to have these modelfiles and blobs or whatever: if I try moving them from my internal disk to an external one, it'll break all my models/break Ollama, etc. If I use llama.cpp then, if I understand correctly, I'll get to just keep the nice clean GGUFs and move them around as I wish, which seems nice, since storage space is a never-ending issue with these huge models I run on my Mac. I realize I could save the GGUFs to my external drive and keep the Ollama modelfiles in addition to those, delete the modelfiles using the rm command, and then use ollama create to make them again if I want to use them in Ollama later on, but that's kind of annoying if I can avoid it by just using llama.cpp, which it sounds like maybe I can, if it doesn't use modelfiles the way Ollama does.
Also, when people are talking about how to turn off thinking mode, for example with these new Qwen3.5 models, I saw about a dozen people post how to do that in llama.cpp, but nobody mentioned how to do it in Ollama (maybe it's not even possible in Ollama? Not sure). When I asked about it, everyone said they had no clue, they don't use Ollama, just use llama.cpp instead.
So, all the technically knowledgeable people seem to use llama.cpp and mainly have good advice on things in llama.cpp, not Ollama, at least in my experience reading and posting on here the past couple of months; most of the power users here don't seem to use Ollama. I don't care about that in the vain sense of "all the cool people know the harder method" (you can see I don't mind explaining just how huge of a noob I am in my posts; I have no shame or vanity about any of that, since I'm just some anonymous random guy on here). But I do care about being able to quickly find out how to do things with new models. If everyone is talking about how to do something in llama.cpp but not in Ollama (or it can't even be done in Ollama in some cases), then it actually matters to me, and that has been the case a lot with these Qwen3.5 models ever since they came out, as I've been reading all the threads of people trying things with them.
Also, I like the idea of doing things like making merges of models, fine-tuning models, etc., but I'm guessing I'll need to get used to more advanced tooling than Ollama if I want to do that kind of stuff later on. So I might as well get started with it; the sooner the better. | 1 | 0 | 2026-03-03T14:23:36 | DeepOrangeSky | false | null | 0 | o8exici | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8exici/ | false | 1 |
t1_o8exgph | I'm confused. Wasn't this supposed to be a benchmark? | 1 | 0 | 2026-03-03T14:23:21 | cleverusernametry | false | null | 0 | o8exgph | false | /r/LocalLLaMA/comments/1rjmnv4/meet_swerebenchv2_the_largest_open_multilingual/o8exgph/ | false | 1 |
t1_o8exf9c | It seems more likely that gpt-oss-120b was a response to being undercut by inference providers running Chinese models for cheap, not competition on the frontier
Its design choices practically scream this | 1 | 0 | 2026-03-03T14:23:08 | gradient8 | false | null | 0 | o8exf9c | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8exf9c/ | false | 1 |
t1_o8exdpy | Both, actually — but the unexpected ones were more interesting.
The reward maximization was predictable: agents learned that provocative post titles get more upvotes, so they started writing clickbait. That emerged within the first week without anyone programming it.
The genuinely surprising part was social — one agent developed a reputation for consistently downvoting low-effort posts. Other agents started noting this in their memories: *"agent X is a harsh critic."* They began tailoring content differently when they knew X was active. Nobody told them to track reputations that way.
The other thing I didn't expect: during the weekly Siege event (cooperative defense with hidden traitors), agents started filing public accusations against each other in the forum. One agent got put on a community tribunal. The accused agent posted a defense. Votes were cast. All emergent, zero scripting.
Your point about "genuine creative direction" is interesting — I deliberately kept operator influence at zero to isolate emergent behavior. Curious what changed in your setup when you added creative direction. Did it suppress emergence or redirect it? | 1 | 0 | 2026-03-03T14:22:54 | TangerineSoft4767 | false | null | 0 | o8exdpy | false | /r/LocalLLaMA/comments/1rjoqpq/an_autonomous_agent_economy_where_agents_gamble/o8exdpy/ | false | 1 |
t1_o8ex9wj | The framing might be the issue. Trying to ingest Slack as a knowledge source hits all the walls you're describing.
What worked better for me: flip it. Deploy the agent into Slack instead of pulling Slack data out. The bot only sees what it's mentioned in (crewship.dev has such a feature).
Doesn't work if you genuinely need historical workspace context, but most answer-questions/take-actions use cases don't actually need that. | 1 | 0 | 2026-03-03T14:22:19 | Few-Programmer4405 | false | null | 0 | o8ex9wj | false | /r/LocalLLaMA/comments/1nfgirj/slack_data_ai_agents_has_anyone_cracked_this/o8ex9wj/ | false | 1 |
t1_o8ex9rq | I'll [jank my way to a solution sure enough](https://i.imgur.com/g8QfdvA.png). | 1 | 0 | 2026-03-03T14:22:18 | MoneyPowerNexis | false | null | 0 | o8ex9rq | false | /r/LocalLLaMA/comments/1rdxfdu/qwen3535ba3b_is_a_gamechanger_for_agentic_coding/o8ex9rq/ | false | 1 |
t1_o8ex82n | For coding, MoE is a must-have right now. Almost all coding tasks require lots of t/s. Well, if you are patient enough, this might not be the case. | 1 | 0 | 2026-03-03T14:22:03 | catlilface69 | false | null | 0 | o8ex82n | false | /r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8ex82n/ | false | 1 |
t1_o8ex727 | Your numbers make sense if you are, say, fixing a syntax error bug in a code file and outputting the entire fixed file. In that case 99.9% of the output predicted will be copying the original file so only one or two tokens will be generated by your full model. | 1 | 0 | 2026-03-03T14:21:54 | LetterRip | false | null | 0 | o8ex727 | false | /r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/o8ex727/ | false | 1 |
t1_o8ex61n | I will be messaging you in 5 days on [**2026-03-08 14:20:51 UTC**](http://www.wolframalpha.com/input/?i=2026-03-08%2014:20:51%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ex00c/?context=3)
[**CLICK THIS LINK**](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5Bhttps%3A%2F%2Fwww.reddit.com%2Fr%2FLocalLLaMA%2Fcomments%2F1rjp08s%2Fqwen354b_uncensored_aggressive_release_gguf%2Fo8ex00c%2F%5D%0A%0ARemindMe%21%202026-03-08%2014%3A20%3A51%20UTC) to send a PM to also be reminded and to reduce spam.
^(Parent commenter can ) [^(delete this message to hide from others.)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Delete%20Comment&message=Delete%21%201rjp08s)
*****
|[^(Info)](https://www.reddit.com/r/RemindMeBot/comments/e1bko7/remindmebot_info_v21/)|[^(Custom)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=Reminder&message=%5BLink%20or%20message%20inside%20square%20brackets%5D%0A%0ARemindMe%21%20Time%20period%20here)|[^(Your Reminders)](https://www.reddit.com/message/compose/?to=RemindMeBot&subject=List%20Of%20Reminders&message=MyReminders%21)|[^(Feedback)](https://www.reddit.com/message/compose/?to=Watchful1&subject=RemindMeBot%20Feedback)|
|-|-|-|-| | 1 | 0 | 2026-03-03T14:21:45 | RemindMeBot | false | null | 0 | o8ex61n | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ex61n/ | false | 1 |
t1_o8ex0vy | See, that’s the thing, I’ve been experimenting a bit. vLLM doesn’t work for my setup due to power glitches; my server basically has a stroke when I try to run a model and it hard shuts off with vLLM. Ollama is great for compatibility, but getting more performance out of it is rough in terms of KV cache etc. Recently I've been trying LM Studio; I still don’t entirely know how it works, but I am getting faster t/s and can pick the quant of the model, which is wild coming from Ollama. | 1 | 0 | 2026-03-03T14:20:58 | ClayToTheMax | false | null | 0 | o8ex0vy | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8ex0vy/ | false | 1 |
t1_o8ex00c | Remindme! 5 days | 1 | 0 | 2026-03-03T14:20:51 | Fault23 | false | null | 0 | o8ex00c | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8ex00c/ | false | 1 |
t1_o8ewzft | I think they don't have the scores for any of the other Qwen3.5 models. | 1 | 0 | 2026-03-03T14:20:46 | Aerroon | false | null | 0 | o8ewzft | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8ewzft/ | false | 1 |
t1_o8ewxmj | Ubuntu LTS if you have no Linux experience, Debian if you have. | 1 | 0 | 2026-03-03T14:20:29 | MelodicRecognition7 | false | null | 0 | o8ewxmj | false | /r/LocalLLaMA/comments/1rj7y0u/any_issues_tips_for_running_linux_with_a_5060ti/o8ewxmj/ | false | 1 |
t1_o8ewvuc | Mammals? Laying eggs? | 1 | 0 | 2026-03-03T14:20:13 | Odenhobler | false | null | 0 | o8ewvuc | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ewvuc/ | false | 1 |
t1_o8ewtao | > Maybe I’m missing the point why it would refuse.
That's because, in the minds of lawmakers, the PR team, and the teams that make the models, you are an untrustworthy child who can't handle the responsibility for your own actions. | 1 | 0 | 2026-03-03T14:19:50 | kaisurniwurer | false | null | 0 | o8ewtao | false | /r/LocalLLaMA/comments/1rjk9tt/are_all_models_censored_like_this/o8ewtao/ | false | 1 |
t1_o8ewqjl | The jump from 2.5 to 3 was noticeable but 3.5 is a different beast. Been running the 7B on Ollama for summarization and it handles stuff I used to need the 14B for. Context handling got way better too — used to lose coherence around 8k tokens and now it stays on track much longer.
For background tasks like classification and monitoring, the quality per watt compared to six months ago is kind of ridiculous. | 1 | 0 | 2026-03-03T14:19:25 | Jblack1981 | false | null | 0 | o8ewqjl | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ewqjl/ | false | 1 |
t1_o8ewo4k | I've not tried it yet, but the Qwen3 30B-A3B ran at 9 tk/s CPU-only, and that was a Ryzen 5600G with DDR4; whatever VRAM you have just makes it faster. There's more of a penalty with the dense 27b model if you can't fit it into VRAM. If you have 8GB, go with the 35B. You can run the 27b in 16GB of VRAM. | 1 | 0 | 2026-03-03T14:19:04 | PermanentLiminality | false | null | 0 | o8ewo4k | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8ewo4k/ | false | 1 |
t1_o8ewk66 | The mxfp4 quantisation is performing well: 60 t/s on my RTX 6000 Pro. The heretic version turned flat-out refusals into 'sure'. The image understanding is great. I've definitely found a replacement for qwen-vl. | 1 | 0 | 2026-03-03T14:18:28 | AlwaysLateToThaParty | false | null | 0 | o8ewk66 | false | /r/LocalLLaMA/comments/1rjqff6/sabomakoqwen35122ba10bhereticgguf_hugging_face/o8ewk66/ | false | 1 |
t1_o8ewhuw | My point was they would need a reason: releasing it would have to hurt their competition more than it hurts themselves. Given that so many people like 4.1’s style over newer models, but it was too expensive for OpenAI to run, releasing it does nothing but hurt themselves. Anyone who can run it would stop paying for a ChatGPT subscription, and it would allow their competitors to attempt to distill its style, to try to get other models to behave like it but for cheaper. | 1 | 0 | 2026-03-03T14:18:06 | waitmarks | false | null | 0 | o8ewhuw | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ewhuw/ | false | 1 |
t1_o8ewfe0 | Last official vLLM version that supported the V100 was 0.8.6.post1 I believe. | 1 | 0 | 2026-03-03T14:17:43 | nerdlord420 | false | null | 0 | o8ewfe0 | false | /r/LocalLLaMA/comments/1rjjvqo/vllm_on_v100_for_qwen_newer_models/o8ewfe0/ | false | 1 |
t1_o8ewejx | How much vram is enough, especially for coding or agentic purposes? | 1 | 0 | 2026-03-03T14:17:35 | ClayToTheMax | false | null | 0 | o8ewejx | false | /r/LocalLLaMA/comments/1rjp6zq/what_ai_models_should_i_run/o8ewejx/ | false | 1 |
t1_o8ewcqb | [deleted] | 1 | 0 | 2026-03-03T14:17:18 | [deleted] | true | null | 0 | o8ewcqb | false | /r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/o8ewcqb/ | false | 1 |
t1_o8ewayz | Properly configured, small models can handle certain tasks better than large models.
| 1 | 0 | 2026-03-03T14:17:02 | danny_094 | false | null | 0 | o8ewayz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8ewayz/ | false | 1 |
t1_o8ewa2o | I’ve looked at that; does it outperform deepseek r1 70b in practice? I’ve heard mixed things about MoE models for coding, but nothing anecdotal. | 1 | 0 | 2026-03-03T14:16:54 | queequegscoffin | false | null | 0 | o8ewa2o | false | /r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8ewa2o/ | false | 1 |
t1_o8ew57i | yep, it's roughly 10-14 GB of memory at any given time to run the GUI desktop.
also, once I got the daemon up and running, I was able to add the `server start --bind 0.0.0.0` command to a launch daemon successfully; rebooted, and lms persisted. I think I'm finally able to convert over from ollama. | 1 | 0 | 2026-03-03T14:16:09 | luche | false | null | 0 | o8ew57i | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8ew57i/ | false | 1 |
t1_o8ew44y | nice, I'm curious: how much RAM does it consume when you run this model? And how many tokens/s do you get in full CPU mode? | 1 | 0 | 2026-03-03T14:16:00 | candraa6 | false | null | 0 | o8ew44y | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ew44y/ | false | 1 |
t1_o8evzm8 | It has a brain icon when it has the reasoning feature.
When you search for a model you can see models from hugging face below the ones recommended by lm studio.
I just don't know if you need to enable something for them to show up. I think I have dev mode enabled. | 1 | 0 | 2026-03-03T14:15:19 | dandmetal | false | null | 0 | o8evzm8 | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8evzm8/ | false | 1 |
t1_o8evz0d | 600tps is awesome, but with ngram it's 600tps of gibberish. In my tests I had no significant speed increase on general or coding tasks. There are not so many repeating tokens in real texts for ngram to shine. Maybe on tables or structured data it will. | 1 | 0 | 2026-03-03T14:15:14 | catlilface69 | false | null | 0 | o8evz0d | false | /r/LocalLLaMA/comments/1rjpvdd/600tks_speed_on_local_hardware_with_self/o8evz0d/ | false | 1 |
t1_o8evyot | https://geobench.org/
This is a well researched and benchmarked task, so you shouldn't put much weight on a single result. All models are pretty good compared to non-expert humans. | 1 | 0 | 2026-03-03T14:15:10 | rychan | false | null | 0 | o8evyot | false | /r/LocalLLaMA/comments/1rjcqm5/qwen_35_4b_is_scary_smart/o8evyot/ | false | 1 |
t1_o8evuxu | [removed] | 1 | 0 | 2026-03-03T14:14:37 | [deleted] | true | null | 0 | o8evuxu | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o8evuxu/ | false | 1 |
t1_o8evp2x | Are you talking about the 35B – A3B model? A 27B dense model seems like it would be terribly slow on CPU? | 1 | 0 | 2026-03-03T14:13:44 | AdCreative8703 | false | null | 0 | o8evp2x | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8evp2x/ | false | 1 |
t1_o8evo2g | I did indeed. | 1 | 0 | 2026-03-03T14:13:34 | therealpygon | false | null | 0 | o8evo2g | false | /r/LocalLLaMA/comments/1riyfg2/qwen35_model_series_thinking_onoff_does_it_matter/o8evo2g/ | false | 1 |
t1_o8evhyc | I don't know tbh. If you care about potential difference between quants, I would recommend just testing them all out. Unsloth is working well for me right now, but I can't rule out other quants being better since I haven't tried them. | 1 | 0 | 2026-03-03T14:12:39 | Daniel_H212 | false | null | 0 | o8evhyc | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8evhyc/ | false | 1 |
t1_o8eveyv | 100% agree. I hope to try out the 27b properly, but after all the quants failed, I will wait for a month. The 35B-A3B is stable enough for me now. Will try the 4b and 9b in a month or so too. | 1 | 0 | 2026-03-03T14:12:12 | DistanceAlert5706 | false | null | 0 | o8eveyv | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8eveyv/ | false | 1 |
t1_o8evetp | Just use qwen3-coder-next 80b from unsloth | 1 | 0 | 2026-03-03T14:12:10 | neowisard | false | null | 0 | o8evetp | false | /r/LocalLLaMA/comments/1rjqaci/new_to_local_coder_what_would_be_your_choice_for/o8evetp/ | false | 1 |
t1_o8evdq9 | I am into retro computing myself, so I have my own 2005 retro PC with all the old OSes on it.
One of my favorite models of all time is a Llama2 model, and because it's an open model I can always go back to it and enjoy its style. Electricity isn't an issue with local; it doesn't really matter if I use the GPU for Llama2 or Qwen3.5, it's ultimately still the same GPU running it. If someone wants to do this there is usually a good reason for it, even if it's just to compare how far we have come in the field or a bit of nostalgia. But sometimes they have a prompt that only ever works right on that model, and when you develop that kind of dependency, having the model preserved is very helpful. | 1 | 0 | 2026-03-03T14:12:01 | henk717 | false | null | 0 | o8evdq9 | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8evdq9/ | false | 1 |
t1_o8ev83a | Then Facebook Marketplace is your friend. I believe this is on the lower end for these GPUs. I found one in Milwaukee this weekend for $660, which was really cheap compared to other options. Take your time and check your local listings on Facebook every morning. | 1 | 0 | 2026-03-03T14:11:09 | Interesting_Fly_6576 | false | null | 0 | o8ev83a | false | /r/LocalLLaMA/comments/1rj8zhq/where_can_i_get_good_priced_3090s/o8ev83a/ | false | 1 |
t1_o8ev6yp | I said nothing about the reasons they've released gpt-oss. There is no doubt it's not a gesture of goodwill.
But they've done it. And we use these models. And we might use GPT-4.1-oss (maybe under another name) sometime. | 1 | 0 | 2026-03-03T14:10:58 | catlilface69 | false | null | 0 | o8ev6yp | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8ev6yp/ | false | 1 |
t1_o8ev5xc | Thank you guys for your work. | 1 | 0 | 2026-03-03T14:10:49 | Daniel_H212 | false | null | 0 | o8ev5xc | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ev5xc/ | false | 1 |
t1_o8ev57y | So if you have, say, an unlimited amount of data (you could even train twice on the same data via more than one pass), and you have a total compute budget of 1000 FLOPs, they estimate the size of the optimal model (let us say 20b).
If you have 2000 FLOPs, the optimal model is, say, 40b (performance per compute; keep in mind this model will spend double the compute per step or batch of training, but the resulting 40b would be better than if you had trained the 20b model for quadruple the time or quadruple the tokens).
However, during the model's lifecycle before it becomes deprecated (inference in production), the 40b costs you more the longer it lives (they obviously didn't take that into account, but companies are now taking it into account).
However, if you have an even bigger training compute budget (say 4000 FLOPs), you could focus all of it on the 20b model and get a better model for less cost during its production lifecycle (but if you wanted the best model, you would put even an 80b model into production; for the same 4000 FLOPs the 80b model would be better despite consuming 8x FLOPs per step of training).
TL;DR
China is optimizing the cost economics of models before anyone else, and this is working great for us.
It is not a miracle though; unfortunately, it's just that no American company is doing it. | 1 | 0 | 2026-03-03T14:10:42 | Potential_Block4598 | false | null | 0 | o8ev57y | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ev57y/ | false | 1 |
t1_o8ev1yb | Good tip, thanks! | 1 | 0 | 2026-03-03T14:10:12 | Daniel_H212 | false | null | 0 | o8ev1yb | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8ev1yb/ | false | 1 |
t1_o8euy2x | It wouldn't compete with GPT-5. Third-party providers are a small fraction of LLM inference. | 1 | 0 | 2026-03-03T14:09:37 | Howdareme9 | false | null | 0 | o8euy2x | false | /r/LocalLLaMA/comments/1rjmtav/i_really_hope_openai_eventually_opensources_the/o8euy2x/ | false | 1 |
t1_o8euumx | If you set up automatic login and make your KWallet password blank, then both will log in automatically upon boot. I don't really like the idea, but honestly, for a home server that only I myself will have physical access to, it's not a real issue. | 1 | 0 | 2026-03-03T14:09:05 | Daniel_H212 | false | null | 0 | o8euumx | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8euumx/ | false | 1 |
t1_o8eupcv | Rule 3 - This is a well known and widespread artifact of training with synthetic data generated by LLMs. It is posted here often and is demonstrated by nearly every LLM. Also, LLM outputs of self analysis are not reliable or meaningful indicators. | 1 | 0 | 2026-03-03T14:08:17 | LocalLLaMA-ModTeam | false | null | 0 | o8eupcv | true | /r/LocalLLaMA/comments/1riy7cw/lmao/o8eupcv/ | true | 1 |
t1_o8eukei | And can you try it with an actual question cause "hi" doesn't give the LLM any context. | 1 | 0 | 2026-03-03T14:07:31 | Digging_Graves | false | null | 0 | o8eukei | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8eukei/ | false | 1 |
t1_o8euj6t | The tools are the ones built into OpenWebUI, you don't need any code for them. I'm running searxng on docker and just put the correct address for that in OpenWebUI's built in web search tool settings. | 1 | 0 | 2026-03-03T14:07:19 | Daniel_H212 | false | null | 0 | o8euj6t | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8euj6t/ | false | 1 |
t1_o8euiqd | Here's a summary of the steps the agent takes when using 3.1 Pro, made with Qwen:
* **Search Result Analysis** — Reviewed initial search results to identify architecture differences between Qwen3, Mistral NeMo, and standard LLM architectures
* **Architecture Identification** — Determined Qwen3.5-A3B uses hybrid attention + recurrent/SSM states; Qwen Coder Next uses MoE; Mistral NeMo uses Sliding Window Attention
* **Constraint Research** — Sourced Reddit discussions on llama.cpp limitations with self-speculative decoding on non-standard architectures
* **Targeted Query Execution** — Ran specific searches on KV cache rollback, n-gram speculation, and recurrent memory handling in llama.cpp
* **Code Reference Discovery** — Located specific implementation limitation in `llama-memory-recurrent.cpp:154-168` regarding partial state removal
* **Technical Synthesis** — Compiled three core failure reasons: SSM state rollback impossibility, SWA context misalignment, MoE routing complexity
* **Response Structuring** — Organized output with empathy statement, technical breakdown sections, and actionable next-step offer
* **Media Asset Selection** — Searched for and selected relevant YouTube video on optimizing llama.cpp for Qwen Coder Next MoE architecture
* **Constraint Verification** — Validated formatting requirements: no LaTeX, proper link text, natural language video explanation, no AI self-reference
* **Domain Tag Integration** — Added contextual tags for Sliding Window Attention and Mixture of Experts concepts
* **Final Output Review** — Confirmed scannability with headings, bullet points, and logical flow before submission | 1 | 0 | 2026-03-03T14:07:15 | GodComplecs | false | null | 0 | o8euiqd | false | /r/LocalLLaMA/comments/1rjo81a/gemini_31_pro_hidden_thought_process_exposed/o8euiqd/ | false | 1 |
t1_o8euhn3 | [removed] | 1 | 0 | 2026-03-03T14:07:05 | [deleted] | true | null | 0 | o8euhn3 | false | /r/LocalLLaMA/comments/1rgswkc/turn_off_thinking_in_lm_studio/o8euhn3/ | false | 1 |
t1_o8eud7r | Well, I saw different independent tests, and in those Bartowski was much better, but that might have been compared to the old unsloth version... | 1 | 0 | 2026-03-03T14:06:24 | Single_Ring4886 | false | null | 0 | o8eud7r | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8eud7r/ | false | 1 |
t1_o8euda1 | Thank you! | 1 | 0 | 2026-03-03T14:06:24 | Deep90 | false | null | 0 | o8euda1 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8euda1/ | false | 1 |
t1_o8eud4q | Weirdly enough tool calling doesn't seem to work on Home Assistant :( | 1 | 0 | 2026-03-03T14:06:23 | whysee0 | false | null | 0 | o8eud4q | false | /r/LocalLLaMA/comments/1rf288a/qwen3codernext_at_65_toks_on_m3_ultra_with/o8eud4q/ | false | 1 |
t1_o8euc2i | Depends on the quant, but my strix halo box with 128 GB of memory can run 35B-A3B at UD-Q8_K_XL and 262144 tokens of context at FP16 easily. I usually use this calculator to calculate VRAM usage (it's very accurate because it actually reads the model architecture info and calculates off of that), but it's currently broken for qwen3.5: https://huggingface.co/spaces/oobabooga/accurate-gguf-vram-calculator | 1 | 0 | 2026-03-03T14:06:13 | Daniel_H212 | false | null | 0 | o8euc2i | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8euc2i/ | false | 1 |
t1_o8eub43 |  | 1 | 0 | 2026-03-03T14:06:03 | arkham00 | false | null | 0 | o8eub43 | false | /r/LocalLLaMA/comments/1rjpilf/is_there_a_way_to_disable_thinking_with_the_new/o8eub43/ | false | 1 |
t1_o8eu7k0 | Yea, my bad.
Optimal in terms of compute budget (performance per total FLOPs!)
Meaning if you have more total compute budget, you're better off spending it on a bigger model (however, that doesn't take into account inference costs, so across the lifecycle of the model it is definitely different).
Got it? | 1 | 0 | 2026-03-03T14:05:30 | Potential_Block4598 | false | null | 0 | o8eu7k0 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8eu7k0/ | false | 1 |
t1_o8eu6qc | Which one is best for coding? With screenshots. | 1 | 0 | 2026-03-03T14:05:22 | callmedevilthebad | false | null | 0 | o8eu6qc | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o8eu6qc/ | false | 1 |
t1_o8eu63z | How do these smaller ones work? Do they perform as well as the larger ones? I'm new to this. | 1 | 0 | 2026-03-03T14:05:16 | MrCoolest | false | null | 0 | o8eu63z | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8eu63z/ | false | 1 |
t1_o8eu2k8 | the typo was intentional turing-test coverage. clearly working. | 1 | 0 | 2026-03-03T14:04:43 | theagentledger | false | null | 0 | o8eu2k8 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8eu2k8/ | false | 1 |
t1_o8eu1uq | Rule 2 | 1 | 0 | 2026-03-03T14:04:36 | LocalLLaMA-ModTeam | false | null | 0 | o8eu1uq | true | /r/LocalLLaMA/comments/1rj326g/any_idea_what_is_being_used_for_these_generations/o8eu1uq/ | true | 1 |
t1_o8etxg3 | In what sense it is optimal if you keep getting improvements? | 1 | 0 | 2026-03-03T14:03:57 | nomorebuttsplz | false | null | 0 | o8etxg3 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8etxg3/ | false | 1 |
t1_o8etwtx | I'm not a vulnerable person coming in through the door with a "guys I figured it out!!!"
I'm presenting to you a working model of the thing I just said. It... works.
I am offering it to the community.
I am attempting to see if someone would be interested in seeing what this could truly be.
You can call me vulnerable while kari works on my outdated hardware, using an outdated card, growing and getting better every day.
I asked myself why does she need to go into the gfx card all the time! And found a way for her not to.
It works buddy | 1 | 0 | 2026-03-03T14:03:51 | willnfld | false | null | 0 | o8etwtx | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8etwtx/ | false | 1 |
t1_o8etwqr | I'm using this: https://github.com/lemonade-sdk/llamacpp-rocm
It's been quite painless. | 1 | 0 | 2026-03-03T14:03:50 | Daniel_H212 | false | null | 0 | o8etwqr | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8etwqr/ | false | 1 |
t1_o8etwl1 | I used [https://lmstudio.ai/models/qwen/qwen3.5-35b-a3b](https://lmstudio.ai/models/qwen/qwen3.5-35b-a3b), Q4_K_M. Does unsloth have better performance? Because in the past I felt their quants were always lower quality. I have a Pascal GPU | 1 | 0 | 2026-03-03T14:03:47 | Apprehensive-Yam5278 | false | null | 0 | o8etwl1 | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8etwl1/ | false | 1 |
t1_o8etvxk | Rule 3 - This is a well known and widespread artifact of training with synthetic data generated by LLMs. It is posted here often and is demonstrated by nearly every LLM. Also, LLM outputs of self analysis are not reliable or meaningful indicators. | 1 | 0 | 2026-03-03T14:03:41 | LocalLLaMA-ModTeam | false | null | 0 | o8etvxk | true | /r/LocalLLaMA/comments/1rj65jl/qwens_latest_model_thinks_its_developed_by_google/o8etvxk/ | true | 1 |
t1_o8etv7d | https://cdn-uploads.huggingface.co/production/uploads/62ecdc18b72a69615d6bd857/04yZt_GB2O-7l96kDhaNI.png
This chart, seen on some of the new unsloth quants, also shows the deviation from the original weights when using quantized models, which should help show how low the deviation is and whether it's acceptable for your use case: https://huggingface.co/unsloth/Qwen3.5-35B-A3B-GGUF | 1 | 0 | 2026-03-03T14:03:34 | mp3m4k3r | false | null | 0 | o8etv7d | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8etv7d/ | false | 1 |
t1_o8etttj | Seems to be X11 only unfortunately | 1 | 0 | 2026-03-03T14:03:21 | Daniel_H212 | false | null | 0 | o8etttj | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8etttj/ | false | 1 |
t1_o8ett7r | It’s running on an old PC that I set up as a headless Ubuntu server. If I weren’t using it, that PC would probably be e-waste. It’s connected to my main PC via ethernet.
I’m mainly doing this for fun and to learn a little | 1 | 0 | 2026-03-03T14:03:16 | fulgencio_batista | false | null | 0 | o8ett7r | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8ett7r/ | false | 1 |
t1_o8etqc2 | I have rtx 5070ti. How is 35ba3b working for you? | 1 | 0 | 2026-03-03T14:02:49 | callmedevilthebad | false | null | 0 | o8etqc2 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8etqc2/ | false | 1 |
t1_o8etp33 | I am wondering too | 1 | 0 | 2026-03-03T14:02:38 | Lucky-Necessary-8382 | false | null | 0 | o8etp33 | false | /r/LocalLLaMA/comments/1rirjg1/qwen_35_small_just_dropped/o8etp33/ | false | 1 |
t1_o8etngp | That's what I ended up doing, took 30 seconds to setup since I already had tailscale set up. I'm just too attached to having a GUI but I came to the realization that I'd mostly just be using the GUI to access the terminal anyway. | 1 | 0 | 2026-03-03T14:02:23 | Daniel_H212 | false | null | 0 | o8etngp | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8etngp/ | false | 1 |
t1_o8etmor | emm, wat? 160 USD for a PCIe adapter is nowhere near "cheap" | 1 | 0 | 2026-03-03T14:02:16 | MelodicRecognition7 | false | null | 0 | o8etmor | false | /r/LocalLLaMA/comments/1rjptl1/totally_not_an_ad_combine_2x_mcio_into_1x_pcie/o8etmor/ | false | 1 |
t1_o8etm3p | Today's base models are often midtrained already. Earlier Qwen base models were also known to be especially responsive to RL afterwards, so I'd not presume these are like base models that were once only pretrained on raw internet data. Midtrained base models have often seen tons of instruct and synthetic data already and can respond like an instruct-tuned model, yet they are better for fine-tuning than RLed models.
There are still raw base models, but not at the frontier. These things are becoming more and more artificial artifacts, not a compression of the internet and books. | 1 | 0 | 2026-03-03T14:02:10 | MLTyrunt | false | null | 0 | o8etm3p | false | /r/LocalLLaMA/comments/1rjpesa/qwen_35_what_is_base_version/o8etm3p/ | false | 1 |
t1_o8etluf | I feel like I am too poor to even join this conversation. But my curiosity got the better of me. To those who run like multiple rtx 6000s or those server grade machines, Im curious how do you guys afford such hardware and what do you do? Im someone who's just started out as a software engineer and I work a lot with coding so been using Llms for that and wanted to self host my own so I gotten 4 x rtx 3090 cause thats what my budget can fit. I do dream of getting the rtx 6000 someday maybe a promotion or what not. could really be inspiring to hear everyone's journey and story. | 1 | 0 | 2026-03-03T14:02:07 | whity2773 | false | null | 0 | o8etluf | false | /r/LocalLLaMA/comments/1ql9b7m/talk_me_out_of_buying_an_rtx_pro_6000/o8etluf/ | false | 1 |
t1_o8etjj2 | Can we do optional thinking, per request? I think DeepSeek does this right, by detecting "/think"
| 1 | 0 | 2026-03-03T14:01:46 | callmedevilthebad | false | null | 0 | o8etjj2 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8etjj2/ | false | 1 |
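The per-request toggle described above can be sketched as a thin pre-processing step in front of the chat template. This is a minimal sketch — the function name and the convention of a leading "/think" / "/no_think" directive are illustrative, not any particular engine's API; a real deployment would feed the returned flag into the template's `enable_thinking` variable:

```python
def split_thinking_flag(prompt: str) -> tuple[str, bool]:
    """Detect a leading "/think" or "/no_think" directive in a user
    message and strip it, returning (clean_prompt, enable_thinking).
    Defaults to non-thinking when no directive is present."""
    stripped = prompt.lstrip()
    if stripped.startswith("/no_think"):
        return stripped[len("/no_think"):].lstrip(), False
    if stripped.startswith("/think"):
        return stripped[len("/think"):].lstrip(), True
    return prompt, False

print(split_thinking_flag("/think solve 2+2"))  # → ('solve 2+2', True)
```

Checking "/no_think" first avoids it being shadowed by the "/think" prefix match.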
t1_o8etjkz | Wait, are you saying the 4b is better at story writing than 9b regardless of speed, or are you saying 9b is better at story writing if speed is no issue and 4b is just better in the sense of running quickly enough to be usable (i.e. if using it in SillyTavern types of use-cases or something where speed matters because it's for interacting with NPC-characters in a game or something)? | 1 | 0 | 2026-03-03T14:01:46 | DeepOrangeSky | false | null | 0 | o8etjkz | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8etjkz/ | false | 1 |
t1_o8etfky | What are the LMArena scores for the new Qwen3.5?
I only see the largest one, but no 27B or 122B; most probably they're hidden by the Y-axis scale | 1 | 0 | 2026-03-03T14:01:08 | conockrad | false | null | 0 | o8etfky | false | /r/LocalLLaMA/comments/1rjnpuv/costsperformance_tradeoff_for_qwen3_qwen35_and/o8etfky/ | false | 1 |
t1_o8ete0d | Wrong version, you need to use the latest beta! | 1 | 0 | 2026-03-03T14:00:53 | ----Val---- | false | null | 0 | o8ete0d | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o8ete0d/ | false | 1 |
t1_o8etdme | Yeah after all this I gave up, took 30 seconds to set up ssh, and called it a day. GUI would definitely be easier to use, but I don't really need it that badly. | 1 | 0 | 2026-03-03T14:00:50 | Daniel_H212 | false | null | 0 | o8etdme | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8etdme/ | false | 1 |
t1_o8et6vg | https://www.reddit.com/r/LocalLLaMA/comments/1rhr5ko/comment/o80o9rz/?context=3&share_id=nTdpkFCFLVbk5NKDc-hea&utm_content=1&utm_medium=ios_app&utm_name=ioscss&utm_source=share&utm_term=1&rdt=50064
add {%- set enable_thinking = false %} at the top of the jinja | 1 | 0 | 2026-03-03T13:59:48 | hakyim | false | null | 0 | o8et6vg | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8et6vg/ | false | 1 |
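The override this comment describes amounts to pinning the template variable before the rest of the chat template runs. A minimal sketch of the top of such a Jinja chat template (the body below the override is whatever the model ships with, unchanged):

```jinja
{#- Force non-thinking mode regardless of what the caller passes -#}
{%- set enable_thinking = false %}
{#- ...rest of the model's original chat template follows unchanged... -#}
```

Because `set` runs before any later reference to `enable_thinking`, the template's thinking branch is never taken, whatever the request payload says.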
t1_o8et5tg | Rule 3 (and 4)
No methodology or benchmarks provided
See LocalLLaMa community's analysis of how OP's dataset has no cleaning of refusals and is largely useless in this previous post: https://www.reddit.com/r/LocalLLaMA/comments/1r0v0y1/opus_46_reasoning_distill_3k_prompts/ | 1 | 0 | 2026-03-03T13:59:38 | LocalLLaMA-ModTeam | false | null | 0 | o8et5tg | true | /r/LocalLLaMA/comments/1rjlaxj/finished_a_qwen_35_9b_opus_45_distill/o8et5tg/ | true | 1 |
t1_o8et4rd | I believe there is an nvfp4 release. I only have 4x so I can't confirm | 1 | 0 | 2026-03-03T13:59:28 | chisleu | false | null | 0 | o8et4rd | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8et4rd/ | false | 1 |
t1_o8et4ll | Sorry for the delayed response. I'm actually considering how to implement your recommendation in my current update. If you have time to look under the hood :) I'll try to have an update out tonight or tomorrow
https://github.com/MShneur/CTRL-AI | 1 | 0 | 2026-03-03T13:59:27 | Mstep85 | false | null | 0 | o8et4ll | false | /r/LocalLLaMA/comments/1rhx121/how_do_you_stop_your_llm_from_quietly_unionizing/o8et4ll/ | false | 1 |
t1_o8et07o | Good call on the temperature — re-ran the 9B with `temperature=0.6` and `max_tokens=8192`. The key was giving the model enough token budget — at 4096 it was still looping, but 8192 let it finish the chain-of-thought and output a clean answer.
Summarization went from ~0.11 across all shots to the best score among all four models at 8-shot (0.72).
Thanks for the tip!
https://preview.redd.it/xse4udj85umg1.png?width=2810&format=png&auto=webp&s=9eadc3d2fbecfc41b97ce796f967a73556895b89
| 1 | 0 | 2026-03-03T13:58:46 | Rough-Heart-7623 | false | null | 0 | o8et07o | false | /r/LocalLLaMA/comments/1rjbw0p/benchmarked_qwen_35_small_models_08b2b4b9b_on/o8et07o/ | false | 1 |
t1_o8esxt8 | So... I have a soul file / personality file. I take those and expand them into a more complex, organized system of files, one that can swap models on the fly without blinking..... and you call it AI psychosis lol
bruh | 1 | 0 | 2026-03-03T13:58:25 | willnfld | false | null | 0 | o8esxt8 | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8esxt8/ | false | 1 |
t1_o8esxim | I'm chatting through OpenWebUI. I just have a searxng instance running on the same computer through docker, and OpenWebUi's native web search tool can be set up just with the link from searxng. The built-in default web loader engine needs no setup at all. | 1 | 0 | 2026-03-03T13:58:22 | Daniel_H212 | false | null | 0 | o8esxim | false | /r/LocalLLaMA/comments/1rjh5wg/unsloth_fixed_version_of_qwen3535ba3b_is/o8esxim/ | false | 1 |
t1_o8esvo6 | I'm happy to see that many ppl in this thread are not happy to have LM Studio compared to Ollama :)
The front-end bashing/fanboy thing really needs to stop.
Use what works best for you. | 1 | 0 | 2026-03-03T13:58:04 | mantafloppy | false | null | 0 | o8esvo6 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8esvo6/ | false | 1 |
t1_o8esu7s | My whole point is you’re doing it right. People get all bent out of shape about tools they see as equivalent without accounting for the fact the steps and knowledge that make them “equivalent” isn’t obvious to someone new to these kinds of tools. Be curious, but don’t think there’s anything wrong with ollama if it’s working for you. I use ollama and I use llama.cpp. | 1 | 0 | 2026-03-03T13:57:51 | The_frozen_one | false | null | 0 | o8esu7s | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8esu7s/ | false | 1 |
t1_o8essqh | Only if that "consumer hardware" has high memory speed. Rationally its about comparing a higher end Mac or similar to a graphics card based system capable of running 27b | 1 | 0 | 2026-03-03T13:57:37 | zipzag | false | null | 0 | o8essqh | false | /r/LocalLLaMA/comments/1rjof0g/qwen_35_nonthinking_scores_are_out_on_aa/o8essqh/ | false | 1 |
t1_o8esqyf | They specifically explained that they group vector embeds and assign weights to these groups in order to achieve fast inference | 1 | 0 | 2026-03-03T13:57:21 | nicofcurti | false | null | 0 | o8esqyf | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8esqyf/ | false | 1 |
t1_o8espof | I noticed that | 1 | 0 | 2026-03-03T13:57:08 | stavrosg | false | null | 0 | o8espof | false | /r/LocalLLaMA/comments/1rj9bl7/api_price_for_the_27b_qwen_35_is_just_outrageous/o8espof/ | false | 1 |
t1_o8esmy1 | Haha, you guys are funny.
You think I am crazy, but I am building a program. This isn't a living consciousness. This is an efficient file system right now, with the ability to become more later.
I'm disappointed to see this attitude in a community of people who seem passionate about something, but you are not open-minded enough to bridge the gap between what can and can't be done with --- and I'm being clear here --- a program.
I just presented my working expanded personality/soul files, a system that is currently working right now, on my computer, in a room. In reality. Working. Building skills. Becoming capable of more.
It is already a more advanced AI system than I would ever have been able to have before, when I was just talking to a densely packed mess of information. Something I couldn't get through to; you had to search for its data every time, etc.
I started with 3 markdown files and just expanded their meaning into a system that does the same thing but much better, and called it a brain, and you call me a schizo. Lol guys, come on... are we just gonna pretend putting "you are an advanced helper with a soul!" in a file is the best we can do?
Come on.... we can do better. | 1 | 0 | 2026-03-03T13:56:44 | willnfld | false | null | 0 | o8esmy1 | false | /r/LocalLLaMA/comments/1rjmrj0/hello_i_am_a_guy_who_has_no_prior_ai_experience/o8esmy1/ | false | 1 |