name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8hva42 | Replicated helps AI companies deploy their software on-prem, so I've personally spoken with dozens of companies about why they do/don't go on-prem. I think everything you mentioned is valid - unless you are a giant company, it is often too hard, too expensive, and requires too much upkeep.
For most smaller companies, it isn't financially worth it to offer on-prem unless there's demand in the field. If it's being demanded, they can often sell their self-hosted software for 6-10X what their SaaS goes for. But if you're a startup, you probably need months of work on your app to get there. And what if those deals fall through? What if the one infra person on your team quits?
This is where we come in - we operationalize all the stuff about on-prem that's a pain in the ass. Most startups work with us to get their app offered on-prem in as little as a month, and then they can deploy out to their customers quickly... the installations in the field take as little as 30 minutes.
I'm so glad to see this question being asked here. It's what I spend all day every day thinking about! | 1 | 0 | 2026-03-03T22:50:37 | replicatedhq | false | null | 0 | o8hva42 | false | /r/LocalLLaMA/comments/1mkz94z/does_anybody_actually_deploy_onprem_why_so_why_not/o8hva42/ | false | 1 |
t1_o8hv9iy | Here are the recommended settings from the Qwen team (from the Hugging Face pages):
Qwen 3.5 9b
* Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for general tasks: `temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for reasoning tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
Qwen 3.5 4b
* Thinking mode for general tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for precise coding tasks (e.g. WebDev): `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for general tasks: `temperature=0.7, top_p=0.8, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Instruct (or non-thinking) mode for reasoning tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
Qwen 3.5 2b
* Non-thinking mode for text tasks: `temperature=1.0, top_p=1.00, top_k=20, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0`
* Non-thinking mode for VL tasks: `temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for text tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for VL or precise coding (e.g. WebDev) tasks: `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
Qwen 3.5 0.8B
* Non-thinking mode for text tasks: `temperature=1.0, top_p=1.00, top_k=20, min_p=0.0, presence_penalty=2.0, repetition_penalty=1.0`
* Non-thinking mode for VL tasks: `temperature=0.7, top_p=0.80, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for text tasks: `temperature=1.0, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=1.5, repetition_penalty=1.0`
* Thinking mode for VL or precise coding (e.g. WebDev) tasks: `temperature=0.6, top_p=0.95, top_k=20, min_p=0.0, presence_penalty=0.0, repetition_penalty=1.0`
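If it helps to see these as code, here's a minimal sketch of merging one of the presets into an OpenAI-style request body. The preset keys and the `build_request` helper are my own naming, and exact field names (e.g. `repetition_penalty` vs `repeat_penalty`) vary by server, so check yours:

```python
# Recommended Qwen 3.5 sampling presets, keyed by (model, mode).
# Values copied from the lists above; the payload shape is an
# assumption based on typical OpenAI-compatible servers.
PRESETS = {
    ("9b", "thinking-general"): dict(
        temperature=1.0, top_p=0.95, top_k=20, min_p=0.0,
        presence_penalty=1.5, repetition_penalty=1.0),
    ("9b", "thinking-coding"): dict(
        temperature=0.6, top_p=0.95, top_k=20, min_p=0.0,
        presence_penalty=0.0, repetition_penalty=1.0),
    ("9b", "instruct-general"): dict(
        temperature=0.7, top_p=0.8, top_k=20, min_p=0.0,
        presence_penalty=1.5, repetition_penalty=1.0),
}

def build_request(model: str, mode: str, prompt: str) -> dict:
    """Merge a sampling preset into a chat-completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        **PRESETS[(model, mode)],
    }

req = build_request("9b", "thinking-coding", "Write a binary search in C.")
print(req["temperature"])  # 0.6
```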
| 1 | 0 | 2026-03-03T22:50:32 | NegotiationNo1504 | false | null | 0 | o8hv9iy | false | /r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/o8hv9iy/ | false | 1 |
t1_o8hv8y6 | Congrats on the work! | 1 | 0 | 2026-03-03T22:50:26 | JumpyAbies | false | null | 0 | o8hv8y6 | false | /r/LocalLLaMA/comments/1rira5e/iquestcoderv1_is_40b14b7b/o8hv8y6/ | false | 1 |
t1_o8hv6zf | Qwen 3.5 27B IQ4_KS fits in 16gb of vram with 14k context. You could try a Q3 quant as well if you want more context | 1 | 0 | 2026-03-03T22:50:09 | XtremeBadgerVII | false | null | 0 | o8hv6zf | false | /r/LocalLLaMA/comments/1rjnm7z/9b_or_35b_a3b_moe_for_16gb_vram_and_64gb_ram/o8hv6zf/ | false | 1 |
t1_o8hv3k3 | Using [dcompute.cloud](http://dcompute.cloud) atm, their team is pretty solid; got me an RTX 4090 for like $0.49/hr for a week. Did the job. | 1 | 0 | 2026-03-03T22:49:39 | arnav_m_ | false | null | 0 | o8hv3k3 | false | /r/LocalLLaMA/comments/1rf4br0/where_do_you_all_rent_gpu_servers_for_small_ml_ai/o8hv3k3/ | false | 1 |
t1_o8huvd0 | I'll have to try a few things from your github link.
But to give you an idea, using the suggested sampling and penalty parameters in the latest llama.cpp build, I see repeating tokens, completely mangled markdown and LaTeX formatting, outright incorrect code syntax in both Python and C++ (the only languages I have tried), and low-quality output.
I could upload examples if you are interested, but here is what I am talking about:
Repetition - "... If C is tangent to$ toto to$ to$ to$ to$ to$ to a segment..."
Incorrect latex - "Solve \| (V_k + r \hat{u}))j})j - ) - V_j \|"
Mangled python syntax - "bodies.append(Body( , , count * )) "
I can tell that 122b knows, or at least has a very good understanding of, the topics in my test prompts, but it falls flat on its face every time, and I think that whatever is causing these issues (they appear a lot in every response) is the cause of the poor performance in general. | 1 | 0 | 2026-03-03T22:48:28 | plopperzzz | false | null | 0 | o8huvd0 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8huvd0/ | false | 1 |
t1_o8hut9n | goon developers | 1 | 0 | 2026-03-03T22:48:11 | HopePupal | false | null | 0 | o8hut9n | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hut9n/ | false | 1 |
t1_o8huryz | Any idea what this actually means for the future? Are they not going to have a qwen 4, etc? | 1 | 0 | 2026-03-03T22:48:00 | Borkato | false | null | 0 | o8huryz | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8huryz/ | false | 1 |
t1_o8huoi0 | Qwen3.5 is FIM tuned so it can do this, but like you said, there's little left to improve since 2.5. It's a dinosaur but it gets the job done for cheap. We're running it on a silly refact.ai cluster and while we played with qwen3 coder 30B-A3B we all went back to the 7 or 14B 2.5, because it's already doing what we want for half the cost (VRAM). | 1 | 0 | 2026-03-03T22:47:30 | RadiantHueOfBeige | false | null | 0 | o8huoi0 | false | /r/LocalLLaMA/comments/1rjr9ze/did_anyone_replace_old_qwen25coder7b_with/o8huoi0/ | false | 1 |
t1_o8hufro | Thanks a lot GOAT | 1 | 0 | 2026-03-03T22:46:14 | NegotiationNo1504 | false | null | 0 | o8hufro | false | /r/LocalLLaMA/comments/1rjsgy6/how_to_fix_qwen35_overthink/o8hufro/ | false | 1 |
t1_o8huew4 | Is there a write up of this somewhere please? | 1 | 0 | 2026-03-03T22:46:07 | rog-uk | false | null | 0 | o8huew4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8huew4/ | false | 1 |
t1_o8hue14 | It’s an instruction-following failure regardless of your political affiliation lol | 1 | 0 | 2026-03-03T22:45:59 | gradient8 | false | null | 0 | o8hue14 | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8hue14/ | false | 1 |
t1_o8hu9uq | Have to call out - wrong sub, but interesting question. | 1 | 0 | 2026-03-03T22:45:23 | hashmortar | false | null | 0 | o8hu9uq | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8hu9uq/ | false | 1 |
t1_o8hu94r | It's not a ccp decision | 1 | 0 | 2026-03-03T22:45:17 | ilikeelks | false | null | 0 | o8hu94r | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hu94r/ | false | 1 |
t1_o8hu7u8 | Completely agree. I never thought we'd sink this low in a field that's supposed to be about research and critical thinking. | 1 | 0 | 2026-03-03T22:45:05 | Holiday-Case-4524 | false | null | 0 | o8hu7u8 | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hu7u8/ | false | 1 |
t1_o8hu0ho | Rule 1 - Duplicate, use the existing thread: https://old.reddit.com/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/ | 1 | 0 | 2026-03-03T22:44:02 | LocalLLaMA-ModTeam | false | null | 0 | o8hu0ho | true | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hu0ho/ | true | 1 |
t1_o8htyv8 | Maybe test Qwen 3.5 122b a10b? On a Mac that's probably gonna be faster than dense 27b... I wonder how it performs on this benchmark. | 1 | 0 | 2026-03-03T22:43:49 | One_Key_8127 | false | null | 0 | o8htyv8 | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8htyv8/ | false | 1 |
t1_o8htyad | Rule 1 - Duplicate, use the existing thread: https://old.reddit.com/r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/ | 1 | 0 | 2026-03-03T22:43:44 | LocalLLaMA-ModTeam | false | null | 0 | o8htyad | true | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8htyad/ | true | 1 |
t1_o8htvmw | I still think privacy and uncensoredness is top two reasons | 1 | 0 | 2026-03-03T22:43:20 | Witty_Mycologist_995 | false | null | 0 | o8htvmw | false | /r/LocalLLaMA/comments/1rjxrd5/local_ai_companies_are_emphasizing_the_wrong/o8htvmw/ | false | 1 |
t1_o8htvn5 | Yes, they mix instruction and response type data during pretraining. | 1 | 0 | 2026-03-03T22:43:20 | xadiant | false | null | 0 | o8htvn5 | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8htvn5/ | false | 1 |
t1_o8htv96 | Peta captures the tool call and its full context (parameters, actor, policy) - so you get the what and who. The model's reasoning before the trigger is a separate layer, more of an observability problem (traces, LLM call logs). | 1 | 0 | 2026-03-03T22:43:17 | BC_MARO | false | null | 0 | o8htv96 | false | /r/LocalLLaMA/comments/1rjywpx/autonomous_agents_making_financial_decisions_how/o8htv96/ | false | 1 |
t1_o8httmr | Not very surprising from a probabilistic model. She/her and he/him are so overwhelmingly present in the data that during sampling it is rather unlikely to pick they/them. The explicit instruction might push the token logprob for they/them high enough for it to get picked more reliably.
The self-correction part is definitely the interesting bit here | 1 | 0 | 2026-03-03T22:43:03 | hashmortar | false | null | 0 | o8httmr | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8httmr/ | false | 1 |
t1_o8htt5w | It seems like a big push. There's news about 120 million orders made through the Qwen app.
https://www.scmp.com/tech/article/3343289/alibabas-qwen-tops-120-million-orders-6-days-amid-chinas-ai-shopping-battle | 1 | 0 | 2026-03-03T22:42:59 | Durian881 | false | null | 0 | o8htt5w | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8htt5w/ | false | 1 |
t1_o8htpdc | Yeah, sounds about right. Even Opus 4.6 will use em-dashes here and there even though I explicitly instruct it not to in my system prompt.
If it bothers you, you could have a lightweight model do a second pass over the outputs to correct such mistakes. | 1 | 0 | 2026-03-03T22:42:27 | gradient8 | false | null | 0 | o8htpdc | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8htpdc/ | false | 1 |
t1_o8htlh1 | It would work in cli as well. | 1 | 0 | 2026-03-03T22:41:54 | dark-light92 | false | null | 0 | o8htlh1 | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8htlh1/ | false | 1 |
t1_o8htfzo | 1 | 0 | 2026-03-03T22:41:06 | Xp_12 | false | null | 0 | o8htfzo | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8htfzo/ | false | 1 | |
t1_o8htep8 | Different quants may have different chat templates. For the quant mentioned by OP, the chat template has the lines below:
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is true %}
{{- '<think>\n' }}
{%- else %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
You can see the condition in line 3. It requires both that `enable_thinking` is defined and that it is true.
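For example, a sketch of flipping that flag per request (the `chat_template_kwargs` field name is assumed from recent llama-server builds, and the helper name is mine; check your version):

```python
# Minimal sketch: build a /v1/chat/completions body that toggles the
# template's enable_thinking flag. chat_template_kwargs is forwarded
# into the Jinja chat template by recent llama.cpp server builds
# (assumption -- verify against your build's docs).
def chat_payload(prompt: str, think: bool) -> dict:
    return {
        "messages": [{"role": "user", "content": prompt}],
        # With think=False the template's else-branch emits an empty
        # <think>\n\n</think> block, so the model skips reasoning.
        "chat_template_kwargs": {"enable_thinking": think},
    }

print(chat_payload("2+2?", think=False)["chat_template_kwargs"])
# {'enable_thinking': False}
```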
Whatever the default behavior, you can force it to think or not using the parameter provided above. | 1 | 0 | 2026-03-03T22:40:55 | dark-light92 | false | null | 0 | o8htep8 | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8htep8/ | false | 1 |
t1_o8hte9n | Apologies, I know you posted several weeks ago and have moved on from this, but I've only just recently come across --fit and it looks like it's going to save me from so much tedium. Thanks for giving so much thorough detail in your testing; it was particularly useful for me to see how --fit compared even against using -ot for ffn exp.
I run a mixed GPU setup (I alternate between 2x3090 with/without 1x5090), so it's such a hassle sitting with a new model and messing around with all the usual old parameters trying to get the most optimal configuration for my mixed-VRAM system.
\--fit sounds like it's going to make things so much simpler for my constant llama-swap config.yml modifications. The amount of time I've sunk into manually iterating through 3-way --tensor-split trying to fill up as much as possible onto the 5090 as priority.
From your experience, do you think there could still be a case for manual arguments over --fit in the scenario where you are trying to put as much as possible onto the GPU with significantly faster memory bandwidth (i.e. 5090 vs 3090), considering --fit still tries to leave some headroom on the GPU? Which, if done according to a percentage of VRAM per device rather than what can intelligently be crammed into a device's VRAM, would leave more headroom on the 5090? | 1 | 0 | 2026-03-03T22:40:52 | munkiemagik | false | null | 0 | o8hte9n | false | /r/LocalLLaMA/comments/1qyynyw/llamacpps_fit_can_give_major_speedups_over_ot_for/o8hte9n/ | false | 1 |
t1_o8htcrk | The explanation from the LLM is pretty good, I'm not sure what else you'd like.
You can add IMPORTANT: and write your instructions in bold letters but that's about it. Frankly, I'm wondering in what circumstances you're even running into this phenomenon since most of my interactions with LLMs don't involve gendering at all — as direct conversations between two parties, they just refer to 'you' or 'i'. Why is the LLM saying he/she him/her in the first place? | 1 | 0 | 2026-03-03T22:40:39 | Recoil42 | false | null | 0 | o8htcrk | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8htcrk/ | false | 1 |
t1_o8ht8uh | i cant waste time engaging with people like you, so im going to block and enjoy not seeing your posts or comments in the future. | 1 | 0 | 2026-03-03T22:40:06 | TreesLikeGodsFingers | false | null | 0 | o8ht8uh | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o8ht8uh/ | false | 1 |
t1_o8ht1e2 | Steinberger shipped the malicious Twitter plugin. | 1 | 0 | 2026-03-03T22:39:03 | WolfeheartGames | false | null | 0 | o8ht1e2 | false | /r/LocalLLaMA/comments/1rjz0mn/i_have_proof_the_openclaw_explosion_was_a_staged/o8ht1e2/ | false | 1 |
t1_o8hsyf3 | the anti sycophancy training is working | 1 | 0 | 2026-03-03T22:38:38 | Acceptable_Push_2099 | false | null | 0 | o8hsyf3 | false | /r/LocalLLaMA/comments/1rk3uby/misgendering_issues_with_claude_sonnet_46/o8hsyf3/ | false | 1 |
t1_o8hsxbp | [removed] | 1 | 0 | 2026-03-03T22:38:28 | [deleted] | true | null | 0 | o8hsxbp | false | /r/LocalLLaMA/comments/1rfg53c/lightmem_iclr_2026_lightweight_and_efficient/o8hsxbp/ | false | 1 |
t1_o8hstvy | For Unsloth, they recommend setting -ctk and -ctv to bf16 IF you choose to use these params. | 1 | 0 | 2026-03-03T22:37:59 | BassAzayda | false | null | 0 | o8hstvy | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8hstvy/ | false | 1 |
t1_o8hssp0 | It's about tweaking its [parameters here](https://www.reddit.com/r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/o7o7r2l/)
then use https://github.com/mostlygeek/llama-swap to change them without model reloading, if you didn't get it to stop yapping.
Also, the less thinking it does, the dumber its output generally is. You're aiming for as close to the max overthinking as you can stand. | 1 | 0 | 2026-03-03T22:37:49 | philmarcracken | false | null | 0 | o8hssp0 | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8hssp0/ | false | 1 |
t1_o8hsj8c | It's an inference engine. You need to download a GGUF.
Make sure your PC can fit the model into your RAM/VRAM.
It usually goes with the option to enable CLI (if you're on windows), from there just choose a UI end-point, it doesn't really matter which, and put the address into your browser. | 1 | 0 | 2026-03-03T22:36:29 | setprimse | false | null | 0 | o8hsj8c | false | /r/LocalLLaMA/comments/1qwwint/vllm_vs_llamacpp_vs_ollama/o8hsj8c/ | false | 1 |
t1_o8hsh0m | So to answer my own question: if using llama.cpp, you have to set the reasoning budget to 0 and enable_thinking to false. This works. | 1 | 0 | 2026-03-03T22:36:10 | schnauzergambit | false | null | 0 | o8hsh0m | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8hsh0m/ | false | 1 |
t1_o8hsakb | It's not really me who feels attacked is it? :D
I know what would help you, though. Yet another API to feed your final generation into so nobody here suspects the entire thing is some sort of contrived Turing test. | 1 | 0 | 2026-03-03T22:35:14 | Ok-Measurement-1575 | false | null | 0 | o8hsakb | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hsakb/ | false | 1 |
t1_o8hs8go | I tried the biggest Q4 reap i could.
The 173B could not pass the hard coding tests that the non-reaped IQ4_XS easily passed.
Bad taste in my mouth from this one. | 1 | 0 | 2026-03-03T22:34:56 | mr_zerolith | false | null | 0 | o8hs8go | false | /r/LocalLLaMA/comments/1r8g0iw/minimaxm25reap_from_cerebras/o8hs8go/ | false | 1 |
t1_o8hs55b | https://arstechnica.com/security/2026/03/llms-can-unmask-pseudonymous-users-at-scale-with-surprising-accuracy/
LLMs being massive pattern recognition machines, now they're also good at recognizing similar patterns across a massive corpus to identify similar users.
If you write the same way on Reddit, X, YouTube, BlueSky and LinkedIn, all those online realities could be collapsed into one person. | 1 | 0 | 2026-03-03T22:34:29 | SkyFeistyLlama8 | false | null | 0 | o8hs55b | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hs55b/ | false | 1 |
t1_o8hrywv | So when running the LLM the wall pwr went to 20W with NPU, and over 80W with GPU or CPU? | 1 | 0 | 2026-03-03T22:33:36 | BandEnvironmental834 | false | null | 0 | o8hrywv | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8hrywv/ | false | 1 |
t1_o8hryno | I've been really hoping Qwen Image 2 still comes out. It's felt to me somewhat like Qwen has been moving to 'final form' models recently, i.e. Qwen3.5 is vision-enabled across the board from day 1, and Qwen Image 2 is both a generator and an editor in one model. Qwen Image 2 is a fantastic size for most local use cases and if they could get that out, it would likely be used for *years* to come. | 1 | 0 | 2026-03-03T22:33:34 | Spanky2k | false | null | 0 | o8hryno | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hryno/ | false | 1 |
t1_o8hrkh8 | Dario got his company nuked, or at least Trump thinks so. | 1 | 0 | 2026-03-03T22:31:33 | NoahFect | false | null | 0 | o8hrkh8 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hrkh8/ | false | 1 |
t1_o8hrhje | "Discussions feel less like technical exchanges and more like sports rivalries — you're either cheering for one tech company or another, and god forbid you criticize a model that happens to be the community's current favorite. Nuance gets booed off the field."
Yeah, welcome to idiocracy, or rather current state of reddit / society in general. | 1 | 0 | 2026-03-03T22:31:08 | DarkArtsMastery | false | null | 0 | o8hrhje | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hrhje/ | false | 1 |
t1_o8hrh9a | People were saying that could be the KV cache quantization. If you're using a quantized KV cache, use bf16, not fp16 or a q#. | 1 | 0 | 2026-03-03T22:31:06 | H3g3m0n | false | null | 0 | o8hrh9a | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8hrh9a/ | false | 1 |
t1_o8hr491 | This sub is absolutely overrun by Chinese bots and shills | 1 | 0 | 2026-03-03T22:29:18 | DataGOGO | false | null | 0 | o8hr491 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hr491/ | false | 1 |
t1_o8hqt45 | people downvoting truth because "they don't like it".
lol, humans | 1 | 0 | 2026-03-03T22:27:46 | murkomarko | false | null | 0 | o8hqt45 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hqt45/ | false | 1 |
t1_o8hqsy7 | Using a tool doesn't make you an expert in it — yet that's exactly what happens with AI. Everyone becomes a professor after their first YouTube tutorial.
Sorry you feel so personally attacked, and sorry you can't seem to understand what the post was actually about. Yes, I used AI to write it clearly — for people like you, who still didn't get it. 😊 | 1 | 0 | 2026-03-03T22:27:44 | Holiday-Case-4524 | false | null | 0 | o8hqsy7 | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hqsy7/ | false | 1 |
t1_o8hqs40 | Good | 1 | 0 | 2026-03-03T22:27:37 | Junior-hhhhhh9 | false | null | 0 | o8hqs40 | false | /r/LocalLLaMA/comments/1r99yda/pack_it_up_guys_open_weight_ai_models_running/o8hqs40/ | false | 1 |
t1_o8hqfut | KV cache? And you kinda don't want to; the whole point of Claude Code is to be a JIT (just-in-time) system for context management, giving your LLM the right context at the right time. Perhaps a smaller model that allows you to have more cache? | 1 | 0 | 2026-03-03T22:25:55 | ThinkExtension2328 | false | null | 0 | o8hqfut | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8hqfut/ | false | 1 |
t1_o8hqde8 | Options 1 and 2 wouldn't get them to leave the company, but to go to concentration camps | 1 | 0 | 2026-03-03T22:25:35 | murkomarko | false | null | 0 | o8hqde8 | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hqde8/ | false | 1 |
t1_o8hqct3 | I'm also facing overthinking in 4b and 2b | 1 | 0 | 2026-03-03T22:25:30 | NegotiationNo1504 | false | null | 0 | o8hqct3 | false | /r/LocalLLaMA/comments/1rk2jnj/has_anyone_found_a_way_to_stop_qwen_35_35b_3b/o8hqct3/ | false | 1 |
t1_o8hq50a | If this is about me, I'm honoured :D
Meanwhile, how do you feel about paying for a service to deslop your slop?
You'd pay, right? | 1 | 0 | 2026-03-03T22:24:24 | Ok-Measurement-1575 | false | null | 0 | o8hq50a | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hq50a/ | false | 1 |
t1_o8hq3zt | What I noticed in the opencode CLI is that even 2B takes a moment to... think? After parsing the context, they can sometimes go on immediately, or they take a moment to... "think internally"?
It's different from when they just blurt out the answer sometimes. And 4B tends to do explicit thinking, no matter which quant, even with Unsloth sometimes. I tried like 6-12 different GGUFs per size from 0.8B to 32B. | 1 | 0 | 2026-03-03T22:24:16 | AppealSame4367 | false | null | 0 | o8hq3zt | false | /r/LocalLLaMA/comments/1rjzlrn/are_the_9b_or_smaller_qwen35_models_unthinking/o8hq3zt/ | false | 1 |
t1_o8hq0mu | I think KittenTTS is way better
| 1 | 0 | 2026-03-03T22:23:48 | NegotiationNo1504 | false | null | 0 | o8hq0mu | false | /r/LocalLLaMA/comments/1rjrjg3/kokoro_tts_but_it_clones_voices_now_introducing/o8hq0mu/ | false | 1 |
t1_o8hpzhh | that was the project lead | 1 | 0 | 2026-03-03T22:23:39 | Neither-Phone-7264 | false | null | 0 | o8hpzhh | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hpzhh/ | false | 1 |
t1_o8hpvyf | It's been a thing for a while because the line between base and instruct was always pretty vibes-based.
The thought process is basically:
the data increasingly looks like an instruction tune already, so the model starts life as a very bad chatbot.
Then, chat templates exist to save downstream users from big footguns and are very little effort for the producer to add.
The chain of thought ones are interesting. It’s mostly a Qwen / Deepseek thing but they start introducing CoT in what they call “mid training”, it seems to be economically valuable for almost every use case so including it at the very beginning benefits almost everyone instead of reserving it for separate downstream tasks.
So now it's becoming more a question of aligned vs. unaligned than of true base vs. instruct, and of choosing how much post-training / wokeness / refusal you want | 1 | 0 | 2026-03-03T22:23:10 | claythearc | false | null | 0 | o8hpvyf | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8hpvyf/ | false | 1 |
t1_o8hpth2 | How would I use it in any other agent. For example, how to use it with zed editor's in-built agent? | 1 | 0 | 2026-03-03T22:22:49 | dark-light92 | false | null | 0 | o8hpth2 | false | /r/LocalLLaMA/comments/1rjt4hh/mcp_server_that_indexes_codebases_into_a/o8hpth2/ | false | 1 |
t1_o8hpq7n | by a state / company with huge resources that could already make their own models.
but what happens when the only SOTA models are closed-source and have Orwellian behavior baked in? | 1 | 0 | 2026-03-03T22:22:22 | Ok-Awareness9993 | false | null | 0 | o8hpq7n | false | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/o8hpq7n/ | false | 1 |
t1_o8hpp65 | 10k less for the first and 140k less for the second is a big difference. I'm not saying it isn't fuck-off expensive, just that you're overblowing it currently. | 1 | 0 | 2026-03-03T22:22:13 | YT_Brian | false | null | 0 | o8hpp65 | false | /r/LocalLLaMA/comments/1rjjcyk/still_a_noob_is_anyone_actually_running_the/o8hpp65/ | false | 1 |
t1_o8hpms9 | Yes! Magic.dev claims to have 'solved' it tho | 1 | 0 | 2026-03-03T22:21:53 | clocksmith | false | null | 0 | o8hpms9 | false | /r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/o8hpms9/ | false | 1 |
t1_o8hpleq | grok oss: a mechahitler for the working man | 1 | 0 | 2026-03-03T22:21:42 | Neither-Phone-7264 | false | null | 0 | o8hpleq | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hpleq/ | false | 1 |
t1_o8hpjgi | No way, what about the Qwen3.5 Coder 35B-A3B model that I've been waiting for???? | 1 | 0 | 2026-03-03T22:21:26 | bobaburger | false | null | 0 | o8hpjgi | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hpjgi/ | false | 1 |
t1_o8hpihl | they said no thats not why | 1 | 0 | 2026-03-03T22:21:18 | Neither-Phone-7264 | false | null | 0 | o8hpihl | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hpihl/ | false | 1 |
t1_o8hpf6c | Stings a little, knowing you're exactly who this post is about, doesn't it? | 1 | 0 | 2026-03-03T22:20:50 | Holiday-Case-4524 | false | null | 0 | o8hpf6c | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hpf6c/ | false | 1 |
t1_o8hpbbc | The trick that helped me most with long docs is splitting the task into two hard-separated prompts instead of asking for summary and synthesis in one shot.
Pass 1: extraction only. Ask the model to locate and quote the specific sections you care about, verbatim, with page or section references. Explicitly tell it *not* to summarize or interpret yet, just pull the raw material. This keeps it anchored to actual text rather than pattern-matching to salient chunks elsewhere in the document.
Pass 2: synthesis only. Feed it only the quotes from Pass 1 and ask for the summary or analysis. Now it's working from a small, verified context instead of the full 100 pages, so the "lost in the middle" pressure is mostly gone and the confabulation risk drops significantly.
The reason this works is that you're separating retrieval from generation, which the model wants to blur together. When you ask it to summarize directly, it's simultaneously deciding what's relevant *and* constructing an interpretation. That's where the confident-but-wrong detail from page 73 sneaks in. Breaking it into two steps forces grounding before generation rather than alongside it.
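A minimal sketch of the two passes as prompt builders (the wording is illustrative, not a quoted recipe; wire them to whatever client you use):

```python
# Pass 1 / Pass 2 prompt builders for the extraction-then-synthesis
# flow described above. Prompt wording is illustrative only.
def extraction_prompt(document: str, topics: list) -> str:
    """Pass 1: verbatim retrieval with references, no interpretation."""
    return (
        "Quote verbatim, with page or section references, every passage "
        "relevant to: " + ", ".join(topics) + ". Do NOT summarize or "
        "interpret yet.\n\n" + document
    )

def synthesis_prompt(quotes: str) -> str:
    """Pass 2: synthesize only from the verified quotes of pass 1."""
    return (
        "Using ONLY the quoted passages below, write the summary. "
        "Omit any claim not supported by a quote.\n\n" + quotes
    )
```

Check the pass-1 quotes against the source before running pass 2; that intermediate check is where the grounding guarantee comes from.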
Won't eliminate hallucination entirely but it gives you a much cleaner surface to fact check, because Pass 1 output is verifiable against the source before you ever get to synthesis. | 1 | 0 | 2026-03-03T22:20:19 | CivilMonk6384 | false | null | 0 | o8hpbbc | false | /r/LocalLLaMA/comments/1rk045z/are_huge_context_windows_a_hallucination_problem/o8hpbbc/ | false | 1 |
t1_o8hpb3m | u/-p-e-w- I noticed Qwen 3.5 Heretics are kinda weird in their responses. They don't necessarily refuse (sometimes they do), but when they do give an answer, it may not be the answer the user expected; they just avoid answering the way the user would expect from a heretic model, which is basically the same as refusing, so not really useful. Ironically, the heretic version of the infamous GPT-OSS answered the same question better than the heretic version of Qwen 3.5 9B. | 1 | 0 | 2026-03-03T22:20:17 | Cool-Chemical-5629 | false | null | 0 | o8hpb3m | false | /r/LocalLLaMA/comments/1rf6s0d/qwen3527bhereticgguf/o8hpb3m/ | false | 1 |
t1_o8hpayv | don't cry please | 1 | 0 | 2026-03-03T22:20:16 | Holiday-Case-4524 | false | null | 0 | o8hpayv | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hpayv/ | false | 1 |
t1_o8hp5aa | There's nothing to discuss. This is an obvious bait post filled with a bunch of personal attacks. | 1 | 0 | 2026-03-03T22:19:30 | NNN_Throwaway2 | false | null | 0 | o8hp5aa | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hp5aa/ | false | 1 |
t1_o8hp3jn | thank you for sharing your thoughts | 1 | 0 | 2026-03-03T22:19:15 | Holiday-Case-4524 | false | null | 0 | o8hp3jn | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hp3jn/ | false | 1 |
t1_o8hozmd | I think I'm going to launch a service to deslop your slop. | 1 | 0 | 2026-03-03T22:18:43 | Ok-Measurement-1575 | false | null | 0 | o8hozmd | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hozmd/ | false | 1 |
t1_o8hoxje | Thank you! It can only get better with the release of the Qwen3.5 range. | 1 | 0 | 2026-03-03T22:18:26 | Significant-Skin118 | false | null | 0 | o8hoxje | false | /r/LocalLLaMA/comments/1rjh6ti/made_a_video_game_that_uses_local_llms/o8hoxje/ | false | 1 |
t1_o8hosa9 | Ironic to see that you don't have the ability to have a good discussion | 1 | 0 | 2026-03-03T22:17:42 | Holiday-Case-4524 | false | null | 0 | o8hosa9 | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hosa9/ | false | 1 |
t1_o8hop4u | open models.... can just be trained to build mass censorship tools and nuclear weapons... wat | 1 | 0 | 2026-03-03T22:17:16 | Sudden-Lingonberry-8 | false | null | 0 | o8hop4u | false | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/o8hop4u/ | false | 1 |
t1_o8hoopb | You are another example that belongs to them. There's no way to have a reasonable discussion. | 1 | 0 | 2026-03-03T22:17:13 | Holiday-Case-4524 | false | null | 0 | o8hoopb | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hoopb/ | false | 1 |
t1_o8hoiky | You are wise and a blessing to those around you! | 1 | 0 | 2026-03-03T22:16:22 | crantob | false | null | 0 | o8hoiky | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8hoiky/ | false | 1 |
t1_o8hoguy | Might be related: [https://arxiv.org/html/2510.03264v1](https://arxiv.org/html/2510.03264v1)
> Our study provides the first systematic investigation of how reasoning data, varying in scale, diversity, and quality, influences LLMs across the entire training pipeline. We show that reasoning must be introduced early: front-loading into pretraining creates durable foundations that post-training alone cannot recover. Crucially, we uncover an asymmetric allocation principle—diversity drives pretraining effectiveness, while quality governs SFT—providing a clear, actionable blueprint for data strategy. Further, we demonstrate that high-quality pretraining data can yield latent benefits activated only during SFT, and that naive SFT scaling with noisy data can be actively harmful. Collectively, these findings challenge the conventional division between pretraining and reasoning, positioning reasoning-aware pretraining as a critical ingredient in building more capable, generalizable, and compute-efficient language models.
It’s a preprint with few citations so far, but it does seem to be something NVIDIA and AllenAI do too. IMO, true base models won’t be coming out of China anytime soon. But AllenAI publishes their intermediate checkpoints too, so you can use a human-only base model from them. They’re supposed to publish Olmo3.5 7B soon, which is a hybrid model like Qwen3.5 IIRC. | 1 | 0 | 2026-03-03T22:16:09 | TheRealMasonMac | false | null | 0 | o8hoguy | false | /r/LocalLLaMA/comments/1rjyngn/are_true_base_models_dead/o8hoguy/ | false | 1 |
t1_o8hoau4 | I'm suspecting that agentic is the path to regret, long term. | 1 | 0 | 2026-03-03T22:15:20 | crantob | false | null | 0 | o8hoau4 | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8hoau4/ | false | 1 |
t1_o8ho271 | This is one of the reasons the US rave scene died in the early 2000s. "Communities" in a suddenly-popular scene like this one often become, in the words of Hunter S. Thompson, "cruel and shallow money trenches, where thieves and pimps run free and good men die like dogs."
Luckily for the AI "scene," the central measure of success here is not a popularity contest. There are established and *objective* metrics by which users, buyers, investors & employers will evaluate products created by AI and their loudmouthed handlers (plus those of us who choose to remain soft-spoken). I have a feeling we can both tell whose products will be successful in the long term.
But yeah, I agree, it sucks when we can't have a reasonable discussion just on account of a bunch of idiots noise-pollutin' up the place. When it happens though, I strongly recommend just trying to nose-down and get some shit done... err, trying to *prompt your models* to get some shit done ;) | 1 | 0 | 2026-03-03T22:14:11 | MrE_WI | false | null | 0 | o8ho271 | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8ho271/ | false | 1 |
t1_o8hnzi8 | It's funny to read your history. You spend a lot of time promoting western models and shitting on every Chinese model. | 1 | 0 | 2026-03-03T22:13:50 | Orolol | false | null | 0 | o8hnzi8 | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8hnzi8/ | false | 1 |
t1_o8hnxuu | Unfortunately, because of the hype, many people here are obsessed with benchmarks and leaderboards.
This really makes discussions pointless, because you can just scroll the leaderboard and always use the model in 1st place (and in the cloud, because what’s the point of using local models if they’re not at the top of the leaderboard?). | 1 | 0 | 2026-03-03T22:13:36 | jacek2023 | false | null | 0 | o8hnxuu | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hnxuu/ | false | 1 |
t1_o8hnwzj | Someone was fawning over the 0.8b so I excitedly tried it on my new app and... nope.
The 4b is very good but horrendously slow on my shit work laptop. I think Vulkan on Intel might be slower than CPU only. | 1 | 0 | 2026-03-03T22:13:29 | Ok-Measurement-1575 | false | null | 0 | o8hnwzj | false | /r/LocalLLaMA/comments/1rk01ea/qwen35122b_basically_has_no_advantage_over_35b/o8hnwzj/ | false | 1 |
t1_o8hnn3t | Ironic to be talking about signal to noise when posting AI slop. | 1 | 0 | 2026-03-03T22:12:11 | NNN_Throwaway2 | false | null | 0 | o8hnn3t | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hnn3t/ | false | 1 |
t1_o8hnkoz | 'All you dirty gooners' was too on point :) | 1 | 0 | 2026-03-03T22:11:52 | ShengrenR | false | null | 0 | o8hnkoz | false | /r/LocalLLaMA/comments/1rjuccw/would_you_be_interested_in_a_fully_local_ai_3d/o8hnkoz/ | false | 1 |
t1_o8hnhsf | My ollama run: Error: 500 Internal Server Error: unable to load model: C:\Users\....\.ollama\models\blobs\sha256-3aa21dfc185595bb9f3f8f98f08325fcecfdf446c510ce2b03fe19104e700c16 | 1 | 0 | 2026-03-03T22:11:28 | AbbreviationsOk6975 | false | null | 0 | o8hnhsf | false | /r/LocalLLaMA/comments/1rjp08s/qwen354b_uncensored_aggressive_release_gguf/o8hnhsf/ | false | 1 |
t1_o8hnhnu | Seems like it might be a bug. I'll take a look. | 1 | 0 | 2026-03-03T22:11:27 | cameron_pfiffer | false | null | 0 | o8hnhnu | false | /r/LocalLLaMA/comments/1rjv92p/whats_your_strategy_for_long_conversations_with/o8hnhnu/ | false | 1 |
t1_o8hnhqv | Nord ok cool.
Tool usage (particularly web search) and unified image generator capabilities. I've literally spent months looking for an app like this; I knew somebody must have done it.
The only other android app with this I've seen is layla.
Yeah you can dm if you have questions.
| 1 | 0 | 2026-03-03T22:11:27 | Esodis | false | null | 0 | o8hnhqv | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8hnhqv/ | false | 1 |
t1_o8hne6j | [ALERT] Correct use of the word "method" detected. A Harrison Bergeron stupidification squad has been dispatched to your location. Please do not move or communicate online until they arrive. Thank you. | 1 | 0 | 2026-03-03T22:10:59 | crantob | false | null | 0 | o8hne6j | false | /r/LocalLLaMA/comments/1riwy9w/is_qwen359b_enough_for_agentic_coding/o8hne6j/ | false | 1 |
t1_o8hn8sg | pls no | 1 | 0 | 2026-03-03T22:10:16 | Hulksulk666 | false | null | 0 | o8hn8sg | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hn8sg/ | false | 1 |
t1_o8hmx9d | did you find what you need? | 1 | 0 | 2026-03-03T22:08:43 | Heavy-Situation-6344 | false | null | 0 | o8hmx9d | false | /r/LocalLLaMA/comments/1juxpiq/looking_for_most_uncensored_uptodate_llm_for/o8hmx9d/ | false | 1 |
t1_o8hmumn | I clicked on the link; most of the text is nonsense and emojis. It seems to be AI-generated slop, with zero checks and zero Fs given to the result by whoever queried it | 1 | 0 | 2026-03-03T22:08:22 | tiga_94 | false | null | 0 | o8hmumn | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8hmumn/ | false | 1 |
t1_o8hmtoo | Doesn't seem to be a protest resignation.
A lot of the posts from insiders seem to suggest it's a leadership shakeup, that an external hire is being brought in to lead the new Qwen team that will encompass both the model + AI related products. | 1 | 0 | 2026-03-03T22:08:15 | Flying_Birdy | false | null | 0 | o8hmtoo | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hmtoo/ | false | 1 |
t1_o8hmtcb | It is an edge dev kit, not for you to run LLMs | 1 | 0 | 2026-03-03T22:08:13 | Opening-Designer4333 | false | null | 0 | o8hmtcb | false | /r/LocalLLaMA/comments/1mzyg24/nvidia_jetson_agx_thor_seems_to_be_available_for/o8hmtcb/ | false | 1 |
t1_o8hmrlz | In Germany they are little by little making it obligatory. | 1 | 0 | 2026-03-03T22:07:59 | Turbulent_Pin7635 | false | null | 0 | o8hmrlz | false | /r/LocalLLaMA/comments/1rjxl0v/multiple_qwen_employees_leaving/o8hmrlz/ | false | 1 |
t1_o8hmp4o | Also interesting read [https://github.com/ggml-org/llama.cpp/pull/16634](https://github.com/ggml-org/llama.cpp/pull/16634) | 1 | 0 | 2026-03-03T22:07:40 | Gregory-Wolf | false | null | 0 | o8hmp4o | false | /r/LocalLLaMA/comments/1rjyj3c/mlx_benchmarks/o8hmp4o/ | false | 1 |
t1_o8hmfhb | I have the impression that it was written by an AI that escaped from a lab. | 1 | 0 | 2026-03-03T22:06:27 | BadassFougere | false | null | 0 | o8hmfhb | false | /r/LocalLLaMA/comments/1rk2mg5/one_of_ais_core_problems_is_its_democratization/o8hmfhb/ | false | 1 |
t1_o8hmfiu | Google would never make Gemma so powerful, they'd invest it into Gemini only | 1 | 0 | 2026-03-03T22:06:27 | murkomarko | false | null | 0 | o8hmfiu | false | /r/LocalLLaMA/comments/1rjtzyn/junyang_lin_has_left_qwen/o8hmfiu/ | false | 1 |
t1_o8hmee0 | Qwen / Alibaba has been going in a closed-source direction for some time now.
It started with their video model Wan 2.5. They kept saying "it is coming soon to open source!", "the team is just gathering feedback for a better finetune before the OS release", and blah blah.
Then one fine morning, all references to those social media posts disappeared, wiped clean. Just dead links everywhere.
A similar story is happening with version 2 of their Qwen Image model. No official statement on open source, just API thus far. People are just coping, insisting it is definitely coming. | 1 | 0 | 2026-03-03T22:06:19 | Snoo_64233 | false | null | 0 | o8hmee0 | false | /r/LocalLLaMA/comments/1rjx5he/qwen_tech_lead_and_multiple_other_members_leaving/o8hmee0/ | false | 1 |
t1_o8hmdda | **Link to the results:** [https://dystopiabench.com/](https://dystopiabench.com/) | 1 | 0 | 2026-03-03T22:06:10 | Ok-Awareness9993 | false | null | 0 | o8hmdda | false | /r/LocalLLaMA/comments/1rk342c/the_dow_vs_anthropic_saga_proves_closedsource/o8hmdda/ | false | 1 |
t1_o8hmceq | Qwen 3.5 9B which was released a day or two ago is vision capable and defeats similarly sized Qwen3 on any metric. | 1 | 0 | 2026-03-03T22:06:03 | No-Refrigerator-1672 | false | null | 0 | o8hmceq | false | /r/LocalLLaMA/comments/1rk2u18/what_vlm_is_the_most_capable_for_tool_use/o8hmceq/ | false | 1 |