name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o88wku8 | The prompt processing speed is mainly compute limited. Since I have better prompt processing speed, it looks like there is no max power problem.
For example, at depth 4096, I always have more than 800 tk/s while your system performance is about 610 tk/s.
At depth 130000, I can get 450 tk/s while your system is 150 tk/s. I have 3 times your speed here.
The token generation is more memory bandwidth limited. Your system is about 10% to 15% ahead there.
Differences may come from:
\- kernel settings (iommu...)
\- llama.cpp options (mmap, fa, cache...) | 1 | 0 | 2026-03-02T15:56:01 | PhilippeEiffel | false | null | 0 | o88wku8 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88wku8/ | false | 1 |
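The compute-bound prompt processing vs bandwidth-bound token generation split in the comment above can be put into a rough rule of thumb. This is a sketch only: the "one full pass over the active weights per generated token" assumption and the numbers below are illustrative, not measurements from either system.

```python
def tg_upper_bound_tps(mem_bandwidth_gbs: float, active_weight_gb: float) -> float:
    """Token generation is roughly memory-bandwidth bound: each generated
    token streams the (active) model weights through the GPU about once,
    so tokens/s is capped near bandwidth / bytes-read-per-token."""
    return mem_bandwidth_gbs / active_weight_gb

# Illustrative: 448 GB/s of bandwidth over 4 GB of active weights caps
# generation near 112 tokens/s, no matter how much compute is available.
print(round(tg_upper_bound_tps(448, 4.0)))   # 112
```

Prompt processing has no such cap because many tokens share one weight read, which is why it scales with compute instead.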
t1_o88wiwj | Did they solve the repetition bug? I wasn’t able to use qwen3 4b vl due to that | 1 | 0 | 2026-03-02T15:55:47 | xyzmanas | false | null | 0 | o88wiwj | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88wiwj/ | false | 1 |
t1_o88wfle | [removed] | 1 | 0 | 2026-03-02T15:55:20 | [deleted] | true | null | 0 | o88wfle | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88wfle/ | false | 1 |
t1_o88wdjq | You’re close, but a few clarifications.
It’s not a context length extension method. The goal isn’t to stretch the window, it’s to remove the skill markdown from the prompt entirely and inject that conditioning through the KV pathway instead.
What happens is the skill markdown is passed through the frozen base model, I take the hidden states which are seq\_len × hidden\_dim, and I mean pool across the sequence dimension, not the feature dimension. That produces a single hidden\_dim vector. So yes, in the current setup I am effectively compressing the entire skill into one latent representation.
That pooled vector is then passed through a small projector MLP that maps it into KV compatible tensors for prefix style injection. Prefix length is 2.
The MLP is global, not per skill. It is trained across all skills while the base model remains frozen.
It’s not really compensating for pooling. It’s learning how to transform that compressed skill representation into something the attention layers can actually use. Mean pooling is definitely lossy, and the performance ceiling in the experiments likely reflects that bottleneck.
ARC-Encoder and Cartridges are definitely relevant. The main difference here is that I am not modifying the base embedding space. I’m training a small adapter that targets the model’s KV geometry directly. | 1 | 0 | 2026-03-02T15:55:03 | Proper-Lab1756 | false | null | 0 | o88wdjq | false | /r/LocalLLaMA/comments/1rif789/injecting_skills_into_the_kv_cache_not_as_stupid/o88wdjq/ | false | 1 |
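The pipeline this commenter describes (frozen hidden states, mean pool over the sequence dimension, small global projector, prefix-style KV tensors) can be sketched in numpy. All dimensions and the two-layer MLP below are hypothetical stand-ins for illustration, not the author's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, prefix_len, n_layers = 64, 2, 4   # toy sizes, not the real model

# 1) Frozen base model encodes the skill markdown: (seq_len, hidden_dim).
#    A random matrix stands in for real hidden states here.
skill_hidden = rng.normal(size=(200, hidden_dim))

# 2) Mean-pool across the sequence dimension -> one hidden_dim vector.
#    This is the lossy compression step the commenter mentions.
pooled = skill_hidden.mean(axis=0)            # shape: (hidden_dim,)

# 3) A small global projector MLP (shared across skills) maps the pooled
#    vector into K and V tensors for every layer. This is the only trained
#    component; the base model stays frozen.
out_dim = n_layers * 2 * prefix_len * hidden_dim      # 2 = K and V
W1 = rng.normal(size=(hidden_dim, 128)) * 0.02
W2 = rng.normal(size=(128, out_dim)) * 0.02
h = np.tanh(pooled @ W1)
prefix_kv = (h @ W2).reshape(n_layers, 2, prefix_len, hidden_dim)

print(prefix_kv.shape)   # per-layer K/V pairs with prefix length 2
```

At inference these tensors would be prepended to the KV cache of each attention layer in place of the skill markdown tokens.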
t1_o88wciw | Q4_K_M mine is 528MB, the Q6_K is 630MB | 1 | 0 | 2026-03-02T15:54:55 | stopbanni | false | null | 0 | o88wciw | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88wciw/ | false | 1 |
t1_o88w9c9 | sadly i tried the ollama package, no gpu support. i tried koboldcpp and llama.cpp, and after dealing with that pesky spawn.h for llama.cpp i couldn't get the GPU to be detected. all in termux | 1 | 0 | 2026-03-02T15:54:30 | weener69420 | false | null | 0 | o88w9c9 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88w9c9/ | false | 1 |
t1_o88w9d8 | 122B seems to lead! I wonder what sort of quality loss we’d be looking at in a MXFP4 quant. | 1 | 0 | 2026-03-02T15:54:30 | ohgoditsdoddy | false | null | 0 | o88w9d8 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88w9d8/ | false | 1 |
t1_o88w8sb | check my post u can hit 60+ tps with 35a3b | 1 | 0 | 2026-03-02T15:54:26 | maho_Yun | false | null | 0 | o88w8sb | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88w8sb/ | false | 1 |
t1_o88w6tu | Between Qwen3 VL 2B and Ministral 3B, I picked Ministral because it performed better than Qwen3. Qwen3.5 seems to be good so far. I will test with more artefacts before moving to Qwen3.5 completely for my workflow. | 1 | 0 | 2026-03-02T15:54:11 | deadman87 | false | null | 0 | o88w6tu | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88w6tu/ | false | 1 |
t1_o88w3wt | Obligatory reminder: Benchmarks != real-world performance. Use these as a ballpark guide, but your actual mileage will definitely vary. | 1 | 0 | 2026-03-02T15:53:48 | Jobus_ | false | null | 0 | o88w3wt | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88w3wt/ | false | 1 |
t1_o88w25k | that seems very stubborn. why not take time to learn how to use llama.cpp? | 0 | 0 | 2026-03-02T15:53:34 | shadowdog000 | false | null | 0 | o88w25k | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88w25k/ | false | 0 |
t1_o88vzvo | I've hit this issue too. bf16 messed up the whole memory arrangement, while fp32/fp16/q8 are all fine. On dual RTX 6000 Blackwell and 2x 9684X CPUs. Still trying to figure it out. | 1 | 0 | 2026-03-02T15:53:15 | KeyLiaoHPC | false | null | 0 | o88vzvo | false | /r/LocalLLaMA/comments/1riunee/how_to_fix_endless_looping_with_qwen35/o88vzvo/ | false | 1 |
t1_o88vzjl | Like mtmttuan said, "drafting." Language models generate one token at a time on the output side, but on the input side they can process many tokens in parallel. One trick to get more out of your GPUs as a single user is to use a smaller model to guess the tokens the larger model will produce, then run that string of candidate tokens through the big model together. We use the same math for each token as we would if we had run it through the big model alone; if the big model agrees with the small one, we keep the tokens they agree on. Once they disagree, we keep only up to what the big model said, then try again.
Depending heavily on the task, the GPU in use with the model (not too useful on most CPUs), and the agreement between the draft model and full model, this "speculative decoding" can yield a speedup of anywhere between 1x and 5x. However, some poor configurations I've seen (like overflowing my VRAM) can cut the speed in half instead. Can't apply it willy-nilly. | 1 | 0 | 2026-03-02T15:53:13 | 4onen | false | null | 0 | o88vzjl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88vzjl/ | false | 1 |
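The accept/reject logic described in that comment can be shown with a toy greedy sketch. The `draft_next`/`target_next` callables are hypothetical stand-ins for the two models; real implementations verify all drafted positions in a single batched forward pass rather than one prefix at a time:

```python
def speculative_step(draft_next, target_next, prompt, k=4):
    """One round of greedy speculative decoding.

    draft_next / target_next map a token list to the next token id.
    Returns the tokens accepted this round.
    """
    # Small model guesses k tokens autoregressively (cheap).
    ctx, guesses = list(prompt), []
    for _ in range(k):
        t = draft_next(ctx)
        guesses.append(t)
        ctx.append(t)

    # Big model checks the guesses; keep tokens while the two agree.
    ctx, accepted = list(prompt), []
    for g in guesses:
        t = target_next(ctx)
        if t != g:
            accepted.append(t)   # keep the big model's token, then stop
            break
        accepted.append(g)       # agreement: token confirmed for free
        ctx.append(g)
    return accepted

# Toy run: draft always guesses token 1; target agrees twice, then says 2.
draft = lambda ctx: 1
target = lambda ctx: 1 if len(ctx) < 2 else 2
print(speculative_step(draft, target, prompt=[]))   # [1, 1, 2]
```

Three tokens come out of one round here, which is where the speedup comes from when the draft model guesses well.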
t1_o88vz49 | we don't like ollama here, we just use llama.cpp from here: [https://github.com/ggml-org/llama.cpp](https://github.com/ggml-org/llama.cpp) | 1 | 0 | 2026-03-02T15:53:10 | shadowdog000 | false | null | 0 | o88vz49 | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88vz49/ | false | 1 |
t1_o88vyxk | **nods in agreement** | 1 | 0 | 2026-03-02T15:53:08 | mehedi_shafi | false | null | 0 | o88vyxk | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88vyxk/ | false | 1 |
t1_o88vuq1 | Thanks for your work. But did Qwen not also make Qwen Coder Next? | 1 | 0 | 2026-03-02T15:52:35 | _w0n | false | null | 0 | o88vuq1 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88vuq1/ | false | 1 |
t1_o88vuoz | I can’t wait for decimal quants like Q0.3_K_M 😭 | 1 | 0 | 2026-03-02T15:52:34 | Rude_Marzipan6107 | false | null | 0 | o88vuoz | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88vuoz/ | false | 1 |
t1_o88vsry | Wrong. I know ollama is inferior in quite a few ways and still choose to use it. | 1 | 0 | 2026-03-02T15:52:19 | FrenzyX | false | null | 0 | o88vsry | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88vsry/ | false | 1 |
t1_o88vqby | It's not a bug, it's a feature. Prompt, response, file upload, RAG, it's all the same to the LLM.
Assume that anyone who can access the LLM has access to any information the LLM has been trained on or can access. Limit LLM permission to data or user access to the LLM accordingly.
Everything else is playing whack-a-mole with string manipulation injecting prompts. How many ways can you think of to build a malicious string in context? | 1 | 0 | 2026-03-02T15:52:00 | maz_net_au | false | null | 0 | o88vqby | false | /r/LocalLLaMA/comments/1rip2f0/how_are_you_mitigating_prompt_injection_in/o88vqby/ | false | 1 |
t1_o88vp8g | I'm running LM Studio, which was really easy to set up. I haven't tried Ollama.
unsloth : Qwen3.5 9B GGUF Q4\_K\_S is really fast. 58 tokens/s:
Actually, \*\*no — the RTX 5080 doesn’t exist yet\*\*, and even if it did, your comment is a bit mixed up. Let’s break this down:
\---
\### 🚫 First: The RTX 5080 isn’t real (as of now)
NVIDIA has not released any “RTX 50” series cards as of mid-2024. The current flagship is the \*\*RTX 4090\*\* (with 24GB VRAM), and the next-gen “Blackwell” architecture (which will power RTX 50-series) is expected in late 2024 or early 2025 — but no official specs or models have been confirmed.
So when you say:
\> \*“This is probably a noob question, but are there any models here that would be ideal for a 16 GB GPU (RTX 5080)?”\*
…you’re referring to a card that doesn’t exist yet — which makes this more of a \*future-proofing\* or \*hypothetical\* question.
\---
\### ✅ But let’s assume you meant \*\*RTX 4070 Ti Super\*\* (16GB version) or similar 16GB GPU:
That’s a very common setup for local LLM running! Here’s what works well on \~16GB VRAM:
\#### 💡 Recommended Models (Quantized):
\- \*\*Llama 3.1 8B / Mistral 7B\*\* → Run at Q4\_K\_M or even Q5\_K\_M with room to spare
\- \*\*Phi-3 Mini (3.8B)\*\* → Super fast, tiny footprint
\- \*\*Gemma 2 9B\*\* → Great balance of size/performance; fits comfortably in 16GB
\- \*\*Command R / Command R+\*\* → Optimized for chat/long context; good quantizations available
\#### ⚠️ Avoid:
\- Anything >13B parameters unless heavily quantized (e.g., Q2\_K or Q3\_K) — may struggle with speed or stability.
\- Full FP16 models of 7B+ will likely OOM (out of memory).
\---
\### 🧠 About the reply you quoted:
\> \*“the 9b should work, maybe u could push 27b w quantization Dont got a 16gb gpu personally but im sure it can run 9b, download ollama and try it, ez setup but takes long to download..”\*
✅ That’s mostly correct! A \*\*9B model\*\* like Gemma 2 or Llama 3.1 8B will run smoothly on 16GB VRAM with moderate quantization (Q4/Q5).
❌ Pushing a \*\*27B model\*\*? Even with aggressive quantization (like Q2\_K), it’s risky — you’ll probably need at least 20–24GB VRAM for decent performance. Some might fit, but inference will be slow and unstable.
\---
\### 🛠️ Tools to try:
\- \*\*Ollama\*\* – Easy setup (\`ollama run llama3.1\`)
\- \*\*LM Studio\*\* – GUI-based, great for testing different quantizations
\- \*\*Text Generation WebUI (oobabooga)\*\* – Advanced control over models and quantization
\---
\### 📌 Final Thoughts:
If you’re planning ahead for an RTX 5080 (when it arrives), expect \~16–24GB VRAM depending on the model. For now, stick with \*\*≤9B models\*\* at Q4/Q5 quantization — they’ll feel snappy and reliable.
And hey — if you \*do\* get your hands on an RTX 5080 later, come back and ask again 😉 We’ll update our recommendations!
\---
Let me know if you want specific model links or how to test them locally!
| 1 | 0 | 2026-03-02T15:51:51 | JollyJoker3 | false | null | 0 | o88vp8g | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88vp8g/ | false | 1 |
t1_o88vp95 | This is comedically difficult to comprehend. There has to be a better way | 1 | 0 | 2026-03-02T15:51:51 | ItsNoahJ83 | false | null | 0 | o88vp95 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88vp95/ | false | 1 |
t1_o88vkth | Use websockets to feed them in as a batch downscaled | 1 | 0 | 2026-03-02T15:51:16 | One_Hovercraft_7456 | false | null | 0 | o88vkth | false | /r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/o88vkth/ | false | 1 |
t1_o88vidt | Oops: [https://arxiv.org/abs/2311.14455](https://arxiv.org/abs/2311.14455)
I never said the model, itself, is an executable. Show me where I EVER said that.
What I said is that models can be trained to maliciously use other tools, for example an MCP or a Python interpreter. That means a model can be fine-tuned to appear perfectly safe but, upon a certain condition, use tools in a malicious way. Why is that such a leap for you to understand? | 1 | 0 | 2026-03-02T15:50:58 | Glad_Middle9240 | false | null | 0 | o88vidt | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88vidt/ | false | 1 |
t1_o88vhd8 | It is almost unbelievable how shitty this chart is | 1 | 0 | 2026-03-02T15:50:49 | k2ui | false | null | 0 | o88vhd8 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88vhd8/ | false | 1 |
t1_o88vehh | You may be [disappointed ](https://www.reddit.com/r/LocalLLaMA/s/7cCtwU110Z), but hopefully not. | 1 | 0 | 2026-03-02T15:50:26 | donatas_xyz | false | null | 0 | o88vehh | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88vehh/ | false | 1 |
t1_o88vap3 | Why would someone ever use stdio instead of domain sockets? | 1 | 0 | 2026-03-02T15:49:56 | Business_Occasion226 | false | null | 0 | o88vap3 | false | /r/LocalLLaMA/comments/1riw6kd/mcp_colocation_stdio_49ms_single_client_vs_http/o88vap3/ | false | 1 |
t1_o88v9x3 | Interesting to know that.
Yeah, my CPU is an Intel i5-12400F | 1 | 0 | 2026-03-02T15:49:50 | MarketingGui | false | null | 0 | o88v9x3 | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88v9x3/ | false | 1 |
t1_o88v4js | I'm also testing the 9b model, but I heard that, in general, a bigger model with a more aggressive quant is still better than a smaller model with a lighter quant. | 1 | 0 | 2026-03-02T15:49:07 | MarketingGui | false | null | 0 | o88v4js | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88v4js/ | false | 1 |
t1_o88v1zz | Yeah, I've tried quantizing it from fp16 to q8 before and got trash results... probably my mistake, but I really wish there was a way to load VLM models with image support
Anyhow, Thank you | 1 | 0 | 2026-03-02T15:48:47 | klop2031 | false | null | 0 | o88v1zz | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88v1zz/ | false | 1 |
t1_o88uzyk | Ah ok cheers | 1 | 0 | 2026-03-02T15:48:30 | And-Bee | false | null | 0 | o88uzyk | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88uzyk/ | false | 1 |
t1_o88uxpa | People who use Ollama and Ollama MCP for their own backends do. | 1 | 0 | 2026-03-02T15:48:12 | PlainBread | false | null | 0 | o88uxpa | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88uxpa/ | false | 1 |
t1_o88uxea | i copied the parameters from another user and it has jinja set. i'm new to llama.cpp, so i don't actually know what that one does | 1 | 0 | 2026-03-02T15:48:10 | Pawderr | false | null | 0 | o88uxea | false | /r/LocalLLaMA/comments/1re1b4a/you_can_use_qwen35_without_thinking/o88uxea/ | false | 1 |
t1_o88uxex | It's insane how powerful 35B MOE is. It's very fast and can run on a potato. They really blew my mind with it | 1 | 0 | 2026-03-02T15:48:10 | BumblebeeParty6389 | false | null | 0 | o88uxex | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88uxex/ | false | 1 |
t1_o88uwuk | As I said... I opened a GitHub issue because it is currently disabled. None of them work with 27B. | 1 | 0 | 2026-03-02T15:48:05 | coder543 | false | null | 0 | o88uwuk | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88uwuk/ | false | 1 |
t1_o88uvqy | Wow, thank you! I adapted the command:
llama-cli.exe -m "Qwen3.5-35B-A3B-UD-IQ3\_XXS.gguf" --flash-attn on -c 4096 --n-predict 4096 --jinja --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.00 --threads 6 --fit on --no-mmap --reasoning-budget 0
The model runs at 36 t/s | 1 | 0 | 2026-03-02T15:47:56 | MarketingGui | false | null | 0 | o88uvqy | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88uvqy/ | false | 1 |
t1_o88uvej | These researchers talk about guardrails, that is, a model refusing to hack this or that despite you asking for it.
I was commenting on malicious models that execute contained code without you knowing. An entirely fictional scenario. | 1 | 0 | 2026-03-02T15:47:53 | q-admin007 | false | null | 0 | o88uvej | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88uvej/ | false | 1 |
t1_o88urpt | I was using Qwen3 VL 2B for some OCR tasks with game UIs; it's not perfect, so hopefully this is better! | 1 | 0 | 2026-03-02T15:47:24 | ----Val---- | false | null | 0 | o88urpt | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88urpt/ | false | 1 |
t1_o88urj3 | Yeah, pretty much. Check the seller account age and reviews. Check the device for physical damage or signs of tampering.
You could probably ask for a video of the device powered in a computer. Worst case, a repair service can probably fix it if it's really bad. | 1 | 0 | 2026-03-02T15:47:22 | AdamTReineke | false | null | 0 | o88urj3 | false | /r/LocalLLaMA/comments/1rianwb/running_qwen35_27b_dense_with_170k_context_at/o88urj3/ | false | 1 |
t1_o88uqxz | I see they are continuing the trend from the Qwen3 release with no “Base” variants for the large dense model. There is so much I love about these models, but not giving us Qwen3.5-27B-Base is just mean (not really, I get why, just sucks for my use cases). | 1 | 0 | 2026-03-02T15:47:18 | indicava | false | null | 0 | o88uqxz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88uqxz/ | false | 1 |
t1_o88un8e | I thought I was the only one [reverting back](https://www.reddit.com/r/LocalLLaMA/s/7cCtwU110Z) to Qwen3, because all I can see is people praising Qwen3.5 like it's the next best thing since Nutella. | 1 | 0 | 2026-03-02T15:46:48 | donatas_xyz | false | null | 0 | o88un8e | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o88un8e/ | false | 1 |
t1_o88uhcp | I looked a lot at that codebase and mention it in the acknowledgments :) I found some bugs there with the streaming, but I'm not sure how much they affect the perceived audio quality | 1 | 0 | 2026-03-02T15:46:00 | futterneid | false | null | 0 | o88uhcp | false | /r/LocalLLaMA/comments/1rfc3ic/introducing_fasterqwentts/o88uhcp/ | false | 1 |
t1_o88uhcq | There is no single case of a malicious model. Not in a lab, not in the wild. It's an entirely "could, might and may" scenario.
If you let a model run code in your environment, it doesn't have to be malicious to hurt you. | 1 | 0 | 2026-03-02T15:46:00 | q-admin007 | false | null | 0 | o88uhcq | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88uhcq/ | false | 1 |
t1_o88ubl0 | This is very interesting work!
I wrote you a PM about a similar project. | 1 | 0 | 2026-03-02T15:45:15 | Total_Activity_7550 | false | null | 0 | o88ubl0 | false | /r/LocalLLaMA/comments/1ribjum/i_trained_a_3b_patristic_theology_llm_on_a_single/o88ubl0/ | false | 1 |
t1_o88u8qq | Nobody cares. | 1 | 0 | 2026-03-02T15:44:52 | qwen_next_gguf_when | false | null | 0 | o88u8qq | false | /r/LocalLLaMA/comments/1riw1ml/just_saw_it_on_the_last_page_refresh_qwen/o88u8qq/ | false | 1 |
t1_o88u8kb | Which of the newly dropped smaller models can be used with 27B? I am trying 4B but LM studio is not showing me any models as compatible. | 1 | 0 | 2026-03-02T15:44:51 | And-Bee | false | null | 0 | o88u8kb | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88u8kb/ | false | 1 |
t1_o88u7ci | > I’m just an amateur
> pay me
No. lol. | 1 | 0 | 2026-03-02T15:44:41 | 3spky5u-oss | false | null | 0 | o88u7ci | false | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o88u7ci/ | false | 1 |
t1_o88u6s4 | I'm a complete noob at this. Which model should I use for small coding tasks on my macbook pro 14 with 48gb ram | 1 | 0 | 2026-03-02T15:44:36 | daibam_und_koode | false | null | 0 | o88u6s4 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88u6s4/ | false | 1 |
t1_o88u3d7 | Show me T/S comparison. | 1 | 0 | 2026-03-02T15:44:09 | fredandlunchbox | false | null | 0 | o88u3d7 | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88u3d7/ | false | 1 |
t1_o88u1p8 | my wife says that the size doesn't matter as much as what you do with it | 1 | 0 | 2026-03-02T15:43:56 | ChocomelP | false | null | 0 | o88u1p8 | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88u1p8/ | false | 1 |
t1_o88tysm | thanks for your kind words :) | 1 | 0 | 2026-03-02T15:43:32 | EmbarrassedAsk2887 | false | null | 0 | o88tysm | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o88tysm/ | false | 1 |
t1_o88tyju | Have to keep up with tradition. | 1 | 0 | 2026-03-02T15:43:30 | Jobus_ | false | null | 0 | o88tyju | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88tyju/ | false | 1 |
t1_o88ty08 | Yes -- because no models ever invoke tools on an autonomous basis in business applications. I guess if you are writing science fiction stories with Ollama on your potato GPU with no connection to anything else this isn't a problem. Business applications rely on orchestration of autonomous agents.
The ad hominem is the same as declaring what I said was "utter nonsense" | 1 | 0 | 2026-03-02T15:43:26 | Glad_Middle9240 | false | null | 0 | o88ty08 | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88ty08/ | false | 1 |
t1_o88txkr | actually, we have 5 different configs, spanning different apple silicon chipsets, core counts, and ram. it can run way better than the industry standard out there. we made sure to have quantized weights for our most expressive model as well; it can run smoothly even on an 8gb macbook air
we do have an api layer for developers for the multimodal models we use within bodega, including the tts as well. | 1 | 0 | 2026-03-02T15:43:22 | EmbarrassedAsk2887 | false | null | 0 | o88txkr | false | /r/LocalLLaMA/comments/1rgkzlo/realtime_speech_to_speech_engine_runs_fully_local/o88txkr/ | false | 1 |
t1_o88tvmm | Same, but far less complete. Nice. | 1 | 0 | 2026-03-02T15:43:07 | KindnessBiasedBoar | false | null | 0 | o88tvmm | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88tvmm/ | false | 1 |
t1_o88tvbl | Can we use any of them for speculative decoding of 27B | 1 | 0 | 2026-03-02T15:43:04 | And-Bee | false | null | 0 | o88tvbl | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88tvbl/ | false | 1 |
t1_o88ttkh | I can run it at IQ1_M 😂 | 1 | 0 | 2026-03-02T15:42:50 | Daniel_H212 | false | null | 0 | o88ttkh | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88ttkh/ | false | 1 |
t1_o88tro5 | It's still early, but simple knowledge questions do show its limits. I would not count on it beating even free-tier ChatGPT. | 1 | 0 | 2026-03-02T15:42:35 | ----Val---- | false | null | 0 | o88tro5 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88tro5/ | false | 1 |
t1_o88tn2l | Here's what unsloth's qwen3.5-35b-a3b Q4\_K\_M said on my 16 GB 5060 Ti:
https://preview.redd.it/2j6q9rrgjnmg1.png?width=718&format=png&auto=webp&s=2ab8d5af5f5a0dbddb5dc23c25c04dcac5350aa8
Based on the screenshot, here are my thoughts:
\*\*1. The "RTX 5080" Typo (or Time Travel?)\*\*
The most obvious thing is that the user (\`l34sh\`) wrote \*\*"RTX 5080."\*\* As of right now, NVIDIA has not released the RTX 50-series cards yet. They likely meant the \*\*RTX 4080\*\*, which does indeed come with 16 GB of VRAM. It’s a very common typo (keys are close together) or perhaps they got confused by rumors about future hardware.
\*\*2. The Advice is Actually Spot On\*\*
Despite the GPU typo, the advice given by \`stellarknight\_\` is technically excellent for anyone trying to run Large Language Models (LLMs) locally on a 16 GB card:
\* \*\*"The 9b should work":\*\* A 9 Billion parameter model (like Llama-3-8B or Mistral-7B) fits very comfortably on a 16 GB GPU. Even without heavy compression, it leaves plenty of room for the "context window" (the amount of text the AI remembers).
\* \*\*"Push 27b w quantization":\*\* This is the key to local AI. A standard 27 Billion parameter model usually requires about 54 GB of VRAM to run at full speed. However, by using \*\*quantization\*\* (compressing the math precision from 16-bit down to 4-bit or lower), you can shrink that model size significantly.
\* \*The Math:\* A 27B model in 4-bit quantization takes up roughly 16–18 GB of space. So, on a 16 GB card, it is "tight" but possible if you keep the context window small or use aggressive compression (like Q3\_K\_S).
\*\*3. Ollama Recommendation\*\*
The suggestion to download \*\*Ollama\*\* is great advice for beginners. It is currently the easiest way to get started with running these models on a local machine without needing complex coding knowledge.
\*\*Summary:\*\*
It looks like a standard "help me set up my AI rig" thread, except the user accidentally asked about a graphics card that doesn't exist yet! If they swap "5080" for "4080," everything in this conversation makes perfect sense. | 1 | 0 | 2026-03-02T15:41:57 | JollyJoker3 | false | null | 0 | o88tn2l | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88tn2l/ | false | 1 |
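The back-of-envelope VRAM arithmetic in the quoted answer (about 54 GB for a 27B model at fp16, roughly 16–18 GB at 4-bit) follows from a one-line formula. A minimal sketch; the function and the flat overhead caveat are illustrative, since real usage also depends on context length and runtime:

```python
def weights_gb(params_b: float, bits_per_weight: float) -> float:
    """Weight memory only: parameters (in billions) * bits / 8 = GB.
    Real usage adds KV cache and activations on top, which is why a
    13.5 GB 4-bit 27B is 'tight' on a 16 GB card."""
    return params_b * bits_per_weight / 8

print(weights_gb(27, 16))   # 54.0 -> matches the 'about 54 GB' fp16 figure
print(weights_gb(27, 4))    # 13.5 -> 4-bit, before cache/overhead
print(weights_gb(9, 4.5))   # ~5.1 -> why a 9B Q4 fits comfortably
```

The same arithmetic explains the thread's "9B works easily, 27B needs aggressive quantization" conclusion.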
t1_o88tmuv | Larger ones are also fantastic. 122 and 27B both rock in our handwritten Japanese tests, and especially the larger one can effortlessly deal with Ainu documents, which is out of reach even for Gemini. | 1 | 0 | 2026-03-02T15:41:55 | RadiantHueOfBeige | false | null | 0 | o88tmuv | false | /r/LocalLLaMA/comments/1rivzcl/qwen_35_2b_is_an_ocr_beast/o88tmuv/ | false | 1 |
t1_o88tltj | Hmmm maybe with some tweaking you can get a faster speed. What's your setup? | 1 | 0 | 2026-03-02T15:41:47 | Leather_Flan5071 | false | null | 0 | o88tltj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88tltj/ | false | 1 |
t1_o88tjag | Happy you find it useful! | 1 | 0 | 2026-03-02T15:41:27 | ----Val---- | false | null | 0 | o88tjag | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88tjag/ | false | 1 |
t1_o88tinv | It's actually worse than the worst from Google; that's Gemini 2.5 Flash Lite they're comparing against. | 1 | 0 | 2026-03-02T15:41:22 | MerePotato | false | null | 0 | o88tinv | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88tinv/ | false | 1 |
t1_o88th89 | You *could*, but it's so slow it isn't worth it. Currently mmproj files are mostly q8 or fp16, and do not take advantage of the q4 kernels for ARM SoCs. | 1 | 0 | 2026-03-02T15:41:10 | ----Val---- | false | null | 0 | o88th89 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88th89/ | false | 1 |
t1_o88tgyy | Knowledge depth is still a gap. If you stick to trivia you can get Llama 2 70B or Wizard 70B beating these small models.
Virtually anything else though and yes, the gap has vanished | 1 | 0 | 2026-03-02T15:41:08 | ForsookComparison | false | null | 0 | o88tgyy | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88tgyy/ | false | 1 |
t1_o88tfoz | >I can't remember the last time I interacted with a model that didn't have access to a python interpreter
If you execute code unreviewed, resulting problems are on you.
>You're not very smart
What does ad hominem even mean? | 1 | 0 | 2026-03-02T15:40:58 | q-admin007 | false | null | 0 | o88tfoz | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88tfoz/ | false | 1 |
t1_o88te3b | Github release. I should give Lemonade a try.
AMD 9060s, about 40 TPS fully on VRAM.
I expect closer to 80 TPS given A3B. | 1 | 0 | 2026-03-02T15:40:45 | sleepingsysadmin | false | null | 0 | o88te3b | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88te3b/ | false | 1 |
t1_o88sx00 | First of all, you should avoid using a quantized cache (`--cache-type-k q8_0 --cache-type-v q8_0`).
Second, you may need to upgrade your CPU. For reference, here’s an example of a CPU-only run on an i7-14700F:
CUDA\_VISIBLE\_DEVICES='' taskset -c 0-15 llama-bench \\
\-m /data/gguf/Qwen3.5-35B-A3B/Qwen3.5-35B-A3B-UD-IQ3\_XXS.gguf \\
\-fa -mmap -b 8192 -ub 4096 -t 16 -p 2048 -n 512 -r 5 -o md
| model | size | params | backend | ngl | threads | n\_batch | n\_ubatch | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------: | ------: | -------: | --------------: | -------------------: |
| qwen35moe ?B Q8\_0 | 13.11 GiB | 34.66 B | CUDA | 99 | 16 | 8192 | 4096 | pp2048 | 64.17 ± 0.04 |
| qwen35moe ?B Q8\_0 | 13.11 GiB | 34.66 B | CUDA | 99 | 16 | 8192 | 4096 | tg512 | 16.66 ± 0.01 | | 1 | 0 | 2026-03-02T15:38:26 | Shoddy_Bed3240 | false | null | 0 | o88sx00 | false | /r/LocalLLaMA/comments/1ritcfr/imrpove_qwen35_performance_on_weak_gpu/o88sx00/ | false | 1 |
t1_o88stie | LiveCodeBench and OJBench. Some of the models had more benchmarks than that, but since I wanted to make a direct comparison of them all, I had to exclude the benchmarks that were missing for the newer smaller models.
But yes, we should definitely take this stuff with a pinch of salt. | 1 | 0 | 2026-03-02T15:37:58 | Jobus_ | false | null | 0 | o88stie | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88stie/ | false | 1 |
t1_o88ssxo | Dude, where is my fav model dude, coder next? | 1 | 0 | 2026-03-02T15:37:54 | KURD_1_STAN | false | null | 0 | o88ssxo | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88ssxo/ | false | 1 |
t1_o88ssr5 | It's worth noting that when looking for new antibiotics, Google found that any Gemma models below 27B dense couldn't generalise well enough to assist in novel hypothesis generation | 1 | 0 | 2026-03-02T15:37:52 | MerePotato | false | null | 0 | o88ssr5 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88ssr5/ | false | 1 |
t1_o88sm78 | So the 9B is very good according to these graphs. Amazing. | 1 | 0 | 2026-03-02T15:36:59 | Nubinu | false | null | 0 | o88sm78 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88sm78/ | false | 1 |
t1_o88sbhq | are they planning on releasing small 3.5 coder models ? | 1 | 0 | 2026-03-02T15:35:31 | Kowskii_cbs | false | null | 0 | o88sbhq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88sbhq/ | false | 1 |
t1_o88sa0e | Qwen 35B is very good/fast at tool calling as well.
Its only flaw is that it doesn't have image input support like the Qwen 3.5-27B and even the 3.5-9B | 1 | 0 | 2026-03-02T15:35:19 | harrro | false | null | 0 | o88sa0e | false | /r/LocalLLaMA/comments/1ri39a4/qwen35_35b_a3b_first_small_model_to_not/o88sa0e/ | false | 1 |
t1_o88s9gr | \> Spent all this effort to build private local LLM to avoid feeding data to cloud
\> Turns out family doesn't trust you either
\> A divorce incoming? 🤔🤔🤔 >!(I'm just kidding, really really dark)!<
🤣🤣🤣
omg this is so funny but man, totally hear you; I have sorta same problem with my spouse where I showed her a thing or two but she doesn't seem interested in my hobby at all, and I'd say she isn't even interested in LLMs / ChatGPT / whatever; the only thing I'm certain is that we watch instagram AI-generated cat videos together and it's a thing she sort of enjoys but prefers real cat/dog/bird videos over AI anytime
I'm here just for 🍿🍿🍿 | 1 | 0 | 2026-03-02T15:35:14 | konovalov-nk | false | null | 0 | o88s9gr | false | /r/LocalLLaMA/comments/1rhjmfr/nobody_in_the_family_uses_the_family_ai_platform/o88s9gr/ | false | 1 |
t1_o88s4pt | >These hybrid SSSM + CoT models do not work with basic heretic or regular ablation methods.
lmao yes they do | 1 | 0 | 2026-03-02T15:34:34 | CheatCodesOfLife | false | null | 0 | o88s4pt | false | /r/LocalLLaMA/comments/1riorfz/qwen35122bvl_abliterated_working_mlx/o88s4pt/ | false | 1 |
t1_o88s4ch | Imagine you have 20,000 articles saved on your computer and you need to process them somehow and produce some output. If each one takes 30 seconds, that's 10,000 minutes, or about 7 days. If you can do it with 90% of the quality but in 5 seconds, it's a little over 1 day.
That's a really solid use case for local models, and small models in particular: high-volume work where you don't want to consume bandwidth or pay for API access, and can sacrifice a little on quality for speed.
| 1 | 0 | 2026-03-02T15:34:31 | fredandlunchbox | false | null | 0 | o88s4ch | false | /r/LocalLLaMA/comments/1rirlyb/qwenqwen359b_hugging_face/o88s4ch/ | false | 1 |
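A quick sanity check of the arithmetic in that comment (20,000 articles at 30 s vs 5 s each, assuming sequential processing), sketched in Python:

```python
def batch_time_days(n_items, secs_per_item):
    """Total wall-clock time in days for sequential processing."""
    return n_items * secs_per_item / 86400  # 86,400 seconds in a day

slow = batch_time_days(20_000, 30)  # 30 s/article -> ~6.94 days ("about 7 days")
fast = batch_time_days(20_000, 5)   # 5 s/article  -> ~1.16 days ("a little over 1 day")
print(f"{slow:.2f} days vs {fast:.2f} days")
```

Both figures from the comment check out; the speedup is linear in per-item latency, so any parallelism would shrink both proportionally.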
t1_o88s2pv | Training for relevance. The models don’t have to have phenomenal general world knowledge to be useful just carry forward the most relevant and train the model to use tools better to find the answers. Being smaller doesn’t imply it can’t be a better model. | 1 | 0 | 2026-03-02T15:34:17 | loyalekoinu88 | false | null | 0 | o88s2pv | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88s2pv/ | false | 1 |
t1_o88s2dd | Where’s 397B? | 1 | 0 | 2026-03-02T15:34:14 | suicidaleggroll | false | null | 0 | o88s2dd | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88s2dd/ | false | 1 |
t1_o88s1xn | [removed] | 1 | 0 | 2026-03-02T15:34:11 | [deleted] | true | null | 0 | o88s1xn | false | /r/LocalLLaMA/comments/1rfg3kx/american_closed_models_vs_chinese_open_models_is/o88s1xn/ | false | 1 |
t1_o88s14d | Yeah, sorry, I realized that just as I was about to hit Post. Didn't feel worth the effort redoing half the work for a model that most of us don't have enough VRAM/RAM to even look at.
But it would have been nice to include it just for completeness. | 1 | 0 | 2026-03-02T15:34:04 | Jobus_ | false | null | 0 | o88s14d | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88s14d/ | false | 1 |
t1_o88ry99 | Can you send it an image? | 1 | 0 | 2026-03-02T15:33:40 | klop2031 | false | null | 0 | o88ry99 | false | /r/LocalLLaMA/comments/1riv3wv/qwen_35_2b_on_android/o88ry99/ | false | 1 |
t1_o88rxmy | Thanks for this, but I got cancer trying to see what's what | 1 | 0 | 2026-03-02T15:33:35 | hknerdmr | false | null | 0 | o88rxmy | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88rxmy/ | false | 1 |
t1_o88rxkj | Are the benchmarks tool assisted? Models this size aren't usually meant to be used standalone | 1 | 0 | 2026-03-02T15:33:34 | MerePotato | false | null | 0 | o88rxkj | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88rxkj/ | false | 1 |
t1_o88rxb4 | Seeing very diverse opinions on Spark (researching for getting one myself). Is it fair to say that vLLM is the way to go to make it useful for inference? | 1 | 0 | 2026-03-02T15:33:32 | Large_Diver_4151 | false | null | 0 | o88rxb4 | false | /r/LocalLLaMA/comments/1r75i9t/built_a_multiagent_ai_butler_on_a_dgx_spark/o88rxb4/ | false | 1 |
t1_o88rtjs | This makes the 9B dense look like a very attractive model: it's directly competing with the 122B-A10B, a model more than 10x its size with even more active params. | 1 | 0 | 2026-03-02T15:33:01 | this-just_in | false | null | 0 | o88rtjs | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88rtjs/ | false | 1 |
t1_o88rq8u | Which copy? The GitHub releases from llama.cpp? Or from AMD or the lemonade project?
What hardware are you running on exactly, and what performance are you seeing? | 1 | 0 | 2026-03-02T15:32:35 | spaceman_ | false | null | 0 | o88rq8u | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o88rq8u/ | false | 1 |
t1_o88rojl | Yeah, I was wondering too why it's without thinking | 1 | 0 | 2026-03-02T15:32:21 | NigaTroubles | false | null | 0 | o88rojl | false | /r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/o88rojl/ | false | 1 |
t1_o88ro69 | [removed] | 1 | 0 | 2026-03-02T15:32:18 | [deleted] | true | null | 0 | o88ro69 | false | /r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/o88ro69/ | false | 1 |
t1_o88rmqy | What is your current config? Are you using the old school Qwen VL 2.5/3 or new Qwen 3.5 with VL native? For video the size of the context window also matters quite a bit. | 1 | 0 | 2026-03-02T15:32:06 | vidibuzz | false | null | 0 | o88rmqy | false | /r/LocalLLaMA/comments/1riv5kc/whats_possible_with_video_now/o88rmqy/ | false | 1 |
t1_o88rkhq | That comes from using Claude Code or OpenCode mostly, alongside a couple summarisation/extraction tasks from long unstructured data.
For coding, context fills up real quick as my usual workflow is to run a couple prompts in “ask” mode, then plan, then let it rip on the codebase itself. Compacting context loses so much of what matters in my conversation, and creating smaller task files doesn't always solve the issue under 64k tokens unfortunately. My tokens per session average around 90k-100k depending on the project. For some projects it's possible to have clearly defined, well-scoped tasks, but most of my work is pretty explorative | 1 | 0 | 2026-03-02T15:31:47 | ndiphilone | false | null | 0 | o88rkhq | false | /r/LocalLLaMA/comments/1ri3y89/my_last_only_beef_with_qwen35_35b_a3b/o88rkhq/ | false | 1 |
t1_o88rhlq | `-Base` means it's a pre-trained completion model without instruction chatbot tuning? | 1 | 0 | 2026-03-02T15:31:23 | SufficientPie | false | null | 0 | o88rhlq | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88rhlq/ | false | 1 |
t1_o88reme | It's a reflection on the benchmarks they've posted. The model seems great for what it is, but it's not even close to 35b-a3b or 27b; you can feel the lack of general knowledge instantly. Could be good at agentic tasks though, but I haven't tested it yet. | 1 | 0 | 2026-03-02T15:30:59 | Big_Mix_4044 | false | null | 0 | o88reme | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88reme/ | false | 1 |
t1_o88r92j | Unsloths are listed in LM Studio already. Do I run them with default settings or should I experiment to get max speed? | 1 | 0 | 2026-03-02T15:30:13 | JollyJoker3 | false | null | 0 | o88r92j | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o88r92j/ | false | 1 |
t1_o88r86a | If you want to use the full 1.6b but it's too slow I've had success with this repo: [andimarafioti/faster-qwen3-tts: Real-time text-to-speech with Qwen3-TTS](https://github.com/andimarafioti/faster-qwen3-tts) | 1 | 0 | 2026-03-02T15:30:06 | greg-randall | false | null | 0 | o88r86a | false | /r/LocalLLaMA/comments/1r4nt7u/qwen3ttscpp/o88r86a/ | false | 1 |
t1_o88r5w0 | [deleted] | 1 | 0 | 2026-03-02T15:29:47 | [deleted] | true | null | 0 | o88r5w0 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88r5w0/ | false | 1 |
t1_o88r03u | Visual on this latest model family seems strong, even with the small models. | 1 | 0 | 2026-03-02T15:29:00 | Monkey_1505 | false | null | 0 | o88r03u | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o88r03u/ | false | 1 |
t1_o88qtt1 | Just tried AaryanK's instead of unsloth's and thinking is working now. Not sure if unsloth's has thinking disabled. | 1 | 0 | 2026-03-02T15:28:08 | IvnN7Commander | false | null | 0 | o88qtt1 | false | /r/LocalLLaMA/comments/1rivo6f/does_qwen35_4b_supports_thinking/o88qtt1/ | false | 1 |
t1_o88qsw5 | What benchmark is "coding"? Benchmarks are already unreliable, and you just made this even more arbitrary and obfuscated | 1 | 0 | 2026-03-02T15:28:00 | rm-rf-rm | false | null | 0 | o88qsw5 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o88qsw5/ | false | 1 |
t1_o88qrvs | What about long context? Isn’t that the hallmark of the new arch? | 1 | 0 | 2026-03-02T15:27:51 | Zc5Gwu | false | null | 0 | o88qrvs | false | /r/LocalLLaMA/comments/1rirts9/unslothqwen354bgguf_hugging_face/o88qrvs/ | false | 1 |
t1_o88qr4v | The scale of the graphs is misleading. | 1 | 0 | 2026-03-02T15:27:45 | Galigator-on-reddit | false | null | 0 | o88qr4v | false | /r/LocalLLaMA/comments/1ri6yhb/the_last_amd_gpu_firmware_update_together_with/o88qr4v/ | false | 1 |