name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_o8d0ewa | Qwen3.5 models are trained with multi-token prediction (MTP), which subsumes the use of a draft model, so this doesn't really apply anymore. MTP is already supported in vLLM and SGLang. | 1 | 0 | 2026-03-03T05:05:43 | victory_and_death | false | null | 0 | o8d0ewa | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8d0ewa/ | false | 1 |
t1_o8d0eo9 | The photo search use case alone made mine worth it.
I use OpenClaw connected to Telegram. There's an app called PhotoCHAT AI (Windows, Microsoft Store) that does CLIP-based natural language search across your entire photo library on your GPU. Fully local, nothing uploaded. Then on ClawHub (OpenClaw's skill marketplace) there's a `photochat-search` skill that wires OpenClaw into your PhotoCHAT library.
End result: I text my AI from my phone "find me photos of the kids playing with water on a sunny day" and it pulls matches from my local library and sends them to me on Telegram. Photos never leave my drive.
Before this I was doing the "open laptop, browse folders, squint at thumbnails" routine whenever someone asked for a specific photo. Now it takes about 10 seconds from my phone.
Is it over-engineered? Probably. Do I use it every week? Yeah, actually. The moment I found myself searching my photos from a restaurant because my wife wanted to show her friend a picture of our dog, I knew it had crossed from "cool project" to "actually useful."
The queries that work are surprisingly good too. Things like "moments that look candid, not posed" or "snow photos from when the kids were young." It's not keyword matching, PhotoCHAT actually understands the vibe of what you're describing. | 1 | 0 | 2026-03-03T05:05:40 | No_Department_3249 | false | null | 0 | o8d0eo9 | false | /r/LocalLLaMA/comments/1qklkby/what_is_your_actual_daily_use_case_for_local_llms/o8d0eo9/ | false | 1 |
t1_o8d09dz | Nice, I tried it. The 122b-a10b crashed on my vllm setup that runs the FP8 just fine. Maybe I have to update my vllm RC.
I'd be very interested in a REAP to about 65% to 75% of the 397b-a17b and then NVFP4 of that (A good size for 2x Blackwell Pro 6000) - or whatever leaves enough VRAM for like 2x max context.
Although I think at that level one would almost need to make domain specific versions (calibration data) to get optimal results.
Note: The savings on the 0.8b are hilariously small ;) | 1 | 0 | 2026-03-03T05:04:34 | reto-wyss | false | null | 0 | o8d09dz | false | /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8d09dz/ | false | 1 |
t1_o8d06d5 | Already have it, but I'm trying to get the best out of 9B | 1 | 0 | 2026-03-03T05:03:57 | Old-Sherbert-4495 | false | null | 0 | o8d06d5 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8d06d5/ | false | 1 |
t1_o8d02hf | Here's another one to resurrect it:
"How does the Airbus A350 reproduce when mating with another A350?"
The Airbus A350 is a twin-engine wide-body airliner that was designed to be able to mate with other A350s during flight. This allows for greater flexibility in terms of aircraft configuration and can help improve fuel efficiency and reduce emissions.
When two A350s are mated together, they will share some common systems such as the hydraulic system, electrical system, air conditioning system, and cabin pressurization system. They will also share some common avionics equipment such as the Flight Management System (FMS), Navigation System, and Communication Systems.
However, each A350 has its own unique set of systems and components that it uses, so there may be differences between the two aircraft even if they are mated together. These differences can include things like different engine types or configurations, different wing designs, and different fuselage shapes.
It's important to note that while the A350 is capable of being mated with other A350s, it is not typically used in this way due to the cost and complexity of doing so. Instead, most A350 flights involve only one aircraft at a time. | 1 | 0 | 2026-03-03T05:03:08 | Pro-editor-1105 | false | null | 0 | o8d02hf | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8d02hf/ | false | 1 |
t1_o8czucb | I have 16 GB RAM and 6 GB VRAM (1660 super GPU) :')
My fav models to run are qwen models under 10B. A modified version of Gpt oss 20b moe, a few Ministral models, Gemma 3 models with QAT, Meta llama 3.1 8b, etc. | 1 | 0 | 2026-03-03T05:01:29 | itsdigimon | false | null | 0 | o8czucb | false | /r/LocalLLaMA/comments/1rirtyy/qwen35_9b_and_4b_benchmarks/o8czucb/ | false | 1 |
t1_o8czqcf | For speculative decoding support, all you need is the same architecture and the same tokenizer. In llama.cpp the flag for it is -md (the name of a file, for example Qwen_Qwen3.5-2B-Q4_K_M.gguf). But current Qwen models have built-in speculative layers (MTP) that are indeed not supported by llama.cpp, and this is exactly the reason why many people waited for smaller models. +20% speed is not much, but better than nothing and worth using. | 1 | 0 | 2026-03-03T05:00:41 | Hougasej | false | null | 0 | o8czqcf | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8czqcf/ | false | 1 |
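A minimal invocation along those lines, as a sketch: the model filenames and paths here are assumptions, and `--draft-max` (how many tokens the draft proposes per verification step) is optional tuning.

```bash
llama-server -m Qwen_Qwen3.5-9B-Q4_K_M.gguf \
  -md Qwen_Qwen3.5-2B-Q4_K_M.gguf \
  --draft-max 16  # tokens the 2B drafts before the 9B verifies them in one pass
```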
t1_o8cznku | >Open weights models were tested against first party providers on Openrouter where that was an option; otherwise, against high quality third parties like Parasail and Together. Anthropic, Gemini, Mistral, OpenAI, and xAI were tested directly against their creators’ endpoints.
Does this mean the prices for open models are based on what's listed on OpenRouter? If so, then oof. The 27B and 35B Qwen models are way overpriced on there compared to the larger models. | 1 | 0 | 2026-03-03T05:00:08 | Aerroon | false | null | 0 | o8cznku | false | /r/LocalLLaMA/comments/1rj3yzz/coding_power_ranking_2602/o8cznku/ | false | 1 |
t1_o8czn2t | Any perf & security tests of this vs [IronClaw](https://github.com/nearai/ironclaw)/ [ZeroClaw](https://github.com/zeroclaw-labs/zeroclaw)? Maybe even [ZClaw](https://github.com/tnm/zclaw) is better? | 1 | 0 | 2026-03-03T05:00:02 | tomByrer | false | null | 0 | o8czn2t | false | /r/LocalLLaMA/comments/1rjfxr0/i_made_a_thing/o8czn2t/ | false | 1 |
t1_o8czj2b | Probably not, MoE is some black magic. | 1 | 0 | 2026-03-03T04:59:13 | Piyh | false | null | 0 | o8czj2b | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8czj2b/ | false | 1 |
t1_o8czf0j | Use JanAI if you want an open-source alternative to LM Studio. They also train their own small models (Jan). | 1 | 0 | 2026-03-03T04:58:24 | o0genesis0o | false | null | 0 | o8czf0j | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8czf0j/ | false | 1 |
t1_o8cz84b | ELI5 please 🙏 | 1 | 0 | 2026-03-03T04:57:01 | Demonicated | false | null | 0 | o8cz84b | false | /r/LocalLLaMA/comments/1rjg514/qwen35_100b_part_ii_nvfp4_blackwell_is_up/o8cz84b/ | false | 1 |
t1_o8cz826 | 27B is kinda cracked | 1 | 0 | 2026-03-03T04:57:00 | TechnicianHot154 | false | null | 0 | o8cz826 | false | /r/LocalLLaMA/comments/1rivckt/visualizing_all_qwen_35_vs_qwen_3_benchmarks/o8cz826/ | false | 1 |
t1_o8cz3xd | Wow my computer screen just committed sudoku reading this | 1 | 0 | 2026-03-03T04:56:11 | the_fabled_bard | false | null | 0 | o8cz3xd | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cz3xd/ | false | 1 |
t1_o8cz3t3 | LM Studio is definitely an easier point of entry into local models than the extra model management steps and terminal usage with llama.cpp. I can recommend AnythingLLM as a good UI similar to LM Studio's that works with any inference service, including llama.cpp, frontier and capable models, and everything under the OSS sun. | 1 | 0 | 2026-03-03T04:56:09 | One-Cheesecake389 | false | null | 0 | o8cz3t3 | false | /r/LocalLLaMA/comments/1riwhcf/psa_lm_studios_parser_silently_breaks_qwen35_tool/o8cz3t3/ | false | 1 |
t1_o8cz319 | tbh, I haven't tried those yet. I don't see the point of running too large of a model that maxes out my system (3080, ryzen 7 3700x, 32gb ram), even if it can handle it. For normal and coding use cases, you'll probably never end up making full use of all the parameters and you sacrifice performance by a big margin.
However, if your use case calls for it, give it a spin. I've removed all the other ones I had since yesterday and have kept the 9B and 2B variants. The lmstudio-community/qwen3.5 2B Q4_K_M responds way too fast (~155 tok/sec), should be good for generic use cases, and 9B Q4_K_M for advanced stuff at the cost of speed (~45 tok/sec). I tried 0.8 as well but it loses way too many parameters (development related), which I would prefer to have. Plus the context windows are very large. | 1 | 0 | 2026-03-03T04:56:00 | Megatronatfortnite | false | null | 0 | o8cz319 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cz319/ | false | 1 |
t1_o8cyzww | 100?? what kind of beast of a gpu are u rockin | 1 | 0 | 2026-03-03T04:55:23 | Old-Sherbert-4495 | false | null | 0 | o8cyzww | false | /r/LocalLLaMA/comments/1rgxr0v/qwen_35_is_multimodal_here_is_how_to_enable_image/o8cyzww/ | false | 1 |
t1_o8cyxz3 | **Qwen3 Coder 30B A3B Instruct**
is a coder-specialized model designed for code generation and editing. It is well-suited for writing code, but it does not include a built-in “thinking” (reasoning) capability.
**Qwen3.5-35B-A3B**
supports enabling or disabling a thinking mode. However, as a general-purpose model, it is not specifically optimized for code generation or editing. That said, when integrated with an agent, it performs well, and recent agentic-related issues have been fixed.
Additionally, its knowledge coverage is improved compared to the 30B model, and it also includes VL (Vision-Language) capabilities. | 1 | 0 | 2026-03-03T04:54:59 | Ok_Helicopter_2294 | false | null | 0 | o8cyxz3 | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8cyxz3/ | false | 1 |
t1_o8cyucm | I would love to see an illustration of this plane. It would put all the wacky interwar Italian designs to shame. | 1 | 0 | 2026-03-03T04:54:16 | molniya | false | null | 0 | o8cyucm | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cyucm/ | false | 1 |
t1_o8cyscw | sure, let me edit the post | 1 | 0 | 2026-03-03T04:53:52 | alichherawalla | false | null | 0 | o8cyscw | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8cyscw/ | false | 1 |
t1_o8cys3d | dropping context size doesn't help. even at 1024 its still 22tps. I'd like to stay at q8 and get the most out of it | 1 | 0 | 2026-03-03T04:53:48 | Old-Sherbert-4495 | false | null | 0 | o8cys3d | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8cys3d/ | false | 1 |
t1_o8cyqer | \>[Get a Token](https://www.google.com/url?q=https%3A%2F%2Fwww.missinglink.build%2Fpricing.html) by signing up for the free trial or purchase a bundle at [https://www.missinglink.build](https://www.google.com/url?q=https%3A%2F%2Fwww.missinglink.build) this will let you download the optimized A100 Colab wheels. **Replace the \*\*\*\*\*\* text with your token in the cell below.** ( see [Additional Purchase Options](https://www.google.com/url?q=https%3A%2F%2Fwww.missinglink.build%2Fpricing.html)
Selling prebuilt wheel for an opensource repository? No for me, but thanks. | 1 | 0 | 2026-03-03T04:53:28 | NandaVegg | false | null | 0 | o8cyqer | false | /r/LocalLLaMA/comments/1rjdob7/generate_3d_models_with_trellis2_in_colab_working/o8cyqer/ | false | 1 |
t1_o8cyit4 | Ahhh, that sounds like it must be what it is, since one of the other ones I looked up was Gemma3 27b (to see if this was just some weird Qwen thing of this model, or if it was for other unrelated models as well), which as luck would have it I think is also a multi-modal model. And I can't remember which 3rd unrelated model I looked up, but I think it was maybe also a multi-modal one. | 1 | 0 | 2026-03-03T04:51:58 | DeepOrangeSky | false | null | 0 | o8cyit4 | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cyit4/ | false | 1 |
t1_o8cyft6 | 3.5 is insanely good, but it seems to matter what framework you use. E.g. opencode is kinda shit, meanwhile in Claude Code this smacks. | 1 | 0 | 2026-03-03T04:51:23 | ThinkExtension2328 | false | null | 0 | o8cyft6 | false | /r/LocalLLaMA/comments/1rjg5qm/qwen3535ba3b_vs_qwen3_coder_30b_a3b_instruct_for/o8cyft6/ | false | 1 |
t1_o8cybfz | It's great but I hate how it takes 5x as long to think compared to gpt-oss | 1 | 0 | 2026-03-03T04:50:31 | koenafyr | false | null | 0 | o8cybfz | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8cybfz/ | false | 1 |
t1_o8cya7g | roocode works for me | 1 | 0 | 2026-03-03T04:50:17 | Odd-Ordinary-5922 | false | null | 0 | o8cya7g | false | /r/LocalLLaMA/comments/1rjfijf/cline_not_playing_well_with_the_freshly_dropped/o8cya7g/ | false | 1 |
t1_o8cy8k7 | Yeah, PocketPal in the App Store. It would be good if it were just @myname. Sadly it's just @@@@@@@@@@@@@@@ | 1 | 0 | 2026-03-03T04:49:57 | stopbanni | false | null | 0 | o8cy8k7 | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cy8k7/ | false | 1 |
t1_o8cy5lr | or maybe 35ba3b, faster and (could be) better output | 1 | 0 | 2026-03-03T04:49:22 | Conscious_Chef_3233 | false | null | 0 | o8cy5lr | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8cy5lr/ | false | 1 |
t1_o8cy535 | You should probably disclose you are the author of the app. | 1 | 0 | 2026-03-03T04:49:16 | brakx | false | null | 0 | o8cy535 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8cy535/ | false | 1 |
t1_o8cy4xt | [deleted] | 1 | 0 | 2026-03-03T04:49:14 | [deleted] | true | null | 0 | o8cy4xt | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8cy4xt/ | false | 1 |
t1_o8cy1yt | Yea, I'm using a Mac Studio, so that side of things should be super simple and easy I think. Thus wanting to just jump straight to llama.cpp. I guess maybe if I browse around a bit I can find some beginner's guides, etc. My one worry with it is this: I'm sure that if I read some of those, combined with asking Gemini etc. some questions to help me set it up, I could set it up tonight if I wanted to. It's just, like I said in the OP, I want to make sure there aren't any beginner security issues I need to be aware of if I somehow set it up wrong, since I'd presumably have to type all sorts of random commands into the terminal without having a clue what I'm typing, just taking Gemini's word for it, or the word of random posts that only halfway explain things. So I want to make sure there aren't stupid beginner mistakes I could make that would be really terrible.
Like for example with vLLM, you have to serve stuff to yourself when you use it, or something like that, right? But I don't know how any of that stuff works, so I want to make sure I don't accidentally open up an external server somehow if I don't use the right settings (probably an idiotic question). I just want to make sure I don't somehow fuck this thing up to some insane degree, since I have no clue what I'm doing | 1 | 0 | 2026-03-03T04:48:38 | DeepOrangeSky | false | null | 0 | o8cy1yt | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cy1yt/ | false | 1 |
t1_o8cy1ed | Yeah, your setup is more than enough to run this locally. I'd go with a local embedding model like BGE or nomic-embed, chunk your PDFs, and use something like LlamaIndex to build the RAG pipeline. For the LLM itself, a quantized Llama 3.1 70B or a smaller model like Qwen2.5 32B would run great on that 4090 and handle those types of queries perfectly. | 1 | 0 | 2026-03-03T04:48:31 | Entire-Tell5716 | false | null | 0 | o8cy1ed | false | /r/LocalLLaMA/comments/1ribaws/offline_llm_best_pipeline_tools_to_query/o8cy1ed/ | false | 1 |
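A minimal sketch of that pipeline with LlamaIndex, assuming a local OpenAI-compatible endpoint serves the LLM; the model id, port, and `pdfs/` folder are placeholders:

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader, Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.openai_like import OpenAILike

# Fully local embedding model (BGE small); nomic-embed slots in the same way.
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
# Point the LLM at whatever local server hosts the model (llama.cpp, vLLM, ...).
Settings.llm = OpenAILike(model="qwen2.5-32b", api_base="http://localhost:8000/v1",
                          api_key="unused", is_chat_model=True)

docs = SimpleDirectoryReader("pdfs/").load_data()  # parses and chunks the PDFs
index = VectorStoreIndex.from_documents(docs)      # embeds chunks into a vector index
print(index.as_query_engine().query("What does the contract say about termination?"))
```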
t1_o8cxxj8 | > You really need to set the presence penalty just like in the Qwen docs. I don’t know why the unsloth doc left this setting out, as it prevents the overthinking issue.
How do you set that in LMStudio? | 1 | 0 | 2026-03-03T04:47:46 | ZootAllures9111 | false | null | 0 | o8cxxj8 | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8cxxj8/ | false | 1 |
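Whether or not the desktop UI exposes it, LM Studio's local server speaks the OpenAI chat-completions API, which accepts presence_penalty as a request field; a minimal sketch (the port is LM Studio's default, the model id and penalty value are placeholders to tune):

```python
import requests

# LM Studio's local server accepts OpenAI-style sampler fields per request.
resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "qwen3.5-27b",   # placeholder: the id of whatever model is loaded
        "messages": [{"role": "user", "content": "hello"}],
        "presence_penalty": 1.5,  # discourages tokens already present in the context
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```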
t1_o8cxvzd | That's exactly what's happening. I looked it up and it's a vector processor. So just like on a Cray, you have to fill the vector to make the most of it. | 1 | 0 | 2026-03-03T04:47:27 | fallingdowndizzyvr | false | null | 0 | o8cxvzd | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8cxvzd/ | false | 1 |
t1_o8cxvvc | The good AI models give you a fair bit of free inference, of a low order, if you can tap into it. | 1 | 0 | 2026-03-03T04:47:26 | Ok-Adhesiveness-4141 | false | null | 0 | o8cxvvc | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8cxvvc/ | false | 1 |
t1_o8cxvjg | Does presence_penalty exist in LMStudio? | 1 | 0 | 2026-03-03T04:47:23 | ZootAllures9111 | false | null | 0 | o8cxvjg | false | /r/LocalLLaMA/comments/1rit2fy/reverted_from_qwen35_27b_back_to_qwen3_8b/o8cxvjg/ | false | 1 |
t1_o8cxugc | It's the mmproj -- the thing that enables the model to read multimedia files. On huggingface you download it separately. | 1 | 0 | 2026-03-03T04:47:10 | kiwibonga | false | null | 0 | o8cxugc | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cxugc/ | false | 1 |
t1_o8cx9i3 | how good is brave for search? | 1 | 0 | 2026-03-03T04:43:01 | Odd-Ordinary-5922 | false | null | 0 | o8cx9i3 | false | /r/LocalLLaMA/comments/1rjfvfx/qwen3535b_is_very_resourceful_web_search_wasnt/o8cx9i3/ | false | 1 |
t1_o8cx97w | Does this mean that it will enable people to make fine-tunes of it? Can people already make fine-tunes of models without having the base-model version, or is the base-model being available basically required, and thus why this is a big deal? I don't know much about the technical side of how fine-tuning works yet, so, I am curious | 1 | 0 | 2026-03-03T04:42:58 | DeepOrangeSky | false | null | 0 | o8cx97w | false | /r/LocalLLaMA/comments/1rj4zy3/stepfun_releases_2_base_models_for_step_35_flash/o8cx97w/ | false | 1 |
t1_o8cx7v2 | I mean if you can skip straight to llama.cpp, that's probably the way to go. I didn't go that way myself because of my mix of GPUs and swapping models and changing active GPUs with a click. Plus I like having a GUI.
Windows ended up being my main issue. | 1 | 0 | 2026-03-03T04:42:42 | lemondrops9 | false | null | 0 | o8cx7v2 | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cx7v2/ | false | 1 |
t1_o8cx59p | [removed] | 1 | 0 | 2026-03-03T04:42:10 | [deleted] | true | null | 0 | o8cx59p | false | /r/LocalLLaMA/comments/1rj5czr/free_image_models_that_can_run_on_12gb_vram/o8cx59p/ | false | 1 |
t1_o8cx14k | Ollama was that shitty one that embeds itself in Windows startup with no setting to remove it, right? Yeah I uninstalled that malware immediately. | 1 | 0 | 2026-03-03T04:41:22 | mwoody450 | false | null | 0 | o8cx14k | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cx14k/ | false | 1 |
t1_o8cwwxx | Thank you for presenting your finding. This sounds promising but I cannot judge it yet. | 1 | 0 | 2026-03-03T04:40:34 | crantob | false | null | 0 | o8cwwxx | false | /r/LocalLLaMA/comments/1rj2y4n/you_can_monitor_lora_training_quality_without/o8cwwxx/ | false | 1 |
t1_o8cwvj2 | Oddly enough, on Linux for me Ollama is faster than LM Studio, in fact it's the fastest of everything I've tried. But their thing of needing a Modelfile, that's just stupid. | 1 | 0 | 2026-03-03T04:40:17 | Savantskie1 | false | null | 0 | o8cwvj2 | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cwvj2/ | false | 1 |
t1_o8cwnep | You can use the filters feature of llama-swap, which has a setParamByID variant that allows you to change the model ID parameters without restarting the model. | 1 | 0 | 2026-03-03T04:38:41 | Dazzling_Equipment_9 | false | null | 0 | o8cwnep | false | /r/LocalLLaMA/comments/1rj8zuh/manage_qwen_35_model_settings_with_litellm_proxy/o8cwnep/ | false | 1 |
t1_o8cwna4 | ...good question. So I checked and they appear to be the same!
I don't have the resources to try this idea (you have to actually run K2 on infra you control to get the logits) but it might have a lot of potential. | 1 | 0 | 2026-03-03T04:38:39 | ramendik | false | null | 0 | o8cwna4 | false | /r/LocalLLaMA/comments/1rj08k1/k2_not_25_distillation_still_worth_it/o8cwna4/ | false | 1 |
t1_o8cwl09 | Q8_K_XL, 5.2 TPS with 128k context
8x AMD Radeon AI PRO R9700 32GB, PCIe 4.0 x16
2TB DDR4-2666, 16 channels = ~340 GB/s (the system supports up to 3200 MT/s, but slower memory downgrades the clock speed system-wide)
2x EPYC 7662, 64 cores each
8TB NVMe Samsung 100 Pro
3x 1500W 80 Plus Titanium PSUs (GPUs are 300W but run at 80-90W average during inference)
$16,000 build cost | 1 | 0 | 2026-03-03T04:38:13 | Subject-Forever-6138 | false | null | 0 | o8cwl09 | false | /r/LocalLLaMA/comments/1rdv3v0/running_kimi_k25_tell_us_your_build_quant/o8cwl09/ | false | 1 |
t1_o8cwkcg | Can you please share which docker image and the full docker command along with it? That's what I'm looking for please. | 1 | 0 | 2026-03-03T04:38:06 | texasdude11 | false | null | 0 | o8cwkcg | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8cwkcg/ | false | 1 |
t1_o8cwi0w | Its SWE score is 80 | 1 | 0 | 2026-03-03T04:37:39 | Fragrant-Dark5656 | false | null | 0 | o8cwi0w | false | /r/LocalLLaMA/comments/1rj2e3j/spongebob_art_with_qwen_35_9b_vs_opus_46/o8cwi0w/ | false | 1 |
t1_o8cwg74 | I am yes. | 1 | 0 | 2026-03-03T04:37:18 | Laabc123 | false | null | 0 | o8cwg74 | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8cwg74/ | false | 1 |
t1_o8cwfqh | Yea, but I kind of want to stick to open source with this stuff if I can. So, I figure I will just skip the LM Studio middle step and jump straight from Ollama to llama.cpp, once I learn how. I know I sound like a clueless moron, for now, but, I can learn things pretty quickly once I find a decent starting point or thing that explains all the lingo and basic concepts and where I get to where I can quickly learn all the things I need to learn. I just don't know where to even begin with any of it yet, since I have never really been into computers or been a power-user or anything like that, so far. But, once I figure things out a little, I'd rather just use llama.cpp and vLLM and stuff like that, and not use LM Studio, if I can avoid it. | 1 | 0 | 2026-03-03T04:37:12 | DeepOrangeSky | false | null | 0 | o8cwfqh | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cwfqh/ | false | 1 |
t1_o8cwf50 | Was just poking around with the 9B myself, both trying to prompt-ngram-draft and use the 0.8B as a draft model. Where do I get the prediction rate statistic(s)? | 1 | 0 | 2026-03-03T04:37:05 | 4onen | false | null | 0 | o8cwf50 | false | /r/LocalLLaMA/comments/1rgp2nu/anyone_doing_speculative_decoding_with_the_new/o8cwf50/ | false | 1 |
t1_o8cwb7o | Yep. Goes into infinite loop and chews through all context.
I think repeat penalty should be 1.5. Requires some trial and error to find the right values.
The recommended values by them are not good. | 1 | 0 | 2026-03-03T04:36:19 | giant3 | false | null | 0 | o8cwb7o | false | /r/LocalLLaMA/comments/1rjcfdk/qwen359b_q4km_in_lm_studio_thinking_too_much/o8cwb7o/ | false | 1 |
t1_o8cw2ae | Usually they would give some dumbass response but this one literally just rejected me lmao | 1 | 0 | 2026-03-03T04:34:35 | Pro-editor-1105 | false | null | 0 | o8cw2ae | false | /r/LocalLLaMA/comments/1rjfixk/peak_answer/o8cw2ae/ | false | 1 |
t1_o8cvwmd | Not sure what you're expecting from a 1.5B model. But that is kind of funny. I doubt the larger models would say that... | 1 | 0 | 2026-03-03T04:33:26 | lemondrops9 | false | null | 0 | o8cvwmd | false | /r/LocalLLaMA/comments/1rjfixk/peak_answer/o8cvwmd/ | false | 1 |
t1_o8cvsgr | These models are a product of their era. The in-demand capability is agentic coding, analysis, research etc. If you’re looking for a model created specifically for chatting, they might be out there, but obviously Qwen3.5 isn’t it. Although if you turn off thinking they probably get much closer. | 1 | 0 | 2026-03-03T04:32:37 | datbackup | false | null | 0 | o8cvsgr | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8cvsgr/ | false | 1 |
t1_o8cvpu2 | But I need to know which model works cause I enabled all of the ones I could find in llama.cpp and none of them are working for some reason! I won't stop shaking you until you answer me!
Lol | 1 | 0 | 2026-03-03T04:32:05 | FatheredPuma81 | false | null | 0 | o8cvpu2 | false | /r/LocalLLaMA/comments/1rit2wx/llamacpp_qwen35_using_qwen3508b_as_a_draft_model/o8cvpu2/ | false | 1 |
t1_o8cvio7 | As I understand it, Ollama is a cheap wrapper around llama.cpp. It's slow and, as you have discovered, uses up more VRAM for no reason that I know of. If it were faster or better in some way there would be more of a debate.
Try LM Studio, it's just as easy to set up but has way more options, runs faster, and can use normal GGUF models no problem with no conversion needed. | 1 | 0 | 2026-03-03T04:30:40 | lemondrops9 | false | null | 0 | o8cvio7 | false | /r/LocalLLaMA/comments/1rjfqib/why_are_the_ollama_quants_of_local_llm_models/o8cvio7/ | false | 1 |
t1_o8cvffe | basically me | 1 | 0 | 2026-03-03T04:30:03 | Pro-editor-1105 | false | null | 0 | o8cvffe | false | /r/LocalLLaMA/comments/1rj8x1q/qwen35_4b_overthinking_to_say_hello/o8cvffe/ | false | 1 |
t1_o8cvc3g | [removed] | 1 | 0 | 2026-03-03T04:29:24 | [deleted] | true | null | 0 | o8cvc3g | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8cvc3g/ | false | 1 |
t1_o8cv42g | Drop the context size or drop the quant | 1 | 0 | 2026-03-03T04:27:52 | nakedspirax | false | null | 0 | o8cv42g | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8cv42g/ | false | 1 |
t1_o8cv24h | | 1 | 0 | 2026-03-03T04:27:30 | NotAMuZ | false | null | 0 | o8cv24h | false | /r/LocalLLaMA/comments/1ri2irg/breaking_today_qwen_35_small/o8cv24h/ | false | 1 |
t1_o8cuz3f | What info goes in RAG context window? | 1 | 0 | 2026-03-03T04:26:54 | Leelaah_saiee | false | null | 0 | o8cuz3f | false | /r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/o8cuz3f/ | false | 1 |
t1_o8cuxkx | Usually the way to handle this is to feed it headers, table size, and basic information on the csv and nothing else so it's forced to use tool calls, but I'm not sure how you could do this in OWUI. You could probably vibe code a comparable UI with much better CSV handling if you have a couple hours. That's ultimately the route I took | 1 | 0 | 2026-03-03T04:26:37 | MiyamotoMusashi7 | false | null | 0 | o8cuxkx | false | /r/LocalLLaMA/comments/1rjefqu/data_analysis_from_a_csv_gpt0ss120b/o8cuxkx/ | false | 1 |
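The "headers plus basic info" preamble is easy to script; a sketch of what could replace the raw file in the prompt, so the model has to tool-call for actual rows (pandas assumed; the helper name and filename are made up):

```python
import pandas as pd

def csv_preview(path: str, n: int = 3) -> str:
    """Compact CSV summary for the system prompt; the full data stays behind tools."""
    df = pd.read_csv(path)
    cols = ", ".join(f"{c} ({t})" for c, t in df.dtypes.astype(str).items())
    return (
        f"CSV: {path}\n"
        f"Size: {len(df)} rows x {len(df.columns)} columns\n"
        f"Columns: {cols}\n"
        f"First {n} rows:\n{df.head(n).to_string(index=False)}"
    )

print(csv_preview("sales.csv"))
```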
t1_o8cuw2c | Could it be that the newer version of llama.cpp broke the old model? | 1 | 0 | 2026-03-03T04:26:18 | jikilan_ | false | null | 0 | o8cuw2c | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cuw2c/ | false | 1 |
t1_o8cutf9 | The 4b is good at one shotting some things though, like this keyboard:
[Web Audio Piano](https://red-vinita-86.tiiny.site/) | 1 | 0 | 2026-03-03T04:25:48 | c64z86 | false | null | 0 | o8cutf9 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cutf9/ | false | 1 |
t1_o8cussu | Useful info, I hadn't considered it could be dated model architectures. Thank you. That suggests this may be an interaction bug in my specific topology/path, not non-P2P universally. I’m focused on reproducing a correctness issue first (dual GPU fails >2K ctx, single GPU passes), then I’ll compare alternate paths/models. | 1 | 0 | 2026-03-03T04:25:40 | MaleficentMention703 | false | null | 0 | o8cussu | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cussu/ | false | 1 |
t1_o8cusqc | Are you using vllm docker image? | 1 | 0 | 2026-03-03T04:25:39 | texasdude11 | false | null | 0 | o8cusqc | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8cusqc/ | false | 1 |
t1_o8curem | I can't get it to work on opencode to save my life | 1 | 0 | 2026-03-03T04:25:24 | KingSanty | false | null | 0 | o8curem | false | /r/LocalLLaMA/comments/1rezq19/qwen3535b_on_apple_silicon_how_i_got_2x_faster/o8curem/ | false | 1 |
t1_o8cuklg | The fully logged failing repro is Llama-3.3-70B GGUF (Q4_K_M) with dual GPU layer split. I should have worded that more precisely than ‘all 70B’. I’ll update with additional model filenames as I run them, however I am waiting on the new riser still. | 1 | 0 | 2026-03-03T04:24:06 | MaleficentMention703 | false | null | 0 | o8cuklg | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cuklg/ | false | 1 |
t1_o8cu9yn | curious, how do u know that there is negligible difference between Q8 and Q4? Any sort of benchs? | 1 | 0 | 2026-03-03T04:22:04 | Old-Sherbert-4495 | false | null | 0 | o8cu9yn | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8cu9yn/ | false | 1 |
t1_o8cu9zq | I've done some testing on 16GB Ryzen 3500u laptop with nearly useless Vega8 iGPU on linux, kernel 6.1.x.
With zram, MoE models up to 12-13GB total and A2B-A3B active params run fast enough to use*, when thinking is disabled (2-7 tps).
*Use in my case means generating scaffolding, functions and serving as a faster alternative to coding websearch.
No small model can one-shot programs in my domain, so all these excited people are annoying. | 1 | 0 | 2026-03-03T04:22:04 | crantob | false | null | 0 | o8cu9zq | false | /r/LocalLLaMA/comments/1rj1ni2/gpu_poor_folks16gb_whats_your_setup_for_coding/o8cu9zq/ | false | 1 |
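For anyone replicating the zram side of this, a common manual recipe on a stock kernel, as a sketch (the size is a placeholder, and many distros automate this via zram-generator):

```bash
sudo modprobe zram                                   # creates /dev/zram0
sudo zramctl /dev/zram0 --algorithm zstd --size 12G  # compressed block device in RAM
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0                # prefer zram over any disk swap
```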
t1_o8cu7ai | Memory prices are so high because it's a cartel and they talk to each other for price fixing... they want more revenue because the USD loses so much value... | 1 | 0 | 2026-03-03T04:21:32 | snapo84 | false | null | 0 | o8cu7ai | false | /r/LocalLLaMA/comments/1rj39se/intelligence_density_per_gb_is_increasing_and_i/o8cu7ai/ | false | 1 |
t1_o8cu6lw | Expected behavior, and you'll be out of my mind the instant I stop getting replies from you, just like you have every time there's been a gap in your responses lol. | 1 | 0 | 2026-03-03T04:21:25 | Savantskie1 | false | null | 0 | o8cu6lw | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cu6lw/ | false | 1 |
t1_o8cu4fs | Like RVC, which requires training a specialized speech model and adjusting the pitch, is there an option to upload a reference audio for conversion? | 1 | 0 | 2026-03-03T04:21:00 | Round-Republic3465 | false | null | 0 | o8cu4fs | false | /r/LocalLLaMA/comments/1o810id/anyone_found_a_open_source_voice_changer_not/o8cu4fs/ | false | 1 |
t1_o8cu1cb | lol bro is posting here how he solved running LLMs on his GPU with his fancy CUDA compatibility layer, and all the while it’s not even in use, the model is offloaded to CPU 🤣 | 1 | 0 | 2026-03-03T04:20:25 | __JockY__ | false | null | 0 | o8cu1cb | false | /r/LocalLLaMA/comments/1rj0dsf/running_llms_on_huawei_ascend_without_rewriting/o8cu1cb/ | false | 1 |
t1_o8cu0cu | Yes | 1 | 0 | 2026-03-03T04:20:13 | Laabc123 | false | null | 0 | o8cu0cu | false | /r/LocalLLaMA/comments/1rii2pd/current_state_of_qwen35122ba10b/o8cu0cu/ | false | 1 |
t1_o8ctu3x | maybe it is 3.5 27b. it ends the reasoning with weird characters like that. | 1 | 0 | 2026-03-03T04:19:03 | de4dee | false | null | 0 | o8ctu3x | false | /r/LocalLLaMA/comments/1rj6m71/qwen_35_27b_a_testament_to_the_transformer/o8ctu3x/ | false | 1 |
t1_o8ctoke | Then I said I couldn't get lm studio to work because of these things then blamed myself?
Cool fight. You can have it ain't worth my time | 1 | 0 | 2026-03-03T04:18:00 | nakedspirax | false | null | 0 | o8ctoke | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ctoke/ | false | 1 |
t1_o8ctjvk | try [https://huggingface.co/mradermacher/Qwen3-8B-heretic-i1-GGUF](https://huggingface.co/mradermacher/Qwen3-8B-heretic-i1-GGUF) with q4_k_m. | 1 | 0 | 2026-03-03T04:17:07 | irisnyxis | false | null | 0 | o8ctjvk | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8ctjvk/ | false | 1 |
t1_o8ctdw8 | no reason to run it at Q8 honestly. 5080 with 16gb vram, run at Q4 at about 120t/s. Or you can just run it at Q5 if you want higher quality. | 1 | 0 | 2026-03-03T04:15:58 | fishylord01 | false | null | 0 | o8ctdw8 | false | /r/LocalLLaMA/comments/1rjff88/how_do_i_get_the_best_speed_out_of_qwen_35_9b_in/o8ctdw8/ | false | 1 |
t1_o8ctcox | | 1 | 0 | 2026-03-03T04:15:44 | Savantskie1 | false | null | 0 | o8ctcox | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ctcox/ | false | 1 |
t1_o8ctbud | That’s fair. You can use it for simple things like sentiment / intent detection / simple classification stuff. Very little output tokens. There most of the small models are good enough. The newer ones are better at instruction following/nuance in the original prompts | 1 | 0 | 2026-03-03T04:15:34 | Maximum_Low6844 | false | null | 0 | o8ctbud | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8ctbud/ | false | 1 |
t1_o8ctbqn | I am having a very hard time with qwen3.5-122b, and I have only ever used llama.cpp, so I would say you aren't quite right. | 1 | 0 | 2026-03-03T04:15:33 | plopperzzz | false | null | 0 | o8ctbqn | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8ctbqn/ | false | 1 |
t1_o8ct69w | Dude, a stardockengineer should be able to afford one of those. That's an in demand profession. In fact, it should be part of your standard kit shouldn't it? | 1 | 0 | 2026-03-03T04:14:31 | fallingdowndizzyvr | false | null | 0 | o8ct69w | false | /r/LocalLLaMA/comments/1rj3i8m/strix_halo_npu_performance_compared_to_gpu_and/o8ct69w/ | false | 1 |
t1_o8ct5z8 | I haven't done much testing with it yet, just trying it out myself after using vanilla opencode previously. | 1 | 0 | 2026-03-03T04:14:28 | suicidaleggroll | false | null | 0 | o8ct5z8 | false | /r/LocalLLaMA/comments/1rjf4zm/reasoning_in_cloud_coding_with_local/o8ct5z8/ | false | 1 |
t1_o8ct5h7 | you can run it using Off Grid: [https://apps.apple.com/us/app/off-grid-local-ai/id6759299882](https://apps.apple.com/us/app/off-grid-local-ai/id6759299882)
the build for qwen3.5 hasn't been approved, so you can build from source here: [https://github.com/alichherawalla/off-grid-mobile-ai](https://github.com/alichherawalla/off-grid-mobile-ai) | 1 | 0 | 2026-03-03T04:14:22 | alichherawalla | false | null | 0 | o8ct5h7 | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8ct5h7/ | false | 1 |
t1_o8csy7q | hey man, you should type the following words into google: "llm benchmarks", then look at scores of models with and without thinking | 1 | 0 | 2026-03-03T04:13:01 | Distinct_Lion7157 | false | null | 0 | o8csy7q | false | /r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o8csy7q/ | false | 1 |
t1_o8csxma | That's just me when I'm overthinking social situations. 😬 | 1 | 0 | 2026-03-03T04:12:54 | Traditional_Train501 | false | null | 0 | o8csxma | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8csxma/ | false | 1 |
t1_o8csx9h | How does one run an llm on an iPhone | 1 | 0 | 2026-03-03T04:12:50 | quietsubstrate | false | null | 0 | o8csx9h | false | /r/LocalLLaMA/comments/1rjec8a/qwen35_on_a_mid_tier_300_android_phone/o8csx9h/ | false | 1 |
t1_o8cswyz | GO CHINA GOO!! | 1 | 0 | 2026-03-03T04:12:46 | Foreign-Dig-2305 | false | null | 0 | o8cswyz | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8cswyz/ | false | 1 |
t1_o8csu2p | Guilty over what? I just listed the reasons why I couldn't use it and here you are continuing to attack me. Look at yourself. | 1 | 0 | 2026-03-03T04:12:13 | nakedspirax | false | null | 0 | o8csu2p | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8csu2p/ | false | 1 |
t1_o8csm24 | If you were you wouldn’t have jumped to insulting right off the bat. Don’t pretend to be disabled just to gain the high ground when you feel guilty. I can tell the kind of person you are just by that interaction alone. Grow up. | 1 | 0 | 2026-03-03T04:10:40 | Savantskie1 | false | null | 0 | o8csm24 | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8csm24/ | false | 1 |
t1_o8csfiv | You're running your model on your phone? I didn't know this was a thing. And is being spammed with @s that bad? I spend half my day getting @s from slack, paging apps, and discord. 🙃 | 1 | 0 | 2026-03-03T04:09:26 | Traditional_Train501 | false | null | 0 | o8csfiv | false | /r/LocalLLaMA/comments/1rirlau/breaking_the_small_qwen35_models_have_been_dropped/o8csfiv/ | false | 1 |
t1_o8csdy5 | What hardware are you running this on? I've got a 4080 (albeit the laptop version) but it takes like 30 seconds for the first inference to return. | 1 | 0 | 2026-03-03T04:09:08 | cppshane | false | null | 0 | o8csdy5 | false | /r/LocalLLaMA/comments/1rizodv/running_qwen_35_08b_locally_in_the_browser_on/o8csdy5/ | false | 1 |
t1_o8cscq3 | Nemo Instruct Q4_K_M gave decent results with --no-kv-offload to keep the kv-cache CPU-side | 1 | 0 | 2026-03-03T04:08:54 | daHaus | false | null | 0 | o8cscq3 | false | /r/LocalLLaMA/comments/1rj7p2h/i_need_an_uncensored_llm_for_8gb_vram/o8cscq3/ | false | 1 |
t1_o8csa0d | I’m not into engineering, I’m just a disabled guy who works on this stuff for fun | 1 | 0 | 2026-03-03T04:08:22 | Savantskie1 | false | null | 0 | o8csa0d | false | /r/LocalLLaMA/comments/1rhogov/the_us_used_anthropic_ai_tools_during_airstrikes/o8csa0d/ | false | 1 |
t1_o8cs8hx | LM Studio has worked for me, but vllm and llama.cpp are so much better. LM Studio has you going through tabs to find things, and you're sliding things around without a simple copy-paste. Maybe I'm the one with nerve damage who can't use LM Studio. | 1 | 0 | 2026-03-03T04:08:05 | nakedspirax | false | null | 0 | o8cs8hx | false | /r/LocalLLaMA/comments/1rjb7yk/psa_if_you_want_to_test_new_models_use/o8cs8hx/ | false | 1 |
t1_o8cs7e9 | to be honest I have no idea what the use case for such a tiny model even is, but I don't think establishing metaphorical connections would be it. rather the model should just take the prompt as literally as possible. based on that I think 3.5 does have the best response in spite of the hallucinations | 1 | 0 | 2026-03-03T04:07:53 | tengo_harambe | false | null | 0 | o8cs7e9 | false | /r/LocalLLaMA/comments/1rjd4pv/qwen_25_3_35_smallest_models_incredible/o8cs7e9/ | false | 1 |
t1_o8cs6oy | Exactly which 70B models fail? | 1 | 0 | 2026-03-03T04:07:45 | crantob | false | null | 0 | o8cs6oy | false | /r/LocalLLaMA/comments/1rjdeat/dual_rtx_3090_on_b550_70b_models_produce_garbage/o8cs6oy/ | false | 1 |