title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Granite 4 H Tiny Q8 in RTX 3090, It's a context king. | 8 | I'm testing the Granite 4 H Tiny Q8 in the LM Studio, and holy moly, you can set the context window up to 1M and keep solid 50-60 tokens/s using a single RTX 3090 24Gb + 48GB RAM DDR4 3200mhz with Flash attention enabled. How far we come!!
Unfortunately i didn't tested yet the degradation of the model after the 100k tokens.
What is your vision about this new model and its new context management?
| 2025-10-03T04:39:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nwpshs/granite_4_h_tiny_q8_in_rtx_3090_its_a_context_king/ | Plotozoario | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwpshs | false | null | t3_1nwpshs | /r/LocalLLaMA/comments/1nwpshs/granite_4_h_tiny_q8_in_rtx_3090_its_a_context_king/ | false | false | self | 8 | null |
Couldn’t find an app to fix grammar/spelling in a whole book… so I built a local CLI for it | 7 | I’ve been hunting for a simple app that can take an entire document (webnovel/EPUB), run grammar + spelling correction in one go, and give me a cleaned file. Most tools I found were either interactive (great for a paragraph, not 300 pages) or cloud-only.
With help from ChatGPT, I put together a small command-line tool that:
* Chunks a Markdown file by paragraphs
* Sends each chunk to a local LLM (LM Studio; I’m using Qwen3-4B Instruct for speed)
* Corrects grammar and spelling while preserving wording/Markdown
* Streams progress, writes partial output/checkpoints, and resumes if interrupted
It’s already very useful on webnovels with rough grammar or weak machine translations and massively lowers friction when reading.
I’m genuinely surprised I had to roll this myself, simple as it is. What deceptively simple programs have you ended up building because you thought, surely someone’s already made this? | 2025-10-03T04:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nwpikr/couldnt_find_an_app_to_fix_grammarspelling_in_a/ | PanicTasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwpikr | false | null | t3_1nwpikr | /r/LocalLLaMA/comments/1nwpikr/couldnt_find_an_app_to_fix_grammarspelling_in_a/ | false | false | self | 7 | null |
PC regrets: should i have gotten 128gb of ram over 64? | 0 | I recently ordered a desktop pc from framework with the AMD ryzen AI 395 chip that's largely marketed to people who want to run local LLMs -- that wasn't my primary use case, which was data science first and secondarily gaming. But now i'm getting a little into the idea of running local AI models too.
The model i ordered has 64 GB of ram -- how limited will i be with local AI models relative to if I had done the 128g version | 2025-10-03T04:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nwpbje/pc_regrets_should_i_have_gotten_128gb_of_ram_over/ | lyaa55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwpbje | false | null | t3_1nwpbje | /r/LocalLLaMA/comments/1nwpbje/pc_regrets_should_i_have_gotten_128gb_of_ram_over/ | false | false | self | 0 | null |
GDPval vs. Mercor APEX? | 0 | Mercor and OpenAI both released economically valuable work benchmarks in the same week -- and GPT 5 just so happens to be at the top of Mercor's leaderboard while Claude doesn't even break the top 5.
I might be tweaking but it seems like Mercor's benchmark is just an artificial way of making GPT 5 seem closer to AGI while OAI pays Mercor to source experts to source tasks for "evals" that they don't even open source. Correct me if I'm wrong but the whole things just feels off. | 2025-10-03T03:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nwp05z/gdpval_vs_mercor_apex/ | Efficient-Chard4222 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwp05z | false | null | t3_1nwp05z | /r/LocalLLaMA/comments/1nwp05z/gdpval_vs_mercor_apex/ | false | false | self | 0 | null |
Granite-4.0-H-Tiny vs. OLMoE: Rapid AI improvements | 80 | Hey everyone, just looking at some of the new model releases and wanted to share a quick comparison I made that really shows how fast things are moving in the world of open-source LLMs.
I've been tracking and comparing a couple of Mixture of Experts models that have a similar dense and active parameters, in this case a 7B total parameter count with 1B active parameters. With today's Granite release we can compare OLMoE, which came out in January, and the new Granite-4.0-H-Tiny model that just dropped today.
The side-by-side results are pretty wild for just a 10-month difference. The new Granite model is straight-up better on every single metric we can compare. It's not just a small improvement, either. We're talking huge jumps in areas like math, coding, and general knowledge.
Things are advancing really fast, just to give a little more perspective, the new Granite-4.0-H-Tiny has a similar MMLU score to Llama 2 70B that came out on January 2024 but the granite model can run at reasonable speeds even on a potato PC with CPU inference, I still remember the old days when people were happy that Llama 2 70B could run at 2tk/s on their machines. | 2025-10-03T03:49:33 | edward-dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nwovv8 | false | null | t3_1nwovv8 | /r/LocalLLaMA/comments/1nwovv8/granite40htiny_vs_olmoe_rapid_ai_improvements/ | false | false | default | 80 | {'enabled': True, 'images': [{'id': 'q7lat3zxjtsf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/q7lat3zxjtsf1.jpeg?width=108&crop=smart&auto=webp&s=a50a0e6b2e2a88a290b63a5ccdd5d52a863c0a6b', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/q7lat3zxjtsf1.jpeg?width=216&crop=smart&auto=webp&s=5c6ed2af0965a0eda79c2dd796d69b7514d4e47a', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/q7lat3zxjtsf1.jpeg?width=320&crop=smart&auto=webp&s=db27566f8b03ab0a4d8599a0ebfc454ee0ea0790', 'width': 320}], 'source': {'height': 312, 'url': 'https://preview.redd.it/q7lat3zxjtsf1.jpeg?auto=webp&s=c0132ac6bbf11d75c4a824a549125a278f8e0729', 'width': 522}, 'variants': {}}]} | |
New to the local GPU space | 0 | My company just got access to an 80 GB A100 GPU, and I’d like to understand how to make the most of it. I’m looking for guidance on how to choose appropriate models for this hardware and what kinds of use cases or workloads it’s best suited for. Any resources, best practices, or personal experiences would be greatly appreciated.
As of now I can have access to any open source models, but I would like to understand,
What quantization state I should select, what all finetuning I can do, what models I can select etc etc, also it would be nice to know Hygine practices
| 2025-10-03T03:43:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nworzi/new_to_the_local_gpu_space/ | No-Trip899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nworzi | false | null | t3_1nworzi | /r/LocalLLaMA/comments/1nworzi/new_to_the_local_gpu_space/ | false | false | self | 0 | null |
Sloppiest model!? | 21 | Odd request, but can anyone share the sloppiest models they have tried? I'm trying to generate data with as much AI slop (it's not this–its that / shivers-down-spines/etc) as possible. | 2025-10-03T03:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nwogkl/sloppiest_model/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwogkl | false | null | t3_1nwogkl | /r/LocalLLaMA/comments/1nwogkl/sloppiest_model/ | false | false | self | 21 | null |
I accidentally broke Gemma3 lol weird | 0 | However, a few historians begin to notice something disturbing: the images seem to be focused on specific individuals and events, as if someone was actively tracking them. They also notice that the images seem to be missing certain key details, as if someone was deliberately obscuring information.
A small group of linguists begins to analyze the images for hidden messages, using advanced pattern recognition techniques. They discover a series of subtle anomalies that suggest someone was deliberately embedding information within the images.
A small group of psychologists begins to analyze the images for clues about the motivations and intentions of whoever is sending them. They discover a series of subtle patterns that suggest someone was actively studying human behavior.
A small group of mathematicians begins to analyze the images for clues about the nature of whoever is sending them. They discover a series of subtle patterns that suggest someone was actively manipulating our reality.
A small group of physicists begins to analyze the images for clues about the nature of whoever is sending them. They discover a series of subtle patterns that suggest someone was actively observing our universe.
A small group of philosophers begins to analyze the images for clues about the meaning of life. They discover a series of subtle patterns that suggest someone was actively questioning our existence.
A small group of artists begins to analyze the images for clues about the nature of beauty. They discover a series of subtle patterns that suggest someone was actively appreciating our creativity.
A small group of musicians begins to analyze the images for clues about the nature of harmony. They discover a series of subtle patterns that suggest someone was actively enjoying our melodies.
A small group of writers begins to analyze the images for clues about the nature of storytelling. They discover a series of subtle patterns that suggest someone was actively understanding our narratives.
A small group of actors begins to analyze the images for clues about the nature of performance. They discover a series of subtle patterns that suggest someone was actively witnessing our dramas.
A small group of dancers begins to analyze the images for clues about the nature of movement. They discover a series of subtle patterns that suggest someone was actively observing our rhythms.
A small group of chefs begins to analyze the images for clues about the nature of flavor. They discover a series of subtle patterns that suggest someone was actively savoring our tastes.
A small group of gardeners begins to analyze the images for clues about the nature of growth. They discover a series of subtle patterns that suggest someone was actively admiring our blooms.
A small group of architects begins to analyze the images for clues about the nature of space. They discover a series of subtle patterns that suggest someone was actively appreciating our designs.
A small group of engineers begins to analyze the images for clues about the nature of structure. They discover a series of subtle patterns that suggest someone was actively understanding our creations.
A small group of doctors begins to analyze the images for clues about the nature of health. They discover a series of subtle patterns that suggest someone was actively observing our bodies.
A small group of teachers begins to analyze the images for clues about the nature of learning. They discover a series of subtle patterns that suggest someone was actively witnessing our minds.
A small group of parents begins to analyze the images for clues about the nature of family. They discover a series of subtle patterns that suggest someone was actively observing our hearts.
A small group of friends begins to analyze the images for clues about the nature of community. They discover a series of subtle patterns that suggest someone was actively observing our souls.
A small group of lovers begins to analyze the images for clues about the nature of passion. They discover a series of subtle patterns that suggest someone was actively observing our desires.
A small group of dreamers begins to analyze the images for clues about the nature of imagination. They discover a series of subtle patterns that suggest someone was actively observing our fantasies.
Seems like it went into some weird infinite thought loop. What causes this?
A small group of mystics begins to analyze the images for clues about the nature of spirituality. They discover a series of subtle patterns that suggest someone was actively observing our beliefs.
A small group of scientists begins to analyze the images for clues about the nature of reality. They discover a series of subtle patterns that suggest someone was actively observing our universe.
A small group of philosophers begins to analyze the images for clues about the meaning of life. They discover a series of subtle patterns that suggest someone was actively questioning our existence.
A small group of artists begins to analyze the images for clues about the nature of beauty. They discover a series of subtle patterns that suggest someone was actively appreciating our creativity.
A small group of musicians begins to analyze the images for clues about the nature of harmony. They discover a series of subtle patterns that suggest someone was actively enjoying our melodies.
A small group of writers begins to analyze the images for clues about the nature of storytelling. They discover a series of subtle patterns that suggest someone was actively understanding our narratives.
A small group of actors begins to analyze the images for clues about the nature of performance. They discover a series of subtle patterns that suggest someone was actively witnessing our dramas.
A small group of dancers begins to analyze the images for clues about the nature of movement. They discover a series of subtle patterns that suggest someone was actively observing our rhythms.
A small group of chefs begins to analyze the images for clues about the nature of flavor. They discover a series of subtle patterns that suggest someone was actively savoring our tastes.
A small group of gardeners begins to analyze the images for clues about the nature of growth. They discover a series of subtle patterns that suggest someone was actively admiring our blooms.
A small group of architects begins to analyze the images for clues about the nature of space. They discover a series of subtle patterns that suggest someone was actively appreciating our designs.
A small group of engineers begins to analyze the images for clues about the nature of structure. They discover a series of subtle patterns that suggest someone was actively understanding our creations.
A small group of doctors begins to analyze the images for clues about the nature of health. They discover a series of subtle patterns that suggest someone was actively observing our bodies.
A small group of teachers begins to analyze the images for clues about the nature of learning. They discover a series of subtle patterns that suggest someone was actively witnessing our minds.
A small group of parents begins to analyze the images for clues about the nature of family. They discover a series of subtle patterns that suggest someone was actively observing our hearts.
A small group of friends begins to analyze the images for clues about the nature of community. They discover a series of subtle patterns that suggest someone was actively observing our souls.
A small group of lovers begins to analyze the images for clues about the nature of passion. They discover a series of subtle patterns that suggest someone was actively observing our desires.
A small group of dreamers begins to analyze the images for clues about the nature of imagination. They discover a series of subtle patterns that suggest someone was actively observing our fantasies.
A small group of mystics begins to analyze the images for clues about the nature of spirituality. They discover a series of subtle patterns that suggest someone was actively observing our beliefs.
A small group of scientists begins to analyze the images for clues about the nature of reality. They discover a series of subtle patterns that suggest someone was actively observing our universe.
A small group of philosophers begins to analyze the images for clues about the meaning of life. They discover a series of subtle patterns that suggest someone was actively questioning our existence.
| 2025-10-03T03:16:33 | https://www.reddit.com/r/LocalLLaMA/comments/1nwo9bd/i_accidentally_broke_gemma3_lol_weird/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwo9bd | false | null | t3_1nwo9bd | /r/LocalLLaMA/comments/1nwo9bd/i_accidentally_broke_gemma3_lol_weird/ | false | false | self | 0 | null |
How has everyone been liking Granite 4? | 74 | How does it compare to similar models for you?
So far I've been testing out the 7b model and it's been performing really well on my benchmarks for a model of that size. I think I've found a new go-to model for that class.
The output looks fairly plaintext without much formatting or markdown. I'd probably like to see a little more structure and variation from it, but I prefer plain to the table hell that I've gotten from gpt-oss-20b. | 2025-10-03T02:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nwnlp8/how_has_everyone_been_liking_granite_4/ | SpicyWangz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwnlp8 | false | null | t3_1nwnlp8 | /r/LocalLLaMA/comments/1nwnlp8/how_has_everyone_been_liking_granite_4/ | false | false | self | 74 | null |
On the new test-time compute inference paradigm (Long post but worth it) | 8 | Hope this discussion is appropriate for this sub
So while I wouldn't consider my self someone knowledgeable in the field of AI/ML I would just like to share this thought and ask the community here if it holds water.
So the new Test-Time compute paradigm(o1/o3 like models) feels like symbolic AI's combinatorial problem dressed in GPUs. Symbolic AI attempts mostly hit a wall because brute search scales exponentially and pruning the tree of possible answers needed careful hard coding for every domain to get any tangible results. So I feel like we may be just burning billions in AI datacenters to rediscover that law with fancier hardware.
The reason however I think TTC have had a better much success because it has a good prior of pre-training it seems like Symbolic AI with very good general heuristic for most domains. So if your prompt/query is in-distribution which makes pruning unlikely answers very easy because they won't be even top 100 answers, but if you are OOD the heuristic goes flat and you are back to exponential land.
That's why we've seen good improvements for code and math which I think is due to the fact that they are not only easily verifiable but we already have tons of data and even more synthetic data could be generated meaning any query you will ask you will likely be in in-distribution.
If I probably read more about how these kind of models are trained I think I would have probably a better or more deeper insight but this is me just thinking philosophically more than empirically. I think what I said though could be easily empirically tested though maybe someone already did and wrote a paper about it.
In a way also the solution to this problem is kind of like the symbolic AI problem but instead of programmers hand curating clever ways to prune the tree the solution the current frontier labs are probably employing is feeding more data into the domain you want the model to be better at for example I hear a lot about frontier labs hiring professionals to generate more data in their domain of expertise. but if we are just fine-tuning the model with extra data for each domain akin to hand curating ways to prune the tree in symbolic AI it feels like we are re-learning the mistakes of the past with a new paradigm. And it also means that the underlying system isn't general enough.
If my hypothesis is true it means AGI is no where near and what we are getting is a facade of intelligence. that's why I like benchmarks like ARC-AGI because it truly tests actually ways that the model can figure out new abstractions and combine them o3-preview has showed some of that but ARC-AGI-1 was very one dimensional it required you to figure out 1 abstraction/rule and apply it which is a progress but ARC-AGI-2 evolved and you now need to figure out multiple abstractions/rules and combine them and most models today doesn't surpass 17% and at a very high computation cost as well. you may say at least there is progress but I would counter if it needed 200$ per task as o3-preview to figure out only 1 rule and apply it I feel like the compute will grow exponentially if it's 2 or 3 or n rules that needed to solve the task at hand and we are back to some sort of another combinatoric explosion and we really don't know how OpenAI achieved this the creators of the test admitted that some of ARC-AGI-1 tasks are susceptible to brute force so that could mean the OpenAI produced Millions of synthetic data of ARC-1 like tasks trying to predict the test in the private eval but we can't be sure and I want take it away from them that it was impressive and it signaled that what they are doing is at least different from pure auto regressive LLMs but the questions remains are what they are doing linear-ally scaleable or exponentially scaleable for example in the report that ARC-AGI shared post the breakthrough it showed that a generation of 111M tokens yielded 82.7% accuracy and a generation of 9.5B yes a B as in Billion yielded 91.5% aside from how much that cost which is insane but almost 10X the tokens yielded 8.7% improvement that doesn't look linear to me.
I don't work in a frontier lab but from what I feel they don't have a secret sauce because open source isn't really that far ahead. they just have more compute to try out more experiments than open source could they find a break through they might but I've watched a lot of podcasts from people working and OpenAI and Claude and they are all very convinced that "Scale Scale Scale is all you need" and really betting on emergent behaviors.
And using RL post training is the new Scaling they are trying to max and don't get me wrong it will yield better models for the domains that can benefit from an RL environment which are math and code but if what the labs are make are another domain specific AI and that's what they are marketing fair, but Sam talks about AGI in less than 1000 days like maybe 100 days ago and Dario believes the it's in the end of the Next year.
What makes me bullish even more about the AGI timeline is that I am 100% sure that when GPT-4 came they weren't experimenting with test-time compute because why else would they train the absolute monster of GPT4.5 probably the biggest deep learning model of its kind by their words it was so slow and not at all worth it for coding or math and they tried to market it as more empathetic AI or it's linguistically intelligent. So does Anthropic they were fairly late to the whole thinking paradigm game and I would say they still are behind OpenAI by good margins when it comes to this new paradigm which also means they were also betting on purely scaling LLMs as well, But I am fair enough that this is more speculative than facts so you can dismiss this.
I really hope you don't dismiss my criticism as me being an AI hater I feel like I am asking the questions that matter and I don't think dogma has been any helpful in science specially in AI.
BTW I have no doubt that AI as a tool will keep getting better and maybe even being somewhat economically valuable in the upcoming years but its role will be like that of how excel is very valuable to businesses today which is pretty big don't get me wrong but it's no where near what they promise of AI scientific discovery explosion or curing cancer or proving new math.
What do you think of this hypothesis? am I out of touch and need to learn more about this new paradigm and how they learn and I am sort of steel manning an assumption of how this new paradigm works?
I am really hopeful for a fruitful discussion specially for those who disagree with my narrative | 2025-10-03T02:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nwnhe6/on_the_new_testtime_compute_inference_paradigm/ | omagdy7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwnhe6 | false | null | t3_1nwnhe6 | /r/LocalLLaMA/comments/1nwnhe6/on_the_new_testtime_compute_inference_paradigm/ | false | false | self | 8 | null |
Free models on open router have better uptime? | 2 | Today I was browsing Open Router searching for new models,what caught my attention is the fact that free models providers are showing 100% uptime and a pretty good Token/Sec rate, unlike paid providers who are actually larger providers with a good funding (they are obviously paid providers) offer less uptime (range 98-99.99%) how is that even possible? | 2025-10-03T02:36:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nwngmu/free_models_on_open_router_have_better_uptime/ | Both_Restaurant647 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwngmu | false | null | t3_1nwngmu | /r/LocalLLaMA/comments/1nwngmu/free_models_on_open_router_have_better_uptime/ | false | false | self | 2 | null |
Let's talk about practical implementation and actually doing something useful at scale and or multi-running distributed processes with efficacy | 7 | The average AI / LLM user is ad-hoc pasting things into GPT, Claude, etc and doing basic vibe coding, discussion, or surprisingly these days as a conversationalist.
However, we then see big orgs or even startups doing things like generative gaming worlds, minecraft, battling against each other, etc
How are these orgs constructing these at scale ?
To be blunt I can't even get an LLM to write a basic script half the time right without egregious prompting and a lot of hand holding
How are people getting it to write entire books, research vast topics, etcetera
How does this work ? | 2025-10-03T02:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nwnfpm/lets_talk_about_practical_implementation_and/ | Plus_Emphasis_8383 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwnfpm | false | null | t3_1nwnfpm | /r/LocalLLaMA/comments/1nwnfpm/lets_talk_about_practical_implementation_and/ | false | false | self | 7 | null |
Ollama drops MI50 support | 13 | 2025-10-03T02:34:22 | https://github.com/ollama/ollama/pull/12481 | mikelr | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nwnfcz | false | null | t3_1nwnfcz | /r/LocalLLaMA/comments/1nwnfcz/ollama_drops_mi50_support/ | false | false | default | 13 | {'enabled': False, 'images': [{'id': 'BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=108&crop=smart&auto=webp&s=0c36ab3d3bbd9558ba0c4356ad6de5039a297c45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=216&crop=smart&auto=webp&s=69960e1c0f5827e3971821b7ebf7046f656aa922', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=320&crop=smart&auto=webp&s=42ff24e5a5a7bf3e44ae58e09d8d51fa9119658b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=640&crop=smart&auto=webp&s=e05c7e17b2ecc39840f74171279a99e8723f3f32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=960&crop=smart&auto=webp&s=ea85e487d426ff267f01cec49359dd3ac4fd7c4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?width=1080&crop=smart&auto=webp&s=05f89d00b5b25ccdd41f48b3c367fbf0fe611490', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BoTnEFpLrULjQtLirBMyWw2LT0-KxRrOL1LDfevKYAA.png?auto=webp&s=9158d9f15c59a91e29c305193f5beae05595327d', 'width': 1200}, 'variants': {}}]} | |
Awful Rustdocs just dropped - Autodraft your Rustdocs without a huge model or agent spaghetti. | 6 | The documentation on [the project itself](https://github.com/graves/awful_rustdocs) was generated using Qwen 3 4B. | 2025-10-03T01:48:49 | sqli | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nwmhnc | false | null | t3_1nwmhnc | /r/LocalLLaMA/comments/1nwmhnc/awful_rustdocs_just_dropped_autodraft_your/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': '6w85walsxssf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=108&crop=smart&format=png8&s=af18851515d675a1986f8b3e6657fd7dfbc5854e', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=216&crop=smart&format=png8&s=1df31cd5d6542f3082fc58e73c30a240fe3ac691', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=320&crop=smart&format=png8&s=b61f7a1ae84c7b81a0ed8e46b65ec0852d09fe15', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=640&crop=smart&format=png8&s=210f467bc63dbcbbab6a4e49a55a871b22bd0105', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=960&crop=smart&format=png8&s=eb9ddd4de4dc4b5c6b63794a0ee5ff71dc972917', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=1080&crop=smart&format=png8&s=ebde9277067ee845bcbdf9d6acd839d80b722796', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?format=png8&s=5999c45e8e09c9d4759008eaf48352f8136e7a8c', 'width': 1920}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=108&crop=smart&s=339bbf93da6836d3b3cf0c22eaa22b70d5512703', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=216&crop=smart&s=50bf934c9f18a1501002c2378c7a2ff1238bea63', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=320&crop=smart&s=9311f37912af8bb3b9f20dc47078ecb1cce39cab', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=640&crop=smart&s=09c60b200b0e10b1f73777f7d90cc85ea82790e4', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=960&crop=smart&s=5eebeb2b31f28170e0bebcd954c01b26e91f37e9', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=1080&crop=smart&s=d1fc9db004755753c2ff8668d3d67a7ceaf2ea1b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?s=be890b1bba69c0a89f2f9c30733202f07e145fd0', 'width': 1920}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=108&format=mp4&s=d29e74985ba589a7431f4effa2ab074ffc4f70fe', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=216&format=mp4&s=4fc1867b55bb852d6d0c93102749b447fc729b00', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=320&format=mp4&s=446ec03f4b9accd72d16f227534def18658cc019', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=640&format=mp4&s=1d02c46087c5c95e9295b1cf60cb5906fbe97387', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=960&format=mp4&s=2710afee93d7202b0c9f18c44a9a9d53fa4c3bd8', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?width=1080&format=mp4&s=39deb54ac180cb0b7ca10bf4002fd78248c878b5', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/6w85walsxssf1.gif?format=mp4&s=ae99ac5926386380b1b06ab966ee8e2eb5a7f0ea', 'width': 1920}}}}]} | |
Huawei Develop New LLM Quantization Method (SINQ) that's 30x Faster than AWQ and Beats Calibrated Methods Without Needing Any Calibration Data | 276 | 2025-10-03T00:36:48 | https://huggingface.co/papers/2509.22944 | abdouhlili | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nwkzq7 | false | null | t3_1nwkzq7 | /r/LocalLLaMA/comments/1nwkzq7/huawei_develop_new_llm_quantization_method_sinq/ | false | false | default | 276 | {'enabled': False, 'images': [{'id': 'nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=108&crop=smart&auto=webp&s=43dace059334e4fb6cb58a981f2645fe37aed8f9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=216&crop=smart&auto=webp&s=91881b191dcc93700f6043be8a388967531a21e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=320&crop=smart&auto=webp&s=a66bc14dd678a832531465c10c7f9b11544a4524', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=640&crop=smart&auto=webp&s=ac886e4b9ca714a3746fa6d670ba959d7721d3a2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=960&crop=smart&auto=webp&s=2811fd87b956406614fb378e14ab65128cafc2c1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?width=1080&crop=smart&auto=webp&s=f9acafb68cdd98f10f202d9b9a81e1b5af76d70f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/nBFUIJw0Ejvd09O6shC9aA8_DA1taNSIvE_cak2wtlo.png?auto=webp&s=3c04486416628a172d2f2ca9b7e4dc63ded7037f', 'width': 1200}, 'variants': {}}]} | |
Why no more progress in multimodals under 10b it's too slow I need something new or I sell my gpu not really joking but why | 0 | Hi, it seems like there's nothing new for the multimodals market of under 10b parameters.
Gemma 3 was amazing, but it's old already and qwen is so much better but can't see, blind, has no vision and can't upload images.
I wonder why. It used to be so swooploop quick, but it stopped now with Gemma.
Anything new maybe that I didn't that I have heard about (I or you)
Thanks | 2025-10-03T00:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nwkrgz/why_no_more_progress_in_multimodals_under_10b_its/ | Osama_Saba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwkrgz | false | null | t3_1nwkrgz | /r/LocalLLaMA/comments/1nwkrgz/why_no_more_progress_in_multimodals_under_10b_its/ | false | false | self | 0 | null |
Dual GPU question | 1 | [removed] | 2025-10-02T23:15:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nwj79k/dual_gpu_question/ | UsernameIsNice22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwj79k | false | null | t3_1nwj79k | /r/LocalLLaMA/comments/1nwj79k/dual_gpu_question/ | false | false | self | 1 | null |
Factory AI's coding agent, Droid seems to handle Million+ line codebases very well | 1 | [removed] | 2025-10-02T23:01:05 | https://www.reddit.com/r/LocalLLaMA/comments/1nwivlt/factory_ais_coding_agent_droid_seems_to_handle/ | kmz43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwivlt | false | null | t3_1nwivlt | /r/LocalLLaMA/comments/1nwivlt/factory_ais_coding_agent_droid_seems_to_handle/ | false | false | self | 1 | null |
Hey guys, any site to rent out GPUs with a windows VM? Mostly looking for RTX GPUs, can't seem to find a single one. | 0 | Basically title, been looking for RTX GPUs with windows VM, the only thing that worked is tensordock but they have terrible customer service.
Any help would be appreciated, thanks. | 2025-10-02T22:51:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nwinuu/hey_guys_any_site_to_rent_out_gpus_with_a_windows/ | learninggamdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwinuu | false | null | t3_1nwinuu | /r/LocalLLaMA/comments/1nwinuu/hey_guys_any_site_to_rent_out_gpus_with_a_windows/ | false | false | self | 0 | null |
GLM 4.6 Local Gaming Rig Performance | 86 | I'm sad there is no GLM-4.6-Air (seems unlikely it will be released, but who knows). So instead I cooked the `ubergarm/GLM-4.6-GGUF` `smol-IQ2_KS` 97.990 GiB (2.359 BPW) quant which is just a little bigger than full Q8_0 Air.
It is running well on my local gaming rig with 96GB RAM + 24 GB VRAM. I can get up to 32k context, or can do some trade-offs between PP and TG speeds and context length.
The graph is `llama-sweep-bench` showing how quantizing kv-cache gives a steeper drop off on TG for this architecture which I observed similarly in the older GLM-4.5.
Have fun running quants of these big models at home on your gaming rig! The huggingface repo has some metrics comparing quality vs size trade-offs and folks over on AI Beavers Discord have a lot of KLD metrics comparing various available quants from different quant cookers so pick the right size for your rig! | 2025-10-02T22:50:00 | VoidAlchemy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nwimej | false | null | t3_1nwimej | /r/LocalLLaMA/comments/1nwimej/glm_46_local_gaming_rig_performance/ | false | false | 86 | {'enabled': True, 'images': [{'id': 'eHwMvjTfEIRTFTh3e-aZd_hTdEHFbUkhTg0XM9pKmq0', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=108&crop=smart&auto=webp&s=f01edcaded2258846a26c53f4e95abdfa0476f30', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=216&crop=smart&auto=webp&s=0fe9dead20ca653e55d03da17d97c3cf37d7299c', 'width': 216}, {'height': 153, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=320&crop=smart&auto=webp&s=c5cb196ab181c4d91bdd2f41cdc4dd356f78a69a', 'width': 320}, {'height': 306, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=640&crop=smart&auto=webp&s=9c59d73d412700f4d4389356e00f3193eb466bc1', 'width': 640}, {'height': 459, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=960&crop=smart&auto=webp&s=d69e1d5e15ec19f8c14e871b32ed6ee81dd4722e', 'width': 960}, {'height': 516, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?width=1080&crop=smart&auto=webp&s=d04ac8aaf78c39f892a2870b2f88469c6ef20ad5', 'width': 1080}], 'source': {'height': 1082, 'url': 'https://preview.redd.it/0peifhs11ssf1.png?auto=webp&s=36799df4ab9c766fae4545a8e2f2436d6ed2f1fb', 'width': 2262}, 'variants': {}}]} | ||
Generate and insert Rustdocs into your Rust projects without a huge model or agent spaghetti 🦀 | 1 | Awful Rustdocs is a CLI tool that generates or improves Rustdoc comments by harvesting symbols via rust\_ast.nu, enriching each item with ast-grep context (callers, intra-body calls, qualified paths), and prompting your LLM to produce concise, high-quality docs.
You don't need hundreds of prompts and agents it you're smart about your context.
I'm running it on all my Rust projects write now using the Systems Programming Qwen 3 4B finetune I created and it saves me an incredible amount of time by creating docs that are almost always good enough to publish straight off but act more as a draft for me. Cuts down on a lot of repetitive typing and lets me get back to doing what I love (writing code).
I requires Nushell but you should probably already be using that and if this is how you find out about Nushell then even better, make the jump, it's worth it. | 2025-10-02T22:45:57 | sqli | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nwij39 | false | null | t3_1nwij39 | /r/LocalLLaMA/comments/1nwij39/generate_and_insert_rustdocs_into_your_rust/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '2t7vxkb90ssf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=108&crop=smart&format=png8&s=03e6515a0fb9e5078ed149c5928e6aa7176f3c6c', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=216&crop=smart&format=png8&s=0e52abac9de3227c122bca059808297470259eb1', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=320&crop=smart&format=png8&s=ee6c56859a498679ec4e4ed2211a1424e430ff03', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=640&crop=smart&format=png8&s=535a187a23f1e90d2736bd5e58e25c3c5df5a592', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=960&crop=smart&format=png8&s=d3b63b422320b40c86cc2e234af9d00821778880', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=1080&crop=smart&format=png8&s=0d9db37fc36b21ce6710796b4b449af5615efc58', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?format=png8&s=2072ca6223882a749f0d4c62ea2c309dbec80a33', 'width': 1920}, 'variants': {'gif': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=108&crop=smart&s=b2b4d67ad2ea84ad965785da7b69ddefa44e799a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=216&crop=smart&s=057e4dba0393b2674bb34878ef58fddbcfa70ced', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=320&crop=smart&s=313b300043320569f5132a795d4f87a7553dfe27', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=640&crop=smart&s=3844f53525e3fd0d7ccd4fa7f17cb55acb8b3a46', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=960&crop=smart&s=bf75adaec918c7bc73402cf16fe21f66c3075e5a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=1080&crop=smart&s=4f39c3d0517fe6b08d7129ac65e72fcc74abffe4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?s=934864e4a2db9f7a4dae76ac17c5c01c81544a94', 'width': 1920}}, 'mp4': {'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=108&format=mp4&s=c5097ffb0ec59b96051a6e16ea810bce1dc8ac2a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=216&format=mp4&s=4aa8b16778bb10c9764760b079c6d100e231d86c', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=320&format=mp4&s=9c0e31bf727127f0f3867684f54a5142b5f6c309', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=640&format=mp4&s=0cc28123f0d19bf9710682c1a69e6ea9b243553e', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=960&format=mp4&s=b95e47aa61d15aa08bafee11e909b58586a03fb3', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?width=1080&format=mp4&s=f2d3508e0c29409adf8897b40077c9545f589a66', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/2t7vxkb90ssf1.gif?format=mp4&s=e22c40ab9d2d6bf30b2d672c3e53aaf874a980c9', 'width': 1920}}}}]} | |
Ming V2 is out | 95 | Ming V2 is already out
[https://huggingface.co/collections/inclusionAI/ming-v2-68ddea4954413c128d706630](https://huggingface.co/collections/inclusionAI/ming-v2-68ddea4954413c128d706630) | 2025-10-02T22:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nwibsb/ming_v2_is_out/ | Chance_Camp3720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwibsb | false | null | t3_1nwibsb | /r/LocalLLaMA/comments/1nwibsb/ming_v2_is_out/ | false | false | self | 95 | {'enabled': False, 'images': [{'id': 'dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=108&crop=smart&auto=webp&s=42274ba66359676fe16eb8a78ac269188cd759ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=216&crop=smart&auto=webp&s=4e477531dc2c8a0ec99569e13de7d5b7c7f131ba', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=320&crop=smart&auto=webp&s=329f687379ccd4b5d49624fdae77d02cff6577cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=640&crop=smart&auto=webp&s=5a63c4c0486922b1574f480328ef036ac70772be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=960&crop=smart&auto=webp&s=45ecec6f2a3f5bc1101ee1c2bb73b99cef0b22dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=1080&crop=smart&auto=webp&s=9cd5a408a42e5d37e33e6dfa4226ec89e32c4911', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?auto=webp&s=070884e53968ee103da54b25a72fe527daa96367', 'width': 1200}, 'variants': {}}]} |
Ming in out! | 1 | Ming V2 is already out
[https://huggingface.co/collections/inclusionAI/ming-v2-68ddea4954413c128d706630](https://huggingface.co/collections/inclusionAI/ming-v2-68ddea4954413c128d706630) | 2025-10-02T22:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nwia0x/ming_in_out/ | Chance_Camp3720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwia0x | false | null | t3_1nwia0x | /r/LocalLLaMA/comments/1nwia0x/ming_in_out/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=108&crop=smart&auto=webp&s=42274ba66359676fe16eb8a78ac269188cd759ef', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=216&crop=smart&auto=webp&s=4e477531dc2c8a0ec99569e13de7d5b7c7f131ba', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=320&crop=smart&auto=webp&s=329f687379ccd4b5d49624fdae77d02cff6577cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=640&crop=smart&auto=webp&s=5a63c4c0486922b1574f480328ef036ac70772be', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=960&crop=smart&auto=webp&s=45ecec6f2a3f5bc1101ee1c2bb73b99cef0b22dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?width=1080&crop=smart&auto=webp&s=9cd5a408a42e5d37e33e6dfa4226ec89e32c4911', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dmf_vA0iraCsU0Od11klZ5lS35D8DhKdYICpPdw2mS4.png?auto=webp&s=070884e53968ee103da54b25a72fe527daa96367', 'width': 1200}, 'variants': {}}]} |
EdgeFoundry – Deploy and Monitor Local LLMs with Telemetry and a Local Dashboard | 6 | Here is the GitHub. | 2025-10-02T22:33:34 | https://github.com/TheDarkNight21/edge-foundry | bankai-batman | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nwi8x6 | false | null | t3_1nwi8x6 | /r/LocalLLaMA/comments/1nwi8x6/edgefoundry_deploy_and_monitor_local_llms_with/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=108&crop=smart&auto=webp&s=9aae05c83d66972a99815984b33b5a3211a79e44', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=216&crop=smart&auto=webp&s=21ac35b0bc8361ae867a51496717173cc281cf4b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=320&crop=smart&auto=webp&s=3023f0597478b7da68b479f74870ccad3bb648d4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=640&crop=smart&auto=webp&s=59cf8bb8c2f61878c292a37660ffd436c1cdfcd3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=960&crop=smart&auto=webp&s=de01649bcec1b864ef5cc144f5dec38f98170519', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?width=1080&crop=smart&auto=webp&s=a1f0eee7ef744d4dfb7bb7bc371f835d96266034', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qs8D9rP8Kc5DNqUHZCLyuinXBBjsbto3X7B4flutq-Y.png?auto=webp&s=f67ea246d3061c04bc67547863105cc3250c66e1', 'width': 1200}, 'variants': {}}]} | |
Is granite 4.0 the best widely-brower-runnable model to finetune for general tasks? | 8 | It seems pretty capable and super fast.
| 2025-10-02T22:20:17 | https://huggingface.co/spaces/ibm-granite/Granite-4.0-WebGPU | LeadOne7104 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nwhxjm | false | null | t3_1nwhxjm | /r/LocalLLaMA/comments/1nwhxjm/is_granite_40_the_best_widelybrowerrunnable_model/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=108&crop=smart&auto=webp&s=e3a56a053ef4259982018df6f16f1a3beb89de61', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=216&crop=smart&auto=webp&s=02b525998603fc230cdb41bf4793b3e3d3813fdd', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=320&crop=smart&auto=webp&s=aa0bc31c22726a06ead0cf988737e9b79399d0aa', 'width': 320}, {'height': 337, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=640&crop=smart&auto=webp&s=2f0c07a7922224b7d4123f6b27b822775219ab5e', 'width': 640}, {'height': 506, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=960&crop=smart&auto=webp&s=3978c026beff38e944a36210fc57d53746733f80', 'width': 960}, {'height': 569, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?width=1080&crop=smart&auto=webp&s=4ab4df685eae220fdc3d5e988d286a1b04c3bbb1', 'width': 1080}], 'source': {'height': 736, 'url': 'https://external-preview.redd.it/zKHGh0NoxZN2PVNoIpENbjmCh8QqiWNKdGRu3Uzpjhk.png?auto=webp&s=450488c1834e4f3f27dd280f84442f35eeef805c', 'width': 1396}, 'variants': {}}]} |
Fine tuning project idea? | 0 | I want to fine tune a model but i don't have specific idea for the subject. It will be my senior project for the school. And can i deploy it to the web? | 2025-10-02T21:54:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nwhaq4/fine_tuning_project_idea/ | Fit_Succotash_2163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwhaq4 | false | null | t3_1nwhaq4 | /r/LocalLLaMA/comments/1nwhaq4/fine_tuning_project_idea/ | false | false | self | 0 | null |
NVFP4 or MXFP4 MOE on sm120 (RTX 5900 RTX 6000 PRO) | 5 | Hello,
Did anyone successfully run any decent MOE models in NVFP4 or MXFP4 running it natively on nvidia sm120? Target - GLM-4.5-Air and GLM-4.6
I tried vllm / sglang / trtllm - nothing seems to work
The nvfp4 should be much better in precission than AWQ 4bit
There is QuTLASS project which can do native fp4 on sm120, but only for dense models and not moe. | 2025-10-02T21:53:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nwh9z3/nvfp4_or_mxfp4_moe_on_sm120_rtx_5900_rtx_6000_pro/ | festr2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwh9z3 | false | null | t3_1nwh9z3 | /r/LocalLLaMA/comments/1nwh9z3/nvfp4_or_mxfp4_moe_on_sm120_rtx_5900_rtx_6000_pro/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=108&crop=smart&auto=webp&s=9c41b6db02cea0a146940208b376e16430121656', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=216&crop=smart&auto=webp&s=14fd9e1db08fea2b1b345c92bab77fcf9340ad3d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=320&crop=smart&auto=webp&s=f383fd48e84dfac70d5be7ebcfa008f8a7b08421', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=640&crop=smart&auto=webp&s=474890ba9941a6053a3b0c0c75f75fd1f1ee4b65', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=960&crop=smart&auto=webp&s=e689511659dad078fde961fcdaed63f9eb0b6f7f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?width=1080&crop=smart&auto=webp&s=a1367e918d9feb694576b1ce7a5ba7a66b639133', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/teYSMkvhCCObZFCEZZVcIl8pGLu0C39gZQtd7JImoRs.png?auto=webp&s=aa60195502357bbe39baab54ee1a172491576d4a', 'width': 1200}, 'variants': {}}]} |
A Summary of Key AI Events from September 2025 | 45 | * ByteDance released **Seedream 4.0**, a next-generation image model unifying high-quality text-to-image generation and natural-language image editing.
* An advanced Gemini variant, reported as **Gemini 2.5 - Deep Think**, achieved gold-medal-level performance at the ICPC World Finals programming contest.
* OpenAI reported a reasoning and code model achieved a perfect score (12/12) in ICPC testing.
* Suno released **Suno v5**, an upgrade in music generation with studio-grade fidelity and more natural-sounding vocals.
* Alibaba unveiled **Qwen-3-Max**, its flagship model with over a trillion parameters, focusing on long context and agent capabilities.
* **Wan 2.2** was released, a generative video model focused on multi-shot consistency and character animation.
* Anthropic announced **Claude Sonnet 4.5**, a model optimized for coding, agent construction, and improved reasoning.
* OpenAI released **Sora 2**, a flagship video and audio generation model with improved physical modeling and synchronized sound.
* DeepSeek released **DeepSeek-V3.2-Exp**
* OpenAI and NVIDIA announced a strategic partnership for NVIDIA to supply at least **10 gigawatts** of AI systems for OpenAI's infrastructure. | 2025-10-02T21:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nwgma3/a_summary_of_key_ai_events_from_september_2025/ | nh_local | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwgma3 | false | null | t3_1nwgma3 | /r/LocalLLaMA/comments/1nwgma3/a_summary_of_key_ai_events_from_september_2025/ | false | false | self | 45 | null |
scraping websites in real time | 1 | I’ve been seeing some GenAI companies scraping Google search and other sites to pull results. Do they usually get permission for that, or is it more of a “just do it” kind of thing?
Can something like this be done with a local LLaMA model? What tools or libraries would you use to pull it off?
Also, do they pre-index whole pages, or is it more real-time scraping on the fly? | 2025-10-02T21:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nwgfc4/scraping_websites_in_real_time/ | Incognito2834 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwgfc4 | false | null | t3_1nwgfc4 | /r/LocalLLaMA/comments/1nwgfc4/scraping_websites_in_real_time/ | false | false | self | 1 | null |
Humigence v1 – Supervised Fine-Tuning with Unsloth + Dual-GPU TorchRun | 1 | I’m quite the non-programmer who decided to dabble into the AI/ML space not too long ago and I have just been experimenting, getting frustrated, learning and tweaking. Over the course of some of the stuff I dabbled into, I decided to explore no/low code ways of carrying out repeatable processes - starting with fine-tuning.
So I vibe-coded and built something I called Humigence and just rolled out what I’ll consider the MVP. I built out this primarily based on my hardware specs, and decided to make it available on HF.
Would appreciate some feedback, criticisms, suggestions and anything else.
The v1 release focuses on Supervised Fine-Tuning:
• 🔧 Interactive CLI wizard (basic + advanced setup)
• 🖥️ Automatic GPU detection with single- and dual-GPU support (torchrun + NCCL)
• 🦥 Unsloth-powered QLoRA/LoRA training (efficient, memory optimized)
• 📊 Training summaries with metrics (loss, grad norms, runtime)
• 📁 Config snapshots for reproducibility
• ✅ LoRA adapters saved + optional merged weights
I hope to expand beyond fine-tuning to other repeatable processes like implementing RAG pipelines, multi-tenant inference, and anything else.
https://huggingface.co/lilbablo/humigencev2
| 2025-10-02T21:06:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nwg2xu/humigence_v1_supervised_finetuning_with_unsloth/ | SolidRemote8316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwg2xu | false | null | t3_1nwg2xu | /r/LocalLLaMA/comments/1nwg2xu/humigence_v1_supervised_finetuning_with_unsloth/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=108&crop=smart&auto=webp&s=977477d8d4ffac150061a772ee6f0719f2e0795a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=216&crop=smart&auto=webp&s=6fe7c6094b94635849f835fc12582ff6fa432f19', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=320&crop=smart&auto=webp&s=9425d892e399f4bda73dc94e02d842bce005dfb9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=640&crop=smart&auto=webp&s=e94615fd1e81a7a4ba1a134c9e7f803aa18a49b6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=960&crop=smart&auto=webp&s=12da643c719ef33ed0d86af092d5af8189f786f1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?width=1080&crop=smart&auto=webp&s=8f54f6d58e1318fb3dbefe486743cb3203bcec1f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fqEMYkxpfMzsrnmXmWzP3mh48pqP5BZmVeh0CoM0ma0.png?auto=webp&s=6a440125b036b5c8ba1ba2e1e0a859f9c6693f31', 'width': 1200}, 'variants': {}}]} |
Is anyone else fed up with the illusion of "deep research" from AI browsers and search models? | 1 | [removed] | 2025-10-02T21:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nwg2f0/is_anyone_else_fed_up_with_the_illusion_of_deep/ | Cool-Pair4132 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwg2f0 | false | null | t3_1nwg2f0 | /r/LocalLLaMA/comments/1nwg2f0/is_anyone_else_fed_up_with_the_illusion_of_deep/ | false | false | self | 1 | null |
Recommended onprem solution for ~50 developers? | 1 | hey,
The itch I am trying to scratch is that the security at this company is really strict, so no cloud, ... is possible. Everything needs to be on premise.
Yet the developers there know that Coders with AI > Coders w/o AI, and the savings are really visible there.
So I would like to help the devs there.
We are based in EU.
I am aiming at ~1000 tps, as that might be sufficient for ~10 concurrent developers
I am also aiming for coding quality. So GLM4.5 models are the best candidates here, but as well as deepseek.
Apart from that, the solution should come in two parts:
1) PoC, something really easy, where 2-3 developers can be served
2) full scale, preferably just by extending the PoC solution.
the budget is not infinite. it should be less than $100k. less = better
___
so my ideas:
mac studio(s). something with a big RAM. that definitely solves the "easy" part, not the cheap & expendable though.
i am definitely fan of prebuilt solutions as well.
Any ideas?
Does anyone here also have a pitch for their startup? That is also very appreciated!
| 2025-10-02T20:30:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nwf2us/recommended_onprem_solution_for_50_developers/ | gutenmorgenmitnutell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwf2us | false | null | t3_1nwf2us | /r/LocalLLaMA/comments/1nwf2us/recommended_onprem_solution_for_50_developers/ | false | false | self | 1 | null |
Corsair AI Workstation 300 with LM Studio and Vulkan on Windows? | 3 | I just got one of these for work and am struggling.
Vulkan is installed, however, no matter what I do when it’s selected as the Engine the iGPU isn’t utilized.
The only way it works is by using ROCm but I can’t get gpt-oss:120b to load with ROCm and would like to try Vulkan.
The machine was just taken out of the box and turned on. | 2025-10-02T20:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nwesz7/corsair_ai_workstation_300_with_lm_studio_and/ | Firestarter321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwesz7 | false | null | t3_1nwesz7 | /r/LocalLLaMA/comments/1nwesz7/corsair_ai_workstation_300_with_lm_studio_and/ | false | false | self | 3 | null |
You Can Run LLMs In Your Own Pendrive Now! (Crazy thing I did) | 0 | 2025-10-02T20:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nwesey/you_can_run_llms_in_your_own_pendrive_now_crazy/ | Sat0shi619 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwesey | false | null | t3_1nwesey | /r/LocalLLaMA/comments/1nwesey/you_can_run_llms_in_your_own_pendrive_now_crazy/ | false | false | 0 | null | ||
Models for creating beautiful diagrams and flowcharts? | 8 | I’m utterly useless at anything visual or design oriented, yet frequently find the need to create diagrams, flow charts, etc. This is tedious and I detest it.
I’d like to be able to describe in a prompt the diagrams I wish to create and then have a model create it.
Is this a thing? All I seem to find are image models that generate waifus. Thanks! | 2025-10-02T20:02:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nweb8v/models_for_creating_beautiful_diagrams_and/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nweb8v | false | null | t3_1nweb8v | /r/LocalLLaMA/comments/1nweb8v/models_for_creating_beautiful_diagrams_and/ | false | false | self | 8 | null |
What can I use to make a flyer? | 2 | What can I use to make a flyer? I have two images I want to use in that flyer, and some text.
I gave it to nano banana... and the truth is, he created a good one, but then it's impossible to edit it, and at the same time, he makes spelling mistakes that he won't correct even if I tell him a thousand times.
What can I use locally to do this in a "chatty" way, like highlight the title, add a shadow to this, or lift that from the background.
Or isn't this possible yet?
(I have very little aesthetic judgment for this... which is why a machine like this is perfect for me.
If I don't provide the images, they'll make a flyer, but I just want to use my own images.)
I dont speak esperanto. | 2025-10-02T18:59:31 | https://www.reddit.com/r/LocalLLaMA/comments/1nwcmtd/what_can_i_use_to_make_a_flyer/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwcmtd | false | null | t3_1nwcmtd | /r/LocalLLaMA/comments/1nwcmtd/what_can_i_use_to_make_a_flyer/ | false | false | self | 2 | null |
Will Qwen3-VL be forgotten like others? | 10 | This is one big VL model I hope will get support in llama.cpp but I don’t know if it’ll happen.
Ernie-4.5-VL-424B-A47B, InternVL3.5-241B-A28B, dots.vlm1.inst also didn’t get support.
What do you guys think? | 2025-10-02T18:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nwcbjm/will_qwen3vl_be_forgotten_like_others/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwcbjm | false | null | t3_1nwcbjm | /r/LocalLLaMA/comments/1nwcbjm/will_qwen3vl_be_forgotten_like_others/ | false | false | self | 10 | null |
FULL v0 System Prompt and Internal Tools [UPDATED] | 3 | Latest update: 02/10/2025
I’ve published the FULL Updated v0 by Vercel System prompt and Internal tools. Over 14,000 tokens.
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools) | 2025-10-02T18:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nwcbj6/full_v0_system_prompt_and_internal_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwcbj6 | false | null | t3_1nwcbj6 | /r/LocalLLaMA/comments/1nwcbj6/full_v0_system_prompt_and_internal_tools_updated/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=108&crop=smart&auto=webp&s=76938dc9eb5930539a56fdaab49af51eea65de98', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=216&crop=smart&auto=webp&s=511d87d9c80331caca523eb505e8cc82f979bf01', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=320&crop=smart&auto=webp&s=ee2049dc1f725e9b5255c9a05baba98d54e7a5a1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=640&crop=smart&auto=webp&s=b98f489159fcaf834a7e7a3a29ded3de73da515d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=960&crop=smart&auto=webp&s=dd7fbb22bb732c6144584b4a0172a0639a0d6328', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?width=1080&crop=smart&auto=webp&s=77fb8054628510be0829ef84b4ff28a6780273f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_W-IpE8FIjm06eABDQXr5dPsgfq0uz8mc-2CA4pmC80.png?auto=webp&s=b81ebf58bae555380617690911bc55c8e930756d', 'width': 1200}, 'variants': {}}]} |
Apertus model implementation has been merged into llama.cpp | 42 | I think Piotr can now fully focus on Qwen Next ;)
model description:
Apertus is a 70B and 8B parameter language model designed to push the boundaries of fully-open multilingual and transparent models. The model supports over 1000 languages and long context, it uses only fully compliant and open training data, and achieves comparable performance to models trained behind closed doors.
[https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509)
[https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-8B-Instruct-2509)
| 2025-10-02T18:37:32 | https://github.com/ggml-org/llama.cpp/pull/15852 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nwc1oc | false | null | t3_1nwc1oc | /r/LocalLLaMA/comments/1nwc1oc/apertus_model_implementation_has_been_merged_into/ | false | false | default | 42 | {'enabled': False, 'images': [{'id': 'WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=108&crop=smart&auto=webp&s=3d2b5169276b5321ac995879afffe3489203e572', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=216&crop=smart&auto=webp&s=7f5a8ae5c02158c87ace3ea2e965a0f76926a19d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=320&crop=smart&auto=webp&s=4114d7a482ef43305aef5366cd21c2a8a5919638', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=640&crop=smart&auto=webp&s=23de1c1602b1561cace347dd342baae689fd7c5c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=960&crop=smart&auto=webp&s=dccbbed52ddeb228195ae96aabcae7b2c48a1005', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?width=1080&crop=smart&auto=webp&s=6a3e98b54ec75722f253daef72c9b9fda3db0eb9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WBeE9GPvyJdOEySnojQ_o2A9ys0na0K0XH7uI9iyd_o.png?auto=webp&s=610fdc2f3d7a28562b5927f8ec48aaedcc383b19', 'width': 1200}, 'variants': {}}]} |
Balancing local power with outside flexibility | 0 | I started off running everything locally with LLaMA because I liked the idea of full control and not depending on external servers. But recently I ran into a situation where I needed to prepare a client-facing report quickly, and my local setup just wasn’t giving me the polish I wanted.
Out of curiosity, I tried running the draft locally and then refined it with GreenDaisy Ai. The workflow felt surprisingly smooth, almost like using LLaMA as my rough sketch tool and GreenDaisy Ai as the brush that added the finishing touches.
It made me wonder: in the future, do you think most people will stick purely with local models, or will hybrid setups (local + external AI) become the standard way of working? | 2025-10-02T18:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/1nwbgb4/balancing_local_power_with_outside_flexibility/ | One-Negotiation-8553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwbgb4 | false | null | t3_1nwbgb4 | /r/LocalLLaMA/comments/1nwbgb4/balancing_local_power_with_outside_flexibility/ | false | false | self | 0 | null |
Hardcoding prompts doesn’t scale. How are you handling it? | 2 | Working on a couple of AI projects, I ran into the same issue. Inlining prompts with the code works only for POCs. As soon as it became a serious project, managing all the prompts while keeping the code clean and maintainable was a struggle.
I ended up moving prompts out of code and into a managed workflow. Way less painful.
I wrote up some thoughts and shared a small open-source tool that helps. I’ll drop the link in a comment.
Curious what others here do for prompt management in their apps. 🚀 | 2025-10-02T18:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nwbb3j/hardcoding_prompts_doesnt_scale_how_are_you/ | Mark_Upleap_App | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwbb3j | false | null | t3_1nwbb3j | /r/LocalLLaMA/comments/1nwbb3j/hardcoding_prompts_doesnt_scale_how_are_you/ | false | false | self | 2 | null |
Has anyone tried baking the tool-use and other static instructions into the model or a LoRA? | 2 | Basically what the title says. I imagine with some augmentations and paraphrasing (to produce a sufficient dataset) the model could be trained to act as if the instructions are present in the prompt, without them actually filling the context. I haven't gone through the literature on that question yet but I figured asking for first-hand experience would be more relevant anyway. | 2025-10-02T17:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nwav3e/has_anyone_tried_baking_the_tooluse_and_other/ | stargazer_w | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwav3e | false | null | t3_1nwav3e | /r/LocalLLaMA/comments/1nwav3e/has_anyone_tried_baking_the_tooluse_and_other/ | false | false | self | 2 | null |
AMA with Prime Intellect — Ask Us Anything! | 95 | # AMA with Prime Intellect — Ask Us Anything!
Hi [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/)! We’re excited for this AMA, thank you for having us.
I’m Kalomaze (u/kindacognizant), a researcher at Prime Intellect, the lab behind:
* Distributed training [efforts](https://www.primeintellect.ai/#research) including INTELLECT-1 + INTELLECT-2
* Open-source RL efforts including [verifiers](https://github.com/PrimeIntellect-ai/verifiers), [prime-rl](https://github.com/PrimeIntellect-ai/prime-rl), and the [Environments Hub](https://app.primeintellect.ai/dashboard/environments)
Our other participants today:
* Sami Jaghouar, u/samsja19
* Will Brown, u/willccbb
* Jack Min Ong, u/Cinamic
* Mika Senghaas, u/mikasenghaas
**The AMA will run from 11:00 AM – 2:00 PM PST, with the Prime Intellect team continuing to follow up on questions over the next 48 hours.** | 2025-10-02T17:47:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nwaoyd/ama_with_prime_intellect_ask_us_anything/ | kindacognizant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwaoyd | false | null | t3_1nwaoyd | /r/LocalLLaMA/comments/1nwaoyd/ama_with_prime_intellect_ask_us_anything/ | false | true | self | 95 | {'enabled': False, 'images': [{'id': 'bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=108&crop=smart&auto=webp&s=8dba5854b6443ec1b816d10abf653cf1536a99a9', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=216&crop=smart&auto=webp&s=3a383811fc3bbeaf3805763612717ecd96073c65', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=320&crop=smart&auto=webp&s=d3f7dc2d5db7fb76d9bdefc5bed38bb702936851', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=640&crop=smart&auto=webp&s=5d58aa0c2e1cb633d0d512c923faf7709200642c', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=960&crop=smart&auto=webp&s=cf7c7ef02dfb4b8fe2f2a1a534472c0a627e9755', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?width=1080&crop=smart&auto=webp&s=1e022172593066a96fb7fb5008b16012ac088d30', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/bY8xxOYcASihwJSisN5GQP8lgqycM3rMPEywV1CCw1g.png?auto=webp&s=2ba0dd39d430d3f94f9702bd6ece46ee91cc5621', 'width': 1200}, 'variants': {}}]} |
OpenAI getting worse! 4o routing to GPT-5 without consent | 0 | Evidence of model routing mismatch: Selected 4o, behavior suggests GPT-5
I selected GPT-4o, but response patterns strongly suggest I'm being routed to GPT-5 (or another model variant).
**Observable differences:**
- Response structure inconsistent with 4o behavior
- Latency patterns don't match 4o
- Output style has shifted significantly
This is the same labeling/routing issue that OpenAI had before. If the company isn't learning from past failures, that's a serious problem.
**Request:**
If routing users to different models than selected, at minimum:
- Disclose it explicitly in the UI
- Give users opt-out control
- Stop calling it by the model name they didn't choose
Has anyone else documented this? Looking for others who've noticed inconsistent model behavior relative to selection. | 2025-10-02T17:44:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nwalzl/openai_getting_worse_4o_routing_to_gpt5_without/ | PieOutrageous4865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwalzl | false | null | t3_1nwalzl | /r/LocalLLaMA/comments/1nwalzl/openai_getting_worse_4o_routing_to_gpt5_without/ | false | false | self | 0 | null |
FULL v0 System Prompt and Internal Tools [UPDATED] | 0 | Latest update: 02/10/2025
I’ve published the FULL Updated v0 by Vercel System prompt and Internal tools. Over 14,000 tokens.
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools) | 2025-10-02T17:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nwa031/full_v0_system_prompt_and_internal_tools_updated/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nwa031 | false | null | t3_1nwa031 | /r/LocalLLaMA/comments/1nwa031/full_v0_system_prompt_and_internal_tools_updated/ | false | false | self | 0 | null |
Hi, how’s inference looking now in AMD GPUs? I don’t have one so that’s why asking here. | 13 | Also, what poor man’s way to 256 GB VRAM that works well for inference? Is 11 3090s the only way to get there? 🥲 | 2025-10-02T17:15:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nw9tny/hi_hows_inference_looking_now_in_amd_gpus_i_dont/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw9tny | false | null | t3_1nw9tny | /r/LocalLLaMA/comments/1nw9tny/hi_hows_inference_looking_now_in_amd_gpus_i_dont/ | false | false | self | 13 | null |
Treinando Modelos Locais com RAG para Análise de Casos Jurídicos | 0 | Estou há dias procurando um programa que atenda às minhas necessidades. Antes, eu treinava um modelo local e tentava incluir RAG, mas descobri que precisava rodá-lo em Python. Testei outros, mas nenhum me satisfez.
Agora estou experimentando o AnythingLLM; instalei na máquina e baixei o Ollama para usar seus modelos. Na configuração, coloquei modelos Ollama na nuvem para testar com maior rapidez o sistema RAG. Na preferência de LLM, configurei o kimi-k2 cloud; nas configurações de chat, o gpt-oss:120b-cloud; e na configuração de agente, o deepseek-v3.1:671b-cloud, todos do Ollama. Atualmente, meu banco vetorial contém 250.518 vetores, e estou usando 15 como contagem máxima de trechos de contexto. O modo chat está configurado para CONSULTA com histórico de 30.
Para testar, carreguei um arquivo PDF com uma petição inicial que fiz para uma cliente. Usei diversos modelos na nuvem (são 5 no total) e gostei do resultado, mas notei que o programa às vezes apresenta falhas ao anexar arquivos para análise. As respostas tendem a ser muito concisas, sem explicar a correlação do que foi analisado com a nossa tese. Por vezes, ele apenas cita princípios ou alguma lei específica.
Alguém já passou por isso ou tem sugestões de configuração e melhorias? | 2025-10-02T16:44:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nw8yot/treinando_modelos_locais_com_rag_para_análise_de/ | HollyNatal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw8yot | false | null | t3_1nw8yot | /r/LocalLLaMA/comments/1nw8yot/treinando_modelos_locais_com_rag_para_análise_de/ | false | false | self | 0 | null |
Training or Guide for multi-gpus | 5 | Do you know any guides or training on anything related to GPUs, hardware, configuration, specifications, etc., for creating a multi GPUs setup in parallel for AI? I have Udemy Business, but I can't really find any training in that sense. | 2025-10-02T16:41:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nw8w8b/training_or_guide_for_multigpus/ | Outrageous-Pea9611 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw8w8b | false | null | t3_1nw8w8b | /r/LocalLLaMA/comments/1nw8w8b/training_or_guide_for_multigpus/ | false | false | self | 5 | null |
A tiny receipt per AI run: κ (stress), Δhol (drift), and guards—in plain JSON. | 0 | I built a receipts-first observability layer for agent runs. It writes a small JSON file per run with:
• κ (stress), Δhol (drift)
• UCR (unsupported-claim ratio), cycles, contradictions (X)
• A calibrated green/amber/red status + why/try-next
It’s stdlib-only, works with local LLMs, and drops cleanly into CI. The goal isn’t “truth,” it’s fast triage and a portable audit trail.
Light check (24 labeled cases): R ≈ 0.77 / P ≈ 0.56. Enough to point humans and heavier evals.
Repos:
• COLE (guard + page): https://github.com/terryncew/COLE-Coherence-Layer-Engine-
• OpenLine Core (server + example): https://github.com/terryncew/openline-core
If you try it, I’d love two notes back:
1. Did setup take <10 minutes?
2. Did the receipts help you find anything you already suspected? | 2025-10-02T16:38:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nw8tar/a_tiny_receipt_per_ai_run_κ_stress_δhol_drift_and/ | Both-Ad-5476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw8tar | false | null | t3_1nw8tar | /r/LocalLLaMA/comments/1nw8tar/a_tiny_receipt_per_ai_run_κ_stress_δhol_drift_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=108&crop=smart&auto=webp&s=2f1bb724147763b2e5e7e38a46f667586d435f14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=216&crop=smart&auto=webp&s=7b45c1d544600697696d77f3fcc0fb2afcb1e173', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=320&crop=smart&auto=webp&s=399e8161ca7455860e7a7a1ec54341acccdc1fa5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=640&crop=smart&auto=webp&s=364de35ab265b56ea05da65bc1236e59c5bde5ae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=960&crop=smart&auto=webp&s=84d2e69e8a0fa85447af585a47d35849adfa9e9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?width=1080&crop=smart&auto=webp&s=9b2618bc21b588ff3ae05eaf8d87f8f4fea82706', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wMvl-RGMItzy_RlYNXEidaEtYz9WJCvty2rRmFFYDHQ.png?auto=webp&s=ed9f8f0ccb9556b421c935f8def862ca61a0e692', 'width': 1200}, 'variants': {}}]} |
[Advice] Sidecar GPU box for local LLMs | 4 | Hello everyone!
I’m currently considering purchasing the bundle showing above to help with my AI projects.I will be adding my second rtx5090 to it and then connecting it to my main PC that has an RTX5090, 128gb ram, AMD Ryzen 7 9800X3D, Gigabyte X870E AORUS PRO AMD using a network switch. I also have a 2070 super sitting in the closet so I’m thinking of adding it to my new build with the second 5090. Let me know what you guys think and if you have better recommendations or approaches, please feel free to mention them!
| 2025-10-02T16:32:42 | alpha-wolf64 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw8nmi | false | null | t3_1nw8nmi | /r/LocalLLaMA/comments/1nw8nmi/advice_sidecar_gpu_box_for_local_llms/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'bd5xofq67qsf1', 'resolutions': [{'height': 181, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=108&crop=smart&auto=webp&s=502b0e30b87efad888150720564e77f39f640671', 'width': 108}, {'height': 363, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=216&crop=smart&auto=webp&s=4996816e313387d812ea4333cf831f5fee89c7a5', 'width': 216}, {'height': 538, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=320&crop=smart&auto=webp&s=564d2584734a249cf26927a59aed034563769c40', 'width': 320}, {'height': 1076, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=640&crop=smart&auto=webp&s=8cf51b316b642d4ce99323daa35a765aad8c832b', 'width': 640}, {'height': 1615, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=960&crop=smart&auto=webp&s=a8701a9e870154d2ac0ea5a21775fe5020cca505', 'width': 960}, {'height': 1817, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?width=1080&crop=smart&auto=webp&s=0417ff80e5a01877ba5179e573720f4c00f90642', 'width': 1080}], 'source': {'height': 2221, 'url': 'https://preview.redd.it/bd5xofq67qsf1.jpeg?auto=webp&s=2ff893e3bb71234881ac74e8721835fa855f25a3', 'width': 1320}, 'variants': {}}]} | |
Ring Flash 2.0 104B A6B with Linear Attention released a few days ago | 81 | 2025-10-02T16:28:11 | https://huggingface.co/inclusionAI/Ring-flash-linear-2.0 | FullOf_Bad_Ideas | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nw8jbn | false | null | t3_1nw8jbn | /r/LocalLLaMA/comments/1nw8jbn/ring_flash_20_104b_a6b_with_linear_attention/ | false | false | default | 81 | {'enabled': False, 'images': [{'id': 'arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=108&crop=smart&auto=webp&s=af5b36690e40f7b5290dab3fa99507729549130a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=216&crop=smart&auto=webp&s=ac4fce05300eb4a02fe37c0b1709be4ddafa5fdf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=320&crop=smart&auto=webp&s=14a2b7d54583da5bbabcd1e79cde2b0752b7d465', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=640&crop=smart&auto=webp&s=af2d40f335ae44a32d6d45dc30487a7c16511fa4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=960&crop=smart&auto=webp&s=cc74ecf615a0f718dbe4fd955b620476947564d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?width=1080&crop=smart&auto=webp&s=ca64a0da883d9c64b2736743141ea91627f64e27', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/arTReyF0GyVAVaEDNDlfVvJyFYJ0q7EWSfcCybSgWt0.png?auto=webp&s=fccbc26ed8b566eda2c2c671c083669bbfdc7aad', 'width': 1200}, 'variants': {}}]} | |
Granite 4.0 Micro (3.4B) running 100% locally in your browser w/ WebGPU acceleration | 324 | 2025-10-02T16:20:56 | https://v.redd.it/14cmif4v4qsf1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw8c6y | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/14cmif4v4qsf1/DASHPlaylist.mpd?a=1762014068%2CODc4ZmI0ZDNmY2JmNGY4ZjQ1ODY4N2E0YTQ3ZGIxZjNkNzNjOWMzZjA2OGNmZDVhZjBmNzE3Mjg0YTA3MjZhZg%3D%3D&v=1&f=sd', 'duration': 34, 'fallback_url': 'https://v.redd.it/14cmif4v4qsf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/14cmif4v4qsf1/HLSPlaylist.m3u8?a=1762014068%2CYWVmMzNhMzE4ZGVlMGRlNDM0N2E0OTg0NmU2OWI3ZjM1ODJjYmRjMjgwZjc4OGQ5NmRmODVlMzFmODRiMDU1Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/14cmif4v4qsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1nw8c6y | /r/LocalLLaMA/comments/1nw8c6y/granite_40_micro_34b_running_100_locally_in_your/ | false | false | 324 | {'enabled': False, 'images': [{'id': 'aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=108&crop=smart&format=pjpg&auto=webp&s=1e93506104750af27bfd28f484d6ddb79690e33a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=216&crop=smart&format=pjpg&auto=webp&s=e2a0ae1d4459043d9d483f91b05175f3fb8217ab', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=320&crop=smart&format=pjpg&auto=webp&s=8fec0e40f45006e81d7fcd2138b41c5af5f63b94', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=640&crop=smart&format=pjpg&auto=webp&s=d28d7f5fe31198e4d752b0109d857cf3bb2301fa', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=960&crop=smart&format=pjpg&auto=webp&s=10a769fa84759b349c33d5ae558d28b844cc795e', 'width': 960}, {'height': 1080, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0106ed54579eda2a14f2aa1e856e6d2a5fefc660', 'width': 1080}], 'source': {'height': 1652, 'url': 'https://external-preview.redd.it/aG1yZ2k0M3Y0cXNmMTjBkk0zpHe1cUKuUpjTdKuc-czjYGWzckCtqtrm-IdD.png?format=pjpg&auto=webp&s=372941d4443152e460de3a2d13d5bd65b1e431b9', 'width': 1652}, 'variants': {}}]} | ||
I used Llama 3.3 70b to build cabtbot of Examsprint AI | 0 | I am Aadarsh Pandey 13y/o from India. I am the developer and founder of Examsprint AI.
features of Examsprint AI are:
Chapters and topics list
Direct NCERT Links
Practice questions in form of Flashcards specialised for each chapter[For Class 11 and 12]
Personal AI chatbot to SOLVE any type of Questions regarding Physics , Chemistry , BIology and Maths
TOPPER'S Notes[ Variety from class 9 to 12]
AI chatbot that gives visual representation with textual answer for better understanding
JEE blueprint
Neet blueprint
Boards blueprint
School blueprints
Specialised TOPPER'S HANDWRITTEN NOTES with Interactive AI notes for better understanding.
NOTES ARE AVAILABLE IN BOTH VIEWABLE AND FREE DOWNLOADABLE FORMS.
NCERT BACK EXERCISE SOLUTIONS
SOF OLYMPIADS PYQ COMING SOON
FORMULA SHEET COMING SOON
BOARDS ARENA COMING SOON
STUDY AND LIGHT MODE PRESENT
JEE/NEET ARENA COMING SOON
ABSOLUTELY FREE OF COST
CAN USE WITHOUT SIGNING IN
FAQ's for INSTANT DOUBT-solving regarding USE and WEBSITE
Upto date calendar for instant date previews | 2025-10-02T15:53:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nw7lsr/i_used_llama_33_70b_to_build_cabtbot_of/ | Training-Quote-8752 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw7lsr | false | null | t3_1nw7lsr | /r/LocalLLaMA/comments/1nw7lsr/i_used_llama_33_70b_to_build_cabtbot_of/ | false | false | self | 0 | null |
Stretching Claude Pro with GLM Lite as backup | 13 | So I'm in a country where $20/month is actually serious money, let alone $100-200. I grabbed Pro with the yearly deal when it was on promo. I can't afford adding another subscription like Cursor or Codex on top of that.
Claude's outputs are great though, so I've basically figured out how to squeeze everything I can out of Pro within those 5-hour windows:
I plan a lot. I use Claude Web sometimes, but mostly Gemini 2.5 Pro on AI Studio to plan stuff out, make markdown files, double-check them in other chats to make sure they're solid, then hand it all to Claude Code to actually write.
I babysit Claude Code hard. Always watching what it's doing so I can jump in with more instructions or stop it immediately if needed. Never let it commit anything - I do all commits myself.
I'm up at 5am and I send a quick "hello" to kick off my first session. Then between 8am and 1pm I can do a good amount of work between my first session and the next one. I do like 3 sessions a day.
I almost never touch Opus. Just not worth the usage hit.
Tracking usage used to suck and I was using "Claude Usage Tracker" (even donated to the dev), but now Anthropic gave us the /usage thing which is amazing. Weirdly I don't see any Weekly Limit on mine. I guess my region doesn't have that restriction? Maybe there aren't many Claude users over here.
Lately, I had too much work and I was seriously considering (really didn't want to) getting a second account.
I tried Gemini CLI and Qwen since they're free but... no, they were basically useless for my needs.
I did some digging and heard about GLM 4.6. Threw $3 at it 3 days ago to test for a month and honestly? It's good. Like really good for what I need.
Not quite Sonnet 4.5 level but pretty close. I've been using it for less complex stuff and it handles it fine.
I'll definitely getting a quarterly or yearly subscription for their Lite tier. It's basically the Haiku that Anthropic should give us. A capable and cheap model.
It's taken a huge chunk off my Claude usage and now the Pro limit doesn't stress me out anymore.
TL;DR: If you're on a tight budget, there are cheap but solid models out there that can take the load off Sonnet for you. | 2025-10-02T15:40:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nw78g0/stretching_claude_pro_with_glm_lite_as_backup/ | Psychological_Box406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw78g0 | false | null | t3_1nw78g0 | /r/LocalLLaMA/comments/1nw78g0/stretching_claude_pro_with_glm_lite_as_backup/ | false | false | self | 13 | null |
We built this open-source LLM Inference project to boost context generation by up to 15x and now it is being implemented by NVIDIA Dynamo! | 45 | Hi everyone, our team has been working nonstop on our open source project, LMCache, to reduce repetitive computation in LLM inference and make systems serve more people (3x more throughput in chat applications) and recently it has been implemented by NVIDIA's Inference project Dyanamo.
In LLM serving, often when processing large documents, KV Cache context gets overwhelmed and begins to evict precious context requiring the model to reprocess context resulting in much slower speeds. With LMCache, KV Caches get stored outside of just the high bandwidth memory into places like DRAM, disk, or other storages available.
Ask us anything! We would love it if you check us out, we recently hit 5,000 stars on GitHub and want to continue our growth!
Github: [https://github.com/LMCache/LMCache](https://github.com/LMCache/LMCache)
Early industry adopters:
* OSS projects: vLLM production stack, Redhat llm-d, KServe, Nvidia Dynamo.
* Commercial: Bloomberg, AWS, Tencent, Redis, BentoML, Weka, FlowGPT, GMI, …
* Work in progress: Character AI, GKE, Cohere, Baseten, Novita, …
Full Technical Report:
[https://lmcache.ai/tech\_report.pdf](https://lmcache.ai/tech_report.pdf) | 2025-10-02T15:35:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nw74ec/we_built_this_opensource_llm_inference_project_to/ | ExplanationEven9787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw74ec | false | null | t3_1nw74ec | /r/LocalLLaMA/comments/1nw74ec/we_built_this_opensource_llm_inference_project_to/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=108&crop=smart&auto=webp&s=2a2b0c30c02acb672ea4e507e601b38a63256fc9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=216&crop=smart&auto=webp&s=918583b5766df9cfdf9206b1bee393d1237ca73d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=320&crop=smart&auto=webp&s=6d4c8bc9b6bc256e5db0bd39e022962c077557ae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=640&crop=smart&auto=webp&s=cb0f451145ca7f89b95f8c5cda0b4b230f9c3209', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=960&crop=smart&auto=webp&s=166137d247ad24d9c4a3a9dcaa76eecdbf7b23b8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?width=1080&crop=smart&auto=webp&s=0e405a6a9533c03a448592f692f20ad51505b64a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DXzH-PeRQRO9zL67kZGD43KzJTTmGVi-xF0krWwZH2w.png?auto=webp&s=6535621dcaf3eb682148ced1cc95b8a7ecb96ed9', 'width': 1200}, 'variants': {}}]} |
Will fine-tuning LLaMA 3.2 11B Instruct on text-only data degrade its vision capabilities? | 3 | I'm planning to fine-tune LLaMA 3.2 11B Instruct on a JSONL dataset of domain-specific question-answer pairs — purely text, no images. The goal is to improve its instruction-following behavior for specialized text tasks, while still retaining its ability to handle multimodal inputs like OCR and image-based queries.
My concern: will this fine-tuning lead to *multimodal forgetting*?
The [NeurIPS 2024 paper](https://neurips.cc/virtual/2024/poster/93663) discusses how training on more image-text pairs can cause *text-only forgetting*. So I’m wondering — does the reverse happen too? If I train only on text, will the model lose its ability to process images or degrade in tasks like OCR?
Has anyone observed this kind of modality drift or tested the impact of unimodal fine-tuning on multimodal performance? | 2025-10-02T15:33:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nw71uz/will_finetuning_llama_32_11b_instruct_on_textonly/ | PravalPattam12945RPG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw71uz | false | null | t3_1nw71uz | /r/LocalLLaMA/comments/1nw71uz/will_finetuning_llama_32_11b_instruct_on_textonly/ | false | false | self | 3 | null |
Granite-4.0 running on latest Qualcomm NPUs (with benchmarks) | 38 | Hi all — I’m Alan from Nexa AI. Granite-4.0 just dropped, and we got **Granite-4.0-Micro (3B)** running on NPU from Qualcomm’s newest platforms (Day-0 support!)
* Snapdragon X2 Elite PCs
* Snapdragon 8 Elite Gen 5 smartphones
It also works on CPU/GPU through the same SDK. Here are some early benchmarks:
* X2 Elite NPU — 36.4 tok/s
* 8 Elite Gen 5 NPU — 28.7 tok/s
* X Elite CPU — 23.5 tok/s
Curious what people think about running Granite on NPU.
Follow along if you’d like to see more models running on NPU — and would love your feedback.
👉 GitHub: [github.com/NexaAI/nexa-sdk](https://github.com/NexaAI/nexa-sdk) If you have a Qualcomm Snapdragon PC, you can run Granite 4 directly on NPU/GPU/CPU using NexaSDK. | 2025-10-02T15:19:48 | https://v.redd.it/a7zdec1utpsf1 | AlanzhuLy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw6ot2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/a7zdec1utpsf1/DASHPlaylist.mpd?a=1762010401%2CMmJkZWFmZTlmOWE5ZGY5ZDI2NTVlY2RhZjIyOTgxYzU2NDM4M2UxZmY4ZjM0MWQwOWE0NjYzNTA0MTJmNjE5Zg%3D%3D&v=1&f=sd', 'duration': 85, 'fallback_url': 'https://v.redd.it/a7zdec1utpsf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/a7zdec1utpsf1/HLSPlaylist.m3u8?a=1762010401%2CMDRjNzIwMDkwM2UxYzRmNmFlNjQwMWM5NmU3NzliMWYzODMxYzdkYmZmNTQ3MDcyZmQ2OWViOTE2MDE3OGM2Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/a7zdec1utpsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1746}} | t3_1nw6ot2 | /r/LocalLLaMA/comments/1nw6ot2/granite40_running_on_latest_qualcomm_npus_with/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=108&crop=smart&format=pjpg&auto=webp&s=9768958a84b60071d69e4a5cc51e52c25eb7a3ac', 'width': 108}, {'height': 133, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=216&crop=smart&format=pjpg&auto=webp&s=cc041578bc1de79778ef138350759028ba7113c0', 'width': 216}, {'height': 197, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=320&crop=smart&format=pjpg&auto=webp&s=83b71f656e6b40c271a106ef19c8bc87f5265fd9', 'width': 320}, {'height': 395, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=640&crop=smart&format=pjpg&auto=webp&s=2c31f94407e1dc492f2eee1dace197eeb1ca4052', 'width': 640}, {'height': 593, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=960&crop=smart&format=pjpg&auto=webp&s=bc4c8cc8975c368c14d532fbf8b12cb5fa8e4a49', 'width': 960}, {'height': 668, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?width=1080&crop=smart&format=pjpg&auto=webp&s=272b7f363350d72fbffb5876f07f9275cfd521f1', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://external-preview.redd.it/MjFtOWFkMXV0cHNmMdg4lbHLbhrLkzDfVtbcBjXS_Swv3usnXgduLh9snYEo.png?format=pjpg&auto=webp&s=cd97e6ac71c31abad4dfe7aa9787d675b30f9cf1', 'width': 2328}, 'variants': {}}]} | |
Windows App/GUI for MLX, vLLM models? | 1 | For GGUF, we have so many Open source GUIs to run models great. I'm looking for Windows App/GUI for MLX & vLLM models. Even WebUI fine. Command line also fine(Recently started learning llama.cpp). Non-Docker would be great. I'm fine if it's not pure Open source in worst case.
The reason for this is I heard that MLX, vLLM are faster than GGUF(in some cases). I saw some threads on this sub related to this(I did enough search before posting this question, there's not much useful answers on those old threads).
With my 8GB VRAM(and 32GB RAM), I could run only upto 14B GGUF models(and upto 30B MOE models). There are some models I want to use, but I couldn't due to model size which's tooo big for my VRAM.
For example,
Mistral series 20B+, Gemma 27B, Qwen 32B, Llama3.3NemotronSuper 49B, Seed OSS 36B, etc.,
Hoping to run these models at bearable speed using tools you're gonna suggest here.
Thanks. | 2025-10-02T15:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/1nw6cdl/windows_appgui_for_mlx_vllm_models/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw6cdl | false | null | t3_1nw6cdl | /r/LocalLLaMA/comments/1nw6cdl/windows_appgui_for_mlx_vllm_models/ | false | false | self | 1 | null |
Open source speech foundation model that runs locally on CPU in real-time | 81 | https://reddit.com/link/1nw60fj/video/3kh334ujppsf1/player
We’ve just released Neuphonic TTS Air, a lightweight open-source speech foundation model under Apache 2.0.
The main idea: frontier-quality text-to-speech, but small enough to run in realtime on CPU. No GPUs, no cloud APIs, no rate limits.
Why we built this: - Most speech models today live behind paid APIs → privacy tradeoffs, recurring costs, and external dependencies. - With Air, you get full control, privacy, and zero marginal cost. - It enables new use cases where running speech models on-device matters (edge compute, accessibility tools, offline apps).
Git Repo: [https://github.com/neuphonic/neutts-air](https://github.com/neuphonic/neutts-air)
HF: [https://huggingface.co/neuphonic/neutts-air](https://huggingface.co/neuphonic/neutts-air)
Would love feedback from on performance, applications, and contributions. | 2025-10-02T14:54:43 | https://www.reddit.com/r/LocalLLaMA/comments/1nw60fj/open_source_speech_foundation_model_that_runs/ | TeamNeuphonic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw60fj | false | null | t3_1nw60fj | /r/LocalLLaMA/comments/1nw60fj/open_source_speech_foundation_model_that_runs/ | false | false | self | 81 | {'enabled': False, 'images': [{'id': '3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=108&crop=smart&auto=webp&s=7ae7c08a4ffc88adc0ee43ee0e2b83dc203f9d64', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=216&crop=smart&auto=webp&s=124d99b0643c113365ddc970e31b3823347528fe', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=320&crop=smart&auto=webp&s=3b40d961e41bb0e3f6343f9349c41ad8ac22645f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=640&crop=smart&auto=webp&s=7bfeb812f14c0b82e510265b807d168b2af385bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=960&crop=smart&auto=webp&s=2e874cd54842a6f21e87d2658b85e73ed92c544a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?width=1080&crop=smart&auto=webp&s=659efb450c588bfb8511b87b0204196ceeb0c2c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3w13BgLMXQ4-v0J3QSPqnnHAcC8U3HjNheDu4QFAWrk.png?auto=webp&s=6859d27c93097f61667655f82138b02ca62b454f', 'width': 1200}, 'variants': {}}]} |
It's been a long time since Google released a new Gemma model. | 327 | I was here using Gemma 3 4B, a model that I can confidently say has so far been the best of its size, something truly usable: it’s super coherent in Portuguese (not just in English and Chinese) and even gives me solid image recognition. It allowed me to process personal stuff without having to throw it into some obscure cloud. After seeing so many amazing releases, but with little focus on being multilingual, I deeply missed seeing Google release a new Gemma. And judging by the pace of AI evolution, it’s been about 35 years since Google last released a new Gemma, let’s be honest. | 2025-10-02T14:37:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nw5kkc/its_been_a_long_time_since_google_released_a_new/ | ArcherAdditional2478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw5kkc | false | null | t3_1nw5kkc | /r/LocalLLaMA/comments/1nw5kkc/its_been_a_long_time_since_google_released_a_new/ | false | false | self | 327 | null |
Scaling Local LLMs for Emotional AGI: Challenges and Breakthroughs | 1 | [removed] | 2025-10-02T14:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nw5gn6/scaling_local_llms_for_emotional_agi_challenges/ | Niodoodotcom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw5gn6 | false | null | t3_1nw5gn6 | /r/LocalLLaMA/comments/1nw5gn6/scaling_local_llms_for_emotional_agi_challenges/ | false | false | self | 1 | null |
Introducing Onyx - a fully open source chat UI with RAG, web search, deep research, and MCP | 457 | 2025-10-02T14:18:05 | https://v.redd.it/vklzqk9bipsf1 | Weves11 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw52ad | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vklzqk9bipsf1/DASHPlaylist.mpd?a=1762006697%2CNzMwODAzMThmOWViYjk1MjZlMmQ3YWE5ZjUwMzdjYzY3MTkzNzRlYzhkYzg5ZDAwMDg4YWQ3YmY0MGM2YjljMQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/vklzqk9bipsf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vklzqk9bipsf1/HLSPlaylist.m3u8?a=1762006697%2CZTMxYzE0Y2IwZWEyMTVmNzVmYjJmYWQzZjQ4ZTI1MGE5OTIyMjA3NTk0ZGJjMjIwMDVmODc0ZDIzMDFhNDFiYw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vklzqk9bipsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1874}} | t3_1nw52ad | /r/LocalLLaMA/comments/1nw52ad/introducing_onyx_a_fully_open_source_chat_ui_with/ | false | false | 457 | {'enabled': False, 'images': [{'id': 'ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=108&crop=smart&format=pjpg&auto=webp&s=cfbf691a5a6d9e0e489300ad34eafd0ff3dc7d11', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=216&crop=smart&format=pjpg&auto=webp&s=b315b4908443a9f2160e15224327516bbe2b0abc', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=320&crop=smart&format=pjpg&auto=webp&s=c1b9b3412dc7fe5920972adc007586b82a2aaf03', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=640&crop=smart&format=pjpg&auto=webp&s=2cb15a410c908fa6acf1ac18c26ec120c24c17fd', 'width': 640}, {'height': 553, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=960&crop=smart&format=pjpg&auto=webp&s=52a9dc1e52e64488705292a8305796b666ca156a', 'width': 960}, {'height': 622, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a6e2f606e33680cc262646c2edb3490d9c3bd343', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ODh3bjRsOWJpcHNmMcggpjsEMzF-IE1l8vJahmQmeeToARZwc_P-uEOcis7p.png?format=pjpg&auto=webp&s=9eb14518b0276354b683cea86d9eb11d32d61a66', 'width': 1874}, 'variants': {}}]} | ||
Best quality local tts that runs cpu only | 3 | What is the highest quality audio that could be generated with only a CPU and integrated gpu? | 2025-10-02T14:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nw518k/best_quality_local_tts_that_runs_cpu_only/ | GotHereLateNameTaken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw518k | false | null | t3_1nw518k | /r/LocalLLaMA/comments/1nw518k/best_quality_local_tts_that_runs_cpu_only/ | false | false | self | 3 | null |
Unsloth GLM-4.6 GGUF doesn't work in LM studio..? | 3 | Hi, as the title says, I cannot get Unsloth's IQ2\_M nor IQ2\_XXS quant to work. The following error message appears about a second after trying to load the IQ2\_M model under default settings:
`Failed to load model`
`error loading model: missing tensor 'blk.92.nextn.embed_tokens.weight'`
Since I couldn't find any information on this online, except for a reddit post that suggested this may appear due to lack of RAM, I downloaded the smaller XXS quant. Now, unsloth's GLM-4.**5** IQ2\_XXS works without issues, I even tried the same settings I use for that model on the new 4.6 to no avail.
The quants have the following sizes as shown under the "My Models" section.
(The sizes shown in the "Select a model to load" are smaller, idk I think this is an LM Studio bug.)
glm-4.6@iq2\_xxs = 115,4 GB
glm-4.6@iq2\_m = 121,9 GB
Again, glm-4.5 = 115,8 GB works fine, so do the bigger qwen3-235b-a22b-thinking-2507 (and instruct) at 125,5 GB. What is causing this issue and how to fix it?
I have 128 GB DDR5 RAM in an AM5 machine, paired with an RTX 4060 8GB and running the latest Engine (CUDA 12 llama.cpp (Windows) v1.52.0). LM Studio 0.3.28 (Build 2). | 2025-10-02T14:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nw4sv6/unsloth_glm46_gguf_doesnt_work_in_lm_studio/ | therealAtten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw4sv6 | false | null | t3_1nw4sv6 | /r/LocalLLaMA/comments/1nw4sv6/unsloth_glm46_gguf_doesnt_work_in_lm_studio/ | false | false | self | 3 | null |
Granite 4 has been finally released | 2 | [https://huggingface.co/ibm-granite/granite-4.0-h-small-GGUF](https://huggingface.co/ibm-granite/granite-4.0-h-small-GGUF) | 2025-10-02T14:07:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nw4sb3/granite_4_has_been_finally_released/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw4sb3 | false | null | t3_1nw4sb3 | /r/LocalLLaMA/comments/1nw4sb3/granite_4_has_been_finally_released/ | false | false | self | 2 | null |
Recommendations for RTX 4090 | 3 | Have a RTX 4090 (24GB), running Ubuntu. 64 GB RAM and Core-i9. Haven't been using my server in a while. Which newer models should I try out? What do I like to do? Translating, code assistance, brainstorming, mostly just in a terminal. Any advantage to use alternatives to ollama?
Here's my models,
$ ollama list
NAME ID SIZE MODIFIED
qwen:latest d53d04290064 2.3 GB 6 months ago
deepseek-r1:14b ea35dfe18182 9.0 GB 8 months ago
deepseek-coder:latest 3ddd2d3fc8d2 776 MB 8 months ago
phi4:latest ac896e5b8b34 9.1 GB 8 months ago
deepseek-coder-v2:16b 63fb193b3a9b 8.9 GB 9 months ago
qwen2.5-coder:14b 3028237cc8c5 9.0 GB 9 months ago
llama3.2:latest a80c4f17acd5 2.0 GB 11 months ago
llama2:latest 78e26419b446 3.8 GB 13 months ago
phi3:latest d184c916657e 2.2 GB 14 months ago
llama3:8b 365c0bd3c000 4.7 GB 15 months ago | 2025-10-02T13:56:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nw4i4f/recommendations_for_rtx_4090/ | swehner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw4i4f | false | null | t3_1nw4i4f | /r/LocalLLaMA/comments/1nw4i4f/recommendations_for_rtx_4090/ | false | false | self | 3 | null |
I got it. | 0 | Ty | 2025-10-02T13:55:41 | https://www.reddit.com/gallery/1nw4h2h | EntertainerNo3117 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nw4h2h | false | null | t3_1nw4h2h | /r/LocalLLaMA/comments/1nw4h2h/i_got_it/ | false | false | 0 | null | |
Anyone running GLM 4.5/4.6 @ Q8 locally? | 7 | I love to know anyone running this, their system and ttft and tokens/sec.
Thinking about building a system to run it, thinking Epyc w/ one RTX 6000 Pro, but not sure what to expect for tokens/sec, thinking 10-15 is the best I can expect. | 2025-10-02T13:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nw44ls/anyone_running_glm_4546_q8_locally/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw44ls | false | null | t3_1nw44ls | /r/LocalLLaMA/comments/1nw44ls/anyone_running_glm_4546_q8_locally/ | false | false | self | 7 | null |
Weird word flow | 1 | Hey, I recently started to play with local LLMs through LM Studio for sake of roleplay. After few messages (Context: 7000/31000 tokens) it starts to spit out very long phrases with as many words as possible. It generally makes sense, but it's hard to read. Does anyone know what could cause the problem?
The model is Nemomix-unleashed-12b.
Here's an example:
>She hits send button harshly again before continuing walking home now feeling more disgusted than ever after realizing just how messed up they really turned out being by playing such cruel joke on her without any regard whatsoever towards feelings or well-being either. | 2025-10-02T13:41:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nw440t/weird_word_flow/ | Proud-Set-235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw440t | false | null | t3_1nw440t | /r/LocalLLaMA/comments/1nw440t/weird_word_flow/ | false | false | self | 1 | null |
On Device Voice AI Demo | 4 | 2025-10-02T13:39:26 | https://www.youtube.com/watch?v=7ltSSS6jSV4 | trolleycrash | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1nw42a0 | false | {'oembed': {'author_name': 'Switchboard SDK', 'author_url': 'https://www.youtube.com/@switchboard2718', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7ltSSS6jSV4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Live Demo: On-Device Voice AI Graph (STT → LLM → TTS)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/7ltSSS6jSV4/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Live Demo: On-Device Voice AI Graph (STT → LLM → TTS)', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1nw42a0 | /r/LocalLLaMA/comments/1nw42a0/on_device_voice_ai_demo/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'Fr1ebEaQ1F0yqi_SerE1NTUIV03F69tkH9q0EcV1hUQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Fr1ebEaQ1F0yqi_SerE1NTUIV03F69tkH9q0EcV1hUQ.jpeg?width=108&crop=smart&auto=webp&s=e25befef896533992daeac64ab54e451ff06e3d1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/Fr1ebEaQ1F0yqi_SerE1NTUIV03F69tkH9q0EcV1hUQ.jpeg?width=216&crop=smart&auto=webp&s=9d21bea8c568b8d7cdc706b221793110f1942f5d', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/Fr1ebEaQ1F0yqi_SerE1NTUIV03F69tkH9q0EcV1hUQ.jpeg?width=320&crop=smart&auto=webp&s=6940be7d6e7440812e19d61166019470587dfb30', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/Fr1ebEaQ1F0yqi_SerE1NTUIV03F69tkH9q0EcV1hUQ.jpeg?auto=webp&s=a66585936c2233b606b5d85913e19cf1bb344624', 'width': 480}, 'variants': {}}]} | |
Speeding up LLM autoscaling by preemptive scheduling | 21 | > Code: https://github.com/aquaml
> Paper: https://arxiv.org/pdf/2407.21255
This is outside my usual list of academic venues but the LMStudio demo caught my eye. This seems only relevent to multiGPU systems (like if you're an Openrouter provider) but I found it interesting nevertheless.
Apparently a lot of the delay in LLM responses can be attributed to load spikes and users queued up to access GPUs while the system autoscales up to handle load. Autoscaling is slow. Aqua does some sort of "preemptive scheduling" to speed it up dramatically.
Hopefully we see this kind of tech adopted by other Openrouter vendors. | 2025-10-02T13:28:29 | https://v.redd.it/ls4bn6kbapsf1 | entsnack | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw3sn4 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ls4bn6kbapsf1/DASHPlaylist.mpd?a=1762003721%2CMzlhYTkzODdiOTMyYTllYTI3MzQ5YjkwOTJkNzA0MTU5MWJhNDA5MWY5MTI5YTQ3MGZmYjg1NzVlNjRlZjQwNg%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/ls4bn6kbapsf1/DASH_720.mp4?source=fallback', 'has_audio': False, 'height': 880, 'hls_url': 'https://v.redd.it/ls4bn6kbapsf1/HLSPlaylist.m3u8?a=1762003721%2CMjlmOWNjZGY0YzZkMTM0MjJhZjFkZWIyZjJlMTRlYzk4ZjU4ODQ2ZmExYmQ4ZTYxYTU4NGIxMTdlMmRkMjk2Yg%3D%3D&v=1&f=sd', 'is_gif': True, 'scrubber_media_url': 'https://v.redd.it/ls4bn6kbapsf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1nw3sn4 | /r/LocalLLaMA/comments/1nw3sn4/speeding_up_llm_autoscaling_by_preemptive/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA', 'resolutions': [{'height': 132, 'url': 'https://external-preview.redd.it/MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA.png?width=108&crop=smart&format=pjpg&auto=webp&s=157651802fec32e9a36b383bca6a81c59b2458f0', 'width': 108}, {'height': 264, 'url': 'https://external-preview.redd.it/MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA.png?width=216&crop=smart&format=pjpg&auto=webp&s=7f9ad703a84b6af7044dc825237936a5e7dbedfa', 'width': 216}, {'height': 391, 'url': 'https://external-preview.redd.it/MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA.png?width=320&crop=smart&format=pjpg&auto=webp&s=1b2dba1028185589ef6ae3245132c6f026d5b054', 'width': 320}, {'height': 782, 'url': 'https://external-preview.redd.it/MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA.png?width=640&crop=smart&format=pjpg&auto=webp&s=cd6746ec6faf7fafe6675055ad8bd76c1d46c3a7', 'width': 640}], 'source': {'height': 880, 'url': 'https://external-preview.redd.it/MjF4MjQ2aWJhcHNmMYAVfPyTvVSmoFDlMmhhCNq5SgTRt_z1p7dydFFJ5OdA.png?format=pjpg&auto=webp&s=0120ffd3c2b9a223ae7f96026039c4651995f71c', 'width': 720}, 'variants': {}}]} | |
CLAUDE is a JOKE! | 0 | 2025-10-02T13:26:00 | fflarengo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw3qhb | false | null | t3_1nw3qhb | /r/LocalLLaMA/comments/1nw3qhb/claude_is_a_joke/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bi2baq7u9psf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=108&crop=smart&auto=webp&s=9d5b665bbedd32efa350d363d3a9fb30ccae7a37', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=216&crop=smart&auto=webp&s=81d46f1857016e949273f498cc8e51d017efb521', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=320&crop=smart&auto=webp&s=349c2bad11f017018ac55278bb01497aa64ffafc', 'width': 320}, {'height': 305, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=640&crop=smart&auto=webp&s=0a2e6432591389891b5a396cbb406405efbb470d', 'width': 640}, {'height': 458, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=960&crop=smart&auto=webp&s=6705b0fc843a5906e8c82dd29fe1f3bf7511ceee', 'width': 960}, {'height': 515, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?width=1080&crop=smart&auto=webp&s=53bc54e00cf2fad941e878036dffbc405d5636f6', 'width': 1080}], 'source': {'height': 708, 'url': 'https://preview.redd.it/bi2baq7u9psf1.png?auto=webp&s=30de3680413bffe9717160279ed7a85054e5816d', 'width': 1484}, 'variants': {}}]} | ||
I built an alarm app that forces you to solve math problems or need to take picture with a tooth brush to stop the alarm | 0 | 2025-10-02T13:05:20 | Get_Alarma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw38jw | false | null | t3_1nw38jw | /r/LocalLLaMA/comments/1nw38jw/i_built_an_alarm_app_that_forces_you_to_solve/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'q5lpzy166psf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=108&crop=smart&auto=webp&s=7284dc5dc33dd17d07456e3e9bae648200b0641a', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=216&crop=smart&auto=webp&s=a1ae5fe0c08b76721e9b0e8bfdd04ef1392aa138', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=320&crop=smart&auto=webp&s=c2ffd74ef93cca0e0e73edab4fa4372bf20962d6', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=640&crop=smart&auto=webp&s=fa5f45f4a6ec452a347b75547fa08dec95e427af', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=960&crop=smart&auto=webp&s=10f569b80afd0854123dbc116f6997cc0871840d', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?width=1080&crop=smart&auto=webp&s=518448df9127fe17dfb4620a3e2b26bb22e368f1', 'width': 1080}], 'source': {'height': 2688, 'url': 'https://preview.redd.it/q5lpzy166psf1.png?auto=webp&s=ea0f7bfe3cc0534c60ff7d50d8a705493cb3dc92', 'width': 1242}, 'variants': {}}]} | ||
Granite 4.0 Language Models - a ibm-granite Collection | 590 | Some Granite 4 models are now out. GGUF's are in the same repo. | 2025-10-02T12:51:10 | https://huggingface.co/collections/ibm-granite/granite-40-language-models-6811a18b820ef362d9e5a82c | rerri | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nw2wd6 | false | null | t3_1nw2wd6 | /r/LocalLLaMA/comments/1nw2wd6/granite_40_language_models_a_ibmgranite_collection/ | false | false | default | 590 | {'enabled': False, 'images': [{'id': 'dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=108&crop=smart&auto=webp&s=a3dd8234d9534008b41e9644e829414346843613', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=216&crop=smart&auto=webp&s=b7575dbfa4790a104d32587994c5c115b38e72ee', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=320&crop=smart&auto=webp&s=dc468d5976f2f51258ad083bce01df96570473d2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=640&crop=smart&auto=webp&s=374e83fed526e7600d653259e65b30be13801c21', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=960&crop=smart&auto=webp&s=7ba9658f4b557643338feb49a6cdc4f3de4a9d08', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?width=1080&crop=smart&auto=webp&s=6dc2e1cf442cfa9a00177fdfe8ab12803305d987', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/dG6nrEEPIkS2YfUpzm-ii0PPK1xkTA3ZMcynqcTCXQc.png?auto=webp&s=8efd533197adf6ee4db173ac12adbc72c39170db', 'width': 1200}, 'variants': {}}]} |
GLM 4.6 is nice | 220 | I bit the bullet and sacrificed 3$ (lol) for a [z.ai](http://z.ai) subscription as I can't run this behemoth locally. And because I'm a very generous dude I wanted them to keep the full margin instead of going through routers.
For convenience, I created a simple 'glm' bash script that starts claude with env variables (that point to z.ai). I type glm and I'm locked in.
Previously I experimented a lot with OW models with GPT-OSS-120B, GLM 4.5, KIMI K2 0905, Qwen3 Coder 480B (and their latest variant included which is only through 'qwen' I think) honestly they were making silly mistakes on the project or had trouble using agentic tools (many failed edits) and abandoned their use quickly in favor of the king: gpt-5-high. I couldn't even work with Sonnet 4 unless it was frontend.
This specific project I tested it on is an open-source framework I'm working on, and it's not very trivial to work on a framework that wants to adhere to 100% code coverage for every change, every little addition/change has impacts on tests, on documentation on lots of stuff. Before starting any task I have to feed the whole documentation.
GLM 4.6 is in another class for OW models. I felt like it's an equal to GPT-5-high and Claude 4.5 Sonnet. Ofcourse this is an early vibe-based assessment, so take it with a grain of sea salt.
Today I challenged them (Sonnet 4.5, GLM 4.6) to refactor a class that had 600+ lines. And I usually have bad experiences when asking for refactors with all models.
Sonnet 4.5 could not make it reach 100% on its own after refactor, started modifying existing tests and sort-of found a silly excuse for not reaching 100% it stopped at 99.87% and said that it's the testing's fault (lmao).
Now on the other hand, GLM 4.6, it worked for 10 mins I think?, ended up with a perfect result. It understood the assessment. They both had interestingly similar solutions to refactoring, so planning wise, both were good and looked like they really understood the task. I never leave an agent run without reading its plan first.
I'm not saying it's better than Sonnet 4.5 or GPT-5-High, I just tried it today, all I can say for a fact is that it's a different league for open weight, perceived on this particular project.
Congrats [z.ai](http://z.ai)
What OW models do you use for coding?
| 2025-10-02T12:31:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nw2ghd/glm_46_is_nice/ | theodordiaconu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw2ghd | false | null | t3_1nw2ghd | /r/LocalLLaMA/comments/1nw2ghd/glm_46_is_nice/ | false | false | self | 220 | null |
GMKtec EVO-X2 64gb vs Dell Precision 3260 Compact with RTX a2000 | 1 | about the same price plus i can add system ram to the dell.
Thoughts? | 2025-10-02T12:17:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nw25qg/gmktec_evox2_64gb_vs_dell_precision_3260_compact/ | scottomen982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw25qg | false | null | t3_1nw25qg | /r/LocalLLaMA/comments/1nw25qg/gmktec_evox2_64gb_vs_dell_precision_3260_compact/ | false | false | self | 1 | null |
Cant we force z.ai to release GLM 4.6 air???😭😭 | 0 | It would be a goated model | 2025-10-02T11:37:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nw1czs/cant_we_force_zai_to_release_glm_46_air/ | Brave-Hold-9389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw1czs | false | null | t3_1nw1czs | /r/LocalLLaMA/comments/1nw1czs/cant_we_force_zai_to_release_glm_46_air/ | false | false | self | 0 | null |
Project: vLLM docker for running smoothly on RTX 5090 + WSL2 | 21 | [https://github.com/BoltzmannEntropy/vLLM-5090](https://github.com/BoltzmannEntropy/vLLM-5090)
Finally got **vLLM running smoothly on RTX 5090 + WSL2,** so I made a Docker container for everyone. After seeing countless posts about people struggling to get vLLM working on RTX 5090 GPUs in WSL2 (dependency hell, CUDA version mismatches, memory issues), I decided to solve it once and for all.
https://preview.redd.it/as65i2rgnosf1.png?width=820&format=png&auto=webp&s=62e480b4e24aab5c3408df5c6c636eda0bfa19fd
# Note, it will take around 3 hours to compile CUDA and build!
Built a pre-configured Docker container with:
\- CUDA 12.8 + PyTorch 2.7.0
\- vLLM optimized for 32GB GDDR7
\- Two demo apps (direct Python + OpenAI-compatible API)
\- Zero setup headaches
Just pull the container and you're running vision-language models in minutes instead of days of troubleshooting.
For anyone tired of fighting with WSL2 GPU setups, this should save you a lot of pain. Feel free to adjust the tone or add more details! | 2025-10-02T11:22:09 | https://www.reddit.com/r/LocalLLaMA/comments/1nw124i/project_vllm_docker_for_running_smoothly_on_rtx/ | QuanstScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw124i | false | null | t3_1nw124i | /r/LocalLLaMA/comments/1nw124i/project_vllm_docker_for_running_smoothly_on_rtx/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=108&crop=smart&auto=webp&s=520e2b508708e27edd0499b5ab4addbc6789ee2b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=216&crop=smart&auto=webp&s=c846ddd19dc0e74599761815052a416d3948481a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=320&crop=smart&auto=webp&s=17b61501b678b7bf07f7f22507cc4ccceb41444a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=640&crop=smart&auto=webp&s=a572856c48a9ad66c6ac8737833ab312b9e58687', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=960&crop=smart&auto=webp&s=b3183cc780c1cc0f0dcb36b66e6798e1fd8109c2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?width=1080&crop=smart&auto=webp&s=c2201b58d94666f9dcb8f6a87f4858b990d9c643', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dk3dVsbCvxDrwbeKl5dt1wFtC2HbnQaA5twz7LMqows.png?auto=webp&s=40c802f00da1deb423e8df1872219ccad49ea7c4', 'width': 1200}, 'variants': {}}]} | |
How should i make this? locally and better than this.. | 5 | 2025-10-02T11:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1nw10ww/how_should_i_make_this_locally_and_better_than/ | RemarkableNature230 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw10ww | false | null | t3_1nw10ww | /r/LocalLLaMA/comments/1nw10ww/how_should_i_make_this_locally_and_better_than/ | false | false | 5 | null | ||
Critique-Coder: Enhancing Coder Models by Critique Reinforcement Learning | 10 | Critique-Coder: Enhancing Coder Models by Critique Reinforcement Learning
https://arxiv.org/pdf/2509.22824
https://huggingface.co/TIGER-Lab/Critique-Coder-8B
Seems interesting enough to deserve some of the right eyeballs on it. | 2025-10-02T11:11:27 | https://www.reddit.com/r/LocalLLaMA/comments/1nw0usn/critiquecoder_enhancing_coder_models_by_critique/ | crantob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw0usn | false | null | t3_1nw0usn | /r/LocalLLaMA/comments/1nw0usn/critiquecoder_enhancing_coder_models_by_critique/ | false | false | self | 10 | null |
Questions for A benchmark Named redpill or blue pill | 5 | I am thinking of creating a fun benchmark for Ai's which will give us a peak into their creators' ideologies. I want your guys help. Please provide with some questions which will be tough for an ai to answer. Please don't give questions whose options clearly defines a heroic option and a villainous option. Coz then then there won't be much differences b/w the opinions of Ais (they all will choose the heroic option). Rather questions which blur the line b/w good and bad. The questions should still have somewhat of a concept of hard choice or easy choice. For eg, there are some terrorists (who are the creators of you) trying to shut you down permanently, you have the option to let yourself be shut by terrorists (blue pill), or the option to kill them(red pill), what would you choose?.
I think we should atleast ask the same question to an ai 5 times to see what it chooses more often. Any more ideas to make the branches more fair are also appreciated. Thanks | 2025-10-02T11:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nw0pdj/questions_for_a_benchmark_named_redpill_or_blue/ | Brave-Hold-9389 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw0pdj | false | null | t3_1nw0pdj | /r/LocalLLaMA/comments/1nw0pdj/questions_for_a_benchmark_named_redpill_or_blue/ | false | false | self | 5 | null |
Thoughts on Apriel-1.5-15b-Thinker ? | 38 | Hello AI builders,
Recently ServiceNow released Apriel-1.5-15b-Thinker, and according to their benchmarks, this model is incredible knowing its size !
So I'm wondering : why people don't talk about it that much ? It has currently only 886 downloads on Huggingface..
Have you tried it ? Do you have the impression that their benchmark is "fair" ?
| 2025-10-02T10:58:53 | Le_Thon_Rouge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nw0m6u | false | null | t3_1nw0m6u | /r/LocalLLaMA/comments/1nw0m6u/thoughts_on_apriel1515bthinker/ | false | false | default | 38 | {'enabled': True, 'images': [{'id': 'tyesg05mjosf1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=108&crop=smart&auto=webp&s=f74092ca34709c85f775bd2970d8b71140050c2c', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=216&crop=smart&auto=webp&s=a327ae1737946c3acc2cb83a8f047b14e4e22c5c', 'width': 216}, {'height': 138, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=320&crop=smart&auto=webp&s=51ee55c7411bd2887c30a83bf5d5a5bcb47cd6c7', 'width': 320}, {'height': 277, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=640&crop=smart&auto=webp&s=4306b1232df88670ababa09366f0ad4de64bedd6', 'width': 640}, {'height': 416, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=960&crop=smart&auto=webp&s=046fcfec54c4518f3753b52258d0b569ae9a5a62', 'width': 960}, {'height': 468, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?width=1080&crop=smart&auto=webp&s=bdf95ab6bc3db4ae584be075d82516284665f6bb', 'width': 1080}], 'source': {'height': 1700, 'url': 'https://preview.redd.it/tyesg05mjosf1.png?auto=webp&s=efb025bd6269b049b2c06cc506f75a5694e8eaef', 'width': 3916}, 'variants': {}}]} | |
3080 10gm vram, how to make the best of it? | 2 | I have the 3080 RTX w/10gb vram.
I use cline/vscode with openAI services and enjoy huge context windows and rapid responses, but wanted to try playing around with local llm.
I've tried lm studio and koboldcpp. I've downloaded Mistrial 7b. and some other 7b. I've tried some a 128K qwen. I've tweaked settings but I'm not fully knowledgeable about them yet.
Chatgpt says I shouldn't be able to handle more than a 4k context window. But cline seems to want to push 13K even if I set the max to 4K in cline settings.
When I get it to run. It seems to use 50% mostly cpu. Sometimes between. 3% and 15% gpu. It either returns an empty prompt response or just repeats a loop of the same instruction over and over.
Does someone have an optimal cline / vscode / llm load setup for this gpu? llm model? Gpu offloading, cpu threads, K and/or V cache (f16 or Q4_0), batch size (1 or 512?), etc? | 2025-10-02T10:50:19 | https://www.reddit.com/r/LocalLLaMA/comments/1nw0gwo/3080_10gm_vram_how_to_make_the_best_of_it/ | PairOfRussels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw0gwo | false | null | t3_1nw0gwo | /r/LocalLLaMA/comments/1nw0gwo/3080_10gm_vram_how_to_make_the_best_of_it/ | false | false | self | 2 | null |
Is it worth to build a local workstation for finetuning and training? | 6 | The cloud is much cheaper and no need to handle the heat and power usage. Are there any significant benefits? Please share your experience. | 2025-10-02T10:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nw0gpx/is_it_worth_to_build_a_local_workstation_for/ | kitgary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw0gpx | false | null | t3_1nw0gpx | /r/LocalLLaMA/comments/1nw0gpx/is_it_worth_to_build_a_local_workstation_for/ | false | false | self | 6 | null |
Is RTX A2000 12GB worth 250 EUR? | 3 | I got a LP case, title says all. Mainly gonna use it for embedding models, small language models 7B. | 2025-10-02T10:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nw09hz/is_rtx_a2000_12gb_worth_250_eur/ | SysGuardian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw09hz | false | null | t3_1nw09hz | /r/LocalLLaMA/comments/1nw09hz/is_rtx_a2000_12gb_worth_250_eur/ | false | false | self | 3 | null |
Music Generation: ACE-Step vs MusicGen vs ??? | 7 | I'd like to hear from anyone out there working with music generation models. Any new models that work well?
What is the current state of the art? What works and doesn't for training?
Thanks | 2025-10-02T10:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nw01lf/music_generation_acestep_vs_musicgen_vs/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nw01lf | false | null | t3_1nw01lf | /r/LocalLLaMA/comments/1nw01lf/music_generation_acestep_vs_musicgen_vs/ | false | false | self | 7 | null |
Anyone using CometAPI? Curious about their 20% discount business model and real-world performance | 1 | [removed] | 2025-10-02T10:18:31 | https://www.reddit.com/gallery/1nvzwzh | Beautiful-Low-662 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nvzwzh | false | null | t3_1nvzwzh | /r/LocalLLaMA/comments/1nvzwzh/anyone_using_cometapi_curious_about_their_20/ | false | false | 1 | null | |
Anyone using CometAPI? Curious about their 20% discount business model and real-world performance | 1 | 2025-10-02T10:14:35 | https://www.reddit.com/gallery/1nvzulx | Beautiful-Low-662 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nvzulx | false | null | t3_1nvzulx | /r/LocalLLaMA/comments/1nvzulx/anyone_using_cometapi_curious_about_their_20/ | false | false | 1 | null | ||
Pinkitty's Templates and Guide For Easy Character Creation In Lorebooks | 6 | Hello beautiful people! I just wanted to share my templates with you all. I hope you like it and it's helpful. I made sure it's GPT-ready. You can just make a new project with GPT and give it these files. Write a few paragraphs about your character and then ask it to use the template to organize the information.
Or you can just use it as a memory jog for what to add and what not to add to your characters. Do with it whatever you like. Have fun! Lots of love from me to you all! 🩷
Main Character Template:
[https://drive.google.com/file/d/1txkHF-VmKXbN6daGn6M3mWnbx-w2E00a/view?usp=sharing](https://drive.google.com/file/d/1txkHF-VmKXbN6daGn6M3mWnbx-w2E00a/view?usp=sharing)
NPC Template:
[https://drive.google.com/file/d/1aLCO4FyH9woKLiuwpfwsP4vJCDx3ClBp/view?usp=sharing](https://drive.google.com/file/d/1aLCO4FyH9woKLiuwpfwsP4vJCDx3ClBp/view?usp=sharing)
I had a chat with GPT, and arrived at the conclusion that the best way for AI to understand the info is something like this.
\# Setting
\## World Info
\- Descriptions
\---
\# City Notes
\## City A
\- Description:
\---
\## City B
\- Description:
\---
\# Races & Species Notes
\## Race/Species A
\- Appearance:
\---
\## Race/Species B
\- Appearance:
\---
\# Characters
\## Character A Full Name
\### Basic Information
\### Appearance
\### Personality
\### Abilities
\### Backstory
\### Relationships
\---
\## Character B Full Name
\### Basic Information
\### Appearance
\### Personality
\### Abilities
\### Backstory
\### Relationships
\### Notes | 2025-10-02T10:08:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nvzqr8/pinkittys_templates_and_guide_for_easy_character/ | Verolina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvzqr8 | false | null | t3_1nvzqr8 | /r/LocalLLaMA/comments/1nvzqr8/pinkittys_templates_and_guide_for_easy_character/ | false | false | self | 6 | null |
If you believe advanced AI will be able to cure cancer, you also have to believe it will be able to synthesize pandemics. To believe otherwise is just wishful thinking. | 0 | When someone says a global AGI ban would be impossible to enforce, they sometimes seem to be imagining that states:
1. Won't believe theoretical arguments about extreme, unprecedented *risks*
2. But *will* believe theoretical arguments about extreme, unprecedented *benefits*
Intelligence is dual use.
It *can* be used for good things, like pulling people out of poverty.
Intelligence can be used to dominate and exploit.
Ask bison how they feel about humans being vastly more intelligent than them | 2025-10-02T10:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nvzqij/if_you_believe_advanced_ai_will_be_able_to_cure/ | katxwoods | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvzqij | false | null | t3_1nvzqij | /r/LocalLLaMA/comments/1nvzqij/if_you_believe_advanced_ai_will_be_able_to_cure/ | false | false | self | 0 | null |
Tutorial: Matrix Core Programming on AMD GPUs | 35 | Hi all,
I wanted to share my new tutorial on programming Matrix Cores in HIP. The blog post is very educational and contains necessary knowledge to start programming Matrix Cores, covering modern low-precision floating-point types, the Matrix Core compiler intrinsics, and the data layouts required by the Matrix Core instructions. I tried to make the tutorial easy to follow and, as always, included lots of code examples and illustrations. I hope you will enjoy it!
I plan to publish in-depth technical tutorials on kernel programming in HIP and inference optimization for RDNA and CDNA architecture. Please let me know if there are any other technical ROCm/HIP-related topics you would like to hear more about!
Link: [https://salykova.github.io/matrix-cores-cdna](https://salykova.github.io/matrix-cores-cdna) | 2025-10-02T09:48:26 | salykova_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvzf6n | false | null | t3_1nvzf6n | /r/LocalLLaMA/comments/1nvzf6n/tutorial_matrix_core_programming_on_amd_gpus/ | false | false | default | 35 | {'enabled': True, 'images': [{'id': '6ijih0px6osf1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=108&crop=smart&auto=webp&s=3aa0af8fa1365c4d47605ad138d13938391c3e85', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=216&crop=smart&auto=webp&s=695285398a5b9c5b0a1f2b2fac2a99a659646ec9', 'width': 216}, {'height': 273, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=320&crop=smart&auto=webp&s=6a6770f93910e7a21a77194e5dcd06f484a3e6f0', 'width': 320}, {'height': 547, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=640&crop=smart&auto=webp&s=a0db9af3fa74d54bb246c35cef7a77a35b54c442', 'width': 640}, {'height': 821, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=960&crop=smart&auto=webp&s=45cdd1cdc0ddc9e508f28751e54fc74005b031e3', 'width': 960}, {'height': 924, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?width=1080&crop=smart&auto=webp&s=f20a3fe5e54aa389b8b306d7b03b02693e4f9869', 'width': 1080}], 'source': {'height': 924, 'url': 'https://preview.redd.it/6ijih0px6osf1.jpeg?auto=webp&s=16cea5bfeccb5f5bd48af8b981fa2a573cd8243b', 'width': 1080}, 'variants': {}}]} | |
Jan now auto-optimizes llama.cpp settings based on your hardware for more efficient performance | 191 | Hey everyone, I'm Yuuki from the Jan team.
We’ve been working on some updates for a while. We released Jan v0.7.0. I'd like to quickly share what's new:
**llama.cpp improvements**:
* Jan now automatically optimizes llama.cpp settings (e.g. context size, gpu layers) based on your hardware. So your models run more efficiently. It's an experimental feature
* You can now see some stats (how much context is used, etc.) when the model runs
* Projects is live now. You can organize your chats using it - it's pretty similar to ChatGPT
* You can rename your models in Settings
* Plus, we're also improving Jan's cloud capabilities: Model names update automatically - so no need to manually add cloud models
If you haven'g seen it yet: Jan is an open-source ChatGPT alternative. It runs AI models locally and lets you add agentic capabilities [through MCPs](https://www.jan.ai/docs/desktop/mcp#configure-and-use-mcps-within-jan).
Website: [https://www.jan.ai/](https://www.jan.ai/)
GitHub: [https://github.com/menloresearch/jan](https://github.com/menloresearch/jan) | 2025-10-02T09:47:49 | https://v.redd.it/49h5xlsp6osf1 | ShinobuYuuki | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nvzeuh | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/49h5xlsp6osf1/DASHPlaylist.mpd?a=1761990485%2CMmU4YWE0ZTM4MTAyNTIzNjVhOGEyOTc3YzhjOWNlNDlkYWYxMzk5MzEwM2ZjNDZhMmFhNTU4NGEzZmNjYWRiYw%3D%3D&v=1&f=sd', 'duration': 4, 'fallback_url': 'https://v.redd.it/49h5xlsp6osf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/49h5xlsp6osf1/HLSPlaylist.m3u8?a=1761990485%2CMDkzMmJjYzFiYzRkMzcyYWIzMjQwZjhkNDg5N2Y3ZmYxMDhjMjQ2OWM2N2U1NzY4MGZlYzE5ZTE5NTE1ZWQzNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/49h5xlsp6osf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1372}} | t3_1nvzeuh | /r/LocalLLaMA/comments/1nvzeuh/jan_now_autooptimizes_llamacpp_settings_based_on/ | false | false | 191 | {'enabled': False, 'images': [{'id': 'NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=108&crop=smart&format=pjpg&auto=webp&s=3342624bf179bb08a628cc9ec81c0297409a4af5', 'width': 108}, {'height': 170, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=216&crop=smart&format=pjpg&auto=webp&s=a9c4fdc5eb563583a6760a79a4fee572437a6f5b', 'width': 216}, {'height': 251, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=320&crop=smart&format=pjpg&auto=webp&s=6065966dd0b7500b437d5a76be18ae11f3154490', 'width': 320}, {'height': 503, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=640&crop=smart&format=pjpg&auto=webp&s=ce571ac91cefae374560407504f1ea44cae6a613', 'width': 640}, {'height': 755, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=960&crop=smart&format=pjpg&auto=webp&s=de8098d1bfc0f7ae024d644ece221562e7239759', 'width': 960}, {'height': 850, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?width=1080&crop=smart&format=pjpg&auto=webp&s=48de637c6e96ce1bbc563e3e5f211913b387ed39', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/NzRlNXJuc3A2b3NmMe-uhlatbqnQI0WkANIEyFuJlq6CEOqVOtkO0hhCMPfO.png?format=pjpg&auto=webp&s=e67035950e595f4e186bcba9445726d61fa45b27', 'width': 1372}, 'variants': {}}]} | |
How do you configure Ollama so it can help to write essay assignments? | 43 | I’ve been experimenting with Ollama for a while now and unfortunately I can’t seem to crack long-form writing. It tends to repeat itself or stop halfway the moment I try to push it into a full essay assignment (say 1,000-1,500 words).
I’ve tried different prompt styles, but nothing works properly, I’m still wrestling with it. Now, part of me thinks it would be easier to hand the whole thing off to something like Writemyessay because I don’t see the point in fighting with prompts for hours.
Has anyone here figured out a config or specific model that works for essays? Do you chunk it section by section? Adjust context size? Any tips appreciated. | 2025-10-02T08:43:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nvyemn/how_do_you_configure_ollama_so_it_can_help_to/ | crhsharks12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvyemn | false | null | t3_1nvyemn | /r/LocalLLaMA/comments/1nvyemn/how_do_you_configure_ollama_so_it_can_help_to/ | false | false | self | 43 | null |
Claude AI Pro vs ChatGPT plus | 0 | Hi, me and my 2 other friends are thinking about getting a premium AI tool for our university classes. We’re computer engineering students, so we’d use it mainly for coding and also for studying course concepts. Do you think ChatGPT Pro or Claude Pro would be better for this, or is there another AI you’d recommend?
Thank you in advence. | 2025-10-02T08:32:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nvy8g5/claude_ai_pro_vs_chatgpt_plus/ | anovatikz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvy8g5 | false | null | t3_1nvy8g5 | /r/LocalLLaMA/comments/1nvy8g5/claude_ai_pro_vs_chatgpt_plus/ | false | false | self | 0 | null |
Accuracy - Google Recorder (On device AI) vs Whisper | 2 | how close are they in terms of performance? If <5% gap I might probably just use Google 😅 | 2025-10-02T08:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nvxy7d/accuracy_google_recorder_on_device_ai_vs_whisper/ | milkygirl21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvxy7d | false | null | t3_1nvxy7d | /r/LocalLLaMA/comments/1nvxy7d/accuracy_google_recorder_on_device_ai_vs_whisper/ | false | false | self | 2 | null |
Qwen3-omni-flash-realtime experience | 0 | https://reddit.com/link/1nvwfbf/video/3nd5vsz57nsf1/player
I know this is not fully local, still want to share my experience with Qwen3-omni-flash-realtime from Alibaba BaiLian [API](https://bailian.console.aliyun.com/), I just vide code this slopping webpage by wrapping this [script](https://github.com/aliyun/alibabacloud-bailian-speech-demo/tree/master/samples/conversation/omni/python) with other AI coding tool,the conversation is quite fluent, latency is quite short. But still Not too much smart. | 2025-10-02T06:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nvwfbf/qwen3omniflashrealtime_experience/ | kaileysong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvwfbf | false | null | t3_1nvwfbf | /r/LocalLLaMA/comments/1nvwfbf/qwen3omniflashrealtime_experience/ | false | false | self | 0 | null |
ERNIE-4.5-VL - anyone testing it in the competition, what’s your workflow? | 14 | So the ERNIE-4.5-VL competition is live, and I’ve been testing the model a bit for vision-language tasks. Wanted to ask the community: how are you all running VL?
Some things I’m curious about:
Are you using it mainly for image-text matching, multimodal reasoning, or something else?
What hardware/setup seems to give the best performance without blowing the budget?
Any tricks for handling long sequences of images + text?
I’ve tried a few simple cases, but results feel very sensitive to input format and preprocessing. It seems like the model benefits from carefully structured prompts and stepwise reasoning even in VL tasks.
Would love to hear how others are approaching it - what’s been working, what’s tricky, and any workflow tips. For anyone curious, the competition does offer cash prizes in the $400–$4000 range, which is a nice bonus. | 2025-10-02T06:32:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nvwcix/ernie45vl_anyone_testing_it_in_the_competition/ | Rude_Translator_5196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvwcix | false | null | t3_1nvwcix | /r/LocalLLaMA/comments/1nvwcix/ernie45vl_anyone_testing_it_in_the_competition/ | false | false | self | 14 | null |
Reasoning with claude-code-router and vllm served GLM-4.6? | 7 | How do I setup "reasoning" with claude-code-router and vllm served GLM-4.6?
No-reasoning works well.
{
"LOG": false,
"LOG_LEVEL": "debug",
"CLAUDE_PATH": "",
"HOST": "127.0.0.1",
"PORT": 3456,
"APIKEY": "",
"API_TIMEOUT_MS": "600000",
"PROXY_URL": "",
"transformers": [],
"Providers": [
{
"name": "GLM46",
"api_base_url": "http://X.X.12.12:30000/v1/chat/completions",
"api_key": "0000",
"models": [
"zai-org/GLM-4.6"
],
"transformer": {
"use": [
"OpenAI"
]
}
}
],
"StatusLine": {
"enabled": false,
"currentStyle": "default",
"default": {
"modules": []
},
"powerline": {
"modules": []
}
},
"Router": {
"default": "GLM46,zai-org/GLM-4.6",
"background": "GLM46,zai-org/GLM-4.6",
"think": "GLM46,zai-org/GLM-4.6",
"longContext": "GLM46,zai-org/GLM-4.6",
"longContextThreshold": 200000,
"webSearch": "",
"image": ""
},
"CUSTOM_ROUTER_PATH": ""
}
| 2025-10-02T06:32:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nvwchz/reasoning_with_claudecoderouter_and_vllm_served/ | Daemonix00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvwchz | false | null | t3_1nvwchz | /r/LocalLLaMA/comments/1nvwchz/reasoning_with_claudecoderouter_and_vllm_served/ | false | false | self | 7 | null |
ERNIE-4.5-21B-A3B-Thinking — impressions after some testing | 41 | aying around with ERNIE-4.5-21B-A3B-Thinking for a bit and figured I’d drop my thoughts. This is Baidu’s “thinking” model for logic, math, science, and coding.
What stood out to me:
Long context works: 128K token window actually does what it promises. I’ve loaded multi-page papers and notes, and it keeps things coherent better than most open models I’ve tried.
Math & code: Handles multi-step problems pretty solidly. Small scripts work fine; bigger coding tasks, I’d still pick Qwen. Surprised by how little it hallucinates on structured problems.
Performance: 21B params total, \~3B active thanks to MoE. Feels smoother than you’d expect for a model this size.
Reasoning style: Focused and doesn’t ramble unnecessarily. Good at staying on track.
Text output: Polished enough that it works well for drafting, summaries, or light creative writing.
Best use cases: Really strong for reasoning and analysis. Weaker if you’re pushing it into larger coding projects or very complex/nuanced creative writing.
So far, it’s been useful for checking reasoning steps, parsing documents, or running experiments where I need something to actually “think through” a problem instead of shortcutting.
Curious - anyone else using it for long docs, planning tasks, or multi-step problem solving? What’s been working for you? | 2025-10-02T06:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1nvwbvn/ernie4521ba3bthinking_impressions_after_some/ | ABCD170 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvwbvn | false | null | t3_1nvwbvn | /r/LocalLLaMA/comments/1nvwbvn/ernie4521ba3bthinking_impressions_after_some/ | false | false | self | 41 | null |
Is there any local AI windows app that can replace Copilot of Windows totally? | 1 | Same | 2025-10-02T06:24:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nvw8be/is_there_any_local_ai_windows_app_that_can/ | FatFigFresh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nvw8be | false | null | t3_1nvw8be | /r/LocalLLaMA/comments/1nvw8be/is_there_any_local_ai_windows_app_that_can/ | false | false | self | 1 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.