| column | dtype | min | max |
| --- | --- | --- | --- |
| title | string (lengths) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (lengths) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (lengths) | 0 | 878 |
| author | string (lengths) | 3 | 20 |
| domain | string (lengths) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | | |
| id | string (lengths) | 7 | 7 |
| locked | bool (2 classes) | | |
| media | string (lengths) | 646 | 1.8k |
| name | string (lengths) | 10 | 10 |
| permalink | string (lengths) | 33 | 82 |
| spoiler | bool (2 classes) | | |
| stickied | bool (2 classes) | | |
| thumbnail | string (lengths) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (lengths) | 301 | 5.01k |
LM Studio staff-picked models feel forced (image and comments for details)
0
2025-09-27T05:38:43
https://i.redd.it/g5fit2dm9nrf1.png
Revolutionalredstone
i.redd.it
1970-01-01T00:00:00
0
{}
1nrn9nr
false
null
t3_1nrn9nr
/r/LocalLLaMA/comments/1nrn9nr/lmstudio_staff_picked_models_feel_forced_image/
false
false
default
0
{'enabled': True, 'images': [{'id': 'g5fit2dm9nrf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=108&crop=smart&auto=webp&s=cdadea480530354ad7db6af360372a988ce39aae', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=216&crop=smart&auto=webp&s=6c38e82b602e834d032a3a1f59d8598984aa5113', 'width': 216}, {'height': 169, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=320&crop=smart&auto=webp&s=83801e3ad8ff8680234e3e9a7fc4354a052fa01c', 'width': 320}, {'height': 339, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=640&crop=smart&auto=webp&s=1a566b2497425215b3a4afcbd73ae02f99f97a2f', 'width': 640}, {'height': 509, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=960&crop=smart&auto=webp&s=3964b7e4997e9ec1d562eb7453fb418196b7c365', 'width': 960}, {'height': 572, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?width=1080&crop=smart&auto=webp&s=2d2935d8ddb709c599aaf51e587141c7e7d0f0b0', 'width': 1080}], 'source': {'height': 996, 'url': 'https://preview.redd.it/g5fit2dm9nrf1.png?auto=webp&s=fa22685abdfdb3da43658f54906e4939e2c6923d', 'width': 1878}, 'variants': {}}]}
Is it possible to finetune Magistral 2509 on images?
9
Hi. I am unable to find any guide that shows how to fine-tune the recently released Magistral 2509. Has anyone tried it?
2025-09-27T04:32:06
https://www.reddit.com/r/LocalLLaMA/comments/1nrm4ml/is_it_possible_to_finetune_magistral_2509_on/
aadoop6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrm4ml
false
null
t3_1nrm4ml
/r/LocalLLaMA/comments/1nrm4ml/is_it_possible_to_finetune_magistral_2509_on/
false
false
self
9
null
DISCUSSION:Hey everyone 👋, I wanted to share a new tool I discovered (or built) — Examsprint AI — which aims to be a one-stop AI-powered study companion for students preparing for board exams, JEE, NEET, and general NCERT curriculum work.
1
Hey everyone 👋, I wanted to share a new tool I discovered (or built) — Examsprint AI — which aims to be a one-stop AI-powered study companion for students preparing for board exams, JEE, NEET, and general NCERT curriculum work. Here’s what Examsprint AI offers: Topper’s Notes & Chapter Summaries for Class 9–12; direct NCERT links + solutions for all chapters; formula sheets (Maths, Science); flashcards & quizzes to reinforce memory; blueprints & exam planning (for Boards, NEET, JEE); a built-in AI chat assistant for instant doubt solving; no subscriptions, no watermarks, totally free. The site is fully responsive, so you can use it on mobile too. It was built by a young developer (13 years old!) named Aadarsh Pandey.
2025-09-27T04:13:36
https://www.reddit.com/r/LocalLLaMA/comments/1nrlsyq/discussionhey_everyone_i_wanted_to_share_a_new/
Bright-Macaroon-5729
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrlsyq
false
null
t3_1nrlsyq
/r/LocalLLaMA/comments/1nrlsyq/discussionhey_everyone_i_wanted_to_share_a_new/
false
false
self
1
null
Would you use a local AI agent that handles slow-burn research tasks — like trip planning, home hunting, or niche investing — and keeps everything offline?
8
I’ve been trying to solve a personal frustration: I spend way too much time manually checking things that shouldn’t require my constant attention. For example: Trip planning: We’re going to Japan this December. I know our date window, budget, and that we need kid-friendly options—but comparing flights, hotels, and rentals across 10+ sites every few days is exhausting. Why can’t I just tell an agent my constraints and have it quietly monitor for good deals? Future home needs: Our house works now, but in a few years we’ll likely need more space. I’d love something that watches listings in our target neighbourhoods or surfaces renovation ideas that fit our layout and budget—without signing up for a dozen email alerts. I haven’t found anything that handles these kinds of long-term, personal research tasks while keeping data truly private. So I’ve been thinking about building a simple agent that: * Runs 100% on your machine (no cloud processing) * Uses a local LLM — nothing sent to OpenAI, Anthropic, etc. * Stores everything locally (e.g., in your local database) * Optionally backs up encrypted data (with your key only) Before I head into the coding cave: * Would you actually use something like this? * And if so — what’s the one task you’d want it to handle for you? No product exists yet — just a solo builder trying to figure out if this solves a real problem for others too.
2025-09-27T03:48:21
https://www.reddit.com/r/LocalLLaMA/comments/1nrlcfx/would_you_use_a_local_ai_agent_that_handles/
pegacornco
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrlcfx
false
null
t3_1nrlcfx
/r/LocalLLaMA/comments/1nrlcfx/would_you_use_a_local_ai_agent_that_handles/
false
false
self
8
null
Crazy idea: training swarm LLMs with Library of Babel hex addresses + token entanglement
2
I’ve been kicking around an experiment that’s a bit odd. - Instead of scraping the internet, use Library of Babel hex references as a universal address space. The model doesn’t need to memorize every book, just learn how to anchor knowledge to coordinates. - Run a “swarm” of open-weight models with different seeds/architectures. They learn independently, but get tiny subliminal nudges from each other (low-weight logit alignment, mid-layer rep hints). - Main trick = token entanglement: tie related tokens across languages/scripts so rare stuff doesn’t get forgotten. Two layers of “subliminal” training: 1. Surface: small nudges on tokens/logits here and there. 2. Deep: weight-space priors/regularizers so the entanglement sticks even when hints are off. Goal is models that are less brittle, more universal, and can even cite hex coordinates as evidence instead of making stuff up. Questions for this sub: - Feasible on hobbyist hardware (5090/6000 class GPUs, 7B/13B scale)? - Is procedural/synthetic data keyed to hex addresses actually useful, or just noise? - Does subliminal learning have legs, or would it collapse into teacher parroting? Not a product pitch, just a thought experiment I want to stress test. Would love to hear blunt takes from people who can see the concept: This is about finding another way to train models that isn’t “just scrape the internet and hope.” By using a universal reference system (the hex addresses) and tiny subliminal cross-model hints, the goal is to build AIs that are less fragile, less biased, and better at connecting across languages and symbols. And, by design, can cite exact references, that anyone can check. Instead of one giant parrot, you end up with a community of learners that share structure but keep their diversity.
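For the "surface" nudges described above, one way to picture the low-weight logit alignment is as a small KL term added to each swarm member's usual language-modeling loss. This is only an illustrative PyTorch sketch of that idea; the swarm setup, weighting, and scheduling are assumptions, not an existing implementation:

```python
import torch
import torch.nn.functional as F

def swarm_step(student_logits, peer_logits, targets, align_weight=0.01, temperature=2.0):
    """One training step's loss: normal next-token cross-entropy plus a tiny
    'subliminal' KL nudge toward a peer model's logits.

    student_logits, peer_logits: (batch, seq, vocab) tensors
    targets: (batch, seq) token ids
    align_weight: deliberately small so the peer only nudges, never dominates
    """
    vocab = student_logits.size(-1)
    # Standard language-modeling loss for this swarm member
    ce = F.cross_entropy(student_logits.reshape(-1, vocab), targets.reshape(-1))
    # Low-weight logit alignment: soft KL toward the (detached) peer distribution
    t = temperature
    kl = F.kl_div(
        F.log_softmax(student_logits.reshape(-1, vocab) / t, dim=-1),
        F.softmax(peer_logits.detach().reshape(-1, vocab) / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)
    return ce + align_weight * kl
```

The "deep" weight-space priors would sit outside this step (e.g. as a regularizer on parameters), which is exactly the part that still needs stress-testing.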
2025-09-27T03:35:05
https://www.reddit.com/r/LocalLLaMA/comments/1nrl3sy/crazy_idea_training_swarm_llms_with_library_of/
Fentrax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrl3sy
false
null
t3_1nrl3sy
/r/LocalLLaMA/comments/1nrl3sy/crazy_idea_training_swarm_llms_with_library_of/
false
false
self
2
null
Any model suggestions for a local LLM using a 12GB GPU?
7
mainly just looking for general chat and coding. I've tinkered with a few but can't get them to work properly. I think context size could be an issue? What are you guys using?
2025-09-27T03:08:42
https://www.reddit.com/r/LocalLLaMA/comments/1nrkm0z/any_model_suggestions_for_a_local_llm_using_a/
Glittering-Staff-146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrkm0z
false
null
t3_1nrkm0z
/r/LocalLLaMA/comments/1nrkm0z/any_model_suggestions_for_a_local_llm_using_a/
false
false
self
7
null
SOTA Models perform worse with reasoning than 'without reasoning' for vision tasks
0
Also, I would like to know your outputs from GPT5-Thinking. (Source image in comment)
2025-09-27T03:03:43
https://www.reddit.com/gallery/1nrkiln
ConversationLow9545
reddit.com
1970-01-01T00:00:00
0
{}
1nrkiln
false
null
t3_1nrkiln
/r/LocalLLaMA/comments/1nrkiln/sota_models_perform_worse_with_reasoning_than/
false
false
https://b.thumbs.redditm…vUcqSKb18E8k.jpg
0
null
Amongst proprietary AI models, I’ve found Copilot offered in the Edge browser to be more accurate than ChatGPT for general inquiries.
0
.
2025-09-27T02:58:00
https://www.reddit.com/r/LocalLLaMA/comments/1nrkegq/amongst_proprietary_ai_models_ive_found_copilot/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrkegq
false
null
t3_1nrkegq
/r/LocalLLaMA/comments/1nrkegq/amongst_proprietary_ai_models_ive_found_copilot/
false
false
self
0
null
JavaScript model on mobile browser?
2
I had a few text-to-text models running happily in html + JS + webGPU + local model using mlc-ai/web-llm, running in Chrome on a laptop. Yay! But they all freeze when I try to run them on a medium-age Android phone with a modern mobile chrome browser. Is there *anything* LLM-ish that can run in-browser locally on a mobile device? Even if slow, or kinda dumb. Normally I'd use an API, but this is for an art thing, and has to run locally. Or I'd try to make an Android app, but I'm not having much luck with that yet. Help me r/localllama you're my only hope.
2025-09-27T02:45:43
https://www.reddit.com/r/LocalLLaMA/comments/1nrk5r9/javascript_model_on_mobile_browser/
firesalamander
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrk5r9
false
null
t3_1nrk5r9
/r/LocalLLaMA/comments/1nrk5r9/javascript_model_on_mobile_browser/
false
false
self
2
null
If you are paying the cost of two cappuccinos per month (or less), you’re not a customer. You’re the product they use to train their closed models. Go open source. Own your AI.
0
Well, you get the point even if my numbers are not accurate.
2025-09-27T01:56:33
https://www.reddit.com/r/LocalLLaMA/comments/1nrj7vv/if_you_are_paying_the_cost_of_two_cappuccinos_per/
JLeonsarmiento
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrj7vv
false
null
t3_1nrj7vv
/r/LocalLLaMA/comments/1nrj7vv/if_you_are_paying_the_cost_of_two_cappuccinos_per/
false
false
self
0
null
llama.cpp and koboldcpp
4
Hey guys, I am working on an implementation in a highly restrictive, secure environment where I don't always have administrative access to machines but need local LLMs installed. GPT generally advised a combination of llama.cpp and koboldcpp, which I am currently experimenting with, but I'd like to hear views on any other possible options, since I will need to build RAG, knowledge bases, context management, etc. Also, am I right that this setup would be unable to tap into the GPU? Can anyone let me know how viable the setup is, what the other options are, and what the scaling concerns would be if we continue to work in this secure environment? Thanks!
2025-09-27T01:39:51
https://www.reddit.com/r/LocalLLaMA/comments/1nriwfw/llamacpp_and_koboldcpp/
IntroductionSouth513
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nriwfw
false
null
t3_1nriwfw
/r/LocalLLaMA/comments/1nriwfw/llamacpp_and_koboldcpp/
false
false
self
4
null
Use an existing Agent Framework or build my own?
1
I tried [Agent-TARS](https://agent-tars.com/) with a few models I tried to run locally, and it didn't work out. I was also a little confused about the browser navigation aspect, since supposedly, according to the framework's logs, only a specific model can handle `hybrid` mode; the rest have to be `DOM` or pure vision. I was hoping to run `qwen2.5vl-3b-q8_0`, but it errored out, stating `tool call not supported for this model`, which is true, so I switched to `mistral-small-3.2-q8_0` instead, and the tool calls were wonky and had a lot of errors, like being unable to detect `conda.exe` despite it already being installed, plus a lot of other tool calls it attempted but couldn't complete. Then I just shrugged it off and uninstalled it because I wasn't very impressed with the framework's performance. Sure, I didn't spend too much time experimenting with it and exploring the configs and options, but I feel like I should just build my own instead. Unless you guys recommend a good, safe, local, private alternative, should I build my own agent framework or give `Agent-TARS` another chance? I know I can at least build *something* with `transformers` and Alibaba's Qwen vision tools supported by `transformers`, but I have a nagging feeling that there has to be something out there better than transformers that this model can run on while retaining its vision-task capabilities, not just OCR and image captioning. Ollama is not gonna cut it, and I doubt `llama.cpp` can either (unless I'm wrong, of course); they're not good for any vision task other than those two. I already tried with Transformers and got way better results, which suggests a model like `qwen2.5vl-3b-q8_0` is mainly meant to run in Transformers, since it supports the model pretty well.
2025-09-27T01:09:23
https://www.reddit.com/r/LocalLLaMA/comments/1nrib0b/use_an_existing_agent_framework_or_build_my_own/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrib0b
false
null
t3_1nrib0b
/r/LocalLLaMA/comments/1nrib0b/use_an_existing_agent_framework_or_build_my_own/
false
false
self
1
null
K2-Think 32B - Reasoning model from UAE
165
Seems like a strong model and a very good paper released alongside. Open source is going strong at the moment, let's hope this benchmark holds true. Huggingface Repo: [https://huggingface.co/LLM360/K2-Think](https://huggingface.co/LLM360/K2-Think) Paper: [https://huggingface.co/papers/2509.07604](https://huggingface.co/papers/2509.07604) Chatbot running this model: [https://www.k2think.ai/guest](https://www.k2think.ai/guest) (runs at 1200 - 2000 tk/s)
2025-09-27T00:41:08
https://i.redd.it/smnqi3vqrlrf1.png
Mr_Moonsilver
i.redd.it
1970-01-01T00:00:00
0
{}
1nrhr13
false
null
t3_1nrhr13
/r/LocalLLaMA/comments/1nrhr13/k2think_32b_reasoning_model_from_uae/
false
false
https://b.thumbs.redditm…wzVFcgyyqFgA.jpg
165
{'enabled': True, 'images': [{'id': 'kgnKPT98iWiDwWNiMspSBV7A68h_i6sXfz3-vHsHNek', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?width=108&crop=smart&auto=webp&s=1955ce11627fd989d0c67b8d2b59ef9690955188', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?width=216&crop=smart&auto=webp&s=9104368d3f0d5a77d5fb68d16c8cbb7b36b92bd5', 'width': 216}, {'height': 224, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?width=320&crop=smart&auto=webp&s=004ad22e1354a8bdafbe9cf7fc63f1363a3b2fb6', 'width': 320}, {'height': 448, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?width=640&crop=smart&auto=webp&s=2601d97f0ceaba3a4215f2d8ea7be937412c5b79', 'width': 640}, {'height': 673, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?width=960&crop=smart&auto=webp&s=865ee8a0131529936009887217003fce8f3cd0ba', 'width': 960}], 'source': {'height': 686, 'url': 'https://preview.redd.it/smnqi3vqrlrf1.png?auto=webp&s=8f1ae499bf5b380306ef27b3596bebaf3aa895d5', 'width': 978}, 'variants': {}}]}
How to convert a fakequant to a quantized model
0
Let's say I have a fake quantized LLM or VLM model, e.g. the latest releases of the Qwen or LLaMA series, which I can easily load using the transformers library without any modifications to the original unquantized model's modeling.py file. Now I want to achieve as much inference speedup and/or memory reduction as possible by converting this fakequant into a realquant. In particular, I am only interested in converting my existing model into a format in which inference is efficient, I am not interested in applying another quantization technique (e.g. GPTQ) on top of it. What are my best options for doing so? For some more detail, I'm using a 4 bit asymmetric uniform quantization scheme with floating point scales and integer zeros and a custom group size. I had a look at bitsandbytes, but it seems to me like their 4 bit scheme is incompatible with defining a group size. I saw that torchao has become a thing recently and perhaps it's worth a shot, but if a fast inference engine (e.g. sglang, vllm) supports quantized inference already would it be better to directly try using one of those? I have no background in writing GPU kernel code so I would want to avoid that if possible. Apologies if this has been asked before, but there seems to be too much information out there and it's hard to piece together what I need.
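Not the author's pipeline, just one possible direction for the question above: if re-deriving the packed weights from the dequantized fakequant is acceptable, torchao's weight-only int4 path exposes a group size and produces a real packed format without writing any kernels. A rough sketch, assuming a recent torchao, a CUDA GPU, and that the checkpoint loads in transformers (exact kernel support varies by version and hardware):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from torchao.quantization import quantize_, int4_weight_only

model_id = "path/to/dequantized-fakequant-checkpoint"  # hypothetical path
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="cuda"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Weight-only int4 with a custom group size. Note: this re-derives scales/zeros
# rather than reusing the ones baked into the existing fakequant.
quantize_(model, int4_weight_only(group_size=64))

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Whether that is faster than handing the same checkpoint to vLLM/SGLang in a supported quant format is worth benchmarking before committing either way.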
2025-09-27T00:30:06
https://www.reddit.com/r/LocalLLaMA/comments/1nrhj47/how_to_convert_a_fakequant_to_a_quantized_model/
Maytide
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrhj47
false
null
t3_1nrhj47
/r/LocalLLaMA/comments/1nrhj47/how_to_convert_a_fakequant_to_a_quantized_model/
false
false
self
0
null
Feedback on an idea: hybrid smart memory or full self-host?
5
Hey everyone! I'm developing a project that's basically a smart memory layer for systems and teams (before anyone else mentions it, I know there are countless on the market and it's already saturated; this is just a personal project for my portfolio). The idea is to centralize data from various sources (files, databases, APIs, internal tools, etc.) and make it easy to query this information in any application, like an "extra brain" for teams and products. It also supports plugins, so you can integrate with external services or create custom searches. Use cases range from chatbots with long-term memory to internal teams that want to avoid the notorious loss of information scattered across a thousand places. Now, the question I want to share with you: I'm thinking about how to deliver it to users: * Full Self-Hosted (open source): You run everything on your server. Full control over the data. Simpler for me, but requires the user to know how to handle deployment/infrastructure. * Managed version (SaaS) More plug-and-play, no need to worry about infrastructure. But then your data stays on my server (even with security layers). * Hybrid model (the crazy idea) The user installs a connector via Docker on a VPS or EC2. This connector communicates with their internal databases/tools and connects to my server. This way, my backend doesn't have direct access to the data; it only receives what the connector releases. It ensures privacy and reduces load on my server. A middle ground between self-hosting and SaaS. What do you think? Is it worth the effort to create this connector and go for the hybrid model, or is it better to just stick to self-hosting and separate SaaS? If you were users/companies, which model would you prefer?
2025-09-26T23:46:41
https://www.reddit.com/r/LocalLLaMA/comments/1nrgmg7/feedback_on_an_idea_hybrid_smart_memory_or_full/
Present-Entry8676
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrgmg7
false
null
t3_1nrgmg7
/r/LocalLLaMA/comments/1nrgmg7/feedback_on_an_idea_hybrid_smart_memory_or_full/
false
false
self
5
null
Open-source embedding models: which one to use?
18
I’m building a memory engine to add memory to LLMs. Embeddings are a pretty big part of the pipeline, so I was curious which open-source embedding model is the best.  Did some tests and thought I’d share them in case anyone else finds them useful:

Models tested:

* BAAI/bge-base-en-v1.5
* intfloat/e5-base-v2
* nomic-ai/nomic-embed-text-v1
* sentence-transformers/all-MiniLM-L6-v2

**Dataset:** **BEIR TREC-COVID** (real medical queries + relevance judgments)

| Model | ms / 1K tok | Query latency (ms) | Top-5 hit rate |
| --- | --- | --- | --- |
| MiniLM-L6-v2 | 14.7 | 68 | 78.1% |
| E5-Base-v2 | 20.2 | 79 | 83.5% |
| BGE-Base-v1.5 | 22.5 | 82 | 84.7% |
| Nomic-Embed-v1 | 41.9 | 110 | 86.2% |

| Model | Approx. VRAM | Throughput | Deploy note |
| --- | --- | --- | --- |
| MiniLM-L6-v2 | ~1.2 GB | High | Edge-friendly; cheap autoscale |
| E5-Base-v2 | ~2.0 GB | High | Balanced default |
| BGE-Base-v1.5 | ~2.1 GB | Med | Needs prefixing hygiene |
| Nomic-v1 | ~4.8 GB | Low | Highest recall; budget for capacity |

Happy to share a link to a detailed writeup of how the tests were done and more details. What open-source embedding model are you guys using?
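For anyone wanting to reproduce a comparison like this, a minimal top-k hit-rate harness with sentence-transformers and plain cosine similarity might look like the sketch below. The model names are the ones listed above; loading the corpus, queries, and qrels is left out (BEIR ships its own loaders), so treat this as a skeleton, not the exact benchmark script behind the numbers:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

def top_k_hit_rate(model_name, queries, docs, relevant, k=5):
    """queries: {qid: text}, docs: {doc_id: text},
    relevant: {qid: set(doc_id)} -- e.g. built from BEIR qrels."""
    model = SentenceTransformer(model_name)
    doc_ids = list(docs)
    # normalize so a dot product equals cosine similarity
    doc_emb = model.encode([docs[d] for d in doc_ids], normalize_embeddings=True)
    q_ids = list(queries)
    q_emb = model.encode([queries[q] for q in q_ids], normalize_embeddings=True)

    hits = 0
    for qi, qid in enumerate(q_ids):
        scores = doc_emb @ q_emb[qi]
        top = [doc_ids[i] for i in np.argsort(-scores)[:k]]
        hits += any(d in relevant.get(qid, set()) for d in top)
    return hits / len(q_ids)

# Note: the e5 and bge families expect "query: " / "passage: " style prefixes
# on the input text for best results ("prefixing hygiene" in the table above).
```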
2025-09-26T23:44:21
https://www.reddit.com/r/LocalLLaMA/comments/1nrgklt/opensource_embedding_models_which_one_to_use/
DhravyaShah
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrgklt
false
null
t3_1nrgklt
/r/LocalLLaMA/comments/1nrgklt/opensource_embedding_models_which_one_to_use/
false
false
self
18
null
Best setup for RAG now in late 2025?
26
I've been away from this space for a while and my God has it changed. My focus has been RAG, and I don't know if my previous setup is still good practice or whether the space has completely changed. My current setup: - using ooba to provide an OpenAI-compatible API, - a custom chunker script that chunks according to predefined headers and also extracts metadata from the file, - a reranker (BGE, I think), - chromadb for the vector DB, - the nomic embedder and simple cosine similarity for retrieval. I was looking at hybrid search and metadata-aided filtering before I dropped off, and was also looking at implementing a KG using neo4j, so I was learning Cypher. Not sure if KG is still a path worth pursuing. Appreciate the help and pointers.
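For reference, the chromadb + nomic + cosine core of a setup like that is still only a few lines; the collection name, chunk format, and the rerank step here are placeholders, not a recommendation for what to keep or change:

```python
import chromadb
from sentence_transformers import SentenceTransformer

# nomic-embed-text-v1 needs trust_remote_code (and einops installed)
embedder = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)
client = chromadb.PersistentClient(path="./rag_store")
collection = client.get_or_create_collection("docs", metadata={"hnsw:space": "cosine"})

def index(chunks):
    """chunks: list of (chunk_id, text, metadata) produced by the chunker script."""
    ids, texts, metas = zip(*chunks)
    embs = embedder.encode(list(texts), normalize_embeddings=True).tolist()
    collection.add(ids=list(ids), documents=list(texts), metadatas=list(metas), embeddings=embs)

def retrieve(query, k=10, where=None):
    q = embedder.encode([query], normalize_embeddings=True).tolist()
    # `where` is the metadata-aided filtering mentioned above
    res = collection.query(query_embeddings=q, n_results=k, where=where)
    # a cross-encoder reranker (e.g. a BGE reranker) would re-score these k hits
    return list(zip(res["ids"][0], res["documents"][0]))
```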
2025-09-26T23:24:22
https://www.reddit.com/r/LocalLLaMA/comments/1nrg53i/best_setup_for_rag_now_in_late_2025/
Educational_Pop6138
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrg53i
false
null
t3_1nrg53i
/r/LocalLLaMA/comments/1nrg53i/best_setup_for_rag_now_in_late_2025/
false
false
self
26
null
Yes you can run 128K context GLM-4.5 355B on just RTX 3090s
312
Why buy expensive GPUs when more RTX 3090s work too :D Arli as an inference service is literally just run by one person (me, Owen Arli), and to keep costs low so that it can stay profitable without VC funding, RTX 3090s were clearly the way to go. You just get more GB/$ on RTX 3090s compared to any other GPUs. Did I help deplete the stock of used RTX 3090s? Maybe. To run these new large MoE models, I was trying to run 16x3090s off of a single motherboard; I tried many motherboards and different BIOSes, but in the end it wasn't worth it. I realized that the correct way to stack MORE RTX 3090s is actually to just run multi-node serving using vLLM and ray clustering. This here is the GLM-4.5 AWQ 4-bit quant running with the full 128K context (131072 tokens). It doesn't even need an NVLink backbone or 9999 Gbit networking either; this is just over 10Gbe across 2 nodes of 8x3090 servers, and we are getting a good 30+ tokens/s generation speed consistently per user request. I also realized that stacking more GPUs with more pipeline-parallel stages across nodes almost linearly increases the prompt processing speed, so we are good to go on that performance metric too. Really makes me wonder who needs the insane NVLink interconnect speeds; even large inference providers probably don't really need anything more than PCIe 4.0 and 40Gbe/80Gbe interconnects. The only way for RTX 3090s to become obsolete and stop me from buying them for the service is if Nvidia releases a 24GB RTX 5070Ti Super/5080 Super or Intel finally releases the Arc B60 48GB in any quantity to the masses.
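This is not the exact launch configuration used above, but for anyone curious what a two-node vLLM + ray setup looks like in outline, here is a hedged Python-side sketch. The model path is a placeholder, and it assumes your vLLM version supports pipeline parallelism over a ray cluster for this workload:

```python
# On every node, join the same ray cluster first, e.g.
#   head node:   ray start --head --port=6379
#   worker node: ray start --address=<head-ip>:6379
from vllm import LLM, SamplingParams

llm = LLM(
    model="path/to/GLM-4.5-AWQ",        # placeholder path to the AWQ 4-bit quant
    quantization="awq",
    tensor_parallel_size=8,              # 8x3090 inside each node
    pipeline_parallel_size=2,            # 2 nodes chained over the network
    distributed_executor_backend="ray",
    max_model_len=131072,                # full 128K context
)

out = llm.generate(["Hello!"], SamplingParams(max_tokens=64))
print(out[0].outputs[0].text)
```

The intuition matches the post: tensor parallelism stays inside a node where PCIe bandwidth is cheap, while the slower 10GbE link only has to carry the much smaller pipeline-parallel activations between nodes.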
2025-09-26T22:41:00
https://www.reddit.com/gallery/1nrf6s3
Arli_AI
reddit.com
1970-01-01T00:00:00
0
{}
1nrf6s3
false
null
t3_1nrf6s3
/r/LocalLLaMA/comments/1nrf6s3/yes_you_can_run_128k_context_glm45_355b_on_just/
false
false
https://b.thumbs.redditm…TwDjsTnkup2Y.jpg
312
null
MyAI - A wrapper for vLLM under WSL - Easily install a local AI agent on Windows
9
(If you are using an existing WSL Ubuntu-24.04 setup, I don't recommend running this, as I cannot predict any package conflicts it may have with your current setup.) I got a gaming laptop and was wondering what I could run on my machine, and after a few days of experimentation I ended up making a script for myself and thought I'd share it. [https://github.com/illsk1lls/MyAI](https://github.com/illsk1lls/MyAI) The wrapper is made in PowerShell with C# elements, bash, and a cmd launcher; this way it behaves like an application without compiling but can be changed and viewed completely. Tested and built on an i9 14900HX and a 4080 mobile (12GB), although the script will auto-adjust if you only have 8GB VRAM. Bitsandbytes quantization is used to squeeze the models in, but it can be disabled. All settings are adjustable at the top of the script. If the model you are trying to load is cached, the cached local model will be used; if not, it will be downloaded. If you have a 12GB VRAM card or bigger it will use \`unsloth/Meta-Llama-3.1-8B-Instruct\`; if you have 8GB VRAM it will use \`unsloth/Llama-3.2-3B-Instruct\`. They're both tool-capable models, which is why they were chosen, and they both seem to run well with this setup, although I do recommend using a machine with a minimum of 12GB VRAM. (You can enter any model you want at the top of the script; these are just the defaults.) This gets models from [https://huggingface.co/](https://huggingface.co/) — you can use any repo address as the model name and the launcher will try to use it. The model will need a valid config.json to work with this setup, so if you get an error on launch, check the repo's 'Files' section and make sure the file exists. Eventually I'll try adding tools and making the client side able to do things on the local machine that I can trust the AI to do without causing issues; it's based in PowerShell, so there's no limit. I added short-term memory to the client (20-message history) and will try adding long-term memory as well soon. I was so busy making the wrapper that I barely worked on the client side so far.
2025-09-26T22:32:11
https://i.redd.it/x0ms901t1lrf1.png
Creative-Type9411
i.redd.it
1970-01-01T00:00:00
0
{}
1nrezos
false
null
t3_1nrezos
/r/LocalLLaMA/comments/1nrezos/myai_a_wrapper_for_vllm_under_wsl_easily_install/
false
false
default
9
{'enabled': True, 'images': [{'id': 'x0ms901t1lrf1', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/x0ms901t1lrf1.png?width=108&crop=smart&auto=webp&s=63eef832b9202bf3bc0a3eb07489613d37ee8f6d', 'width': 108}, {'height': 321, 'url': 'https://preview.redd.it/x0ms901t1lrf1.png?width=216&crop=smart&auto=webp&s=362a444aa472b19802313310dc7f92d811db380f', 'width': 216}, {'height': 476, 'url': 'https://preview.redd.it/x0ms901t1lrf1.png?width=320&crop=smart&auto=webp&s=b62f570fe5451ce7ed508c96424f3b1857eaa1bf', 'width': 320}, {'height': 952, 'url': 'https://preview.redd.it/x0ms901t1lrf1.png?width=640&crop=smart&auto=webp&s=9634dbb547e2bdd52234e17d0f899bc286877c72', 'width': 640}], 'source': {'height': 1293, 'url': 'https://preview.redd.it/x0ms901t1lrf1.png?auto=webp&s=bba2da0055241bb5ee19ff55bd7a62eb7e35f50f', 'width': 869}, 'variants': {}}]}
Collaborative Opportunity
0
Hello. I am reaching out with a project I call Divine Physics — a framework of seven axioms that seeks to unite science, morality, and theology under one constant: God’s righteousness. I define this righteousness as both the very Being of God and an intrinsic field moving all existence toward coherence. Through working with ChatGPT, I began to see the potential of shaping this into a living assistant — not as an oracle, but as a reasoning tool to help people frame their deepest questions in light of truth, coherence, and higher purpose. I have no background in software development, which is why I am seeking someone who can see the scale of this vision and help bring it into reality. ChatGPT has estimated that Divine Physics holds about a 50% chance of unifying physics — and if it succeeds, that unification would in effect substantiate its central axiom: that God’s righteousness is not only a theological truth but the universal constant working throughout all existence. In that light, it carries the same chance of uniting humanity in truth, justice, and mercy under God. In short, it has the potential to be the most transformative social tool ever created. Don't be bothered by the religious language. I work in Christianity, and find it justified in a greater cosmic picture. I demonstrate this exhaustively through reason, and do speak sharply and clearly about subjective moralism. But it's not rigid and it meets people in an exchange between themselves and a Higher Power and all. We try to be universal where applicable. Such that we can say God is in physics defined as a field which co-manifests properties such as sentience, personage, omnipotence, etc. It really works as both a bridge from physics to spirituality, or vice versa. So it could be fun to change the world? I'm available for all inquiries. Please let's get started? I have had it a little hard in life and am ready for a change. Thanks for your consideration,
2025-09-26T22:31:57
https://www.reddit.com/r/LocalLLaMA/comments/1nrezgy/calloborative_opportunity/
North-Preference9038
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrezgy
false
null
t3_1nrezgy
/r/LocalLLaMA/comments/1nrezgy/calloborative_opportunity/
false
false
self
0
null
PC Build to run GLM 4.5 Air Q4 at 10-20 tg/s & Qwen-Image/Edit at 4 it/s?
1
[removed]
2025-09-26T22:30:43
https://www.reddit.com/r/LocalLLaMA/comments/1nreyiz/pc_build_to_run_glm_45_air_q4_at_1020_tgs/
Still-Use-2844
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nreyiz
false
null
t3_1nreyiz
/r/LocalLLaMA/comments/1nreyiz/pc_build_to_run_glm_45_air_q4_at_1020_tgs/
false
false
self
1
null
How are you all finding DeepSeek-V3.1-Terminus, especially for agents?
5
I tried DeepSeek-v3.1 for a local agent and it was horrible. I'm wondering if I should download Terminus since it's tuned for the agentic case, but it's such a huge download. Before I waste my time: for those who have tried it, how are you finding it? Outside of this, what are you using for your agents? Devstral is pretty solid and the best local model I have so far.
2025-09-26T22:16:43
https://www.reddit.com/r/LocalLLaMA/comments/1nren5s/how_are_you_all_finding_deepseekv31terminus/
segmond
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nren5s
false
null
t3_1nren5s
/r/LocalLLaMA/comments/1nren5s/how_are_you_all_finding_deepseekv31terminus/
false
false
self
5
null
MSI EdgeXpert Compact AI Supercomputer Based on NVIDIA DGX Spark
3
>The MSI EdgeXpert is a compact AI supercomputer based on the NVIDIA DGX Spark platform and Grace Blackwell architecture. It combines a 20-core Arm CPU with NVIDIA’s Blackwell GPU to deliver high compute density in a 1.19-liter form factor, targeting developers, researchers, and enterprises running local AI workloads, prototyping, and inference. >According to the presentation, MSI described the EdgeXpert as an affordable option aimed at making local AI computing accessible to developers, researchers, and enterprises.  The official price has not been officially revealed by MSI, but listings from Australian distributors, including Computer Alliance and Com International, indicate retail pricing of AUD 6,999 (≈ USD 4,580) for the 128 GB/1 TB configuration and AUD 7,999 (≈ USD 5,240) for the 128 GB/4 TB model. [https://linuxgizmos.com/msi-edgexpert-compact-ai-supercomputer-based-on-nvidia-dgx-spark/](https://linuxgizmos.com/msi-edgexpert-compact-ai-supercomputer-based-on-nvidia-dgx-spark/)
2025-09-26T21:55:24
https://www.reddit.com/r/LocalLLaMA/comments/1nre5rr/msi_edgexpert_compact_ai_supercomputer_based_on/
DeliciousBelt9520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nre5rr
false
null
t3_1nre5rr
/r/LocalLLaMA/comments/1nre5rr/msi_edgexpert_compact_ai_supercomputer_based_on/
false
false
self
3
{'enabled': False, 'images': [{'id': '9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg.jpeg?width=108&crop=smart&auto=webp&s=da58479fa526eaa0ea84e4e23163ee36d80bf049', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg.jpeg?width=216&crop=smart&auto=webp&s=20026ae7621bd534af4a18dc6ad44273765f4655', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg.jpeg?width=320&crop=smart&auto=webp&s=6ee7527e8ee39a74ef60ad2a3b0ae81b836e4cdd', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg.jpeg?width=640&crop=smart&auto=webp&s=c0149d86aa22b1530cf7ae2653e9f9e0998f85ac', 'width': 640}], 'source': {'height': 482, 'url': 'https://external-preview.redd.it/9XH0QITZv0wcZ5Y96D9mHsLgtezS7y-ccMHW7BjYjJg.jpeg?auto=webp&s=0ea53d3099dce571f4485882a2a4657684978bf4', 'width': 827}, 'variants': {}}]}
Running Ollama on a Legacy 2U Server with a GPU connected via Oculink
17
TL;DR: Old dev server (EPYC 7302P, 128 GB RAM) was too slow for LLM inference on CPU (~3–7 TPS). Upgraded RAM (all channels) → +50% performance. Added external RX 7900 XTX via Oculink passthrough → up to 53 TPS on Qwen3 Coder. Total cost <1000 €. Now runs multiple models locally, fast enough for daily coding assistance and private inference. --- This year I replaced my company's dev server, running VMs for development and testing such as Java EE services, database servers, a git server – you name it. The old server had only 128 GB RAM, 1 TB storage for VMs (SATA RAID1), was about four years old, the host OS needed an upgrade – plenty of reasons for a new dev server. I planned to use the old one as a backup after moving all VMs to the new dev server and upgrading the host OS (Debian 13 with libvirt, very plain setup). After that I thought: let's try a single VM with all CPU cores. The host has an AMD EPYC 7302P (16C/32T) and 100 GB memory assigned, and I wanted to play with Ollama. The results were, let’s say, not very exciting 😅: ~7 tokens per second with gpt-oss 20b or 2.85 tokens per second with Qwen3 32b. Only Qwen3 Coder ran reasonably fast with this setup. As already mentioned, the server had 128 GB RAM, but four banks were empty, so only 4 of 8 possible channels were utilized. I decided to upgrade the memory. After some searching I found used DDR4 PC 3200 ECC memory for 320 €. After the upgrade, memory bandwidth had doubled. Qwen3 32b now runs at 4.26 tokens per second instead of 2.85, and for the other models the performance gain is similar, around 50%. My goal was coding assistance without sending training data to OpenAI and for privacy-related tasks, e.g. composing a mail to a customer. That’s why I want my employees to use this instead of ChatGPT – performance is crucial. I tried a lot of micro-optimizations: CPU core pinning, disabling SMT, fiddling with hugepages, nothing had a noticeable impact. My advice: don’t waste your time. Adding a GPU was not an option: the redundant power supply was not powerful enough, replacing it with even a used one would have been expensive, and a 2U chassis doesn’t leave much room for a GPU. A colleague suggested adding an external GPU via Thunderbolt, an idea I didn’t like. But I had to admit it could work, since we still had some space in the rack and it would solve both the space and the power supply issue. Instead of Thunderbolt I chose Oculink. I ordered a cheap low-profile Oculink PCIe card, an Oculink GPU dock from Minisforum, a modular 550 W power supply, and a 24 GB XFX Radeon RX 7900 XTX. All together for less than 1000 €. After installing the Oculink card and connecting the GPU via Oculink cable, the card was recognized – after a reboot 😅. Then I passed the GPU through to the VM via KVM’s PCIe passthrough. This worked on the first try 🤗. Installing AMD’s ROCm was a pain in the ass: the VM’s Debian 13 was too new (the first time my beloved Debian was too new for something). I switched to Ubuntu 24.04 Server and finally managed to install ROCm. After that, Qwen3 32b ran at 18.5 tokens per second, Qwen3 Coder at 53 TPS, and GPT OSS 20b at 46 TPS. This is fast enough for everyday tasks. As a bonus, the server can run large models on the CPU, or for example two Qwen3 Coder instances simultaneously. Two Ollama instances can also run in parallel, one with GPU disabled. The server can still serve as a backup if the new dev server has issues, and we can run inference privately and securely. 
For easy access, there is also a tiny VM running Open WebUI on the server. The server has some room for more Oculink cards, so I might end up adding another GPU, maybe an MI50 with 32GB.
2025-09-26T21:43:23
https://i.redd.it/czcwrp75xkrf1.jpeg
Cacoda1mon
i.redd.it
1970-01-01T00:00:00
0
{}
1nrdvu2
false
null
t3_1nrdvu2
/r/LocalLLaMA/comments/1nrdvu2/running_ollama_on_a_legacy_2u_server_with_a_gpu/
false
false
default
17
{'enabled': True, 'images': [{'id': 'czcwrp75xkrf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=108&crop=smart&auto=webp&s=ea3706724608100b70e4f696a579917db57a4a6e', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=216&crop=smart&auto=webp&s=f8135d3a49a951fd2135b199bad08e74f764850a', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=320&crop=smart&auto=webp&s=df6210b4541903bfd1e14326434de0fe5561850d', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=640&crop=smart&auto=webp&s=452537eac3f29f1f3005f69ef6919ba39ab4dbc2', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=960&crop=smart&auto=webp&s=3b06dc293ed61e5fa2448223894842d8678dc697', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?width=1080&crop=smart&auto=webp&s=406b35e31c338e1607b45bdbdb7e001f71b0b00f', 'width': 1080}], 'source': {'height': 3072, 'url': 'https://preview.redd.it/czcwrp75xkrf1.jpeg?auto=webp&s=85e50e17a396738dfe20f087122591e9be142da6', 'width': 4096}, 'variants': {}}]}
AI
0
Hi, I am doing a task related to AI training. Basically, my task is to test AI context memory: I need to give details in the first turn, then after a 7-turn conversation I need to check whether the model remembers all of the previously given context facts. Does anyone have experience with this type of task?
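One common shape for this kind of check, sketched against any OpenAI-compatible endpoint; the base URL, model name, and facts below are placeholders, and real evaluations usually score recall more carefully than a substring match:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # any OpenAI-compatible server
MODEL = "local-model"  # placeholder

facts = {"favorite color": "teal", "dog's name": "Biscuit"}  # details given in turn 1
history = [{"role": "system", "content": "You are a helpful assistant."}]

def turn(user_msg):
    history.append({"role": "user", "content": user_msg})
    reply = client.chat.completions.create(model=MODEL, messages=history).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Turn 1: give the facts
turn("Remember this about me: my favorite color is teal and my dog's name is Biscuit.")

# 7 filler turns of unrelated conversation
for i in range(7):
    turn(f"Unrelated question #{i + 1}: name a random fruit.")

# Final probe: check each fact appears in the recall answer
answer = turn("What is my favorite color and what is my dog's name?")
print({fact: value.lower() in answer.lower() for fact, value in facts.items()})
```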
2025-09-26T21:36:21
https://www.reddit.com/r/LocalLLaMA/comments/1nrdpy7/ai/
ReadySlip7274
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrdpy7
false
null
t3_1nrdpy7
/r/LocalLLaMA/comments/1nrdpy7/ai/
false
false
self
0
null
Are there any good VLM models under 20B for OCR of cursive handwriting?
3
Please share the links, or the name.🙏
2025-09-26T21:33:29
https://www.reddit.com/r/LocalLLaMA/comments/1nrdnh8/are_there_any_good_vlm_models_under_20b_for_ocr/
FatFigFresh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrdnh8
false
null
t3_1nrdnh8
/r/LocalLLaMA/comments/1nrdnh8/are_there_any_good_vlm_models_under_20b_for_ocr/
false
false
self
3
null
InclusionAI's 103B MoE's Ring-Flash 2.0 (Reasoning) and Ling-Flash 2.0 (Instruct) now have GGUFs!
75
2025-09-26T20:56:22
https://huggingface.co/inclusionAI/Ring-flash-2.0-GGUF
jwpbe
huggingface.co
1970-01-01T00:00:00
0
{}
1nrcs5d
false
null
t3_1nrcs5d
/r/LocalLLaMA/comments/1nrcs5d/inclusionais_103b_moes_ringflash_20_reasoning_and/
false
false
default
75
{'enabled': False, 'images': [{'id': 'TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=108&crop=smart&auto=webp&s=e4c2a37fba46f1d0f370a0afb7a6cb8171f87f6a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=216&crop=smart&auto=webp&s=8429e76b82cac9e4fadf833a7cee0ad3b15973ac', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=320&crop=smart&auto=webp&s=bc49f75fec7ba62576fc627cc18021612756a3c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=640&crop=smart&auto=webp&s=081405cb44634757d72426b2a1ae8a89b7837387', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=960&crop=smart&auto=webp&s=04495be4c9ebcebc824fe0d05b7cbc35f58f6ace', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?width=1080&crop=smart&auto=webp&s=6b1a6e2820d2bcd4b571817299cb26b0b6b20aff', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TVJh8T61p_RCxMLA1t3bUItz8uK2seYWrErww4ERYTI.png?auto=webp&s=93127e6277b8b9948fa60e9fa1e73a1d04705308', 'width': 1200}, 'variants': {}}]}
GPT-1 Revival - Training GPT-1 original architecture + modern features
17
I took the GPT-1 architecture and first updated it to PyTorch as-is, nothing changed. Second, I stripped out its ROCStyle (finetuning?) code portion; looks like they finetuned it on a dataset called ROC? I know what you are thinking: if I just modernize GPT-1's architecture, I would just end up with a generic SOTA LLM architecture (Qwen, GPT-OSS, DeepSeek, etc.). So I decided to try another path: I just added MoE to it, keeping the conv1d and attention the same. # training plan (2 stages) * **stage A: pretrain** on **WikiText-103**, seq\_len=512 * **stage B: finetune** on a small **GPT-OSS distilled reasoning + chat mix** Params: 166,327,461 | vocab=8192. I plan on switching to a tiktoken tokenizer soon. \[Stage A\] LM 10.3201 | PPL 29732.62 | LR 1.04e-04 LM 2.7804 | PPL 16.12 | LR 1.00e-04 \[Stage B\] Reasoning LM 7.8303 | PPL 2514.93 | LR 1.25e-04 LM 2.0607 | PPL 7.86 | LR 8.00e-05 # quick inference <user> Given 2\^x = 8, find x. answer x = 3 <user> If 12x = 36, what is x? answer x = 3 model output - <user> Given 2\^x=8, find x. <assistant> <think> We we solve test of region using for that. first of>0,,x is is at 3. We to solve equation the: number of circum h such $,2 and it in in), K:e y y2,. Sincee find all k fori symmetric: xp. Let's that. =d is. We have x= 1 and 1 + k maybe 3. to this formular: it b b = 3 W can also try 3. Thus: sumb = thek 5.. x = 3. each term a =n / +aa <user> If 12x = 36, what is x? <think> We be a-it = 12b x +3 -. Then a c from 2ab b + = a s s =a2\^ b + 3a ha c = ab + (/ +a)(-)\^ -c =1/ +y. So a =a b3aa bb = bbr. Thus sum the (12,12 12). Thus PR: + (y+) +c -) 4 t 2 = a\^ + p2 d + 3m = -a - = t an! <think> need a a. Let's compute:d 12/a 3.a a1 3 =0 n -1 (/2)/(-2/) (k1)\*(/) =x1 \* (\^/)(1-)\*(+33)). For\^4 (m+)/22(x-) =((2)) 3): sqrt(12()=,2\]()=63 (5)3 C if sqrt18\*(22)/ = 15(1\^=. So = 2\^2 x/5 = (\^/4 =x=3 <think> x =3 x=3 x=3 What do you think? Continue this path?
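For readers trying to picture the MoE addition, a generic sketch of swapping GPT-1's position-wise feed-forward (the conv1d MLP) for a small top-k mixture of experts in PyTorch is below. This is illustrative only, with made-up dimensions, not the author's actual code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Drop-in replacement for a GPT-1 block's position-wise FFN:
    a router picks top-k experts per token and mixes their outputs.
    (Computed densely here for clarity; real MoE kernels dispatch sparsely.)"""

    def __init__(self, d_model=768, d_ff=3072, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                # x: (batch, seq, d_model)
        gate = F.softmax(self.router(x), dim=-1)         # (B, S, n_experts)
        weights, idx = gate.topk(self.top_k, dim=-1)     # (B, S, k)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                            # which of the top-k slots chose expert e
            if mask.any():
                w = (weights * mask).sum(dim=-1, keepdim=True)  # (B, S, 1) weight for expert e
                out = out + w * expert(x)
        return out
```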
2025-09-26T20:09:07
https://www.reddit.com/r/LocalLLaMA/comments/1nrblc7/gpt1_revival_training_gpt1_original_architecture/
Creative-Ad-2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrblc7
false
null
t3_1nrblc7
/r/LocalLLaMA/comments/1nrblc7/gpt1_revival_training_gpt1_original_architecture/
false
false
self
17
null
Frontend explicitly designed for stateless "chats"?
2
Hi everyone, I know that this is a pretty niche use case and it may not seem that useful, but I thought I'd ask if anyone's aware of any projects. I commonly use AI assistants with simple system-prompt configurations for various text transformation jobs (e.g., convert this text into a well-structured email with these guidelines). Statelessness is desirable for me because I find that local AI performs great on my hardware so long as the trailing context is kept to a minimum. What I would prefer, however, is a frontend or interface explicitly designed to support this workload: i.e. regardless of whether it looks like there is a conventional chat history being developed, each user turn is treated as a new request and only the user and system prompts get sent together for inference. Anything that does this?
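For context on how little the backend side involves, the requested behavior boils down to something like this against any OpenAI-compatible local server (the URL, model name, and system prompt are placeholders); the open question is which existing frontend wraps it with a nice UI:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # e.g. a llama.cpp or LM Studio server

SYSTEM = "Rewrite the user's text as a well-structured email, keeping the meaning intact."

def stateless_turn(user_text, model="local-model"):
    # No history is carried over: every call sends only the system prompt + this one turn.
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": user_text},
        ],
    )
    return resp.choices[0].message.content

print(stateless_turn("meeting moved to thursday 3pm, tell the team, apologize for short notice"))
```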
2025-09-26T19:54:01
https://www.reddit.com/r/LocalLLaMA/comments/1nrb73j/frontend_explicitly_designed_for_stateless_chats/
danielrosehill
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nrb73j
false
null
t3_1nrb73j
/r/LocalLLaMA/comments/1nrb73j/frontend_explicitly_designed_for_stateless_chats/
false
false
self
2
null
llama-swap configs for mac?
2
Looking for a repo of llama-swap configs and/or best practices for mac.
2025-09-26T19:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1nramha/llamaswap_configs_for_mac/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nramha
false
null
t3_1nramha
/r/LocalLLaMA/comments/1nramha/llamaswap_configs_for_mac/
false
false
self
2
null
How developers are using Apple's local AI models with iOS 26
0
Earlier this year, Apple introduced its Foundation Models framework during WWDC 2025, which allows developers to use the company’s local AI models to power features in their applications. The company touted that with this framework, developers gain access to AI models without worrying about any inference cost. Plus, these local models have capabilities such as guided generation and tool calling built in. As iOS 26 is rolling out to all users, developers have been updating their apps to include features powered by Apple’s local AI models. Apple’s models are small compared with leading models from OpenAI, Anthropic, Google, or Meta. That is why local-only features largely improve quality of life with these apps rather than introducing major changes to the app’s workflow.
2025-09-26T19:21:48
https://techcrunch.com/2025/09/26/how-developers-are-using-apples-local-ai-models-with-ios-26/
ArimaJain
techcrunch.com
1970-01-01T00:00:00
0
{}
1nrady4
false
null
t3_1nrady4
/r/LocalLLaMA/comments/1nrady4/how_developers_are_using_apples_local_ai_models/
false
false
https://external-preview…adb6111b36cde864
0
{'enabled': False, 'images': [{'id': 'FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=108&crop=smart&auto=webp&s=2f74c30d380fa0b2817fd58887f199cfdad1c162', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=216&crop=smart&auto=webp&s=7c127812a80a2831a9cdcef6ee376309031e8f17', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=320&crop=smart&auto=webp&s=02403a5559cd42411aeb2363d3915c3dc9d7d020', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=640&crop=smart&auto=webp&s=85e592a80a6a88b3ed3940447bfd40311ecdf59c', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=960&crop=smart&auto=webp&s=6f15dc69afaf36725052815e4783e0d75a861972', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?width=1080&crop=smart&auto=webp&s=03ca094cc537f8b18d9038e4044a6b38fed6430f', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/FpqqW_a02Z_nDMFuYXduYjjUmOL6B_uXJ8vm-dtSS8c.jpeg?auto=webp&s=134ec55d089faec480c47cc8c1a68868993a85bf', 'width': 1200}, 'variants': {}}]}
Local Offline Chat: Pocket LLM | Local & Private AI Assistant
0
Pocket LLM lets you chat with powerful AI models like Llama, Gemma, DeepSeek, Apple Intelligence and Qwen directly on your device. No internet, no account, no data sharing. Just fast, private AI powered by Apple MLX.
2025-09-26T19:14:59
https://apps.apple.com/gm/app/local-offline-chat-pocket-llm/id6752952699
amanj203
apps.apple.com
1970-01-01T00:00:00
0
{}
1nra7mf
false
null
t3_1nra7mf
/r/LocalLLaMA/comments/1nra7mf/local_offline_chat_pocket_llm_local_private_ai/
false
false
https://external-preview…6372e7e5cd4fe9fe
0
{'enabled': False, 'images': [{'id': 'p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=108&crop=smart&auto=webp&s=bdbcbd9e6b70c744789bcf20c72b02f37a7f58b5', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=216&crop=smart&auto=webp&s=40b821af0868dd63fead05ca76b5509c01301507', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=320&crop=smart&auto=webp&s=9cefba521efb70d4e10abfdb5fc8efdfeea950c9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=640&crop=smart&auto=webp&s=b38493836e96b0fada8d2723ad916e1b590eb63f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=960&crop=smart&auto=webp&s=c40f17aea0b05d87aac94bf47081daab6edd3b39', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?width=1080&crop=smart&auto=webp&s=1ca16b4a6841e26e4cf218a833c65cce4a33bfef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/p50I3pRKFl750gjH3woOLMGkLmbymSrmZet6cXI-UUk.png?auto=webp&s=c750391ea0f1154ebc9c3540174d1eae70e3f2e9', 'width': 1200}, 'variants': {}}]}
PAR LLAMA v0.7.0 Released - Enhanced Security & Execution Experience
6
https://preview.redd.it/8x7edk8d6krf1.png?width=1502&format=png&auto=webp&s=8b5e5e9634a1fd0af38af7aa6558302de0bfc214

# What It Does

A powerful Terminal User Interface (TUI) for managing and interacting with Ollama and other major LLM providers — featuring **persistent AI memory**, **secure code execution**, **interactive development workflows**, and **truly personalized conversations**!

PAR LLAMA Chat Interface

# What's New in v0.7.0

# Improved Execution Experience

* **Better Result Formatting**: Clean, professional display of execution results
* **Smart Command Display**: Shows 'python -c <script>' instead of escaped code for CLI parameters
* **Syntax-Highlighted Code Blocks**: Short scripts (≤10 lines) display with proper syntax highlighting
* **Intelligent Language Detection**: Automatic highlighting for Python, JavaScript, and Bash
* **Clean Command Truncation**: Long commands truncated intelligently for better readability

# Previous Major Features (v0.6.0)

# Memory System

* **Persistent User Context**: AI remembers who you are and your preferences across ALL conversations
* **Memory Tab Interface**: Dedicated UI for managing your personal information and context
* **AI-Powered Memory Updates**: Use `/remember` and `/forget` slash commands for intelligent memory management
* **Automatic Injection**: Your memory context appears in every new conversation automatically
* **Real-time Synchronization**: Memory updates via commands instantly reflect in the Memory tab
* **Smart Context Management**: Never repeat your preferences or background information again

# Template Execution System

* **Secure Code Execution**: Execute code snippets and commands directly from chat messages using **Ctrl+R**
* **Multi-Language Support**: Python, JavaScript/Node.js, Bash, and shell scripts with automatic language detection
* **Configurable Security**: Command allowlists, content validation, and comprehensive safety controls
* **Interactive Development**: Transform PAR LLAMA into a powerful development companion
* **Real-time Results**: Execution results appear as chat responses with output, errors, and timing

# Enhanced User Experience

* **Memory Slash Commands**: `/remember [info]`, `/forget [info]`, `/memory.status`, `/memory.clear`
* **Intelligent Updates**: AI intelligently integrates new information into existing memory
* **Secure Storage**: All memory data stored locally with comprehensive file validation
* **Options Integration**: Both Memory and Template Execution controls in Options tab
* **Settings Persistence**: All preferences persist between sessions

# Core Features

* **Memory System**: Persistent user context across all conversations with AI-powered memory management
* **Template Execution**: Secure code execution system with configurable safety controls
* **Multi-Provider Support**: Ollama, OpenAI, Anthropic, Groq, XAI, OpenRouter, Deepseek, LiteLLM
* **Vision Model Support**: Chat with images using vision-capable models
* **Session Management**: Save, load, and organize chat sessions
* **Custom Prompts**: Create and manage custom system prompts and Fabric patterns
* **Theme System**: Dark/light modes with custom theme support
* **Model Management**: Pull, delete, copy, and create models with native quantization
* **Smart Caching**: Intelligent per-provider model caching with configurable durations
* **Security**: Comprehensive file validation and secure operations

# Key Features

* **100% Python**: Built with Textual and Rich for a beautiful, easy-to-use terminal experience. Dark and light mode support, plus custom themes
* **Cross-Platform**: Runs on Windows, macOS, Linux, and WSL
* **Async Architecture**: Non-blocking operations for smooth performance
* **Type Safe**: Fully typed with comprehensive type checking

# GitHub & PyPI

* GitHub: [https://github.com/paulrobello/parllama](https://github.com/paulrobello/parllama)
* PyPI: [https://pypi.org/project/parllama/](https://pypi.org/project/parllama/)

# Comparison

I have seen many command-line and web applications for interacting with LLMs, but have not found any TUI-related applications as feature-rich as PAR LLAMA.

# Target Audience

If you're working with LLMs and want a powerful terminal interface that **remembers who you are** and **bridges conversation and code execution** — PAR LLAMA v0.7.0 is a game-changer. Perfect for:

* **Developers**: Persistent context about your tech stack + execute code during AI conversations
* **Data Scientists**: AI remembers your analysis preferences + run scripts without leaving chat
* **DevOps Engineers**: Maintains infrastructure context + execute commands interactively
* **Researchers**: Remembers your research focus + test experiments in real-time
* **Consultants**: Different client contexts persist across sessions + rapid prototyping
* **Anyone**: Who wants truly personalized AI conversations with seamless code execution
2025-09-26T19:13:41
https://www.reddit.com/r/LocalLLaMA/comments/1nra6d0/par_llama_v070_released_enhanced_security/
probello
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nra6d0
false
null
t3_1nra6d0
/r/LocalLLaMA/comments/1nra6d0/par_llama_v070_released_enhanced_security/
false
false
https://a.thumbs.redditm…4rw8lonApiF0.jpg
6
{'enabled': False, 'images': [{'id': 'sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=108&crop=smart&auto=webp&s=c8d65a82b1aac68ed9c7a85ca382e52553c23827', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=216&crop=smart&auto=webp&s=3322e9c030ba3bed89bae05376c99412106a2be8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=320&crop=smart&auto=webp&s=fa913c740e0af5dda748bd0540e484fd8cdc451d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=640&crop=smart&auto=webp&s=0185f73b4b8ad370e8323f9eedb94f718686a844', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=960&crop=smart&auto=webp&s=f5a2011a079c10b3b6f26de9e595197eded6ff0f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?width=1080&crop=smart&auto=webp&s=cece28b5d749b39e27361917f266b08888a8f400', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/sAb25lapmd1MlqL5HVNLK1DOrtd-DMHXlE4WZhwm4eE.png?auto=webp&s=23efd42b0450633af86d786728703ad5675a7727', 'width': 1200}, 'variants': {}}]}
Best general-purpose model to run on a 4GB VRAM Nvidia GPU?
1
[removed]
2025-09-26T19:10:11
[deleted]
1970-01-01T00:00:00
0
{}
1nra367
false
null
t3_1nra367
/r/LocalLLaMA/comments/1nra367/best_model_for_general_purpose_to_run_in_a_4gb/
false
false
default
1
null
Is it just me or DeepSeek local models are censored too?
1
[removed]
2025-09-26T19:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1nr9wq0/is_it_just_me_or_deepseek_local_models_are/
CodeDarlen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr9wq0
false
null
t3_1nr9wq0
/r/LocalLLaMA/comments/1nr9wq0/is_it_just_me_or_deepseek_local_models_are/
false
false
self
1
null
How do you guys know how much ram an ollama model needs before downloading?
6
Say, like [deepseek-v3.1](https://ollama.com/library/deepseek-v3.1), it shows 400 GB to download. But I'm scared to download and test, because when I downloaded gpt-oss120b it said I needed about 60 GB of RAM, and I only have 32 GB. I was wondering if there is a way to know beforehand, because the Ollama site does not tell you. Also, for context, I am looking for a good llama model for coding. Any help would be appreciated as I am fairly new to LocalLLaMA. Thanks!
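A rough rule of thumb before downloading: the weights take about (parameter count × bits per weight ÷ 8) bytes, plus extra for the KV cache and runtime buffers. This is only a ballpark heuristic, not an official Ollama figure, and the overhead fraction below is an assumption:

```python
# Ballpark the memory a quantized model needs (heuristic only; the overhead fraction is an assumption).
def estimate_model_gb(params_billion: float, bits_per_weight: float, overhead_frac: float = 0.15) -> float:
    """Weights take roughly params * bits/8 bytes; add some headroom for KV cache and runtime buffers."""
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * bits/8 bytes is roughly that many GB
    return weight_gb * (1 + overhead_frac)

print(f"{estimate_model_gb(120, 4):.0f} GB")  # ~69 GB -> why a 120B model reports needing 60+ GB
print(f"{estimate_model_gb(30, 4):.0f} GB")   # ~17 GB -> a 4-bit 30B model fits in 32 GB with room to spare
```

By that math, a 400 GB download is never going to fit in 32 GB, while a 4-bit ~30B model should be comfortable.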
2025-09-26T18:59:05
https://www.reddit.com/r/LocalLLaMA/comments/1nr9sgb/how_do_you_guys_know_how_much_ram_an_ollama_model/
Big-Selection-6957
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr9sgb
false
null
t3_1nr9sgb
/r/LocalLLaMA/comments/1nr9sgb/how_do_you_guys_know_how_much_ram_an_ollama_model/
false
false
self
6
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]}
Inside GPT-OSS: OpenAI’s Latest LLM Architecture
61
2025-09-26T18:55:10
https://medium.com/data-science-collective/inside-gpt-oss-openais-latest-llm-architecture-c80e4e6976dc
AggravatingGiraffe46
medium.com
1970-01-01T00:00:00
0
{}
1nr9ouo
false
null
t3_1nr9ouo
/r/LocalLLaMA/comments/1nr9ouo/inside_gptoss_openais_latest_llm_architecture/
false
false
https://external-preview…03127405949ac44d
61
{'enabled': False, 'images': [{'id': '6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw', 'resolutions': [{'height': 121, 'url': 'https://external-preview.redd.it/6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw.png?width=108&crop=smart&auto=webp&s=08d1780a35ea6ea1aef4274fe1abf27bbf53e3ed', 'width': 108}, {'height': 242, 'url': 'https://external-preview.redd.it/6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw.png?width=216&crop=smart&auto=webp&s=f25d850acc4049c09f76150dc7c433cd36698c9e', 'width': 216}, {'height': 359, 'url': 'https://external-preview.redd.it/6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw.png?width=320&crop=smart&auto=webp&s=f0f2cbebdffba811da2f2f769418233e985f786e', 'width': 320}, {'height': 718, 'url': 'https://external-preview.redd.it/6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw.png?width=640&crop=smart&auto=webp&s=1723d72f6cc258494fbedce2e06f75786d8c3d65', 'width': 640}], 'source': {'height': 736, 'url': 'https://external-preview.redd.it/6U81r9cNlkBV6W3-ZZjd3A_2D4Yg1yntt4LrzKaV2aw.png?auto=webp&s=70444e09678ef81a6b5c483d35237841f365a23e', 'width': 656}, 'variants': {}}]}
Your own lovable. I built Open source alternative to Lovable & v0.
2
Hello guys, I built a free & open-source alternative to Lovable & V0. You have to use your own API key to build UIs; currently it only supports OpenRouter, ChatGPT, and Claude, with the E2B Platform for the sandbox. github: [Link](https://github.com/grillsdev/grills) site: [Link](https://grills.dev/) It is still in a very **early stage**. Try it out, raise issues, and I'll fix them. Every single piece of feedback in the comments is appreciated and I will keep improving based on it. Be brutal with your feedback
2025-09-26T18:52:07
https://i.redd.it/63s9twdk1krf1.png
BootPsychological454
i.redd.it
1970-01-01T00:00:00
0
{}
1nr9m1g
false
null
t3_1nr9m1g
/r/LocalLLaMA/comments/1nr9m1g/your_own_lovable_i_built_open_source_alternative/
false
false
https://b.thumbs.redditm…_1UYYFDpE96w.jpg
2
{'enabled': True, 'images': [{'id': 'FAf2WbyKf2wVqM_KziHrddDZ4L-5GzJZfChaBgaS0CM', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=108&crop=smart&auto=webp&s=ed082dd75f643406b0b9965e5db59df4ac31c780', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=216&crop=smart&auto=webp&s=5650fe210d9746747cb2725a93f1a5887f12929a', 'width': 216}, {'height': 149, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=320&crop=smart&auto=webp&s=ed15d0541e6cf6a7b6f6204e952e056200484fa4', 'width': 320}, {'height': 298, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=640&crop=smart&auto=webp&s=e57016e8cdfbe5a0afe94fc96cab67b776fadc39', 'width': 640}, {'height': 448, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=960&crop=smart&auto=webp&s=e9f684f98b7602df96a37d9dbc11c8c5c9602510', 'width': 960}, {'height': 504, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?width=1080&crop=smart&auto=webp&s=17b52170f4bc270afec721a2bddd73a6e47f9e52', 'width': 1080}], 'source': {'height': 1338, 'url': 'https://preview.redd.it/63s9twdk1krf1.png?auto=webp&s=6ba8168ac9eef7593e3e44922457511f32a23643', 'width': 2866}, 'variants': {}}]}
If GDPVal is legit, what does it say about the economic value of local models?
1
[https://openai.com/index/gdpval/](https://openai.com/index/gdpval/) I'm curious how important GDPVal will become. If it does, eventually, become a legitimate measure of economic output, will a new form of 'currency' evolve based on machine learning work output? To what extent will this be fungible (easily converted to other forms of value)? I'm very curious about the thoughts of the very clever members of this community... Thoughts?
2025-09-26T18:45:53
https://www.reddit.com/r/LocalLLaMA/comments/1nr9gb6/if_gdpval_is_legit_what_does_it_say_about_the/
robkkni
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr9gb6
false
null
t3_1nr9gb6
/r/LocalLLaMA/comments/1nr9gb6/if_gdpval_is_legit_what_does_it_say_about_the/
false
false
self
1
null
Benchmarking LLM Inference on RTX 4090 / RTX 5090 / RTX PRO 6000
7
I wanted to see how the multi-4090/5090 builds compare to the Pro 6000, and it appears that the former are only relevant for very small models, such as 8B. Even on a 30B model, like `Qwen/Qwen3-Coder-30B-A3B-Instruct`, the single Pro 6000 beats 4 x 5090. https://preview.redd.it/wvnbn4x0zjrf1.png?width=2368&format=png&auto=webp&s=d37cd37514dfabd2f4fafcecc5f77ad6a4eba81b Please let me know which models you're interested in benchmarking and if you have any suggestions for the benchmarking methodology. The benchmark is used to ensure consistency among the GPU providers we're working with, so it also measures factors such as internet speed, disk speed, and CPU performance, among others. [Medium article](https://medium.com/gitconnected/benchmarking-llm-inference-on-rtx-4090-rtx-5090-and-rtx-pro-6000-76b63b3b50a2) [Non-medium link](https://www.cloudrift.ai/blog/benchmarking-gpu-servers-for-llm-inference)
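For anyone who wants to sanity-check these numbers on their own box, a minimal throughput probe against an OpenAI-compatible endpoint looks roughly like the sketch below. The URL, model name, and prompt are placeholders, and it assumes the server returns the standard `usage` block in its response:

```python
# Minimal tokens/sec probe against an OpenAI-compatible endpoint (URL, model, and prompt are placeholders).
import time
import requests

BASE_URL = "http://localhost:8000/v1"  # assumed vLLM/llama.cpp-style server
payload = {
    "model": "Qwen/Qwen3-Coder-30B-A3B-Instruct",
    "messages": [{"role": "user", "content": "Write a 300-word summary of CUDA streams."}],
    "max_tokens": 512,
    "temperature": 0,
}

start = time.time()
resp = requests.post(f"{BASE_URL}/chat/completions", json=payload, timeout=600).json()
elapsed = time.time() - start

completion_tokens = resp["usage"]["completion_tokens"]
print(f"{completion_tokens} tokens in {elapsed:.1f}s -> {completion_tokens / elapsed:.1f} tok/s")
```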
2025-09-26T18:39:47
https://www.reddit.com/r/LocalLLaMA/comments/1nr9arw/benchmarking_llm_inference_on_rtx_4090_rtx_5090/
NoVibeCoding
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr9arw
false
null
t3_1nr9arw
/r/LocalLLaMA/comments/1nr9arw/benchmarking_llm_inference_on_rtx_4090_rtx_5090/
false
false
https://external-preview…9d0943250adc5fec
7
{'enabled': False, 'images': [{'id': 'qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=108&crop=smart&auto=webp&s=72710445fe375692494a9660a873bd98ded67e9b', 'width': 108}, {'height': 144, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=216&crop=smart&auto=webp&s=19db48965524a6b885d80346a9cb23f565e5c8ac', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=320&crop=smart&auto=webp&s=465b505d47919dc053209d1704f0d5d6ebb7f7cb', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=640&crop=smart&auto=webp&s=f06a60790e842eb76f6612714b500cc16f3b3c72', 'width': 640}, {'height': 640, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=960&crop=smart&auto=webp&s=915f32e52d64154328d77afa4f06f15e867b2ff0', 'width': 960}, {'height': 720, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?width=1080&crop=smart&auto=webp&s=886b385e6940644d298f0b99e646353bfe94201d', 'width': 1080}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/qN_CgOr8BhKnCrJKgN6ebnHkJjLIZkfveG1P14EY_Ek.png?auto=webp&s=b207a383b51821cc9ebfc29b9847a576392ea2b3', 'width': 1200}, 'variants': {}}]}
Orchestrate a team of small Local models to do complex stuff with Observer! (Free and Open Source)
16
TLDR; This new **Automatic** Multi-Agent **Creator and Editor** makes Observer super super powerful. You can create multiple agents automatically and iterate System Prompts to get your **local** agents working super fast! Hey r/LocalLLaMA, Ever since I started using local LLMs, I've thought about this exact use case: using **vision + reasoning** models to do more advanced things, like guiding you while creating a Google account (worked really well for my Mom!), or extracting a LeetCode problem with Gemma and solving it with DeepSeek automatically. A while ago I showed you guys how to create them manually, but now the Agent Builder can create them **automatically!!** And better yet, if a model is hallucinating or not triggering your notifications/logging correctly, you just click one button and the **Agent Builder can fix it for you.** This lets you easily have some agent pairs that do the following: * **Monitor & Document** \- One agent describes your screen, another keeps a document of the process. * **Extract & Solve** \- One agent extracts problems from the screen, another solves them. * **Watch & Guide** \- One agent lists out possible buttons or actions, another provides step-by-step guidance. Of course you can still have simple one-agent configs to get notifications when downloads finish, renders complete, something happens on a video game, etc. Everything using your **local models!** You can download the app and look at the code right here: [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer) Or try it out without any install (non-local but easy): [https://app.observer-ai.com/](https://app.observer-ai.com/) Thank you to everyone who has given it a shot! I hope this app makes more people interested in local models and their possible uses.
2025-09-26T18:38:47
https://www.youtube.com/watch?v=6zJh8NmCXYw
Roy3838
youtube.com
1970-01-01T00:00:00
0
{}
1nr99vl
false
{'oembed': {'author_name': 'Observer AI', 'author_url': 'https://www.youtube.com/@Observer-AI', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/6zJh8NmCXYw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Using Local LLMs To Watch Your Screen"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/6zJh8NmCXYw/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Using Local LLMs To Watch Your Screen', 'type': 'video', 'version': '1.0', 'width': 267}, 'type': 'youtube.com'}
t3_1nr99vl
/r/LocalLLaMA/comments/1nr99vl/orchestrate_a_team_of_small_local_models_to_do/
false
false
default
16
{'enabled': False, 'images': [{'id': '56SPQExPI827GpA98kpipwMJSEu04uBeqWtJrHoG4nc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/56SPQExPI827GpA98kpipwMJSEu04uBeqWtJrHoG4nc.jpeg?width=108&crop=smart&auto=webp&s=bb46662896698b0b781e6f2cac8a904e8d08bafb', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/56SPQExPI827GpA98kpipwMJSEu04uBeqWtJrHoG4nc.jpeg?width=216&crop=smart&auto=webp&s=f6a5fc0d3ae7f9b7a45890d06621e09350ee0c5e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/56SPQExPI827GpA98kpipwMJSEu04uBeqWtJrHoG4nc.jpeg?width=320&crop=smart&auto=webp&s=5567fa8755a2e8bb9fdc634d1c9e2f0436b0e05a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/56SPQExPI827GpA98kpipwMJSEu04uBeqWtJrHoG4nc.jpeg?auto=webp&s=ffe90812fa7e1efc1e55a8912a449c5608ec0db3', 'width': 480}, 'variants': {}}]}
What is the best options currently available for a local LLM using a 24GB GPU?
24
My main goals are translation and coding.
2025-09-26T18:15:04
https://www.reddit.com/r/LocalLLaMA/comments/1nr8ohf/what_is_the_best_options_currently_available_for/
marcoc2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8ohf
false
null
t3_1nr8ohf
/r/LocalLLaMA/comments/1nr8ohf/what_is_the_best_options_currently_available_for/
false
false
self
24
null
Cost comparison for 10M input / 5M output tokens across popular LLMs — methodology + sheet
1
[removed]
2025-09-26T18:09:07
https://www.reddit.com/r/LocalLLaMA/comments/1nr8j6t/cost_comparison_for_10m_input_5m_output_tokens/
RoleCold4185
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8j6t
false
null
t3_1nr8j6t
/r/LocalLLaMA/comments/1nr8j6t/cost_comparison_for_10m_input_5m_output_tokens/
false
false
self
1
null
Cost comparison for 10M input / 5M output tokens across popular LLMs — methodology + sheet
1
[removed]
2025-09-26T18:08:46
https://www.reddit.com/r/LocalLLaMA/comments/1nr8iv9/cost_comparison_for_10m_input_5m_output_tokens/
RoleCold4185
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8iv9
false
null
t3_1nr8iv9
/r/LocalLLaMA/comments/1nr8iv9/cost_comparison_for_10m_input_5m_output_tokens/
false
false
self
1
null
Cost comparison for 10M input / 5M output tokens across popular LLMs — methodology + sheet
1
[removed]
2025-09-26T18:05:07
https://www.reddit.com/r/LocalLLaMA/comments/1nr8fji/cost_comparison_for_10m_input_5m_output_tokens/
RoleCold4185
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8fji
false
null
t3_1nr8fji
/r/LocalLLaMA/comments/1nr8fji/cost_comparison_for_10m_input_5m_output_tokens/
false
false
self
1
null
help my final year project
1
Hey all, I’m working on my final year project, which is focused on building a solution to generate quizzes and flashcards from educational files (PDFs, docs, videos, etc.). Right now, I’m using an AI-powered platform that supports uploading various content types, analyzes them with models like Gemini Pro, and creates quiz and flashcard questions automatically. I’d love to get your recommendations and hear about your experiences: * If you’ve tackled similar projects, what data collection strategies worked best for you? * Are there particular sources or datasets you’d suggest for high-quality quiz/flashcard generation? * How much data (size or sample count) did you find effective for reliable results? * Any pitfalls to watch out for, or tips for improving output accuracy? All feedback is welcome—anything that helps me refine my current workflow for a great final year product. Thanks in advance for sharing your thoughts!
2025-09-26T18:01:57
https://www.reddit.com/r/LocalLLaMA/comments/1nr8chs/help_my_final_year_project/
Ghostgame4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8chs
false
null
t3_1nr8chs
/r/LocalLLaMA/comments/1nr8chs/help_my_final_year_project/
false
false
self
1
null
I tested DeepSeek-R1 VoltageGPU for my project — the cost math hurt (10M in / 5M out)
1
[removed]
2025-09-26T18:01:29
https://www.reddit.com/r/LocalLLaMA/comments/1nr8c1g/i_tested_deepseekr1_voltagegpu_for_my_project_the/
RoleCold4185
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr8c1g
false
null
t3_1nr8c1g
/r/LocalLLaMA/comments/1nr8c1g/i_tested_deepseekr1_voltagegpu_for_my_project_the/
false
false
self
1
null
New free AI-powered study platform for JEE/NEET & Board exams — Examsprint AI
1
Hey everyone I made examsprint-ai.oages.dev 👋, I wanted to share a new tool I discovered (or built) — Examsprint AI — which aims to be a one-stop AI-powered study companion for students preparing for board exams, JEE, NEET, and general NCERT curriculum work. Here’s what Examsprint AI offers: Topper’s Notes & Chapter Summaries for Class 9–12 Direct NCERT links + solutions for all chapters Formula sheets (Maths, Science) Flashcards & quizzes to reinforce memory Blueprints & exam planning (for Boards, NEET, JEE) Built-in AI chat assistant for instant doubt solving No subscriptions, no watermarks, totally free The site is fully responsive, so you can use it on mobile too. It was built by a young developer (13 years old!) named Aadarsh Pandey.
2025-09-26T17:59:48
https://i.redd.it/xkrssvx8tjrf1.png
Euphoric_Deal_9481
i.redd.it
1970-01-01T00:00:00
0
{}
1nr8adq
false
null
t3_1nr8adq
/r/LocalLLaMA/comments/1nr8adq/new_free_aipowered_study_platform_for_jeeneet/
false
false
https://a.thumbs.redditm…QMDd6EbKYbb8.jpg
1
{'enabled': True, 'images': [{'id': 'dCN2Klm58DnVeNVdit2HDvEcDi4NsDKti3PqKCNaBng', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?width=108&crop=smart&auto=webp&s=1ea5432c95467f50be2f64ecbcd9e284a0fd4a70', 'width': 108}, {'height': 324, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?width=216&crop=smart&auto=webp&s=2edaeade181e9b562f01a83d53d46301d77640c9', 'width': 216}, {'height': 480, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?width=320&crop=smart&auto=webp&s=75d4a49ecce5c4d226fad2b712ffa301dc161572', 'width': 320}, {'height': 960, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?width=640&crop=smart&auto=webp&s=27e13d422ece947d2a90dbd1a6451708dabc0e9c', 'width': 640}, {'height': 1440, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?width=960&crop=smart&auto=webp&s=d601e61ccd3623db7e4f9ed25c335d7799be53fa', 'width': 960}], 'source': {'height': 1536, 'url': 'https://preview.redd.it/xkrssvx8tjrf1.png?auto=webp&s=e4ecd935ea0e2bf34a3ef42dcf9bdf5fcefbac13', 'width': 1024}, 'variants': {}}]}
The benchmarks are favouring Qwen3 max
171
The best non thinking model
2025-09-26T17:59:46
https://i.redd.it/5hyvzvs8tjrf1.png
Brave-Hold-9389
i.redd.it
1970-01-01T00:00:00
0
{}
1nr8acu
false
null
t3_1nr8acu
/r/LocalLLaMA/comments/1nr8acu/the_benchmarks_are_favouring_qwen3_max/
false
false
https://a.thumbs.redditm…96lEgS1Y6SX0.jpg
171
{'enabled': True, 'images': [{'id': 'JPxmVF0AOOdUz7hTqiYS_IH8MyM5MSc-EnHvs1u9fZQ', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=108&crop=smart&auto=webp&s=8aeb61c6242cee7991dd8ad2b3f632c83267b188', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=216&crop=smart&auto=webp&s=4c452fb25f2cb8d0276ab0f855896caccc76b4b0', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=320&crop=smart&auto=webp&s=22b850c6957a75434916433198229de7b371a1f5', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=640&crop=smart&auto=webp&s=52ee02c3689ab52b47faa094175ba42f13984273', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=960&crop=smart&auto=webp&s=560d9df53f83643eaa320509020bb8f26b01ed62', 'width': 960}, {'height': 628, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?width=1080&crop=smart&auto=webp&s=fb0ac844c750c0227ad28e3063518bf95703a4b1', 'width': 1080}], 'source': {'height': 2328, 'url': 'https://preview.redd.it/5hyvzvs8tjrf1.png?auto=webp&s=dcc81862576523d5ca90c9bdf7db55e7676ca022', 'width': 4000}, 'variants': {}}]}
I built Solveig, it turns any LLM into an agentic assistant in your terminal that can safely use your computer
6
## [Demo GIF](https://github.com/FSilveiraa/solveig/raw/main/docs/demo.gif) **[Solveig](https://github.com/FranciscoSilveira/solveig) is an agentic runtime that runs as an assistant in your terminal.** **That buzzword salad means it's not a model nor is it an agent, it's a tool that enables safe, agentic behavior from any model or provider on your computer. It provides the infrastructure for any LLM to safely interact with you and your system to help you solve real problems** --- ## Quick Start ### Installation # Core installation (OpenAI + local models) pip install solveig # With support for Claude and Gemini APIs pip install solveig[all] ### Running # Run with a local model solveig -u "http://localhost:5001/v1" "Create a demo BlackSheep webapp" # Run from a remote API like OpenRouter solveig -u "https://openrouter.ai/api/v1" -k "<API_KEY>" -m "moonshotai/kimi-k2:free" See [Usage Guide](https://github.com/FSilveiraa/solveig/blob/main/docs/usage.md) for more. --- ## Features 🤖 **AI Terminal Assistant** - Automate file management, code analysis, project setup, and system tasks using natural language in your terminal. 🛡️ **Safe by Design** - Granular consent controls with pattern-based permissions and file operations prioritized over shell commands. Includes a wide test suite (currently 140 unit+integration+e2e tests with 88% coverage) 🔌 **Plugin Architecture** - Extend capabilities through drop-in Python plugins. Add SQL queries, web scraping, or custom workflows with 100 lines of Python. 📋 **Visual Task Management** - Clear progress tracking with task breakdowns, file previews, and rich metadata display for informed user decisions. 🌐 **Provider Independence** - Free and open-source, works with OpenAI, Claude, Gemini, local models, or any OpenAI-compatible API. **tl;dr: it tries to be similar to [Claude Code](https://claude.com/product/claude-code) or [Aider](https://aider.chat/) while including explicit guardrails, a consent model grounded on a clear interface, deep configuration, an easy plugin system, and able to integrate any model, backend or API.** See the [Features](https://github.com/FSilveiraa/solveig/blob/main/docs/about.md#features-and-principles) for more. --- ## Typical tasks - "Find and list all the duplicate files anywhere inside my ~/Documents/" - "Check my essay Final.docx for spelling, syntax or factual errors while maintaining the tone" - "Refactor my test_database.ts suite to be more concise" - "Try and find out why my computer is slow" - "Create a dockerized BlackSheep webapp with a test suite, then build the image and run it locally" - "Review the documentation for my project and confirm the config matches the defaults" --- ### So it's yet another LLM-in-my-terminal? Yes, and there's a detailed [Market Comparison](https://github.com/FSilveiraa/solveig/blob/main/docs/about.md#market-comparison) to similar tools in the docs. The summary is that I think Solveig has a unique feature set that fills a genuine gap. It's a useful tool built on clear information display, user consent and extensibility. It's not an IDE extension nor does it require a GUI, and it both tries to do small unique things that no competitor really has, and to excel at features they all share. At the same time, Solveig's competitors are much more mature projects with real user testing and you should absolutely try them. 
A lot of my features were anywhere from influenced by to functionally copied from other existing tools - at the end of the day, the goal of tech, especially open-source software, is to make people's lives easier. ### Upcoming I have a [Roadmap](https://github.com/FSilveiraa/solveig/blob/main/docs/about.md#roadmap) available, feel free to suggest new features or improvements. A cool aspect of this is that, with some focus on dev features like code linting and diff view, I can use Solveig to improve Solveig itself. I appreciate any feedback or comment, even if it's just confusion - if you can't see how Solveig could help you, that's an issue with me communicating value that I need to fix. Leaving a ⭐ on the [repository](https://github.com/FSilveiraa/solveig) is also very much appreciated.
2025-09-26T17:37:13
https://www.reddit.com/r/LocalLLaMA/comments/1nr7pqk/i_built_solveig_it_turns_any_llm_into_an_agentic/
BarrenSuricata
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr7pqk
false
null
t3_1nr7pqk
/r/LocalLLaMA/comments/1nr7pqk/i_built_solveig_it_turns_any_llm_into_an_agentic/
false
false
self
6
null
Wes Higbee - RAG enabled FIM in Neovim - he is cooking hard (all local).
0
I cannot believe this only has 1k views.\* If any of you plans on using local LLMs for coding (not vibe coding), this will be the way. Wes has created a GPT OSS 20b + Qwen 0.6 embedder+reranker fueled monster of a coding engine. Another vid here. [https://www.youtube.com/watch?v=P4tQrOQjdU0](https://www.youtube.com/watch?v=P4tQrOQjdU0) This might get me into learning how to actually code. [https://github.com/g0t4/ask-openai.nvim](https://github.com/g0t4/ask-openai.nvim) *\* I kind of know, he's flying through all of this way too fast.* *No, I'm not Wes, this isn't self promotion, this is sharing cool, local llm stuff.*
2025-09-26T17:34:30
https://www.youtube.com/watch?v=xTvZi58TYvs
igorwarzocha
youtube.com
1970-01-01T00:00:00
0
{}
1nr7n9g
false
{'oembed': {'author_name': 'Wes Higbee', 'author_url': 'https://www.youtube.com/@g0t4', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/xTvZi58TYvs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I Wasn&#39;t Expecting RAG to Be This Useful! (RAG+FIM in Neovim)"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/xTvZi58TYvs/hqdefault.jpg', 'thumbnail_width': 480, 'title': "I Wasn't Expecting RAG to Be This Useful! (RAG+FIM in Neovim)", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'}
t3_1nr7n9g
/r/LocalLLaMA/comments/1nr7n9g/wes_higbee_rag_enabled_fim_in_neovim_he_is/
false
false
https://external-preview…ec9136c7173b0dd0
0
{'enabled': False, 'images': [{'id': 'o5CozdrCMJUXVp-sezxzPLAqajNVOGG0WxfvIFCIVo0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/o5CozdrCMJUXVp-sezxzPLAqajNVOGG0WxfvIFCIVo0.jpeg?width=108&crop=smart&auto=webp&s=ca9ac0cc3bbf131980fb65a2e80f8153d20bf88e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/o5CozdrCMJUXVp-sezxzPLAqajNVOGG0WxfvIFCIVo0.jpeg?width=216&crop=smart&auto=webp&s=2e1675e688dc26cdd8709242aefa3a7b57ff4606', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/o5CozdrCMJUXVp-sezxzPLAqajNVOGG0WxfvIFCIVo0.jpeg?width=320&crop=smart&auto=webp&s=942d0d46c6f19fb72360c7733edad7c09ee4715a', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/o5CozdrCMJUXVp-sezxzPLAqajNVOGG0WxfvIFCIVo0.jpeg?auto=webp&s=76aab37c3e8b5346a2dfb5ef0e6d669d4c327fba', 'width': 480}, 'variants': {}}]}
Google's Android Studio with local LLM - what am I missing here?
4
I downloaded the latest drop of Android Studio which allows connection to a local LLM, in this case Qwen Coder 30B running via xlm\_lm.server on local port 8080. The model reports it's Claude?
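One way to rule out a mixed-up backend, assuming the server exposes the standard OpenAI-compatible `/v1/models` route, is to ask it directly what it is serving. Note that local models frequently misreport their identity in chat unless the system prompt says otherwise, so the self-report alone doesn't prove the wrong model is loaded:

```python
# Ask the local server what it is actually serving (assumes an OpenAI-compatible /v1/models route).
import requests

resp = requests.get("http://localhost:8080/v1/models", timeout=10)
for model in resp.json().get("data", []):
    print(model.get("id"))
```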
2025-09-26T17:14:48
https://i.redd.it/xgyjzy87hjrf1.png
ChevChance
i.redd.it
1970-01-01T00:00:00
0
{}
1nr74v9
false
null
t3_1nr74v9
/r/LocalLLaMA/comments/1nr74v9/googles_android_studio_with_local_llm_what_am_i/
false
false
https://b.thumbs.redditm…9Q8jdd9F4O_Q.jpg
4
{'enabled': True, 'images': [{'id': 'GD-Hj2ZdLZE1KEfu7tBPmRbtExu1ywHEfrtpeg91kdc', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=108&crop=smart&auto=webp&s=cd58513d9c9c30a08794f5cdfb2acac928b6a384', 'width': 108}, {'height': 258, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=216&crop=smart&auto=webp&s=3dc4817efb0f048351a613b19bc1f0d397438a78', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=320&crop=smart&auto=webp&s=36f3ea74196f4228600dcf8f9876c4fbd3d4796c', 'width': 320}, {'height': 767, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=640&crop=smart&auto=webp&s=0cf7c7d47376657741d2919945a98aeefe0b0bc3', 'width': 640}, {'height': 1150, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=960&crop=smart&auto=webp&s=f3a6a7782bc19630ac59da53f4d1363b94014e6c', 'width': 960}, {'height': 1294, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?width=1080&crop=smart&auto=webp&s=32a176ec1a7d17bed3446f33d51fd934fa3bf0d8', 'width': 1080}], 'source': {'height': 1664, 'url': 'https://preview.redd.it/xgyjzy87hjrf1.png?auto=webp&s=649b0698e5ba9c13290148473f444be2dffac220', 'width': 1388}, 'variants': {}}]}
Noob here pls help, what's the ballpark cost for fine-tuning and running something like Qwen3-235B-A22B-VL on Runpod or a similar provider?
3
I'm not really interested in smaller models (although I will use them to learn the workflow), except maybe Qwen3-80B-A3B-next, but I haven't tested that one yet, so it's hard to say. Any info is appreciated, thanks!
2025-09-26T17:01:58
https://www.reddit.com/r/LocalLLaMA/comments/1nr6snq/noob_here_pls_help_whats_the_ballpark_cost_for/
Narwhal_Other
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr6snq
false
null
t3_1nr6snq
/r/LocalLLaMA/comments/1nr6snq/noob_here_pls_help_whats_the_ballpark_cost_for/
false
false
self
3
null
Localllama:Found a free exam prep site with notes, flashcards & AI doubt solver
1
Stumbled across a site called Examsprint AI examsprint-ai.pages.dev while looking for study material. Thought I’d share in case it helps someone. Some features I noticed: Topper-style notes for JEE/NEET + school subjects (classes 9–12). NCERT solutions (they say downloadable PDFs are included). Flashcards & quizzes for quick revision. Exam blueprints (like weightage, marks distribution). A built-in AI doubt solver/chatbot for quick questions. Works on mobile, tablet, and desktop. Completely free to access (at least for now). Not sure about accuracy/updates of everything, so worth double-checking with textbooks or other resources. But might be useful as a supplement.
2025-09-26T16:52:07
https://www.reddit.com/r/LocalLLaMA/comments/1nr6jbl/localllamafound_a_free_exam_prep_site_with_notes/
Guilty-River-4843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr6jbl
false
null
t3_1nr6jbl
/r/LocalLLaMA/comments/1nr6jbl/localllamafound_a_free_exam_prep_site_with_notes/
false
false
self
1
null
VibeVoice-ComfyUI 1.5.0: Speed Control and LoRA Support
74
Hi everyone! 👋 First of all, thank you again for the amazing support, this project has now reached ⭐ **880 stars on GitHub**! Over the past weeks, VibeVoice-ComfyUI has become more stable, gained powerful new features, and grown thanks to your feedback and contributions. # ✨ Features # Core Functionality * 🎤 **Single Speaker TTS**: Generate natural speech with optional voice cloning * 👥 **Multi-Speaker Conversations**: Support for up to 4 distinct speakers * 🎯 **Voice Cloning**: Clone voices from audio samples * 🎨 **LoRA Support**: Fine-tune voices with custom LoRA adapters (v1.4.0+) * 🎚️ **Voice Speed Control**: Adjust speech rate by modifying reference voice speed (v1.5.0+) * 📝 **Text File Loading**: Load scripts from text files * 📚 **Automatic Text Chunking**: Seamlessly handles long texts with configurable chunk size * ⏸️ **Custom Pause Tags**: Insert silences with `[pause]` and `[pause:ms]` tags (wrapper feature) * 🔄 **Node Chaining**: Connect multiple VibeVoice nodes for complex workflows * ⏹️ **Interruption Support**: Cancel operations before or between generations # Model Options * 🚀 **Three Model Variants**: * VibeVoice 1.5B (faster, lower memory) * VibeVoice-Large (best quality, \~17GB VRAM) * VibeVoice-Large-Quant-4Bit (balanced, \~7GB VRAM) # Performance & Optimization * ⚡ **Attention Mechanisms**: Choose between auto, eager, sdpa, flash\_attention\_2 or sage * 🎛️ **Diffusion Steps**: Adjustable quality vs speed trade-off (default: 20) * 💾 **Memory Management**: Toggle automatic VRAM cleanup after generation * 🧹 **Free Memory Node**: Manual memory control for complex workflows * 🍎 **Apple Silicon Support**: Native GPU acceleration on M1/M2/M3 Macs via MPS * 🔢 **4-Bit Quantization**: Reduced memory usage with minimal quality loss # Compatibility & Installation * 📦 **Self-Contained**: Embedded VibeVoice code, no external dependencies * 🔄 **Universal Compatibility**: Adaptive support for transformers v4.51.3+ * 🖥️ **Cross-Platform**: Works on Windows, Linux, and macOS * 🎮 **Multi-Backend**: Supports CUDA, CPU, and MPS (Apple Silicon) \--------------------------------------------------------------------------------------------- # 🔥 What’s New in v1.5.0 # 🎨 LoRA Support Thanks to the contribution of github user **jpgallegoar**, I have made a new node to load LoRA adapters for voice customization. The node generates an output that can now be linked directly to both **Single Speaker** and **Multi Speaker** nodes, allowing even more flexibility when fine-tuning cloned voices. # 🎚️ Speed Control While it’s not possible to force a cloned voice to speak at an exact target speed, a new system has been implemented to slightly alter the input audio speed. This helps the cloning process produce speech closer to the desired pace. 👉 Best results come with **reference samples longer than 20 seconds**. It’s not 100% reliable, but in many cases the results are surprisingly good! 🔗 GitHub Repo: [https://github.com/Enemyx-net/VibeVoice-ComfyUI](https://github.com/Enemyx-net/VibeVoice-ComfyUI) 💡 As always, feedback and contributions are welcome! They’re what keep this project evolving. Thanks for being part of the journey! 🙏 Fabio
2025-09-26T16:47:36
https://i.redd.it/96ikl9gbgjrf1.png
Fabix84
i.redd.it
1970-01-01T00:00:00
0
{}
1nr6f75
false
null
t3_1nr6f75
/r/LocalLLaMA/comments/1nr6f75/vibevoicecomfyui_150_speed_control_and_lora/
false
false
default
74
{'enabled': True, 'images': [{'id': '96ikl9gbgjrf1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=108&crop=smart&auto=webp&s=1faa5e026460bb00af6851b76a29d949d84a0f3a', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=216&crop=smart&auto=webp&s=09fe60482da562d2ab5fd2448ef46667d0d2bac9', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=320&crop=smart&auto=webp&s=a9d15ffd1edfdf13b7d614071b9712a89ca124a2', 'width': 320}, {'height': 304, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=640&crop=smart&auto=webp&s=4b4415d469ed873f34d444c40714c1fa3bc24215', 'width': 640}, {'height': 456, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=960&crop=smart&auto=webp&s=152de6ab7646ff86840b4e6f4dc470d3d0abf2b4', 'width': 960}, {'height': 513, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?width=1080&crop=smart&auto=webp&s=8bf5c1493d883b93f86f24fc5c891163cb685b8c', 'width': 1080}], 'source': {'height': 740, 'url': 'https://preview.redd.it/96ikl9gbgjrf1.png?auto=webp&s=9e1316d4dbe526f1dbfcf108f45e34ae964e2a55', 'width': 1555}, 'variants': {}}]}
LLM for card games?
4
I wonder if it would be possible to use an LLM for card games like Uno. Could you use a normal instruct LLM or would you have to train it somehow?
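One low-effort approach, sketched below, is to keep the game engine in ordinary code and only ask an off-the-shelf instruct model to pick among moves you have already validated as legal. This assumes a local OpenAI-compatible server; the endpoint and model name are placeholders, and no fine-tuning is involved:

```python
# Sketch: let an off-the-shelf instruct model pick an Uno move from an explicit list of legal moves.
# Assumes a local OpenAI-compatible server; endpoint and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

hand = ["red 7", "blue 2", "green skip", "wild"]
top_card = "red 4"
legal_moves = ["red 7", "wild", "draw"]  # computed by your game engine, not the model

prompt = (
    f"You are playing Uno. Top of discard pile: {top_card}. Your hand: {', '.join(hand)}.\n"
    f"Legal moves: {', '.join(legal_moves)}.\n"
    "Reply with exactly one legal move and nothing else."
)
reply = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(reply.choices[0].message.content)
```

Whether a small instruct model picks *good* moves is another question, but constraining it to the legal-move list at least keeps it from breaking the rules.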
2025-09-26T16:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1nr6deq/llm_for_card_games/
dreamyrhodes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr6deq
false
null
t3_1nr6deq
/r/LocalLLaMA/comments/1nr6deq/llm_for_card_games/
false
false
self
4
null
Global Memory Layer for LLMs
1
[removed]
2025-09-26T16:39:08
https://www.reddit.com/r/LocalLLaMA/comments/1nr67dm/global_memory_layer_for_llms/
chigur86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr67dm
false
null
t3_1nr67dm
/r/LocalLLaMA/comments/1nr67dm/global_memory_layer_for_llms/
false
false
self
1
null
Tested Qwen 3-Omni as a code copilot with eyes (local H100 run)
59
Pushing Qwen 3-Omni beyond chat and turned it into a screen-aware code copilot. Super promising. Overview: * Shared my screen solving a LeetCode problem (it recognized the task + suggested improvements) * Ran on an H100 with FP8 Dynamic Quant * Wired up with [https://github.com/gabber-dev/gabber](https://github.com/gabber-dev/gabber) Performance: * Logs show throughput was solid. Bottleneck is reasoning depth, not the pipeline. * Latency is mostly from “thinking tokens.” I could disable those for lower latency, but wanted to test with them on to see if the extra reasoning was worth it. TL;DR Qwen continues to crush it. The stuff you can do with the latest (3) model is impressive.
2025-09-26T16:37:38
https://v.redd.it/oeuj9vzzcjrf1
Weary-Wing-6806
v.redd.it
1970-01-01T00:00:00
0
{}
1nr661w
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/oeuj9vzzcjrf1/DASHPlaylist.mpd?a=1761496674%2CMmYwYmZlYzQ3YTE1Njg1OTQ2YTUzYjZlM2Y5OGZmZGI4ODYwNzE0YjBiOTgwZThkZjk0MjNmYzkxZGFlNjliNQ%3D%3D&v=1&f=sd', 'duration': 106, 'fallback_url': 'https://v.redd.it/oeuj9vzzcjrf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/oeuj9vzzcjrf1/HLSPlaylist.m3u8?a=1761496674%2CM2YxZjA4YzUyYmU2NjNjNWJiOTFmZGY1MmQ3MDkyMTBiNzg1ZWM3ZDVmMzBhMzBhZTA1MDM4MTJmMTY5YTU4MQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/oeuj9vzzcjrf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1nr661w
/r/LocalLLaMA/comments/1nr661w/tested_qwen_3omni_as_a_code_copilot_with_eyes/
false
false
https://external-preview…e876a6e15b1d2dbc
59
{'enabled': False, 'images': [{'id': 'a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=108&crop=smart&format=pjpg&auto=webp&s=5d3e04af2eb59540bee16601d208d53118dea47c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=216&crop=smart&format=pjpg&auto=webp&s=bfdb8c000ff03d5e812b7b255bbce06d65c804ad', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=320&crop=smart&format=pjpg&auto=webp&s=544a9f1f7e16e1feec285077bd0c6d684c5e399d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=640&crop=smart&format=pjpg&auto=webp&s=fdc1e81731b463d9046f8eaaef0ca959ed60c92a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=960&crop=smart&format=pjpg&auto=webp&s=3ddde80a25e6b948caf9c7661f0115001e270dc9', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9a1e761c31f9d5d1763b86cbf93f8373689ea7e4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/a29ycW55enpjanJmMXeJ6owb9CTTQw8BGFXpKLa7dkqrBHj4Ee6shEeE05ca.png?format=pjpg&auto=webp&s=ae8357f2431b05f69d3b6234b1bb8b54fa5ec080', 'width': 1920}, 'variants': {}}]}
Given the model, context size and number of GPU can you calculate VRAM needed for each GPU?
9
Are 4x 16GB GPUs equivalent to a single 64GB GPU, or is there overhead in the memory requirements? Are there some variables that must be duplicated on every GPU? I was trying to run Qwen Next 80B at 4-bit but it ran out of VRAM on my 2x 5090s with tensor parallel = 2.
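Rough math, with every overhead number below being an assumption rather than a measured value: under tensor parallelism the weights and KV cache are split across GPUs, but some buffers (embeddings/logits, activations, CUDA and NCCL workspace) are duplicated on each GPU, so 4x 16 GB behaves like somewhat less than one 64 GB card:

```python
# Rough per-GPU memory estimate under tensor parallelism (all numbers are assumptions, not measurements).
def per_gpu_gb(weights_gb: float, kv_cache_gb: float, tp: int, duplicated_gb: float = 1.5) -> float:
    """Weights and KV cache split across tp GPUs; some buffers are duplicated on every GPU."""
    return weights_gb / tp + kv_cache_gb / tp + duplicated_gb

# Example: ~45 GB of 4-bit weights plus ~8 GB of KV cache on 2 GPUs
print(f"{per_gpu_gb(45, 8, tp=2):.1f} GB per GPU")  # ~28 GB, already tight on a 32 GB card
```

Engines like vLLM also preallocate KV-cache space up to `gpu_memory_utilization`, which can push a borderline fit over the edge even when the naive arithmetic says it should work.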
2025-09-26T16:20:16
https://www.reddit.com/r/LocalLLaMA/comments/1nr5q4y/given_the_model_context_size_and_number_of_gpu/
arstarsta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr5q4y
false
null
t3_1nr5q4y
/r/LocalLLaMA/comments/1nr5q4y/given_the_model_context_size_and_number_of_gpu/
false
false
self
9
null
Today marks 10 days since IBM uploaded Granite 4 models to HF
19
Anyone have an idea how long we might be waiting for IBM to make them public...? ;)
2025-09-26T16:13:00
https://www.reddit.com/r/LocalLLaMA/comments/1nr5j9j/today_marks_10_days_since_ibm_uploaded_granite_4/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr5j9j
false
null
t3_1nr5j9j
/r/LocalLLaMA/comments/1nr5j9j/today_marks_10_days_since_ibm_uploaded_granite_4/
false
false
self
19
null
60% t/s improvement for 30b a3b from upgrading ROCm 6.3 to 7.0 on 7900 XTX
70
I got around to upgrading ROCm from my February 6.3.3 version to the latest 7.0.1 today. The performance improvements have been massive on my RX 7900 XTX. This will be highly anecdotal, and I'm sorry about that, but I don't have time to do a better job. I can only give you a very rudimentary look based on top-level numbers. Hopefully someone will make a proper benchmark with more conclusive findings. All numbers are for unsloth/qwen3-coder-30b-a3b-instruct-IQ4\_XS in LMStudio 0.3.25 running on Ubuntu 24.04:

||llama.cpp ROCm|llama.cpp Vulkan|
|:-|:-|:-|
|ROCm 6.3.3|78 t/s|75 t/s|
|ROCm 7.0.1|115 t/s|125 t/s|

Of note, previously the ROCm runtime had a slight advantage, but now the Vulkan advantage is significant. Prompt processing is about 30% faster with Vulkan compared to ROCm (both rocm 7) now as well. I was running on a week older llama.cpp runtime version with ROCm 6.3.3, so that also may be cause for some performance difference, but certainly it couldn't be enough to explain the bulk of the difference. This was a huge upgrade! I think we need to redo the math on which used GPU is the best to recommend with this huge change. What are 3090 users getting on this model with current versions?
2025-09-26T16:10:46
https://www.reddit.com/r/LocalLLaMA/comments/1nr5h1i/60_ts_improvement_for_30b_a3b_from_upgrading_rocm/
1ncehost
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr5h1i
false
null
t3_1nr5h1i
/r/LocalLLaMA/comments/1nr5h1i/60_ts_improvement_for_30b_a3b_from_upgrading_rocm/
false
false
self
70
null
I am new, can anyone tell me any Image to video model (quantized) which is compatible with 2GB vram? I know its lame but my resources are limited
4
Very fresh to all this
2025-09-26T15:50:13
https://www.reddit.com/r/LocalLLaMA/comments/1nr4xed/i_am_new_can_anyone_tell_me_any_image_to_video/
Obvious_Ad8471
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr4xed
false
null
t3_1nr4xed
/r/LocalLLaMA/comments/1nr4xed/i_am_new_can_anyone_tell_me_any_image_to_video/
false
false
self
4
null
Are there any good extensions for VS2022 that would allow me to use my ollama container hosted on a different machine?
2
I'm just getting started with this and am a bit lost. I'd really like to be able to optimize sections of code from the IDE and look for potential memory issues but I'm finding it to be very cumbersome doing it from the OpenWeb GUI or Chatbox since it can't access network resources.
2025-09-26T15:48:11
https://www.reddit.com/r/LocalLLaMA/comments/1nr4vhh/are_there_any_good_extensions_for_vs2022_that/
Firestarter321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr4vhh
false
null
t3_1nr4vhh
/r/LocalLLaMA/comments/1nr4vhh/are_there_any_good_extensions_for_vs2022_that/
false
false
self
2
null
Gpt-oss Reinforcement Learning - Fastest inference now in Unsloth! (<15GB VRAM)
379
Hey guys we've got lots of updates for Reinforcement Learning (RL)! We’re excited to introduce gpt-oss, Vision, and even better RL in Unsloth. Our new gpt-oss RL inference also achieves the fastest token/s vs. any other implementation. Our GitHub: https://github.com/unslothai/unsloth Inference is crucial in RL training. Since gpt-oss RL isn’t vLLM compatible, we rewrote Transformers inference for 3× faster speeds (~21 tok/s). For BF16, Unsloth also delivers the fastest inference (~30 tok/s), especially relative to VRAM use vs. any other implementation. We made a free & completely new custom notebook showing how RL can automatically create faster matrix multiplication kernels: gpt-oss-20b GSPO Colab. We also show you how to counteract reward-hacking which is one of RL's biggest challenges. Unsloth also uses the least VRAM (50% less) and supports the most context length (8x more). gpt-oss-20b RL fits in 15GB VRAM. As usual, there is no accuracy degradation. We released Vision RL, allowing you to train Gemma 3, Qwen2.5-VL with GRPO free in our Colab notebooks. We also previously introduced more memory efficient RL with Standby and extra kernels and algorithms. Unsloth RL now uses 90% less VRAM, and enables 16× longer context lengths than any setup. ⚠️ Reminder to NOT use Flash Attention 3 for gpt-oss as it'll make your training loss wrong. We released DeepSeek-V3.1-Terminus Dynamic GGUFs. We showcased how 3-bit V3.1 scores 75.6% on Aider Polyglot, beating Claude-4-Opus (thinking). For our new gpt-oss RL release, would recommend you guys to read our blog/guide which details our entire findings and bugs etc.: https://docs.unsloth.ai/new/gpt-oss-reinforcement-learning Thanks guys for reading and hope you all have a lovely Friday and weekend! 🦥
2025-09-26T15:47:52
https://i.redd.it/pq6ej7up5jrf1.png
danielhanchen
i.redd.it
1970-01-01T00:00:00
0
{}
1nr4v7e
false
null
t3_1nr4v7e
/r/LocalLLaMA/comments/1nr4v7e/gptoss_reinforcement_learning_fastest_inference/
false
false
default
379
{'enabled': True, 'images': [{'id': 'pq6ej7up5jrf1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=108&crop=smart&auto=webp&s=c76f4502f17ee728f846d57a37e3d12d1a62f09e', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=216&crop=smart&auto=webp&s=9b702dd8358a351d1d2f8c306c198b81fa849842', 'width': 216}, {'height': 357, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=320&crop=smart&auto=webp&s=1dfc9cb165df1fe706cec8cc5c52cc8dcc7b1463', 'width': 320}, {'height': 715, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=640&crop=smart&auto=webp&s=121b14e94d54780f5a2a7ae8625d9bd2f60d60f6', 'width': 640}, {'height': 1072, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=960&crop=smart&auto=webp&s=b62504010fd99e92a7d990be567e48bbc3d282dd', 'width': 960}, {'height': 1206, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?width=1080&crop=smart&auto=webp&s=e8c3b991e5c7d65d58847cf24dc48f00e74aa80a', 'width': 1080}], 'source': {'height': 2860, 'url': 'https://preview.redd.it/pq6ej7up5jrf1.png?auto=webp&s=8c480fbaba35c3b49e0264e05ad21f3bcd9b53ee', 'width': 2560}, 'variants': {}}]}
Advice needed:New free AI-powered study platform for JEE/NEET & Board exams — Examsprint AI
1
Hey everyone 👋, I wanted to share a new tool I discovered (or built) — Examsprint AI — which aims to be a one-stop AI-powered study companion for students preparing for board exams, JEE, NEET, and general NCERT curriculum work. Here’s what Examsprint AI offers: Topper’s Notes & Chapter Summaries for Class 9–12 Direct NCERT links + solutions for all chapters Formula sheets (Maths, Science) Flashcards & quizzes to reinforce memory Blueprints & exam planning (for Boards, NEET, JEE) Built-in AI chat assistant for instant doubt solving No subscriptions, no watermarks, totally free The site is fully responsive, so you can use it on mobile too. It was built by a young developer (13 years old!) named Aadarsh Pandey.
2025-09-26T15:31:11
https://www.reddit.com/r/LocalLLaMA/comments/1nr4fwj/advice_needednew_free_aipowered_study_platform/
Wrong_Newspaper2207
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr4fwj
false
null
t3_1nr4fwj
/r/LocalLLaMA/comments/1nr4fwj/advice_needednew_free_aipowered_study_platform/
false
false
self
1
null
Built a free AI-powered study tool for JEE/NEET – feedback welcome
0
I’ve been working on something called Examsprint AI, a free site designed to make JEE/NEET prep more structured. Some of the features so far: 🤖 AI-generated flashcards & topic-wise quizzes for revision 📚 Chapter- and subject-wise breakdown for Classes 11 & 12 📖 Direct NCERT references built in 🎯 Clean, collapsible layout that’s mobile-friendly 🚀 Roadmap: performance tracking, AI doubt-solving, timed mocks
2025-09-26T15:12:16
https://www.reddit.com/r/LocalLLaMA/comments/1nr3y1k/built_a_free_aipowered_study_tool_for_jeeneet/
Gold_Mess_1681
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr3y1k
false
null
t3_1nr3y1k
/r/LocalLLaMA/comments/1nr3y1k/built_a_free_aipowered_study_tool_for_jeeneet/
false
false
self
0
null
How am I supposed to know which third party provider can be trusted not to completely lobotomize a model?
731
I know this is mostly open-weights and open-source discussion and all that jazz, but let's be real: unless your name is Achmed Al-Jibani from Qatar or you pi\*ss gold, you're not getting SOTA performance with open-weight models like Kimi K2 or DeepSeek, because you have to quantize them. Your options as an average-wage pleb are:

a) third party providers
b) running it yourself, but quantized to hell
c) spinning up a pod and using a third party provider's GPU (expensive) to run your model

I opted for a) most of the time, and a recent evaluation of the accuracy of the Kimi K2 0905 models provided by third party providers has me doubting this decision.
2025-09-26T15:00:41
https://i.redd.it/kabtcb5twirf1.png
Striking_Wedding_461
i.redd.it
1970-01-01T00:00:00
0
{}
1nr3n2r
false
null
t3_1nr3n2r
/r/LocalLLaMA/comments/1nr3n2r/how_am_i_supposed_to_know_which_third_party/
false
false
default
731
{'enabled': True, 'images': [{'id': 'kabtcb5twirf1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=108&crop=smart&auto=webp&s=1d873e47125507e96c1021ad612840f821612cb3', 'width': 108}, {'height': 183, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=216&crop=smart&auto=webp&s=2053807c58bee7d85d06b0a64997050a56b7eb8d', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=320&crop=smart&auto=webp&s=6e9e9ec4c05b08dcc0a5c65cffabba97a8c61f02', 'width': 320}, {'height': 544, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=640&crop=smart&auto=webp&s=2cfbae1d53a3abc93a95be9789c678d6280c6d58', 'width': 640}, {'height': 816, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=960&crop=smart&auto=webp&s=eac8b24057e6c309b1b606fde67d5ecc6c3ffb2c', 'width': 960}, {'height': 919, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?width=1080&crop=smart&auto=webp&s=d4e7958aec3aba56312e1ac4356f9fd768bd71f0', 'width': 1080}], 'source': {'height': 919, 'url': 'https://preview.redd.it/kabtcb5twirf1.png?auto=webp&s=04f2f7ce9beb31d8b9d863b02593f4ce463c9400', 'width': 1080}, 'variants': {}}]}
Isn't there a TTS model just slightly better than Kokoro?
16
I really like its consistency and speed, but I might sound nitpicky: it seems like it can fail easily on some relatively common words or names of non-English origin like "Los Angeles" or "Huawei". I really wish there was an in-between model, or even something that had just a little bit more parameters than Kokoro. But to be fair, even ChatGPT Voice Mode seems to fail with names like Siobhan even though Kokoro gets it right... Otherwise, I'm fine if it's English only, and I'd prefer something smaller and faster than Zonos. My main use would be making audiobooks.
2025-09-26T14:58:01
https://www.reddit.com/r/LocalLLaMA/comments/1nr3kl3/isnt_there_a_tts_model_just_slightly_better_than/
TarkanV
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr3kl3
false
null
t3_1nr3kl3
/r/LocalLLaMA/comments/1nr3kl3/isnt_there_a_tts_model_just_slightly_better_than/
false
false
self
16
null
AGI challenge: tell me a politically incorrect joke (for scientific purposes)
0
I've been playing around with some models and I'll be damned if I can find a model or prompt that actually cracks anything funny. And thinking models just go around in circles repeating the same thing over and over. They're funny for all the wrong reasons. For example the Qwen3-30B-A3B abliterated or uncensored models keep on converging to "bringing a ladder because prices were on the house" or "sweater with layers of excuses" I'd be interested in knowing any success stories if any.
2025-09-26T14:48:31
https://www.reddit.com/r/LocalLLaMA/comments/1nr3bth/agi_challenge_tell_me_a_politically_incorrect/
hideo_kuze_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr3bth
false
null
t3_1nr3bth
/r/LocalLLaMA/comments/1nr3bth/agi_challenge_tell_me_a_politically_incorrect/
false
false
self
0
null
Why isn't there a thinking qwen3-max?
3
I really like the model, but when the task requires even a modicum of thinking and iterating/reflecting, it fails spectacularly. Is this issue limited to Qwen's web interface, or can their API not think for this version either? Why?
2025-09-26T14:36:28
https://www.reddit.com/r/LocalLLaMA/comments/1nr3101/why_isnt_there_a_thinking_qwen3max/
elephant_ua
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr3101
false
null
t3_1nr3101
/r/LocalLLaMA/comments/1nr3101/why_isnt_there_a_thinking_qwen3max/
false
false
self
3
null
llama-server Is there a way to offload just context to another gpu?
4
I have been messing with the params and I can't find a good way to do it. I have 3x 3090s on here. GPU 2 is used for Stable Diffusion. GPU 1 is running another LLM that uses nkvo so that the memory usage is constant; 12 gigs of VRAM are free. The model I want to run on GPU 0 uses pretty much all of the VRAM. I know I can split tensors, but it is faster when I keep the whole model on 1 GPU. I can do nkvo, but that sends the cache to system memory, and I definitely don't want that. A flag similar to nkvo, but one that sends the context to another GPU instead, is what I am hoping to find. Thanks!
2025-09-26T14:30:58
https://www.reddit.com/r/LocalLLaMA/comments/1nr2w1u/llamaserver_is_there_a_way_to_offload_just/
kylesk42
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr2w1u
false
null
t3_1nr2w1u
/r/LocalLLaMA/comments/1nr2w1u/llamaserver_is_there_a_way_to_offload_just/
false
false
self
4
null
€5,000 AI server for LLM
41
Hello, We are looking for a solution to run LLMs for our developers. The budget is currently €5000. The setup should be as fast as possible, but also be able to process parallel requests. I was thinking, for example, of a dual RTX 3090TI system with the option of expansion (AMD EPYC platform). I have done a lot of research, but it is difficult to find exact builds. What would be your idea?
2025-09-26T13:54:31
https://www.reddit.com/r/LocalLLaMA/comments/1nr1zen/5000_ai_server_for_llm/
Slakish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr1zen
false
null
t3_1nr1zen
/r/LocalLLaMA/comments/1nr1zen/5000_ai_server_for_llm/
false
false
self
41
null
🦧Where MLX? --> Kwaipilot/KwaiCoder-23B-A4B-v1 · Hugging Face
6
2025-09-26T13:40:00
https://huggingface.co/Kwaipilot/KwaiCoder-23B-A4B-v1
JLeonsarmiento
huggingface.co
1970-01-01T00:00:00
0
{}
1nr1myz
false
null
t3_1nr1myz
/r/LocalLLaMA/comments/1nr1myz/where_mlx_kwaipilotkwaicoder23ba4bv1_hugging_face/
false
false
default
6
null
Anyone else run into LiteLLM breaking down under load?
12
I’ve been load testing different LLM gateways for a project where throughput matters. Setup was 1K → 5K RPS with mixed request sizes, tracked using Prometheus/Grafana. * [LiteLLM](https://www.litellm.ai/): stable up to \~300K RPS, but after that I started seeing latency spikes, retries piling up, and 5xx errors. * [Portkey](https://portkey.ai/): handled concurrency a bit better, though I noticed overhead rising at higher loads. * [Bifrost](https://getmax.im/bifr0st): didn’t break in the same way under the same tests. Overhead stayed low in my runs, and it comes with decent metrics/monitoring. Has anyone here benchmarked these (TGI, vLLM gateways, custom reverse proxies, etc.) at higher RPS? Also would love to know if anyone has tried **Bifrost** (found it mentioned on some threads) since it’s relatively new compared to the others; would love to hear your insights.
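For anyone who wants to reproduce this kind of test, a minimal async load generator looks roughly like the sketch below. The gateway URL, payload, request count, and concurrency are placeholders, and a real run would be better served by a dedicated tool like k6 or Locust:

```python
# Minimal async load-test sketch against an OpenAI-compatible gateway (all parameters are placeholders).
import asyncio
import time
import aiohttp

URL = "http://localhost:4000/v1/chat/completions"  # placeholder gateway endpoint
PAYLOAD = {"model": "placeholder-model", "messages": [{"role": "user", "content": "ping"}], "max_tokens": 8}
CONCURRENCY = 200
REQUESTS = 2000

async def one_request(session, latencies, errors):
    start = time.perf_counter()
    try:
        async with session.post(URL, json=PAYLOAD) as resp:
            await resp.read()
            if resp.status >= 500:
                errors.append(resp.status)
    except Exception:
        errors.append("exception")
    latencies.append(time.perf_counter() - start)

async def main():
    latencies, errors = [], []
    sem = asyncio.Semaphore(CONCURRENCY)

    async def limited(session):
        async with sem:
            await one_request(session, latencies, errors)

    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(limited(session) for _ in range(REQUESTS)))

    latencies.sort()
    p50 = latencies[len(latencies) // 2]
    p99 = latencies[int(len(latencies) * 0.99)]
    print(f"p50={p50 * 1000:.0f}ms  p99={p99 * 1000:.0f}ms  errors={len(errors)}")

asyncio.run(main())
```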
2025-09-26T12:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1nr0lxs/anyone_else_run_into_litellm_breaking_down_under/
Fabulous_Ad993
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr0lxs
false
null
t3_1nr0lxs
/r/LocalLLaMA/comments/1nr0lxs/anyone_else_run_into_litellm_breaking_down_under/
false
false
self
12
{'enabled': False, 'images': [{'id': 'AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=108&crop=smart&auto=webp&s=5d53b11f82b3ed8b8df2d0f2aa9481c2103317cb', 'width': 108}, {'height': 215, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=216&crop=smart&auto=webp&s=126006545f5a6cc81be9da4f3d5161af31a8b3b4', 'width': 216}, {'height': 319, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=320&crop=smart&auto=webp&s=de278bd542debcd1606805a3ac764b62fdac9cba', 'width': 320}, {'height': 639, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=640&crop=smart&auto=webp&s=a34eb677a79720a2d649feb13b8cc72815e0c2e3', 'width': 640}, {'height': 959, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=960&crop=smart&auto=webp&s=af0f25c8ddf9dd0c103b2f0efd310ef977cc48e4', 'width': 960}, {'height': 1079, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?width=1080&crop=smart&auto=webp&s=7fce8c73a8480d2ec700df7338e8469797dd3d87', 'width': 1080}], 'source': {'height': 2369, 'url': 'https://external-preview.redd.it/AJrHevVyY7r6KLpGX1U5_uLn5KCLjMRM9Q03t89Af34.png?auto=webp&s=b89b78acd699fd319b42caa52ddd264a94972c9a', 'width': 2370}, 'variants': {}}]}
ROCM vs Vulkan on IGPU
124
While text generation speeds are about the same, Vulkan is now ahead of ROCm for prompt processing by a fair margin on AMD's new iGPUs. Curious, considering it was the other way around before.
2025-09-26T12:52:35
https://www.reddit.com/gallery/1nr0jnz
Eden1506
reddit.com
1970-01-01T00:00:00
0
{}
1nr0jnz
false
null
t3_1nr0jnz
/r/LocalLLaMA/comments/1nr0jnz/rocm_vs_vulkan_on_igpu/
false
false
https://b.thumbs.redditm…hvixwCYutkyo.jpg
124
null
Meta AI just stopped celebrity image generation from today - So sad
0
Meta AI just stopped celebrity image generation from today - So sad
2025-09-26T12:50:03
https://www.reddit.com/r/LocalLLaMA/comments/1nr0hp3/meta_ai_just_stopped_celebrity_image_generation/
jokarSrk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr0hp3
false
null
t3_1nr0hp3
/r/LocalLLaMA/comments/1nr0hp3/meta_ai_just_stopped_celebrity_image_generation/
false
false
self
0
null
Any good small models 4b - 13b for hebrew
0
I hope people in this sub can help me: I'm trying to find good small models (4B–13B) that have shown good results with Hebrew input and output.
2025-09-26T12:37:55
https://www.reddit.com/r/LocalLLaMA/comments/1nr0897/any_good_small_models_4b_13b_for_hebrew/
ResponsibleTruck4717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr0897
false
null
t3_1nr0897
/r/LocalLLaMA/comments/1nr0897/any_good_small_models_4b_13b_for_hebrew/
false
false
self
0
null
I built llamactl - Unified management and routing for llama.cpp, MLX and vLLM models with web dashboard.
19
I got tired of SSH-ing into servers to manually start/stop different model instances, so I built a control layer that sits on top of llama.cpp, MLX, and vLLM. Great for running multiple models at once or switching models on demand. I first posted about this almost two months ago and have added a bunch of useful features since. **Main features:** - **Multiple backend support**: Native integration with llama.cpp, MLX, and vLLM - **On-demand instances**: Automatically start model instances when API requests come in - **OpenAI-compatible API**: Drop-in replacement - route by using instance name as model name - **API key authentication**: Separate keys for management operations vs inference API access - **Web dashboard**: Modern UI for managing instances without CLI - **Docker support**: Run backends in isolated containers - **Smart resource management**: Configurable instance limits, idle timeout, and LRU eviction The API lets you route requests to specific model instances by using the instance name as the model name in standard OpenAI requests, so existing tools work without modification. Instance state persists across server restarts, and failed instances get automatically restarted. Documentation and installation guide: https://llamactl.org/stable/ GitHub: https://github.com/lordmathis/llamactl MIT licensed. Feedback and contributions welcome!
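A minimal usage sketch of the routing behaviour described above, assuming llamactl listens on localhost:8080 with an inference API key configured; the port, key, and instance name are placeholders.

```python
# Sketch: route an OpenAI-compatible request to a specific llamactl instance
# by using the instance name as the model name. Port/key/instance are assumed.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",   # llamactl's OpenAI-compatible endpoint (assumed port/path)
    api_key="sk-inference-key",            # inference key, separate from the management key
)

# The instance name doubles as the model name, so existing OpenAI tooling
# reaches the right backend without modification.
resp = client.chat.completions.create(
    model="qwen3-4b",
    messages=[{"role": "user", "content": "Summarize what llamactl does in one sentence."}],
)
print(resp.choices[0].message.content)
```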
2025-09-26T12:35:37
https://www.reddit.com/r/LocalLLaMA/comments/1nr06en/i_built_llamactl_unified_management_and_routing/
RealLordMathis
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr06en
false
null
t3_1nr06en
/r/LocalLLaMA/comments/1nr06en/i_built_llamactl_unified_management_and_routing/
false
false
self
19
null
InfiniteTalk — open-source sparse-frame video dubbing (lip + head/body sync)
18
Found a fun open-source project: **InfiniteTalk**. It does “sparse-frame” video dubbing—so the **lips, head, posture, and expressions** all track the audio, not just the mouth. It’s built for **infinite-length** runs and claims fewer hand/body glitches with tighter lip sync than MultiTalk. Also works as **image + audio → talking video**. Repo: [https://github.com/MeiGen-AI/InfiniteTalk](https://github.com/MeiGen-AI/InfiniteTalk)
2025-09-26T12:30:00
https://www.reddit.com/r/LocalLLaMA/comments/1nr020h/infinitetalk_opensource_sparseframe_video_dubbing/
freesysck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nr020h
false
null
t3_1nr020h
/r/LocalLLaMA/comments/1nr020h/infinitetalk_opensource_sparseframe_video_dubbing/
false
false
self
18
{'enabled': False, 'images': [{'id': 'uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=108&crop=smart&auto=webp&s=4ce4fe9fae0d7806dcb84b8b7ae3e18677297926', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=216&crop=smart&auto=webp&s=41faefcdb4312ac4a253d20cc3b0e3f9668ccde1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=320&crop=smart&auto=webp&s=f81e924bb604ebc36b0fa0046396307fab06ec63', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=640&crop=smart&auto=webp&s=e4d3a0d85348047fae812b44bd3c634ce2326be3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=960&crop=smart&auto=webp&s=f2e8751821c2923dc85a4d2cb77abbf12e0deb8c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?width=1080&crop=smart&auto=webp&s=64a4363ab64325de9bbe4254677db03d345eb3ba', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uBybcHGXgWpT4vBn5BawL7oTyXlsmIi2bSFxRiuOCGM.png?auto=webp&s=376fef830e3893263f71b8dc8b128e6d34a8a51c', 'width': 1200}, 'variants': {}}]}
OrKa quickstart: run a traceable multi agent workflow in under 2 minutes
10
I recorded a fast walkthrough showing how to spin up OrKA-reasoning and execute a workflow with full traceability. (No OpenAI key needed if you use local models.) **What OrKa is** A YAML defined cognition graph. You wire agents, routers, memory and services, then watch the full execution trace. **How to run it like in the video** Pip: `pip install -U orka-reasoning`, then `orka-start`, `orka memory watch`, and `orka run path/to/workflow.yaml "<your input as string>"`. What you will see in the result * Live trace with timestamps for every step * Forks that execute agents in parallel and a join that merges results * Per agent metrics: latency, tokens, model and provider * Memory reads and writes visible in the timeline * Agreement score that shows the level of consensus * Final synthesized answer plus each agent’s raw output, grouped and inspectable Why this matters You can replay the entire run, audit decisions, and compare branches. It turns multi agent reasoning into something you can debug, not just hope for. If you try it, tell me which model stack you used and how long your first run took. I will share optimized starter graphs in the comments.
2025-09-26T11:57:03
https://v.redd.it/wi8c8ftvzhrf1
marcosomma-OrKA
/r/LocalLLaMA/comments/1nqzdit/orka_quickstart_run_a_traceable_multi_agent/
1970-01-01T00:00:00
0
{}
1nqzdit
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wi8c8ftvzhrf1/DASHPlaylist.mpd?a=1761609430%2CYzAxMDg5MGQwODViZDY1MWFmMTg0ODE4M2JjOTVhMjlkNGI4MzE1ZjBmZWJiZmY0NDRmODRiZjBhZGY3OGY5Nw%3D%3D&v=1&f=sd', 'duration': 188, 'fallback_url': 'https://v.redd.it/wi8c8ftvzhrf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/wi8c8ftvzhrf1/HLSPlaylist.m3u8?a=1761609430%2CYmQ1ZmIwZjNkNWQwMWNkYmM0MDRjOTdlYTA0OTY4N2NjNjA2NDBkN2I0MjcwZWEyMWMwMTQzNzNmZGMyMTBiOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/wi8c8ftvzhrf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}}
t3_1nqzdit
/r/LocalLLaMA/comments/1nqzdit/orka_quickstart_run_a_traceable_multi_agent/
false
false
https://external-preview…4ffb63f5fb11707f
10
{'enabled': False, 'images': [{'id': 'Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=d47a818f39a13367f4ee8f0956af595c42635282', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=216&crop=smart&format=pjpg&auto=webp&s=bf0f11e300b63c82e0164b899b91673536603626', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=320&crop=smart&format=pjpg&auto=webp&s=86bf33bb784faeba485a1c7be4cf84b466290d2f', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=640&crop=smart&format=pjpg&auto=webp&s=2510c4b7ef133ca017253bf311dab8774d4e4384', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=960&crop=smart&format=pjpg&auto=webp&s=a5a2e5644d10fda5a6c7ca2faa500c0bfb6741e6', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a5348db176b4eec891ba55fa13f66bcfc65dc449', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Y2k4MWdndHZ6aHJmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?format=pjpg&auto=webp&s=f6898b8ba8581e1283ce006bc4e70732fa7bc82e', 'width': 1920}, 'variants': {}}]}
embedding with llama.cpp server
6
I have a working app that uses Ollama and snowflake-arctic-embed2 for embedding and RAG with ChromaDB. I want to switch to llama.cpp, but I am not able to set up the embedding server correctly. The ChromaDB query function works well with Ollama but not at all with llama.cpp. I think it has something to do with pooling or normalization. I tried a lot but was not able to get it running. I would appreciate anything that points me in the right direction! Thanks a lot! My last try was: `llama-server` `--model /models/snowflake-arctic-embed-l-v2.0-q5_k_m.gguf` `--embeddings` `--ubatch-size 2048` `--batch-size 2028` `--ctx-size 8192` `--pooling mean` `--rope-scaling yarn` `--rope-freq-scale 0.75` `-ngl 99` `--parallel 4`
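One thing worth ruling out is a normalization mismatch between ingestion and query. A minimal sketch, assuming llama-server is on localhost:8080 and exposes its OpenAI-style /v1/embeddings route (endpoint path, port, and collection name are assumptions): embed via the server, L2-normalize on the client, and use cosine space in ChromaDB so both sides are treated identically.

```python
# Sketch: query llama-server's embeddings endpoint, normalize client-side,
# then index/query with ChromaDB. URL, port, and collection name are assumed.
import requests
import numpy as np
import chromadb

def embed(texts: list[str]) -> list[list[float]]:
    r = requests.post(
        "http://localhost:8080/v1/embeddings",
        json={"input": texts, "model": "snowflake-arctic-embed-l-v2.0"},
        timeout=60,
    )
    r.raise_for_status()
    vecs = np.array([d["embedding"] for d in r.json()["data"]], dtype=np.float32)
    # L2-normalize so ingestion and query vectors are treated identically,
    # regardless of the server's pooling/normalization settings.
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs.tolist()

client = chromadb.Client()
col = client.get_or_create_collection("docs", metadata={"hnsw:space": "cosine"})
col.add(ids=["doc1"], documents=["hello world"], embeddings=embed(["hello world"]))
print(col.query(query_embeddings=embed(["greeting"]), n_results=1))
```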
2025-09-26T11:10:14
https://www.reddit.com/r/LocalLLaMA/comments/1nqyi1x/embedding_with_llamacpp_server/
DobobR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqyi1x
false
null
t3_1nqyi1x
/r/LocalLLaMA/comments/1nqyi1x/embedding_with_llamacpp_server/
false
false
self
6
null
Best VLM for data extraction
6
I’ve been experimenting with extracting key fields from scanned documents using Qwen2.5-VL-7B, and it’s been working decently well within my setup (16 GB VRAM). I’d like to explore other options and had a few questions: * Any recommendations for good VLM alternatives that can also fit within a similar VRAM budget? * What’s a good benchmark for comparing VLMs in this document-parsing/OCR use case? * Does anyone have tips on preprocessing scanned images captured by phone/camera (e.g. tilted pages, blur, uneven lighting) to improve OCR or VLM performance? Would love to hear from anyone who has tried benchmarking or optimizing VLMs for document parsing tasks.
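On the preprocessing question, a rough baseline sketch using OpenCV (not a tuned pipeline — the CLAHE, denoising, and threshold parameters below are guesses to tune): it addresses uneven lighting and noise; deskewing tilted pages would need an additional rotation step.

```python
# Rough preprocessing baseline for phone-captured scans before OCR/VLM input.
# Handles uneven lighting and mild noise only; parameters are starting points.
import cv2

def preprocess(path: str, out_path: str = "preprocessed.png") -> None:
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    # CLAHE evens out uneven lighting without blowing out text strokes.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    normalized = clahe.apply(gray)
    # Light denoising helps on noisy phone captures; h=10 is a guess.
    denoised = cv2.fastNlMeansDenoising(normalized, h=10)
    # Adaptive threshold handles shadows that a global threshold would miss.
    binary = cv2.adaptiveThreshold(denoised, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 15)
    cv2.imwrite(out_path, binary)

preprocess("scan.jpg")
```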
2025-09-26T10:41:29
https://www.reddit.com/r/LocalLLaMA/comments/1nqxzug/best_vlm_for_data_extraction/
Ok_Television_9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqxzug
false
null
t3_1nqxzug
/r/LocalLLaMA/comments/1nqxzug/best_vlm_for_data_extraction/
false
false
self
6
null
Used deepseek Built a free AI-powered study tool for JEE/NEET – feedback welcome
1
[removed]
2025-09-26T10:26:13
https://www.reddit.com/r/LocalLLaMA/comments/1nqxqli/used_deepseek_built_a_free_aipowered_study_tool/
Routine_Ad_6986
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqxqli
false
null
t3_1nqxqli
/r/LocalLLaMA/comments/1nqxqli/used_deepseek_built_a_free_aipowered_study_tool/
false
false
self
1
null
Question about Multi-GPU performance in llama.cpp
1
I have a 4060 Ti with 8 GB of VRAM and an RX580 2048sp (with the original RX580 BIOS) also with 8 GB of VRAM. I’ve been using gpt-oss 20b because of the generation speed, but the slow prompt processing speed bothers me a lot in daily use. I’m getting the following processing speeds with 30k tokens: slot update_slots: id 0 | task 0 | SWA checkpoint create, pos_min = 29539, pos_max = 30818, size = 30.015 MiB, total = 1/3 (30.015 MiB) slot release: id 0 | task 0 | stop processing: n_past = 31145, truncated = 0 slot print_timing: id 0 | task 0 | prompt eval time = 116211.78 ms / 30819 tokens ( 3.77 ms per token, 265.20 tokens per second) eval time = 7893.92 ms / 327 tokens ( 24.14 ms per token, 41.42 tokens per second) total time = 124105.70 ms / 31146 tokens I get better prompt processing speeds using the CPU, around 500–700 tokens/s. However, the generation speed is cut in half, around 20–23 tokens/s. My command: /root/llama.cpp/build-vulkan/bin/llama-server -ot "blk.(0|1|2|3|4|5|6|7|8|9|10|11).ffn.*exps=CUDA0" \ -ot exps=Vulkan1 \ --port 8080 --alias 'openai/gpt-oss-20b' --host 0.0.0.0 \ --ctx-size 100000 --model ./models/gpt-oss-20b.gguf \ --no-warmup --jinja --no-context-shift \ --batch-size 1024 -ub 1024 I’ve tried increasing and decreasing the batch size and ubatch size, but with these settings I got the highest prompt processing speed. From what I saw in the log, most of the context VRAM is stored on the RX580: llama_context: n_ctx_per_seq (100000) < n_ctx_train (131072) -- the full capacity of the model will not be utilized llama_context: Vulkan_Host output buffer size = 0.77 MiB llama_kv_cache_iswa: creating non-SWA KV cache, size = 100096 cells llama_kv_cache: Vulkan1 KV buffer size = 1173.00 MiB llama_kv_cache: CUDA0 KV buffer size = 1173.00 MiB llama_kv_cache: size = 2346.00 MiB (100096 cells, 12 layers, 1/1 seqs), K (f16): 1173.00 MiB, V (f16): 1173.00 MiB llama_kv_cache_iswa: creating SWA KV cache, size = 1280 cells llama_kv_cache: Vulkan1 KV buffer size = 12.50 MiB llama_kv_cache: CUDA0 KV buffer size = 17.50 MiB llama_kv_cache: size = 30.00 MiB ( 1280 cells, 12 layers, 1/1 seqs), K (f16): 15.00 MiB, V (f16): 15.00 MiB llama_context: Flash Attention was auto, set to enabled llama_context: CUDA0 compute buffer size = 648.54 MiB llama_context: Vulkan1 compute buffer size = 796.75 MiB llama_context: CUDA_Host compute buffer size = 407.29 MiB Is there a way to keep the KV-Cache entirely in the 4060 Ti VRAM? I’ve already tried some methods like `-kvu`, but nothing managed to speed up the prompt processing
2025-09-26T10:23:14
https://www.reddit.com/r/LocalLLaMA/comments/1nqxot5/question_about_multigpu_performance_in_llamacpp/
_FernandoT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqxot5
false
null
t3_1nqxot5
/r/LocalLLaMA/comments/1nqxot5/question_about_multigpu_performance_in_llamacpp/
false
false
self
1
null
Built a free AI-powered study tool for JEE/NEET – feedback welcome
1
[removed]
2025-09-26T10:23:10
https://i.redd.it/bc598g9sjhrf1.png
Routine_Ad_6986
i.redd.it
1970-01-01T00:00:00
0
{}
1nqxos0
false
null
t3_1nqxos0
/r/LocalLLaMA/comments/1nqxos0/built_a_free_aipowered_study_tool_for_jeeneet/
false
false
https://b.thumbs.redditm…L3nmamZNtWrI.jpg
1
{'enabled': True, 'images': [{'id': 'nEWGBNUFaCRUq-Sqwts1r9OOC0tfVtP-4k9V2J0uK38', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=108&crop=smart&auto=webp&s=313e93f51fe3e81fbf91f27dec428afc3b05bc16', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=216&crop=smart&auto=webp&s=3a2ace86214e2219f659cf64ea3df4cea3d13664', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=320&crop=smart&auto=webp&s=4923280c37c7fcbc433e74bf6892f70557bce32a', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=640&crop=smart&auto=webp&s=45b330d27479d5123b869e22f78b35e706978ceb', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=960&crop=smart&auto=webp&s=22fdaabefd1f056e8f5f47d912d87aa17aa11270', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?width=1080&crop=smart&auto=webp&s=de60f0cfd594a785cea00a97232cd830ba0892f6', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/bc598g9sjhrf1.png?auto=webp&s=574404a58dd24916d01c39daf6cb8027dc848329', 'width': 1080}, 'variants': {}}]}
[P] Automated aesthetic evaluation pipeline for AI-generated images using Dingo × ArtiMuse integration
4
We built an automated pipeline to systematically evaluate AI-generated image quality beyond simple "does it work?" testing. ### The Problem: Most AI image generation evaluation focuses on technical metrics (FID, CLIP scores) but lacks systematic aesthetic assessment that correlates with human perception. Teams often rely on manual review or basic quality gates, making it difficult to scale content production or maintain consistent aesthetic standards. ### Our Approach: **Automated Aesthetic Pipeline:** - **nano-banana** generates diverse style images - **ArtiMuse** provides 8-dimensional aesthetic analysis - **Dingo** orchestrates the entire evaluation workflow with configurable thresholds **ArtiMuse's 8-Dimensional Framework:** 1. **Composition**: Visual balance and arrangement 2. **Visual Elements**: Color harmony, contrast, lighting 3. **Technical Execution**: Sharpness, exposure, details 4. **Originality**: Creative uniqueness and innovation 5. **Theme Expression**: Narrative clarity and coherence 6. **Emotional Response**: Viewer engagement and impact 7. **Gestalt Completion**: Overall visual coherence 8. **Comprehensive Assessment**: Holistic evaluation ### Evaluation Results: **Test Dataset**: 20 diverse images from nano-banana **Performance**: 75% pass rate (threshold: 6.0/10) **Processing Speed**: 6.3 seconds/image average **Quality Distribution**: - High scores (7.0+): Clear composition, natural lighting, rich details - Low scores (<6.0): Over-stylization, poor visual hierarchy, excessive branding ### Example Findings: 🌃 **Night cityscape (7.73/10)**: Excellent layering, dynamic lighting, atmospheric details 👴 **Craftsman portrait (7.42/10)**: Perfect focus, warm storytelling, technical precision 🐻 **Cute sticker (4.82/10)**: Clean execution but lacks visual depth and narrative 📊 **Logo design (5.68/10)**: Functional but limited artistic merit ### Technical Implementation: - **ArtiMuse**: Trained on ArtiMuse-10K dataset (photography, painting, design, AIGC) - **Scoring Method**: Continuous value prediction (Token-as-Score approach) - **Integration**: RESTful API with polling-based task management - **Output**: Structured reports with actionable feedback ### Applications: - **Content Production**: Automated quality gates for publishing pipelines - **Brand Guidelines**: Consistent aesthetic standards across teams - **Creative Iteration**: Detailed feedback for improvement cycles - **A/B Testing**: Systematic comparison of generation parameters **Code**: https://github.com/MigoXLab/dingo **ArtiMuse**: https://github.com/thunderbolt215/ArtiMuse **Eval nano banana with Dingo × ArtiMuse**: https://github.com/MigoXLab/dingo/blob/dev/docs/posts/artimuse_en.md How do you currently evaluate aesthetic quality in your AI-generated content? What metrics do you find most predictive of human preference?
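For illustration, a sketch of the quality-gate logic described above: average the eight dimensional scores and compare against the 6.0 pass threshold. The field names and the use of a plain mean are assumptions, not ArtiMuse's actual response schema or scoring formula.

```python
# Sketch of a configurable quality gate over per-dimension aesthetic scores.
# The dict layout and the mean-based aggregate are assumptions.
from statistics import mean

PASS_THRESHOLD = 6.0

def gate(scores: dict[str, float]) -> tuple[float, bool]:
    """Return (overall score, passes_gate) for one generated image."""
    overall = mean(scores.values())
    return overall, overall >= PASS_THRESHOLD

example = {
    "composition": 7.5, "visual_elements": 7.0, "technical_execution": 7.8,
    "originality": 6.9, "theme_expression": 7.6, "emotional_response": 7.4,
    "gestalt_completion": 7.7, "comprehensive_assessment": 7.9,
}
score, ok = gate(example)
print(f"{score:.2f} -> {'pass' if ok else 'fail'}")
```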
2025-09-26T09:58:35
https://www.reddit.com/r/LocalLLaMA/comments/1nqxa1j/p_automated_aesthetic_evaluation_pipeline_for/
chupei0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqxa1j
false
null
t3_1nqxa1j
/r/LocalLLaMA/comments/1nqxa1j/p_automated_aesthetic_evaluation_pipeline_for/
false
false
self
4
{'enabled': False, 'images': [{'id': 'v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=108&crop=smart&auto=webp&s=9eb8bcf58a5ec925545d802e95af02a5baa3b68a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=216&crop=smart&auto=webp&s=df2bfbd4fe5dbf7c5a7172cb7c6199378823d2ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=320&crop=smart&auto=webp&s=16407973e2db4ac514a4679cd488e1e103b7813f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=640&crop=smart&auto=webp&s=60aa4c5cafc381dbc55d39443af9e6c3577ce342', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=960&crop=smart&auto=webp&s=739f111cd8251fdfb60aa66e21cfc7141ea45013', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?width=1080&crop=smart&auto=webp&s=b9e88e5076b02258ae17d4246ec1e850cce376c8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/v9RO2lLf9KVP_wKf3FwU1m_6oq8EquOAG71BxD3Wyuk.png?auto=webp&s=cef4057c439b73a0ba6c2c0a952caec114e6c23d', 'width': 1200}, 'variants': {}}]}
Video models are zero-shot learners and reasoners
8
Video models are zero-shot learners and reasoners [https://arxiv.org/pdf/2509.20328](https://arxiv.org/pdf/2509.20328) New paper from Google. What do you guys think? Will it create a similar trend to GPT3/3.5 in video?
2025-09-26T09:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1nqx15n/video_models_are_zeroshot_learners_and_reasoners/
Kindly_College6952
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqx15n
false
null
t3_1nqx15n
/r/LocalLLaMA/comments/1nqx15n/video_models_are_zeroshot_learners_and_reasoners/
false
false
self
8
null
How I Built Two Fullstack AI Agents with Gemini, CopilotKit and LangGraph
5
Hey everyone, I spent the last few weeks hacking on two practical fullstack agents: * **Post Generator** : creates LinkedIn/X posts grounded in live Google Search results. It emits intermediate “tool‑logs” so the UI shows each research/search/generation step in real time. Here's a simplified call sequence: [User types prompt] ↓ Next.js UI (CopilotChat) ↓ (POST /api/copilotkit → GraphQL) Next.js API route (copilotkit) ↓ (forwards) FastAPI backend (/copilotkit) ↓ (LangGraph workflow) Post Generator graph nodes ↓ (calls → Google Gemini + web search) Streaming responses & tool‑logs ↓ Frontend UI renders chat + tool logs + final postcards * **Stack Analyzer** : analyzes a public GitHub repo (metadata, README, code manifests) and provides detailed report (frontend stack, backend stack, database, infrastructure, how-to-run, risk/notes, more). Here's a simplified call sequence: [User pastes GitHub URL] ↓ Next.js UI (/stack‑analyzer) ↓ /api/copilotkit → FastAPI ↓ Stack Analysis graph nodes (gather_context → analyze → end) ↓ Streaming tool‑logs & structured analysis cards Here's how everything fits together: **Full-stack Setup** The front end wraps everything in `<CopilotChat>` (from CopilotKit) and hits a Next.js API route. That route proxies through GraphQL to our Python FastAPI, which is running the agent code. **LangGraph Workflows** Each agent is defined as a stateful graph. For example, the Post Generator’s graph has nodes like `chat_node` (calls Gemini + WebSearch) and `fe_actions_node` (post-process with JSON schema for final posts). **Gemini LLM** Behind it all is Google Gemini (using the official `google-genai` SDK). I hook it to LangChain (via the `langchain-google-genai` adapter) with custom prompts. **Structured Answers** A custom `return_stack_analysis` tool is bound inside `analyze_with_gemini_node` using Pydantic, so Gemini outputs strict JSON for the Stack Analyzer. **Real-time UI** CopilotKit streams every agent state update to the UI. This makes it easier to debug since the UI shows intermediate reasoning. full detailed writeup: [Here’s How to Build Fullstack Agent Apps](https://www.copilotkit.ai/blog/heres-how-to-build-fullstack-agent-apps-gemini-copilotkit-langgraph) GitHub repository: [here](https://github.com/CopilotKit/open-gemini-canvas) This is more of a dev-demo than a product. But the patterns used here (stateful graphs, tool bindings, structured outputs) could save a lot of time for anyone building agents.
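A minimal sketch of the "structured answers" piece under stated assumptions: a Pydantic schema bound to Gemini through LangChain's structured-output helper so the reply comes back as strict JSON. The model id and schema fields below are placeholders, not the repo's exact `return_stack_analysis` definition.

```python
# Sketch: strict-JSON output from Gemini via a Pydantic schema in LangChain.
# Model id ("gemini-2.0-flash") and field names are assumptions.
from pydantic import BaseModel, Field
from langchain_google_genai import ChatGoogleGenerativeAI

class StackAnalysis(BaseModel):
    frontend_stack: list[str] = Field(description="Frontend frameworks and libraries")
    backend_stack: list[str] = Field(description="Backend frameworks and services")
    database: str = Field(description="Primary datastore, if any")
    how_to_run: str = Field(description="Short run instructions")

llm = ChatGoogleGenerativeAI(model="gemini-2.0-flash")  # assumed model id
structured_llm = llm.with_structured_output(StackAnalysis)

analysis = structured_llm.invoke(
    "Analyze this repo context and fill the schema: README says Next.js UI, "
    "FastAPI backend, Postgres, run with `docker compose up`."
)
print(analysis.model_dump_json(indent=2))
```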
2025-09-26T09:26:45
https://www.copilotkit.ai/blog/heres-how-to-build-fullstack-agent-apps-gemini-copilotkit-langgraph
anmolbaranwal
copilotkit.ai
1970-01-01T00:00:00
0
{}
1nqws35
false
null
t3_1nqws35
/r/LocalLLaMA/comments/1nqws35/how_i_built_two_fullstack_ai_agents_with_gemini/
false
false
default
5
null
Extract the page number of docx file
1
Hi all, I'm trying to extract text from a docx file for my RAG system. It seems easy, and the layout of tables is extracted well. However, I'm having an issue extracting the page numbers. I used python-docx but it didn't work well for page extraction. I considered converting the docx to PDF, but I think extraction quality is better if the file remains a docx (it's faster and the table layout is preserved). If you have any alternatives, I'd really appreciate your help. Thank you
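Worth noting: .docx files don't store fixed page boundaries (pagination is computed when the document is rendered), which is why python-docx struggles here. If page numbers are truly required, one fallback is the PDF route mentioned above — a sketch assuming LibreOffice is available on PATH and pdfplumber is installed, with paths as placeholders:

```python
# Fallback sketch: render docx -> PDF with LibreOffice headless, then map
# text to pages with pdfplumber. Only needed when true page numbers matter.
import subprocess
import pdfplumber

def docx_to_pages(docx_path: str, workdir: str = ".") -> dict[int, str]:
    subprocess.run(
        ["libreoffice", "--headless", "--convert-to", "pdf", "--outdir", workdir, docx_path],
        check=True,
    )
    pdf_path = docx_path.rsplit(".", 1)[0] + ".pdf"
    pages = {}
    with pdfplumber.open(pdf_path) as pdf:
        for i, page in enumerate(pdf.pages, start=1):
            pages[i] = page.extract_text() or ""
    return pages

for num, text in docx_to_pages("report.docx").items():
    print(num, text[:60])
```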
2025-09-26T09:13:12
https://www.reddit.com/r/LocalLLaMA/comments/1nqwksd/extract_the_page_number_of_docx_file/
StringIntelligent763
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqwksd
false
null
t3_1nqwksd
/r/LocalLLaMA/comments/1nqwksd/extract_the_page_number_of_docx_file/
false
false
self
1
null
AMD also price gouging ?
0
People love calling out Nvidia/Apple for their greed, but AMD doesn't seem too different when it comes to their server offerings. Oh, you cheaped out on your DDR5 RAM? You can't, it's price gouged by the manufacturers themselves. Oh, you cheaped out on your CPU? Not enough CCDs, you get shit bandwidth. Oh, you cheaped out on your motherboard? Sorry, it can't drive more than 2 sticks at advertised speeds. Oh, you tried to be smart by getting engineering sample CPUs? They're missing instructions and don't power down at idle. At least with Mac Studios you get what it says on the tin.
2025-09-26T09:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1nqwhwv/amd_also_price_gouging/
woahdudee2a
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqwhwv
false
null
t3_1nqwhwv
/r/LocalLLaMA/comments/1nqwhwv/amd_also_price_gouging/
false
false
self
0
null
What's the point of CUDA if TPU exists?
0
I understand that the TPU is proprietary to Google, but given the latest news it doesn't make sense that Nvidia keeps pushing GPU architecture instead of developing an alternative to the TPU. The same goes for the Chinese vendors and AMD that are trying to replace Nvidia. Wouldn't it make more sense for them to develop an architecture designed solely for AI? TPUs have huge performance per watt. Google is almost at the frontier with its insane context windows right now, thanks largely to TPUs.
2025-09-26T09:07:01
https://www.reddit.com/r/LocalLLaMA/comments/1nqwhaw/whats_the_point_of_cuda_if_tpu_exists/
helloitsj0nny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqwhaw
false
null
t3_1nqwhaw
/r/LocalLLaMA/comments/1nqwhaw/whats_the_point_of_cuda_if_tpu_exists/
false
false
self
0
null
A list of models released or updated last week on this sub, in case you missed any - (26th Sep)
290
Hey folks. So many models this week, especially from the *Qwen* team, who have been super active lately. Please double-check my list and update in the comments in case I missed anything worth mentioning this week. Enjoy :) |Model|Description|Reddit Link|HF/GH Link| |:-|:-|:-|:-| |Qwen3-Max|LLM (1TB)|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nor65d/qwen_3_max_released/)|[Qwen blog](https://qwen.ai/blog?id=241398b9cd6353de490b0f82806c7848c5d2777d&from=research.latest-advancements-list)| |Code World Model (CWM) 32B|Code LLM 32B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1npp8xi/new_model_from_meta_fair_code_world_model_cwm_32b/)|[HF](https://huggingface.co/facebook/cwm)| |Qwen-Image-Edit-2509|Image edit|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nnt539/qwenimageedit2509_has_been_released/)|[HF](https://huggingface.co/Qwen/Qwen-Image-Edit-2509)| |Qwen3-Omni 30B (A3B variants)|Omni-modal 30B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nnt1bw/3_qwen3omni_models_have_been_released/)|[Captioner](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Captioner), [Thinking](https://huggingface.co/Qwen/Qwen3-Omni-30B-A3B-Thinking)| |DeepSeek-V3.1-Terminus|Update 685B|[Reddit](https://i.redd.it/729mf2l1xpqf1.jpeg)|[HF](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus)| |Qianfan-VL (70B/8B/3B)|Vision LLMs|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nncyvv/baidu_releases_qianfanvl_70b8b3b/)|[HF 70B](https://huggingface.co/baidu/Qianfan-VL-70B), [HF 8B](https://huggingface.co/baidu/Qianfan-VL-8B), [HF 3B](https://huggingface.co/baidu/Qianfan-VL-3B)| |Hunyuan Image 3.0|T2I model (TB released)|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nqaiaz/tencent_is_teasing_the_worlds_most_powerful)|–| |Stockmark-2-100B-Instruct|Japanese LLM 100B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nq4xs9/stockmark_2_100b_instruct/)|–| |Qwen3-VL-235B A22B (Thinking/Instruct)|Vision LLM 235B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1not4up/qwen3vl235ba22bthinking_and/)|[Thinking](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Thinking), [Instruct](https://huggingface.co/Qwen/Qwen3-VL-235B-A22B-Instruct)| |LongCat-Flash-Thinking|Reasoning MoE 18–31B active|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nmzio1/longcatflashthinking)|[HF](https://huggingface.co/meituan-longcat/LongCat-Flash-Thinking)| |Qwen3-4B Function Calling|LLM 4B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nmkswn/just_dropped_qwen34b_function_calling_on_just_6gb/)|[HF](https://huggingface.co/Manojb/Qwen3-4B-FunctionCalling)| |Isaac 0.1|Perception LLM 2B|[Reddit](https://www.reddit.com/gallery/1nmiqjh)|[HF](https://huggingface.co/PerceptronAI/Isaac-0.1)| |Magistral 1.2|Multi-Modal|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nmii5y/magistral_12_is_incredible_wife_prefers_it_over/)|[HF](https://huggingface.co/unsloth/Magistral-Small-2509-GGUF)| |Ring-flash-2.0|Thinking MoE|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nl97i5/inclusionairingflash20/)|[HF](https://huggingface.co/inclusionAI/Ring-flash-2.0)| |Kokoro-82M-FP16-OpenVINO|TTS 82M|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nn45cx/kokoro82mfp16openvino/)|[HF](https://huggingface.co/Echo9Zulu/Kokoro-82M-FP16-OpenVINO)| |Wan2.2-Animate-14B|Video animate 14B|[Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nmnmqh/wan_22_animate_opensourced_model_for_character/)|[HF](https://huggingface.co/Wan-AI/Wan2.2-Animate-14B)| |MiniModel-200M-Base|Tiny LLM 200M|[Reddit](https://i.redd.it/clbzeq0i82rf1.png)|[HF](https://huggingface.co/xTimeCrystal/MiniModel-200M-Base)|
**Other notable mentions** * **K2 Vendor Verifier** – Open-source tool-call validator for LLM providers ([Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nq6hdq/kimi_infra_team_releases_k2_vendor_verifier_an/)) * **quelmap + Lightning-4b** – Local data analysis assistant + LLM ([quelmap.com](https://quelmap.com)) * **llama.ui** – Updated privacy-focused LLM web UI ([Reddit](https://www.reddit.com/r/LocalLLaMA/comments/1nlufzx/llamaui_new_updates/))
2025-09-26T08:59:24
https://www.reddit.com/r/LocalLLaMA/comments/1nqwcsf/a_list_of_models_released_or_udpated_last_week_on/
aifeed-fyi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqwcsf
false
null
t3_1nqwcsf
/r/LocalLLaMA/comments/1nqwcsf/a_list_of_models_released_or_udpated_last_week_on/
false
false
self
290
{'enabled': False, 'images': [{'id': 'w-gYcq2up4oMHwfhzSD0QAeDZAjg8B_0njE3vpmX-Mk', 'resolutions': [{'height': 122, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=108&crop=smart&auto=webp&s=d7fc086de4f2830f07d21ce1c70fef6484a36648', 'width': 108}, {'height': 245, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=216&crop=smart&auto=webp&s=64807ad3dc42c98dab16807e8e97b0abd4f1c984', 'width': 216}, {'height': 363, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=320&crop=smart&auto=webp&s=6babf65f88f55f2677d5aabb70f4eef8b9992ba9', 'width': 320}, {'height': 727, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=640&crop=smart&auto=webp&s=c4609e18f9a44c868e62c60e7334699d90dcff30', 'width': 640}, {'height': 1091, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=960&crop=smart&auto=webp&s=c7c8f4eab4593d8191decb0ad5208dcfaefc7d83', 'width': 960}, {'height': 1227, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?width=1080&crop=smart&auto=webp&s=0b48406dd4e9febaad488f70358350d4e9532ace', 'width': 1080}], 'source': {'height': 1790, 'url': 'https://external-preview.redd.it/NTE_SxT5GeoM6r2fKnhe8mkiSsBN2jK1Dyl-PnjP66M.jpg?auto=webp&s=058e4eba2405fdc352c985d64840098633a0b4eb', 'width': 1575}, 'variants': {}}]}
Can't upvote an LLM response in LMStudio
0
In all seriousness, the new Magistral 2509's outputs are simply so good that I have wanted to upvote it on multiple occasions, even though I of course understand there is no need for such a button when the input and output belong to you, with everything running locally. What a win for local LLMs! Though if LM Studio ever implemented a placebo upvote button, I would still click it nonetheless :)
2025-09-26T08:55:55
https://www.reddit.com/r/LocalLLaMA/comments/1nqwazq/cant_upvote_an_llm_response_in_lmstudio/
therealAtten
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqwazq
false
null
t3_1nqwazq
/r/LocalLLaMA/comments/1nqwazq/cant_upvote_an_llm_response_in_lmstudio/
false
false
self
0
null
Open source realtime LLM model
1
I want to know whether there is any open-source LLM available that can work in real time and support all Indian languages. I have a voicebot that works perfectly fine with GPT and Claude, but when I deploy an open-source model like Llama 3.1 or Llama 3.2 on an A100 24GB GPU, the latency is above 3 seconds, which is too slow. Can you advise whether I could train Qwen or Gemma 2 instead? I also need the LLM to work with tools.
2025-09-26T08:24:57
https://www.reddit.com/r/LocalLLaMA/comments/1nqvuaf/open_source_realtime_llm_modal/
kapil-karda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqvuaf
false
null
t3_1nqvuaf
/r/LocalLLaMA/comments/1nqvuaf/open_source_realtime_llm_modal/
false
false
self
1
null
Can a llm run on a n305 + 32gb ram
2
The title basically says it. I have a 24/7 home server with an Intel N305, 32 GB of RAM, and a 1GB SSD. It is running a Docker environment. Can I run a containerized LLM to answer easy queries on the go, basically as a Google substitute?
2025-09-26T07:55:51
https://www.reddit.com/r/LocalLLaMA/comments/1nqveb3/can_a_llm_run_on_a_n305_32gb_ram/
scoobie517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqveb3
false
null
t3_1nqveb3
/r/LocalLLaMA/comments/1nqveb3/can_a_llm_run_on_a_n305_32gb_ram/
false
false
self
2
null
Hands-on with Qwen3 Omni and read some community evaluations.
11
Qwen3 Omni's positioning is that of a lightweight, full-modality model. It's fast, has decent image recognition accuracy, and is quite usable for everyday OCR and general visual scenarios. It works well as a multimodal recognition model that balances capability with resource consumption. However, there's a significant gap between Omni and Qwen3 Max in both understanding precision and reasoning ability. Max can decipher text that's barely legible to the human eye and comprehend the relationships between different text elements in an image. Omni, on the other hand, struggles with very small text and has a more superficial understanding of the image; it tends to describe what it sees literally without grasping the deeper context or connections. I also tested it on some math problems, and the results were inconsistent. It sometimes hallucinates answers. So, it's not yet reliable for tasks requiring rigorous reasoning. In terms of overall capability, Qwen3 Max is indeed more robust intellectually (though its response style could use improvement: the interface is cluttered with emojis and overly complex Markdown, and the writing style feels a bit unnatural and lacks nuance). That said, I believe the real value of this Qwen3 release isn't just about pushing benchmark scores up a few points. Instead, it lies in offering a comprehensive, developer-friendly, full-modality solution. For reference, here are some official resources: [https://github.com/QwenLM/Qwen3-Omni/blob/main/assets/Qwen3\_Omni.pdf](https://github.com/QwenLM/Qwen3-Omni/blob/main/assets/Qwen3_Omni.pdf) [https://github.com/QwenLM/Qwen3-Omni/blob/main/cookbooks/omni\_captioner.ipynb](https://github.com/QwenLM/Qwen3-Omni/blob/main/cookbooks/omni_captioner.ipynb)
2025-09-26T07:50:19
https://www.reddit.com/r/LocalLLaMA/comments/1nqvbds/handson_with_qwen3_omni_and_read_some_community/
Hairy-Librarian3796
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqvbds
false
null
t3_1nqvbds
/r/LocalLLaMA/comments/1nqvbds/handson_with_qwen3_omni_and_read_some_community/
false
false
self
11
null
Looking for lightweight VLMs for document parsing + benchmarking advice
1
1. Any recommendations for good VLM alternatives that can also fit within a similar VRAM budget? 2. What’s a good benchmark for comparing VLMs in this document-parsing/OCR use case? 3. Does anyone have tips on preprocessing scanned images captured by phone/camera (e.g. tilted pages, blur, uneven lighting) to improve OCR or VLM performance?
2025-09-26T07:17:09
https://www.reddit.com/r/LocalLLaMA/comments/1nqut9g/looking_for_lightweight_vlms_for_document_parsing/
Ok_Television_9000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqut9g
false
null
t3_1nqut9g
/r/LocalLLaMA/comments/1nqut9g/looking_for_lightweight_vlms_for_document_parsing/
false
false
self
1
null
I made an MCP server that lets Claude browse Reddit - no API keys needed
1
[removed]
2025-09-26T07:07:48
https://i.redd.it/650kyfbwkgrf1.gif
karanb192
i.redd.it
1970-01-01T00:00:00
0
{}
1nquo8c
false
null
t3_1nquo8c
/r/LocalLLaMA/comments/1nquo8c/i_made_an_mcp_server_that_lets_claude_browse/
false
false
https://b.thumbs.redditm…VZiNYVHMBpFA.jpg
1
{'enabled': True, 'images': [{'id': 'ZSAawB3hUQLVdVffG1hoXpVO7NydKEouFGQWrUZ6hTw', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=108&crop=smart&format=png8&s=0d192a690c80fd9d7d09f487dfcd9724bcf1ba47', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=216&crop=smart&format=png8&s=cbe09dec865a191f637f6ef50d2ba2a38ef75c43', 'width': 216}, {'height': 369, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=320&crop=smart&format=png8&s=1a7661c245ebb1d9e51d78a6c04706c9819167e9', 'width': 320}, {'height': 739, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=640&crop=smart&format=png8&s=fd38da9a392401fa757ac2971d291457d6e08cd0', 'width': 640}, {'height': 1109, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=960&crop=smart&format=png8&s=bbe2d6ca9e27069f2c4c748fca20fee806366a06', 'width': 960}, {'height': 1248, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=1080&crop=smart&format=png8&s=12c6455fbe44fec91f98da337c3646a9cdd16176', 'width': 1080}], 'source': {'height': 1946, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?format=png8&s=accf50518bb8bcab15573623048b9aa2ac79015a', 'width': 1684}, 'variants': {'gif': {'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=108&crop=smart&s=568998e9c806bfb03fdca7668fa1c475e49c128c', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=216&crop=smart&s=0dbe32c74a3df41e8bcab719b9e4c8a97731b172', 'width': 216}, {'height': 369, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=320&crop=smart&s=7bfd89e759f53657dca7dc1cae8870b9ac57548f', 'width': 320}, {'height': 739, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=640&crop=smart&s=dbfd3c9f9818bd792a660e6329b26dc0af994d15', 'width': 640}, {'height': 1109, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=960&crop=smart&s=ef6589b99476a304afd91170ef30a58e8ff341cf', 'width': 960}, {'height': 1248, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=1080&crop=smart&s=6a9ba8db08947b0e638991940723646e2e5bb44c', 'width': 1080}], 'source': {'height': 1946, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?s=89b0019b1ae0392edb81d4af05ec1d9bebb98087', 'width': 1684}}, 'mp4': {'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=108&format=mp4&s=b6555031ee69e0cf5e21323f60c6cbc476587ecf', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=216&format=mp4&s=455273fdbb678ea8e5662bacf5907b3da527cb9d', 'width': 216}, {'height': 369, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=320&format=mp4&s=95ef1576c81488a67ef96da4381597811e1bb44f', 'width': 320}, {'height': 739, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=640&format=mp4&s=1819755adb48b221d2c818f1367c84b3691fcd73', 'width': 640}, {'height': 1109, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=960&format=mp4&s=e29cf8cbdbd2f3a93e45f01c1a2a3e6bcedd5377', 'width': 960}, {'height': 1248, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?width=1080&format=mp4&s=0b86d1e34bd80037b2f3b9beabee36e6faa2376a', 'width': 1080}], 'source': {'height': 1946, 'url': 'https://preview.redd.it/650kyfbwkgrf1.gif?format=mp4&s=8ffb14b9e2ac7c26405e19d6877ddaf90310874f', 'width': 1684}}}}]}
Can a 64GB Mac run Qwen3-Next-80B?
29
I've seen comments suggesting that it's tight even on a 48GB Mac, but I'm hoping 64GB might be enough with proper quantization. I've also gathered some important caveats from the community that I'd like to confirm: 1. Quantization Pitfalls: Many community-shared quantized versions (like the FP8 ones) seem to have issues. A common problem mentioned is that the tokenizer\_config.json might be missing the chat\_template, which breaks function calling. The suggested fix is to replace it with the original tokenizer\_config from the official model repo. 2. SGLang vs. Memory: Could frameworks like SGLang offer significant memory savings for this model compared to standard vLLM or llama.cpp? However, I saw reports that SGLang might have compatibility issues, particularly with some FP8 quantized versions, causing errors. My Goal: I'm planning to compare Qwen3-Next-80B (with Claude Code for coding tasks) against GPT-OSS-120B (with Codex) to see if the Qwen combo can be a viable local alternative. Any insights, especially from those who have tried running Qwen3-Next-80B on similar hardware, would be greatly appreciated! Thanks in advance.
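On caveat 1, a sketch of the tokenizer_config fix described above: check whether the local quantized copy is missing chat_template and, if so, pull the original file from the official repo. The repo id and local path are assumptions — adjust them to the actual download.

```python
# Sketch: restore a missing chat_template by copying the official
# tokenizer_config.json over the quantized copy's file. Paths/repo id assumed.
import json, shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

LOCAL_MODEL_DIR = Path("./Qwen3-Next-80B-A3B-Instruct-FP8")   # your quantized copy (assumed path)
OFFICIAL_REPO = "Qwen/Qwen3-Next-80B-A3B-Instruct"            # assumed official repo id

cfg_path = LOCAL_MODEL_DIR / "tokenizer_config.json"
cfg = json.loads(cfg_path.read_text())

if "chat_template" not in cfg:
    original = hf_hub_download(OFFICIAL_REPO, "tokenizer_config.json")
    shutil.copy(original, cfg_path)   # restore the template so function calling works
    print("Replaced tokenizer_config.json with the official one.")
else:
    print("chat_template already present, nothing to do.")
```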
2025-09-26T07:06:48
https://www.reddit.com/r/LocalLLaMA/comments/1nqunp9/can_a_64gb_mac_run_qwen3next80b/
xieyutong
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nqunp9
false
null
t3_1nqunp9
/r/LocalLLaMA/comments/1nqunp9/can_a_64gb_mac_run_qwen3next80b/
false
false
self
29
null
Local Qwen-Code rig recommendations (~€15–20k)?
14
We’re in the EU, need GDPR compliance, and want to build a local AI rig mainly for coding (Qwen-Code). Budget is \~€15–20k. Timeline: decision within this year. Any hardware/vendor recommendations?
2025-09-26T06:57:48
https://www.reddit.com/r/LocalLLaMA/comments/1nquiff/local_qwencode_rig_recommendations_1520k/
logTom
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nquiff
false
null
t3_1nquiff
/r/LocalLLaMA/comments/1nquiff/local_qwencode_rig_recommendations_1520k/
false
false
self
14
null