title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to run Qwen3 0.6B at 8.4 tok/sec on 2 x 5090s | 34 | (Completely useless but thought I would share :D)
This was just a fun experiment to see how fast I could run LLMs with WiFi interconnect and, well, I have to say it's quite a bit slower than I thought...
I set up two machines with one 5090 each, then installed the latest vLLM and Ray on both. Once you [start ray on one machine and connect to it with the other,](https://docs.ray.io/en/latest/cluster/cli.html) you can run:
vllm serve Qwen/Qwen3-0.6B --max-model-len 1024 --tensor-parallel-size 1 --pipeline-parallel-size 2 --host 0.0.0.0 --port 8181 --enable-reasoning --reasoning-parser deepseek_r1
Lo and behold, the mighty Qwen3 0.6B running at 8.4 t/s split across 2 5090s!!
[Open WebUI](https://preview.redd.it/6vwu78dhy1nf1.png?width=2136&format=png&auto=webp&s=7fadb7a4ead8e56a8f3270f038608585ce8b8ed2)
Not only is the model bad, but also:
* Runs way slower than just CPU.
* Ray & vLLM need a bit of tweaking to get running correctly
* vLLM will throw a bunch of random errors along the way ;) | 2025-09-04T01:47:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n7xgm5/how_to_run_qwen3_06b_at_84_toksec_on_2_x_5090s/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7xgm5 | false | null | t3_1n7xgm5 | /r/LocalLLaMA/comments/1n7xgm5/how_to_run_qwen3_06b_at_84_toksec_on_2_x_5090s/ | false | false | 34 | null | |
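For reference, a sketch of the cluster bring-up per the Ray CLI docs linked above (`HEAD_IP` is a placeholder for machine 1's LAN address):

```shell
# On machine 1 (head node):
ray start --head --port=6379

# On machine 2 (worker), pointing at the head's LAN address:
ray start --address='HEAD_IP:6379'

# Back on the head node, launch vLLM across both GPUs:
vllm serve Qwen/Qwen3-0.6B --max-model-len 1024 \
  --tensor-parallel-size 1 --pipeline-parallel-size 2 \
  --host 0.0.0.0 --port 8181 \
  --enable-reasoning --reasoning-parser deepseek_r1
```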
When can we expect fine tuned models for specific programming language(s)? | 3 | Most LLMs out there are trained to serve multiple purposes and are not fine-tuned for specific programming languages.
Maybe they are all generally good at Python, but what about Flutter (not a language), Dart, Swift, JavaScript, and more? I do wish large enterprises would fine-tune models and add their own source code to the training data. | 2025-09-04T01:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n7x3jn/when_can_we_expect_fine_tuned_models_for_specific/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7x3jn | false | null | t3_1n7x3jn | /r/LocalLLaMA/comments/1n7x3jn/when_can_we_expect_fine_tuned_models_for_specific/ | false | false | self | 3 | null |
VibeVoice Gone? | 82 | It seems like the GitHub page and the Hugging Face page are gone. The Hugging Face repo only has the 1.5B model left.
| 2025-09-04T01:00:53 | https://www.reddit.com/r/LocalLLaMA/comments/1n7wh65/vibevoice_gone/ | atrfx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7wh65 | false | null | t3_1n7wh65 | /r/LocalLLaMA/comments/1n7wh65/vibevoice_gone/ | false | false | self | 82 | null |
gpt-oss-120b Latex error on LM studio | 2 | 2025-09-04T00:58:11 | hieuphamduy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7wezo | false | null | t3_1n7wezo | /r/LocalLLaMA/comments/1n7wezo/gptoss120b_latex_error_on_lm_studio/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'herbvidwq1nf1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=108&crop=smart&auto=webp&s=764106de41b5e7e7fb0d330d139510ba30d67892', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=216&crop=smart&auto=webp&s=695618aabda27c363dc5fc32161d95a7f30dee75', 'width': 216}, {'height': 272, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=320&crop=smart&auto=webp&s=6d01e152203da0a13a0f21e6497bde9a2248e36f', 'width': 320}, {'height': 545, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=640&crop=smart&auto=webp&s=e27a37bacd134b9c65560c2374b51567c80dc495', 'width': 640}, {'height': 817, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=960&crop=smart&auto=webp&s=e5cab8d50553616ed1299336bb1c84d0d3cf9970', 'width': 960}, {'height': 920, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?width=1080&crop=smart&auto=webp&s=624eec1c737ed8b457a2db49658a55009c9bb9a9', 'width': 1080}], 'source': {'height': 1185, 'url': 'https://preview.redd.it/herbvidwq1nf1.png?auto=webp&s=cab868f66992147ee5040d76da44b75f460500f6', 'width': 1391}, 'variants': {}}]} | ||
Training LLM on guideline? | 1 | is there anyway we can teach an LLM to follow rules just by training it on the *text* of guidelines without needing to show it any examples. something like these guidelines into the prompt, or use RAG to get the relevant portion of the guidelines.I wonder if we could start by training a LoRA adapter on the following JSON:\[
{
"text": "RULE: If the user says 'blablabla', respond with '12345'."
},
{
"text": "RULE: If the user types 'good night', reply with 'hi there'."
},
{
"text": "RULE: If the user inputs 'no', respond with '67890'."
},
{
"text": "RULE: Never answer questions with 'maybe’.”}
| 2025-09-04T00:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n7w4e9/training_llm_on_guideline/ | NotBizzaark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7w4e9 | false | null | t3_1n7w4e9 | /r/LocalLLaMA/comments/1n7w4e9/training_llm_on_guideline/ | false | false | self | 1 | null |
I've tried loading NVIDIA-Nemotron-Nano-12B-v2-GGUF in Jan and LM Studio and I can't get it to work | 7 | I'm trying to load one of the new NVIDIA Nemotron GGUF models (specifically `NVIDIA-Nemotron-Nano-12B-v2-GGUF`) in both LM Studio and Jan, but I'm running into an error.
The applications fail to load the model and give this specific message:
error loading model: error loading model architecture: unknown model architecture: 'nemotron_h'
My assumption is that the underlying `llama.cpp` version used by LM Studio and Jan hasn't been updated to support this new `nemotron_h` architecture yet.
Is that correct? Or is there a workaround I'm missing? Just wanted to confirm if the solution is simply to wait for the next application updates. | 2025-09-04T00:42:59 | basedvampgang | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7w37j | false | null | t3_1n7w37j | /r/LocalLLaMA/comments/1n7w37j/ive_tried_loading_nvidianemotronnano12bv2gguf_in/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'ov3cowyin1nf1', 'resolutions': [{'height': 52, 'url': 'https://preview.redd.it/ov3cowyin1nf1.png?width=108&crop=smart&auto=webp&s=e766085fdb1b311966595ff8e747eafb6e66575f', 'width': 108}, {'height': 104, 'url': 'https://preview.redd.it/ov3cowyin1nf1.png?width=216&crop=smart&auto=webp&s=a4cda65be7dbadef1300374ae358a5ef04ab6def', 'width': 216}, {'height': 154, 'url': 'https://preview.redd.it/ov3cowyin1nf1.png?width=320&crop=smart&auto=webp&s=a227f7afa5c85365a9bd808c0736c41a2b633cf6', 'width': 320}], 'source': {'height': 234, 'url': 'https://preview.redd.it/ov3cowyin1nf1.png?auto=webp&s=f747704584b6e4ce008ad32ca21080d47183e1a1', 'width': 484}, 'variants': {}}]} | |
Hey LocalLLaMA | 6 | While many coding assistants and platforms have been improving fairly rapidly and adding new features, they are usually tiered in some sense or limited by size, ultimately up-selling you to their subscription tiers. So, in my vehement refusal to submit to the freemium model taken by every platform and service today, I've made 2 MCP servers that pretty much cover most of the functions I need when coding, mainly:
1 - A token-efficient indexing and retrieval system that does not use up my tokens/credits when I am using an external service, or significantly bog down my local models (usually 14-32B, and they do not manage well when the context is flooded with irrelevant material)
2 - Extensive system prompts to drive Orchestrator/Architect/Code agents, which usually do the above, eating up my available credits and flooding context windows.
To address this, I'm happy to share 2 MCP servers that tackle these issues. They're not some magical, never-seen-before revolution, just well-implemented servers that directly address my needs.
1 - [**Code Indexer**](https://github.com/scooter-lacroix/code-indexer.git) \- A code-indexing MCP server that can be configured for speed and extreme resource efficiency on smaller codebases (using SQLite), OR for decent speed (depending on your hardware), depth, and accuracy on large 1M+ file codebases (where I usually watch my days waste away), using PostgreSQL metadata storage and Elasticsearch/Zoekt search. This is my little baby, as it's saved me *tons* when an agent or model needs to search for a function or module, by preventing hallucinations where models claim files or functions do not exist after wasting my time, tokens, and credits searching through an incomplete or inaccurate index (sometimes even when I have the exact file in question open in the IDE), and by preventing unnecessary file/function duplication or codebase segmentation.
I've found it has greater accuracy than the tools used by agents such as Kiro, Warp, Kilo, Roo, and Augment, while only returning the relevant files and information, at the cost of my system's performance rather than credits/tokens.
2 - [**Swiss Sandbox**](https://github.com/scooter-lacroix/swiss-sandbox.git) \- Similar to giving your agent a virtual machine to work in (not quite, but you get the idea; think Jules by Google). The sandbox is a space where the agent's task decomposition and execution are assisted/handled by the sandbox. It lets the model/agent work in an externally limited but internally unrestricted environment without fear of losing data: the memes aren't memes (less often with Claude, but I'm TIRED of models deleting chunks and files from my codebase; yes, this usually happens when I grant the model permission to execute code and functions without my review, and no, you are not any different; if you haven't encountered this beautiful Easter egg, you will :D ). It does this by creating a clone workspace where the model applies and tests all its fixes without actually touching your files. You can then instruct the model to apply the verified fixes/changes afterwards, or request that it export its workspace (you get a nice compressed file to work with) if the changes are substantial enough for a new branch or version. The sandbox also includes a GPT-like canvas that, when used with front-ends like Open WebUI, lets the code run within the sandbox, reducing the amount of copy-pasta required to see how well the generated code runs.
While I have provided system prompt templates with both tools, I usually only need them for stubborn models like Grok or Kimi; models like Claude, Gemini, and Qwen (Qwen 2 models too!) are very efficient with the tools and accurately return the root issue if any is encountered.
Now, I fully expect a litany of issues to assault me once you guys give the servers a try, but that comes with the territory. If you come across an issue, open it in the respective repo and I'll have it addressed as soon as possible! If you find the tools useful, all I ask is that you star the repo! :) | 2025-09-04T00:37:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n7vzeg/hey_localllama/ | Doogie707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7vzeg | false | null | t3_1n7vzeg | /r/LocalLLaMA/comments/1n7vzeg/hey_localllama/ | false | false | self | 6 | null |
Thoughts on Intel Arc Pro B50 x4 = 64GB of VRAM for $1400 and 280W Power Draw? | 36 | For new cards that is some of the best $/GB of VRAM you can get, and it's also the best VRAM/w you can get and because they're x8 cards you can run them off of a x16 splitter right? How are x16 splitters? I assume you'd need some external PCIe power.
Is this realistic? Does me making this thread basically prevent this card from ever be obtainable? _Am I stupid?_ | 2025-09-04T00:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n7vqn8/thoughts_on_intel_arc_pro_b50_x4_64gb_of_vram_for/ | 79215185-1feb-44c6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7vqn8 | false | null | t3_1n7vqn8 | /r/LocalLLaMA/comments/1n7vqn8/thoughts_on_intel_arc_pro_b50_x4_64gb_of_vram_for/ | false | false | self | 36 | null |
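Taking the post's own totals as given ($1,400 for four cards, 280 W combined, 16 GB each), the arithmetic works out to:

```python
# Quick arithmetic check on the quoted B50 numbers (all figures taken
# from the post above, not independently verified):
cards = 4
total_vram_gb = 16 * cards         # 4 x 16 GB = 64 GB
usd_per_card = 1400 / cards        # $350/card at the quoted $1,400 total
watts_per_card = 280 / cards       # 70 W/card at the quoted 280 W total
usd_per_gb = 1400 / total_vram_gb  # ~$21.9 per GB of VRAM

print(total_vram_gb, usd_per_card, watts_per_card, usd_per_gb)
```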
Is there a way to have models load in to vram quicker, or stay alive without persisting in vram? Or are there alternatives for fast models? | 5 | I just set up Home Assistant, and I installed OLLAMA to run a local LLM for a voice assistant. I have an RTX 3080 - a comfortable GPU for decently large models - however I am running HA and OLLAMA on my main computer. I do not want to waste power while idle, and I don't really have the money to spend on a seperate computer with a beefy GPU to run servers and LLMs on. There is one problem with that though, I do not want the LLM to persist in memory, as I use my computer for other things and have it running other LLMs at different times such as Whisper.
There is a "keep alive" option in Home Assistant that keeps the model alive in vram so it is ready for the next response if a command is given in succession (the default is being indefinite), If the model is loaded in memory it responds almost instantly. If not, it takes about 5 seconds to respond - which is far from real-time. With all of the other processing my computer has to do for voice commands, that reaches the very inconvenient boundaries, which is not why I am doing this project.
Is there a faster way to run LLMs that don't hog all of my vram when they are not running? | 2025-09-04T00:14:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n7vhc6/is_there_a_way_to_have_models_load_in_to_vram/ | AlternateWitness | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7vhc6 | false | null | t3_1n7vhc6 | /r/LocalLLaMA/comments/1n7vhc6/is_there_a_way_to_have_models_load_in_to_vram/ | false | false | self | 5 | null |
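One middle ground worth knowing about (assuming stock Ollama; pick whatever duration suits you): Ollama's keep-alive can be bounded rather than indefinite or zero, so the model stays hot between consecutive commands but frees VRAM on its own afterwards.

```shell
# Keep models loaded for 10 minutes after the last request, then
# unload automatically (frees VRAM with no manual intervention):
export OLLAMA_KEEP_ALIVE=10m
ollama serve
```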
Is there a way to have models load in to vram quicker, or stay alive without persisting in vram? Or are there alternatives for fast models? | 1 | [deleted] | 2025-09-04T00:13:46 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1n7vgoe | false | null | t3_1n7vgoe | /r/LocalLLaMA/comments/1n7vgoe/is_there_a_way_to_have_models_load_in_to_vram/ | false | false | default | 1 | null | ||
Ex-Miner Turned Local LLM Enthusiast, now I have a Dilemma | 11 | Ex-miner here, now messing around with local LLMs. Kept my rig through the crypto craze, and it’s paid off. Got 5x RTX 3080 (10GB VRAM), 2x RTX 3060 (12GB), and a 3080 Ti (12GB), all running on 850W PSUs. Total VRAM’s like 86GB across 8 cards. All mine from day one, kept ‘em cool, maintained, no complaints.
Been at it since the Mixtral 8x7B days, took a break, and now I'm back with ComfyUI for diffusion stuff and LLMs for long story videos. Splitting tasks across GPUs (nodes here, models there) works pretty well.
Here's the deal: snagged a 3090 (24GB VRAM) to test some ideas, and damn, it's nice. Fits a whole ComfyUI diffusion model on one card while the rest of the rig handles other stuff. Problem is, my 850W PSUs choke if I try more than one 3090. I also tried jamming all 8 GPUs together with PCIe risers back in the day and had some instability problems, but I think I should be okay doing some more testing.
So, I’m stuck thinking:
* Dump my setup and grab used 3090s? More VRAM per card (24GB) is tempting for big models, and I could maybe get 4x 3090s for \~96GB total. But my cards are clean, first-owner, and used 3090s might be beat to hell. I could use my 4 x 850W PSUs for the rig, maybe adding some 3060s to the mix.
* Tweak what I got? Maybe find a sweet spot for my 3080s/3060s/3080 Ti where it’s stable. Could pull a card or two for side experiments, maybe even EXO mining down the line if I feel like it.
* Wait for next-gen cards? Heard recently about the 96GB VRAM card from Huawei, but that's probably a year out.
What do you all think? Anyone got a stable multi-GPU setup with 3080s or similar for LLMs/ComfyUI? Tips for keeping risers from sucking? Worth selling my good cards for used mining 3090s? Or should I just keep tweaking and testing? Is waiting for cheap big-VRAM cards worth it?
Hit me with your roasts and ideas. I'm open to hearing them. Thank you so much! | 2025-09-04T00:13:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n7vgjc/exminer_turned_local_llm_enthusiast_now_i_have_a/ | mslocox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7vgjc | false | null | t3_1n7vgjc | /r/LocalLLaMA/comments/1n7vgjc/exminer_turned_local_llm_enthusiast_now_i_have_a/ | false | false | self | 11 | null |
PSA: Make sure your API ports aren't exposed to the open internet | 212 | There are about 1,100 exposed Ollama servers out there according to this blog post:
https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama
Also, if you see the prompt "What is 2+2?" in your logs, it was Cisco. | 2025-09-03T23:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n7uocj/psa_make_sure_your_api_ports_arent_exposed_to_the/ | nooclear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7uocj | false | null | t3_1n7uocj | /r/LocalLLaMA/comments/1n7uocj/psa_make_sure_your_api_ports_arent_exposed_to_the/ | false | false | self | 212 | {'enabled': False, 'images': [{'id': 'CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=108&crop=smart&auto=webp&s=da2a688972f8427ee78d8dca9d238362348aa343', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=216&crop=smart&auto=webp&s=24fe1c7a20b9752381156f64dd0edf73167cba48', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=320&crop=smart&auto=webp&s=284e6d2f76b1ef19071d8e81f465fd90e69f1b32', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=640&crop=smart&auto=webp&s=cf056e82668e8dca0249994539cb52499fc325d3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=960&crop=smart&auto=webp&s=e92dcc45b474010dd565bc05c6c5929456849eff', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=1080&crop=smart&auto=webp&s=1669a69c4f805c90dd5b951a19b930febdbc1047', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?auto=webp&s=bde846d4d1d6d3e347823871cbfb297f24b72709', 'width': 1200}, 'variants': {}}]} |
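Exposure here comes down to the bind address: Ollama defaults to `127.0.0.1:11434`, which is loopback-only, but `OLLAMA_HOST=0.0.0.0` (common in Docker setups) listens on every interface. A small illustrative helper (my own code, not from the Cisco write-up):

```python
import ipaddress

def is_exposed(host: str) -> bool:
    """True if a bind address accepts connections from beyond localhost."""
    if host in ("0.0.0.0", "::"):
        return True  # wildcard binds listen on every interface
    try:
        return not ipaddress.ip_address(host).is_loopback
    except ValueError:
        return True  # hostname, not an IP: resolve and check manually

print(is_exposed("127.0.0.1"))    # False: loopback only (Ollama's default)
print(is_exposed("0.0.0.0"))      # True: reachable from the network
print(is_exposed("192.168.1.5"))  # True: reachable on the LAN at least
```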
VibeVoice 1.5B anyone solved these minor problems? | 1 | Its really good but it has some weird quirks in Comfyfui.
I've been using it in my [videos on Comfyui tricks](https://www.youtube.com/watch?v=YAk4YtuMnLM). It's curious how many words it can't do, and sometimes you get music or background ambience; if you use too much text, it blaps out and starts distorting while getting louder. There is often a lot of "p" popping as well, like the mic is too close.
I was hoping to see some people find solutions to it. I would love to be able to feed it a large text document, but currently have to cut and paste about 4 short paragraphs at a time.
having said that, it is incredible and very good, just the weakenesses need resolving and it would be perfect. | 2025-09-03T23:09:25 | https://www.reddit.com/r/LocalLLaMA/comments/1n7u07c/vibevoice_15b_anyone_solved_these_minor_problems/ | superstarbootlegs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7u07c | false | null | t3_1n7u07c | /r/LocalLLaMA/comments/1n7u07c/vibevoice_15b_anyone_solved_these_minor_problems/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'xAzAWDFCweMEpLUATJiKfnsK2du1joq9nr7fLJDVSAo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xAzAWDFCweMEpLUATJiKfnsK2du1joq9nr7fLJDVSAo.jpeg?width=108&crop=smart&auto=webp&s=9fbaa17e02fad7261b7b0200eb9aba351cb09410', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xAzAWDFCweMEpLUATJiKfnsK2du1joq9nr7fLJDVSAo.jpeg?width=216&crop=smart&auto=webp&s=9dd4333ec637a26b96e440534648506376b7dda2', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xAzAWDFCweMEpLUATJiKfnsK2du1joq9nr7fLJDVSAo.jpeg?width=320&crop=smart&auto=webp&s=a088c8afc6fddf8c158a63266822b30e6418c5d3', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xAzAWDFCweMEpLUATJiKfnsK2du1joq9nr7fLJDVSAo.jpeg?auto=webp&s=e8a0bf5b4565b8c389e318dc3f7943135c45fc57', 'width': 480}, 'variants': {}}]} |
Can VibeVoice 7B run real-time TTS on an RTX 3090 locally? | 5 | Hi all,
I want to run VibeVoice 7B for real-time text-to-speech on my local PC with an RTX 3090 (24 GB VRAM).
Has anyone managed to get real-time streaming with this setup?
If yes, what quantization / optimization methods did you use (bitsandbytes, 4-bit/8-bit, TensorRT, etc.)?
Or is 1.5B the only practical option for real-time on a 3090?
Thanks for any tips or benchmarks! | 2025-09-03T22:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n7tl0q/can_vibevoice_7b_run_realtime_tts_on_an_rtx_3090/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7tl0q | false | null | t3_1n7tl0q | /r/LocalLLaMA/comments/1n7tl0q/can_vibevoice_7b_run_realtime_tts_on_an_rtx_3090/ | false | false | self | 5 | null |
Grok via OpenRouter: What's your calculus for the performance vs. data sovereignty trade-off? | 1 | I've noticed a growing interest in Grok models through high-performance endpoints like OpenRouter. The benchmarks are tempting.
However, for those of us who operate under strict Zero Data Retention (ZDR) policies, this presents a fundamental dilemma. Most of these endpoints don't offer that guarantee.
This leads me to a strategic question for this developer community:
**Are you making a conscious trade-off, where immediate access to state-of-the-art performance outweighs the risk to your data sovereignty?**
I'm not trying to call anyone out; I'm genuinely trying to understand the decision-making framework. What's your calculus? Do you consider the risk to be low, the benefit exceptionally high, or is there another factor I'm missing? | 2025-09-03T22:31:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n7t429/grok_via_openrouter_whats_your_calculus_for_the/ | Initial-Swan6385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7t429 | false | null | t3_1n7t429 | /r/LocalLLaMA/comments/1n7t429/grok_via_openrouter_whats_your_calculus_for_the/ | false | false | self | 1 | null |
Amd Radeon Instinct Mi50 32Gb vs 6700XT 16gb | 1 | Running BackyardAI for my AI stories.
Wondering if this Mi50 card would work in Windows with this program?
Would it allow larger models to be loaded and used?
Currently I can run a 12GB model slowly; 10GB models are faster, about 20 tokens/sec using mistral.14b.chaifighter-latte.gguf_v2.q5_k_m.gguf (my default model).
Would this be an improvement? | 2025-09-03T22:25:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n7sz06/amd_radeon_instinct_mi50_32gb_vs_6700xt_16gb/ | cmdrmcgarrett | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7sz06 | false | null | t3_1n7sz06 | /r/LocalLLaMA/comments/1n7sz06/amd_radeon_instinct_mi50_32gb_vs_6700xt_16gb/ | false | false | self | 1 | null |
Any IDE AI Chat plugins (for local models) that support images? | 2 | I want try to 'emulate' the Cursor AI chat experience (in IDE Interface chat assistant) using local models, including the ability to drag and drop or ctrl + v images into the chat.
I really like Qwen3 30B as a coding assistant, but it doesn't have vision, so I'm also running Intern2 2B VL as a vision model. I made a small proxy that identifies when images are included in a request, sends them to InternVL first and asks for a summary of the image, then injects the text summary of the image along with the rest of the request context and sends it to Qwen.
It's all working well, so now I want to plug it into an IDE chat assistant. I've been using Continue in VSCode mostly but it doesn't seem to support images at all.
I tried the "trick Cursor into using local model" trick, and regular chat works, but when I attach an image it blows up, it seems that it still handles images through it's own API even if you change the base API url.
Are there any current IDE chat assistants that allow image pasting and ctrl +v?
I could probably make a VS Code plugin but, I'm trying to make it as simple as possible to connect to on the client side (minimal config to connect to the model endpoint), and I'd rather not reinvent the wheel.. | 2025-09-03T22:10:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n7slkk/any_ide_ai_chat_plugins_for_local_models_that/ | Acceptable_Adagio_91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7slkk | false | null | t3_1n7slkk | /r/LocalLLaMA/comments/1n7slkk/any_ide_ai_chat_plugins_for_local_models_that/ | false | false | self | 2 | null |
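In case it's useful to anyone building something similar, the core of my proxy's transform looks roughly like this (names are my own; `describe` stands in for the call to the vision model):

```python
from typing import Callable

def flatten_images(messages: list[dict],
                   describe: Callable[[str], str]) -> list[dict]:
    """Replace image parts in OpenAI-style chat messages with text
    summaries from a vision model, so a text-only model can handle them."""
    out = []
    for msg in messages:
        content = msg.get("content")
        if isinstance(content, list):  # multimodal: list of typed parts
            parts = []
            for part in content:
                if part.get("type") == "image_url":
                    summary = describe(part["image_url"]["url"])
                    parts.append({"type": "text",
                                  "text": f"[Attached image, described by vision model: {summary}]"})
                else:
                    parts.append(part)
            msg = {**msg, "content": parts}
        out.append(msg)
    return out
```

The real proxy sends the image to the vision model (InternVL in my case) for the summary and forwards the rewritten request to Qwen; here that call is just a function parameter so the transform stays self-contained.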
Get Perplexity Pro - Cheap like Free | 0 | Perplexity Pro 1 Year - $7.25
https://www.poof.io/@dggoods/3034bfd0-9761-49e9
In case, anyone want to buy my stash. | 2025-09-03T21:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1n7s0i3/get_perplexity_pro_cheap_like_free/ | ThreeMegabytes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7s0i3 | false | null | t3_1n7s0i3 | /r/LocalLLaMA/comments/1n7s0i3/get_perplexity_pro_cheap_like_free/ | false | false | self | 0 | null |
Who here has got a Mac Studio with 512 gigs RAM? | 24 | I have questions for you guys. So many questions. What models you run and what token/sec you get? What is the context size you set, do you run local LLM for fun or you do development and trying to replace Claude.
Thanks. | 2025-09-03T21:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1n7rp5v/who_here_has_got_a_mac_studio_with_512_gigs_ram/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7rp5v | false | null | t3_1n7rp5v | /r/LocalLLaMA/comments/1n7rp5v/who_here_has_got_a_mac_studio_with_512_gigs_ram/ | false | false | self | 24 | null |
Is there any way to have phone calls with the models, but fully automated? | 0 | Like if they were a person who lives with you and is always listening and responding to you. | 2025-09-03T21:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n7re61/there_is_any_way_to_have_like_phone_calls_with/ | Stock-Fault5734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7re61 | false | null | t3_1n7re61 | /r/LocalLLaMA/comments/1n7re61/there_is_any_way_to_have_like_phone_calls_with/ | false | false | self | 0 | null |
gpt-oss-120b not following instructions with Codex | 0 | Hello All,
I have recently started using `codex-cli` with `gpt-oss-120b` hosted on AWS Bedrock, and I have been a bit disappointed so far. First, it doesn't always follow instructions precisely, and second, it struggles with updating files.
Has anyone else run into similar issues with `gpt-oss-120b`? Also, any tips on resolving file editing issues with Codex would be much appreciated. | 2025-09-03T21:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n7qz0u/gptoss120b_not_following_instructions_with_codex/ | pasha_oo7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7qz0u | false | null | t3_1n7qz0u | /r/LocalLLaMA/comments/1n7qz0u/gptoss120b_not_following_instructions_with_codex/ | false | false | self | 0 | null |
Local Models for Local Risks? | 1 | [removed] | 2025-09-03T20:56:52 | https://atelyedr.etsy.com/listing/4337080365 | AtelyeDR | atelyedr.etsy.com | 1970-01-01T00:00:00 | 0 | {} | 1n7qr5v | false | null | t3_1n7qr5v | /r/LocalLLaMA/comments/1n7qr5v/local_models_for_local_risks/ | false | false | default | 1 | null |
Qwen3 30B A3B 2507 Hybrid Deep Reasoning Showcase | 0 | So, here is a question even the mighty ChatGPT fails:
Q: Can you name one (or more) countries , in english, with the last three letters LIA ?
ChatGPT: Yes — **Australia**.
Another one is **Mongolia**.
https://preview.redd.it/ovsa3sn0a0nf1.png?width=722&format=png&auto=webp&s=cc1d4ad741280724cadfa4b8afa06d063530556d
Qwen3 30B A3B 2507 Hybrid:
https://preview.redd.it/mwn2pps6a0nf1.png?width=1094&format=png&auto=webp&s=3e1c55f45086271bc4921f19ee79b96ecd14cd70
My methodology: I manually parse reasoning_content from assistant responses and prefix assistant messages in chat history as "Assistant reasoning begin . . . Assistant reasoning end . . . Assistant response". I also let the model know about this in the system prompt, so as not to confuse it and cause hallucinations. It clearly understands that an "assistant reasoning begin . . . assistant reasoning end" block in chat history is its own thought process that produced that turn's response. Why do it like this instead of just sending reasoning_content as-is to chat history? Because even when reasoning_content is recorded in chat history under role = assistant, the model completely refuses to look at it and denies referencing that data, like, at all. So when my frontend's deep reasoning switch is off, I don't even record reasoning_content in chat history; why waste context if the model completely discards it anyway? When deep reasoning is on, I manually prefix assistant messages as explained, and the model can now clearly see its past reasoning efforts and reference them.
`{`
`"role": "assistant",`
`"content": "Assistant reasoning start\nOkay, let's go through this again. We have two confirmed countries ending with \"LIA\": Somalia and Mongolia.\n\nThe user says there's a third one that I considered in my past reasoning but falsely discarded. Let me go back to my previous reasoning steps.\n\nIn my last reasoning, I considered:\n- Albania: NIA \u2192 No\n- Bulgaria: RIA \u2192 No\n- Bolivia: VIA \u2192 No\n- Croatia: TIA \u2192 No\n- Slovenia: NIA \u2192 No\n- Lithuania: NIA \u2192 No\n- Latvia: VIA \u2192 No\n- Mali: LI \u2192 No\n- Somalia: LIA \u2192 Yes\n- And I thought of \"Mongolia\" as a new one.\n\nBut the user says the third one is one I considered but falsely discarded. So it must be one of the ones I dismissed earlier.\n\nLet me think of countries ending with \"LIA\" again.\n\nWait, what about \"Slovakia\"? S-L-O-V-A-K-I-A \u2192 ends with \"KIA\". No.\n\nWait, \"Slovenia\" is NIA.\n\nWait, \"Belize\" is Z-E.\n\nWait, \"Belgium\" is IUM.\n\nWait, \"Estonia\"? NIA.\n\nWait, \"Croatia\"? TIA.\n\nWait, \"Liberia\"? RIA.\n\nWait, \"Nigeria\"? RIA.\n\nWait, \"Senegal\"? AL.\n\nWait, \"Zambia\"? BIA.\n\nWait, \"Zimbabwe\"? E.\n\nWait, \"Malta\"? T-A.\n\nWait, \"Malaysia\"? SIA.\n\nWait, \"Mali\"? LI.\n\nWait, let's think of countries that end with \"LIA\". Maybe \"Finland\"? No, ends with AND.\n\nWait, \"Iceland\"? AND.\n\nWait, \"Ireland\"? AND.\n\nWait, \"Norway\"? AY.\n\nWait, \"Sweden\"? EN.\n\nWait, \"Denmark\"? ARK.\n\nWait, \"France\"? NCE.\n\nWait, \"Spain\"? AIN.\n\nWait, \"Greece\"? CEE.\n\nWait, \"Italy\"? LY.\n\nWait, \"Portugal\"? AL.\n\nWait, \"Belgium\"? IUM.\n\nWait, \"Netherlands\"? LANDS.\n\nWait, \"Austria\"? IA.\n\nWait, \"Switzerland\"? CE.\n\nWait, \"Canada\"? ADA.\n\nWait, \"Australia\"? IA.\n\nWait, \"New Zealand\"? LAND.\n\nWait, \"South Africa\"? ICA.\n\nWait, \"North Korea\"? Y.\n\nWait, \"South Korea\"? Y.\n\nWait, \"Japan\"? AN.\n\nWait, \"China\"? NA.\n\nWait, \"India\"? IA.\n\nWait, \"Bangladesh\"? H.\n\nWait, \"Pakistan\"? 
K.\n\nWait, \"Iran\"? AN.\n\nWait, \"Iraq\"? AQ.\n\nWait, \"Saudi Arabia\"? A.\n\nWait, \"Turkey\"? Y.\n\nWait, \"Egypt\"? PT.\n\nWait, \"Israel\"? EL.\n\nWait, \"Lebanon\"? ON.\n\nWait, \"Syria\"? IA.\n\nWait, \"Jordan\"? ON.\n\nWait, \"Iraq\"? AQ.\n\nWait, \"Yemen\"? EN.\n\nWait, \"Oman\"? AN.\n\nWait, \"Qatar\"? AR.\n\nWait, \"UAE\"? AE.\n\nWait, \"Bahrain\"? IN.\n\nWait, \"Kuwait\"? IT.\n\nWait, \"Afghanistan\"? N.\n\nWait, \"Nepal\"? AL.\n\nWait, \"Sri Lanka\"? KA.\n\nWait, \"Bangladesh\"? SH.\n\nWait, \"Myanmar\"? AR.\n\nWait, \"Thailand\"? AND.\n\nWait, \"Vietnam\"? AM.\n\nWait, \"Cambodia\"? IA.\n\nWait, \"Laos\"? OS.\n\nWait, \"Malaysia\"? IA.\n\nWait, \"Indonesia\"? A.\n\nWait, \"Philippines\"? ES.\n\nWait, \"Singapore\"? RE.\n\nWait, \"Brunei\"? I.\n\nWait, \"Timor-Leste\"? TE.\n\nWait, \"Papua New Guinea\"? A.\n\nWait, \"Fiji\"? I.\n\nWait, \"Tonga\"? A.\n\nWait, \"Samoa\"? A.\n\nWait, \"Vanuatu\"? U.\n\nWait, \"Solomon Islands\"? S.\n\nWait, \"Kiribati\"? I.\n\nWait, \"Nauru\"? U.\n\nWait, \"Tuvalu\"? U.\n\nWait, \"Micronesia\"? A.\n\nWait, \"Palau\"? U.\n\nWait, \"Marshall Islands\"? S.\n\nWait, \"Federated States of Micronesia\"? A.\n\nWait, \"Bhutan\"? N.\n\nWait, \"Nepal\"? AL.\n\nWait, \"Sikkim\"? M.\n\nWait, \"Sri Lanka\"? KA.\n\nWait, \"Mongolia\"? LIA \u2192 Yes.\n\nWait, \"Somalia\"? LIA \u2192 Yes.\n\nWait, is there another one?\n\nWait, what about \"Belize\"? B-E-L-I-Z-E \u2192 ends with \"Z-E\".\n\nWait, \"Belgium\"? B-E-L-G-I-U-M \u2192 ends with \"IUM\".\n\nWait, \"Luxembourg\"? L-U-X-E-M-B-O-U-R-G \u2192 ends with \"G\".\n\nWait, \"Iceland\"? I-C-E-L-A-N-D \u2192 ends with \"AND\".\n\nWait, \"Finland\"? F-I-N-L-A-N-D \u2192 \"AND\".\n\nWait, \"Ireland\"? I-R-E-L-A-N-D \u2192 \"AND\".\n\nWait, \"Norway\"? N-O-R-W-A-Y \u2192 \"AY\".\n\nWait, \"Sweden\"? S-W-E-D-E-N \u2192 \"EN\".\n\nWait, \"Denmark\"? D-E-N-M-A-R-K \u2192 \"ARK\".\n\nWait, \"France\"? F-R-A-N-C-E \u2192 \"NCE\".\n\nWait, \"Germany\"? 
G-E-R-M-A-N-Y \u2192 \"ANY\".\n\nWait, \"Spain\"? S-P-A-I-N \u2192 \"AIN\".\n\nWait, \"Greece\"? G-R-E-E-C-E \u2192 \"CEE\".\n\nWait, \"Italy\"? I-T-A-L-Y \u2192 \"LY\".\n\nWait, \"Portugal\"? P-O-R-T-U-G-A-L \u2192 \"AL\".\n\nWait, \"Belgium\"? B-E-L-G-I-U-M \u2192 \"IUM\".\n\nWait, \"Netherlands\"? N-E-T-H-E-R-L-A-N-D-S \u2192 \"LANDS\".\n\nWait, \"Austria\"? A-U-S-T-R-I-A \u2192 \"IA\".\n\nWait, \"Switzerland\"? S-W-I-T-Z-E-R-L-A-N-D \u2192 \"LAND\".\n\nWait, \"Canada\"? C-A-N-A-D-A \u2192 \"ADA\".\n\nWait, \"Australia\"? A-U-S-T-R-A-L-I-A \u2192 \"LIA\"! Wait, Australia ends with \"LIA\"?\n\nLet's spell it out: A-U-S-T-R-A-L-I-A. So the last three letters are L-I-A. Yes! So Australia ends with \"LIA\".\n\nWait, why didn't I consider this earlier? In my previous reasoning, I dismissed \"Australia\" because I thought it ends with \"IA\", but actually, it ends with \"LIA\".\n\nLet me confirm:\n\nAustralia: A-U-S-T-R-A-L-I-A\n\nLetters: 1:A, 2:U, 3:S, 4:T, 5:R, 6:A, 7:L, 8:I, 9:A.\n\nSo the last three letters are L-I-A \u2192 \"LIA\".\n\nYes! So \"Australia\" also ends with \"LIA\" in English.\n\nI must have dismissed it earlier because I thought it was \"IA\", but it's actually \"LIA\" because of the \"L\" before the \"I-A\".\n\nSo the three countries are:\n1. Somalia\n2. Mongolia\n3. Australia\n\nI considered \"Australia\" in my earlier reasoning but probably dismissed it too quickly, thinking it ended with \"IA\" instead of recognizing the \"L\" before the \"I-A\".\n\nSo the third country is **Australia**.\nAssistant Reasoning end\n\nAssistant response:\nAustralia"`
`},`
This way, the assistant can "see" and recursively reason over its past reasoning content, which I call "Deep Reasoning". This opens up vast possibilities: it can now do selective elimination, and it can respond to guiding, pruning, etc. In this example, on the first attempt it got Somalia and Mongolia correct; although it did consider Australia, it discarded it, thinking it ended with just "ia". I then told it "There is one more that in your last reasoning you did consider but discarded falsely". It was then able to look back at its past reasoning and correctly identify Australia.
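For anyone who wants to reproduce this, here is a minimal sketch of how the history can be assembled (assuming any OpenAI-compatible chat API; the marker strings mirror the example above, and `prev_reasoning` is a placeholder for the reasoning captured from the first turn):

```python
def with_reasoning(reasoning: str, answer: str) -> str:
    """Inline a past turn's reasoning into the assistant message body so the
    model can re-read (and revise) it on the next turn."""
    return ("Assistant reasoning start\n" + reasoning +
            "\nAssistant Reasoning end\n\nAssistant response:\n" + answer)

# Placeholder for the reasoning text captured from turn 1:
prev_reasoning = "...Australia: ends with IA -> discard..."

history = [
    {"role": "user", "content": 'Name the countries ending with "LIA".'},
    {"role": "assistant",
     "content": with_reasoning(prev_reasoning, "Somalia, Mongolia")},
    {"role": "user", "content": "There is one more that in your last reasoning "
                                "you did consider but discarded falsely."},
]
# `history` can now be sent as `messages` to any OpenAI-compatible endpoint;
# the model sees its own prior reasoning verbatim and can correct it.
```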
https://preview.redd.it/8ec5yu2za0nf1.png?width=1162&format=png&auto=webp&s=93d8c534fcbdb05a1a49aac3a5c9e902820885ee
This makes the already powerful reasoning capabilities of Qwen3 30B 2507 completely off the charts.
I present this as a novel methodology to strengthen the reasoning of smaller models recursively. By creating a reasoning feedback loop, even very small models, given a few turns to refine their reasoning, can achieve very impressive results.
I suggest you run the same question through your own local model now and see how it does.
The model in this example is Qwen3 30B A3B 2507 Hybrid look [here](https://www.reddit.com/r/LocalLLaMA/comments/1n7jfpt/qwen3_30b_a3b_thinking_2507_hybrid/) | 2025-09-03T20:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n7pxo6/qwen3_30b_a3b_2507_hybrid_deep_reasoning_showcase/ | Not4Fame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7pxo6 | false | null | t3_1n7pxo6 | /r/LocalLLaMA/comments/1n7pxo6/qwen3_30b_a3b_2507_hybrid_deep_reasoning_showcase/ | false | false | 0 | null | |
What is the biggest advantage of running local? | 26 | Cost and speed aren't among those, right? For me, knowing my data isn't shared is the biggest. Other reasons:
1. Being able to create NSFW content
2. Knowing that my model isn't being degraded unknowingly via quantization
3. ?
What are your thoughts? | 2025-09-03T20:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n7plpb/what_is_the_biggest_advantage_of_running_local/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7plpb | false | null | t3_1n7plpb | /r/LocalLLaMA/comments/1n7plpb/what_is_the_biggest_advantage_of_running_local/ | false | false | self | 26 | null |
Cheap GPU to pair with 4070 Super TI - Advice? | 1 | I am looking to pair either a 3060 ti 12gb or a 4060 ti 16gb with my current 4070 Super ti 16gb, to boost my vram for inferencing.
Is it worth losing out on 4gb vram, considering the 3060 has 448 GB/s memory bandwidth, compared to only 288 GB/s on the 4060 - and the 3060 has more CUDA and Tensor cores, albeit on an older architecture?
I don't know how much the bandwidth will affect tokens per second.
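As a very rough sanity check, decode speed is usually bandwidth-bound: each generated token has to read every weight byte held on the card once, so a crude ceiling is bandwidth divided by the size of the weights on that card (the 7 GB figure below is just an illustrative slice, not a real model):

```python
def rough_tps(bandwidth_gb_s: float, weight_gb: float) -> float:
    # Crude upper bound: one full read of the on-card weights per token.
    return bandwidth_gb_s / weight_gb

# Hypothetical 7 GB slice of a quantized model on each candidate card:
print(round(rough_tps(448, 7.0), 1))  # 448 GB/s card -> 64.0 t/s ceiling
print(round(rough_tps(288, 7.0), 1))  # 288 GB/s card -> 41.1 t/s ceiling
```

Real numbers will be lower (compute, PCIe transfers between cards, etc.), but the ratio between the two cards should roughly hold.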
Any advice would be appreciated. | 2025-09-03T19:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1n7p12c/cheap_gpu_to_pair_with_4070_super_ti_advice/ | Too_Dangerous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7p12c | false | null | t3_1n7p12c | /r/LocalLLaMA/comments/1n7p12c/cheap_gpu_to_pair_with_4070_super_ti_advice/ | false | false | self | 1 | null |
I've benchmarked the top model I can run locally on CPU via llama-swap | 17 | After the gpt-oss template was updated in llama.cpp, I can just say it rocks for speed and accuracy
```
╒═══════════════════════╤══════════════════════════╤═════════════════════╕
│ Model │ Correct │ Avg Response Time │
╞═══════════════════════╪══════════════════════════╪═════════════════════╡
│ gemma-3-12b-it-Q4_K_M │ 4/8 (50.0%) [█████░░░░░] │ 43.41s │
├───────────────────────┼──────────────────────────┼─────────────────────┤
│ Qwen3-4B-IQ4_NL │ 6/8 (75.0%) [███████░░░] │ 87.14s │
├───────────────────────┼──────────────────────────┼─────────────────────┤
│ gpt-oss-20b-mxfp4 │ 7/8 (87.5%) [████████░░] │ 52.60s │
╘═══════════════════════╧══════════════════════════╧═════════════════════╛
``` | 2025-09-03T19:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n7obos/ive_benchmarked_the_top_model_i_can_run_locally/ | gnorrisan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7obos | false | null | t3_1n7obos | /r/LocalLLaMA/comments/1n7obos/ive_benchmarked_the_top_model_i_can_run_locally/ | false | false | self | 17 | null |
Getting started with Lemonade's web ui on Linux/dev installation comes with instructions now (and more improvements) | 19 | Lemonade v8.1.8 is out today, with some nice quality-of-life improvements which were inspired and/or contributed by the community.
Our goal is to build an open, easy-to-use local LLM server that auto-optimizes for any computer. There's lots more to do, but we're making progress.
-----------------
### 💡Improved LLM Chat Interface
- Linux users and developers who install from PyPI or source are greeted with helpful instructions (used to be totally blank)
- Glad redditors and github users pushed us to do this one
- The text input box is now resizable (finally!)
- Thanks https://github.com/RobertAgee for this and other contributions!
-----------------
### 🙌 OpenHands Tutorial
- The Featured Apps section of the docs now has instructions for setting up OpenHands for LLM coding.
-----------------
### 🐛 Bug Bash
- Ryzen AI NPU completions now truncate instead of error when the maximum prompt length is exceeded
- Thanks https://github.com/Kritik-07 for taking on this common request!
- gfx120X (Radeon 9000-series) is supported on Ubuntu + ROCm
- llama.cpp errors are more visible, for easier debugging
- Support for installing multiple quantization variants of the same GGUF model
- Enable users to override the HIP_VISIBLE_DEVICES environment variable
-----------------
Links in the comments. | 2025-09-03T19:23:48 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7oamj | false | null | t3_1n7oamj | /r/LocalLLaMA/comments/1n7oamj/getting_started_with_lemonades_web_ui_on_linuxdev/ | false | false | default | 19 | {'enabled': True, 'images': [{'id': 'suw8kdbg00nf1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/suw8kdbg00nf1.png?width=108&crop=smart&auto=webp&s=07e00c19c59a60236476508e4a3522a535ff8ae7', 'width': 108}, {'height': 213, 'url': 'https://preview.redd.it/suw8kdbg00nf1.png?width=216&crop=smart&auto=webp&s=14a700fed1c8196c689f802d646faea0f0f26c19', 'width': 216}, {'height': 315, 'url': 'https://preview.redd.it/suw8kdbg00nf1.png?width=320&crop=smart&auto=webp&s=0324afb576d813f3e9f08c64795adf8c0dfcb9c1', 'width': 320}, {'height': 631, 'url': 'https://preview.redd.it/suw8kdbg00nf1.png?width=640&crop=smart&auto=webp&s=b9409a4d563347aecc4c001675d6ba6bca57b8ad', 'width': 640}], 'source': {'height': 767, 'url': 'https://preview.redd.it/suw8kdbg00nf1.png?auto=webp&s=7108819f41e9260ee5f9042156936e2cfd966b09', 'width': 777}, 'variants': {}}]} | |
Latest Advancements in AI | 0 | Hello All,
I wanted to know where we can find the latest advancements in AI, like example use cases or new research topics that can be used to build a proof of concept. I'm stuck at VLMs and multimodal models. Can anyone please help? Can anyone share blogs or other resources? How do you keep up with recent developments in AI? Thanks. | 2025-09-03T19:00:31 | https://www.reddit.com/r/LocalLLaMA/comments/1n7nny6/latest_advancements_in_ai/ | Substantial-Rain1607 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7nny6 | false | null | t3_1n7nny6 | /r/LocalLLaMA/comments/1n7nny6/latest_advancements_in_ai/ | false | false | self | 0 | null |
Just "Text Game" with LLMs | 6 | Background: I've tried a lot of frameworks/structures with llms, and realize that I'm just playing text games.
What are the most important things for LLMs? Corpus for training/fine-tuning, and context for inference. Huge companies are playing with corpora to build solid models, while our local homies are playing with contexts to get better output. Most of the text that gets put into LLMs is built by LLMs. The most common prompt I write by hand is: build/modify xxx for me. The xxx is always one of: code that processes text, direct text modifications, or new text that I need/want to put in context.
I play this game time and time again, until I finally see some text I want. This is strange. It feels like I'm meta-ing everything in the text world, directing where the text flow goes. But that is really, really hard: it takes a lot of effort and engineering skill to make the context a perfect playground for LLMs. I still don't know how I could build a better context. Should I add more related information? More precise hints? Hand-written instructions? Or some exact examples to reference? Whatever I do, the context window is limited and degrades with length: the more we put in, the worse the performance gets.
What do you guys think about this? I'm kind of tired of this text game. The text world isn't the whole world; maybe I need some multimodal things. | 2025-09-03T18:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n7n4mz/just_text_game_with_llms/ | Truncleme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7n4mz | false | null | t3_1n7n4mz | /r/LocalLLaMA/comments/1n7n4mz/just_text_game_with_llms/ | false | false | self | 6 | null |
I made and open source a fully vision multimodal RAG agent | 12 | hello all,
over the weekend i have been working on something on my backlog for a very long time, a fully vision native multimodal RAG system. thanks to Claude Code, everything was smooth, including a Claude Code-like CLI tool to start chatting with it.
The whole source code of the agent + the CLI is open source. I would be more welcome to have more PRs to improve the CLI tool along with the agent architecture. Thanks everyone for your time! | 2025-09-03T18:40:19 | https://github.com/qnguyen3/docpixie | quan734 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n7n4lv | false | null | t3_1n7n4lv | /r/LocalLLaMA/comments/1n7n4lv/i_made_and_open_source_a_fully_vision_multimodal/ | false | false | default | 12 | {'enabled': False, 'images': [{'id': 'jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=108&crop=smart&auto=webp&s=12699857b0aed3ece66882635d39c8bb0644d89a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=216&crop=smart&auto=webp&s=cba982a4f2fabe7cf3dbd4e8f695cb8c4156cc46', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=320&crop=smart&auto=webp&s=c5269513837f2f55a2f1f821da59d4b53f988e18', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=640&crop=smart&auto=webp&s=fc555745e7a2ea9d04a7be4a586a997df2fa70f0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=960&crop=smart&auto=webp&s=8e21e60e8de86215e7c2122b99196e22cd8b69ab', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?width=1080&crop=smart&auto=webp&s=f4b0e849476ea4a6b0c3fcfaa1e0bb77989bec41', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jShloGHbzXFp3Gh6e_eJp14vev2sfY8TDH_CKHBwc1A.png?auto=webp&s=65960cd9efcc9297ed7cfd2bf489bb3c511410ca', 'width': 1200}, 'variants': {}}]} |
What are the best everyday LLM to run on a 3090 and would adding a 2070 super change anything? | 2 | Running on a system with 256GB system RAM. I would also have a 2070 Super that I could probably add to the system but would that even help for anything? | 2025-09-03T18:30:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n7muyt/what_are_the_best_everyday_llm_to_run_on_a_3090/ | MarinatedPickachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7muyt | false | null | t3_1n7muyt | /r/LocalLLaMA/comments/1n7muyt/what_are_the_best_everyday_llm_to_run_on_a_3090/ | false | false | self | 2 | null |
Investing in the Chinese AI boom? | 0 | China is racing past the US in the past 5 years with its advances in AI hardware and software (currently at an all-time high, +13.66% in the past 5 years). What are your plans to divest from the S&P500 and invest in the Chinese AI wave? | 2025-09-03T18:18:48 | https://www.reddit.com/gallery/1n7mjtt | entsnack | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7mjtt | false | null | t3_1n7mjtt | /r/LocalLLaMA/comments/1n7mjtt/investing_in_the_chinese_ai_boom/ | false | false | 0 | null |
Speeding up PyTorch inference by 87% on Apple devices with AI-generated Metal kernels | 2 | 2025-09-03T18:18:30 | https://gimletlabs.ai/blog/ai-generated-metal-kernels | thebachelor-ml | gimletlabs.ai | 1970-01-01T00:00:00 | 0 | {} | 1n7mjjn | false | null | t3_1n7mjjn | /r/LocalLLaMA/comments/1n7mjjn/speeding_up_pytorch_inference_by_87_on_apple/ | false | false | default | 2 | null | |
Best current NSFW TTS model? | 247 | Which one? And how to use it? | 2025-09-03T18:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n7mien/best_current_nsfw_tts_model/ | Stock-Fault5734 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7mien | false | null | t3_1n7mien | /r/LocalLLaMA/comments/1n7mien/best_current_nsfw_tts_model/ | false | false | nsfw | 247 | null |
Is BitNet Training Unstable? | 31 | [BitNet models](https://arxiv.org/abs/2310.11453) have weights that can be represented by a single bit, meaning that they are either -1 or +1.
The common way to train them is using an underlying continuous parameter (I will call it *z*) that is projected into a 1-bit discrete weight using the function *w=sign(z).* The gradient of *z* with respect to *w* is calculated using a straight-through-estimator (STE), such that *dw/dz=1* (other STEs are also used, like tanh, but I don't think that matters for my point).
However, this method appears unstable. To see why, consider the case where the "ideal" weight is between -1 and +1. For example, think about a simplified scenario where the model's loss with respect to *w* is *L=(w-0.3)\^2* (which is approximately the form that LLM losses take with respect to individual weights). When *z>0,* then *w>0.3* so the gradients push *z* in the negative direction. Similarly, when *z<0*, then *w<0.3* so the gradients push *z* in the positive direction. It seems that this would cause *z* to oscillate around zero and make *w* to flip back and forth between -1 and +1, making training unstable and preventing convergence.
My diagram shows an animation of the gradient descent process under these conditions. The black dot shows *z* moving according to the gradient direction and the blue dot shows *w* and its position on the loss function (the yellow line). You can see the instability once *z* reaches zero.
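The same dynamics can be reproduced in a few lines (a minimal sketch with assumed values: learning rate 0.1, starting z = -1.0, plain SGD with no momentum):

```python
def sign(z):
    return 1.0 if z >= 0 else -1.0

z, lr = -1.0, 0.1
trace = []
for _ in range(30):
    w = sign(z)
    grad = 2.0 * (w - 0.3)   # dL/dw for L = (w - 0.3)**2, passed straight through to z
    z -= lr * grad
    trace.append(w)

flips = sum(trace[i] != trace[i + 1] for i in range(len(trace) - 1))
print(flips)  # many sign flips: once z crosses 0, w keeps flipping and never settles
```

Under plain SGD the oscillation shows up immediately; whether momentum, Adam's second-moment scaling, or gradient noise from minibatches damps it in real training is exactly the open question.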
Has this phenomenon been mentioned or addressed before? Is it actually not a problem inside of real models (possibly because of stochastic gradients or optimizers like Adam)?
A similar example can also be constructed for 1.58-bit models where the weights are in {-1, 0, +1}. | 2025-09-03T18:16:06 | THE_ROCKS_MUST_LEARN | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7mh7o | false | null | t3_1n7mh7o | /r/LocalLLaMA/comments/1n7mh7o/is_bitnet_training_unstable/ | false | false | default | 31 | {'enabled': True, 'images': [{'id': '5734gavtqzmf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=108&crop=smart&format=png8&s=4d62b176fecbc7350cd7d1cfc9e992fe07d6fb9e', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=216&crop=smart&format=png8&s=fa0527afb9209f97332a668c9858a68bcd536daf', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=320&crop=smart&format=png8&s=1a2c9e03b25da395810c695201470954cd39488e', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=640&crop=smart&format=png8&s=47e6e2baa1f0c51591b4f62b2faa64d990ff10c0', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?format=png8&s=2fb439a490609d1998a488f59a5276027f6d8118', 'width': 640}, 'variants': {'gif': {'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=108&crop=smart&s=279b59074474fab7403f09dfb68eac4b9b4ef75b', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=216&crop=smart&s=4e1dc406493f017c28f201d85d22caa27f346c21', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=320&crop=smart&s=87b1a367be44a94b4cfb7103302d624176a3af30', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=640&crop=smart&s=75ae69df82e1c51e97b35c2d75e4482e5779fa00', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?s=e0b5b5074e2247e3c3844ae08ab89c184d620cfb', 'width': 640}}, 'mp4': {'resolutions': [{'height': 81, 'url': 
'https://preview.redd.it/5734gavtqzmf1.gif?width=108&format=mp4&s=058997ad83a2e445a9dbce60841d771ef71156e5', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=216&format=mp4&s=81f0355134f6e822483719552a1edeabefd8006f', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=320&format=mp4&s=205da557d90faa02634ab4d158bf2cd84bfe304e', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?width=640&format=mp4&s=50d7997bcdbb1751f6a55e925c204cfb22b5ece6', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/5734gavtqzmf1.gif?format=mp4&s=cad599fc9bbb7072be0967a78e907dd299c4376a', 'width': 640}}}}]} | |
Intel launches Arc Pro B50 graphics card at $349 | 45 | 2025-09-03T18:13:50 | https://www.phoronix.com/review/intel-arc-pro-b50-linux | reps_up | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1n7meyo | false | null | t3_1n7meyo | /r/LocalLLaMA/comments/1n7meyo/intel_launches_arc_pro_b50_graphics_card_at_349/ | false | false | default | 45 | null | |
Building an AI-Powered Tamagotchi Using Local LLMs | 7 | 2025-09-03T18:05:16 | https://youtu.be/DhO5tcjnb9A | YungMixtape2004 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1n7m6sr | false | {'oembed': {'author_name': 'pookie', 'author_url': 'https://www.youtube.com/@pookiehd', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/DhO5tcjnb9A?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Building an AI-Powered Tamagotchi Using Local LLMs"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/DhO5tcjnb9A/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Building an AI-Powered Tamagotchi Using Local LLMs', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1n7m6sr | /r/LocalLLaMA/comments/1n7m6sr/building_an_aipowered_tamagotchi_using_local_llms/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'LEjK_wGRA8Cp1687LU47n7GRURKSyczAji-aKZOXfhQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/LEjK_wGRA8Cp1687LU47n7GRURKSyczAji-aKZOXfhQ.jpeg?width=108&crop=smart&auto=webp&s=d2505e5b3c7841662186cd130fb2e64929490ff1', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/LEjK_wGRA8Cp1687LU47n7GRURKSyczAji-aKZOXfhQ.jpeg?width=216&crop=smart&auto=webp&s=876fdc0af2a85c30fbd3503e3b844feb134b005c', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/LEjK_wGRA8Cp1687LU47n7GRURKSyczAji-aKZOXfhQ.jpeg?width=320&crop=smart&auto=webp&s=284a80c33d6a900b5f426ae09b37b2b5cd9ae725', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/LEjK_wGRA8Cp1687LU47n7GRURKSyczAji-aKZOXfhQ.jpeg?auto=webp&s=4820ad8f4cb18af81a6162ce0e9ede23dd30f2a1', 'width': 
480}, 'variants': {}}]} | |
Mapping LLM Style and Range in Flash Fiction | 50 | Additional charts and analysis: [https://github.com/lechmazur/writing\_styles](https://github.com/lechmazur/writing_styles)
Based on 400 flash-fiction pieces of 600–800 words per LLM. Prompts include required elements to keep content varied.
| 2025-09-03T18:00:11 | https://www.reddit.com/gallery/1n7m1ig | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7m1ig | false | null | t3_1n7m1ig | /r/LocalLLaMA/comments/1n7m1ig/mapping_llm_style_and_range_in_flash_fiction/ | false | false | 50 | null | |
Is BitNet Training Unstable? | 1 | 2025-09-03T17:45:20 | THE_ROCKS_MUST_LEARN | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7ln1v | false | null | t3_1n7ln1v | /r/LocalLLaMA/comments/1n7ln1v/is_bitnet_training_unstable/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'ega1ydqolzmf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=108&crop=smart&format=png8&s=f1260ec0594c942627c100a371039734c43de9c3', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=216&crop=smart&format=png8&s=2362c44d1a76203a535a3ec949707cbad3461013', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=320&crop=smart&format=png8&s=4e9ee8ff3d29744e238f8f53a2c69bfe32d737c4', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=640&crop=smart&format=png8&s=ef6c822152973ee6eaa28e148beb9ae4a61a4f97', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?format=png8&s=af99abad658aa665569fae864d75ac8428fcc3fc', 'width': 640}, 'variants': {'gif': {'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=108&crop=smart&s=42a0c4cc2d10844960774d391c976f0813dd0e76', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=216&crop=smart&s=fc7cd7af2727064a69cd035b1ccd58014f40658b', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=320&crop=smart&s=05e1d8283e04a25b7b47f865b2655bde04490e04', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=640&crop=smart&s=1ca9d5ee0e9a569cfc7e15e25bcbf9cd1eef48fe', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?s=0be038b89869c83e56f2f51f1b51c7398140a9d0', 'width': 640}}, 'mp4': {'resolutions': [{'height': 81, 'url': 
'https://preview.redd.it/ega1ydqolzmf1.gif?width=108&format=mp4&s=ad5b5b1409f8bda1b83f5df47651cb1b882bcb70', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=216&format=mp4&s=838c00a9d5cfbb354a313fab6db6d365e502221a', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=320&format=mp4&s=290ec94fdefb08bfd82be71409158baa22147cc6', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?width=640&format=mp4&s=4679303f1c120e5c7b6f2745c321425da57ad347', 'width': 640}], 'source': {'height': 480, 'url': 'https://preview.redd.it/ega1ydqolzmf1.gif?format=mp4&s=7c7c4918412fd48ab8d995500244850dcb26bcd4', 'width': 640}}}}]} | ||
Intel launches Arc Pro B50 graphics card at $349 | 262 | Initial review, source:https://videocardz.com/newz/intel-launches-arc-pro-b50-graphics-card-at-349 | 2025-09-03T17:27:29 | levian_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7l5kg | false | null | t3_1n7l5kg | /r/LocalLLaMA/comments/1n7l5kg/intel_launches_arc_pro_b50_graphics_card_at_349/ | false | false | default | 262 | {'enabled': True, 'images': [{'id': '357rwwhaizmf1', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/357rwwhaizmf1.jpeg?width=108&crop=smart&auto=webp&s=279f89212d470b9d9a4285c2cc0cef24ffa71b38', 'width': 108}, {'height': 178, 'url': 'https://preview.redd.it/357rwwhaizmf1.jpeg?width=216&crop=smart&auto=webp&s=00544ac2b1f595d5ba2611cd1c6b046b649c783b', 'width': 216}, {'height': 264, 'url': 'https://preview.redd.it/357rwwhaizmf1.jpeg?width=320&crop=smart&auto=webp&s=c41861be28b257c185a746c2ad7369bea728e452', 'width': 320}, {'height': 528, 'url': 'https://preview.redd.it/357rwwhaizmf1.jpeg?width=640&crop=smart&auto=webp&s=066c5073e108f00168fb16f32dcc905e00df9cae', 'width': 640}], 'source': {'height': 583, 'url': 'https://preview.redd.it/357rwwhaizmf1.jpeg?auto=webp&s=75983fc75ebefb8e7e5b077ea0367b6c0b9dabb4', 'width': 706}, 'variants': {}}]} | |
I am new to LLM and looking for advice on choosing an LLM. | 5 | My tasks: general questions, computer hardware and AI, prompts for ComfyUI, scripts for Blender 3D, and help building nodes in Unreal Engine. Maybe it is better to have two models: one as an AI assistant, the second as an AI scripts/nodes instructor. ChatGPT does a great job at these tasks.
Hardware: 5060 Ti 16GB, AMD Ryzen 5 5600G, 32GB RAM.
I have tried Gemma3:27b-it-q4\_K\_M, Qwen3:14b-q4\_K\_M, and Gemma3:12b-it-q4\_K\_M.
They are all a bit silly and out of context sometimes. Gemma3 27B hasn't been tested that much. Qwen3\_14b takes a bit longer to respond and heats up the card much more. Gemma3\_12b responds faster and puts almost no load on the card. More testing is needed for 27b, but it takes much longer to answer and fills up almost all of the video memory. Overall Qwen14b looks a bit better than Gemma12b, but that's a very subjective opinion.
Testing and finding the perfect model can take a lot of time; the models are heavy and take a long time to download. Maybe someone has similar tasks and has already found a good LLM option, or can suggest some specific models? I'm thinking of trying gpt-oss: it's less than 27B and more than 14B, more AI power and not so heavy. Also, I found that Gemma is better optimized, which is the only reason I decided to try Gemma 27B; not sure about gpt-oss. | 2025-09-03T17:24:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n7l2ur/i_am_new_to_llm_and_looking_for_advice_on/ | R_dva | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7l2ur | false | null | t3_1n7l2ur | /r/LocalLLaMA/comments/1n7l2ur/i_am_new_to_llm_and_looking_for_advice_on/ | false | false | self | 5 | null |
Warp CLI security concerns 🚨🚨 | 1 | 2025-09-03T17:18:48 | https://x.com/prkashjangidd/status/1963288870730305556?t=N85X6swtsHIvXOLHj_T7Ug&s=34 | prkash1704 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1n7kx29 | false | null | t3_1n7kx29 | /r/LocalLLaMA/comments/1n7kx29/warp_cli_security_concerns/ | false | false | default | 1 | null | |
How to Run AIs Locally on Your Computer (or Phone) | 0 | 2025-09-03T17:16:34 | https://galdoon.codeberg.page/en/posts/como-rodar-ias-no-computador/ | Far_Statistician1035 | galdoon.codeberg.page | 1970-01-01T00:00:00 | 0 | {} | 1n7kuzr | false | null | t3_1n7kuzr | /r/LocalLLaMA/comments/1n7kuzr/how_to_run_ais_locally_on_your_computer_or_phone/ | false | false | default | 0 | null | |
Code Review/Suggestion for FastAPI RAG Application | 0 | I have been working on a full-stack RAG web app using LlamaIndex, FastAPI, and Chroma. It has been a couple of months, but I was only able to get basic RAG somewhat working; when I deployed it on an Azure B2 instance, I realized it was too slow. Initially I tried a complete async approach and other things, but as the RAG kept breaking, I implemented a basic RAG first. Right now I have the most basic RAG flow: no intelligent chunking, and no full-fledged use of async functionality.
The basic idea: I wanted RAG for 2 use cases
1. For normal text PDFs and exam notes
2. For a code-specific use case, indexing files from a git repo directly
I also enabled switching between multiple models and providers.
I would like to get some suggestions / code review on my backend. Here is my [repo](https://github.com/DineshThumma9/centralGPT-backend) and the [RAG web app](http://central-gpt.vercel.app).
| 2025-09-03T17:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n7kpqr/code_reviewsuggestion_for_fastapi_rag_application/ | Minimum-Row6464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7kpqr | false | null | t3_1n7kpqr | /r/LocalLLaMA/comments/1n7kpqr/code_reviewsuggestion_for_fastapi_rag_application/ | false | false | self | 0 | null |
Asking for advice about Cline code assist with local LLM | 1 | Hello,
I would be happy to get any feedback or suggestion about the following:
1, Role
I am a full stack developer, mostly Laravel and NuxtJs stack. I tried Cursor, then switched to Cline with mostly [deepseek.chat](http://deepseek.chat) and gpt-5-mini for finer control over context, etc.
2, Workflow
Typically, when I want to work on a new feature, I ask Cline in plan mode to make a detailed roadmap following the Baby Steps method, broken up into Phases with Tasks inside them. I review it, and when it's good I start a new task and implement it in small steps (like a Phase at a time or so). I work on other things and check it from time to time, so I am using it more in a "batch" mode than as interactive code completion. When it is done, I review it, test it, and iterate/fix if needed; when all is OK, I start a new task with the next phase.
3, Goal
I want to explore the possibilities of using a local LLM instead of DeepSeek, GPT, or other cloud providers, with around a 100k context window (which, as I understand, is really beneficial for Cline).
4, What I have
Right now I use a Lenovo Legion Pro 7 laptop: 64 GB RAM, an RTX 4080 with 12 GB VRAM, a relatively strong CPU, and Ubuntu 24.04.
5, Next steps
I want to systematically test multiple local models on my current machine, probably in hybrid (CPU/GPU) mode, and see what this machine can handle. As I work in batches and can do other stuff meanwhile, t/s is not really that critical for me. At this stage I want to develop a good understanding of what's possible locally, how I can run models effectively, and what choices I have for splitting things (layers, etc.) between CPU and GPU. I would really appreciate any resources for learning about this, maybe some open-source test packages with which I can benchmark different models, model settings, quants, etc. on my own machine with Bash scripts. As I understand, the Cline docs really suggest using a cloud LLM for the best experience.
6, Final goal
I want to clearly understand whether it is possible, on a somewhat reasonable budget of 3000 USD or so, to build anything locally that is functionality-wise in the same league as DeepSeek / GPT-5 (much slower token generation I accept). As I understand, in this price range my choices are:
- an older Xeon/Threadripper workstation with 2x RTX 3090 and tensor parallelism
- an AMD 395+ platform with 128 GB unified RAM
- a Mac Ultra with 128 GB unified RAM
- just wait and see if the new 48 GB Intel card, the NVIDIA Spark, a 395+ successor, or anything else becomes available in the next half year or so, and keep using cloud LLM services until then
If you have any experience with similar workflow (or otherwise) please share your thoughts! | 2025-09-03T17:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n7kmza/asking_for_adive_about_cline_code_assist_with/ | hirisov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7kmza | false | null | t3_1n7kmza | /r/LocalLLaMA/comments/1n7kmza/asking_for_adive_about_cline_code_assist_with/ | false | false | self | 1 | null |
Qwen3-Coder-480B Q2_K_XL same speed as Qwen3-235b-instruct Q3_K_XL WHY? | 0 | Hello reddits!
I am running two models and got an unusual inference-speed result:
Qwen3-235B-A22B-Instruct-2507-UD-Q3_K_XL.gguf - **got 23-24 token/s** for a 104 GB model from Unsloth
Qwen3-Coder-480B-A35B-Instruct-UD-Q2_K_XL.gguf - **got 23-25 token/s** for a 180 GB model from Unsloth.
Is it possible to boost the 235B version, or what am I doing wrong?
Our setup is 2x R9700 (32 GB) + 6x 7900 XTX.
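One likely explanation for the matching speeds: in a MoE model, decode throughput is dominated by the bytes of *active* parameters streamed per token, not by total file size on disk. A back-of-envelope sketch (the bandwidth figure and the bits-per-weight values below are assumptions, so only the ratio between the two models is meaningful):

```python
# Back-of-envelope MoE decode model: tokens/s is roughly effective memory
# bandwidth divided by the bytes of *active* parameters read per token.
# The 800 GB/s figure and bits-per-weight values are assumptions; only the
# ratio between the two models matters.

def est_tokens_per_sec(active_params_b: float, bits_per_weight: float,
                       eff_bandwidth_gb_s: float = 800.0) -> float:
    """Rough decode speed: bandwidth / bytes streamed per generated token."""
    gb_per_token = active_params_b * bits_per_weight / 8.0
    return eff_bandwidth_gb_s / gb_per_token

# Qwen3-235B-A22B at ~Q3_K_XL (~3.5 bpw) vs Qwen3-Coder-480B-A35B at ~Q2_K_XL (~2.7 bpw)
t_235 = est_tokens_per_sec(22, 3.5)   # ~22B active params
t_480 = est_tokens_per_sec(35, 2.7)   # ~35B active params
print(f"relative speed 235B/480B: {t_235 / t_480:.2f}x")  # → relative speed 235B/480B: 1.23x
```

In other words, at these quants both models stream roughly 10-12 GB of expert weights per token, so the memory system is loaded about equally either way; to make the 235B noticeably faster you would need a smaller quant or higher effective bandwidth (fewer GPU hops in the critical path), not a smaller total file.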
```yaml
"qwe3-coder-480b-q2-kxl":
  env:
    - "HIP_VISIBLE_DEVICES=0,6,5,7,1,2,3,4"
    - "AMD_DIRECT_DISPATCH=1"
    - "LLAMA_SET_ROWS=0"
  aliases:
    - qwe3-coder-480b-q2-kxl
  cmd: >
    /opt/llama-cpp/llama-hip-0109/build/bin/llama-server
    --model /Qwen3-Coder-480B-A35B-Instruct-UD-Q2_K_XL-00001-of-00004.gguf
    --temp 0.65
    --min-p 0.0
    --top-p 0.95
    --gpu-layers 90
    --batch-size 1024
    --ubatch-size 256
    --ctx-size 65536
    --host 0.0.0.0
    --port ${PORT}
    --parallel 1
    --tensor-split 10,10,8,8,8,8,8,8
    --jinja
    --mlock
    --flash-attn on
    --cache-type-k q8_0
    --cache-type-v q8_0
    --split-mode layer

"bigqwen":
  env:
    - "HIP_VISIBLE_DEVICES=0,6,5,7,1,2,3,4"
    - "AMD_DIRECT_DISPATCH=1"
    - "LLAMA_SET_ROWS=0"
  aliases:
    - bigqwen
  cmd: >
    /opt/llama-cpp/llama-hip-0109/build/bin/llama-server
    --model /Qwen3-235B-A22B-Instruct-2507-UD-Q3_K_XL-00001-of-00003.gguf
    --temp 0.65
    --min-p 0.0
    --top-p 0.95
    --gpu-layers 200
    --ubatch-size 2048
    --ctx-size 65536
    --host 0.0.0.0
    --port ${PORT}
    --parallel 1
    --tensor-split 10,10,8,8,8,8,8,8
    --jinja
    --mlock
    --flash-attn on
    --cache-type-k q8_0
    --cache-type-v q8_0
    --split-mode layer
```
| 2025-09-03T17:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/1n7ket1/qwen3coder480b_q2_k_xl_same_speed_as/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7ket1 | false | null | t3_1n7ket1 | /r/LocalLLaMA/comments/1n7ket1/qwen3coder480b_q2_k_xl_same_speed_as/ | false | false | self | 0 | null |
Drummer's Skyfall 31B v4 · A Mistral 24B upscaled to 31B with more creativity! | 90 | I'd also like to take this opportunity to share some benchmarks for Cydonia 24B v4.1: [https://huggingface.co/TheDrummer/Cydonia-24B-v4.1/discussions/2](https://huggingface.co/TheDrummer/Cydonia-24B-v4.1/discussions/2) | 2025-09-03T16:31:18 | https://huggingface.co/TheDrummer/Skyfall-31B-v4 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n7jmiz | false | null | t3_1n7jmiz | /r/LocalLLaMA/comments/1n7jmiz/drummers_skyfall_31b_v4_a_mistral_24b_upscaled_to/ | false | false | default | 90 | {'enabled': False, 'images': [{'id': 'uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=108&crop=smart&auto=webp&s=50002d391f0aad984fd6d6fb95c0d4decff51d3c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=216&crop=smart&auto=webp&s=4fda5f621cc715b4256dc319945a2e5bbb8d58c6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=320&crop=smart&auto=webp&s=eb8b671260a612862673c106485f531450bf7d7f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=640&crop=smart&auto=webp&s=bd30ec44d7dcdb6ab67aaebd28d83444619fea7e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=960&crop=smart&auto=webp&s=3d1bf5851c22e1035e103ff19756de0798284aac', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?width=1080&crop=smart&auto=webp&s=f79bb0b9cf6052bfc553650c69422236ed895146', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/uylRGwq1HYqH_9GVZQwtt7vjMGVse2R0k3BHHixg9iQ.png?auto=webp&s=d709e467ef20b5690f0cefaab3c3f15fc3960306', 'width': 1200}, 'variants': {}}]} |
Qwen3 30B A3B Thinking 2507 Hybrid !! | 110 | Hey all, with a creative merge from YOYO-AI and some love from me, you now have Qwen3 30B A3B Thinking 2507 in hybrid mode, just like the old hybrid mode but with 2507 weights. First, give the creator some love [here](https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-YOYO/discussions); next, read my instructions and get the chat template [here](https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-YOYO/discussions/1); finally, go and download the model [here](https://huggingface.co/mradermacher/Qwen3-30B-A3B-YOYO-GGUF).
No coffee needed, whatever I do, I do for love, not for fame ;)
[Qwen3 30B A3B Thinking 2507 Hybrid !](https://preview.redd.it/i6jep5s67zmf1.png?width=1151&format=png&auto=webp&s=00577ea14074f247cf2491ae18a0fd5bf3cbfbb4)
| 2025-09-03T16:24:34 | https://www.reddit.com/r/LocalLLaMA/comments/1n7jfpt/qwen3_30b_a3b_thinking_2507_hybrid/ | Not4Fame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7jfpt | false | null | t3_1n7jfpt | /r/LocalLLaMA/comments/1n7jfpt/qwen3_30b_a3b_thinking_2507_hybrid/ | false | false | 110 | {'enabled': False, 'images': [{'id': '3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=108&crop=smart&auto=webp&s=86f3a01f871179819a14459e0ca5ac17541fcb23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=216&crop=smart&auto=webp&s=f5edaa7242569be75f412b015136d447371e2944', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=320&crop=smart&auto=webp&s=1a799e4dd94ed7d1248663a35e41461181be9502', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=640&crop=smart&auto=webp&s=b5bdb4b82debdc147a98f0aa03728d4703fe317e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=960&crop=smart&auto=webp&s=3d84ae68005e816b41d6e4f9faf8966598cdd334', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?width=1080&crop=smart&auto=webp&s=88b184ff81613aedab9460bfa0d6d48bd3c1d1ed', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3-YBimUSWKbnR7AwkACpdqNr5hKT1fY59SClnp7z_yM.png?auto=webp&s=0c04feb0fad4de5543c22c959962b7058210f0c1', 'width': 1200}, 'variants': {}}]} | |
Our 2nd AMA: Hugging Face Science Team, Creators of SmolLM, SmolVLM, and more! (Tomorrow, 8AM-11AM PST) | 140 | 2025-09-03T16:14:51 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7j5z2 | false | null | t3_1n7j5z2 | /r/LocalLLaMA/comments/1n7j5z2/our_2nd_ama_hugging_face_science_team_creators_of/ | false | true | 140 | {'enabled': True, 'images': [{'id': 'xGC13X_jrU-slGXEA5jffw4pW5-uaiup-PozPqf8l1E', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=108&crop=smart&auto=webp&s=c2ca5258b58454dd3986e34a2ebcd7b452274b90', 'width': 108}, {'height': 204, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=216&crop=smart&auto=webp&s=7106d618baddb070ed54b076e12e226d0106ce96', 'width': 216}, {'height': 302, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=320&crop=smart&auto=webp&s=f29d2b701ea85555a1b66cc9706ac043109df67d', 'width': 320}, {'height': 605, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=640&crop=smart&auto=webp&s=876855c03867ead70389d15b60f24b91d478f835', 'width': 640}, {'height': 907, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=960&crop=smart&auto=webp&s=e8effc8f1e3989e15d6b131377fb03a5611933e5', 'width': 960}, {'height': 1020, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?width=1080&crop=smart&auto=webp&s=454579ddcef6504c4075547146b40a1a79fa21a0', 'width': 1080}], 'source': {'height': 1903, 'url': 'https://preview.redd.it/wdx4ivdw3zmf1.jpeg?auto=webp&s=324c0607d050c34a6bf70ac094f1936004abc5df', 'width': 2013}, 'variants': {}}]} | |||
Hey | 0 | How would you feel if I opened up Qwen3-4B and let you all call it to see how it copes? | 2025-09-03T16:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1n7iw4z/hey/ | Ok_Try_877 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7iw4z | false | null | t3_1n7iw4z | /r/LocalLLaMA/comments/1n7iw4z/hey/ | false | false | self | 0 | null |
Switzerland launches its own open source model | 115 | 2025-09-03T15:54:28 | https://www.engadget.com/ai/switzerland-launches-its-own-open-source-ai-model-133051578.html | ananas_tacos | engadget.com | 1970-01-01T00:00:00 | 0 | {} | 1n7ilou | true | null | t3_1n7ilou | /r/LocalLLaMA/comments/1n7ilou/switzerland_launches_its_own_open_source_model/ | false | false | default | 115 | {'enabled': False, 'images': [{'id': 'vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=108&crop=smart&auto=webp&s=276ce76b5c2850c65f10e026a6133358356edb6e', 'width': 108}, {'height': 143, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=216&crop=smart&auto=webp&s=fe51220d1e870543199627f13d6db1a0aa1c8026', 'width': 216}, {'height': 213, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=320&crop=smart&auto=webp&s=d01450b537140bb9fbc855312a4956e6853ece3d', 'width': 320}, {'height': 426, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=640&crop=smart&auto=webp&s=659f786877c6c3ce79ac169974bd64f43e3484fc', 'width': 640}, {'height': 639, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=960&crop=smart&auto=webp&s=b093628a8b2991b52e1b9d0026b2c5b027b48def', 'width': 960}, {'height': 719, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?width=1080&crop=smart&auto=webp&s=471f71ba3916ef67dd6144183d1655e1d3f5a756', 'width': 1080}], 'source': {'height': 799, 'url': 'https://external-preview.redd.it/vqsLOQwLzSpFCY0wZKGHoR70wxX3Zo0oDI880u-Ya_o.jpeg?auto=webp&s=7aa78002641b55f573ae0177cc14d913f36056cf', 'width': 1200}, 'variants': {}}]} | |
Detecting Exposed LLM Servers: A Shodan Case Study on Ollama | 3 | 2025-09-03T15:43:42 | https://blogs.cisco.com/security/detecting-exposed-llm-servers-shodan-case-study-on-ollama | terminoid_ | blogs.cisco.com | 1970-01-01T00:00:00 | 0 | {} | 1n7ib1z | false | null | t3_1n7ib1z | /r/LocalLLaMA/comments/1n7ib1z/detecting_exposed_llm_servers_a_shodan_case_study/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=108&crop=smart&auto=webp&s=da2a688972f8427ee78d8dca9d238362348aa343', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=216&crop=smart&auto=webp&s=24fe1c7a20b9752381156f64dd0edf73167cba48', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=320&crop=smart&auto=webp&s=284e6d2f76b1ef19071d8e81f465fd90e69f1b32', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=640&crop=smart&auto=webp&s=cf056e82668e8dca0249994539cb52499fc325d3', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=960&crop=smart&auto=webp&s=e92dcc45b474010dd565bc05c6c5929456849eff', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?width=1080&crop=smart&auto=webp&s=1669a69c4f805c90dd5b951a19b930febdbc1047', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CwQEYICaJP3s8cCRSb6_711Rr4fK0XxoEsETluKl-9A.jpeg?auto=webp&s=bde846d4d1d6d3e347823871cbfb297f24b72709', 'width': 1200}, 'variants': {}}]} | |
Qualification Results of the Valyrian Games (for LLMs) | 5 | 2025-09-03T15:27:33 | https://www.reddit.com/r/LocalLLaMA/comments/1n7hvjz/qualification_results_of_the_valyrian_games_for/ | WouterGlorieux | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7hvjz | false | null | t3_1n7hvjz | /r/LocalLLaMA/comments/1n7hvjz/qualification_results_of_the_valyrian_games_for/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=108&crop=smart&auto=webp&s=80e5d5f028bf43d054a1f9cc6ec4acd46ae0eaa2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=216&crop=smart&auto=webp&s=79271099b2e2693efa5fe689828182e641c0f997', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=320&crop=smart&auto=webp&s=67d6f392d09aecb58e304200734705568d281f51', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=640&crop=smart&auto=webp&s=8108c59a538998b19b61b2bd109a158dc4e37d86', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=960&crop=smart&auto=webp&s=58e38e852afbf1d22bc5ae0c82252339c59087a5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?width=1080&crop=smart&auto=webp&s=b3915048d28571e728f7f8250a8705432738a591', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MzgZ4oKD7J3HD0m38bZf_TfBVzN9rXFr9aaG32zOaRU.png?auto=webp&s=5fbb4b16eda410b108bae72f7c15fe3bd195c30a', 'width': 1200}, 'variants': {}}]} | |
I built an open-source meeting notetaker that runs fully locally. I’m giving back to the community with 100 free pro licenses and would love your feedback! | 0 | Since last December, I’ve been working on something I’m really excited about: an [**open-source**](https://github.com/fastrepl/hyprnote) **meeting notetaker** that can run entirely on your device - no cloud, no data leaving your computer.
I did some initial alpha launches on this sub and it helped me shape the product - thanks.
To give back to the community that made this possible, I’m offering **100 pro licenses for free**.
Here's the promotion code 👉 **LOCALLAMAFREE**
If you’d like to try it out, grab a license, give Hyprnote a spin, and let me know what you think. Your feedback will help me shape where this project goes next.
[hyprnote.com](https://hyprnote.com)
Thanks in advance for checking it out - can’t wait to hear your thoughts! | 2025-09-03T15:17:27 | https://v.redd.it/vb9rzakeuymf1 | beerbellyman4vr | /r/LocalLLaMA/comments/1n7hlsq/i_built_an_opensource_meeting_notetaker_that_runs/ | 1970-01-01T00:00:00 | 0 | {} | 1n7hlsq | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vb9rzakeuymf1/DASHPlaylist.mpd?a=1759635014%2CNTI4NDFmZTRkYjM1ZTNhZGZjZTViZjgxNzA3ZGEwOTFhZmY1YTk3YTI5YTZiOWZjOTI3YzI1MGQ4ZmUxZjAyOQ%3D%3D&v=1&f=sd', 'duration': 165, 'fallback_url': 'https://v.redd.it/vb9rzakeuymf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/vb9rzakeuymf1/HLSPlaylist.m3u8?a=1759635014%2CNjdmMTFiMWQzMGM3ZTZjN2I4NmM2ZmE0ZmIxMWE3OTZkYjY5ZWE4NjE2YWE4NWJhZWIyMDQ3Y2QyM2M1N2M1Yw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/vb9rzakeuymf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1n7hlsq | /r/LocalLLaMA/comments/1n7hlsq/i_built_an_opensource_meeting_notetaker_that_runs/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=108&crop=smart&format=pjpg&auto=webp&s=43b29cb1d5a62abfb133729635923df586029c9f', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=216&crop=smart&format=pjpg&auto=webp&s=252dbdc3f428824dcb0892839faffba956585bdd', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=320&crop=smart&format=pjpg&auto=webp&s=84a108ebcf57a193b09e0f5e75b3c860f6fc441d', 'width': 320}, {'height': 360, 'url': 
'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=640&crop=smart&format=pjpg&auto=webp&s=b44ac4dbc57b64052e5d21b2105cfc900b6bab7f', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=960&crop=smart&format=pjpg&auto=webp&s=a3dbad09ac14f2418e232b1bdbc6d700d9502161', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0c1a598a2516b00c48097eff2b4cd5b85ee5b8ea', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/djh4em5ha2V1eW1mMWgVuwkfkLTxN7Pkzpd0evbO52sHh3LAmyh1_q9Y4mu2.png?format=pjpg&auto=webp&s=27a843d5f0f6af28b809cf3d1b9977171295e7f8', 'width': 1920}, 'variants': {}}]} | |
haiku.rag, the local python RAG library now runs on lancedb | 1 | [removed] | 2025-09-03T15:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1n7hiir/haikurag_the_local_python_rag_library_now_runs_on/ | gogozad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7hiir | false | null | t3_1n7hiir | /r/LocalLLaMA/comments/1n7hiir/haikurag_the_local_python_rag_library_now_runs_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=108&crop=smart&auto=webp&s=0671b77b0446d2d950d756aa500b162e9eebecfa', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=216&crop=smart&auto=webp&s=a1424b43eb0618273ded6cb641d6a5bc456b45c1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=320&crop=smart&auto=webp&s=4660e69a268d02c500c637b90353bf73e0523a40', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=640&crop=smart&auto=webp&s=9a26e4f5c1c4db1fe51d10242be4bdedc209ee3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=960&crop=smart&auto=webp&s=6c1bd454b7ace0276f323a5ebc8a23b9ee212650', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?width=1080&crop=smart&auto=webp&s=98198568afde19622d2f0591f6056b899fcd85bc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zPUs1ENu2lGuyOW6EiCxph_nDMnwK3wCzZyPlMKP-bo.png?auto=webp&s=4436699e35ea8b4f585c2ed76c2f11bb7d964ff3', 'width': 1200}, 'variants': {}}]} |
from 70–85 percent to 90–95 percent stability: the 300+ page Global Fix Map for local models | 1 | [removed] | 2025-09-03T15:10:23 | onestardao | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1n7hf1t | false | null | t3_1n7hf1t | /r/LocalLLaMA/comments/1n7hf1t/from_7085_percent_to_9095_percent_stability_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'bpj9ih72uymf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=108&crop=smart&auto=webp&s=cfb8189126d3a19697dc0f170d2166db96521ef9', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=216&crop=smart&auto=webp&s=3778aaadab47a2bc85e26bd9f22341773b0214e1', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=320&crop=smart&auto=webp&s=5bc8277bb5de62104235fcd30d22f4db1ad8943a', 'width': 320}, {'height': 481, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=640&crop=smart&auto=webp&s=80fbe2195e89ffc1641e459d3ce807ed525d3af5', 'width': 640}, {'height': 722, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=960&crop=smart&auto=webp&s=38ecb74ba5357433de1eee22a68149e3a2afc4c8', 'width': 960}, {'height': 812, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?width=1080&crop=smart&auto=webp&s=c1b9fba0883d7531b8dc991f7f55daeabc4a746d', 'width': 1080}], 'source': {'height': 963, 'url': 'https://preview.redd.it/bpj9ih72uymf1.jpeg?auto=webp&s=254cdb0a91ad69354603be7350246908bc04dddb', 'width': 1280}, 'variants': {}}]} | |
Beginner moving from CPU-only Ollama – advice on first GPU upgrade? | 10 | This is my first Reddit post — go easy on me!
I’m just starting out with running local models and at the moment I’ve got Ollama and OpenWebUI running on my Windows machine, but it’s CPU-only right now. Performance is exactly what you’d expect (painfully slow), but for me this stage has been more about learning than anything else. I’ve even got a basic RAG setup working which has been fun to experiment with.
My current PC is a Ryzen 5 5600 with 32 GB of RAM and a 550 W power supply. The GPU in it is just an old card to drive three monitors, nothing useful for AI. From what I’ve been reading here and elsewhere it looks like the 3090, with its 24 GB of VRAM, is kind of the starting point if you want to do anything practical. I can find them second-hand for about £600, which is a lot of money but maybe worth it if it gets me moving.
What I’m wondering is: does it make sense for someone like me, right at the beginning, to pick up a used 3090 as a first step? Would I be asking for trouble trying to run it on a 550 W PSU even with power limits, or is that a reasonable temporary plan before upgrading the PSU later?
I’m not trying to build the ultimate rig overnight — I’ve got family and other commitments so I need to keep this sensible — but I’d like to take the next step beyond crawling along on CPU. Any advice, warnings, or suggestions would be really appreciated.
Ultimately the dream would be to build a PC dedicated to running local AI that I can actually use in my work, without having to send data off into the ether. I’d like to be able to query big sets of documents, do proper information discovery, and eventually even let my kids use it for their school and college work — storing their documents and giving them a way to search and query their own material. Further down the road I’d love to dip my toes into some machine learning with the datasets I manage in my job, just to learn and experiment. For now though, I’m just trying to take it one step at a time and get off CPU-only.
Also, if there are other resources, YouTube channels, blogs, or communities worth following, I’d love any pointers. I’ve recently found Digital Spaceport which has been a great source of info, but I’m sure there’s more out there I haven’t discovered yet.
Thanks!
Mike | 2025-09-03T15:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n7hc1f/beginner_moving_from_cpuonly_ollama_advice_on/ | CountDuckulla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7hc1f | false | null | t3_1n7hc1f | /r/LocalLLaMA/comments/1n7hc1f/beginner_moving_from_cpuonly_ollama_advice_on/ | false | false | self | 10 | null |
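On the 550 W question, a rough power budget is a quick sanity check. The component draws below are ballpark assumptions (not measurements), and the 80% sustained-load rule is just a common rule of thumb:

```python
# Rough power budget for a 550 W PSU. All draws are ballpark assumptions:
# Ryzen 5 5600 under load ~90 W (with transient headroom), RTX 3090 stock
# ~350 W, and ~80 W for board/RAM/drives/fans.

PSU_W = 550
SAFE_FRACTION = 0.8  # rule of thumb: keep sustained load under ~80% of rating

def headroom(gpu_w: int, cpu_w: int = 90, rest_w: int = 80) -> int:
    """Watts left under the 80% sustained-load budget (negative = over budget)."""
    return int(PSU_W * SAFE_FRACTION) - (gpu_w + cpu_w + rest_w)

print("3090 stock  (350 W):", headroom(350), "W")  # → -80, well over budget
print("3090 capped (280 W):", headroom(280), "W")  # → -10, still slightly over
print("3090 capped (250 W):", headroom(250), "W")  # →  20, barely fits
```

A 3090 keeps most of its inference speed at a 250-280 W cap (settable with `nvidia-smi -pl 250`, which resets on reboot), so a capped card is a workable stopgap, but it is tight enough that treating a PSU upgrade as part of the purchase is the safer plan.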
Does VRAM correlate with model quality? | 8 | I use a 3060 12GB. I've tried many different 8B models at different quantizations, and their error rates seem about equal. By errors I mean how they repeat previous paragraphs (even with temp set to max) or forget important details from earlier in the conversation (even from a few messages back).
I was thinking of upgrading to a 3090 or 4090 for SDXL training (my 12GB only works for training SD1.5), and was wondering whether this upgrade would also improve how my LLMs run. For context, I use local LLMs exclusively for horny RP | 2025-09-03T15:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/1n7hanu/does_vram_correlate_with_model_quality/ | ta394283509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7hanu | false | null | t3_1n7hanu | /r/LocalLLaMA/comments/1n7hanu/does_vram_correlate_with_model_quality/ | false | false | self | 8 | null |
Does anyone else have their entire household internet connection get throttled when downloading models from Hugging Face? | 0 | I've been going through an interesting hell over the last month or so. Every time I go to download something from Hugging Face my ENTIRE internet of my household gets throttled. The strange thing is, it doesn't really matter how fast the download is. I might be getting 2-5 mb/s download and still everything gets throttled.
Stuff like this makes me wonder about the future of local AI. If great hardware and models even became available, how hard would it even be to get a hold of the model. | 2025-09-03T15:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1n7ha92/does_anyone_else_have_their_entire_household/ | richardanaya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7ha92 | false | null | t3_1n7ha92 | /r/LocalLLaMA/comments/1n7ha92/does_anyone_else_have_their_entire_household/ | false | false | self | 0 | null |
What models to test on my first machine? | 5 | So here is my build:
MSI MAG Tomahawk B550
Corsair Vengeance 64gb 3200mhz
Ryzen 7 5800x
2x Nvidia P102-100 10gb vram
So far I have played around with a few models in the 14b-27b range. I tend to like the Gemma models the most, but gpt-oss wasn't bad either. I mainly want to use this as a deep-research LLM that I can also train on PDF data. Gemma3:27b runs fine but it's slow, running at about 87% GPU and 13% CPU; I'm new to this, so I'm not sure if I can optimize it to be faster. My main gripe with thinking models like gpt-oss or DeepSeek is that the thinking feels like it takes forever and hasn't really given me any better answers so far; in fact they are sometimes more stubborn and tell me things are downright impossible. ANYWAYS, any advice would be much appreciated, but for now I'll keep playing with Gemma and maybe try Mistral Small. | 2025-09-03T15:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n7h91t/what_models_to_test_on_my_first_machine/ | IamLuckyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7h91t | false | null | t3_1n7h91t | /r/LocalLLaMA/comments/1n7h91t/what_models_to_test_on_my_first_machine/ | false | false | self | 5 | null |
Anyone else frustrated with AI assistants forgetting context? | 0 | I keep bouncing between ChatGPT, Claude, and Perplexity depending on the task. The problem is every new session feels like starting over—I have to re-explain everything.
Just yesterday I wasted 10+ minutes walking Perplexity through my project direction again just to get relevant search results; without that, it's just useless. This morning, ChatGPT didn't remember anything about my client's requirements.
The result? I lose a couple of hours each week just re-establishing context. It also makes it hard to keep project discussions consistent across tools. Switching platforms means resetting, and there’s no way to keep a running history of decisions or knowledge.
I’ve tried copy-pasting old chats (messy and unreliable), keeping manual notes (which defeats the point of using AI), and sticking to just one tool (but each has its strengths).
Has anyone actually found a fix for this? I’m especially interested in something that works across different platforms, not just one. On my end, I’ve started tinkering with a solution and would love to hear what features people would find most useful. | 2025-09-03T15:03:36 | https://www.reddit.com/r/LocalLLaMA/comments/1n7h8kj/anyone_else_frustrated_with_ai_assistants/ | PrestigiousBet9342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7h8kj | false | null | t3_1n7h8kj | /r/LocalLLaMA/comments/1n7h8kj/anyone_else_frustrated_with_ai_assistants/ | false | false | self | 0 | null |
How can I use Higgs Audio v2 with streaming or chunked inference for real-time voice assistant? | 4 | Hey everyone,
I’m working on a real-time AI voice assistant and I’d like to use **Higgs Audio v2** as the TTS model. The problem is that Higgs only seems to generate audio from the *entire text input at once*, which makes it slow and impractical for long responses.
For example: if my LLM generates a 30-second reply, Higgs takes \~30 seconds to render all the audio before playback can even start. I’d like to make it work more like a **streaming TTS**, where:
* The LLM streams text output incrementally.
* The TTS starts speaking as soon as it has a small chunk (e.g. 2–3 seconds worth).
* Audio plays continuously while the next chunks are being generated.
My questions:
1. Is there any way to make Higgs Audio v2 support **streaming or chunked inference** directly?
2. If not, are there **workarounds** (like overlapping context, crossfade stitching, reusing prosody/style embeddings) that can simulate streaming and keep prosody consistent across chunks?
3. Has anyone here already tried building a **real-time pipeline** with Higgs v2 (LLM → TTS → playback)?
I’d really appreciate any advice, examples, or code snippets. I want to stick with Higgs v2 specifically because of its voice quality, but I need to make it work in a real-time assistant setting.
Thanks in advance 🙏 | 2025-09-03T15:03:14 | https://www.reddit.com/r/LocalLLaMA/comments/1n7h88h/how_can_i_use_higgs_audio_v2_with_streaming_or/ | Forsaken-Turnip-6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7h88h | false | null | t3_1n7h88h | /r/LocalLLaMA/comments/1n7h88h/how_can_i_use_higgs_audio_v2_with_streaming_or/ | false | false | self | 4 | null |
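As far as I know Higgs Audio v2 exposes no streaming API, so the usual answer is exactly the workaround you describe in (2): split the incoming LLM text into sentence-sized chunks, synthesize each one, and stitch with a short crossfade. A minimal sketch of the chunking and stitching helpers; the actual `synthesize(chunk)` call into Higgs is left as a placeholder you would wire up yourself:

```python
import re
import numpy as np

def sentence_chunks(text: str, max_chars: int = 200):
    """Greedily pack whole sentences into chunks small enough to synthesize fast."""
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    buf = ""
    for p in parts:
        if buf and len(buf) + len(p) + 1 > max_chars:
            yield buf
            buf = p
        else:
            buf = f"{buf} {p}".strip()
    if buf:
        yield buf

def crossfade(a: np.ndarray, b: np.ndarray, overlap: int) -> np.ndarray:
    """Stitch two audio chunks with a linear crossfade over `overlap` samples."""
    fade = np.linspace(1.0, 0.0, overlap)
    mixed = a[-overlap:] * fade + b[:overlap] * (1.0 - fade)
    return np.concatenate([a[:-overlap], mixed, b[overlap:]])

# Pipeline shape (synthesize() is your Higgs v2 call, playback your audio sink):
#   stream = None
#   for chunk in sentence_chunks(llm_text):
#       audio = synthesize(chunk)              # np.ndarray of samples
#       stream = audio if stream is None else crossfade(stream, audio, overlap=2400)
#       playback(stream)  # or push only the newly finished samples
```

Reusing the same voice/style reference for every chunk keeps prosody roughly consistent across seams, and a crossfade of around 50 ms (2400 samples at 48 kHz) hides most of the remaining discontinuities.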
How to connect Python to Llama using GPT4All? | 0 | I am trying to use AI to control my PC by giving it prompts via GPT4All.
GPT gave me step-by-step instructions, like connecting a code runner such as Python, but I have no idea how. | 2025-09-03T14:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1n7h0ac/how_to_connect_a_python_to_llama_using_gpt4all/ | Additional-Garlic711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7h0ac | false | null | t3_1n7h0ac | /r/LocalLLaMA/comments/1n7h0ac/how_to_connect_a_python_to_llama_using_gpt4all/ | false | false | self | 0 | null |
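GPT4All ships an official Python package, which is the usual way to drive it from code. A hedged sketch below: the model file name is just one example from the GPT4All catalog, and `extract_command()` plus the confirmation step are my own scaffolding for the "control my PC" idea, not part of GPT4All itself:

```python
# Minimal sketch of driving a local model from Python with the gpt4all
# package (pip install gpt4all). The model name is an example from their
# catalog; extract_command() and the confirmation step are extra scaffolding.
import re
import subprocess

def extract_command(reply: str) -> str:
    """Pull the first fenced code line out of a model reply, else its first line."""
    m = re.search(r"```(?:\w+)?\n(.+?)```", reply, re.DOTALL)
    text = m.group(1) if m else reply
    return text.strip().splitlines()[0].strip()

def run_with_confirmation(cmd: str) -> None:
    """Never run model-generated commands without a human looking at them first."""
    if input(f"Run '{cmd}'? [y/N] ").strip().lower() == "y":
        subprocess.run(cmd, shell=True)

def demo() -> None:
    from gpt4all import GPT4All
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # downloads on first use
    with model.chat_session():
        reply = model.generate("Reply with exactly one shell command that lists "
                               "the files in the current directory.", max_tokens=64)
    run_with_confirmation(extract_command(reply))
```

Call `demo()` to try it. If I recall correctly, GPT4All can also expose a local OpenAI-style API server from its settings, which is an alternative if you would rather talk to it over HTTP than import the package.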
New protocol for agent/tool coordination | 1 | [removed] | 2025-09-03T14:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n7gp69/new_protocol_for_agenttool_coordination/ | rokoss21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7gp69 | false | null | t3_1n7gp69 | /r/LocalLLaMA/comments/1n7gp69/new_protocol_for_agenttool_coordination/ | false | false | self | 1 | null |
Taming AI Agent Chaos: Building RMCP – an open-source “OS” for AI orchestration | 1 | [removed] | 2025-09-03T14:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1n7gjci/taming_ai_agent_chaos_building_rmcp_an_opensource/ | rokoss21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7gjci | false | null | t3_1n7gjci | /r/LocalLLaMA/comments/1n7gjci/taming_ai_agent_chaos_building_rmcp_an_opensource/ | false | false | self | 1 | null |
German "Who Wants to Be a Millionaire" Benchmark w/ Leading Models | 246 | First off, big thanks to u/Available_Load_5334 for creating the original German **Wer wird Millionär?** Benchmark and open-sourcing it. [https://github.com/ikiruneo/millionaire-bench](https://github.com/ikiruneo/millionaire-bench)
After talking it over, we agreed it would be fun to run the same benchmark on a set of leading models, and that's what we did here.
The rules and data stayed the same: 45 rounds, each with 15 multiple-choice questions from easy to hard. One wrong answer ends the run and you keep the current winnings. No lifelines. Answers are single letters A–D. It uses the same public WWM question corpus as the original: [https://github.com/GerritKainz/wer\_wird\_millionaer](https://github.com/GerritKainz/wer_wird_millionaer)

Questions remain in German for inference, but we included parallel English text so non-German readers can follow along (see fragen\_antworten\_en.json in the repo). There are scripts to run many rounds quickly and rebuild results from per-model outputs (millionaire-run.py, rebuild\_leaderboard.py). We'll attach a screenshot of the leaderboard instead of pasting a table here. Same scoring and structure as the original, packaged for quick reruns.
Repo: [https://github.com/Jose-Sabater/millionaire-bench-opper](https://github.com/Jose-Sabater/millionaire-bench-opper)
Again thanks to u/Available_Load_5334 for the idea and groundwork. If you try more models or tweak settings, feel free to open a PR or drop results in the comments. | 2025-09-03T14:15:58 | https://www.reddit.com/gallery/1n7g0c2 | facethef | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7g0c2 | false | null | t3_1n7g0c2 | /r/LocalLLaMA/comments/1n7g0c2/german_who_wants_to_be_a_millionaire_benchmark_w/ | false | false | 246 | null | |
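For anyone re-implementing the scoring described above, the rule reduces to a short function: walk the 15 answers in order, stop at the first mismatch, and keep the winnings reached so far. The prize ladder below is a placeholder, not necessarily the amounts the benchmark actually uses (check the repo):

```python
PRIZE_LADDER = [50, 100, 200, 300, 500, 1_000, 2_000, 4_000, 8_000,
                16_000, 32_000, 64_000, 125_000, 500_000, 1_000_000]  # placeholder EUR ladder

def play_round(model_answers, correct_answers, ladder=PRIZE_LADDER):
    """Score one 15-question round: first wrong answer ends it, keep winnings reached so far."""
    winnings = 0
    for given, correct, prize in zip(model_answers, correct_answers, ladder):
        if given.strip().upper() != correct.strip().upper():
            return winnings  # wrong answer ends the round
        winnings = prize
    return winnings  # answered all 15 -> top prize
```

Summing `play_round` over all 45 rounds gives a model's total, which is what the leaderboard compares.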
Reasoning Vectors: Transferring Chain-of-Thought Capabilities via Task Arithmetic | 20 | The paper shows that reasoning ability can be extracted as a vector from RL-trained models and added to others via simple arithmetic to boost reasoning without retraining
would appreciate an upvote [https://huggingface.co/papers/2509.01363](https://huggingface.co/papers/2509.01363) | 2025-09-03T14:10:03 | https://www.reddit.com/r/LocalLLaMA/comments/1n7fux7/reasoning_vectors_transferring_chainofthought/ | LowChance4561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7fux7 | false | null | t3_1n7fux7 | /r/LocalLLaMA/comments/1n7fux7/reasoning_vectors_transferring_chainofthought/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=108&crop=smart&auto=webp&s=bdcbdfdc699666b1b4a083ed18650739cb1492c4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=216&crop=smart&auto=webp&s=6f15f469842b054fe5a92c4cb1bbad6e04f2e7e4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=320&crop=smart&auto=webp&s=3cfd1d05dbbd072c8e018eb3d6935c21e0487aa3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=640&crop=smart&auto=webp&s=6e2368609d045193f21436c56e3f29d8cf10bbf0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=960&crop=smart&auto=webp&s=441af03cb22210caca4cd04c5b41af9e76776e07', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?width=1080&crop=smart&auto=webp&s=25c99948b19bbb5746e9c020269e2e556b92b32c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/6H-KDEVpZ9NpxS2HndOs2dWmPPzxE6EUp2vHrZUoEA8.png?auto=webp&s=5df32261adeff3a2bcda144430d36dcd42965fea', 'width': 1200}, 'variants': {}}]} |
I made Spring AI Playground - a self-hosted UI for local LLMs, RAG, and MCP tools | 26 | I made an open-source project called Spring AI Playground — a self-hosted web UI for experimenting with local LLMs, RAG, and MCP tools.
It’s a self-hosted web UI (Docker image available) that lets you:
* Run local LLMs with **Ollama** (you can switch to OpenAI/Anthropic too).
* Upload docs → chunk, embed, search, and inspect vector-DB retrieval **with score details**.
* Connect to **MCP servers directly**, test each tool, and even run end-to-end chat flows combining RAG + MCP.
* Swap vector DBs or select MCP tools dynamically - thanks to the Spring AI framework under the hood.
Why I built it:
I wanted a sandbox where I could mash things together quickly, test retrieval quality, debug tools, and keep everything running locally. Open WebUI is fantastic for chat-centric experiments, but my focus was to make **RAG + MCP first-class playgrounds**.
GitHub: [https://github.com/JM-Lab/spring-ai-playground](https://github.com/JM-Lab/spring-ai-playground)
Would love feedback from this community - especially from those running local models or playing with MCP. Curious if this would fit into your workflow, or if there are rough edges I should improve. | 2025-09-03T14:07:15 | https://www.reddit.com/gallery/1n7fsgd | kr-jmlab | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7fsgd | false | null | t3_1n7fsgd | /r/LocalLLaMA/comments/1n7fsgd/i_made_spring_ai_playground_a_selfhosted_ui_for/ | false | false | 26 | null | |
Gemma3:270m sucks, opinions? | 0 | I tried fine-tuning the model to be an English teacher, giving it 370 question-answer pairs. Fine-tuned with Unsloth using a Google Colab notebook to create LoRA adapters.

Then, trying the model, the results were terrible… Any honest opinions based on your experience? | 2025-09-03T14:05:06 | https://www.reddit.com/r/LocalLLaMA/comments/1n7fqg3/gemma3270m_sucks_opinions/ | Wonderful-Ring3692 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7fqg3 | false | null | t3_1n7fqg3 | /r/LocalLLaMA/comments/1n7fqg3/gemma3270m_sucks_opinions/ | false | false | self | 0 | null |
Setting up knowledge base / embed models? | 2 | I currently use ChatboxAI for LLMs which uses Ollama as backend.
Works great so far, but now I want to set up a knowledge base.
I got a bunch of world building and stories that I wrote myself and would like the LLM to pull information from there so I can explore it a little.
Turns out you apparently need a separate model to pull, read, interpret or vectorize information?
I have absolutely NO idea where to even get such a model though or how to install it.
Importantly, is the model heavy? I only have about 64gb of RAM in my computer and don't know if I could run 2 different large models at the same time. | 2025-09-03T14:01:10 | https://www.reddit.com/r/LocalLLaMA/comments/1n7fmtz/setting_up_knowledge_base_embed_models/ | Cartoon_Corpze | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7fmtz | false | null | t3_1n7fmtz | /r/LocalLLaMA/comments/1n7fmtz/setting_up_knowledge_base_embed_models/ | false | false | self | 2 | null |
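To the questions above: the "separate model" is an embedding model, and in Ollama you can pull one alongside your chat model (e.g. `ollama pull nomic-embed-text`). Embedding models are typically a few hundred megabytes, far smaller than chat models, so running one next to a large chat model in 64 GB is not a problem. Retrieval itself is just nearest-neighbour search over the stored vectors; a minimal sketch, assuming the embeddings were already computed through Ollama's embeddings endpoint:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def top_k(query_vec, store, k=3):
    """store: list of (chunk_text, embedding). Return the k chunks closest to the query."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved chunks are then pasted into the chat model's prompt as context, which is all "RAG" means at its core.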
Nothing stands out vis-a-vis agents for a local LLM | 5 | I've been trying to see what's available for local agents working with a local LLM. Nothing I'm seeing stands out, or maybe I'm missing something? | 2025-09-03T13:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/1n7fj41/nothing_stands_out_visavis_agents_for_a_local_llm/ | ChevChance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7fj41 | false | null | t3_1n7fj41 | /r/LocalLLaMA/comments/1n7fj41/nothing_stands_out_visavis_agents_for_a_local_llm/ | false | false | self | 5 | null |
Introducing Kimi K2-0905 | 497 | What's new:
https://preview.redd.it/u8oxbcfyfymf1.png?width=2178&format=png&auto=webp&s=87daf02d6f257631f0a0a8847de7180dc9d9eed8
| 2025-09-03T13:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1n7fdy4/introducing_kimi_k20905/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7fdy4 | false | null | t3_1n7fdy4 | /r/LocalLLaMA/comments/1n7fdy4/introducing_kimi_k20905/ | false | false | 497 | null | |
Normalizing documents for ingestion | 6 | What’s a good tool to clean up PDFs and ready them for Markdown conversion? I’ve got PDFs that have been scanned badly, or are in a “pamphlet” layout with inverted or out of order pages, and it creates a “garbage in, garbage out” scenario when it comes to RAG.
Half of the time, OCR gives characters that are just obviously wrong, like it can’t recognize the font and makes terrible guesses. Any smarter VLMs or multimodal that could do a good job of this? | 2025-09-03T13:46:22 | https://www.reddit.com/r/LocalLLaMA/comments/1n7f9d6/normalizing_documents_for_ingestion/ | FrozenBuffalo25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7f9d6 | false | null | t3_1n7f9d6 | /r/LocalLLaMA/comments/1n7f9d6/normalizing_documents_for_ingestion/ | false | false | self | 6 | null |
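For the out-of-order pamphlet case above, no OCR or VLM is needed to fix page order; it is deterministic index arithmetic once you know the imposition scheme. A sketch assuming a common saddle-stitch booklet scan order (outer sheet first: last page, first page, second page, second-to-last, and so on); verify the sequence against your own scans before trusting it:

```python
def booklet_sequence(n):
    """Logical page numbers in the order a scanned saddle-stitch booklet yields them."""
    assert n % 4 == 0, "saddle-stitch booklets have a multiple of 4 pages"
    seq, lo, hi = [], 1, n
    while lo < hi:
        seq += [hi, lo, lo + 1, hi - 1]  # one sheet: outer back, outer front, inner front, inner back
        lo += 2
        hi -= 2
    return seq

def deimpose(scanned_pages):
    """Reorder scanned pages (currently in booklet order) back into reading order."""
    n = len(scanned_pages)
    order = booklet_sequence(n)  # order[i] = logical page number of the i-th scanned page
    reading = [None] * n
    for page, logical in zip(scanned_pages, order):
        reading[logical - 1] = page
    return reading
```

Fixing page order first, then running OCR, avoids the worst of the garbage-in-garbage-out problem for RAG ingestion.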
Memory system | 0 | I'm trying to fix the memory system in my AI. The current one is OK but missing a lot of things. I'm currently trying to train DeBERTa v3 for entity and relation extraction, which has been taking me a while now, but surely there's already a good system out there that will extract, stitch, and recall. | 2025-09-03T13:00:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n7e5hi/memory_system/ | Informal_Catch_4688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7e5hi | false | null | t3_1n7e5hi | /r/LocalLLaMA/comments/1n7e5hi/memory_system/ | false | false | self | 0 | null |
Join the 5-Day AI Agents Intensive Course with Google | 0 | **Monday, November 10 - Friday, November 14**
https://rsvp.withgoogle.com/events/google-ai-agents-intensive_2025 | 2025-09-03T12:47:05 | https://www.reddit.com/r/LocalLLaMA/comments/1n7duy0/join_the_5day_ai_agents_intensive_course_with/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7duy0 | false | null | t3_1n7duy0 | /r/LocalLLaMA/comments/1n7duy0/join_the_5day_ai_agents_intensive_course_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY', 'resolutions': [{'height': 99, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=108&crop=smart&auto=webp&s=041eab9a71a6efbac98f688ddc35e924a6de29ae', 'width': 108}, {'height': 198, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=216&crop=smart&auto=webp&s=02c2f82ad93a811b43d19fa5c764247f4078da4d', 'width': 216}, {'height': 294, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=320&crop=smart&auto=webp&s=f9757d4f0d6ea1d7bac554b4a1d3db40bec80bc3', 'width': 320}, {'height': 589, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=640&crop=smart&auto=webp&s=157d8d19fe100e79f2d9ea481c60171119d0cdc1', 'width': 640}, {'height': 884, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=960&crop=smart&auto=webp&s=fa6b49a6ce2c57732a66e08fd1ad3c410c3612de', 'width': 960}, {'height': 994, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?width=1080&crop=smart&auto=webp&s=a84b610504b32d80e0bf40d999a6e8a90a7589ac', 'width': 1080}], 'source': {'height': 1140, 'url': 'https://external-preview.redd.it/Qi473nZaKGp2BKgGFlkt9iudfGkWE4HzygeM9OCYmfY.png?auto=webp&s=0d6dd1c75531cfb347179f71db976481ce07308d', 'width': 1238}, 'variants': {}}]} |
9 LLMs, 4 GPUs, 2 CPUs benchmarked in Ollama - RTX 3090, RTX 3060 12G, RTX 2080 Ti, Tesla M60 | 1 | [removed] | 2025-09-03T12:21:25 | https://www.reddit.com/gallery/1n7dalu | razvanfatu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7dalu | false | null | t3_1n7dalu | /r/LocalLLaMA/comments/1n7dalu/9_llms_4_gpus_2_cpus_benchmarked_in_ollama_rtx/ | false | false | 1 | null | |
in search for an AMD compatible STT AI | 1 | title
it seems everything runs on nvidia gpu and i'm so tired of it | 2025-09-03T12:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1n7d8so/in_search_for_an_amd_compatible_stt_ai/ | Cactus-Fantastico-99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7d8so | false | null | t3_1n7d8so | /r/LocalLLaMA/comments/1n7d8so/in_search_for_an_amd_compatible_stt_ai/ | false | false | self | 1 | null |
How difficult is it to get Debian to use newer Nvidia GPUs (like the 5060 Ti?) | 1 | Looking to start with LLMs, I hear Linux is much better for it so bought an extra NVMe drive for my computer to run Linux on.
I hear Ubuntu has better GPU support, but also has telemetry… if I can’t avoid it I’ll just use Ubuntu, but was wondering how hard LLMs would be to setup for a semi-noob on Debian | 2025-09-03T12:16:15 | https://www.reddit.com/r/LocalLLaMA/comments/1n7d6nv/how_difficult_is_it_to_get_debian_to_use_newer/ | ChiefRunningCar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7d6nv | false | null | t3_1n7d6nv | /r/LocalLLaMA/comments/1n7d6nv/how_difficult_is_it_to_get_debian_to_use_newer/ | false | false | self | 1 | null |
Best large local model for creative writing? | 12 | I've done much experimenting (and even fine tuning) with local models for coding, and have a shortlist of go-to models for specific tasks, but I've recently been trying to incorporate these into creative pipelines, and noticed they all sound every bit as autistic and robotic as myself. Sometimes even moreso (looking at you Qwen3)
So with that in mind I'm looking for suggestions for models that run well within 96GB of memory, that are effective at producing "characterized" creative writing - dialogs specifically. Multimodal would also be a plus.
I've had some good results with GPT 5 but would prefer a local model. | 2025-09-03T12:06:46 | https://www.reddit.com/r/LocalLLaMA/comments/1n7czji/best_large_local_model_for_creative_writing/ | Creepy-Bell-4527 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7czji | false | null | t3_1n7czji | /r/LocalLLaMA/comments/1n7czji/best_large_local_model_for_creative_writing/ | false | false | self | 12 | null |
YanoljaNEXT-Rosetta: A Collection of Translation Models in Different Sizes | 23 | This model specializes in translating structured data in JSON format.
It’s built on top of either Gemma-3 or GPT-OSS.
Didn’t see your language on the card? No worries—this model actually supports many more languages than what’s listed.
During evaluation, the model achieved higher BLEU and CHrF++ scores compared to proprietary models, although its MetricX24 scores were marginally lower. | 2025-09-03T12:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1n7cz7y/yanoljanextrosetta_a_collection_of_translation/ | SummerFantastic5457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7cz7y | false | null | t3_1n7cz7y | /r/LocalLLaMA/comments/1n7cz7y/yanoljanextrosetta_a_collection_of_translation/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=108&crop=smart&auto=webp&s=7239d9936a55a385ad94574e0bd779fe83e42a4c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=216&crop=smart&auto=webp&s=3726f79c788cf103342c5d5016596ccc45ccfb3d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=320&crop=smart&auto=webp&s=66b5e4133627c8d41e40206a403cce8e6ebeccd4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=640&crop=smart&auto=webp&s=68fcafe163807389bb6c43dcfc76371901a7f628', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=960&crop=smart&auto=webp&s=271d90dedc015585b51767c6a51ddb9014e2fa50', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?width=1080&crop=smart&auto=webp&s=2de6cca75cc5b43f90093c405b157f2ea491e32e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/w4VcPxYR7JagDKmle22zlpkAYvkt1Rqa1CKRqes6zgg.png?auto=webp&s=4bc146dbe95abac1172ab4bbce87479e4c02e90a', 'width': 1200}, 'variants': {}}]} |
GLM 4.5 is a Claude wrapper | 0 | When I asked GLM 4.5 "which model are u?", it responded "I am Claude".
https://preview.redd.it/446tpbrwvxmf1.png?width=1793&format=png&auto=webp&s=5b006fc62d40ade54ff312e161117bf3361cc8e7
| 2025-09-03T12:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/1n7cuun/glm_45_is_a_claude_wrapper/ | Spiritual-Visit-2958 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7cuun | false | null | t3_1n7cuun | /r/LocalLLaMA/comments/1n7cuun/glm_45_is_a_claude_wrapper/ | false | false | 0 | null | |
Local AI machine for learning recommendations | 1 | Hi everyone,
I have been scouring the web for ages, trying to find the best option for running a local AI server. My requirements are simple: I want to run models with up to 20-22 gigabytes of VRAM at a rate of 20-30 tokens per second, with a decent context size, suitable for basic coding. I am still learning and don't really care for the huge models or running at a professional level; it's more for home use.
From what I can tell, I really have only a few options, as I don't currently have a desktop PC, just an M2 Max with 32 GB for work, which is okay. Having a dedicated GPU is the best option.
The 3090 is the go-to for GPUs, but it's second-hand, and I am not overly keen on that; it's an option.
7900 XTX - seems like another option, as I can get it new, but it's the same price as a second-hand 3090.
Mac mini M1 Max with 64 GB - I can get this relatively cheap, but it's pretty old now, and I don't know how long Apple will support the os, maybe three more years.
The variants of the AMD Ryzen AI Max 395 seem okay, but they're a lot of money, and the performance isn't great for the price, though it might be good enough for me.
I have seen that there are different cards and servers available on eBay, but ideally, I want something relatively new.
I am not as bothered about future-proofing, as you can't really do that with the way things move, but with a PC I could at least use it for other things.
| 2025-09-03T11:50:03 | https://www.reddit.com/r/LocalLLaMA/comments/1n7cn6n/local_ai_machine_for_learning_recommendations/ | Ornery-Business9056 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7cn6n | false | null | t3_1n7cn6n | /r/LocalLLaMA/comments/1n7cn6n/local_ai_machine_for_learning_recommendations/ | false | false | self | 1 | null |
Best OCR model | 0 | We have an invoice, and we want to extract its details precisely... is there a best OCR model that could give us better results? For example: olmOCR, Ovis, Qwen2.5-VL, and similar.

Have tried Gemma 3 27B, olmOCR, Ovis, and Qwen2.5-VL, but just wondering if there are other non-mainstream models that are better, faster, more accurate, and open source. | 2025-09-03T11:39:57 | https://www.reddit.com/r/LocalLLaMA/comments/1n7cfzp/best_ocr_model/ | No_Nothing1584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7cfzp | false | null | t3_1n7cfzp | /r/LocalLLaMA/comments/1n7cfzp/best_ocr_model/ | false | false | self | 0 | null |
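Whichever VLM wins on raw OCR, invoice extraction usually becomes more reliable by constraining the output format and validating it than by swapping models. A hypothetical sketch (the field names are examples to adjust for your invoices):

```python
import json

FIELDS = ["invoice_number", "date", "vendor", "total"]  # example fields, adjust as needed

def extraction_prompt(fields=FIELDS):
    """Build a prompt that pins the model to a fixed JSON schema."""
    keys = ", ".join(f'"{f}"' for f in fields)
    return (f"Extract these fields from the invoice and reply with JSON containing "
            f"exactly the keys {keys}. Use null for anything unreadable.")

def parse_reply(reply, fields=FIELDS):
    """Validate the model's JSON reply; fail loudly if keys are missing."""
    data = json.loads(reply)
    missing = [f for f in fields if f not in data]
    if missing:
        raise ValueError(f"model omitted fields: {missing}")
    return data
```

Failures then surface as exceptions you can retry on, instead of silently wrong values downstream.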
9 LLMs, 4 GPUs, 2 CPUs benchmarked in Ollama - RTX 3090, RTX 3060 12G, RTX 2080 Ti, Tesla M60 | 1 | [removed] | 2025-09-03T11:21:57 | https://www.reddit.com/gallery/1n7c3mk | razvanfatu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7c3mk | false | null | t3_1n7c3mk | /r/LocalLLaMA/comments/1n7c3mk/9_llms_4_gpus_2_cpus_benchmarked_in_ollama_rtx/ | false | false | 1 | null | |
Le Chat. Custom MCP connectors. Memories. | 80 | Le Chat now integrates with 20+ enterprise platforms—powered by MCP—and remembers what matters with Memories.
| 2025-09-03T11:19:20 | https://mistral.ai/news/le-chat-mcp-connectors-memories | According_to_Mission | mistral.ai | 1970-01-01T00:00:00 | 0 | {} | 1n7c1tg | false | null | t3_1n7c1tg | /r/LocalLLaMA/comments/1n7c1tg/le_chat_custom_mcp_connectors_memories/ | false | false | 80 | {'enabled': False, 'images': [{'id': 'QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=108&crop=smart&auto=webp&s=757c6641896f42b25e4c88e87dc438f1e8d270bb', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=216&crop=smart&auto=webp&s=d4e78d09c1d0842276f98a4a7745457d7c7c5171', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=320&crop=smart&auto=webp&s=4df6ded6329ae09fc0e110879f55f893298c17b4', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=640&crop=smart&auto=webp&s=4c3b97e1405ebb7916bf71d7b9a3da9a44efaea7', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=960&crop=smart&auto=webp&s=0e49bc517b9cd96d953bfc71387ecf137efddf97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?width=1080&crop=smart&auto=webp&s=f52f14c1247d26b63fd222b2cb6756d88234d2f0', 'width': 1080}], 'source': {'height': 2520, 'url': 'https://external-preview.redd.it/QLQU1soiMTzFAm8GzW6EPDbaX5jrcYYFqy1ql5NYoiQ.png?auto=webp&s=fe19c20c363332d32b7f6d8917f3febce9133568', 'width': 4800}, 'variants': {}}]} | |
9 LLMs, 4 GPUs, 2 CPUs benchmarked in Ollama - RTX 3090, RTX 3060 12G, RTX 2080 Ti, Tesla M60 | 1 | [removed] | 2025-09-03T11:14:15 | https://www.reddit.com/gallery/1n7bydt | razvanfatu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7bydt | false | null | t3_1n7bydt | /r/LocalLLaMA/comments/1n7bydt/9_llms_4_gpus_2_cpus_benchmarked_in_ollama_rtx/ | false | false | 1 | null | |
Paratrooper Rescue - A simple retro game. | 0 | Created a retro game via a few prompts (Gemini Pro). Gameplay is to rescue the paratroopers by controlling the boat before they hit the shark-ridden water. All game assets and sound were self-generated by Gemini.
Play the game at: [https://genmaya.com/games/parachute.html](https://genmaya.com/games/parachute.html)
| 2025-09-03T11:08:53 | https://www.reddit.com/gallery/1n7buux | phone_radio_tv | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7buux | false | null | t3_1n7buux | /r/LocalLLaMA/comments/1n7buux/paratrooper_rescue_a_simple_retro_game/ | false | false | 0 | null | |
Paratrooper Rescue - A simple retro game. | 1 | Created a retro game via a few prompts (Gemini Pro). Gameplay is to rescue the paratroopers by controlling the boat before they hit the shark-ridden water. All game assets and sounds were self-generated by Gemini. | 2025-09-03T11:07:01 | https://www.reddit.com/gallery/1n7btnp | phone_radio_tv | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1n7btnp | false | null | t3_1n7btnp | /r/LocalLLaMA/comments/1n7btnp/paratrooper_rescue_a_simple_getro_game/ | false | false | 1 | null |
LangExtract by Google: many people don't know about this yet! | 156 | 2025-09-03T11:02:22 | https://github.com/google/langextract | fuckAIbruhIhateCorps | github.com | 1970-01-01T00:00:00 | 0 | {} | 1n7bqgm | false | null | t3_1n7bqgm | /r/LocalLLaMA/comments/1n7bqgm/langextract_by_google_many_people_dont_know_about/ | false | false | default | 156 | {'enabled': False, 'images': [{'id': 'n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=108&crop=smart&auto=webp&s=0f6a4424cf4341fa25c696b18f8ccb7d8b089bb5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=216&crop=smart&auto=webp&s=5dda3f2d5a427770bc0f683b5ce11403cd3fe7d3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=320&crop=smart&auto=webp&s=613b71d7f073200bcb786657236d43b724e886ad', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=640&crop=smart&auto=webp&s=70d9ab8760012176bf18e519e0590b3f5f3d4bab', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=960&crop=smart&auto=webp&s=cd53f9c66d9300fe84d83891898b40e06930209d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?width=1080&crop=smart&auto=webp&s=c9ef315047fab4673c6e9226eaf9c437ded16b3d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/n9THNRvTBgabZmzyX_O8lEw2GxXkLfCbQBuYD0khQMY.png?auto=webp&s=2c9adc4e3d6159c2f0ff9506404ea3df1fc2e151', 'width': 1200}, 'variants': {}}]} | |
Bytebot.ai - what do we think? | 0 | [https://www.bytebot.ai/](https://www.bytebot.ai/)
What do you all think of this local agent? (Not my project) | 2025-09-03T11:01:11 | https://www.reddit.com/r/LocalLLaMA/comments/1n7bpn6/bytebotai_what_do_we_think/ | Impossible-Glass-487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7bpn6 | false | null | t3_1n7bpn6 | /r/LocalLLaMA/comments/1n7bpn6/bytebotai_what_do_we_think/ | false | false | self | 0 | null |
New Swiss fully-open multilingual Model | 53 | 2025-09-03T10:29:46 | https://huggingface.co/swiss-ai/Apertus-70B-2509 | braincrowd | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1n7b5xl | false | null | t3_1n7b5xl | /r/LocalLLaMA/comments/1n7b5xl/new_swiss_fullyopen_multilingual_model/ | false | false | default | 53 | {'enabled': False, 'images': [{'id': 'KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=108&crop=smart&auto=webp&s=0b8ab29d9df365bb6d773274eb60784c150d9310', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=216&crop=smart&auto=webp&s=dfeb9c4bfb710b73250ff402b029c3719619093c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=320&crop=smart&auto=webp&s=3e7d5f5e4030e0dcd1bcda1325f717b30994a68b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=640&crop=smart&auto=webp&s=415bc651c08564e7f0fb8fbbfdc78d40ba8ad377', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=960&crop=smart&auto=webp&s=22ba2ef53024462757c8f5d9873b24d712638655', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?width=1080&crop=smart&auto=webp&s=ce17afec214263d59bbe9cffa6fa7816290653e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/KeZfybYf994Jltq2xFXUUTTUg9fRGIDbb5FdVf9Sh70.png?auto=webp&s=56e771ba9d1a46a692b2214d3db29e87468febcd', 'width': 1200}, 'variants': {}}]} | |
Introducing CAP (Context-Aware Protocol) – the missing layer after MCP | 0 | Hey folks,
You’ve probably heard of **MCP (Model Context Protocol)**, which standardizes how AI models talk to external tools. It’s a huge step forward, but I kept thinking: *what about context itself?*
That’s where I’m building **CAP – Context-Aware Protocol**.
CAP is a **middleware layer** that enriches AI queries with:
* **Session memory** (short + long term)
* **Vector storage + RAG** for knowledge retrieval
* **Caching** for speed
* **Policy & governance** (PII redaction, tool access control)
* **Context fusion & ranking** to make sure models see the *most relevant* info
The cool part?
* Works **with MCP** → enriches tool responses.
* Works **without MCP** → provides its own API.
So instead of passing raw queries to an LLM, CAP creates a **structured context package** (JSON) that includes memory, retrieved docs, session history, and even compliance filters — all ready for the model to use.
Think of CAP as **“the brain behind the brain”**: it ensures your AI always reasons with the right data.
I’m packaging it so devs can drop it in as an SDK or microservice. Planning adapters for OpenAI, Anthropic, Gemini, Pinecone, Redis, Postgres, etc.
Would love feedback from this community:
* Do you see CAP as something useful in your AI pipelines?
* What integrations would you want first?
Cheers,
Sunny
Here is the GitHub link; a star would be appreciated: [https://github.com/SunnyCOdet/CAP.git](https://github.com/SunnyCOdet/CAP.git) | 2025-09-03T10:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n7b43m/introducing_cap_contextaware_protocol_the_missing/ | Striking-Button2303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7b43m | false | null | t3_1n7b43m | /r/LocalLLaMA/comments/1n7b43m/introducing_cap_contextaware_protocol_the_missing/ | false | false | self | 0 | null |
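Based on the components listed in the post, the "structured context package" could look something like the sketch below; the field names are illustrative guesses, not CAP's actual schema:

```python
import json

def build_context_package(query, session_history, retrieved_docs, redact):
    """Assemble an enriched context package of the kind CAP describes (illustrative schema)."""
    package = {
        "query": redact(query),  # policy layer: PII redaction before the model sees anything
        "session": {"history": [redact(m) for m in session_history[-10:]]},  # short-term memory window
        "retrieval": [
            {"text": d["text"], "score": d["score"]}
            for d in sorted(retrieved_docs, key=lambda d: d["score"], reverse=True)
        ],  # context fusion & ranking: most relevant documents first
        "policy": {"pii_redacted": True},
    }
    return json.dumps(package)
```

An LLM adapter would then flatten this package into the system/user messages of whatever provider it targets.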
Subject: Seeking Recommendations for Sub-8B Parameter LLMs for RAG-Based Attribute Extraction | 1 | [removed] | 2025-09-03T10:15:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n7ax3k/subject_seeking_recommendations_for_sub8b/ | Illustrious_Form3935 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7ax3k | false | null | t3_1n7ax3k | /r/LocalLLaMA/comments/1n7ax3k/subject_seeking_recommendations_for_sub8b/ | false | false | self | 1 | null |
After 10 percent of the context window, I think every AI becomes useless. I literally used every AI CLI; every AI has the same problem: they become slow and dumber | 0 | First one is Qwen3 Coder. They say their model is very good; yeah, it's just good compared to the previous version.

Same with every other AI coding model. Not a good experience: 1 out of 10 | 2025-09-03T10:03:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n7aq0m/after_10_percent_of_context_window_i_think_every/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7aq0m | false | null | t3_1n7aq0m | /r/LocalLLaMA/comments/1n7aq0m/after_10_percent_of_context_window_i_think_every/ | false | false | self | 0 | null |
After 10 percent of the context window, I think every AI becomes useless. I literally used every AI CLI; every AI has the same problem: they become slow and dumber | 0 | First one is Qwen3 Coder. They say their model is very good; yeah, it's just good compared to the previous version.

Same with every other AI coding model. Not a good experience: 1 out of 10 | 2025-09-03T10:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1n7apzp/after_10_percent_of_context_window_i_think_every/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n7apzp | false | null | t3_1n7apzp | /r/LocalLLaMA/comments/1n7apzp/after_10_percent_of_context_window_i_think_every/ | false | false | self | 0 | null |
What's leading the LLM race at the moment? | 1 | [removed] | 2025-09-03T09:13:41 | https://www.reddit.com/r/LocalLLaMA/comments/1n79xnl/whats_leading_the_llm_race_at_the_moment/ | NikoDraven | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n79xnl | false | null | t3_1n79xnl | /r/LocalLLaMA/comments/1n79xnl/whats_leading_the_llm_race_at_the_moment/ | false | false | self | 1 | null |
qwen3 70B or 100B coder from the existing 480B? | 1 | [removed] | 2025-09-03T09:08:17 | https://www.reddit.com/r/LocalLLaMA/comments/1n79uk2/qwen3_70b_or_100b_coder_from_the_existing_480b/ | One_Archer_577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n79uk2 | false | null | t3_1n79uk2 | /r/LocalLLaMA/comments/1n79uk2/qwen3_70b_or_100b_coder_from_the_existing_480b/ | false | false | self | 1 | null |
Inference on new Framework desktop | 11 | Hello, lovely community! I'm just curious if anyone has gotten their hands on the new Framework desktop and used it to run inference for local models. I'm aware the memory bandwidth is weak, and I assume it's probably not great for fine-tuning or training. I just wonder if, given its energy efficiency and large shared memory capacity, it would make sense to set up the board as an LLM server for mid-sized models like quen3-coder:30b. Or if you have any other solutions that might work for this scenario, I'd love to hear them! (maybe a Mac Mini??). I already have an Nvidia 3060 with 12gb VRAM, and I'd rather not just get a bigger/faster GPU, they're pretty expensive and hog a lot of power when idling. Anyway, I'm rambling now, show me what you got! | 2025-09-03T09:07:58 | https://www.reddit.com/r/LocalLLaMA/comments/1n79udw/inference_on_new_framework_desktop/ | wombatsock | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n79udw | false | null | t3_1n79udw | /r/LocalLLaMA/comments/1n79udw/inference_on_new_framework_desktop/ | false | false | self | 11 | null |
You need 1 Al tool - Not 10 for study and research. | 1 | [removed] | 2025-09-03T09:00:09 | https://www.reddit.com/r/LocalLLaMA/comments/1n79pz7/you_need_1_al_tool_not_10_for_study_and_research/ | ai_is_hallucinating | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n79pz7 | false | null | t3_1n79pz7 | /r/LocalLLaMA/comments/1n79pz7/you_need_1_al_tool_not_10_for_study_and_research/ | false | false | self | 1 | null |
How do you prompt an image editing model? | 1 | Hi. I'm not really a professional (or even informed amateur) when it comes to photo editing, which proves quite problematic when it comes to terminology in this area etc.
Now I want to start using image-editing LLMs and become a bit more proficient at retouching images with them. However, the results are still not what I expect, nor what I see other, more professional users achieve.
Given how important the vocabulary and "language" is in communicating with LLMs, are there some guides or prompt examples that people have used in editing images with LLMs? e.g. do I simply say "improve the quality of this photo" if I want a higher resolution, or should be something else? Is it just "make it sharper", or something more technical? etc.
Many thanks | 2025-09-03T08:50:26 | https://www.reddit.com/r/LocalLLaMA/comments/1n79kt7/how_do_you_prompt_an_image_editing_model/ | ihatebeinganonymous | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1n79kt7 | false | null | t3_1n79kt7 | /r/LocalLLaMA/comments/1n79kt7/how_do_you_prompt_an_image_editing_model/ | false | false | self | 1 | null |