| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Haven't been following LLM releases recently. Did we get any MoE <10B total parameters? | 8 | I only know about the OLMoE one, but it's not SoTA | 2025-09-17T13:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1njcues/havent_been_been_following_llm_releases_recently/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njcues | false | null | t3_1njcues | /r/LocalLLaMA/comments/1njcues/havent_been_been_following_llm_releases_recently/ | false | false | self | 8 | null |
Cline --> Qwen3-Coder tool calling fix | 14 | I jumped into the AI-assisted coding world about 5 weeks ago. Been doing the normal "download all the models and tinker" thing I am sure we all did. I have settled on Qwen3-Coder 30B as the best model for local use for now, as many have, mainly because I use VSCode and Cline for the most part. It mostly worked, until a specific tool call, and then it broke. Not the end of the world, but annoying. Did more research, and it seems that Qwen3-Coder uses its own tool-call format while Cline uses XML. Figured it might be worth an experiment, and I am pretty sure it works well. It hasn't failed a tool call yet, although to be fair I didn't put it through the wringer. Maybe this saves someone else some time.
# Qwen Wrapper for Cline
## Overview
This wrapper allows Cline, a VS Code plugin with a strong affinity for Anthropic's chat format, to work with local Qwen models. It acts as a bidirectional translator between Anthropic-style tool calls and Qwen's custom XML format, enabling seamless integration of local Qwen models with Cline.
## Features
* **Request Translation:** Converts Anthropic-style tool definitions (XML) into the JSON format expected by Qwen.
* **Response Translation:** Translates Qwen's tool call responses (custom XML or OpenAI-style JSON) into the Anthropic-style `<invoke>` format that Cline understands.
* **Local and Docker Support:** Can be run as a local Python script or as a self-contained Docker container.
* **Easy Configuration:** Can be configured using environment variables for easy deployment.
## How It Works
The wrapper is a Flask application that sits between Cline and a local `llama-server` instance running a Qwen model. It intercepts requests from Cline, translates them into a format that the Qwen model can understand, and then forwards them to the `llama-server`. When the `llama-server` responds, the wrapper translates the response back into a format that Cline can understand.
### Request Translation (Cline → Qwen)
1. The wrapper receives a request from Cline containing an Anthropic-style `<tools>` XML block in the system prompt.
2. It parses the XML block to extract the tool definitions.
3. It converts the tool definitions into the JSON format expected by Qwen.
4. It removes the XML block from the original prompt.
5. It forwards the translated request to the `llama-server`.
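A rough sketch of steps 1–4 (this is not the wrapper's actual code; the `<tools>`/`<tool>` element names and the Qwen-side JSON layout are assumptions based on the description above):

```python
import json
import re
import xml.etree.ElementTree as ET

def translate_request(system_prompt):
    """Extract an Anthropic-style <tools> XML block from the system prompt,
    convert each tool definition to OpenAI/Qwen-style JSON, and return the
    cleaned prompt plus the tool list."""
    match = re.search(r"<tools>.*?</tools>", system_prompt, re.DOTALL)
    if not match:
        return system_prompt, []  # no tool block: pass the prompt through
    root = ET.fromstring(match.group(0))
    tools = []
    for tool in root.findall("tool"):
        tools.append({
            "type": "function",
            "function": {
                "name": (tool.findtext("name") or "").strip(),
                "description": (tool.findtext("description") or "").strip(),
                # Parameters are assumed to be an embedded JSON schema string.
                "parameters": json.loads(tool.findtext("parameters") or "{}"),
            },
        })
    cleaned = system_prompt.replace(match.group(0), "").strip()
    return cleaned, tools
```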
### Response Translation (Qwen → Cline)
1. The wrapper receives a response from the `llama-server`.
2. It detects whether the response is a standard text response, a Qwen-style tool call (`<tool_call>`), or an OpenAI-style tool call (JSON).
3. If the response is a tool call, it translates it into the Anthropic-style `<invoke>` XML format.
4. It returns the translated response to Cline.
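Steps 2–3 can be sketched like this (the JSON payload inside `<tool_call>` and the exact `<invoke>`/`<parameter>` names are assumptions drawn from the description above, not Cline's verified schema):

```python
import json
import re
from xml.sax.saxutils import escape

def translate_response(text):
    """If the model output contains a Qwen-style <tool_call> block with a JSON
    payload, rewrite it as an Anthropic/Cline-style <invoke> block; otherwise
    return the text unchanged."""
    match = re.search(r"<tool_call>\s*(\{.*\})\s*</tool_call>", text, re.DOTALL)
    if not match:
        return text  # plain text response: pass through untouched
    call = json.loads(match.group(1))
    params = "".join(
        f'<parameter name="{escape(str(k))}">{escape(str(v))}</parameter>'
        for k, v in call.get("arguments", {}).items()
    )
    invoke = f'<invoke name="{escape(call["name"])}">{params}</invoke>'
    return text[:match.start()] + invoke + text[match.end():]
```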
## Local Usage
To run the wrapper locally, you need to have Python and the required dependencies installed.
1. **Install Dependencies:**
```bash
pip install -r requirements.txt
```
2. **Configure Paths:**
Edit the `qwen_wrapper.py` file and update the following variables to point to your `llama-server` executable and Qwen model file:
```python
LLAMA_SERVER_EXECUTABLE = "/path/to/your/llama-server"
MODEL_PATH = "/path/to/your/qwen/model.gguf"
```
3. **Run the Wrapper:**
```bash
python qwen_wrapper.py
```
The wrapper will start on `http://localhost:8000`.
## Docker Usage
To run the wrapper in a Docker container, you need to have Docker installed.
1. **Place Files:**
Place the following files in the same directory:
* `Dockerfile`
* `qwen_wrapper_docker.py`
* `requirements.txt`
* Your `llama-server` executable
* Your Qwen model file (renamed to `model.gguf`)
2. **Build the Image:**
Open a terminal in the directory containing the files and run the following command to build the Docker image:
```bash
docker build -t qwen-wrapper .
```
3. **Run the Container:**
Once the image is built, run the following command to start the container:
```bash
docker run -p 8000:8000 -p 8001:8001 qwen-wrapper
```
This will start the container and map both ports 8000 and 8001 on your host machine to the corresponding ports in the container. Port 8000 is for the wrapper API, and port 8001 is for the internal llama-server communication.
4. **Connect Cline:**
You can then configure Cline to connect to `http://localhost:8000`. The wrapper will now also accept connections from other hosts on your network using your machine's IP address.
## Configuration
The wrapper can be configured using the following environment variables when running in Docker:
* `LLAMA_SERVER_EXECUTABLE`: The path to the `llama-server` executable inside the container. Defaults to `/app/llama-server`.
* `MODEL_PATH`: The path to the Qwen model file inside the container. Defaults to `/app/model.gguf`.
When running locally, these paths can be configured by editing the `qwen_wrapper.py` file directly.
## Network Connectivity
The wrapper now supports external connections from other hosts on your network. When running locally, the service will be accessible via:
- `http://localhost:8000` (local access)
- `http://YOUR_MACHINE_IP:8000` (external access from other hosts)
Make sure your firewall allows connections on port 8000 if you want to access the service from other machines.
## Requirements
The contents of `requirements.txt`:
```
flask==3.0.0
requests==2.31.0
waitress==2.1.2
```
| 2025-09-17T13:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1njcpok/cline_qwen3coder_tool_calling_fix/ | jrodder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njcpok | false | null | t3_1njcpok | /r/LocalLLaMA/comments/1njcpok/cline_qwen3coder_tool_calling_fix/ | false | false | self | 14 | null |
How to detect eye blink and occlusion in Mediapipe? | 1 | I'm trying to develop a mobile application using Google Mediapipe (Face Landmark Detection model). The idea is to detect a human face and prove liveness by having the user blink twice. However, I'm unable to do so and have been stuck for the last 7 days. I have tried the following things so far:
* I extract landmark values for open vs. closed eyes and check the difference. If the change crosses a threshold twice, liveness is confirmed.
* For occlusion checks, I measure distances between jawline, lips, and nose landmarks. If it crosses a threshold, occlusion detected.
* I also need to ensure the user isn’t wearing glasses, but detecting that via landmarks hasn’t been reliable, especially with rimless glasses.
This “landmark math” approach isn’t giving consistent results, and I’m new to ML. Since the solution needs to run **on-device** for speed and better UX, Mediapipe seemed the right choice, but I keep failing consistently.
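For reference, a common way to make the blink check scale-invariant (rather than thresholding raw landmark deltas) is the eye aspect ratio (EAR), which normalizes lid distance by eye width. A minimal sketch, assuming landmarks are (x, y) tuples keyed by FaceMesh index — real Mediapipe results expose `.x`/`.y` attributes instead, and the eye indices below are the commonly cited ones, so verify them against your Mediapipe version:

```python
import math

# Commonly cited Mediapipe FaceMesh indices for one eye, ordered
# p1..p6 as in the classic EAR formulation (verify for your version).
LEFT_EYE = [362, 385, 387, 263, 373, 380]

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def eye_aspect_ratio(landmarks, idx=LEFT_EYE):
    """EAR = (|p2-p6| + |p3-p5|) / (2*|p1-p4|); drops sharply when the eye closes."""
    p1, p2, p3, p4, p5, p6 = (landmarks[i] for i in idx)
    return (_dist(p2, p6) + _dist(p3, p5)) / (2.0 * _dist(p1, p4))

def count_blinks(ear_series, closed_thresh=0.2, open_thresh=0.25):
    """Hysteresis counter over per-frame EAR values: a blink is a
    closed -> open transition, so noise near one threshold can't double-count."""
    blinks, closed = 0, False
    for ear in ear_series:
        if not closed and ear < closed_thresh:
            closed = True
        elif closed and ear > open_thresh:
            closed = False
            blinks += 1
    return blinks
```

The thresholds are typical starting points, not universal constants; calibrating them per user (e.g. from the first few open-eye frames) tends to be more robust.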
Can anyone please advise how I can accomplish this? | 2025-09-17T12:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1njcf0a/how_to_detect_eye_blink_and_occlusion_in_mediapipe/ | abhijee00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njcf0a | false | null | t3_1njcf0a | /r/LocalLLaMA/comments/1njcf0a/how_to_detect_eye_blink_and_occlusion_in_mediapipe/ | false | false | self | 1 | null |
OK, put your guesses: what will Meta release or launch besides smart glasses at Wednesday's event? | 0 | Well, I don't think they will release the Behemoth model or a thinking model, but if they do, it will be awesome. | 2025-09-17T12:46:10 | https://www.reddit.com/r/LocalLLaMA/comments/1njc5zs/ok_put_your_guesses_what_will_meta_release_or/ | Independent-Wind4462 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njc5zs | false | null | t3_1njc5zs | /r/LocalLLaMA/comments/1njc5zs/ok_put_your_guesses_what_will_meta_release_or/ | false | false | self | 0 | null |
We release a rag system Diver | 1 | [removed] | 2025-09-17T12:14:21 | https://www.reddit.com/r/LocalLLaMA/comments/1njbgha/we_release_a_rag_system_diver/ | Dazzling-Impact1075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njbgha | false | null | t3_1njbgha | /r/LocalLLaMA/comments/1njbgha/we_release_a_rag_system_diver/ | false | false | self | 1 | null |
LMStudio loads model context so slow... | 2 | I had been using KoboldCPP all these years. I am trying out LMStudio now, but I have run into a problem. In the time it takes KoboldCPP to load completely, LMStudio only loads the model to 80%. After that it slows down a lot and takes ten times as long to load the remaining 20%. I am talking about the same model, context size, and all other settings. Once the model is loaded, it works fast, maybe even a little faster than Kobold.
If I disable the "Offload KV cache to GPU memory" switch, then the model loads fast, but obviously the inference speed is killed.
I use CUDA, with sysmem fallback turned off globally. Does anybody know how to fix this? The waiting completely kills the mood. Thanks! | 2025-09-17T12:13:25 | https://www.reddit.com/r/LocalLLaMA/comments/1njbfrt/lmstudio_loads_model_context_so_slow/ | Barafu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njbfrt | false | null | t3_1njbfrt | /r/LocalLLaMA/comments/1njbfrt/lmstudio_loads_model_context_so_slow/ | false | false | self | 2 | null |
Trained XTTS_V2: how to run inference with the dvae.pth file and check the output of the trained .pth file | 1 | I have trained XTTS and fine-tuned it on my dataset; `XTTS-v2/dvae.pth` is the fine-tuned .pth file. How should I run inference on the dataset and check how the model is working? I'm unable to find a resource that solves this issue | 2025-09-17T12:09:59 | https://www.reddit.com/r/LocalLLaMA/comments/1njbd5o/trained_xtts_v2_how_to_infer_the_dvaepth_file_and/ | atmanirbhar21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njbd5o | false | null | t3_1njbd5o | /r/LocalLLaMA/comments/1njbd5o/trained_xtts_v2_how_to_infer_the_dvaepth_file_and/ | false | false | self | 1 | null |
🚀 Next-gen retrieval pipeline Diver! | 1 | [removed] | 2025-09-17T12:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1njbc7r/nextgen_retrieval_pipeline_diver/ | Dazzling-Impact1075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njbc7r | false | null | t3_1njbc7r | /r/LocalLLaMA/comments/1njbc7r/nextgen_retrieval_pipeline_diver/ | false | false | self | 1 | null |
Sharing Our Internal Training Material: LLM Terminology Cheat Sheet! | 4 | We originally put this together as an internal reference to help our team stay aligned when reading papers, model reports, or evaluating benchmarks. Sharing it here in case others find it useful too: full reference [here](https://blog.netmind.ai/article/LLM_Terminology_Cheat_Sheet%3A_Comprehensive_Reference_for_AI_Practitioners).
The cheat sheet is grouped into core sections:
- Model architectures: Transformer, encoder–decoder, decoder-only, MoE
- Core mechanisms: attention, embeddings, quantisation, LoRA
- Training methods: pre-training, RLHF/RLAIF, QLoRA, instruction tuning
- Evaluation benchmarks: GLUE, MMLU, HumanEval, GSM8K
It’s aimed at practitioners who frequently encounter scattered, inconsistent terminology across LLM papers and docs. Particularly useful for those working with LoRA, QLoRA, or deploying local models.
Hope it’s helpful! Happy to hear suggestions or improvements from others in the space. | 2025-09-17T12:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/1njb92p/sharing_our_internal_training_material_llm/ | MarketingNetMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njb92p | false | null | t3_1njb92p | /r/LocalLLaMA/comments/1njb92p/sharing_our_internal_training_material_llm/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek.jpeg?width=108&crop=smart&auto=webp&s=9481ccc9b7a318cf7a7e30e60131e6860355fce1', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek.jpeg?width=216&crop=smart&auto=webp&s=21e48df549b3a1cafe27371990e6040a23e4bd74', 'width': 216}, {'height': 165, 'url': 'https://external-preview.redd.it/EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek.jpeg?width=320&crop=smart&auto=webp&s=f3cb8a579385732b2c76d310a34e6cda29623ddd', 'width': 320}, {'height': 330, 'url': 'https://external-preview.redd.it/EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek.jpeg?width=640&crop=smart&auto=webp&s=ebb953f534fa47e7432b2a3b7bfc079731d48e66', 'width': 640}], 'source': {'height': 387, 'url': 'https://external-preview.redd.it/EptZVva80SeG5tFS1AJAYTCSojdFP03DwXmu8W_45ek.jpeg?auto=webp&s=3a740ce4c2f1365216de8a88b90cb356f8b223f8', 'width': 750}, 'variants': {}}]} |
The dangers of local LLMs: Sleeper Agents | 0 | Not my video. I just thought it was interesting as I almost exclusively run LLMs trained by foreign countries. | 2025-09-17T11:59:13 | https://youtu.be/wL22URoMZjo | createthiscom | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1njb4wp | false | {'oembed': {'author_name': 'Computerphile', 'author_url': 'https://www.youtube.com/@Computerphile', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wL22URoMZjo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Sleeper Agents in Large Language Models - Computerphile"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/wL22URoMZjo/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Sleeper Agents in Large Language Models - Computerphile', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1njb4wp | /r/LocalLLaMA/comments/1njb4wp/the_dangers_of_local_llms_sleeper_agents/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'D-PP482jELRrLilgoH0jWLYDrgb1FowUIVC91Di3__c', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/D-PP482jELRrLilgoH0jWLYDrgb1FowUIVC91Di3__c.jpeg?width=108&crop=smart&auto=webp&s=aba9526edae24ae373dad5474c9087967eafb13e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/D-PP482jELRrLilgoH0jWLYDrgb1FowUIVC91Di3__c.jpeg?width=216&crop=smart&auto=webp&s=f00a7502ff690aba17e59c6dea63fdda6af0c716', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/D-PP482jELRrLilgoH0jWLYDrgb1FowUIVC91Di3__c.jpeg?width=320&crop=smart&auto=webp&s=c4dafdd6910f1de2dc696647fd8b3ae3a70d9569', 'width': 320}], 'source': {'height': 360, 'url': 
'https://external-preview.redd.it/D-PP482jELRrLilgoH0jWLYDrgb1FowUIVC91Di3__c.jpeg?auto=webp&s=ce81a879e0b6834c761508dc9d24e3f9973ce658', 'width': 480}, 'variants': {}}]} |
I have had this question in my mind for a really long time: the lead author of the paper 'Attention Is All You Need' is Vaswani, so why does everybody talk about Noam Shazeer? | 2 | As it is | 2025-09-17T11:57:52 | https://www.reddit.com/r/LocalLLaMA/comments/1njb3xh/i_have_this_question_in_my_mind_for_a_really_long/ | Key-Preference-5142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njb3xh | false | null | t3_1njb3xh | /r/LocalLLaMA/comments/1njb3xh/i_have_this_question_in_my_mind_for_a_really_long/ | false | false | self | 2 | null |
Online learning hypothesis: freeze instruction blocks, adapt the base. Lets discuss this idea | 0 | Here’s a rough idea I’ve been thinking about:
1. Train a base model (standard transformer stack).
2. Add some extra instruction transformer layers on top, and fine-tune those on instruction data (while the base stays mostly frozen).
3. After that, freeze those instruction layers so the instruction-following ability stays intact.
4. For online/continuous learning, unfreeze just a small part of the base layers and keep updating them with new data.
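The schedule in steps 3–4 can be sketched in plain Python, with a `trainable` flag standing in for `param.requires_grad` on a real PyTorch module (layer names and counts below are illustrative, not from any actual model):

```python
def apply_freeze_schedule(base_layers, instruction_layers, n_adaptable):
    """Mark which layer groups receive gradient updates during online learning.

    base_layers / instruction_layers: lists of dicts with a 'trainable' flag
    (stand-ins for torch modules where you'd flip param.requires_grad).
    The instruction stack is frozen to protect instruction-following
    behaviour; only the top n_adaptable base layers stay trainable.
    """
    for layer in instruction_layers:
        layer["trainable"] = False  # step 3: the frozen shell
    for i, layer in enumerate(base_layers):
        # step 4: unfreeze only the last n_adaptable base layers
        layer["trainable"] = i >= len(base_layers) - n_adaptable
    return base_layers, instruction_layers

base = [{"name": f"base_{i}", "trainable": True} for i in range(6)]
instr = [{"name": f"instr_{i}", "trainable": True} for i in range(2)]
base, instr = apply_freeze_schedule(base, instr, n_adaptable=2)
```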
So the instruction part is a “frozen shell” that protects alignment, while the base retains some capacity to adapt to new knowledge. | 2025-09-17T11:54:22 | https://www.reddit.com/r/LocalLLaMA/comments/1njb1cq/online_learning_hypothesis_freeze_instruction/ | ZeusZCC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1njb1cq | false | null | t3_1njb1cq | /r/LocalLLaMA/comments/1njb1cq/online_learning_hypothesis_freeze_instruction/ | false | false | self | 0 | null |
local llm for macbook air? | 0 | I'm thinking of building a Mac app that will use a local LLM for content generation, and I would like to find a local LLM that works on less powerful laptops, like the MacBook Air.
What are your suggestions? So far, from multiple conversations with our group of friends (ChatGPT, Claude, all those guys), the best bet is llama 3.2 1b quantized. Has anyone run this locally? Curious what the output would be like. | 2025-09-17T10:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1nj9ye9/local_llm_for_macbook_air/ | thebrokebuilder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj9ye9 | false | null | t3_1nj9ye9 | /r/LocalLLaMA/comments/1nj9ye9/local_llm_for_macbook_air/ | false | false | self | 0 | null |
How to post-train an LLM with a tokenizer replacement? | 3 | I tried searching Google for guides but couldn't find any. I have an idea to teach an LLM a new language, but there is a problem. After I retrained the model's base tokenizer, first, the IDs of some system tokens changed; second, after retraining the model itself with the new tokenizer, it generates garbage. Please advise how to retrain correctly with a tokenizer replacement. Maybe I'm not retraining the tokenizer correctly? Maybe it needs to be expanded instead? And is it possible to retrain the model using another model's tokenizer? I like the organization of the chat template and tokenizer in gpt-oss, and I would like to train on it. | 2025-09-17T10:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nj9khh/how_to_posttrain_llm_with_tokenizer_replacement/ | Objective-Good310 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj9khh | false | null | t3_1nj9khh | /r/LocalLLaMA/comments/1nj9khh/how_to_posttrain_llm_with_tokenizer_replacement/ | false | false | self | 3 | null |
Opencode plugin for extending local LLM knowledge using Google AI Search - free, unlimited, incognito via Playwright automation | 6 | So... I was trying to figure out how to integrate Google AI Search as a native tool/plugin and I vibecoded this thing. [https://github.com/IgorWarzocha/Opencode-Google-AI-Search-Plugin](https://github.com/IgorWarzocha/Opencode-Google-AI-Search-Plugin)
Why? Because local LLMs have a training cutoff date and their knowledge can be limited. This way you can spoonfeed your LLM some extra, up to date info. Yes, you are at risk of feeding the LLM some hallucinations or incorrect replies, but if you ask a reasonably detailed question, you will get a reasonably detailed result, and with links to sources so you can then fetch them for more info.
It's basically a tool that runs a very specific sequence of Playwright events and feeds the output back to the LLM (stumbled upon that idea while using browser control mcps). Unfortunately couldn't get the tool call to display properly (like fetch). LLM calls the tool, ingests the output into the context, and spits out a summary. If you want the full result, you need to ask it for it (it will give you the links, proper formatting etc, so you can then fetch content).
It fires playwright in headless, goes through the cookies, and does the thing. And it works locally in incognito, so your searches are kinda private.
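The flow can be sketched roughly like this — the Google page structure (cookie banner, AI-overview layout) is a guess that will need adapting against the live page; only the link-extraction helper is generic:

```python
import re

def extract_source_links(overview_text):
    """Pull the source URLs out of the captured overview text so the
    LLM can fetch them for more detail."""
    return re.findall(r"https?://[^\s)\]\"']+", overview_text)

def run_search(query):
    """Headless-Playwright sketch: grabs the rendered page text for a query.
    Selectors and cookie handling are assumptions -- inspect the page and
    narrow this down to the AI-overview element before relying on it."""
    from playwright.sync_api import sync_playwright  # lazy import
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(f"https://www.google.com/search?q={query}")
        # Grabbing the whole body is the crude fallback; the plugin's
        # scripted sequence targets the AI-overview section specifically.
        text = page.locator("body").inner_text()
        browser.close()
        return text
```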
Enjoy it while it lasts, I'm sure Google will do something about it eventually. Let me know if it works for you... "it works on my machine" LOL
PS. I'm pretty damn sure it can be adapted to work with any client and any website since it's a scripted Playwright automation. Scary.
| 2025-09-17T10:27:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nj9e7v/opencode_plugin_for_extending_local_llm_knowledge/ | igorwarzocha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj9e7v | false | null | t3_1nj9e7v | /r/LocalLLaMA/comments/1nj9e7v/opencode_plugin_for_extending_local_llm_knowledge/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=108&crop=smart&auto=webp&s=6656f456c016a30f78fc6ecb4e4e3906f65f5285', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=216&crop=smart&auto=webp&s=63ebdde96f58786f6153112f05cd2e69a5feb452', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=320&crop=smart&auto=webp&s=7d5a253073ac1d976406e5bcf4c290b75e2fb8b3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=640&crop=smart&auto=webp&s=21f738ea474e4d565a7af73264407af3ea2ea78f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=960&crop=smart&auto=webp&s=8b46a8a2cb09bfe77e69566e02953267219893af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?width=1080&crop=smart&auto=webp&s=ef9a5d8f1642811847796d1d21f7c87ecd057431', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ytHU7tmcs1NN6pXxPLGtBlVtNJRtNu2SS8Y9t1cetdI.png?auto=webp&s=a43981021a9bf0f7e13b9c08b07fe3dddf6939ca', 'width': 1200}, 'variants': {}}]} |
Local MCP server not connecting to Open WebUI | mcpo | 2 | I have an MCP server running in a Docker container using mcpo; it runs an nmap binary via a Python file. The file runs, but it doesn't connect to the Open WebUI tools. The backend is Ollama.
This is the output
[mcpo running in docker](https://preview.redd.it/9uahh2c7appf1.png?width=2556&format=png&auto=webp&s=b473b292d971a5a5bb50e63302add52e571975cb)
[Host machine trying to connect](https://preview.redd.it/zjoalqzbappf1.png?width=2558&format=png&auto=webp&s=e83de7a2053b54737b53826beb4ba49176829079)
| 2025-09-17T10:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nj97pu/local_mcp_server_not_connection_to_open_webui_mcpo/ | PrizePerformance5066 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj97pu | false | null | t3_1nj97pu | /r/LocalLLaMA/comments/1nj97pu/local_mcp_server_not_connection_to_open_webui_mcpo/ | false | false | 2 | null | |
Ling Flash 2.0 released | 299 | Ling Flash-2.0, from InclusionAI, a language model with 100B total parameters and 6.1B activated parameters (4.8B non-embedding).
[https://huggingface.co/inclusionAI/Ling-flash-2.0](https://huggingface.co/inclusionAI/Ling-flash-2.0)
| 2025-09-17T10:14:07 | https://www.reddit.com/gallery/1nj9601 | abskvrm | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nj9601 | false | null | t3_1nj9601 | /r/LocalLLaMA/comments/1nj9601/ling_flash_20_released/ | false | false | 299 | null | |
Local translation: should I use one big model that supports all languages, or an English model with a small translation model? | 2 | Hi all
I’m setting up local LLMs for multiple purposes, but we work in a variety of languages. From my research, **Gemma-3 12B-IT** (or the **27B** version) looks best, since I could use one big model for text generation and just choose the response language. The downside is that if I ever switch models, the new one must also support multiple languages, which is constraining.
Would it be better to instead use a big English-based LLM for generation and a smaller model to **translate** the generated text? That way I can mix and match components, and since I generate in English and then translate, I avoid a single queue because the models are separated.
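The decoupled idea can be sketched with stub models and one worker thread per stage; because each stage has its own queue, translating one reply can overlap generating the next (the stubs stand in for real model calls):

```python
import queue
import threading

def generate_english(prompt):
    return f"[EN] {prompt}"      # stand-in for the big English model

def translate(text, lang):
    return f"[{lang}] {text}"    # stand-in for the small translation model

def stage(worker, inbox, outbox):
    """Generic pipeline stage: pull items, process, push downstream."""
    while True:
        item = inbox.get()
        if item is None:         # sentinel: shut the stage down
            outbox.put(None)
            return
        outbox.put(worker(item))

gen_q, xlate_q, out_q = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=stage, args=(generate_english, gen_q, xlate_q), daemon=True).start()
threading.Thread(target=stage, args=(lambda t: translate(t, "de"), xlate_q, out_q), daemon=True).start()

for p in ["hello", "world"]:
    gen_q.put(p)
gen_q.put(None)

results = []
while (item := out_q.get()) is not None:
    results.append(item)
```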
Has anyone tested this? I couldn’t find results, so I’m implementing the idea to test it myself. | 2025-09-17T09:52:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nj8sy6/local_translation_should_i_use_one_big_model_that/ | SomeRandomGuuuuuuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj8sy6 | false | null | t3_1nj8sy6 | /r/LocalLLaMA/comments/1nj8sy6/local_translation_should_i_use_one_big_model_that/ | false | false | self | 2 | null |
[Release] DASLab GGUF Non-Uniform Quantization Toolkit | 29 | We're excited to release the **first open-source toolkit** that brings **GPTQ + EvoPress** to the **GGUF format**, enabling *heterogeneous quantization* based on importance.
**Delivering Higher-quality models, same file size.**
# What's inside
* [**GPTQ (ICLR '23)**](https://arxiv.org/pdf/2210.17323) **quantization with GGUF export:** delivers error-correcting calibration for improved performance
* [**EvoPress (ICML '25)**](https://arxiv.org/pdf/2410.14649): runs evolutionary search to automatically discover optimal per-layer quantization configs
* **Model assembly tools:** package models to be fully functional with llama.cpp
# Why it matters
Unlike standard uniform quantization, our toolkit **optimizes precision where it matters most**.
Critical layers (e.g. attention) can use higher precision, while others (e.g. FFN) compress more aggressively.
With **EvoPress search + GPTQ quantization**, these trade-offs are discovered automatically.
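As a toy illustration of that search loop — not the actual EvoPress algorithm: the sensitivities, layer sizes, bit palette, and mutation scheme below are invented purely for demonstration:

```python
import random

# Toy per-layer bit-width search: stay under a (weighted) size budget
# while maximizing a made-up quality proxy.
LAYERS = ["attn.q", "attn.k", "attn.v", "ffn.up", "ffn.down", "ffn.gate"]
SIZES = [1, 1, 1, 2, 2, 2]               # FFN tensors assumed larger
SENS = [4.0, 4.0, 4.0, 1.0, 1.0, 1.0]    # attention assumed more sensitive
BITS = [2, 3, 4, 8]
BUDGET = 36                              # uniform 4-bit exactly fills this

def quality(config):
    return -sum(s / b for s, b in zip(SENS, config))

def size(config):
    return sum(s * b for s, b in zip(SIZES, config))

def evolve(generations=2000, seed=0):
    rng = random.Random(seed)
    best = [4] * len(LAYERS)             # uniform start
    for _ in range(generations):
        cand = best[:]
        for _ in range(2):               # mutate two genes so bits can be traded
            cand[rng.randrange(len(cand))] = rng.choice(BITS)
        if size(cand) <= BUDGET and quality(cand) > quality(best):
            best = cand
    return best

best = evolve()
```

With these invented sensitivities the search drifts toward spending bits on attention and compressing the FFN, which is the qualitative behaviour the toolkit's per-layer configs exploit.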
# Results
Below are zero-shot evaluations. Full benchmark results are available in the repo.
https://preview.redd.it/3eg7rp0vyopf1.png?width=3569&format=png&auto=webp&s=c6590f70e8abf59f3442df57321eaa55ea85ba9c
# Resources
[DASLab GGUF Quantization Toolkit (GitHub Repo Link)](https://github.com/IST-DASLab/gptq-gguf-toolkit)
We are happy to get feedback, contributions, and experiments! | 2025-09-17T09:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nj8hee/release_daslab_gguf_nonuniform_quantization/ | Loginhe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj8hee | false | null | t3_1nj8hee | /r/LocalLLaMA/comments/1nj8hee/release_daslab_gguf_nonuniform_quantization/ | false | false | 29 | null | |
Has anyone been able to use GLM 4.5 with the Github copilot extension in VSCode? | 5 | I couldn't make it work; I tried Insiders too. I get this error:
```
Sorry, your request failed. Please try again. Request id: add5bf64-832a-4bd5-afd2-6ba10be9a734
Reason: Rate limit exceeded
{"code":"1113","message":"Insufficient balance or no resource package. Please recharge."}
``` | 2025-09-17T09:02:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nj80eu/has_anyone_been_able_to_use_glm_45_with_the/ | Intelligent-Top3333 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj80eu | false | null | t3_1nj80eu | /r/LocalLLaMA/comments/1nj80eu/has_anyone_been_able_to_use_glm_45_with_the/ | false | false | self | 5 | null |
llama.cpp: IPEX-LLM or SYCL for Intel Arc? | 5 | While waiting for the formal release and availability of the MaxSun B60 Turbo cards, I was looking into the various options for running inference: Vulkan, SYCL and IPEX-LLM. But it seems that IPEX-LLM only releases a "portable zip", and reading their Python code (apps/src/python/llm) I am floored by the abundance of CFFI. I bet it works - but... damn does that feel wobbly. That said, I am not a Python expert, so I might just be reading this wrong. More of a C and Go person, tbh.
There was a PR to upstream IPEX-LLM support into llama.cpp (via ggml.cpp) in 2024, but aside from that, I haven't seen much of it.
So I wanted to ask the blue-team folks here (they exist, I am sure of it!) what their inference experience is.
I will also look at vLLM, but I have not gotten enough experience with that just yet to know its features, flags and the like. My ideal stack will revolve around localAI, so I want to make sure I know the backends I am wiring up beforehand.
Thanks! | 2025-09-17T09:00:11 | https://www.reddit.com/r/LocalLLaMA/comments/1nj7z6b/llamacpp_ipexllm_or_sycl_for_intel_arc/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj7z6b | false | null | t3_1nj7z6b | /r/LocalLLaMA/comments/1nj7z6b/llamacpp_ipexllm_or_sycl_for_intel_arc/ | false | false | self | 5 | null |
🦾The Real Virus of the Pandemic🤖 | 1 | [removed] | 2025-09-17T08:47:53 | https://www.reddit.com/gallery/1nj7sin | Competitive-Cloud314 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1nj7sin | false | null | t3_1nj7sin | /r/LocalLLaMA/comments/1nj7sin/the_real_virus_of_the_pandemic/ | false | false | 1 | null | |
support for the upcoming Olmo3 model has been merged into llama.cpp | 64 | 2025-09-17T08:42:16 | https://github.com/ggml-org/llama.cpp/pull/16015 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nj7pik | false | null | t3_1nj7pik | /r/LocalLLaMA/comments/1nj7pik/support_for_the_upcoming_olmo3_model_has_been/ | false | false | 64 | {'enabled': False, 'images': [{'id': '11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=108&crop=smart&auto=webp&s=9452cfd39d5355d630fb57cff24239d19fdc3d16', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=216&crop=smart&auto=webp&s=96513bd146973c7e2d7db2930f14a11baaed3076', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=320&crop=smart&auto=webp&s=165a20ec372fcbe1423bc9e0410d3b4cb15248dd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=640&crop=smart&auto=webp&s=fab39b6ac711c07bf4835557388faf67e2bdb807', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=960&crop=smart&auto=webp&s=d215f0b645524a2d0e99e5f03057a6600a587a14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?width=1080&crop=smart&auto=webp&s=1824cd1998593a78992530dc179abf48fe372746', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/11Q8uZ2-M8bnIL12n-O39P2wooNtmpfM5ORG4VqYvik.png?auto=webp&s=6baac864ddf4a5f4d96748181dc4eab6d817b567', 'width': 1200}, 'variants': {}}]} | ||
Big AI pushes the "we need to beat China" narrative cuz they want fat government contracts and zero democratic oversight. It's an old trick. Fear sells. | 156 | Throughout the Cold War, the military-industrial complex spent a fortune pushing the false narrative that the Soviet military was far more advanced than they actually were.
Why? To ensure the money from Congress kept flowing.
They lied… and lied… and lied again to get bigger and bigger defense contracts.
Now, obviously, there is *some* amount of competition between the US and China, but **Big Tech is stoking the flames beyond what is reasonable to terrify Congress into giving them whatever they want.**
What they want is fat government contracts and zero democratic oversight. Day after day we hear about another big AI company announcing a giant contract with the Department of Defense. | 2025-09-17T08:36:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nj7mbu/big_ai_pushes_the_we_need_to_beat_china_narrative/ | katxwoods | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj7mbu | false | null | t3_1nj7mbu | /r/LocalLLaMA/comments/1nj7mbu/big_ai_pushes_the_we_need_to_beat_china_narrative/ | false | false | self | 156 | null |
Best sub 14b llm for long text summaries? | 9 | Speed is not important (can run overnight if really need be) but accuracy really matters to me. I was wondering if there were good 1M or 512K or even 256k context models That I might not be aware of.
I know qwen3 4b instruct has 256k native but im afraid it might not be accurate enough and hallucinate quite a bit due to its size | 2025-09-17T08:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nj78fl/best_sub_14b_llm_for_long_text_summaries/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj78fl | false | null | t3_1nj78fl | /r/LocalLLaMA/comments/1nj78fl/best_sub_14b_llm_for_long_text_summaries/ | false | false | self | 9 | null |
Can I use Cursor Agent (or similar) with a local LLM setup (8B / 13B)? | 6 | Hey everyone,
I want to set up a local LLM (running 8B and possibly 13B parameter models). I was wondering if tools like Cursor Agent (or other AI coding agents) can work directly with my local setup, or if they require cloud-based APIs only.
Basically:
Is it possible to connect Cursor (or any similar coding agent) to a local model?
If not Cursor specifically, are there any good agent frameworks that can plug into local models for tasks like code generation and project automation?
Would appreciate any guidance from folks who’ve tried this. 🙏 | 2025-09-17T08:04:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nj758c/can_i_use_cursor_agent_or_similar_with_a_local/ | BudgetPurple3002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj758c | false | null | t3_1nj758c | /r/LocalLLaMA/comments/1nj758c/can_i_use_cursor_agent_or_similar_with_a_local/ | false | false | self | 6 | null |
Any new SOTA music generation models since ACE-step? | 6 | anyone got the links/repos? And not just papers pls because lots of times they never end up publishing the models.
p.s. in response to this post: [https://www.reddit.com/r/LocalLLaMA/comments/1kg9jkq/new\_sota\_music\_generation\_model/](https://www.reddit.com/r/LocalLLaMA/comments/1kg9jkq/new_sota_music_generation_model/) | 2025-09-17T07:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1nj709e/any_new_sota_music_generation_models_since_acestep/ | utofy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj709e | false | null | t3_1nj709e | /r/LocalLLaMA/comments/1nj709e/any_new_sota_music_generation_models_since_acestep/ | false | false | self | 6 | null |
Deepinfra sudden 2.5x price hike for llama 3.3 70b instruction turbo. How are others coping with this? | 0 | Deepinfra has sent a notification of sudden massive price increase of inference for llama 3.370B model. Overall it’s close to 250% price increase with a one day notice.
This seems unprecedented as my project costs are going way up overnight. Has anyone else got this notice?
Would appreciate any ways to cope with this increase.
People generally don’t expect inference cost to rise in today’s times.
——
DeepInfra is committed to providing high-quality AI model access while maintaining sustainable operations.
We're writing to inform you of upcoming price changes for models you've been using.
1. meta-llama/Llama-3.3-70B-Instruct-Turbo
Current pricing: $0.038/$0.12 in/out Mtoken
New pricing: $0.13/$0.39 in/out Mtoken (still the best price in the market)
Effective date: 2025-09-18
| 2025-09-17T07:20:15 | https://www.reddit.com/r/LocalLLaMA/comments/1nj6h3o/deepinfra_sudden_25x_price_hike_for_llama_33_70b/ | parmarss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj6h3o | false | null | t3_1nj6h3o | /r/LocalLLaMA/comments/1nj6h3o/deepinfra_sudden_25x_price_hike_for_llama_33_70b/ | false | false | self | 0 | null |
STT –> LLM –> TTS pipeline in C | 15 | For **S**peech-**T**o-**T**ext, **L**arge-**L**anguage-**M**odel inference and **T**ext-**T**o-**S**peech I created three wrapper libraries in C/C++ (using [Whisper.cpp](https://github.com/ggml-org/whisper.cpp), [Llama.cpp](https://github.com/ggml-org/llama.cpp) and [Piper](https://github.com/rhasspy/piper)).
They offer pure C interfaces, support Windows and Linux, and are meant to be used on standard consumer hardware.
[mt\_stt](https://github.com/RhinoDevel/mt_stt) for **S**peech-**T**o-**T**ext.
[mt\_llm](https://github.com/RhinoDevel/mt_llm) for **L**arge-**L**anguage-**M**odel inference.
[mt\_tts](https://github.com/RhinoDevel/mt_tts) for **T**ext-**T**o-**S**peech.
An example implementation of an **STT -> LLM -> TTS pipeline** in C can be found [here](https://github.com/RhinoDevel/mt_llm/tree/main/stt_llm_tts-pipeline-example). | 2025-09-17T07:02:35 | https://www.reddit.com/r/LocalLLaMA/comments/1nj673e/stt_llm_tts_pipeline_in_c/ | rhinodevil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj673e | false | null | t3_1nj673e | /r/LocalLLaMA/comments/1nj673e/stt_llm_tts_pipeline_in_c/ | false | false | self | 15 | {'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=108&crop=smart&auto=webp&s=4153aa44c656a73782bda174e591b773ddc6e36c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=216&crop=smart&auto=webp&s=e0ab6caba1f83b64fc4261c35cdd1c207dc01097', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=320&crop=smart&auto=webp&s=d291236aeb3595e81e37d5a851161d456b4f62d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=640&crop=smart&auto=webp&s=1bb25ab86d7861a27a76f7becd66e9bb2ad16b58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=960&crop=smart&auto=webp&s=d3bfbfd376f88c5e056a4b20e0169f54174a4fd5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=1080&crop=smart&auto=webp&s=d1e51a2794c10ba55c541d4d0c34f196b35bd534', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?auto=webp&s=80ebe0788f384793b85c65700a7da57eea184a72', 'width': 1280}, 'variants': {}}]} |
M1 Ultra Mac Studio vs AMD Ryzen AI Max 395+ for local AI? | 10 | # Looking at two options for a local AI sandbox:
1. Mac Studio M1 Ultra - 128GB RAM, 2TB SSD - $2500 (second hand, barely used)
2. AMD Ryzen AI Max 395+ laptop - 128GB RAM, 4TB SSD - $2200 (new)
Main use will be playing around with LLMs, image gen, maybe some video/audio stuff.
The M1 Ultra has way better memory bandwidth (800GB/s) which should help with LLMs, but I'm wondering if the AMD's RDNA 3.5 GPU might be better for other AI workloads? Also not sure about software support differences.
Anyone have experience with either for local AI? What would you pick? | 2025-09-17T06:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nj626b/m1_ultra_mac_studio_vs_amd_ryzen_ai_max_395_for/ | doweig | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj626b | false | null | t3_1nj626b | /r/LocalLLaMA/comments/1nj626b/m1_ultra_mac_studio_vs_amd_ryzen_ai_max_395_for/ | false | false | self | 10 | null |
OpenAI usage breakdown released | 149 | I would have thought image generation would be higher... but this might be skewed by the fact that the 4o image (the whole ghibli craze) only came out in march 2025
[https://www.nber.org/system/files/working\_papers/w34255/w34255.pdf](https://www.nber.org/system/files/working_papers/w34255/w34255.pdf)
[https://www.nber.org/papers/w34255](https://www.nber.org/papers/w34255) | 2025-09-17T06:44:15 | LeatherRub7248 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nj5wk4 | false | null | t3_1nj5wk4 | /r/LocalLLaMA/comments/1nj5wk4/openai_usage_breakdown_released/ | false | false | default | 149 | {'enabled': True, 'images': [{'id': 'njcotg7i7opf1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?width=108&crop=smart&auto=webp&s=a7598d3c22445e5e31c5a911174f988f6bd04d73', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?width=216&crop=smart&auto=webp&s=b390812b3d79431cc043f3c1bfc15cf815dc853b', 'width': 216}, {'height': 200, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?width=320&crop=smart&auto=webp&s=c8b339fa5d3b1185f08f0016f90183a87a771614', 'width': 320}, {'height': 401, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?width=640&crop=smart&auto=webp&s=4e00350509ba0edd32cc7dfb7341451356402cd8', 'width': 640}, {'height': 601, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?width=960&crop=smart&auto=webp&s=fc486374ec13edb94539dd6ca6e49e7e2fb7b868', 'width': 960}], 'source': {'height': 675, 'url': 'https://preview.redd.it/njcotg7i7opf1.png?auto=webp&s=b8e441ad5bbfa39558fdd8916ee3d325430703d2', 'width': 1077}, 'variants': {}}]} | |
Looking for the most reliable AI model for product image moderation (watermarks, blur, text, etc.) | 1 | [removed] | 2025-09-17T06:44:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nj5wgr/looking_for_the_most_reliable_ai_model_for/ | sub_hez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj5wgr | false | null | t3_1nj5wgr | /r/LocalLLaMA/comments/1nj5wgr/looking_for_the_most_reliable_ai_model_for/ | false | false | self | 1 | null |
Looking for the most reliable AI model for product image moderation (watermarks, blur, text, etc.) | 1 | [removed] | 2025-09-17T06:39:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nj5tty/looking_for_the_most_reliable_ai_model_for/ | sub_hez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj5tty | false | null | t3_1nj5tty | /r/LocalLLaMA/comments/1nj5tty/looking_for_the_most_reliable_ai_model_for/ | false | false | self | 1 | null |
Is anyone able to successfully run Qwen 30B Coder BF16? | 4 | With Llama.cpp and the Unsloth GGUFs for Qwen 30B Coder BF16, I am getting frequent crashes on two entirely different systems, a Ryzen AI Max, and a RTX 6000 Blackwell.
Llama.cpp just exits with no error message after a few messages.
VLLM works perfectly on the Blackwell with the official model from Qwen, except tool calling is currently broken, even with the new qwen 3 tool call parser which VLLM added. So the tool call instructions just end up in the chat stream. | 2025-09-17T06:27:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nj5n67/is_anyone_able_to_successfully_run_qwen_30b_coder/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj5n67 | false | null | t3_1nj5n67 | /r/LocalLLaMA/comments/1nj5n67/is_anyone_able_to_successfully_run_qwen_30b_coder/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=108&crop=smart&auto=webp&s=afa2f48195881b61481490c45d03645a5c9448d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=216&crop=smart&auto=webp&s=f7e4afea89262b7442ee932e79e114561f160095', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=320&crop=smart&auto=webp&s=7b76b18ee7a658073e0a4c2cacc08c74f821e563', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=640&crop=smart&auto=webp&s=0e124b59f450acb0356d6dd14c7117f605cc0667', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=960&crop=smart&auto=webp&s=a7340d6d9c210cca97316ce1af2f57b5de305627', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?width=1080&crop=smart&auto=webp&s=84697e25299a9f83ed5abb1a5a2ce8d01be5f75e', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/HOG_usMm8kqi92zgZmu7FVJGNKQdI7ob0W4fRhELRQ0.png?auto=webp&s=f9b8bee9631dbf2f38ed7ab811e1f202403b1106', 'width': 1200}, 'variants': {}}]} |
Help running 2 rtx pro 6000 blackwell with VLLM. | 1 | I have been trying for months trying to get multiple rtx pro 6000 Blackwell GPU's to work for inference.
I tested llama.cpp and .gguf models are not for me.
If anyone has any working solutions or references to posts that solve my problem, it would be greatly appreciated. Thanks! | 2025-09-17T06:19:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nj5igv/help_running_2_rtx_pro_6000_blackwell_with_vllm/ | Devcomeups | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj5igv | false | null | t3_1nj5igv | /r/LocalLLaMA/comments/1nj5igv/help_running_2_rtx_pro_6000_blackwell_with_vllm/ | false | false | self | 1 | null |
Can I use an LLM on a Raspberry Pi 4? (4GB) | 0 | I want to turn my Raspberry Pi into a server that can handle AI requests (exclusively for me) using a local Ollama LLM.
Is it possible? And, if so, which model can I install given the specs?
Thanks 🙏 | 2025-09-17T05:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nj4m2f/posso_usare_un_llm_su_raspbarry_pi_4_4gb/ | tombino104 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj4m2f | false | null | t3_1nj4m2f | /r/LocalLLaMA/comments/1nj4m2f/posso_usare_un_llm_su_raspbarry_pi_4_4gb/ | false | false | self | 0 | null |
Thread for CPU-only LLM performance comparison | 68 | Hi everyone,
I could not find any recent posts about CPU only performance comparison of different CPUs. With recent advancements in CPUs, we are seeing incredible memory bandwidth speeds with DDR5 6400 12 channel EPYC 9005 (614.4 GB/s theoretical bw). AMD also announced that Zen 6 CPUs will have 1.6TB/s memory bw. The future of CPUs looks exciting. But for now, I wanted to test what we already have. I need your help to see where we stand with CPUs currently.
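As a sanity check on those theoretical numbers (my own back-of-the-envelope sketch, not part of the benchmark): peak DDR bandwidth is transfer rate times 8 bytes per 64-bit channel times channel count.

```python
# Back-of-the-envelope peak DRAM bandwidth: MT/s x 8 bytes/channel x channels.
def theoretical_bw_gbs(mt_per_s: int, channels: int) -> float:
    """Peak bandwidth in GB/s for a DDR configuration with 64-bit channels."""
    return mt_per_s * 8 * channels / 1000.0  # MB/s -> GB/s

print(theoretical_bw_gbs(6400, 12))  # 12-channel DDR5-6400 (EPYC 9005): 614.4
print(theoretical_bw_gbs(3200, 8))   # 8-channel DDR4-3200 (EPYC 7532): 204.8
```

Useful for putting the measured TG numbers in this thread into perspective against each platform's theoretical ceiling.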
For this CPU-only comparison, I want to use ik\_llama - [https://github.com/ikawrakow/ik\_llama.cpp](https://github.com/ikawrakow/ik_llama.cpp). I compiled and tested both ik\_llama and llama.cpp with MoE models like Qwen3 30B3A Q4\_1, gpt-oss 120B Q8 and Qwen3 235B Q4\_1. ik\_llama is at least 2x faster at prompt processing (PP) and 50% faster at text generation (TG).
For this benchmark, I used Qwen3 30B3A Q4\_1 (19.2GB) (https://huggingface.co/unsloth/Qwen3-30B-A3B-GGUF/blob/main/Qwen3-30B-A3B-Q4\_1.gguf) and ran ik\_llama in Ubuntu 24.04.3.
ik\_llama installation:
git clone https://github.com/ikawrakow/ik_llama.cpp.git
cd ik_llama.cpp
cmake -B build
cmake --build build --config Release -j $(nproc)
llama-bench benchmark (make sure GPUs are disabled with CUDA\_VISIBLE\_DEVICES="" just in case you compiled for GPUs):
CUDA_VISIBLE_DEVICES="" ./build/bin/llama-bench -m /media/ai-llm/wd_2t/models/Qwen3-30B-A3B-Q4_1.gguf --threads 32
| model | size | params | backend | threads | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ---: | ------------: | ---------------: |
| qwen3moe ?B Q4_1 | 17.87 GiB | 30.53 B | CPU | 32 | 0 | pp512 | 263.02 ± 2.53 |
| qwen3moe ?B Q4_1 | 17.87 GiB | 30.53 B | CPU | 32 | 0 | tg128 | 38.98 ± 0.16 |
build: 6d2e7ca4 (3884)
GPT-OSS 120B:
CUDA_VISIBLE_DEVICES="" ./build/bin/llama-bench -m /media/ai-llm/wd_2t/models/GPT_OSS_120B_UD-Q8_K_XL/gpt-oss-120b-UD-Q8_K_XL-00001-of-00002.gguf -mmp 0 --threads 32
| model | size | params | backend | threads | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ---: | ------------: | ---------------: |
| gpt-oss ?B Q8_0 | 60.03 GiB | 116.83 B | CPU | 32 | 0 | pp512 | 163.24 ± 4.46 |
| gpt-oss ?B Q8_0 | 60.03 GiB | 116.83 B | CPU | 32 | 0 | tg128 | 24.77 ± 0.42 |
build: 6d2e7ca4 (3884)
So, the requirements for this benchmark are simple:
* Required: use CPU only inference (No APUs, NPUs, or built-in GPUs allowed)
* use ik\_llama (any recent version) if possible, since llama.cpp will be slower and understate your CPU's performance
* Required model: Run the standard llama-bench benchmark with Qwen3-30B-A3B-Q4\_1.gguf (2703 version should also be fine as long as it is Q4\_1) and share the command with output in the comments as I shared above.
* Optional (not required but good to have): run CPU only benchmark with GPT-OSS 120B (file here: https://huggingface.co/unsloth/gpt-oss-120b-GGUF/tree/main/UD-Q8\_K\_XL) and share the command with output in the comments.
I will start by adding my CPU performance in this table below.
|Motherboard|CPU (physical cores)|RAM size and type |channels|Qwen3 30B3A Q4\_1 TG|Qwen3 30B3A Q4\_1 PP|
|:-|:-|:-|:-|:-|:-|
|AsRock ROMED8-2T|AMD EPYC 7532 (32 cores) |8x32GB DDR4 3200Mhz|8|39.98|263.02|
|||||||
I will check comments daily and keep updating the table.
This awesome community is the best place to collect such performance metrics.
Thank you! | 2025-09-17T05:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nj4axf/thread_for_cpuonly_llm_performance_comparison/ | MLDataScientist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj4axf | false | null | t3_1nj4axf | /r/LocalLLaMA/comments/1nj4axf/thread_for_cpuonly_llm_performance_comparison/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=108&crop=smart&auto=webp&s=e6a381765eeeda503ab47f41be9f5c9cc9b3a2c5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=216&crop=smart&auto=webp&s=dfc3d2c4d30ef5203737156f57ef424271d236de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=320&crop=smart&auto=webp&s=78d4578da1b2d2bd3e56d2366d79fd640f76422b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=640&crop=smart&auto=webp&s=afbc5e32817532932cd8459b5a0c26f5419aef32', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=960&crop=smart&auto=webp&s=e1e71be58ccd535b8a47aca386bd809572edf9d0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?width=1080&crop=smart&auto=webp&s=6152476f401f1d69cf4e9fab7e478165509e474d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i7jANKDhLeKMah3cSMR1-1SzHuWdmd-kZo-aRTGOpPI.png?auto=webp&s=fa1a5b79cee48d56d636f5180383541ad11ab541', 'width': 1200}, 'variants': {}}]} |
embeddinggemma with Qdrant compatible uint8 tensors output | 12 | I hacked on the int8-sized community ONNX model of emnbeddinggemma to get it to output uint8 tensors which are compatible with Qdrant. For some reason it benchmarks higher than the base model on all but one of the NanoBEIR benchmarks.
benchmarks and info here:
https://huggingface.co/electroglyph/embeddinggemma-300m-ONNX-uint8 | 2025-09-17T05:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nj48gh/embeddinggemma_with_qdrant_compatible_uint8/ | terminoid_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj48gh | false | null | t3_1nj48gh | /r/LocalLLaMA/comments/1nj48gh/embeddinggemma_with_qdrant_compatible_uint8/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=108&crop=smart&auto=webp&s=7223088bba809b6e72f98e03c7f9e1acfea4ad42', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=216&crop=smart&auto=webp&s=8371a23a62e078a351fbd8ae7572f8f8cbbb4be2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=320&crop=smart&auto=webp&s=c5856fe55c5513ab45fb63d1d5d5f97f21168a01', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=640&crop=smart&auto=webp&s=12531bb06b561b157f080668190363b620a14669', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=960&crop=smart&auto=webp&s=beffee3dfe2958e2722367958821ac1c1bffce3b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?width=1080&crop=smart&auto=webp&s=a78ff8b82554722500e49feda112bf895d00c14e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NFt-dtPL2iHCSx-mArPgbc6MjEOeWCd4Xmix91LSE38.png?auto=webp&s=8822ca93c6e279f08100e8601e3f96c2e0298b32', 'width': 1200}, 'variants': {}}]} |
Good for training and inference locally ?? | 0 | Purpose: Multiples VMs , AI workloads ( inference , stable diffusion , etc )
Processor: Core Ultra 7 265KF (20C/20T)
MotherBoard: Gigabyte Z890M Aorus Elite Wifi7 Motherboard
Ram: Crucial (96GB) 48GBx2 DDR5 5200 Mhz
GPU: ZOTAC GeForce RTX 5070 Ti 16gb
Storage: WD\_Black SN7100 2TB
Cooler: 360mm AIO (deepcool)
Cabinet: High Airflow - 2 x 120mm Fans Included - 360mm Top Radiator Support
SMPS: Gigabyte 850w Gold Plus rating | 2025-09-17T04:53:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nj41e0/good_for_training_and_inference_locally/ | Mobile_Bread6664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj41e0 | false | null | t3_1nj41e0 | /r/LocalLLaMA/comments/1nj41e0/good_for_training_and_inference_locally/ | false | false | self | 0 | null |
ASUS Ascent GX10 Compact AI Supercomputer Now Available for Preorder | 3 | `The ASUS Ascent GX10 is a compact AI supercomputer built on the NVIDIA GB10 Grace Blackwell Superchip with a unified CPU+GPU memory model and NVIDIA’s AI software stack. Introduced in March 2025, it targets developers, researchers, and data scientists needing petaflop-scale performance in a desktop system with scalable deployment options.`
#
[https://linuxgizmos.com/asus-ascent-gx10-compact-ai-supercomputer-now-available-for-preorder/](https://linuxgizmos.com/asus-ascent-gx10-compact-ai-supercomputer-now-available-for-preorder/) | 2025-09-17T03:56:10 | https://www.reddit.com/r/LocalLLaMA/comments/1nj2yix/asus_ascent_gx10_compact_ai_supercomputer_now/ | DeliciousBelt9520 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj2yix | false | null | t3_1nj2yix | /r/LocalLLaMA/comments/1nj2yix/asus_ascent_gx10_compact_ai_supercomputer_now/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?width=108&crop=smart&auto=webp&s=808f2c4ce73c54a8133fe48e1f2e77343d5ad2ca', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?width=216&crop=smart&auto=webp&s=f67d6271cb059570d91be7de8380182e590e9ad1', 'width': 216}, {'height': 303, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?width=320&crop=smart&auto=webp&s=06e0d55b63f3dabff7f3a8482215c6434daec049', 'width': 320}, {'height': 606, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?width=640&crop=smart&auto=webp&s=6a9ff5870db59023af047f156f9c62ec02281f70', 'width': 640}, {'height': 909, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?width=960&crop=smart&auto=webp&s=992a1e03b0ade8a2f882279ab84524df2a1b643c', 'width': 960}], 'source': {'height': 923, 'url': 'https://external-preview.redd.it/P8tf6o1zOdM_sV8wriDGzlKhu4fbihsuGtrPW1zi_cY.jpeg?auto=webp&s=36af8a3657dab33dd098dd9808e5bfbbe9a0c64a', 'width': 974}, 'variants': {}}]} |
ASUS Ascent GX10 Compact AI Supercomputer Now Available for Preorder | 1 | `The ASUS Ascent GX10 is a compact AI supercomputer built on the NVIDIA GB10 Grace Blackwell Superchip with a unified CPU+GPU memory model and NVIDIA’s AI software stack. Introduced in March 2025, it targets developers, researchers, and data scientists needing petaflop-scale performance in a desktop system with scalable deployment options.`
#
[https://linuxgizmos.com/asus-ascent-gx10-compact-ai-supercomputer-now-available-for-preorder/](https://linuxgizmos.com/asus-ascent-gx10-compact-ai-supercomputer-now-available-for-preorder/) | 2025-09-17T03:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nj2xik/asus_ascent_gx10_compact_ai_supercomputer_now/ | thr0w_away_acc_123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj2xik | false | null | t3_1nj2xik | /r/LocalLLaMA/comments/1nj2xik/asus_ascent_gx10_compact_ai_supercomputer_now/ | false | false | self | 1 | null |
Where does MLX install hugging face LLMs on my Mac with uv? | 0 | I went through [THIS](https://www.reddit.com/r/LocalLLaMA/comments/1mpdwcw/how_to_run_mlxoptimized_models_on_apple_gets_best/) tutorial to get MLX running on my MacOS box, which is great; it goes through installing uv with Brew, then MLX, but I can't seem to locate where the automatically downloaded LLM models live? | 2025-09-17T03:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nj2uky/where_does_mlx_install_hugging_face_llms_on_my/ | ChevChance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj2uky | false | null | t3_1nj2uky | /r/LocalLLaMA/comments/1nj2uky/where_does_mlx_install_hugging_face_llms_on_my/ | false | false | self | 0 | null |
Can PCIE X16 Gen4 SlimSAS 8i x2 Adapters be powered by a second PSU ? or does it need the same PSU that powers the motherboard ? | 6 | 2025-09-17T03:24:04 | d00m_sayer | i.ebayimg.com | 1970-01-01T00:00:00 | 0 | {} | 1nj2boq | false | null | t3_1nj2boq | /r/LocalLLaMA/comments/1nj2boq/can_pcie_x16_gen4_slimsas_8i_x2_adapters_be/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'xrA57BXNgzADT1IwFf9XhSXX-WUrJB9uRPnAxvuh1YM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xrA57BXNgzADT1IwFf9XhSXX-WUrJB9uRPnAxvuh1YM.png?width=108&crop=smart&auto=webp&s=8f29f368b707e26ecba28e86f2977e016a4e0a2d', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/xrA57BXNgzADT1IwFf9XhSXX-WUrJB9uRPnAxvuh1YM.png?width=216&crop=smart&auto=webp&s=53b3bd5894b887cb64190c06b0474077b69ac170', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/xrA57BXNgzADT1IwFf9XhSXX-WUrJB9uRPnAxvuh1YM.png?width=320&crop=smart&auto=webp&s=7ae795507165aef738c1ead93218dbc30fc452c4', 'width': 320}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/xrA57BXNgzADT1IwFf9XhSXX-WUrJB9uRPnAxvuh1YM.png?auto=webp&s=1e5e053ca76a5f87940d5441973a6afc6b6fe226', 'width': 400}, 'variants': {}}]} | ||
Modding guide for adding memory to RTX 4090 to 48GB | 46 | 2025-09-17T03:02:54 | https://www.techpowerup.com/forums/threads/nvidia-geforce-rtx-4090-gets-a-48-gb-mod-and-step-by-step-tutorial.340880/ | kaggleqrdl | techpowerup.com | 1970-01-01T00:00:00 | 0 | {} | 1nj1wfk | false | null | t3_1nj1wfk | /r/LocalLLaMA/comments/1nj1wfk/modding_guide_for_adding_memory_to_rtx_4090_to/ | false | false | default | 46 | null | |
Subscribe to knowledgeLM: | 0 | https://youtu.be/QWl8tLLx6tc?si=r3Cg-qbe30PZ2ouc | 2025-09-17T02:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nj1ndl/subscribe_to_knowledgelm/ | Other_Vehicle_4530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj1ndl | false | null | t3_1nj1ndl | /r/LocalLLaMA/comments/1nj1ndl/subscribe_to_knowledgelm/ | false | false | self | 0 | null |
one day, this will work 🤞 | 7 | 2025-09-17T02:43:16 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nj1htj | false | null | t3_1nj1htj | /r/LocalLLaMA/comments/1nj1htj/one_day_this_will_work/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'mmwwmtdh1npf1', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=108&crop=smart&auto=webp&s=3fc2280644370e061186ac731d01e5a98a387d0f', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=216&crop=smart&auto=webp&s=b1a536cbe0f0c5ced8c11fdf8cdae3a621a5f670', 'width': 216}, {'height': 181, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=320&crop=smart&auto=webp&s=24ab4325c0f23f901bdcad6e9ea7c47c9e878715', 'width': 320}, {'height': 362, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=640&crop=smart&auto=webp&s=70bca1b832c2d49c694f77abe7ad5ee6414235e3', 'width': 640}, {'height': 544, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=960&crop=smart&auto=webp&s=6e2ae8b4868055d5ff735868bf3365687b770c7b', 'width': 960}, {'height': 612, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?width=1080&crop=smart&auto=webp&s=4fdb5526e539408445e67d1eace6e775bc2ed7a3', 'width': 1080}], 'source': {'height': 684, 'url': 'https://preview.redd.it/mmwwmtdh1npf1.jpeg?auto=webp&s=3f2b6972e3b8b1a621008d83badb810709e3f274', 'width': 1206}, 'variants': {}}]} | ||
used gaming machine vs new ai max+ ? | 6 | My existing desktop believes that cutting edge storage technology is chiselling things into stone tablets, so it's time to upgrade to the current millennium. I haven't yet played with local LLMs, but I want to run a local LLM general assistant to learn more about this, and to have better control of my data. I also want the ability to do some image generation, though I'm unsure how much I'll use that part.
I'm a linux user, and this will be my main desktop in addition to AI use, I'm not really a gamer though, so the rest of my usage is not too resource intensive (hence surviving thus far on ancient tech).
My budget is about $3,000-$4,000 CAD (about $2,000-$3,000 USD). I'm seeing some nice used machines on marketplace with RTX 4060ti through RTX 5080 available in that price range with decent specs otherwise
But I'm also hearing hype about the new AMD ai max+ machines which also seem to fit the budget, and I sure like the idea of the lower power use, especially given that the rest of my non-ai use won't be too resource intensive.
I'm hearing 2 conflicting things for AI though:
1) the only thing that matters is vram, nothing else matters
2) you must use nvidia, that's all that matters
So obviously the ai max+ has a ton more vram than any nvidia card I can afford, but it's not nvidia... so how much priority should I put on 1) vs 2)? | 2025-09-17T02:33:32 | https://www.reddit.com/r/LocalLLaMA/comments/1nj1anf/used_gaming_machine_vs_new_ai_max/ | green__1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nj1anf | false | null | t3_1nj1anf | /r/LocalLLaMA/comments/1nj1anf/used_gaming_machine_vs_new_ai_max/ | false | false | self | 6 | null |
ArchGW 0.3.12 🚀 Model aliases: allow clients to use friendly, semantic names and swap out underlying models without changing application code. | 8 | I added this lightweight abstraction to [archgw](https://github.com/katanemo/archgw) to decouple app code from specific model names. Instead of sprinkling hardcoded model names like`gpt-4o-mini` or `llama3.2` everywhere, you point to an *alias* that encodes intent, and allows you to test new models, swap out the config safely without having to do codewide search/replace every time you want to experiment with a new model or version.
arch.summarize.v1 → cheap/fast summarization
arch.v1 → default “latest” general-purpose model
arch.reasoning.v1 → heavier reasoning
The app calls the alias, not the vendor. Swap the model in config, and the entire system updates without touching code. Of course, you would want to use compatible models: if you map an embedding model to an alias when the application expects a chat model, it won't be a good day.
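Conceptually, alias resolution looks something like this toy sketch (my own illustration, not archgw's actual code): the app asks for a semantic name, and the config decides which concrete model serves the request, including weighted traffic splits.

```python
import random

# Hypothetical alias table mirroring the config format: alias -> weighted targets.
ALIASES = {
    "arch.summarize.v1": [("gpt-4o-mini", 1.0)],
    "arch.v1": [("llama3.2", 0.8), ("gpt-4o-mini", 0.2)],
}

def resolve(alias: str) -> str:
    """Pick a concrete model for a semantic alias, honoring traffic weights."""
    models, weights = zip(*ALIASES[alias])
    return random.choices(models, weights=weights, k=1)[0]

print(resolve("arch.summarize.v1"))  # always gpt-4o-mini
```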
Where are we headed with this...
* Guardrails -> Apply safety, cost, or latency rules at the alias level: arch.reasoning.v1:
arch.reasoning.v1:
target: gpt-oss-120b
guardrails:
max_latency: 5s
      block_categories: ["jailbreak", "PII"]
* Fallbacks -> Provide a chain if a model fails or hits quota:
arch.summarize.v1:
target: gpt-4o-mini
fallback: llama3.2
* Traffic splitting & canaries -> Let an alias fan out traffic across multiple targets:
arch.v1:
targets:
- model: llama3.2
weight: 80
- model: gpt-4o-mini
weight: 20 | 2025-09-17T02:04:53 | AdditionalWeb107 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nj0p7r | false | null | t3_1nj0p7r | /r/LocalLLaMA/comments/1nj0p7r/archgw_0312_model_aliases_allow_clients_to_use/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'y12ej5klrmpf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=108&crop=smart&auto=webp&s=516718abf300e14f99c734b9539543fff463f893', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=216&crop=smart&auto=webp&s=b4649413d2bf7ca322ebe0c4e762b242f9780c65', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=320&crop=smart&auto=webp&s=a7e12efe786869d40fc5a02cbf4282a31388d4b3', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=640&crop=smart&auto=webp&s=49d0c97a765a892fb138b713ad332655f5cddfab', 'width': 640}, {'height': 518, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=960&crop=smart&auto=webp&s=aaf691f31b4e0c2a56191729dd04a695592de1b8', 'width': 960}, {'height': 583, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?width=1080&crop=smart&auto=webp&s=efd6c84e4b3e0efab40db9ef9debf7c40225788f', 'width': 1080}], 'source': {'height': 716, 'url': 'https://preview.redd.it/y12ej5klrmpf1.png?auto=webp&s=da3d13653489b0e018f88a59e3aaeec8be610553', 'width': 1326}, 'variants': {}}]} | |
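The alias mechanics described in the post above boil down to a lookup with weighted target selection plus a fallback chain. A minimal sketch in plain Python — not archgw's actual implementation; the alias names and weights just mirror the config snippets in the post, and the `healthy` check is an assumed stand-in for quota/failure detection:

```python
import random

# Hypothetical alias table mirroring the config sketches in the post;
# this illustrates the idea, not archgw's real resolution logic.
ALIASES = {
    "arch.summarize.v1": {"targets": [("gpt-4o-mini", 1.0)], "fallback": "llama3.2"},
    "arch.v1": {"targets": [("llama3.2", 0.8), ("gpt-4o-mini", 0.2)], "fallback": None},
}

def resolve(alias, healthy=lambda model: True, rng=random.random):
    """Pick a target by weight, then fall back if the chosen model is unhealthy."""
    entry = ALIASES[alias]
    r = rng() * sum(w for _, w in entry["targets"])
    chosen = entry["targets"][-1][0]  # default to the last target
    for model, weight in entry["targets"]:
        r -= weight
        if r <= 0:
            chosen = model
            break
    if not healthy(chosen) and entry["fallback"]:
        return entry["fallback"]
    return chosen

print(resolve("arch.summarize.v1"))                           # gpt-4o-mini
print(resolve("arch.summarize.v1", healthy=lambda m: False))  # llama3.2
```

The application only ever passes the alias string, so swapping `gpt-4o-mini` for something else stays a one-line config change.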
New stealth model Zenith Alpha on Design Arena | 0 | A new cloaked model named Zenith Alpha has emerged on Design Arena. It's performed pretty well in recent votes, and it's been especially good at subtle animations.
[First Place: Zenith Alpha](https://reddit.com/link/1niz8cg/video/1azohap2impf1/player)
[Second Place: Claude Opus 4](https://preview.redd.it/h8y8huy9impf1.png?width=3420&format=png&auto=webp&s=c7d6730a37ab8a27d60f56e7a666cf8d6a0a5f8c)
[Third Place: Qwen3 235B Thinking](https://preview.redd.it/fxled2maimpf1.png?width=3420&format=png&auto=webp&s=4e630780742bbaa45c6450163831ea41dc04b35d)
Any guesses?
| 2025-09-17T00:56:31 | https://www.reddit.com/r/LocalLLaMA/comments/1niz8cg/new_stealth_model_zenith_alpha_on_design_arena/ | grx_xce | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niz8cg | false | null | t3_1niz8cg | /r/LocalLLaMA/comments/1niz8cg/new_stealth_model_zenith_alpha_on_design_arena/ | false | false | 0 | null | |
LING-MINI-2 QUANTIZED | 9 | While we wait for llama.cpp quantization support, we can use the chatllm.cpp library
[https://huggingface.co/RiverkanIT/Ling-mini-2.0-Quantized/tree/main](https://huggingface.co/RiverkanIT/Ling-mini-2.0-Quantized/tree/main) | 2025-09-17T00:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1niz3yk/lingmini2_quantized/ | Chance_Camp3720 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niz3yk | false | null | t3_1niz3yk | /r/LocalLLaMA/comments/1niz3yk/lingmini2_quantized/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=108&crop=smart&auto=webp&s=d145342713200a7d4962af06c8502deb77759029', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=216&crop=smart&auto=webp&s=ab7b48b77f4eade2e6b0fd0f10806e8c0d903812', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=320&crop=smart&auto=webp&s=2a1c3d43cec9ff57bf85b6cdfde7ecd1927d80eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=640&crop=smart&auto=webp&s=f56512a96515a9cbd2ac27dadb1ddf387ae2f6d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=960&crop=smart&auto=webp&s=b5299ecfc3e80c4523f3598cedd23a6538b39e83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?width=1080&crop=smart&auto=webp&s=b35b41188d5bd205c16e5eb7f35fb322518e824d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gCq6UgNnWLMKZ95gyJA-6li8gNorkFW8BnjLqpthjQo.png?auto=webp&s=294f9a621f61be78020067a545132c8b2c0a238c', 'width': 1200}, 'variants': {}}]} |
The best fine-tunable real time TTS | 14 | I am searching for a good open-source TTS model to fine-tune on a 1-hour dataset of a specific voice. I find that Kokoro is good, but I couldn’t find documentation about its fine-tuning. Also, if the model supports non-verbal expressions such as [laugh], [sigh], etc., that would be better (not a requirement). | 2025-09-17T00:36:14 | https://www.reddit.com/r/LocalLLaMA/comments/1niysfm/the_best_finetunable_real_time_tts/ | AwkwardBoysenberry26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niysfm | false | null | t3_1niysfm | /r/LocalLLaMA/comments/1niysfm/the_best_finetunable_real_time_tts/ | false | false | self | 14 | null |
The best fine-tunable TTS on a specific voice. | 1 | [removed] | 2025-09-17T00:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1niyojw/the_best_finetunable_tts_on_a_specific_voice/ | ReasonableSun2380 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niyojw | false | null | t3_1niyojw | /r/LocalLLaMA/comments/1niyojw/the_best_finetunable_tts_on_a_specific_voice/ | false | false | self | 1 | null |
New method to retrain neural nets with llm POC script | 5 | [https://colab.research.google.com/drive/1bA9n3615\_\_9mUN7YIeIo-lWG-8HIW9d6?usp=sharing](https://colab.research.google.com/drive/1bA9n3615__9mUN7YIeIo-lWG-8HIW9d6?usp=sharing)
I just finished working on a technique I thought of to retrain networks, as opposed to traditionally continuing standard training in some form. As a demonstration I have included a script testing the method, which is quite interesting and successful. The technique works on par with SFT but converges more quickly in my experience, though I am still benchmarking. I'd love community input! Specifically, I'm really curious if anyone has tried to retrain models before? | 2025-09-17T00:24:10 | https://github.com/arccoxx/OpposedGradientProjection/ | arcco96 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1niyj2k | false | null | t3_1niyj2k | /r/LocalLLaMA/comments/1niyj2k/new_method_to_retrain_neural_nets_with_llm_poc/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=108&crop=smart&auto=webp&s=d65405bdaa932d07e40cc2f8152247178ec3d4b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=216&crop=smart&auto=webp&s=bca9383dc9d13042f9d8c256552a2eeab8744352', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=320&crop=smart&auto=webp&s=2360106904c38d236ccf5609ddd8d3e5d600e4b9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=640&crop=smart&auto=webp&s=5152abbef6baea84ca587bfc2c82439b82be6e47', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=960&crop=smart&auto=webp&s=027dfd9b550ecd44c3b3b167eadd987970fd2678', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?width=1080&crop=smart&auto=webp&s=e54a6cbbd85edfd2a668b8a6b308e6f2dfaafd2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xrHXCE_yNOQCWHuep3_1GcvRtjU3V6vsFY0P6AAfmRA.png?auto=webp&s=e788eaf0e8a1131109ce13f03bf16b0449a0870a', 'width': 1200}, 'variants': {}}]}
Need a list of vision CLIP-like models, from tiny to huge, locally | 1 | I'm looking into making my own photo search feature | 2025-09-17T00:10:33 | https://www.reddit.com/r/LocalLLaMA/comments/1niy8eo/need_a_list_of_vision_clip_like_models_from_tiny/ | Commercial-Ad-1148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niy8eo | false | null | t3_1niy8eo | /r/LocalLLaMA/comments/1niy8eo/need_a_list_of_vision_clip_like_models_from_tiny/ | false | false | self | 1 | null |
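Whatever CLIP-like model ends up producing the embeddings for the photo-search use case above, the search step itself is just nearest-neighbour ranking by cosine similarity. A minimal sketch with made-up 4-dim vectors standing in for real image/text embeddings:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_k(query_vec, image_vecs, k=3):
    """Indices of the k images most similar to the query embedding, best first."""
    ranked = sorted(range(len(image_vecs)), key=lambda i: -cosine(query_vec, image_vecs[i]))
    return ranked[:k]

# Toy "embeddings" standing in for real CLIP outputs (e.g. a text query
# embedded into the same space as precomputed image vectors).
images = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.9, 0.1, 0.0, 0.0]]
query = [1.0, 0.05, 0.0, 0.0]
print(top_k(query, images, k=2))  # [0, 2]
```

In practice you would precompute the image vectors once and keep them in a vector index; models from tiny CLIP ViT-B/32 variants up to large SigLIP-style encoders all fit this same retrieval pattern.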
Pluely Lightweight (~10MB) Open-Source Desktop App to Use Any Local LLM with Audio, Screenshots, and More! | 1 | [removed] | 2025-09-16T23:58:32 | iam-neighbour | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nixyv6 | false | null | t3_1nixyv6 | /r/LocalLLaMA/comments/1nixyv6/pluely_lightweight_10mb_opensource_desktop_app_to/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'zvwfli-KSrQ4GeHZr0hw00slPg7XCFk9hfKfN2ChRf8', 'resolutions': [{'height': 16, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=108&crop=smart&auto=webp&s=1e3755105254c447aa4030c9e1ab833ae61ac7bf', 'width': 108}, {'height': 33, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=216&crop=smart&auto=webp&s=4c739bd37374d0f6c4f698a0268b49fbf2a8264c', 'width': 216}, {'height': 49, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=320&crop=smart&auto=webp&s=5654874159df448963271d3a9d1ec9b858b7aef1', 'width': 320}, {'height': 98, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=640&crop=smart&auto=webp&s=94da6f0f9bc54c12662f16616cc7c41bf40cb643', 'width': 640}, {'height': 147, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=960&crop=smart&auto=webp&s=fe1186f566fefd1665d0732445bb3863fa6fe610', 'width': 960}, {'height': 165, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?width=1080&crop=smart&auto=webp&s=0743cada1f350cfc507de1b498d89ad92bbdc622', 'width': 1080}], 'source': {'height': 219, 'url': 'https://preview.redd.it/1eo52heg6mpf1.png?auto=webp&s=612dc2563a3aaa18694e8e76321ff1bd32a15e1e', 'width': 1430}, 'variants': {}}]} | ||
The Qwen of Pain. | 681 | 2025-09-16T23:58:16 | -Ellary- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nixynv | false | null | t3_1nixynv | /r/LocalLLaMA/comments/1nixynv/the_qwen_of_pain/ | false | false | default | 681 | {'enabled': True, 'images': [{'id': '0px1banw6mpf1', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=108&crop=smart&auto=webp&s=f1eabb3dd9162dcf52d9f16ed4e4d685f2cfc4fa', 'width': 108}, {'height': 96, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=216&crop=smart&auto=webp&s=237f1850781080aef46f1da64955ad5ef5b2b19e', 'width': 216}, {'height': 143, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=320&crop=smart&auto=webp&s=c6c32f0dc84f655d9916d0ebff8c85a290a7fdb8', 'width': 320}, {'height': 287, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=640&crop=smart&auto=webp&s=8edc833e57220e0c00a8b11ba32c881974742ef1', 'width': 640}, {'height': 430, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=960&crop=smart&auto=webp&s=4a4919c4cf0c73cb34112715b4cd49606ec869bd', 'width': 960}, {'height': 484, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?width=1080&crop=smart&auto=webp&s=56066e9b077cd089cc7ad9d93948ba3d00d2f627', 'width': 1080}], 'source': {'height': 500, 'url': 'https://preview.redd.it/0px1banw6mpf1.jpeg?auto=webp&s=6f77fc0192a5473a575acef74e6d074de8783d12', 'width': 1114}, 'variants': {}}]} | ||
Llm suggestion | 0 | Hi everyone,
I recently built a server setup with an RTX 3090 that has 24 GB of VRAM. I’d really like to experiment with some image-to-image models and was wondering if you could recommend a good model to get started with.
Any suggestions, tips, or personal experiences would be greatly appreciated!
Thanks in advance | 2025-09-16T23:46:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nixp4i/llm_suggestion/ | Forward-Conference28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nixp4i | false | null | t3_1nixp4i | /r/LocalLLaMA/comments/1nixp4i/llm_suggestion/ | false | false | self | 0 | null |
Stop fine tuning, use RAG | 1 | [removed] | 2025-09-16T22:56:33 | https://www.reddit.com/r/LocalLLaMA/comments/1niwjn1/stop_fine_tuning_use_rag/ | AdmirableJackfruit59 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niwjn1 | false | null | t3_1niwjn1 | /r/LocalLLaMA/comments/1niwjn1/stop_fine_tuning_use_rag/ | false | false | self | 1 | null |
500,000 public datasets on Hugging Face | 234 | 2025-09-16T22:46:45 | clem59480 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1niwb8l | false | null | t3_1niwb8l | /r/LocalLLaMA/comments/1niwb8l/500000_public_datasets_on_hugging_face/ | false | false | default | 234 | {'enabled': True, 'images': [{'id': 'rokftav6vlpf1', 'resolutions': [{'height': 130, 'url': 'https://preview.redd.it/rokftav6vlpf1.png?width=108&crop=smart&auto=webp&s=a9b95ff1a68d88b44aa2f3316a7ebf54f18454e5', 'width': 108}, {'height': 260, 'url': 'https://preview.redd.it/rokftav6vlpf1.png?width=216&crop=smart&auto=webp&s=0d3cb5cf6501e8a7cf81a471b586b4324ab8b74e', 'width': 216}, {'height': 386, 'url': 'https://preview.redd.it/rokftav6vlpf1.png?width=320&crop=smart&auto=webp&s=4945718488281601dd88f82e10c4aa2ccae95a9d', 'width': 320}, {'height': 773, 'url': 'https://preview.redd.it/rokftav6vlpf1.png?width=640&crop=smart&auto=webp&s=26f96c62b0cfcf4ab8d9a212645ed0b0f54e16e2', 'width': 640}], 'source': {'height': 964, 'url': 'https://preview.redd.it/rokftav6vlpf1.png?auto=webp&s=28b2dd276a88d8840dcd47cb92195ea8da9fae2d', 'width': 798}, 'variants': {}}]} | ||
General llm <8b | 0 | Hi,
I’m looking for an LLM that is good for general knowledge and fast to respond. With my setup and after several tests, I found that 8B or smaller (Q4, though I was thinking about going with Q4) models work best. The smaller, the better (when my ex-girlfriend used to say that, I didn’t believe her, but now I agree).
I tried LLaMA 3.1, but some answers were wrong or just not good enough for me.
Then I tried Qwen3, which is better — I like it, but it takes a long time to think, even for simple questions like “Is it better to shut down the PC or put it to sleep at night?” — and it took 11 seconds to answer that. Maybe it’s normal and I have just to keep it, idk 🤷🏼♂️
What do you suggest? Should I try changing some configuration on Qwen3 or should I try another LLM? I’m using Ollama as my primary service to run LLMs.
Thanks, everyone 👋 | 2025-09-16T22:46:28 | https://www.reddit.com/r/LocalLLaMA/comments/1niwaz5/general_llm_8b/ | BigTias | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niwaz5 | false | null | t3_1niwaz5 | /r/LocalLLaMA/comments/1niwaz5/general_llm_8b/ | false | false | self | 0 | null |
We got a 2B param model running on iPhone at ~500MB RAM — fully offline demo | 229 | Ongoing research out of Derive DX Labs in Lafayette, Louisiana. We’ve been experimenting with efficiency optimizations and managed to get a 2B parameter chain-of-thought model running on iPhone with \~400–500MB RAM, fully offline.
I’m not super active on Reddit, so please don’t kill me if I’m slow to respond to comments — but I’ll do my best to answer questions.
\[Correction: Meant Gemma-3N not Gemini-3B\] | 2025-09-16T22:32:32 | https://v.redd.it/6rczu79aslpf1 | Josiahhenryus | /r/LocalLLaMA/comments/1nivz2n/we_got_a_2b_param_model_running_on_iphone_at/ | 1970-01-01T00:00:00 | 0 | {} | 1nivz2n | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/6rczu79aslpf1/DASHPlaylist.mpd?a=1760783557%2CMzk2NDc3NDc1MjY1NzkwMzFjYzc1ZDM4MGI5ODhlMjQwODRhMzE3NmZlYjVkZjVjMjhkYTNmYzQ3YzA3ZGVhMQ%3D%3D&v=1&f=sd', 'duration': 186, 'fallback_url': 'https://v.redd.it/6rczu79aslpf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/6rczu79aslpf1/HLSPlaylist.m3u8?a=1760783557%2CYmQwMDBhMGM2ZDNkYjViZTdmNzE5YjBjODI5ZjlhNjcwNzIzYWExZGE1YmViMWRhM2I4ZmUxMWRlYzYzZWMzZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/6rczu79aslpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1nivz2n | /r/LocalLLaMA/comments/1nivz2n/we_got_a_2b_param_model_running_on_iphone_at/ | false | false | 229 | {'enabled': False, 'images': [{'id': 'ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3e48116649a6f0535cd780ac967bb0419a56848', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=216&crop=smart&format=pjpg&auto=webp&s=06280bfeb9dc965cbea240e1f1f165aca8faf9ba', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=320&crop=smart&format=pjpg&auto=webp&s=e5d1fdd6d07aa25a87aded8495fddfd67c443c97', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=640&crop=smart&format=pjpg&auto=webp&s=eb0c0cac73fa8207b227fccd807937c60f7baadd', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=960&crop=smart&format=pjpg&auto=webp&s=74f4908012c68bdf1d9517c47cf227277d6ca04a', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=9512be9dd765c3dd3bb1722e8662c0a3d17bb4a7', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZDZxemk3OWFzbHBmMVMFq2pfv69EmnrpZl789HXOOBvSofKD3EML3NWxX5eD.png?format=pjpg&auto=webp&s=0a5f668e47d1de9c13250366ef235804dd169cc1', 'width': 1920}, 'variants': {}}]} |
Unsloth Dynamic GGUFs on Aider Polyglot | Unsloth Documentation | 1 | 2025-09-16T22:28:37 | https://docs.unsloth.ai/new/unsloth-dynamic-ggufs-on-aider-polyglot | sleepingsysadmin | docs.unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1nivvm0 | false | null | t3_1nivvm0 | /r/LocalLLaMA/comments/1nivvm0/unsloth_dynamic_ggufs_on_aider_polyglot_unsloth/ | false | false | default | 1 | null | |
Anyone heard of Zenith Alpha? | 4 | 2025-09-16T22:21:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nivpb5/anyone_heard_of_zenith_alpha/ | Acrobatic_Initial665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nivpb5 | false | null | t3_1nivpb5 | /r/LocalLLaMA/comments/1nivpb5/anyone_heard_of_zenith_alpha/ | false | false | 4 | null | ||
First AI Agent for DevOps/SRE and Platform Engineering | 0 | 2025-09-16T22:13:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nivin8/first_ai_agent_for_devopssre_and_platform/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nivin8 | false | null | t3_1nivin8 | /r/LocalLLaMA/comments/1nivin8/first_ai_agent_for_devopssre_and_platform/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=108&crop=smart&auto=webp&s=336e39457eedbc20b63f8e8518c1781b7275c968', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=216&crop=smart&auto=webp&s=7e39c16043bb311a765a82bbde38af083a35e3d7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=320&crop=smart&auto=webp&s=7caea79b31a190e9c7d7ee6cb9e1713dbfb33bd0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=640&crop=smart&auto=webp&s=f735a7e4cc82ea38c18d7a874864c6ba86eb4675', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=960&crop=smart&auto=webp&s=7c2f4b082dd13175d2e02cfcb8a9c5eb588905f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?width=1080&crop=smart&auto=webp&s=17b1e6df5f01510c31a02b0d5fe1463c9619f61b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OXxxWdLbVCavelv1Yejmx0Ysto7v5yVqs8-MZbwAMVg.png?auto=webp&s=c5cd54c02366d8647c423913c0597b63d03702dd', 'width': 1200}, 'variants': {}}]} | ||
Best TTS models for text-based emotional control? | 0 | Looking for recent TTS models where you can influence emotion with text prompts (e.g. “speak happily”, “somber tone”). Any recommendations?
| 2025-09-16T22:10:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nivg3j/best_tts_models_for_textbased_emotional_control/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nivg3j | false | null | t3_1nivg3j | /r/LocalLLaMA/comments/1nivg3j/best_tts_models_for_textbased_emotional_control/ | false | false | self | 0 | null |
Local Image Generators for AMD? | 5 | What Local AI can I use with AMD? I got the 7900 XTX with 24GB of VRAM and I'd like to find an uncensored AI model I can get running on my PC | 2025-09-16T21:38:40 | https://www.reddit.com/r/LocalLLaMA/comments/1niumtq/local_image_generators_for_amd/ | WigWoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niumtq | false | null | t3_1niumtq | /r/LocalLLaMA/comments/1niumtq/local_image_generators_for_amd/ | false | false | self | 5 | null |
What model(s) are likely being used by pitch.com to generate presentations? | 0 | I was wondering how this would be done. There are of course image-generation models, text models, etc., and I think they may have had to string a few of these together, but I couldn't think of a pipeline of existing models that would work.
Or is it possible they just built an end to end text to slides model? | 2025-09-16T21:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1niujre/what_models_are_likely_being_used_by_pitchcom_to/ | boringblobking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niujre | false | null | t3_1niujre | /r/LocalLLaMA/comments/1niujre/what_models_are_likely_being_used_by_pitchcom_to/ | false | false | self | 0 | null |
local vs cloud for ai? | 2 | Howdy,
So I ran a couple of models locally using Ollama. It was fun to have things running on my PC, and I've kept seeing more folks talking about running local too. But here is my take: to run models locally (the good stuff), you need the hardware.
Cloud is still the easy button, especially with MCP servers; it's super convenient to have things running without worrying too much. But then cost and privacy will always be inevitable concerns.
So I guess I'm wondering: is local just for personal tinkering, or is it crucial to understand clustering/infra stuff to make it scale?
Appreciate your opinions/experiences! | 2025-09-16T21:29:12 | https://www.reddit.com/r/LocalLLaMA/comments/1niue65/local_vs_cloud_for_ai/ | toolhouseai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niue65 | false | null | t3_1niue65 | /r/LocalLLaMA/comments/1niue65/local_vs_cloud_for_ai/ | false | false | self | 2 | null |
benchmark rankings | 2 | I was trying to understand the performance of models + speed in relation to certain benchmarks, and came across these rankings, which seem pretty good. They have a deep dive on how they arrived at these on a blog: [https://brokk.ai/power-ranking](https://brokk.ai/power-ranking) | 2025-09-16T21:21:30 | https://www.reddit.com/r/LocalLLaMA/comments/1niu72a/benchmark_rankings/ | Basic_Ingenuity_8084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niu72a | false | null | t3_1niu72a | /r/LocalLLaMA/comments/1niu72a/benchmark_rankings/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=108&crop=smart&auto=webp&s=8ddd0009ff8428f66396a97f4c1d1c8de9c7be98', 'width': 108}, {'height': 125, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=216&crop=smart&auto=webp&s=1443023ee8e8e6b0f6c0eb54bb59607ff6b4f4bb', 'width': 216}, {'height': 185, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=320&crop=smart&auto=webp&s=2d65aca1b9bf5a7f3438af889c1ebaea9155939d', 'width': 320}, {'height': 371, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=640&crop=smart&auto=webp&s=00fc58d8590fb833a39e70961d2fba37eab593c1', 'width': 640}, {'height': 556, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=960&crop=smart&auto=webp&s=4678e60908faa215482491f2e7befbeadf806fba', 'width': 960}, {'height': 626, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?width=1080&crop=smart&auto=webp&s=a7322d2f58b106b1842d300adcd8803888273a8d', 'width': 1080}], 'source': {'height': 1172, 'url': 'https://external-preview.redd.it/XmbB_Ggpaw13Ih4SiltMb7pnW0SotFk3Ey3eZ2fkjxY.png?auto=webp&s=b3e0da788f28104ac8d001c910809ca83d880a5b', 'width': 2020}, 'variants': {}}]}
Granite 4 release today? Collection updated with 8 private repos. | 170 | 2025-09-16T20:40:36 | ironwroth | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nit4v6 | false | null | t3_1nit4v6 | /r/LocalLLaMA/comments/1nit4v6/granite_4_release_today_collection_updated_with_8/ | false | false | default | 170 | {'enabled': True, 'images': [{'id': 'ihwp4dy78lpf1', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=108&crop=smart&auto=webp&s=07ca05d11d10833a3f95df4cb9b586166232eeea', 'width': 108}, {'height': 49, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=216&crop=smart&auto=webp&s=0ba6cb4fba07bcbd4f057afdd33e757a882add75', 'width': 216}, {'height': 72, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=320&crop=smart&auto=webp&s=c48f00a60e49c22de619db18e408960440230857', 'width': 320}, {'height': 145, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=640&crop=smart&auto=webp&s=310d508b27499694f225a40decad5893a979dfda', 'width': 640}, {'height': 217, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=960&crop=smart&auto=webp&s=0183094590ed64fdd9607f8a8f80c350a6d76d81', 'width': 960}, {'height': 245, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?width=1080&crop=smart&auto=webp&s=d06ddaccf5fbcd5e4630ee85e7a0da7b63335682', 'width': 1080}], 'source': {'height': 272, 'url': 'https://preview.redd.it/ihwp4dy78lpf1.png?auto=webp&s=d5b5216fe4088620b6daaf3fe30c9a01d566fa5a', 'width': 1198}, 'variants': {}}]} | ||
Top LLM models all within margin of error | 0 | Where is the hype coming from? | 2025-09-16T20:19:10 | One_Long_996 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1niskaz | false | null | t3_1niskaz | /r/LocalLLaMA/comments/1niskaz/top_llm_models_all_within_margin_of_error/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'dd1dkncw4lpf1', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=108&crop=smart&auto=webp&s=dd9ba5a5e96bdc921a95c3971cfa88abce247ac2', 'width': 108}, {'height': 259, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=216&crop=smart&auto=webp&s=eaabefa3c51129d2dc02a9fe4261425980b4b3e3', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=320&crop=smart&auto=webp&s=acbf8288bb4014da13236bd36817468b992ad952', 'width': 320}, {'height': 767, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=640&crop=smart&auto=webp&s=35d43a6aa016f0425e20a285f21a10afa3f198e2', 'width': 640}, {'height': 1151, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=960&crop=smart&auto=webp&s=31707b142d493bb7e57ac6d1c75e706a1fccdaaf', 'width': 960}, {'height': 1295, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?width=1080&crop=smart&auto=webp&s=15f49495fd6a04f37c5341ef7471fd68135a9499', 'width': 1080}], 'source': {'height': 1552, 'url': 'https://preview.redd.it/dd1dkncw4lpf1.png?auto=webp&s=95355c44c2124b202f97783b0b88271915b6469d', 'width': 1294}, 'variants': {}}]} | |
Alibaba Tongyi released open-source (Deep Research) Web Agent | 100 | Hugging Face link to weights : https://huggingface.co/Alibaba-NLP/Tongyi-DeepResearch-30B-A3B | 2025-09-16T20:02:18 | https://x.com/Ali_TongyiLab/status/1967988004179546451?s=19 | kahlil29 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1nis417 | false | null | t3_1nis417 | /r/LocalLLaMA/comments/1nis417/alibaba_tongyi_released_opensource_deep_research/ | false | false | default | 100 | null |
Alibaba-NLP/Tongyi-DeepResearch-30B-A3B · Hugging Face | 146 | 2025-09-16T19:59:13 | https://huggingface.co/Alibaba-NLP/Tongyi-DeepResearch-30B-A3B | Few_Painter_5588 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nis0za | false | null | t3_1nis0za | /r/LocalLLaMA/comments/1nis0za/alibabanlptongyideepresearch30ba3b_hugging_face/ | false | false | default | 146 | {'enabled': False, 'images': [{'id': 'Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=108&crop=smart&auto=webp&s=793b1e603d1526cb884bba6d676b8edb70ab0435', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=216&crop=smart&auto=webp&s=4b1bb355d26871a027a4b7f517c9daae92269d18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=320&crop=smart&auto=webp&s=6542fe4f2a0277bee5ba1a2073f1c6909aad29b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=640&crop=smart&auto=webp&s=f864231608ef7f1e9dcabddf98002a4ed64cb7df', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=960&crop=smart&auto=webp&s=c3628ef9cf49a1fbcc1f8e6c9bea575baf0f8278', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?width=1080&crop=smart&auto=webp&s=4389473c74e178da7cd3fe44407b5542a943bb3a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Br8d0DO81Y2NXG6ObCzOPqMnemqzEFVfpKOIf-1Xb3Y.png?auto=webp&s=1597d95db4c4608ef72c1375fef4385657c3fea4', 'width': 1200}, 'variants': {}}]} | |
Advice on moving from first GPU upgrade to dual-GPU local AI setup | 2 | Hey all,
A couple of weeks ago I posted here about advice on a first GPU upgrade. Based on the replies, I went with a **3060 12GB**, which is now running in my daily driver PC. The difference has been significant — even though it’s a more modest card, it’s already been a great step up.
That said, I think I’ve started sliding down the slippery slope…
I’ve come across a PC for sale locally that I’m considering picking up and turning into a stand-alone AI machine. Specs are:
* Ryzen 9 3900X
* X570 board
* RTX 3080 12GB
* 750W Gold PSU that looks just about capable of covering both cards (3060 + 3080)
* Plus other parts (case, RAM, storage, AIO etc.)
The asking price is **£800**, which from a parts perspective seems fairly reasonable.
My question is: if I did go for it and ran both GPUs together, what’s the best way to approach setting it up for local models? In particular:
* Any pitfalls with running a 3060 and 3080 together in the same box?
* Tips on getting the most out of a dual-GPU setup for local AI workloads?
* Whether £800 for that system seems like good value compared to alternatives?
Any advice or lessons learned would be really welcome.
Thanks
Mike | 2025-09-16T19:57:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nirztz/advice_on_moving_from_first_gpu_upgrade_to/ | CountDuckulla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nirztz | false | null | t3_1nirztz | /r/LocalLLaMA/comments/1nirztz/advice_on_moving_from_first_gpu_upgrade_to/ | false | false | self | 2 | null |
Roo Code and Qwen3 Next is Not Impressive | 19 | Hi All,
I wanted to share my experience with the thinking and instruct versions of the new Qwen3 Next model. Both run impressively well on my computer, delivering fast and reasonably accurate responses outside the Roo code development environment.
However, their performance in the Roo code environment is less consistent. While both models handle tool calling effectively, the instruct model struggles with fixing issues, and the thinking model takes excessively long to process solutions, making other models like GLM Air more reliable in these cases.
Despite these challenges, I’m optimistic about the model’s potential, especially given its longer context window. I’m eager for the GGUF releases and believe increasing the active parameters could enhance accuracy.
Thanks for reading! I’d love to hear your thoughts. And if you recommend another set of tools to use with Qwen3 Next other than Roo, please do share. | 2025-09-16T19:42:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nirkn1/roo_code_and_qwen3_next_is_not_impressive/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nirkn1 | false | null | t3_1nirkn1 | /r/LocalLLaMA/comments/1nirkn1/roo_code_and_qwen3_next_is_not_impressive/ | false | false | self | 19 | null |
Transformation and AI | 1 | Is AI a useful tool for promoting cybersecurity education?
Is it being used? If so, how?
There is good use and bad use.
Good use is when it guides you, explains difficult concepts, and helps you find solutions more quickly and reliably.
There is also bad use. Bad use is when you copy commands and simply use AI instead of your brain.
It is a fact that AI is transforming many industries, including cybersecurity.
What is your opinion? Is AI used to help teach cybersecurity? | 2025-09-16T19:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nir9wx/transformation_and_ai/ | Elliot-1988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nir9wx | false | null | t3_1nir9wx | /r/LocalLLaMA/comments/1nir9wx/transformation_and_ai/ | false | false | self | 1 | null |
I am really impressed with the quality of seedream 4.0 image quality | 0 | It's on a cinematic level. It looks like a shot from a real movie | 2025-09-16T19:13:22 | Previous-Speed1790 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1niqsir | false | null | t3_1niqsir | /r/LocalLLaMA/comments/1niqsir/i_am_really_impressed_with_the_quality_of/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'RhIDNELYoo-2ErcsjNvgoSq4z8QaQw8tNSOLH0N2ong', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=108&crop=smart&auto=webp&s=8dcebdec7ab1eab527f97741fda487176aae14cf', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=216&crop=smart&auto=webp&s=c9cb0a860e67be387031a59381caac8c95a49ab1', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=320&crop=smart&auto=webp&s=df4929d8617b5d6ce4bb0e8a3bf09c4261a6d7a4', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=640&crop=smart&auto=webp&s=a3a5ea0ce730f38434f0fd4af9c621ae7459b92c', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=960&crop=smart&auto=webp&s=d63109333d8bb78522692cbd956540ed20176343', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?width=1080&crop=smart&auto=webp&s=93d9a6c6d61dc4d5f5141b49582100c1c140ce2c', 'width': 1080}], 'source': {'height': 2560, 'url': 'https://preview.redd.it/lfhcyww0tkpf1.jpeg?auto=webp&s=ea218e7d7251cd396ffe261b6e6da09c51cbddb7', 'width': 1440}, 'variants': {}}]} | ||
Run Apple Foundation Model, MLX, and OpenAI LLMs using a single API | 1 | [removed] | 2025-09-16T19:05:29 | https://www.reddit.com/r/LocalLLaMA/comments/1niqkzl/run_apple_foundation_model_mlx_and_openai_llms/ | Affectionate-Fix6472 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niqkzl | false | null | t3_1niqkzl | /r/LocalLLaMA/comments/1niqkzl/run_apple_foundation_model_mlx_and_openai_llms/ | false | false | self | 1 | null |
Transformer Lab now supports training text-to-speech (TTS) models | 23 | https://i.redd.it/s21p6omknkpf1.gif
We just shipped text to speech (TTS) support in Transformer Lab.
That means you can:
* Fine-tune open source TTS models on your own dataset
* Clone a voice in one-shot from just a single reference sample
* Train & generate speech locally on NVIDIA and AMD GPUs, or generate on Apple Silicon
* Use the same UI you’re already using for LLMs and diffusion model trains
If you’ve been curious about training speech models locally, this makes it easier to get started.
Here’s how to get started along with easy to follow examples: [https://transformerlab.ai/blog/text-to-speech-support](https://transformerlab.ai/blog/text-to-speech-support)
Please let me know if you have any questions!
| 2025-09-16T18:45:02 | https://www.reddit.com/r/LocalLLaMA/comments/1niq0t6/transformer_lab_now_supports_training/ | OriginalSpread3100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niq0t6 | false | null | t3_1niq0t6 | /r/LocalLLaMA/comments/1niq0t6/transformer_lab_now_supports_training/ | false | false | 23 | null | |
Did you ever regret majoring in Computer Science, given how good AI is now? | 0 | If I can choose again, I would study Electronic Engineering or Physics rather than Computer Science now. | 2025-09-16T18:42:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nipy59/did_you_ever_regret_majoring_in_computer_science/ | kitgary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nipy59 | false | null | t3_1nipy59 | /r/LocalLLaMA/comments/1nipy59/did_you_ever_regret_majoring_in_computer_science/ | false | false | self | 0 | null |
Fine-tuning Small Language models/ qwen2.5 0.5 B | 38 | I've been up all week trying to fine-tune a small language model using Unsloth, and I've experimented with RAG. I generated around 1,500 domain-specific questions, but my LLM is still hallucinating. Below is a summary of my training setup and data distribution:
* **Epochs**: 20 (training stops around epoch 11)
* **Batch size**: 8
* **Learning rate**: 1e-4
* **Warmup ratio**: 0.5
* **Max sequence length**: 4096
* **LoRA rank**: 32
* **LoRA alpha**: 16
* **Data**: Includes both positive and negative QA-style examples
Despite this setup, hallucinations persist the model dont even know what it was finetuned on. Can anyone help me understand what I might be doing wrong? | 2025-09-16T18:33:01 | Mysterious_Ad_3788 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nipox6 | false | null | t3_1nipox6 | /r/LocalLLaMA/comments/1nipox6/finetuning_small_language_models_qwen25_05_b/ | false | false | default | 38 | {'enabled': True, 'images': [{'id': 'hoplx2colkpf1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=108&crop=smart&auto=webp&s=2a0d61c658d8fa99636942c7ebfa282850f4803c', 'width': 108}, {'height': 160, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=216&crop=smart&auto=webp&s=cc7c5bbbc72d61088037599345e0512df7c4edac', 'width': 216}, {'height': 237, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=320&crop=smart&auto=webp&s=27ffbb578a92f925e5cbec2743cfa4911bdc424a', 'width': 320}, {'height': 474, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=640&crop=smart&auto=webp&s=35a01423a78f194d98f7f162fddd9b55eec0fee6', 'width': 640}, {'height': 712, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=960&crop=smart&auto=webp&s=679dca897ce4cd18353ca11461d15d359d409dd0', 'width': 960}, {'height': 801, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?width=1080&crop=smart&auto=webp&s=d10b8bd99c2c53969001c9c688411cc3664adb97', 'width': 1080}], 'source': {'height': 859, 'url': 'https://preview.redd.it/hoplx2colkpf1.png?auto=webp&s=e051e99f44770897e859e253d6402da5ab712e67', 'width': 1158}, 'variants': {}}]} | |
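One frequent cause of the symptom described above, where the model seems not to know what it was fine-tuned on, is a mismatch between the serialized training text and the chat template the model expects at inference. As an illustration only (not the poster's actual pipeline), here is a minimal sketch of rendering one QA pair into the ChatML format Qwen2.5 models use; the example question, answer, and system prompt are made up:

```python
# Minimal sketch: render one QA pair as a ChatML training example, the chat
# format Qwen2.5 models expect. The sample question/answer below is invented
# purely for illustration; a real run would loop over all ~1,500 pairs.

def format_chatml(question, answer, system="You are a helpful assistant."):
    """Serialize a single QA pair into one ChatML-formatted training string."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{question}<|im_end|>\n"
        f"<|im_start|>assistant\n{answer}<|im_end|>\n"
    )

sample = format_chatml("What is the return policy?",
                       "Returns are accepted within 30 days.")
print(sample)
```

If the fine-tuning data was serialized differently from the template the inference stack applies, the model effectively never sees its own training distribution at test time, which can look exactly like hallucination.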
Ktransformers now supports qwen3-next | 61 | This was a few days ago but I haven't seen it mentioned here so I figured I'd post it. They claim 6GB of vram usage with 320GB of system memory. Hopefully in the future the system memory requirements can be brought down if they support quantized variants.
I think this could be the ideal way to run it on low vram systems in the short term before llamacpp gets support. | 2025-09-16T18:29:30 | https://github.com/kvcache-ai/ktransformers/blob/main/doc/en/Qwen3-Next.md | Betadoggo_ | github.com | 1970-01-01T00:00:00 | 0 | {} | 1nipldx | false | null | t3_1nipldx | /r/LocalLLaMA/comments/1nipldx/ktransformers_now_supports_qwen3next/ | false | false | 61 | {'enabled': False, 'images': [{'id': 'GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=108&crop=smart&auto=webp&s=ca3761e90f5cf24cce94a648a715b7469b722cf0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=216&crop=smart&auto=webp&s=dcebd5711bf5c61e2daddc009e4bb25b8cfc35da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=320&crop=smart&auto=webp&s=09f6e3b83d9cee3e0317e27dfd4f9f3086f30c75', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=640&crop=smart&auto=webp&s=45539a92e2924b07b2035937feb0f51a09d5cc5e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=960&crop=smart&auto=webp&s=e3bc89625b07dc5d5abc145d909abfbefe430d2c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?width=1080&crop=smart&auto=webp&s=6f14f18067b8e77c00c9257596a17655f7645de2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GCXHZq6UgvHr-07Ef7MzKApM7hyb5aZQRF1Wd5lmCZ0.png?auto=webp&s=44c89078271d2800889265e33e317d562139194a', 'width': 1200}, 'variants': {}}]} | |
Tell me an LLM model you need and I run it for free | 0 | We're helping data centers utilize their unused GPUs. Currently, there is a small cluster of RTX 4090 and MI300X cards that are mainly sitting idle, so I haven't come up with a better idea than just running some models on them and offering them for free or at half price.
Let me know a model that fits into 96GB VRAM for RTX 4090 - **we'll run it for free**. Currently, we're running [https://console.cloudrift.ai/inference?modelId=meta-llama%2FMeta-Llama-3.1-70B-Instruct-FP8](https://console.cloudrift.ai/inference?modelId=meta-llama%2FMeta-Llama-3.1-70B-Instruct-FP8)
Let me know a model that fits into 1536GB VRAM for MI300X - **we'll run it for half the price** of the cheapest provider on OpenRouter.
We're looking for someone who can **utilize the capacity**, like if you need to process a massive dataset or run some other heavy-duty workload. This way, we'll test the service under the load. Additionally, it takes time and effort to serve another model, so switching them often is a pain. | 2025-09-16T17:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nionwk/tell_me_an_llm_model_you_need_and_i_run_it_for/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nionwk | false | null | t3_1nionwk | /r/LocalLLaMA/comments/1nionwk/tell_me_an_llm_model_you_need_and_i_run_it_for/ | false | false | self | 0 | null |
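For anyone sizing a request against the VRAM budgets mentioned above, a rough back-of-the-envelope fit check can be sketched as follows; the 1.2x overhead factor for KV cache and activations is an assumption, not a measured number:

```python
# Rough fit check: weight memory is approximately params (in billions) times
# bytes per parameter, in GB; multiply by an assumed overhead factor for
# KV cache and activations, then compare against the VRAM budget.

def fits_in_vram(params_billions, bytes_per_param, vram_gb, overhead=1.2):
    weight_gb = params_billions * bytes_per_param
    return weight_gb * overhead <= vram_gb

# Llama 3.1 70B at FP8 (~1 byte/param) against the 96 GB 4090 cluster:
print(fits_in_vram(70, 1.0, 96))  # 70 GB * 1.2 = 84 GB -> True
```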
DeepSeek 3.1 from Unsloth performance on Apple Silicon | 2 | Hello! This post is to solicit feedback from the apple silicon users about DS 3.1 various quants performance. First of all, thank you to Unsloth for making the awesome quants; and, thank you to DeepSeek for training such an amazing model. There are so many good models these days, but this one definitely stands out, making me feel like I am running Claude (from back when it was cool, 3.7) at home ( on a Mac).
Questions for the community:
\- What's your favorite DS quant, why, and what's the speed that you are seeing on apple silicon?
\- There's most likely(?) a compromise between speed and quality, among the quants. What quant did you settle on and why? If you don't mind mentioning your hardware, that would be appreciated. | 2025-09-16T17:30:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ninyfh/deepseek_31_from_unsloth_performance_on_apple/ | Southern_Sun_2106 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ninyfh | false | null | t3_1ninyfh | /r/LocalLLaMA/comments/1ninyfh/deepseek_31_from_unsloth_performance_on_apple/ | false | false | self | 2 | null |
Has anyone tried Intel/Qwen3-Next-80B-A3B-Instruct-int4-mixed-AutoRound? | 18 | When can we expect llama.cpp support for this model? | 2025-09-16T17:20:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ninoo3/has_anyone_tried/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ninoo3 | false | null | t3_1ninoo3 | /r/LocalLLaMA/comments/1ninoo3/has_anyone_tried/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=108&crop=smart&auto=webp&s=4cf8e9cb92d6b274136b630cc06c319c2324d053', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=216&crop=smart&auto=webp&s=870f414e1180da1ccbfe339b48a4767eb69cb42e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=320&crop=smart&auto=webp&s=2b52c7f943158beccd3bbb9a054a8d96a057b2a0', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=640&crop=smart&auto=webp&s=8d875eb97706d31acfe4b37fd932c04914d55739', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=960&crop=smart&auto=webp&s=ec3e25eac9457385eac97be0b8867c62a1e0a43a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?width=1080&crop=smart&auto=webp&s=4b073d882e41b1401a811663b333c9cd4ee1490f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/tlnhZBEud4iKAiUjrnlOpQnJooRD9gYsADuWZoAdgDI.png?auto=webp&s=1d3818076504ffa37466475332322cfcc4b18eab', 'width': 1200}, 'variants': {}}]} |
FULL Lovable System Prompt and Internal Tools [UPDATED] | 10 | Latest update: 16/09/2025
I’ve published the FULL UPDATED Lovable System prompt and Internal tools. Over 700+ lines.
You can check it out here: https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools | 2025-09-16T17:12:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ningjq/full_lovable_system_prompt_and_internal_tools/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ningjq | false | null | t3_1ningjq | /r/LocalLLaMA/comments/1ningjq/full_lovable_system_prompt_and_internal_tools/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=108&crop=smart&auto=webp&s=372cb3d76463153dbc1fd103e5b96643b5bd9eef', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=216&crop=smart&auto=webp&s=8aa580e6020d09554be9cb403b7346a4516d829c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=320&crop=smart&auto=webp&s=af173126ace0563074763294c7e7385bcc1e2669', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=640&crop=smart&auto=webp&s=5513514c4a5b69ff07d60c1cdf38ea88652794b5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=960&crop=smart&auto=webp&s=d88604839f95f2c7feaa5dfb02645dd97f71e587', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?width=1080&crop=smart&auto=webp&s=291d66281a2bcc5d63d8d9d95e362358a8468e9a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/N0ry00nVp4bRQvXaCNMQGaXfS-6QDKNNu5IK9ew36wg.png?auto=webp&s=6dbf9e294fbb34f6c7864f0ba8cff5451e1b2d01', 'width': 1200}, 'variants': {}}]} |
Whining about tariffs | 0 | So I ordered a MCIO to PCIe (gen5) adapter and the cables from Germany for a little over $200. Since I can't find anything cheaper that passes the sniff test, I pulled the trigger.
Just got the bill for another $180 on top of it for tariffs... apparently if the board was originally made in China, then it gets hit with the full tax?
Anyway, mostly whining, but also curious if anyone knows of any options to buy MCIO to PCI gen5 stuff in the states? | 2025-09-16T17:11:03 | https://www.reddit.com/r/LocalLLaMA/comments/1ninf2x/whining_about_tariffs/ | Mass2018 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ninf2x | false | null | t3_1ninf2x | /r/LocalLLaMA/comments/1ninf2x/whining_about_tariffs/ | false | false | self | 0 | null |
Radeon 8060s | 5 | What am I missing with these AMD iGPUs? For the price-to-VRAM ratio (up to 96 GB), why are they not “top dog” in the local LLM world? Are they pretty limited compared to dGPUs?
I’m pretty tempted to pick up something like this.
https://www.corsair.com/us/en/p/gaming-computers/cs-9080002-na/corsair-ai-workstation-300-amd-ryzen-ai-max-395-processor-amd-radeon-8060s-igpu-up-to-96gb-vram-128gb-lpddr5x-memory-1tb-m2-ssd-win11-home-cs-9080002-na#tab-techspecs | 2025-09-16T17:10:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nineeh/radeon_8060s/ | animal_hoarder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nineeh | false | null | t3_1nineeh | /r/LocalLLaMA/comments/1nineeh/radeon_8060s/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'JVFAZ9kL9Ds-ucCwvXP2aT2-6WBtqddrD7cWFxdGoos', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/JVFAZ9kL9Ds-ucCwvXP2aT2-6WBtqddrD7cWFxdGoos.png?auto=webp&s=dfeb4f39b18cafe5b3319c702d5a1325a21394ef', 'width': 96}, 'variants': {}}]} |
Purchase RTX Pro 6000 Workstation around Los Angeles | 1 | Any place around Los Angeles have the RTX Pro 6000 Workstation GPU in stock? | 2025-09-16T16:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nin35l/purchase_rtx_pro_6000_workstation_around_los/ | yellow_golf_ball | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nin35l | false | null | t3_1nin35l | /r/LocalLLaMA/comments/1nin35l/purchase_rtx_pro_6000_workstation_around_los/ | false | false | self | 1 | null |
Question about running AI locally and how good it is compared to the big tech stuff? | 5 | Unfortunately without people being paid to work on it full time for development and being run on large server farms you can't get as good as bjg tech and I know that.
That said I am wondering for role play and or image generation are there any models that are good enough that I could run (9070xt and am curious better consumer hardware can run this) an LLM that just has context for role play, instead of a specialized AI where I download stuff per character can I just use a general LLM and say do you know x character and add in another later down the line and it knows the character and franchise just because that information was in its training set? Like how if I ask GPT about any franchise it will know it and the characters, it be well enough that if its not to censored it could even do great role play as them. Something like that for local?
Alternatively for image generation and I'm less sure this exists (but maybe if you somehow merge models...maybe or something idk?) Is there a way to talk to an LLM say what I want to create, have it ask questions before creation or during edits and spit out the images I want or the edits. Again the same way that if I asked GPT to create an image and then asked it to edit the image it would ask for spesifics, do a few questions to clarify or even suggest things and then just make the image and edit. Or do I have to learn a UI still for images and edits, get no suggestions or clarification questions and just have it spit out what it thinks it understands from the prompt? | 2025-09-16T16:47:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nims0k/question_about_running_ai_locally_and_how_good_it/ | newbuildertfb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nims0k | false | null | t3_1nims0k | /r/LocalLLaMA/comments/1nims0k/question_about_running_ai_locally_and_how_good_it/ | false | false | self | 5 | null |
Perplexity ai alternative | 2 | Hello, I just wanted to ask: if I make a Perplexity AI alternative, will it scale or be successful? | 2025-09-16T16:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nimrij/perplexity_ai_alternative/ | ShoulderTough8758 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nimrij | false | null | t3_1nimrij | /r/LocalLLaMA/comments/1nimrij/perplexity_ai_alternative/ | false | false | self | 2 | null |
Anyone use free API tier in google gemini for bulk tasks? | 0 | I run Qwen3 30B locally with a smallish context window. I'm trying to figure out how best to use the 100/250 free calls to Gemini Pro/Flash per day. It doesn't seem like these calls are limited by how much is in the context window, so you could stuff 1M tokens of context and get up to 64k tokens back. Anyone do this? | 2025-09-16T16:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1nimfrt/anyone_use_free_api_tier_in_google_gemini_for/ | ChainOfThot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nimfrt | false | null | t3_1nimfrt | /r/LocalLLaMA/comments/1nimfrt/anyone_use_free_api_tier_in_google_gemini_for/ | false | false | self | 0 | null |
Are we headed toward a world of cheap subsidized AI, expensive clean water and expensive local electricity? | 0 | Would a good reason to use local AI be simply to show that the desire for AI is not that high? I was thinking about how electricity prices have gone up since AI datacenters started using more electricity, how clean water is now being used for cooling, and how both will become more of a luxury as time goes on. I saw a [topic](https://old.reddit.com/r/LocalLLaMA/comments/1nhx3jp/whats_the_most_costeffective_and_best_ai_model/) about when to use local AI, and basically it's not a good idea compared with subsidized AI, but at what cost? | 2025-09-16T16:04:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nilmr7/are_we_headed_toward_a_world_of_cheap_subsidized/ | Beestinge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nilmr7 | false | null | t3_1nilmr7 | /r/LocalLLaMA/comments/1nilmr7/are_we_headed_toward_a_world_of_cheap_subsidized/ | false | false | self | 0 | null |
Parallelization, Reliability, DevEx for AI Workflows | 0 | If you are running AI agents on large workloads or to run long running flows, Exosphere orchestrates any agent to unlock scale effortlessly. Watch the demo in comments | 2025-09-16T15:56:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nileer/parallelization_reliability_devex_for_ai_workflows/ | jain-nivedit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nileer | false | null | t3_1nileer | /r/LocalLLaMA/comments/1nileer/parallelization_reliability_devex_for_ai_workflows/ | false | false | self | 0 | null |
VoxCPM-0.5B | 63 | > VoxCPM is a novel tokenizer-free Text-to-Speech (TTS) system that redefines realism in speech synthesis. By modeling speech in a continuous space, it overcomes the limitations of discrete tokenization and enables two flagship capabilities: context-aware speech generation and true-to-life zero-shot voice cloning.
Supports both Regular text and Phoneme input. Seems promising! | 2025-09-16T15:34:15 | https://huggingface.co/openbmb/VoxCPM-0.5B | k-en | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1niktfz | false | null | t3_1niktfz | /r/LocalLLaMA/comments/1niktfz/voxcpm05b/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=108&crop=smart&auto=webp&s=f4de9044cf40d797de876cb88486cf64eeb751b2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=216&crop=smart&auto=webp&s=6cc73426288ce8f26d075271d1c82226fafaf5a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=320&crop=smart&auto=webp&s=b89517445ea0178739330ac6d93e413de703f8b9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=640&crop=smart&auto=webp&s=c21b8a7fb420d7443519db438480fdc9bd7c71a4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=960&crop=smart&auto=webp&s=4605695ffd89b219463460aae2356d6717006600', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?width=1080&crop=smart&auto=webp&s=0635b1fc4300acb882f8303ac96987fdf349b275', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/r3qnehuYhIo41bAc9p8n4efqIezTbTJqzszutOT9598.png?auto=webp&s=9cf3fd16647e9f410c5cf3b868b320962ecd69d9', 'width': 1200}, 'variants': {}}]} |
RTX 6000 Pro Workstation sold out, can I use server edition instead? | 7 | I am building a server for running local LLM. The idea was to get a single RTX 6000 Pro Workstation. But it appears to be completely sold out in my area with uncertain delivery times of at least 1-2 months. The Max Q version is available, but I want the full version. The server edition also appears to be available, but that one has no fans. My server is a rack system, but home build and 100% not with enough airflow to passively cool a card like that. But I am good with a 3D printer and maybe I could design an adapter to fit a 120 fan to cool it? Anyone done this before? Will I get in trouble? What happens if the cooling is insufficient? What about the power connector - is that standard? | 2025-09-16T15:30:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nikq42/rtx_6000_pro_workstation_sold_out_can_i_use/ | Baldur-Norddahl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nikq42 | false | null | t3_1nikq42 | /r/LocalLLaMA/comments/1nikq42/rtx_6000_pro_workstation_sold_out_can_i_use/ | false | false | self | 7 | null |
Is there a way to FINETUNE a TTS model LOCALLY to learn sound effects? | 3 | Imagine entering the text “Hey, how are you? <leaves_rustling> ….what was that?!” And the model can output it, leaves rustling included.
I have audio clips of the sounds I want to use and transcriptions of every sound and time.
So far the options I’ve seen that can run on a 3090 are:
Bark - but it only allows inference, NOT finetuning/training. If it doesn’t know the sound, it can’t make it.
XTTSv2 - but I think it only does voices. Has anyone tried doing it with labelled sound effects like this? Does it work?
If not, does anyone have any estimates on how long something like this would take to make from scratch locally? Claude says about 2-4 weeks. But is that even possible on a 3090?
| 2025-09-16T15:14:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nik9qe/is_there_a_way_to_finetune_a_tts_model_locally_to/ | Borkato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nik9qe | false | null | t3_1nik9qe | /r/LocalLLaMA/comments/1nik9qe/is_there_a_way_to_finetune_a_tts_model_locally_to/ | false | false | self | 3 | null |
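Whatever TTS backend ends up being used, the inline-tag input format described in the post above needs a preprocessing step that splits mixed text into speech spans and sound-effect tokens, so each can be routed to its own synthesis path. A minimal sketch of that step (the tag names are hypothetical):

```python
import re

# Minimal sketch: split text like
#   "Hey, how are you? <leaves_rustling> ...what was that?!"
# into (kind, content) segments, so speech spans and sound-effect tags can be
# sent to different synthesis paths. The <leaves_rustling> tag is hypothetical.

TAG = re.compile(r"<([a-z_]+)>")

def segment(text):
    parts, pos = [], 0
    for m in TAG.finditer(text):
        speech = text[pos:m.start()].strip()
        if speech:
            parts.append(("speech", speech))
        parts.append(("sfx", m.group(1)))
        pos = m.end()
    tail = text[pos:].strip()
    if tail:
        parts.append(("speech", tail))
    return parts

print(segment("Hey, how are you? <leaves_rustling> ...what was that?!"))
```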
Can you tell I like Qwen3 Coder | 0 | 2025-09-16T15:08:27 | spacespacespapce | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nik45w | false | null | t3_1nik45w | /r/LocalLLaMA/comments/1nik45w/can_you_tell_i_like_qwen3_coder/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'karllt9hljpf1', 'resolutions': [{'height': 40, 'url': 'https://preview.redd.it/karllt9hljpf1.png?width=108&crop=smart&auto=webp&s=b96cf96ef2d063753da87437be0d6fa73d6b6090', 'width': 108}, {'height': 81, 'url': 'https://preview.redd.it/karllt9hljpf1.png?width=216&crop=smart&auto=webp&s=a48799aae1b77febf19e841b78c56d7eba10ec69', 'width': 216}, {'height': 120, 'url': 'https://preview.redd.it/karllt9hljpf1.png?width=320&crop=smart&auto=webp&s=8808cd388cbbceb5788d2b9b11d0ba217adaa30c', 'width': 320}, {'height': 241, 'url': 'https://preview.redd.it/karllt9hljpf1.png?width=640&crop=smart&auto=webp&s=2aac56b3aaddceac3992042b461adaf0b34e5a46', 'width': 640}], 'source': {'height': 306, 'url': 'https://preview.redd.it/karllt9hljpf1.png?auto=webp&s=3a45e9960163158f4d1d1c7873fe8a1bc78e306c', 'width': 812}, 'variants': {}}]} | ||
Lightweight chat web UI that supports on-disk storage and can hook to llama.cpp | 7 | Hey all! What options exist for a locally running web UI that can integrate with llama.cpp's API to provide a chat interface and store the conversations in a local database? llama.cpp's web UI is nice and simple, but it only stores data in the browser using IndexedDB. I also looked at:
* chatbox: only works with ollama
* Open WebUI: very heavyweight, difficult to maintain and deploy
* LibreChat: doesn't seem to support llama.cpp
* LMStudio: desktop app, doesn't run a web interface
* text-generation-webui (oobabooga): the docs leave a lot to be desired
Any other options I missed? Alternatively, if I were to build one myself, are there any LLM chat interface templates that I could reuse? | 2025-09-16T14:48:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nijl9n/lightweight_chat_web_ui_that_supports_ondisk/ | yellow_gravel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nijl9n | false | null | t3_1nijl9n | /r/LocalLLaMA/comments/1nijl9n/lightweight_chat_web_ui_that_supports_ondisk/ | false | false | self | 7 | null |
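For reference, the on-disk storage half of what this post asks for is small enough to sketch with the Python standard library: a SQLite message store wrapped around llama.cpp's OpenAI-compatible /v1/chat/completions endpoint. The endpoint URL below is llama-server's default; everything else is illustrative, not an existing project:

```python
import json
import sqlite3
import urllib.request

# Conversations live in SQLite (swap ":memory:" for a file path to persist);
# completions are forwarded to llama.cpp's llama-server.
DB = sqlite3.connect(":memory:")
DB.execute("CREATE TABLE IF NOT EXISTS messages (conv_id TEXT, role TEXT, content TEXT)")

def save(conv_id, role, content):
    DB.execute("INSERT INTO messages VALUES (?, ?, ?)", (conv_id, role, content))
    DB.commit()

def history(conv_id):
    rows = DB.execute(
        "SELECT role, content FROM messages WHERE conv_id = ? ORDER BY rowid",
        (conv_id,))
    return [{"role": r, "content": c} for r, c in rows]

def ask(conv_id, user_msg, url="http://localhost:8080/v1/chat/completions"):
    """Persist the user turn, query llama-server with full history, persist the reply."""
    save(conv_id, "user", user_msg)
    req = urllib.request.Request(url, json.dumps({"messages": history(conv_id)}).encode(),
                                 {"Content-Type": "application/json"})
    reply = json.load(urllib.request.urlopen(req))["choices"][0]["message"]["content"]
    save(conv_id, "assistant", reply)
    return reply

save("demo", "user", "hello")
print(history("demo"))  # [{'role': 'user', 'content': 'hello'}]
```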
Inference will win ultimately | 106 | Inference is where the real value shows up. It's where models are actually used at scale.
A few reasons why I think this is where the winners will be:
•Hardware is shifting. Morgan Stanley recently noted that more chips will be dedicated to inference than training in the years ahead. The market is already preparing for this transition.
•Open-source is exploding. Meta’s Llama models alone have crossed over a billion downloads. That’s a massive long tail of developers and companies who need efficient ways to serve all kinds of models.
•Agents mean real usage. Training is abstract, inference is what everyday people experience when they use agents, apps, and platforms. That’s where latency, cost, and availability matter.
•Inefficiency is the opportunity. Right now GPUs are underutilized, cold starts are painful, and costs are high. Whoever cracks this at scale , making inference efficient, reliable, and accessible , will capture enormous value.
In short, inference isn’t just a technical detail. It’s where AI meets reality. And that’s why inference will win. | 2025-09-16T14:46:07 | pmv143 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nijikb | false | null | t3_1nijikb | /r/LocalLLaMA/comments/1nijikb/inference_will_win_ultimately/ | false | false | default | 106 | {'enabled': True, 'images': [{'id': 'jp7ada3lhjpf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=108&crop=smart&auto=webp&s=7b58d311c3a877982deaeb00b16ff7ebcc4ee6e7', 'width': 108}, {'height': 117, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=216&crop=smart&auto=webp&s=f96f08b8bdc54e907e4819ab86085a84a18a9d90', 'width': 216}, {'height': 173, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=320&crop=smart&auto=webp&s=d82e522555b5db198ce552f3596920c55fa99f17', 'width': 320}, {'height': 347, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=640&crop=smart&auto=webp&s=2651ab9359b4d75a0e7c1c55003fec8ea92f4fdb', 'width': 640}, {'height': 521, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=960&crop=smart&auto=webp&s=ab806d0cef2a10a0261a11b5ec6663980dd85117', 'width': 960}, {'height': 586, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?width=1080&crop=smart&auto=webp&s=2f8eda3c7acd955b1e996bacdae0eb2623e4166a', 'width': 1080}], 'source': {'height': 652, 'url': 'https://preview.redd.it/jp7ada3lhjpf1.jpeg?auto=webp&s=a36718f7abb4b9371780e3abc48f61a380e595f6', 'width': 1200}, 'variants': {}}]} |