| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Nvidia Rtx Pro 6000 96gb workstation for fine tuning | 6 | Looking to get this for work for training local models. The training data is sensitive, so I would rather keep it local. I would like a pre-built but would build one if it made sense. I have been looking at OriginPC, and the card is significantly cheaper in one of their pre-builds. Anyone have any recommendations on pre-builts and/or parts for building? The one thing I really want is the ability to add another GPU later if needed. I'm also open to other ideas for something better. Looking at a budget of ~$15K (company money :-) ). Thanks. | 2025-09-16T14:43:18 | https://www.reddit.com/r/LocalLLaMA/comments/1nijfv6/nvidia_rtx_pro_6000_96gb_workstation_for_fine/ | Psychological_Ad8426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nijfv6 | false | null | t3_1nijfv6 | /r/LocalLLaMA/comments/1nijfv6/nvidia_rtx_pro_6000_96gb_workstation_for_fine/ | false | false | self | 6 | null |
Can Domain-Specific Pretraining on Proprietary Data Beat GPT-5 or Gemini in Specialized Fields? | 4 | I’m working in a domain that relies heavily on large amounts of non-public, human-generated data. This data uses highly specialized jargon and terminology that current state-of-the-art (SOTA) large language models (LLMs) struggle to interpret correctly. Suppose I take one of the leading open-source LLMs and perform continual pretraining on this raw, domain-specific corpus, followed by generating a small set of question–answer pairs for instruction tuning. In this scenario, could the adapted model realistically outperform cutting-edge general-purpose models like GPT-5 or Gemini within this narrow domain?
What are the main challenges and limitations in this approach—for example, risks of catastrophic forgetting during continual pretraining, the limited effectiveness of synthetic QA data for instruction tuning, scaling issues when compared to the massive pretraining of frontier models, or the difficulty of evaluating “outperformance” in terms of accuracy, reasoning, and robustness?
I've checked previous work, but it compares older models like GPT-3.5 and GPT-4; LLMs have come a long way since then, and I think they are now much harder to beat. | 2025-09-16T14:19:43 | https://www.reddit.com/r/LocalLLaMA/comments/1niit47/can_domainspecific_pretraining_on_proprietary/ | hezarfenserden | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niit47 | false | null | t3_1niit47 | /r/LocalLLaMA/comments/1niit47/can_domainspecific_pretraining_on_proprietary/ | false | false | self | 4 | null |
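To make the continual-pretraining step concrete, here is a minimal sketch, assuming Hugging Face transformers + peft and a plain-text domain corpus; the base model name, file path, and hyperparameters are placeholders, not recommendations:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "meta-llama/Llama-3.1-8B"  # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.pad_token or tok.eos_token  # some tokenizers lack a pad token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes most base weights, which also limits catastrophic forgetting
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

ds = load_dataset("text", data_files="domain_corpus.txt")["train"]
ds = ds.map(lambda x: tok(x["text"], truncation=True, max_length=1024),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cpt-out", per_device_train_batch_size=2,
                           num_train_epochs=1, learning_rate=1e-4),
    train_dataset=ds,
    # mlm=False gives the plain next-token (causal LM) objective
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```

Whether the result beats a frontier model on the jargon is then an evaluation question; a small held-out set of domain QA pairs is the cheapest way to measure it.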
Qwen Next vLLM fail @ 48GB | 9 | I can't seem to squeeze the 4-bit quants into VRAM, but I don't see any 3-bit ones anywhere. Is this an AWQ limitation? Maybe it's just not possible?
If it is possible, does anyone feel like making one? :D | 2025-09-16T14:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1niin71/qwen_next_vllm_fail_48gb/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niin71 | false | null | t3_1niin71 | /r/LocalLLaMA/comments/1niin71/qwen_next_vllm_fail_48gb/ | false | false | self | 9 | null |
What are the current options for running LLMs locally on a laptop? | 1 | The main ones I've seen are the MacBook and the ASUS ROG Flow Z13. Are there other options? I'm looking for 100+ GB of RAM. I gather the Ryzen AI Max+ 395 is not great for image generation. Most of my work and hobby use involves LLMs, but I'd like to be able to run image and audio generation as well. | 2025-09-16T13:48:50 | https://www.reddit.com/r/LocalLLaMA/comments/1nii0iy/what_are_the_current_options_for_running_llms/ | ConSemaforos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nii0iy | false | null | t3_1nii0iy | /r/LocalLLaMA/comments/1nii0iy/what_are_the_current_options_for_running_llms/ | false | false | self | 1 | null |
Genuine question about RAG | 10 | Ok, as many have mentioned or pointed out, I'm a bit of a noob at AI and probably coding. I'm a 43-year-old techy; I'm not up on a lot of newer tech, but becoming disabled and having tons of time on my hands because I can't work has led me to want to build myself an AI that can help me with daily tasks. I don't have the hardware to build my own model, so I'm building tools that can augment any available LLM I can run. I have limited funds, so I'm building what I can with what I have.

But what is all the hype about RAG? I don't understand it, and a lot of platforms just assume that when you share your code with an LLM you want RAG. What is RAG? From what I can gather, it only looks at a few excerpts from the code or file you upload and shows those to the model. If I'm uploading a file, I don't want the UI to randomly look through the code for whatever I'm saying in the chat I'm sending the code with; I'd rather the model just read my code and respond to my question.

Can someone please explain RAG, in a human-readable way? I'm just getting back into coding and I'm not as up on the terminology as I probably should be. | 2025-09-16T13:39:22 | https://www.reddit.com/r/LocalLLaMA/comments/1nihs0a/genuine_question_about_rag/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nihs0a | false | null | t3_1nihs0a | /r/LocalLLaMA/comments/1nihs0a/genuine_question_about_rag/ | false | false | self | 10 | null |
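Since the question comes up a lot, here is a minimal sketch of what a RAG pipeline actually does under the hood, assuming sentence-transformers for the embeddings; the chunk size, model, and file name are illustrative, not what any particular UI uses:

```python
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")
code = open("my_project.py").read()  # the file you would upload

# 1. Split the file into chunks and embed each chunk once
chunks = [code[i:i + 1000] for i in range(0, len(code), 1000)]
chunk_vecs = embedder.encode(chunks, convert_to_tensor=True)

# 2. At question time, embed the question and keep only the closest chunks
question = "Why does my save function overwrite old data?"
q_vec = embedder.encode(question, convert_to_tensor=True)
top = util.semantic_search(q_vec, chunk_vecs, top_k=3)[0]

# 3. Only those chunks, not the whole file, go into the LLM prompt
context = "\n\n".join(chunks[hit["corpus_id"]] for hit in top)
prompt = f"Context:\n{context}\n\nQuestion: {question}"
```

The point is context-window economy: instead of pasting the whole file, the UI pastes only the chunks most similar to your question, which is exactly the "a few excerpts" behavior described above.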
I tried Kimi K2 so you don't have to | 0 | My Claude Code Max subscription expired a couple of days ago, so I went looking for alternatives with better pricing. Kimi K2 caught my attention with its cheaper rates, but here's how it turned out:
- After my first 2 hours of vibe coding I had spent around $1.30. I work 8 hrs/day (minimum), so that works out to about $5.20/day, or roughly $150/month, which is more or less as expensive as Claude Code Max.
- On code quality/performance: **not even close compared to CC.** It seems to have a very narrow grasp of the codebase. When working with CC, it automatically scans/reads related files before making changes for better context; in contrast, Kimi's behavior is single-file-focused. It tries to work on a single file, doesn't bother reading related files, and of course it didn't get the job done.
- On the business side: I've been using Claude since the early web version, and I can see that Anthropic does a very good job tuning their model for coding. In comparison, Kimi is still an early Chinese startup; their dashboard isn't fully developed yet, some features don't work, and their pricing isn't competitive at all.
- Kimi is in more or less the same position as DeepSeek. DeepSeek used to have the hype, everyone talked about it all over Reddit, and it still ended up being crushed by OpenAI.
My point being: for a budget around $100-200/month, Claude Code is currently the best option you can get. I tried something new and learned my lesson. Kimi still has its potential: it's open source, so it can be self-hosted and made more cost-effective, but for individual vibe-coders it's a NO-GO.
https://preview.redd.it/0wpczwhf0jpf1.png?width=2560&format=png&auto=webp&s=6b02e9ffc5af48340fa26b340280ffb8ea215c95
| 2025-09-16T13:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1nih24r/i_tried_kimi_k2_so_you_dont_have_to/ | toantruong38 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nih24r | false | null | t3_1nih24r | /r/LocalLLaMA/comments/1nih24r/i_tried_kimi_k2_so_you_dont_have_to/ | false | false | 0 | null | |
Z440 with 512GB RAM and a 3090 | 1 | Hi.
Thinking about reactivating my HP Z440.
I could get 512GB DDR4 2400 for around 400€.
I have a Xeon E5-2690 v4 (14 cores) and could throw in an RTX 3090; much more won't be easy because of the 700W PSU (yes, it could be swapped for a normal ATX unit etc., but I want to keep it simple for now).
What performance could I expect - also on bigger models?
Some references out there? | 2025-09-16T12:57:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nigq8b/z440_with_512gb_ram_and_a_3090/ | Potential-Leg-639 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nigq8b | false | null | t3_1nigq8b | /r/LocalLLaMA/comments/1nigq8b/z440_with_512gb_ram_and_a_3090/ | false | false | self | 1 | null |
Small LLM evaluation | 7 | Hello, I'm sharing a script for evaluating tiny language models with the community; I hope it's useful. I'm looking for feedback on what other metrics could be added to measure performance, GPU consumption, answer quality, and more. Thanks! (Hardware: AMD 1800, 32GB RAM, GTX 1070.)
# ======================================================================
# File: llm_evaluation_script.py
# Description: LLM evaluation script with performance metrics and automatic ranking.
# ======================================================================
from dotenv import load_dotenv
import sys
import time
import psutil
import json
from openai import OpenAI
from IPython.display import Markdown, display

# Load environment variables from the .env file
load_dotenv(override=True)

# Initialize the OpenAI client used to talk to Ollama
client = OpenAI(
    base_url="http://192.168.50.253:11434/v1",
    api_key="ollama",
    timeout=120
)

# ======================================================================
# Benchmark configuration
# ======================================================================

# Models to evaluate
models = [
    "llama3.2:1b",
    "llama3.2:3b",
    "qwen3:1.7b",
    "gemma3n:e4b",
    "qwen3:0.6b",
    "gemma3:1b",
    "cogito:3b"
]

# Model sizes in GB, used for the energy estimate
model_sizes = {
    "llama3.2:1b": 1.0,
    "llama3.2:3b": 3.0,
    "qwen3:1.7b": 1.3,
    "gemma3n:e4b": 4.0,
    "qwen3:0.6b": 1.0,
    "gemma3:1b": 1.0,
    "cogito:3b": 3.0  # added so every model in the list has a size entry
}

# Evaluation tasks and their prompts
tasks = {
    "Programming": "Here's a buggy Python function for the Fibonacci sequence: ```def fib(n): if n <= 1: return n; else: return fib(n-1) + fib(n-2)``` The function is correct for small `n` but inefficient for larger `n`. Suggest an optimized version and explain the bug in 100 words or less.",
    "Deep Reasoning": "Three people, A, B, and C, are either knights (always tell the truth) or knaves (always lie). A says, 'B is a knight.' B says, 'C is a knave.' C says, 'A and B are knaves.' Determine who is a knight and who is a knave in 100 words or less.",
    "Mathematics": "Calculate the integral ∫(0 to 1) x^2 dx and explain the steps in 100 words or less.",
    "Physics": "A ball is thrown horizontally at 10 m/s from a 20 m high cliff. How far from the base of the cliff does it land? Ignore air resistance and use g = 9.8 m/s². Answer in 100 words or less.",
    "Chemistry": "Balance the chemical equation: C3H8 + O2 → CO2 + H2O. Provide the balanced equation and a brief explanation in 100 words or less.",
    "Creativity": "Write a 100-word story about a robot discovering a hidden forest on Mars."
}

# System prompt that guides the models
system_prompt = "You are an expert AI assistant. Provide accurate, concise, and clear answers to the following task in 100 words or less."

# Dictionaries for results, rankings and scores
results = {task: {model: {"response": "", "metrics": {}} for model in models} for task in tasks}
rankings = {task: {} for task in tasks}
overall_scores = {model: 0 for model in models}

# ======================================================================
# Main evaluation loop
# ======================================================================
# Evaluate every model on every task
for task, prompt in tasks.items():
    print(f"\n=== Evaluating task: {task} ===\n")
    competitors = []
    answers = []
    for model_name in models:
        print(f"\n--- Model: {model_name} ---")
        try:
            # 1. Sample resource usage before the call
            cpu_before = psutil.cpu_percent(interval=None)
            mem_before = psutil.virtual_memory().used / 1024**2
            start_time = time.time()
            # 2. Call the Ollama API
            response = client.chat.completions.create(
                model=model_name,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": prompt}
                ],
                max_tokens=200
            )
            # 3. Sample resource usage after the call
            elapsed_time = time.time() - start_time
            if elapsed_time > 120:
                raise TimeoutError("The response exceeded the 2-minute limit.")
            cpu_after = psutil.cpu_percent(interval=None)
            mem_after = psutil.virtual_memory().used / 1024**2
            cpu_usage = (cpu_before + cpu_after) / 2
            mem_usage = mem_after - mem_before
            energy_estimate = model_sizes.get(model_name, 0) * elapsed_time
            # 4. Store the response and the metrics
            answer = response.choices[0].message.content
            display(Markdown(f"**{model_name}** (Time: {elapsed_time:.2f}s, CPU: {cpu_usage:.1f}%, Mem: {mem_usage:.1f} MB, Energy: {energy_estimate:.1f} GB*s): {answer}"))
            print(f"{model_name} (Time: {elapsed_time:.2f}s, CPU: {cpu_usage:.1f}%, Mem: {mem_usage:.1f} MB, Energy: {energy_estimate:.1f} GB*s): {answer}")
            results[task][model_name] = {
                "response": answer,
                "metrics": {
                    "response_time": elapsed_time,
                    "cpu_usage": cpu_usage,
                    "mem_usage": mem_usage,
                    "energy_estimate": energy_estimate
                }
            }
            competitors.append(model_name)
            answers.append(answer)
        except Exception as e:
            print(f"Error with {model_name}: {e}", file=sys.stderr)
            error_msg = f"Error: No response ({str(e)})"
            results[task][model_name] = {
                "response": error_msg,
                "metrics": {
                    "response_time": float("inf"),
                    "cpu_usage": 0,
                    "mem_usage": 0,
                    "energy_estimate": float("inf")
                }
            }
            competitors.append(model_name)
            answers.append(error_msg)
    # 5. Judge the answers and produce a ranking
    together = ""
    for index, answer in enumerate(answers):
        together += f"# Competitor {index + 1}'s answer\n\n{answer}\n\n"
    print(f"\n=== Combined answers for {task} ===\n")
    print(together)
    judge_prompt = f"""You are judging a competition between {len(competitors)} competitors for the task: {task}.
Evaluate each answer for accuracy, clarity, conciseness and relevance. Rank them from best to worst. If an answer is an error message, rank it last.
Respond only with JSON:
{{"results": ["number of the best competitor", "number of the second best", ...]}}
Answers:
{together}
Respond only with the ranking in JSON format."""
    try:
        response = client.chat.completions.create(
            model="cogito:8b",
            messages=[{"role": "user", "content": judge_prompt}],
            max_tokens=200
        )
        judge_result = json.loads(response.choices[0].message.content)
        ranks = judge_result["results"]
        print(f"\n=== Rankings for {task} ===\n")
        for index, rank in enumerate(ranks):
            competitor = competitors[int(rank) - 1]
            rankings[task][competitor] = len(ranks) - index
            overall_scores[competitor] += len(ranks) - index
            print(f"Rank {index + 1}: {competitor} (Score: {len(ranks) - index})")
    except Exception as e:
        print(f"Error judging {task}: {e}", file=sys.stderr)

# ======================================================================
# Results summary
# ======================================================================
# 6. Print the performance metrics summary
print("\n=== Performance Metrics Summary ===\n")
for task in tasks:
    print(f"\n--- Task: {task} ---")
    print("Model\t\t\tTime (s)\tCPU (%)\tMem (MB)\tEnergy (GB*s)")
    for model_name in models:
        metrics = results[task][model_name]["metrics"]
        time_s = metrics["response_time"]
        cpu = metrics["cpu_usage"]
        mem = metrics["mem_usage"]
        energy = metrics["energy_estimate"]
        print(f"{model_name:<20}\t{time_s:.2f}\t\t{cpu:.1f}\t{mem:.1f}\t\t{energy:.1f}")
# 7. Identify the slowest and most resource-hungry models
print("\n=== Slowest and Most Resource-Hungry Models ===\n")
for task in tasks:
    print(f"\n--- Task: {task} ---")
    max_time_model = max(models, key=lambda m: results[task][m]["metrics"]["response_time"])
    max_cpu_model = max(models, key=lambda m: results[task][m]["metrics"]["cpu_usage"])
    max_mem_model = max(models, key=lambda m: results[task][m]["metrics"]["mem_usage"])
    max_energy_model = max(models, key=lambda m: results[task][m]["metrics"]["energy_estimate"])
    print(f"Slowest model: {max_time_model} ({results[task][max_time_model]['metrics']['response_time']:.2f}s)")
    print(f"Highest CPU usage: {max_cpu_model} ({results[task][max_cpu_model]['metrics']['cpu_usage']:.1f}%)")
    print(f"Highest memory usage: {max_mem_model} ({results[task][max_mem_model]['metrics']['mem_usage']:.1f} MB)")
    print(f"Highest estimated energy: {max_energy_model} ({results[task][max_energy_model]['metrics']['energy_estimate']:.1f} GB*s)")
# 8. Print the overall ranking
print("\n=== Overall Model Ranking ===\n")
sorted_models = sorted(overall_scores.items(), key=lambda x: x[1], reverse=True)
print("Model\t\t\tTotal Score")
for model, score in sorted_models:
    print(f"{model:<20}\t{score}")
# 9. Server optimization recommendations
print("\n=== Server Optimization Recommendations ===\n")
slowest_model = max(models, key=lambda m: sum(results[task][m]["metrics"]["response_time"] for task in tasks))
highest_energy_model = max(models, key=lambda m: sum(results[task][m]["metrics"]["energy_estimate"] for task in tasks))
print(f"1. **GPU acceleration**: Larger models such as {slowest_model} (the slowest) and {highest_energy_model} (the highest consumer) benefit greatly from a GPU. Configure Ollama with GPU support: `https://ollama.com/docs/gpu`.")
print("2. **Quantization**: Use more heavily quantized variants of the large models to reduce memory use and inference time.")
print("3. **Resource monitoring**: Monitor the server's RAM (`htop` or `nvidia-smi`) to avoid bottlenecks.") | 2025-09-16T12:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/1nigiuk/small_llm_evaluation/ | InformationPretty616 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nigiuk | false | null | t3_1nigiuk | /r/LocalLLaMA/comments/1nigiuk/small_llm_evaluation/ | false | false | self | 7 | null |
Plan to build my setup | 2 | Hi guys, while tinkering at home with LLMs and building small AI agents, I came across Ollama and the concept of self-hosting quantized models. I really want to continue tinkering with self-hosted LLMs and build my own assistant, for the fun of it and the learning experience.
While I'm heavily constrained on my laptop, I discovered some old PC parts I have lying around:
Motherboard: msi b250 pc mate
CPU: i5 7600 LGA1151
Memory: 16GB DDR3 RAM
Storage: 500GB HDD
PSU: iarena 400W
GPU: Nvidia GT 240
I'm toying with the idea of putting these parts together and upgrading step by step to a new PC build, since I can't spend the necessary money all at once. My plan is to start with a new PSU, new storage, and a new/used GPU, then upgrade the rest of the build (motherboard, RAM, and CPU) step by step over the coming months.
For the GPU, I've been researching a lot and came up with a budget of up to 500€. I'm considering the following GPUs, which should allow me to tinker with ML models and also occasionally game:
- new RTX 3060 12GB ~260€
- new RTX 5060 Ti 16GB ~430€
- used RTX 3090 24GB ~ up to 500€ (found some in this range)
I'm new to building PCs and the PC spec world. What I'm really looking for is some guidance here to purchase a well rounded GPU which can last me for the next few years in experimenting with LLMs (and gaming but no need to go all out for it). I'm currently leaning towards the used 3090 but I'm not sure if it'll hold up for the next few years with the software support.
Questions:
What is your opinion of these GPUs? Are there others I should consider? What should I look out for when purchasing used ones? Are there any problems with my plan of putting the PC together over the course of the next 3-6 months?
I'm aware that until I upgrade the CPU and motherboard I won't be able to use the GPU to its fullest potential. Other than that, no harm will come to it, right?
I'd be happy to be able to run some 13B models and do some LoRA finetuning locally. I'd also like to run some computer vision models (detecting objects, for example) and speech-to-text and text-to-speech.
If you guys need more info I'll be happy to provide. Also I hope I'm at the right sub! | 2025-09-16T12:43:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nigejm/plan_to_build_my_setup/ | greensmuzi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nigejm | false | null | t3_1nigejm | /r/LocalLLaMA/comments/1nigejm/plan_to_build_my_setup/ | false | false | self | 2 | null |
Hardware and model recommendations for on-prem LLM deployment | 5 | I've delivered a couple of projects using frontier models, but my latest client wants something on-prem for his team of ~10. The application will have a RAG pipeline, starting with ~100 PDFs. Later I will need to add some agentic reasoning.
Questions:
- Which open-source LLM is a good place to start for RAG? I will experiment a bit, but nice to have some working experience.
- Viable hardware: do I need Nvidia? AMD? I've only ever used cloud-based systems, so this is a bit new to me, and the part I feel less sure about.
Any help would be appreciated, thank you! | 2025-09-16T12:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nig0zp/hardware_and_model_recommendations_for_onprem_llm/ | neenawa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nig0zp | false | null | t3_1nig0zp | /r/LocalLLaMA/comments/1nig0zp/hardware_and_model_recommendations_for_onprem_llm/ | false | false | self | 5 | null |
Do you pay in dollars or in patience? | 0 | Had an interesting discussion with colleagues recently that got me thinking: which scenario is the actual nightmare? A bill that makes finance chase you down (sometimes even faster than the models generate text), or watching a model drip out tokens slower than your CI pipeline on a Friday?
Only few companies actually hit a nice middle ground between having a great pricing, being fast, and providing models that are actually usable. | 2025-09-16T11:53:10 | codegolf-guru | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nifb70 | false | null | t3_1nifb70 | /r/LocalLLaMA/comments/1nifb70/do_you_pay_in_dollars_or_in_patience/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'fpb7db5imipf1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=108&crop=smart&auto=webp&s=013105950fef933f81048cf625af8591df83bc2f', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=216&crop=smart&auto=webp&s=b1adfa3834a48a309c273a06ac16d205befa2c34', 'width': 216}, {'height': 160, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=320&crop=smart&auto=webp&s=a9338643a512e227ff55375643d9a01fed2bdfb4', 'width': 320}, {'height': 320, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=640&crop=smart&auto=webp&s=d5a82ed5ce5fc5f6189239a275ec93a1ed0e6161', 'width': 640}, {'height': 481, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=960&crop=smart&auto=webp&s=1dc8de0f2fe6d2e80e482e78a1340fee70deb71d', 'width': 960}, {'height': 541, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?width=1080&crop=smart&auto=webp&s=6c63371888d6341b5e32dc44f067bab1b9e0b195', 'width': 1080}], 'source': {'height': 2052, 'url': 'https://preview.redd.it/fpb7db5imipf1.png?auto=webp&s=ba04b93dec58cef8195a0878f334b7c6c4020670', 'width': 4092}, 'variants': {}}]} | |
I bought a modded 4090 48GB in Shenzhen. This is my story. | 1,684 | 2025-09-16T11:52:20 | https://www.reddit.com/r/LocalLLaMA/comments/1nifajh/i_bought_a_modded_4090_48gb_in_shenzhen_this_is/ | king_priam_of_Troy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nifajh | false | null | t3_1nifajh | /r/LocalLLaMA/comments/1nifajh/i_bought_a_modded_4090_48gb_in_shenzhen_this_is/ | false | false | 1,684 | {'enabled': False, 'images': [{'id': '1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU.png?width=108&crop=smart&auto=webp&s=35fe4b7b70c9ab743e74325cbfe55031a4641ed6', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU.png?width=216&crop=smart&auto=webp&s=1088071968a9d155af9b6b2b32e8ffce7db4fe08', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU.png?width=320&crop=smart&auto=webp&s=a00ae2c2c6ef84c2b597c1e9a2ab434db631eeba', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU.png?width=640&crop=smart&auto=webp&s=e5102c5612db16c04c26877a1e72e86700648e25', 'width': 640}], 'source': {'height': 800, 'url': 'https://external-preview.redd.it/1vD_R63iqu4vnM_qQf7pZNwXb9dy_UDc_Gl2j3LnTpU.png?auto=webp&s=82ece937bf43ad3c64c938891185b1fede1cccce', 'width': 800}, 'variants': {}}]} | ||
Do you pay in dollars or in patience? | 1 | 2025-09-16T11:50:48 | https://artificialanalysis.ai/providers/deepinfra#output-speed-vs-price | codegolf-guru | artificialanalysis.ai | 1970-01-01T00:00:00 | 0 | {} | 1nif9ds | false | null | t3_1nif9ds | /r/LocalLLaMA/comments/1nif9ds/do_you_pay_in_dollars_or_in_patience/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=216&crop=smart&auto=webp&s=b97954336b79c1390848d0e44fa056a85de68672', 'width': 216}, {'height': 177, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=320&crop=smart&auto=webp&s=65f53b80ab9674ee645013e3e8eeac4f953d657e', 'width': 320}, {'height': 355, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=640&crop=smart&auto=webp&s=47f397e4a22ed5ec7e82aad070eb446319603abc', 'width': 640}, {'height': 533, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=960&crop=smart&auto=webp&s=0f4359d47b78f5c1aa35de8804dbe36a749fc11a', 'width': 960}, {'height': 600, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=1080&crop=smart&auto=webp&s=62eb4b7216f41af6600fc4df79cfa67425c19442', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?auto=webp&s=efc17c9f241b4403d22cbacfe5d71900ee1cf85a', 'width': 1260}, 'variants': {}}]} | ||
Would you rather save money or not grow a beard waiting for tokens? | 1 | Had an interesting discussion with colleagues recently that got me thinking: which scenario is the actual nightmare? A bill that makes finance chase you down (sometimes even faster than the models generate text), or watching a model drip out tokens slower than your CI pipeline on a Friday?
Only a few companies actually hit a nice middle ground between great pricing, speed, and models that are actually usable.
| 2025-09-16T11:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1nif79j/would_you_rather_save_money_or_not_grow_a_beard/ | codegolf-guru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nif79j | false | null | t3_1nif79j | /r/LocalLLaMA/comments/1nif79j/would_you_rather_save_money_or_not_grow_a_beard/ | false | false | self | 1 | null |
Unofficial VibeVoice finetuning code released! | 89 | Just came across this on discord: [https://github.com/voicepowered-ai/VibeVoice-finetuning](https://github.com/voicepowered-ai/VibeVoice-finetuning)
I will try training a lora soon, I hope it works :D | 2025-09-16T11:47:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nif778/unofficial_vibevoice_finetuning_code_released/ | Downtown-Accident-87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nif778 | false | null | t3_1nif778 | /r/LocalLLaMA/comments/1nif778/unofficial_vibevoice_finetuning_code_released/ | false | false | self | 89 | {'enabled': False, 'images': [{'id': 'KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=108&crop=smart&auto=webp&s=7fde60db36a71b6e5e2dc1d113c77674a82a8a78', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=216&crop=smart&auto=webp&s=34b686da8366b8841192319f3346b88c801633f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=320&crop=smart&auto=webp&s=5c71c133cb4b258798d4ee4b3268ae9431a2a4dc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=640&crop=smart&auto=webp&s=47cbfbf5947a2cada5a855cff1af804520cc44d2', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=960&crop=smart&auto=webp&s=de0735717972a7fe411989aa315f73aee5bb9b26', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?width=1080&crop=smart&auto=webp&s=b69741e79c7f662dca1cc799a9df3467dac63b6d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KVGg_UIfL39bCgXlkziO7zHpgd2Tgj80HnW9DzVEs7c.png?auto=webp&s=b9d36b28c3d88c6d72592b9149a8ba44f4777773', 'width': 1200}, 'variants': {}}]} |
I built a tool to search content in my local files using semantic search | 12 | Hey everyone
A while back I shared an open-source tool called DeepDoc that I built to explore local files with a research-style workflow. The support and feedback I got here really meant a lot and kept me building, so thank you.
The idea is simple: instead of manually going through PDFs, docs, or notes, I wanted a smarter way to search the content of my own files.
You just point it at a folder with PDF, DOCX, TXT, or image files. It extracts the text, splits it into chunks, runs semantic search based on your query, and builds a structured markdown report step by step (see the chunking sketch below).
Here is the repo if you want to take a look
[https://github.com/Datalore-ai/deepdoc](https://github.com/Datalore-ai/deepdoc)
It recently reached 95 stars which honestly means a lot to me. Knowing that people actually use it and find it useful really made my day
Many people suggested adding OneDrive Google Drive integrations and support for more file formats which I am planning to add soon. and keep making it better. | 2025-09-16T11:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/1niezv4/i_built_a_tool_to_search_content_in_my_local/ | Interesting-Area6418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niezv4 | false | null | t3_1niezv4 | /r/LocalLLaMA/comments/1niezv4/i_built_a_tool_to_search_content_in_my_local/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=108&crop=smart&auto=webp&s=02993fa7d1a7cc26db19cc5bce3522d712ce5ff8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=216&crop=smart&auto=webp&s=91c7535a98b1882b16f92ae80e842931dfe5c0e8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=320&crop=smart&auto=webp&s=4ea2c5a8f1c3a9b48a39f63f5bbe4b32f0c9f1ee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=640&crop=smart&auto=webp&s=ecb05e93583380a196b1e8ffdc9c76df382f56ef', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=960&crop=smart&auto=webp&s=ffc6140c65eef8774744cf622207ac77caa6ef33', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?width=1080&crop=smart&auto=webp&s=5d0bed4baace1944e4d4d86b369b420826293c27', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8BycYv4P-m4xDYfQ8hNCh4a02S5URyftBqjRcAddJeA.png?auto=webp&s=b67d6ca1892be681687f0b949c1d352c6ba6371e', 'width': 1200}, 'variants': {}}]} |
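For anyone curious what the extract-then-chunk step looks like, here is a minimal sketch with overlapping windows; the sizes are illustrative, not DeepDoc's actual parameters:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 120) -> list[str]:
    """Split text into fixed-size chunks whose neighbours share `overlap` chars."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        # advance by less than the chunk size so context spans chunk borders
        start += size - overlap
    return chunks

text = open("extracted.txt").read()  # placeholder: text pulled from a PDF/DOCX
print(len(chunk_text(text)), "chunks ready for embedding")
```

The overlap is the design choice that matters: without it, a sentence split across two chunks can be unreachable by a query that matches the whole sentence.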
Unofficial VibeVoice LoRa finetuning code released | 1 | [removed] | 2025-09-16T11:29:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nieu9d/unofficial_vibevoice_lora_finetuning_code_released/ | Left-Investment7050 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nieu9d | false | null | t3_1nieu9d | /r/LocalLLaMA/comments/1nieu9d/unofficial_vibevoice_lora_finetuning_code_released/ | false | false | self | 1 | null |
We just rolled out vLLM with Falcon3 & Mamba-7B - have a discount code if anyone wants to try | 0 | Ever thought about running your own LLMs without the hassle of setting up expensive hardware? ⚡️We are building a **distributed GPU compute platform**. One of the big challenges we’ve seen is how tricky it can be to spin up LLMs without buying a GPU rig or spending hours on cloud configs.
To make things simpler, we’ve just added **vLLM support** with models like **Falcon3 (3B, 7B, 10B)** and **Mamba-7B**. The idea is to let developers and researchers experiment, benchmark, or prototype without needing to manage infra themselves.
If anyone here is curious to test it, I can share a **70% discount code** for first-time credits — just DM me and I’ll send it over. 🙌
Curious to hear how you usually approach this — do you rent compute, self-host, or stick with managed services ? | 2025-09-16T11:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nietce/we_just_rolled_out_vllm_with_falcon3_mamba7b_have/ | frentro_max | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nietce | false | null | t3_1nietce | /r/LocalLLaMA/comments/1nietce/we_just_rolled_out_vllm_with_falcon3_mamba7b_have/ | false | false | self | 0 | null |
Vector DBs and LM Studio, how does it work in practicality? | 4 | Hi. I'm going to take a backup of the vectors made in LM Studio from a RAG, and I expect that to go just well with ChromaDB. But when I want to hook up those vectors with a new chat then I'm not sure how to proceed in LMS. I can't find any "load vector DB" anywhere, but I might not have looked well enough. I'm obviously not very experienced with using vectors from one chat to another, so this might seem trivial to some, but I'm still outside a tall gate on this right now. Thanks in advance! | 2025-09-16T11:24:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nieqgy/vector_dbs_and_lm_studio_how_does_it_work_in/ | TunnelToTheMoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nieqgy | false | null | t3_1nieqgy | /r/LocalLLaMA/comments/1nieqgy/vector_dbs_and_lm_studio_how_does_it_work_in/ | false | false | self | 4 | null |
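Not an LM Studio answer, but as a point of comparison, this is how persistence works in ChromaDB itself (LM Studio's internal RAG store is not necessarily Chroma); the path, collection name, and texts are placeholders:

```python
import chromadb

# First session: store chunks; Chroma embeds them with its default model
client = chromadb.PersistentClient(path="./chroma_backup")
col = client.get_or_create_collection("my_rag_docs")
col.add(ids=["c1", "c2"],
        documents=["first chunk of my document", "second chunk"])

# A later session (or a new chat) just reopens the same path
client2 = chromadb.PersistentClient(path="./chroma_backup")
col2 = client2.get_collection("my_rag_docs")
print(col2.query(query_texts=["what does the document say?"], n_results=1))
```

In other words, "loading" a vector DB is usually just reopening the same persistence directory; there is no separate load step for the vectors themselves.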
Exploring LLaMA for Student-Centric Study Tools (Notes, Flashcards, and More) | 1 | I’ve been experimenting with applying LLaMA models for student workflows — generating topic-wise notes, NCERT-aligned references, flashcards, and a lightweight chatbot with RAG for context retrieval.
The idea is to blend LoRA fine-tuning + embeddings search to make study material more structured instead of generic outputs. Currently calling it ExamSprint. Curious to hear feedback from others working with LLaMA on similar use cases. | 2025-09-16T10:57:28 | Distinct-Mode-7415 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nie8hf | false | null | t3_1nie8hf | /r/LocalLLaMA/comments/1nie8hf/exploring_llama_for_studentcentric_study_tools/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'tg2f2o8scipf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=108&crop=smart&auto=webp&s=518ab12db495b87a22d8fba23f77f3936b040c11', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=216&crop=smart&auto=webp&s=c55b6cd0969e1eef06e07a56d22143c3bee2edf8', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=320&crop=smart&auto=webp&s=09f52eeaf04984c4eaa786176964e03a731f9b22', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=640&crop=smart&auto=webp&s=f8bc9e3e5bb35af3e6b856ef2f46afdfa1a2f866', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=960&crop=smart&auto=webp&s=c806124799ba46c16c61e7384eb35fa418aec012', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?width=1080&crop=smart&auto=webp&s=1c287a812eac7db2c1645f42863f78d91f4074ae', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/tg2f2o8scipf1.png?auto=webp&s=6685eb27688dd3a62a6ae7bb35d93b2fda6b100f', 'width': 1080}, 'variants': {}}]} | |
Docling Interferes with Embedding & Reranking | 1 | Hi everyone,
I've been testing a variety of content extractors, embedding models, and reranking models lately. In my experience, Docling offers the best quality among all free‑to‑use content extractors, but many embedding and reranking models fail to correctly interpret tabular layouts. As a result, they often place irrelevant or mismatched data in the output.
This issue is quite severe; in certain documents, unless you feed the entire document context directly to the model, using Docling becomes impractical.
If anyone has encountered the same problem or managed to work around it, I’d love to hear your thoughts and solutions.
Models I’ve tried:
* BAAI
* Qwen3 | 2025-09-16T10:30:55 | https://www.reddit.com/r/LocalLLaMA/comments/1nidrwy/docling_interferes_with_embedding_reranking/ | Cyp9715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nidrwy | false | null | t3_1nidrwy | /r/LocalLLaMA/comments/1nidrwy/docling_interferes_with_embedding_reranking/ | false | false | self | 1 | null |
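One workaround that may help (a sketch, not Docling-specific): chunk extracted tables row by row and prepend the header to every row chunk, so the embedding model always sees column names next to the values:

```python
header = ["Model", "Params", "License"]          # placeholder table
rows = [["llama3.2", "3B", "Llama license"],
        ["qwen3", "1.7B", "Apache-2.0"]]

def row_chunks(header: list[str], rows: list[list[str]]) -> list[str]:
    head = " | ".join(header)
    # repeating the header keeps values attached to their column labels
    return [f"{head}\n{' | '.join(r)}" for r in rows]

for chunk in row_chunks(header, rows):
    print(chunk, end="\n\n")  # embed these instead of the raw flattened table
```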
[Release] DASLab GGUF Non-Uniform Quantization Toolkit | 1 | [removed] | 2025-09-16T10:21:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nidmdn/release_daslab_gguf_nonuniform_quantization/ | mic__hel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nidmdn | false | null | t3_1nidmdn | /r/LocalLLaMA/comments/1nidmdn/release_daslab_gguf_nonuniform_quantization/ | false | false | 1 | null | |
What would be the best way to use MCP with ollama and open webUI? | 0 | I am developing a ai model with open webUI how can I make it work with nmap and other tools? | 2025-09-16T10:16:14 | https://www.reddit.com/r/LocalLLaMA/comments/1nidj0c/what_would_be_the_best_way_to_use_mcp_with_ollama/ | PrizePerformance5066 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nidj0c | false | null | t3_1nidj0c | /r/LocalLLaMA/comments/1nidj0c/what_would_be_the_best_way_to_use_mcp_with_ollama/ | false | false | self | 0 | null |
Think twice before spending on GPU? | 106 | Qwen team is shifting paradigm. Qwen Next is probably first big step of many that Qwen (and other chinese labs) are taking towards sparse models, because they do not have the required GPUs to train on.
10% of the training cost, 10x inference throughput, 512 experts, ultra-long context (though not good enough yet).
They have a huge incentive to train this model further (on 36T tokens instead of 15T). They will probably release the final checkpoint in the coming months or even weeks. Think of the electricity savings from running (and idling) a pretty capable model. We might be able to run a Qwen 235B equivalent locally on hardware under $1500. 128GB of RAM could be enough for this year's models, and it's easily upgradable to 256GB for next year's.
Wdyt? | 2025-09-16T10:16:07 | https://www.reddit.com/r/LocalLLaMA/comments/1nidixx/think_twice_before_spending_on_gpu/ | __Maximum__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nidixx | false | null | t3_1nidixx | /r/LocalLLaMA/comments/1nidixx/think_twice_before_spending_on_gpu/ | false | false | self | 106 | null |
[URGENT] Which is a reliable and affordable GPU cluster for hosting custom LLMs for business | 0 | I train local LLM models on company-specific data so that my clients' employees can use them internally instead of ChatGPT.
Specs:
- 1 company
- Max. 50 accounts
- llama3.1:8b & deepseek-r1:8b
- Firebase as storage
- ~$1000/mo GPU hosting budget
Now I have to host this somewhere and this is the part that I have never done before (because costing is hourly). I checked out [Vast.ai](https://cloud.vast.ai/?ref_id=315793) & [RunPod](https://runpod.io?ref=7xb737zi) but I am still unsure of the decision
This is my new business and I don't mind losing money at all. I have a decent runway saved up. So which platform or GPU clusters can I use to host this? Consider my maximum budget to be $25/mo/user, i.e. $1250/mo for 50 users.
I also have another client scheduled for next month for a similar service who has 150 accounts. So just purchasing hardware is not an option as the clients will keep coming and going.
I'm a complete noob in the hosting field, so if there are other ways of hosting as well, I'm open to anything. I don't know what I'm doing; I just want the client to have a lag-free, smooth experience when their employees use it.
PS. If you are suggesting a service, please let me know which GPU you would recommend and what the hourly hosting fees are. Thank you. | 2025-09-16T10:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nidi1s/urgent_which_is_a_reliable_and_affordable_gpu/ | Competitive-Wing1585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nidi1s | false | null | t3_1nidi1s | /r/LocalLLaMA/comments/1nidi1s/urgent_which_is_a_reliable_and_affordable_gpu/ | false | false | self | 0 | null |
Big models feels like joke | 0 | I have been trying to fix a JS file for nearly 30 minutes. I have tried everything and every LLM you can name.
Qwen3-Coder-480b, Deepseek v3.1, gpt-oss-120b (ollama version), kimi k2 etc.
Just as I was thinking about giving up and getting a Claude subscription, I thought, why not give gpt-oss-20b a try in my LM Studio? I had nothing to lose. AND BOY, IT FIXED IT. I don't know why I can't change the reasoning effort in Ollama, but LM Studio lets you decide that. I'm so happy I wanted to share with you guys. | 2025-09-16T09:58:02 | https://www.reddit.com/r/LocalLLaMA/comments/1nid7yp/big_models_feels_like_joke/ | sado361 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nid7yp | false | null | t3_1nid7yp | /r/LocalLLaMA/comments/1nid7yp/big_models_feels_like_joke/ | false | false | self | 0 | null |
GP models and AI Act | 1 | Since there was a discussion some days ago with reference to GPAI models in the context of the AI Act (particularly with reference to fine tuning and a - possible - transition from the role of deployer to the role of provider) I share the invitation I have just received. Normally during this webinars there is a live Q&A so if you have any question, you may ask to some as close to the source of the legislation as possible :)
This email is sent to you following your expression of interest in the AI Pact – Pillar I (all stakeholders)
The AI Office will host its next AI Pact webinar on 23 September 2025, thus continuing to engage actively with stakeholders on the implementation of the EU’s AI Act.
You are invited to join the webinar dedicated to the EU's guidelines on General Purpose AI, Code of Practice for GPAI, and training data transparency template that will take place on Tuesday 23 September from 11:00 to 12:30 CET.
This webinar provides an overview of the EU's guidelines on General Purpose AI, voluntary Code of Practice and training data transparency template for AI Act compliance.
General-purpose AI (GPAI) models can perform a wide range of tasks and are becoming the basis for many AI systems in the EU. Some of these models could carry systemic risks if they are very capable or widely used. To ensure safe and trustworthy AI, the AI Act puts in place rules for providers of such models.
The session will clarify key regulatory concepts, explain compliance pathways, and outline how stakeholders can make sense of different GPAI documents published ahead of the entry into application of the GPAI rules under the AI Act on 2 August 2025.
Specifically, the webinar will delve into the guidelines for General-Purpose AI (GPAI) models, which define core concepts such as what constitutes a GPAI model, what are the responsibilities of provider, and market placement criteria. Experts will also discuss how the voluntary Code of Practice for GPAI — finalised through an inclusive, multi-stakeholder process — will help industry comply with the rules by providing legal certainty and reducing administrative burden. Additionally, the session will cover the Commission template for the public summary of training content for GPAI models, a transparency requirement under the AI Act that complements the Code and is expected from all providers of GPAI models placed on the EU market.
You can watch it live here: Sixth AI Pact webinar on the General-Purpose AI Models and Code of Practice - [https://www.youtube.com/live/jyGlYo5rE-Y](https://www.youtube.com/live/jyGlYo5rE-Y)
More info: Sixth AI Pact webinar on the General-Purpose AI Models and Code of Practice | Shaping Europe’s digital future - [https://digital-strategy.ec.europa.eu/en/events/sixth-ai-pact-webinar-general-purpose-ai-models-and-code-practice](https://digital-strategy.ec.europa.eu/en/events/sixth-ai-pact-webinar-general-purpose-ai-models-and-code-practice) | 2025-09-16T09:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nicy2n/gp_models_and_ai_act/ | gianlucag1971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nicy2n | false | null | t3_1nicy2n | /r/LocalLLaMA/comments/1nicy2n/gp_models_and_ai_act/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=108&crop=smart&auto=webp&s=6cb349e563d48bfc2510ddeb77eeabb249ebb3ff', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=216&crop=smart&auto=webp&s=39336d26826233d94aa876038308cc1eda222913', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=320&crop=smart&auto=webp&s=341f4a76b7a12b4c88fb8ff4d877bc7f8145b534', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=640&crop=smart&auto=webp&s=0dd75ff9490ff7828c8f2ad94a94dabaa1d4ff00', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=960&crop=smart&auto=webp&s=7d802e378dcb66e063d150385c17f66b9560f163', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?width=1080&crop=smart&auto=webp&s=cb259fb18e2974b9c99d3a33a787624f038df273', 'width': 1080}], 'source': {'height': 720, 'url': 'https://external-preview.redd.it/G5OytUFFMwr0E7ppTEksV4_1ZDRjp9gmYHu-aa96TWI.jpeg?auto=webp&s=88db07cca9f07b659f501bdbf719c4a7447595af', 'width': 1280}, 'variants': {}}]} |
Why my server uses only 5-6% ram with f16 llama 8b model. | 7 | My server's CPU is well utilized with my settings, but RAM usage is low: under 10%. I have 48GB of DDR3 RAM. The software is GPT4All with an f16 model.
Context length: 8192
prompt batch size: 485
Max length: 12 250
Temperature: 0,9
CPU threads: 20.
What values would you change, etc.? I have tested different values. | 2025-09-16T08:52:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nic6tz/why_my_server_uses_only_56_ram_with_f16_llama_8b/ | Independent-Olive-66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nic6tz | false | null | t3_1nic6tz | /r/LocalLLaMA/comments/1nic6tz/why_my_server_uses_only_56_ram_with_f16_llama_8b/ | false | false | self | 7 | null |
Can I run Parakeet v3 Multilingual locally with my AMD RX 5700 XT? | 4 | Hi everyone,
I’m a law student in Spain and I’ve been using Whisper v3 Turbo for my note-taking. It works, but for something like a 1.5-hour class, the transcription ends up taking me almost 2 hours when I run it locally.
I also have an AMD RX 5700 XT, but I’m not sure if I can use it to run Parakeet v3 0.6 locally to make things faster. Is that possible? And if yes, how would I set it up? Would I need to use my own GPU?
If anyone could share a tutorial or point me in the right direction, I’d really appreciate it.
Thanks a lot! | 2025-09-16T08:49:26 | https://www.reddit.com/r/LocalLLaMA/comments/1nic4ws/can_i_run_parakeet_v3_multilingual_locally_with/ | solcid1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nic4ws | false | null | t3_1nic4ws | /r/LocalLLaMA/comments/1nic4ws/can_i_run_parakeet_v3_multilingual_locally_with/ | false | false | self | 4 | null |
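Parakeet ships as an NVIDIA NeMo model, so AMD support is uncertain; one hedged alternative for pure speed is faster-whisper with int8 on CPU. A minimal sketch (the model size and file name are placeholders):

```python
from faster_whisper import WhisperModel

# int8 on CPU trades a little accuracy for a large speedup over fp32
model = WhisperModel("large-v3", device="cpu", compute_type="int8")
segments, info = model.transcribe("class_recording.mp3")

with open("notes.txt", "w") as f:
    for seg in segments:  # segments stream in as they are decoded
        f.write(f"[{seg.start:.0f}s] {seg.text}\n")
print("Detected language:", info.language)
```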
Epyc 9965 4k on eBay vs 9995wx 12k ??? | 0 | What’s the catch ? How are the new epycs so cheap? | 2025-09-16T07:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/1nib5qe/epyc_9965_4k_on_ebay_vs_9995wx_12k/ | That-Thanks3889 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nib5qe | false | null | t3_1nib5qe | /r/LocalLLaMA/comments/1nib5qe/epyc_9965_4k_on_ebay_vs_9995wx_12k/ | false | false | self | 0 | null |
I built a package to deploy any local model & make it accessible online — looking for contributors (Torrent Like p2p Protocol ) | 5 | Hey folks,
I just hacked together something I’m really excited about — [connectit.chatit](https://github.com/loayabdalslam/connectit.chatit).
It’s a lightweight package that lets you:
* Deploy **any local model** (LLM, image generator, whatever) with just a few lines of code.
* Expose it via a simple API endpoint so **anyone on the internet** can query it.
Think of it as turning your laptop into a mini inference server. 💻➡️🌍
But here’s the thing — the package is still in its early days. It works, but there’s a lot of room to grow:
* More deployment backends
* Better auth & security
* Examples for popular model frameworks
* CI/CD + testing
* Docs that don’t look like they were written at 3am 😅
If any of that sounds fun, I’d love contributors to help shape its future. The vision is to make self-hosting models stupidly easy — no infra headaches, just run and share.
Repo’s here: [github.com/loayabdalslam/connectit.chatit](https://github.com/loayabdalslam/connectit.chatit)
Would love feedback, feature requests, and PRs. Let’s make running models locally cool again. 🚀 | 2025-09-16T07:00:43 | https://www.reddit.com/r/LocalLLaMA/comments/1niahsr/i_built_a_package_to_deploy_any_local_model_make/ | stopwwIII | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1niahsr | false | null | t3_1niahsr | /r/LocalLLaMA/comments/1niahsr/i_built_a_package_to_deploy_any_local_model_make/ | false | false | self | 5 | null |
LLMs for detailed book summaries? | 13 | I am picturing a tool that I can throw any arbitrary ePub novel at and get back a SparkNotes-style summary:
https://www.sparknotes.com/lit/pride/
(This page has a plot overview but there are other pages that do deeper dives into the material.)
It seems like something an LLM could do in principle if you could avoid hallucinations and maintain coherency.
Has anyone had success on this? | 2025-09-16T05:09:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ni8noo/llms_for_detailed_book_summaries/ | JealousAmoeba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni8noo | false | null | t3_1ni8noo | /r/LocalLLaMA/comments/1ni8noo/llms_for_detailed_book_summaries/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=108&crop=smart&auto=webp&s=bce9aab521c2ad215da2cef25616ce36527f7d11', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=216&crop=smart&auto=webp&s=d4495f62c621e7010e167b6373090e92b922b219', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=320&crop=smart&auto=webp&s=df938dde30ec916680f53fee5d2d0aa0194056ef', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=640&crop=smart&auto=webp&s=14f41369703f8f44a810aa127310ff18542b0107', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=960&crop=smart&auto=webp&s=25b293e66347d713a002fd546a68009dac0a4830', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?width=1080&crop=smart&auto=webp&s=b84ba61ea6c9bf09bd5faef94f153bcfb19775ef', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/Y--lIrTQVTiKbqoVSuL2WQ4TFZiNYIE-R5loqONnT2g.jpeg?auto=webp&s=0ca75d9fded1275bacbb21cf4aca99a1e2897ba8', 'width': 1200}, 'variants': {}}]} |
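One pattern that tends to work for this is map-reduce summarization with a running synopsis for coherence. A minimal sketch, assuming a local OpenAI-compatible server; the model name, chunk size, and file are placeholders:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="local")

def ask(prompt: str) -> str:
    r = client.chat.completions.create(
        model="llama3.2:3b",  # placeholder model
        messages=[{"role": "user", "content": prompt}],
    )
    return r.choices[0].message.content

book = open("novel.txt").read()  # plain text extracted from the ePub
chunks = [book[i:i + 8000] for i in range(0, len(book), 8000)]

# Map: summarize each chunk while carrying a running synopsis for coherence
running = ""
partials = []
for c in chunks:
    s = ask(f"Story so far: {running}\n\nSummarize this passage:\n{c}")
    partials.append(s)
    running = s

# Reduce: merge the partial summaries into one SparkNotes-style overview
overview = ask("Merge these chapter summaries into one plot overview:\n\n"
               + "\n\n".join(partials))
print(overview)
```

The running synopsis is what keeps later chunk summaries consistent with earlier ones, which helps with the coherency problem mentioned above; hallucination is harder and mostly comes down to model choice and spot-checking.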
Feedback on trimmed-down AI workstation build (based on a16z specs) | 10 | I’m putting together a local AI workstation build inspired by the [a16z setup](https://a16z.com/building-a16zs-personal-ai-workstation-with-four-nvidia-rtx-6000-pro-blackwell-max-q-gpus/). The idea is to stop bleeding money on GCP/AWS for GPU hours and finally have a home rig for quick ideation and prototyping. I’ll mainly be using it to train and finetune custom architectures.
I’ve slimmed down the original spec to make it (slightly) more reasonable while keeping room to expand in the future. I’d love feedback from this community before pulling the trigger.
Here are the main changes vs the reference build:
* 4× GPU → 1× GPU (will expand later if needed)
* 256GB RAM → 128GB RAM
* 8TB storage → 2TB storage
* Sticking with the same PSU for headroom if I add GPUs later
* Unsure if the motherboard swap is the right move (original was GIGABYTE MH53-G40, I picked the ASUS Pro WS WRX90E-SAGE SE — any thoughts here?)
Current parts list:
|Category|Item|Price|
|:-|:-|:-|
|**GPU**|NVIDIA RTX PRO 6000 Blackwell Max-Q|$8,449.00|
|**CPU**|AMD Ryzen Threadripper PRO 7975WX 32-core 5.3GHz Computer Processor|$3,400.00|
|**Motherboard**|Pro WS WRX90E-SAGE SE|$1,299.00|
|**RAM**|OWC DDR5 4×32GB|$700.00|
|**Storage**|WD\_BLACK 2TB SN8100 NVMe SSD Internal Solid State Drive - Gen 5 PCIe 5.0x4, M.2 2280|$230.00|
|**PSU**|Thermaltake Toughpower GF3|$300.00|
|**CPU Cooler**|ARCTIC Liquid Freezer III Pro 420 A-RGB – AIO CPU Cooler, 3 × 140 mm Water Cooling, 38 mm Radiator, PWM Pump, VRM Fan, for AMD/Intel sockets|$115.00|
|**Total**||**$14,493.00**|
Any advice on the component choices or obvious oversights would be super appreciated. Thanks in advance! | 2025-09-16T04:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ni87hl/feedback_on_trimmeddown_ai_workstation_build/ | cuuuuuooooongg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni87hl | false | null | t3_1ni87hl | /r/LocalLLaMA/comments/1ni87hl/feedback_on_trimmeddown_ai_workstation_build/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=108&crop=smart&auto=webp&s=f420be0cd78077b5c4b2c8a2f820fd5a1d1bab60', 'width': 108}, {'height': 112, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=216&crop=smart&auto=webp&s=a756d645e7883ce7ab2b366a82bb1c8f0ba82061', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=320&crop=smart&auto=webp&s=fa1d884a4392718894218ad9a68b133a4cc5113a', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=640&crop=smart&auto=webp&s=e3557ad7133bdf6ef9fd0e9b21cef57b635648a7', 'width': 640}, {'height': 501, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=960&crop=smart&auto=webp&s=ce5a0f413c0eb67b12823fb5f39ede65bd456b12', 'width': 960}, {'height': 564, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?width=1080&crop=smart&auto=webp&s=62bf9f1a2e4d9e637ce0469b345a042796c3172d', 'width': 1080}], 'source': {'height': 627, 'url': 'https://external-preview.redd.it/tSHXw7m6EVbopsDED7pQLJ2iVRWGfeUjT3AP7y-ub4o.png?auto=webp&s=38bbad37e991b6ac71252d62527a82e6e535fbda', 'width': 1200}, 'variants': {}}]} |
Voice Assistant Running on a Raspberry Pi | 22 | Hey folks, I just published a write-up on a project I’ve been working on: pi-assistant — a local, open-source voice assistant that runs fully offline on a Raspberry Pi 5.
Blog post: https://alexfi.dev/blog/raspberry-pi-assistant
Code: https://github.com/alexander-fischer/pi-assistant
What it is
pi-assistant is a modular, tool-calling voice assistant that:
* Listens for a wake word (e.g., “Hey Jarvis”)
* Transcribes your speech
* Uses small LLMs to interpret commands and call tools (weather, Wikipedia, smart home)
* Speaks the answer back to you
—all without sending data to the cloud.
Tech stack
* Wake word detection: openWakeWord
* ASR: nemo-parakeet-tdt-0.6b-v2 / nvidia/canary-180m-flash
* Function calling: Arch-Function 1.5B
* Answer generation: Gemma3 1B
* TTS: Piper
* Hardware: Raspberry Pi 5 (16 GB), Jabra Speak 410
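To make the flow concrete, here is a minimal sketch of the wake → transcribe → answer → speak loop. It is not the pi-assistant source: the openWakeWord usage approximates its real API, and `transcribe`, `answer`, and `speak` are placeholder stubs standing in for the ASR, function-calling/answer, and TTS components listed above.

    # Minimal pipeline sketch (assumes a 16 kHz mono mic; stubs mark where the
    # real ASR / LLM / TTS components named above would plug in).
    import numpy as np
    import sounddevice as sd
    from openwakeword.model import Model

    FRAME = 1280  # 80 ms at 16 kHz, the chunk size openWakeWord scores at a time
    wake = Model(wakeword_models=["hey_jarvis"])  # pretrained model; may need
    # openwakeword.utils.download_models() on first run

    def transcribe(audio):  # placeholder: swap in Parakeet/Canary ASR
        return "what's the weather"

    def answer(text):       # placeholder: swap in Arch-Function + Gemma3
        return f"You said: {text}"

    def speak(text):        # placeholder: swap in Piper TTS playback
        print(text)

    with sd.InputStream(samplerate=16000, channels=1, dtype="int16") as mic:
        while True:
            frame, _ = mic.read(FRAME)
            scores = wake.predict(np.squeeze(frame))  # dict of per-model scores
            if scores.get("hey_jarvis", 0) > 0.5:     # detection threshold
                utterance, _ = mic.read(16000 * 5)    # crude: grab 5 s of speech
                speak(answer(transcribe(utterance)))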
You can easily change the language models for a bigger hardware setup. | 2025-09-16T04:34:43 | https://v.redd.it/lh67dy5eggpf1 | localslm | /r/LocalLLaMA/comments/1ni815f/voice_assistant_running_on_a_raspyberry_pi/ | 1970-01-01T00:00:00 | 0 | {} | 1ni815f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/lh67dy5eggpf1/DASHPlaylist.mpd?a=1760718891%2CNTk3ZmIwNzhjNmI3ODdhOTZkNzU2MWMxNGFjZTkwODNlMmI3NGNiNDdjZTUzYzYwODc3YmFmOGU4NGQ0NjI0Mg%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/lh67dy5eggpf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/lh67dy5eggpf1/HLSPlaylist.m3u8?a=1760718891%2COWQyY2JlMDJkNzE4MmU2NTI5NjUzYmQ4ODJjYmU3NmFiMmE5Y2MzN2YzYWUwOTAzNGE3M2I0M2Y5ZjdjYmYwZA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/lh67dy5eggpf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1ni815f | /r/LocalLLaMA/comments/1ni815f/voice_assistant_running_on_a_raspyberry_pi/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF', 'resolutions': [{'height': 191, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a672126e102443a1c4b2cae8494026b54c160b9', 'width': 108}, {'height': 383, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?width=216&crop=smart&format=pjpg&auto=webp&s=0c5eb7af6c98a756d778aae4b3ec8033b0e12e85', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?width=320&crop=smart&format=pjpg&auto=webp&s=c5d4c59513221bc9516fb2ee293fffbff17faf37', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?width=640&crop=smart&format=pjpg&auto=webp&s=82ffc51ed2db0367e551ab2177e74f12c4c096ce', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?width=960&crop=smart&format=pjpg&auto=webp&s=aac8d5d7e1ebba2baaa71c902c2c2474103c622c', 'width': 960}], 'source': {'height': 1820, 'url': 'https://external-preview.redd.it/Y3RvYW1ibmRnZ3BmMWGGnl44cMWlAhwBPOVxHgzmQ7jmQEBvcqJECv2kUwPF.png?format=pjpg&auto=webp&s=402a782e11d8c5d6df75eea75efd801155db0d51', 'width': 1024}, 'variants': {}}]} | |
Feedback regarding ASUS - ROG Flow Z13 | 2 | Hi,
I have built a PC with the following specs:
3 x RTX 3090
2 x RTX 3060
96GB DDR4 RAM, limited to 2333 due to processor limitations
2 x Intel Xeon Silver 4214 (12 cores / 24 threads each)
ASUS WS C621E SAGE (Intel C621)
Despite the huge VRAM I have, I get very low t/s when I load 100+ GB models that require both CPU and GPU; the max I can get is 3 t/s, and it keeps dropping as the context builds up.
Now I have seen a new laptop from ASUS - the ROG Flow Z13 with the AMD Ryzen AI Max+ 395 - with 128GB of unified RAM, of which 96GB can be dedicated to VRAM.
I am seriously thinking of dismantling my PC, selling it in pieces, and buying this laptop.
What performance should I expect with this laptop? Will I get the same performance as, or even better than, my current configuration? | 2025-09-16T04:34:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ni812p/feedback_regarding_asus_rog_flow_z13/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni812p | false | null | t3_1ni812p | /r/LocalLLaMA/comments/1ni812p/feedback_regarding_asus_rog_flow_z13/ | false | false | self | 2 | null |
Anyone use a foundation LM to create tests to evaluate local models and quants? | 1 | [removed] | 2025-09-16T04:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ni7ygv/anyone_use_a_foundation_lm_to_create_tests_to/ | Intotheblue1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni7ygv | false | null | t3_1ni7ygv | /r/LocalLLaMA/comments/1ni7ygv/anyone_use_a_foundation_lm_to_create_tests_to/ | false | false | self | 1 | null |
System level architecture designer | 0 | Hey everyone. I want to know if there is any agent or tool that works on system-level architecture. I don't want a copycat website/agent tool. I want an agent/tool that helps me with coding and building system-level architecture, so I can make my app prototype. | 2025-09-16T03:08:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ni6dsh/system_level_architecture_disigner/ | No_Structure7849 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni6dsh | false | null | t3_1ni6dsh | /r/LocalLLaMA/comments/1ni6dsh/system_level_architecture_disigner/ | false | false | self | 0 | null |
llama.cpp not getting my CPU RAM | 0 | So, I have a weird and curious hardware setup: 16GB of VRAM (NVIDIA RTX A4000) and a whopping 173 GB of CPU RAM.
So far I've been using Open WebUI and Ollama, and it's... OK? But Ollama only uses VRAM, and I'm RAM-rich, so I've heard llama.cpp (in fact, ik\_llama.cpp) was the path for me.
I did get it to work fine, and I made sure to use the same model as Ollama, to test.
And... it's in fact *slower*. It only uses 3GB of the (drumroll) 173GB I have available. It's simply *slower*, and my Ollama *is* already slow.
Here are the flags I used...
/srv/llama/build/bin/llama-server \
--model /srv/models/Qwen3-14B-Q4_K_M.gguf \
--alias qwen3-14b-q4km \
--ctx-size 8192 \
--n-gpu-layers 16 \
--threads 16 \
--host 0.0.0.0 \
--port 8080
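For what it's worth, one variable to isolate here is mmap: llama.cpp memory-maps the weights by default, so RAM monitors can under-report what's actually in use. A variant that forces the weights resident in RAM (both `--no-mmap` and `--mlock` exist in stock llama.cpp; whether they actually help speed here is an open question):

    /srv/llama/build/bin/llama-server \
      --model /srv/models/Qwen3-14B-Q4_K_M.gguf \
      --ctx-size 8192 \
      --n-gpu-layers 16 \
      --threads 16 \
      --no-mmap \
      --mlock \
      --host 0.0.0.0 \
      --port 8080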
I was told (by ChatGPT, ha) to use a `--main-mem` flag, but ik\_llama.cpp doesn't accept it when I try to run. Is it (literally) a false flag?
How do I tune llama.cpp for my environment? I have 100GB of RAM just sitting there doing nothing. It's almost a sin; can someone help me out?
Is it a matter of the right flags? Is it because Ollama was still running on the side? Can I even use my RAM-rich environment for faster responses?
I feel I'm almost there but I can't reach it. What did I do wrong? | 2025-09-16T03:00:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ni67vw/llamacpp_not_getting_my_cpu_ram/ | nonlinear_nyc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni67vw | false | null | t3_1ni67vw | /r/LocalLLaMA/comments/1ni67vw/llamacpp_not_getting_my_cpu_ram/ | false | false | self | 0 | null |
Open Line Protocol (MIT): a minimal wire for AI agents (graphs + telemetry, not paragraphs) Useful if you’re wiring tool-using / multi-agent runs and want auditable plans. | 3 | TL;DR: Open Line lets agents send small graphs + telemetry instead of paragraphs. Frozen wire v0.1, guardrails, and a 5-number “shape” digest (+Δ_hol) so merges are auditable.
Highlights
• Typed schema (frozen wire v0.1)
• Digest: b0, cycle_plus, x_frontier, s_over_c, depth + Δ_hol
• Guards: blocks self-reinforcing loops + silent objection deletion
• Receipts: JSON evidence (schema-checked) → shows on a public hub
Hub (latest receipts): https://terryncew.github.io/openline-hub/
Ask: Which adapter would you want first (WebSocket, store, LangGraph)? | 2025-09-16T02:54:53 | https://github.com/terryncew/openline-core | Both-Ad-5476 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ni63nf | false | null | t3_1ni63nf | /r/LocalLLaMA/comments/1ni63nf/open_line_protocol_mit_a_minimal_wire_for_ai/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=108&crop=smart&auto=webp&s=3c117d7feccb217bccec6e55da805eff39e6e9d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=216&crop=smart&auto=webp&s=91699e59fbb66da1f3acd4207428e2cfdb744d47', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=320&crop=smart&auto=webp&s=f1e12faec1d4b0557f6c08ff78b9b746caea6065', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=640&crop=smart&auto=webp&s=f3e026acd62c629d4995be921015fb7ae72806fa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=960&crop=smart&auto=webp&s=7f2c5de57449d279f4f0ca66990c7a5a5255876b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?width=1080&crop=smart&auto=webp&s=575351177d55480815094d6242845d1b6b40f556', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WZLRJD8v_fo00CWkxoeoiRlNKlNL-jOXgxTIilkNrFA.png?auto=webp&s=7b0ff27dc69eab5b79130a9f71373c155b0cf016', 'width': 1200}, 'variants': {}}]} |
AMD Max+ 395 with a 7900xtx as a little helper. | 48 | I finally got around to hooking up my 7900xtx to my GMK X2. A while back some people were interested in numbers for this so here are some numbers for OSS 120B. The big win is that adding the 7900xtx didn't make it slower and in fact made everything a little faster. My experience going multi-gpu is that there is a speed penalty. In this case adding the 7900xtx is effectively like just having another 24GB added to the 128GB.
I'll start with a baseline run in Vulkan on just the Max+ 395.
ggml_vulkan: 0 = AMD Radeon Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 0 | pp512 | 473.93 ± 3.64 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 0 | tg128 | 51.49 ± 0.03 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 0 | pp512 @ d20000 | 261.49 ± 0.58 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 0 | tg128 @ d20000 | 41.03 ± 0.01 |
Here's a run in Vulkan split between the Max+ and the 7900xtx.
ggml_vulkan: Found 2 Vulkan devices:
ggml_vulkan: 0 = Radeon RX 7900 XTX (RADV NAVI31) (radv) | uma: 0 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
ggml_vulkan: 1 = AMD Radeon Graphics (RADV GFX1151) (radv) | uma: 1 | fp16: 1 | bf16: 0 | warp size: 64 | shared memory: 65536 | int dot: 1 | matrix cores: KHR_coopmat
| model | size | params | backend | ngl | fa | ts | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | -: | ------------ | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 36.00/64.00 | 0 | pp512 | 615.07 ± 3.11 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 36.00/64.00 | 0 | tg128 | 53.08 ± 0.31 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 36.00/64.00 | 0 | pp512 @ d20000 | 343.58 ± 5.11 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | Vulkan,RPC | 9999 | 1 | 36.00/64.00 | 0 | tg128 @ d20000 | 40.53 ± 0.13 |
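For anyone wanting to reproduce this, the tables have the shape of llama-bench output; an invocation along these lines should regenerate the split run (the model filename is a placeholder, and the flags simply mirror the table columns):

    llama-bench -m gpt-oss-120b-mxfp4.gguf -ngl 9999 -fa 1 -mmp 0 \
      -ts 36/64 -p 512 -n 128 -d 0,20000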
And lastly, here's a split ROCm run for comparison. Vulkan is still king. Particularly as the context grows.
ggml_cuda_init: found 2 ROCm devices:
Device 0: Radeon RX 7900 XTX, gfx1100 (0x1100), VMM: no, Wave Size: 32
Device 1: AMD Radeon Graphics, gfx1151 (0x1151), VMM: no, Wave Size: 32
| model | size | params | backend | ngl | main_gpu | fa | ts | mmap | test | t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ---------: | -: | ------------ | ---: | --------------: | -------------------: |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm,RPC | 9999 | 1 | 1 | 36.00/64.00 | 0 | pp512 | 566.14 ± 4.61 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm,RPC | 9999 | 1 | 1 | 36.00/64.00 | 0 | tg128 | 46.88 ± 0.15 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm,RPC | 9999 | 1 | 1 | 36.00/64.00 | 0 | pp512 @ d20000 | 397.01 ± 0.99 |
| gpt-oss 120B MXFP4 MoE | 59.02 GiB | 116.83 B | ROCm,RPC | 9999 | 1 | 1 | 36.00/64.00 | 0 | tg128 @ d20000 | 18.09 ± 0.06 | | 2025-09-16T02:41:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ni5tq3/amd_max_395_with_a_7900xtx_as_a_little_helper/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni5tq3 | false | null | t3_1ni5tq3 | /r/LocalLLaMA/comments/1ni5tq3/amd_max_395_with_a_7900xtx_as_a_little_helper/ | false | false | self | 48 | null |
Fully local data analysis assistant (plus new Model) | 153 | Hi community! Today I’m releasing an open-source, fully local data analysis assistant along with a lightweight LLM trained for it, called [**quelmap**](https://quelmap.com) and **Lightning-4b**.
LLMs are amazing, but handing over all your data to a major LLM provider isn’t how it should be. Nowadays, data analysis has relied on huge context windows and very large models. Instead, we tried to see if we could cover most common analysis tasks with an efficient XML-based output format and GRPO training.
It even works smoothly on my **M4 MacBook Air (16GB)**.
**Basic Features**
📊 Data visualization
🚀 Table joins
📈 Run statistical tests
📂 Unlimited rows, analyze 30+ tables at once
🐍 Built-in Python sandbox
🦙 Ollama or LM Studio API integration
Lightning-4b is trained specifically for quelmap, and it’s been accurate and stable in generating structured outputs and Python code—more consistent than gpt-oss-120b or even Qwen3-235B in simple analysis tasks on quelmap. You can check the training details and performance here:
👉 [https://www.quelmap.com/lightning-4b/](https://www.quelmap.com/lightning-4b/)
It’s not meant for writing complex research reports or high-level business advice like Gemini-DeepResearch. But I hope it can be a helpful tool for privacy-conscious analysts and beginners who just want to explore or analyze their data safely.
All details, installation instructions, and source code are here:
🔗 Github: [https://github.com/quelmap-inc/quelmap](https://github.com/quelmap-inc/quelmap)
🔗 HuggingFace: [https://huggingface.co/quelmap/Lightning-4b](https://huggingface.co/quelmap/Lightning-4b)
If people find this useful, I’d love to keep working on this project (agent mode, new models and more). Let me know what you think—I’d love to hear it. | 2025-09-16T02:16:04 | mshintaro777 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ni5bao | false | null | t3_1ni5bao | /r/LocalLLaMA/comments/1ni5bao/fully_local_data_analysis_assistant_plus_new_model/ | false | false | default | 153 | {'enabled': True, 'images': [{'id': 'ifula3tiqfpf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=108&crop=smart&format=png8&s=be7662295c3dffcb39b2ac983bed02103ebf5882', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=216&crop=smart&format=png8&s=2431a3ca68122e22b7cc07a1a6008479c459d2a1', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=320&crop=smart&format=png8&s=c5b4d51e6cde5ed21e2e9215c8d254835821ab3c', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=640&crop=smart&format=png8&s=fcb835afccdc343b1389fe9fdb90a3b9872b6df8', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=960&crop=smart&format=png8&s=6e99d8ebde3dd598b4aea67cc5cb5cc35d0dae6f', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=1080&crop=smart&format=png8&s=e09526b6d056e51c0df93dbc15c84b51cdf11ea9', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?format=png8&s=e0ea9a97a9ee5c8431cacde25683bead680b2b58', 'width': 3420}, 'variants': {'gif': {'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=108&crop=smart&s=8a9c6594d1601227e40da191c1ccd1725cbd4035', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=216&crop=smart&s=2a038bf5976ecb8f28c3580d01dba4a940319c54', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=320&crop=smart&s=4379bbf2aa5c55ee08ce5bfb832b127722d23956', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=640&crop=smart&s=e334cd9bdf8f0d006bb34767d9f02bebb39206c1', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=960&crop=smart&s=ea3db8f318195b8d9b623fb319a912791f29e9d5', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=1080&crop=smart&s=4e0ae9efdbc46214e0a4415af2b38b5ea9a8f9c4', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?s=63bf3743f975cba14a7c3598627418fabb254a88', 'width': 3420}}, 'mp4': {'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=108&format=mp4&s=440a4d05c4cc75a0104e2bfc28901326651cbc8a', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=216&format=mp4&s=8122668de582a7527cde32cd0384cdf6a389ee94', 'width': 216}, {'height': 177, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=320&format=mp4&s=ef6bd3061e70fec6b440027b03cee5ffaf6431cb', 'width': 320}, {'height': 354, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=640&format=mp4&s=60235d75dd8ec20bc14ddbd8de670cae08080880', 'width': 640}, {'height': 531, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=960&format=mp4&s=8e3fe28b3d601e6bbd4807df652f25be6658e0bb', 'width': 960}, {'height': 598, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?width=1080&format=mp4&s=97bb7eb52dbcb76856e39abf1d14b7a98bb28140', 'width': 1080}], 'source': {'height': 1894, 'url': 'https://preview.redd.it/ifula3tiqfpf1.gif?format=mp4&s=3f86da7ee819529fd5eea474823b314ff59b47e2', 'width': 3420}}}}]} |
Still just another SOTA? Meta REFRAG: 30× Faster RAG Decoding | 5 | Meta just introduced **REFRAG**, a new way to handle retrieval-augmented generation. Instead of pushing all retrieved tokens into the decoder, it chops passages into chunks, compresses most into embeddings, and only expands the ones judged important with a lightweight RL policy.
The results sound huge: \~30× faster time-to-first-token, \~16× longer usable contexts, and accuracy basically intact. On benchmarks, it clearly beats older setups like prepend-RAG or CEPE.
But here’s the catch: every few months we see “the new SOTA” in RAG. Each looks great in its own setting, yet in practice different variants trade speed, cost, and accuracy in messy ways. What works for a benchmark may flop on a company’s private data. | 2025-09-16T02:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ni591i/still_just_another_sota_meta_refrag_30_faster_rag/ | Cheryl_Apple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni591i | false | null | t3_1ni591i | /r/LocalLLaMA/comments/1ni591i/still_just_another_sota_meta_refrag_30_faster_rag/ | false | false | self | 5 | null |
Favorite web ui frameworks? | 1 | What are your favorite frameworks for UI/UX connecting to your local LLM? | 2025-09-16T02:07:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ni555g/favorite_web_ui_frameworks/ | ctrl-brk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni555g | false | null | t3_1ni555g | /r/LocalLLaMA/comments/1ni555g/favorite_web_ui_frameworks/ | false | false | self | 1 | null |
SOTA video embedding model? | 1 | seems like there is only one viable option (marengo from twelve labs) in this space? anyone know of any other video embedding models available? i want full video embedding ideally (ie. not doing audio embed + image embed) | 2025-09-16T00:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ni3gxa/sota_video_embedding_model/ | Turbulent-Sky5396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni3gxa | false | null | t3_1ni3gxa | /r/LocalLLaMA/comments/1ni3gxa/sota_video_embedding_model/ | false | false | self | 1 | null |
Single Install for GGUF Across CPU/GPU/NPU - Goodbye Multiple Builds | 11 | **Problem**
AI developers need flexibility and simplicity when running and developing with local models, yet popular on-device runtimes such as llama.cpp and Ollama still often fall short:
* Separate installers for CPU, GPU, and NPU
* Conflicting APIs and function signatures
* NPU-optimized formats are limited
For anyone building on-device LLM apps, these hurdles slow development and fragment the stack.
**To solve this:**
I upgraded Nexa SDK so that it supports:
* One core API for LLM/VLM/embedding/ASR
* Backend plugins for CPU, GPU, and NPU that load only when needed
* Automatic registry to pick the best accelerator at runtime
https://reddit.com/link/1ni2vqw/video/uucn4t7p6fpf1/player
On an HP OmniBook with a Snapdragon X Elite, I ran the same LLaMA-3.2-3B GGUF model and achieved:
* On CPU: 17 tok/s
* On GPU: 10 tok/s
* On NPU (Turbo engine): 29 tok/s
I didn’t need to switch backends or make any extra code changes; everything worked with the same SDK.
**You Can Achieve**
* Ship a single build that scales from laptops to edge devices
* Mix GGUF and vendor-optimized formats without rewriting code
* Cut cold-start times to milliseconds while keeping the package size small
Download one installer, choose your model, and deploy across CPU, GPU, and NPU—without changing a single line of code, so AI developers can focus on the actual products instead of wrestling with hardware differences.
Try it today and let me know any feedback or thoughts. I look forward to improving this project based on requests.
GitHub Repo**:** [**github.com/NexaAI/nexa-sdk**](https://github.com/NexaAI/nexa-sdk) | 2025-09-16T00:24:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ni2vqw/single_install_for_gguf_across_cpugpunpu_goodbye/ | Different-Effect-724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni2vqw | false | null | t3_1ni2vqw | /r/LocalLLaMA/comments/1ni2vqw/single_install_for_gguf_across_cpugpunpu_goodbye/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=108&crop=smart&auto=webp&s=29914274bf0b33168fde1d5168a3790b433cb74d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=216&crop=smart&auto=webp&s=f011c3001db1f5db0ed800ff4d1b35389485acca', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=320&crop=smart&auto=webp&s=32ff1744bc9a94a1e86d07690319217d71008f89', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=640&crop=smart&auto=webp&s=856312cd81fead7db026713fa8f52842ec93752d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=960&crop=smart&auto=webp&s=8966db87232186a6495617ba08af6d7b3a727b56', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?width=1080&crop=smart&auto=webp&s=508ab2f0ceb1d1a06a3a7d0f07ede03036e82ca7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CVSxdiqWqp-ZrX8yVw3cfjqBie1BbHNW--0BWHDVzxg.png?auto=webp&s=78a2713cfad2816d7d370738b055e09c8b2b4e58', 'width': 1200}, 'variants': {}}]} |
Anyone tried Apple's Foundation local model? It's great so far! | 1 | Knowledgeable, mild hallucination, precise, reasons quite well, super fast. I wonder why they haven't implemented it into Siri yet. What is its size? Works great on my iPhone 15 Pro Max. | 2025-09-16T00:12:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ni2lyw/anyone_tried_apples_foundational_local_model_its/ | Vast-Piano2940 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni2lyw | false | null | t3_1ni2lyw | /r/LocalLLaMA/comments/1ni2lyw/anyone_tried_apples_foundational_local_model_its/ | false | false | self | 1 | null |
Qwen3-Next 80b MLX (Mac) runs on latest LM Studio | 234 | Was excited to see this work. About 35 tps on my M1 Mac Studio 64 gb. Takes about 42 gb. | 2025-09-15T23:59:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ni2chb/qwen3next_80b_mlx_mac_runs_on_latest_lm_studio/ | jarec707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni2chb | false | null | t3_1ni2chb | /r/LocalLLaMA/comments/1ni2chb/qwen3next_80b_mlx_mac_runs_on_latest_lm_studio/ | false | false | self | 234 | {'enabled': False, 'images': [{'id': 'x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=108&crop=smart&auto=webp&s=9e51f3ef376f215051ec504325b69dcd16a1fa48', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=216&crop=smart&auto=webp&s=9ecc47efcfe675845aaa52f05b6c824f4e3f7d4d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=320&crop=smart&auto=webp&s=e7686920366eccdd0db0ce6d80f67fb7905243d9', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=640&crop=smart&auto=webp&s=f3d6860cb003c16103b44627342843b0b3d9201a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=960&crop=smart&auto=webp&s=cd091bdadd84f929af7c4b46136d3623f9bfa4fb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?width=1080&crop=smart&auto=webp&s=2e88c2d94ccc2622c4addb5f97a8706a74e9c4a2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x8jLGnkIVMCU4dcRemE9Uox0c_-_iJmqQBMcnxbcFL4.png?auto=webp&s=2f1dbb0aa999b6ecf09d0ee7a94f3be6cbae4658', 'width': 1200}, 'variants': {}}]} |
Is there a local LLM that can intelligently analyze speech from microphone in terms of tone, pitch, confidence, etc? | 3 | The use-case is for me to speak into my computer microphone and record myself as I pretend to cold call the owner of a fake company as I give them my 15 second elevator pitch for the small freelance business I own (nothing to do with AI).
I'm hoping that AI can listen to my recording and analyze my tone, pitch, cadence, confidence, and provide intelligent feedback. I couldn't cold call my way out of a paper bag and the idea of turning to an AI to coach me is some turbo-autismo idea that I came up with. On paper, it sounds like a great idea.
I realize if nothing exists, I'm probably giving one of you a multi-million dollar business idea. You have my blessing to take it and run with it, as I have bigger fish to fry in the business world. Just pinky-promise when you're making millions you'll reach out to me with a nice little gift *(giving me a brand new BMW M5 would bring massive volumes of karma your way for the next 10 years).* | 2025-09-15T23:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ni220p/is_there_a_local_llm_that_can_intelligently/ | OsakaSeafoodConcrn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni220p | false | null | t3_1ni220p | /r/LocalLLaMA/comments/1ni220p/is_there_a_local_llm_that_can_intelligently/ | false | false | self | 3 | null |
Anyone else have small models just "forget" MCP tools exist? | 27 | Trying to stitch together a lightweight "local research assistant" setup with MCP, but running into weird behavior:
Stack:
* [Bright Data MCP](https://github.com/brightdata/brightdata-mcp)
* [Cherry Studio](https://github.com/CherryHQ/cherry-studio) built-in knowledge graph MCP
* Ollama connected w/ Qwen3-4B-Instruct-2507 as the model
Most of the time, Qwen doesn’t even seem to know that the MCP tools are there. Paraphrasing the problem here:
Me: "Fetch this URL, then summarize it in 3 bullets, and finally, store it in the knowledge graph with observations."
Qwen: "Sorry, I don't have any tools that can browse the internet to fetch the contents of that page for you."
…but maybe 1 out of 3 tries, it does call the Bright Data MCP and returns clean markdown???
Same with Cherry’s knowledge graph: sometimes it builds links between entities, sometimes the model acts like the tool was never registered.
I've tried explicitly reminding the model, "you have these tools available," but it doesn't stick.
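To be concrete about "explicitly reminding": a minimal sketch of pinning the tool schema into every request via Ollama's OpenAI-compatible endpoint, instead of trusting the client to surface MCP tools. The `fetch_url` schema below is a made-up stand-in, not Bright Data MCP's real definition, and the model tag is whatever your Ollama pull used.

    # Hedged sketch: force the tool schema into every request so a small
    # model can't "forget" it. Assumes a local Ollama instance on :11434.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    fetch_tool = {
        "type": "function",
        "function": {
            "name": "fetch_url",  # stand-in, not Bright Data MCP's schema
            "description": "Fetch a web page and return its contents as markdown.",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    }

    resp = client.chat.completions.create(
        model="qwen3:4b",  # assumption: your actual Ollama model tag
        messages=[
            {"role": "system", "content": "You MUST use the provided tools for any web access."},
            {"role": "user", "content": "Fetch https://example.com and summarize it in 3 bullets."},
        ],
        tools=[fetch_tool],
        tool_choice="auto",  # or force the function, to test if the model *can* call it
    )
    print(resp.choices[0].message.tool_calls)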
Have I messed up the config somewhere? Has anyone else run into this "tool amnesia" issue with Cherry studio or MCP servers? | 2025-09-15T23:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ni1uw3/anyone_else_have_small_models_just_forget_mcp/ | TheLostWanderer47 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni1uw3 | false | null | t3_1ni1uw3 | /r/LocalLLaMA/comments/1ni1uw3/anyone_else_have_small_models_just_forget_mcp/ | false | false | self | 27 | {'enabled': False, 'images': [{'id': '92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=108&crop=smart&auto=webp&s=8a3519004674db0ad95a704262aaa9a2413c656f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=216&crop=smart&auto=webp&s=aba56ce7fc3693f85df7ddb70c1d87b0f233d491', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=320&crop=smart&auto=webp&s=59422c1ced67a606eeea127ff29f6ffc7d315a65', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=640&crop=smart&auto=webp&s=96b77811a14cdb1254273318fa8a357a0c3fa559', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=960&crop=smart&auto=webp&s=3cb2064a4ac5097a2faabdddbb8fca276709ac85', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?width=1080&crop=smart&auto=webp&s=ad10a8866d11424e748a0db8c632ca5663c4a0fc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/92bTZVVFieoTD3MTgy1ubWcS3_YUdqvK0XGvKhP9nKo.png?auto=webp&s=f5a6cabaaa8e9df78d22e82f92b4b56f04bc7fa3', 'width': 1200}, 'variants': {}}]} |
Ridiculousness Level Please | 0 | I apologize to all for this not relating directly to LocalLLaMA, although it is a build that I am making specifically for LLMs. It's been a rollercoaster with my Epyc build, and I just need to hear your opinions on this. I thought since most of us here use Epycs, you might have more relevant insights than other communities. Thanks. I really want my Epyc to stay cool under load; I thought it was funny at first, but now I'm considering 4 fans. | 2025-09-15T23:21:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ni1h4a/ridiculousness_level_please/ | joelasmussen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni1h4a | false | null | t3_1ni1h4a | /r/LocalLLaMA/comments/1ni1h4a/ridiculousness_level_please/ | false | false | self | 0 | null |
Is 550 MB worth it to fine-tune a model? And start a big thing like a business on it? | 0 | Sm | 2025-09-15T23:16:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ni1d7c/is_550_mb_worth_it_to_finetune_a_model_and_start/ | iSuper1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni1d7c | false | null | t3_1ni1d7c | /r/LocalLLaMA/comments/1ni1d7c/is_550_mb_worth_it_to_finetune_a_model_and_start/ | false | false | self | 0 | null |
I am willing to train Qwen3 14B to clean my data for me, since using closed-source models is expensive and open-source models are not good at all at cleaning data. | 0 | So I have already cleaned about 1,500 samples using Gemini, but it cost me so much that I am thinking of training my own cleaning model on those 1,500 samples. I don't need something complex. I want the model to normalize my data: using the adjective instead of names, writing text numbers as digits, deleting parentheses around money amounts, using commas for money (like 5,000 rather than 5000), deleting unrelated numbers, and so on. So what do you think? | 2025-09-15T23:13:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ni1ai0/i_am_willing_to_train_qwen3_14b_to_clean_my_data/ | iSuper1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ni1ai0 | false | null | t3_1ni1ai0 | /r/LocalLLaMA/comments/1ni1ai0/i_am_willing_to_train_qwen3_14b_to_clean_my_data/ | false | false | self | 0 | null |
A tutorial iOS app about LLM’s on the go | 0 | Hi all, I saw there are lots of AI wrapper apps out there, but few that had tutorials about LLM training and specs.
I went ahead and built one called A.I. DelvePad — a free Opensource iOS app designed for anyone who wants to build a basic foundation in generative A.I.
It has :
Bite-sized video tutorials you can watch on the go
A glossary of key AI terms
A quick overview of how LLMs are trained
A tutorial sharing function so you can pass what you learn to friends
All tutorials are free.
Looking to get more feedback; I'd love to hear yours. If you’ve been curious about AI models but didn’t know where to start, this might be a good starter pack for you.
App Store link : https://apps.apple.com/us/app/a-i-delvepad/id6743481267
Github : https://github.com/leapdeck/AIDelvePad Site: http://aidelvepad.com
Would love any input you’ve got. And if you’re building too — keep going! Enjoy making mobile projects. | 2025-09-15T22:56:19 | https://www.reddit.com/gallery/1ni0vx2 | Other_Passion_4710 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ni0vx2 | false | null | t3_1ni0vx2 | /r/LocalLLaMA/comments/1ni0vx2/a_tutorial_ios_app_about_llms_on_the_go/ | false | false | 0 | null | |
If YaRN is so good, why there is n- | 1 | [removed] | 2025-09-15T22:50:15 | https://github.com/HKUNLP/STRING | Electrical_Gas_77 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ni0r3x | false | null | t3_1ni0r3x | /r/LocalLLaMA/comments/1ni0r3x/if_yarn_is_so_good_why_there_is_n/ | false | false | default | 1 | null |
What are the local TTS models with voice cloning? | 10 | I've been working on a personal project of mine, and I tried using CoquiTTS and it cloned the Japanese Makima's voice from Chainsaw-man and it is really pleasant to hear, but the problem is that the Coqui Github is not up to date and has a broken tutorial, but somehow DeepSeek got the code and dependencies working for me, I have no idea how. And also its performance is very underwhelming on my CPU so I switched to a lighter model, kokoro, and it's been great but I miss Makima's voice on it.
So, are there other lightweight local TTS models with voice cloning? | 2025-09-15T22:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nhztu7/what_are_the_local_tts_models_with_voice_cloning/ | Rique_Belt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhztu7 | false | null | t3_1nhztu7 | /r/LocalLLaMA/comments/1nhztu7/what_are_the_local_tts_models_with_voice_cloning/ | false | false | self | 10 | null |
Qwen-next - no gguf yet | 77 | Does anyone know why llama.cpp has not implemented the new architecture yet?
I am not complaining, I am just wondering what the reason(s) might be. The feature request on GitHub seems quite stuck to me.
Sadly I don't have the skills myself, so I am not able to help. | 2025-09-15T21:44:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nhz4dn/qwennext_no_gguf_yet/ | mgr2019x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhz4dn | false | null | t3_1nhz4dn | /r/LocalLLaMA/comments/1nhz4dn/qwennext_no_gguf_yet/ | false | false | self | 77 | null |
Running LLMs on RAM? | 3 | Hey guys, I have been seeing some posts here and there about people that are able to run local models partly in RAM, and I had not heard of this until this subreddit. Is there a good source of information on how to do this? I'm running a 4060 Ti 16GB and I also have an RX 6700 Nitro, but I took that one out, as most of my web searches said that trying to run both at the same time would be a huge pain and I'd be better off selling it. But I do have 64 GB of RAM. Thanks! | 2025-09-15T21:39:56 | https://www.reddit.com/r/LocalLLaMA/comments/1nhz069/running_llms_on_ram/ | Electronic_Image1665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhz069 | false | null | t3_1nhz069 | /r/LocalLLaMA/comments/1nhz069/running_llms_on_ram/ | false | false | self | 3 | null |
LMStudio Multiple AMD GPU support on Windows | 2 | I couldn’t really find much information on this, as the majority of people are using NVIDIA GPUs (probably for good reason), but what about AMD GPUs on Windows 11? | 2025-09-15T21:14:17 | https://www.reddit.com/r/LocalLLaMA/comments/1nhye8d/lmstudio_multiple_amd_gpu_support_on_windows/ | Different-Fold-8360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhye8d | false | null | t3_1nhye8d | /r/LocalLLaMA/comments/1nhye8d/lmstudio_multiple_amd_gpu_support_on_windows/ | false | false | self | 2 | null |
Best open-source TTS that streams and handles very long/short text? | 1 | Looking for an open-source TTS (model + inference) that can stream audio token- or chunk-by-chunk (so it starts speaking immediately), handle very long/long inputs without producing glitches or noise, and deliver expressive/emotional prosody. Prefer solutions that run locally or on a modest GPU, include pretrained voices, and offer an easy CLI/Python API. Links to repos, demos, and any gotchas (memory, latency, vocoder choice) would be super helpful — thanks! | 2025-09-15T21:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1nhybyu/best_opensource_tts_that_streams_and_handles_very/ | Adept_Lawyer_4592 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhybyu | false | null | t3_1nhybyu | /r/LocalLLaMA/comments/1nhybyu/best_opensource_tts_that_streams_and_handles_very/ | false | false | self | 1 | null |
Polygraph for AI | 0 | Hello everyone!
I came across this fun website on another subreddit that lets you put models under a polygraph.
It only works with local models so I thought it could be interesting to post about it here!
[https://eposlabs.ai/research/polygraph](https://eposlabs.ai/research/polygraph)
Ironically, Llama (the model) likes alpacas more than itself! :o
https://preview.redd.it/k28ja4a17epf1.png?width=1430&format=png&auto=webp&s=11d5491689073e92dcf02eb3dee3168ff46c8aff
I imagine it can be used for more controversial topics since the model can't lie under the polygraph.
What do you think? | 2025-09-15T21:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nhy5nd/polygraph_for_ai/ | Big-Page6926 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhy5nd | false | null | t3_1nhy5nd | /r/LocalLLaMA/comments/1nhy5nd/polygraph_for_ai/ | false | false | 0 | null | |
For inference, I'm looking for help to navigate hardware that would support inference across 3 RTX 3090s with the ability to expand to 4 later. | 6 | I'm finding a lot of conflicting information across Reddit, and the scene/meta seems to move so fast! So I apologize if y'all get a *ton* of these kinds of questions.
With that said, I've got my FormD TD1 with a mini ITX build inside that I used to use as a gaming PC, but I have since recommissioned it as a home lab. I've had a blast coming up with applications for local LLMs to manage use-cases across the system.
I found someone selling used RTX 3090 FEs locally for C$750 a pop, so I bought all three from them after stress testing and benchmarking all of them. Everything checked out. I have since replaced the RTX 4080 inside with one of them, but obviously I want to leverage all of them.
My goal is to get the RTX 4080 back in the PC, and come up with a separate build around the GPUs, and I'm having a little bit of a tough time navigating the (niche) information online relating to running a similar setup. Particularly the motherboard & CPU combination. I'd appreciate any insight or pointers for a starting point.
No budget, but I'd like to spend mindfully rather than for the sake of spending.
Thanks so much in advance! | 2025-09-15T20:51:31 | https://www.reddit.com/r/LocalLLaMA/comments/1nhxsg5/for_inference_im_looking_for_help_to_navigate/ | fkih | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhxsg5 | false | null | t3_1nhxsg5 | /r/LocalLLaMA/comments/1nhxsg5/for_inference_im_looking_for_help_to_navigate/ | false | false | self | 6 | null |
for hybrid setups (some layers in ram, some on ssd) - how do you decide which layers to keep in memory? is there a pattern to which layers benefit most from fast access? | 5 | been experimenting with offloading and noticed some layers seem way more sensitive to access speed than others. like attention layers vs feed-forward - wondering if there's actual research on this or if it's mostly trial and error.
also curious about the autoregressive nature - since each token generation needs to access the kv cache, are you prioritizing keeping certain attention heads in fast memory? or is it more about the embedding layers that get hit constantly?
seen some mention that early layers (closer to input) might be more critical for speed since they process every token, while deeper layers might be okay on slower storage. but then again, the later layers are doing the heavy reasoning work.
anyone have concrete numbers on latency differences? like if attention layers are on ssd vs ram, how much does that actually impact tokens/sec compared to having the ffn layers there instead?
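one concrete lever for running exactly this experiment already exists in llama.cpp: the `--override-tensor` (`-ot`) flag pins tensors to a device by name regex, so you can park the ffn weights on cpu while keeping attention on gpu (or the reverse) and measure the tokens/sec delta directly. sketch below; the model path is a placeholder and the regexes assume stock llama.cpp tensor naming:

    # attention stays on GPU, all FFN weight tensors go to system RAM
    llama-server -m model.gguf -ngl 999 -ot "ffn_.*=CPU"
    # the reverse split, for comparison
    llama-server -m model.gguf -ngl 999 -ot "attn_.*=CPU"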
thinking about building a smarter layer allocation system but want to understand the actual bottlenecks first rather than just guessing based on layer size. | 2025-09-15T20:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1nhxp1a/for_hybrid_setups_some_layers_in_ram_some_on_ssd/ | EmbarrassedAsk2887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhxp1a | false | null | t3_1nhxp1a | /r/LocalLLaMA/comments/1nhxp1a/for_hybrid_setups_some_layers_in_ram_some_on_ssd/ | false | false | self | 5 | null |
Experience with OS LLMs for agentic coding? | 3 | As the title suggests, I'm wondering how open-source LLMs like Kimi K2 (0905), the new DeepSeek, or GLM 4.5 are doing for you in comparison to Claude Opus/Sonnet or Codex with ChatGPT? | 2025-09-15T20:45:23 | https://www.reddit.com/r/LocalLLaMA/comments/1nhxmlp/experience_with_os_llms_for_agentic_coding/ | Crafty-Wonder-7509 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhxmlp | false | null | t3_1nhxmlp | /r/LocalLLaMA/comments/1nhxmlp/experience_with_os_llms_for_agentic_coding/ | false | false | self | 3 | null |
Is there a newer large corpus of synthetic training data than Cosmopedia v2? | 9 | I hoard models and datasets, but am usually limited by my crappy rural home DSL. I'm currently taking advantage of a business trip to download my backlog of large models with someone else's fast internet connection (brought an empty 14TB hard drive with me to fill up and take home).
It's only been a day, and I have already downloaded my backlog of large models. Datasets are next. I've queued up a few TB which are downloading now.
I'm particularly interested in high-quality open source synthetic datasets, but already have copies of Cosmopedia and Cosmopedia v2 from https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus at home, and various smaller datasets.
Cosmopedia v2 is a year old already, and I'm wondering if anyone can suggest a few newer, high-quality synthetic corpora I should nab while I still have access to the faster internet.
I'm particularly interested in open source physics-oriented STEM datasets, persuasion skill datasets, and datasets which have undergone multiple rounds of improvement (complexifying / rarifying via Evol-Instruct, Self-Critique, reward model scoring, and similar techniques). Especially if they have associated open source software repositories, papers, and permissible licenses.
If you have suggestions, I'd love to see them! | 2025-09-15T20:44:30 | https://www.reddit.com/r/LocalLLaMA/comments/1nhxlqe/is_there_a_newer_large_corpus_of_synthetic/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhxlqe | false | null | t3_1nhxlqe | /r/LocalLLaMA/comments/1nhxlqe/is_there_a_newer_large_corpus_of_synthetic/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=108&crop=smart&auto=webp&s=56b35c9b813bc5f6f96768ea5368dbd868cfe6e6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=216&crop=smart&auto=webp&s=2adb2479acc9aaed4d470f8392cd44ca8102d7ea', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=320&crop=smart&auto=webp&s=347a4099f1fe07ee9e7e447bae1daa1e6a97583b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=640&crop=smart&auto=webp&s=987ed0f3aa99b8ae080dece8bf8f4da198f711da', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=960&crop=smart&auto=webp&s=d98f25bec6ca5fc83ffc7377b30c7c2bb02b2321', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?width=1080&crop=smart&auto=webp&s=682e68bfd2dbce7f9c29cdd2849f6eee818eb938', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fqrMixs7pq0P22MSgMtk93UWtRh_YVgqEOWlK67igJQ.png?auto=webp&s=d02ebf1369654b80c67e852b2d466e29f7304441', 'width': 1200}, 'variants': {}}]} |
What’s the most cost-effective and best AI model for coding in your experience? | 24 | Hi everyone,
I’m curious to hear from developers here: which AI model do you personally find the most cost-effective and reliable for coding tasks?
I know it can depend a lot on use cases (debugging, writing new code, learning, pair programming, etc.), but I’d love to get a sense of what actually works well for you in real projects.
* Which model do you use the most?
* Do you combine multiple models depending on the task?
* If you pay for one, do you feel the price is justified compared to free or open-source options?
I think it’d be really helpful to compare experiences across the community, so please share your thoughts! | 2025-09-15T20:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1nhx3jp/whats_the_most_costeffective_and_best_ai_model/ | Mammoth-Leopard6549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhx3jp | false | null | t3_1nhx3jp | /r/LocalLLaMA/comments/1nhx3jp/whats_the_most_costeffective_and_best_ai_model/ | false | false | self | 24 | null |
Guy explains how to use a local model on a flash drive | 0 | He’s using the uncensored Dolphin Llama 3 | 2025-09-15T20:15:18 | https://youtu.be/eiMSapoeyaU?si=RfcrwG1rxqtj6BVW | holistic-engine | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1nhwtgj | false | {'oembed': {'author_name': 'Global Science Network', 'author_url': 'https://www.youtube.com/@GlobalScienceNetwork', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eiMSapoeyaU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="How To Run Private & Uncensored LLMs Offline | Dolphin Llama 3"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/eiMSapoeyaU/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'How To Run Private & Uncensored LLMs Offline | Dolphin Llama 3', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1nhwtgj | /r/LocalLLaMA/comments/1nhwtgj/guy_explains_how_to_use_a_local_model_on_a_flash/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8.jpeg?width=108&crop=smart&auto=webp&s=f871f4e46fc6db9a94ba21d79667d574b66d3d7c', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8.jpeg?width=216&crop=smart&auto=webp&s=8537eb571d405cb2f05964fa8768998ed99d1c5a', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8.jpeg?width=320&crop=smart&auto=webp&s=eac009284714fc0ee161dab1887cffa9477e4091', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/OMHXtb3J4PYYJ-wlwTaqpb6jBXDBK9SBjXeFzHS4yb8.jpeg?auto=webp&s=7d4252763a5e99ed60a35e0e3e6c698225f28853', 'width': 480}, 'variants': {}}]} |
Looking for advice on finetuning an embedding model | 10 | 2025-09-15T19:43:08 | CaptainSnackbar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhvxo7 | false | null | t3_1nhvxo7 | /r/LocalLLaMA/comments/1nhvxo7/looking_for_advice_on_finetuning_an_embedding/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': 'imbfqj01tdpf1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=108&crop=smart&auto=webp&s=81ffee3e5e8840f37c9a9ffed81e32b219a78051', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=216&crop=smart&auto=webp&s=32ea28f6c74bd3ac29881582bee9753b916d7ed4', 'width': 216}, {'height': 227, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=320&crop=smart&auto=webp&s=b319bcbdc12042733c623be4ab9fef344cd5ca23', 'width': 320}, {'height': 454, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=640&crop=smart&auto=webp&s=41ce23670265bb6c6cab59c2ab8ac854415baae3', 'width': 640}, {'height': 681, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=960&crop=smart&auto=webp&s=d46626439bae06d6193f1ff6260c9f1af6f3152c', 'width': 960}, {'height': 767, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?width=1080&crop=smart&auto=webp&s=c6bfa9609581ccca9b8d2ddd3c037d2403b90568', 'width': 1080}], 'source': {'height': 861, 'url': 'https://preview.redd.it/imbfqj01tdpf1.png?auto=webp&s=627b7e16472fa86ce95770e2a225f46dd6b65a91', 'width': 1212}, 'variants': {}}]} |
I need help choosing between 2 GPUs for AI | 0 | Good day, everyone.
My PC configuration:
**CPU** \- i3 10100f
**GPU** \- GTX 1650
**RAM** \- 32 GB
**Motherboard** \- Asus Prime B560MK
I am considering buying a new GPU. Right now I have two options:
1. **RTX 3060 12GB**
2. **Intel Arc B580 12GB**
The main concerns I have are **stability** and **software support**.
I lean more toward buying the **B580**: its AI and game benchmarks look good.
Also, around my place the **B580** is a bit lower in price than the **3060**.
What I'll be doing: **video editing (Premiere Pro, DaVinci Resolve), AI (ComfyUI, koboldcpp), gaming (Mordhau, Paradox Games, Cyberpunk 2077, etc.), video recording (OBS)**.
Will the B580 be a **plug-and-play experience**, or should I just pick up the 3060?
Also, if you know: does the B560MK support ReBAR or not? | 2025-09-15T19:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nhvnw0/i_need_help_choosing_between_2_gpus_for_ai/ | ee_di_tor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhvnw0 | false | null | t3_1nhvnw0 | /r/LocalLLaMA/comments/1nhvnw0/i_need_help_choosing_between_2_gpus_for_ai/ | false | false | self | 0 | null |
NCSOFT/VARCO-VISION-2.0-14B · Hugging Face | 22 | Abstract
**VARCO-VISION-2.0** is a multimodal AI model capable of understanding both images and text to answer user queries. It supports multi-image inputs, enabling effective processing of complex content such as documents, tables, and charts. The model demonstrates strong comprehension in both Korean and English, with significantly improved text generation capabilities and a deeper understanding of Korean cultural context. Compared to its predecessor, performance has been notably enhanced across various benchmarks, and its usability in real-world scenarios—such as everyday Q&A and information summarization—has also improved.
| 2025-09-15T19:12:50 | https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1nhv42c | false | null | t3_1nhv42c | /r/LocalLLaMA/comments/1nhv42c/ncsoftvarcovision2014b_hugging_face/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=108&crop=smart&auto=webp&s=8bcf95fcd438b24a6d2809499bb8285bce866a5c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=216&crop=smart&auto=webp&s=32e05d552d281371002580d05f5e09ae7d974259', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=320&crop=smart&auto=webp&s=c37061cb540e02b153b7e508f6ff53c6abeddd71', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=640&crop=smart&auto=webp&s=8273519f7df58dceaae1c21bddf409931659286d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=960&crop=smart&auto=webp&s=02e8bf7068ac73687a60b2e42af4a46137819aed', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?width=1080&crop=smart&auto=webp&s=828a96943aad96858946f6e0cf95290f6d810472', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Zqwx3E1Z_EElc-Wqav8y07fmTzCY9Mbf2KZGmL-cSJg.png?auto=webp&s=2ab4c8623800e8863e0b06016c3318b35e1d6636', 'width': 1200}, 'variants': {}}]} | |
We wanted to craft a perfect phishing scam. AI bots were happy to help | 0 | 2025-09-15T19:09:16 | https://www.reuters.com/investigates/special-report/ai-chatbots-cyber/ | Old-School8916 | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1nhv0fu | false | null | t3_1nhv0fu | /r/LocalLLaMA/comments/1nhv0fu/we_wanted_to_craft_a_perfect_phishing_scam_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=108&crop=smart&auto=webp&s=bb43d424e0ee1457b2d0a35ff383d7e10e5b2a69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=216&crop=smart&auto=webp&s=8b8fdc3671a92ff313585d9034b011ae092ed87e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=320&crop=smart&auto=webp&s=9b06afbcb648ecb4acf803ef99b884e08c62a78c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=640&crop=smart&auto=webp&s=48e0f905b052b9adb4154be195a3e08b05288d28', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=960&crop=smart&auto=webp&s=4d7868be9b4774ef7644e59139066ad2905a37e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?width=1080&crop=smart&auto=webp&s=78258a89e4f67d7cd550498ebdaa491a09a61f17', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/mykKw0qDl4waY-tgZwOzSb_6z_HfpN64-koy59ekzyM.jpeg?auto=webp&s=68b8cb590ddf217f99afe7add125216403bd9c09', 'width': 1200}, 'variants': {}}]} | ||
Introducing the new GPT-5-Codex model, released today! Outperforms any local LLM, completes your tasks in a single shot 100% accuracy. More effective, more efficient, and cost-saving. Try it now! | 0 | 2025-09-15T18:56:36 | balianone | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhunqc | false | null | t3_1nhunqc | /r/LocalLLaMA/comments/1nhunqc/introducing_the_new_gpt5codex_model_released/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '74uhhj49ldpf1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/74uhhj49ldpf1.jpeg?width=108&crop=smart&auto=webp&s=ba943874322f2edbc9418f02fd9eab4a29aefaaa', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/74uhhj49ldpf1.jpeg?width=216&crop=smart&auto=webp&s=8b26aca5d54b6f7e2d0b2f794531f24d74b11bc1', 'width': 216}, {'height': 197, 'url': 'https://preview.redd.it/74uhhj49ldpf1.jpeg?width=320&crop=smart&auto=webp&s=dff7946d95cbed49358eca8fdac4cb0ce63b7e0d', 'width': 320}, {'height': 394, 'url': 'https://preview.redd.it/74uhhj49ldpf1.jpeg?width=640&crop=smart&auto=webp&s=686272dacf15d7cf96a169d7eed376e50417524d', 'width': 640}], 'source': {'height': 442, 'url': 'https://preview.redd.it/74uhhj49ldpf1.jpeg?auto=webp&s=acc9d83ef4b633dca765ada14c22a839f21d4b63', 'width': 717}, 'variants': {}}]} | ||
Looking for some LLMs to run locally on my M4 Mac mini and M3 MacBook Air | 0 | I apologize if this has been answered already; I tried searching but couldn't find what I was looking for, and that may be because I'm not sure what to search for.
I'm an author looking for a Claude-like AI that I can run on my Mac hardware: primarily the M4 Mac mini, with the option to use my M3 MacBook Air whenever I'm not home.
A Claude-like AI for the writing and research, something Midjourney-like for media creation, and whatever would be good for AI video creation.
I don't have any coding experience, but I'm an advanced computer user, so I'm not afraid to learn if needed. | 2025-09-15T18:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1nhumva/looking_for_some_llms_to_run_locally_on_my_m4_mac/ | scorpnet | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhumva | true | null | t3_1nhumva | /r/LocalLLaMA/comments/1nhumva/looking_for_some_llms_to_run_locally_on_my_m4_mac/ | false | false | self | 0 | null |
Best approach for generating test cases from a 25-page BRD - chunk for prompts or implement RAG? | 6 | Hey everyone,
I'm working with a 25-page Business Requirements Document (BRD) for a banking system (Limits & Collateral module) and need to generate comprehensive test cases from it.
The document has detailed functional requirements, integration points, validation rules, and field specifications. I'm torn between two approaches:
Option 1: Chunk + Prompt (sketched below)
Break the BRD into logical sections (country allocations, limit utilization, collateral management, etc.)
Feed each chunk to an LLM with specific prompts for test case generation
Option 2: RAG Implementation
Store the entire document in a vector database
Query specific requirements as needed
What approach would you recommend?
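For concreteness, here is a minimal sketch of what Option 1 could look like against a local OpenAI-compatible server; the endpoint, model name, file name, and heading regex are all placeholder assumptions rather than a tested pipeline:

```python
import re

from openai import OpenAI  # any OpenAI-compatible local server works here

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

brd_text = open("brd.txt", encoding="utf-8").read()  # hypothetical text export of the BRD

# naive chunking on numbered section headings like "3.2 Limit Utilization"
chunks = re.split(r"\n(?=\d+(?:\.\d+)*\s+[A-Z])", brd_text)

for chunk in chunks:
    resp = client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a QA engineer writing test cases for a banking system."},
            {"role": "user", "content": (
                "Generate test cases (ID, preconditions, steps, expected result) "
                "covering every requirement below:\n\n" + chunk
            )},
        ],
    )
    print(resp.choices[0].message.content)
```

One consideration: for a 25-page BRD, exhaustive chunking arguably gives better coverage than RAG, because you want every requirement visited, not just the ones a retrieval query happens to hit; RAG earns its keep when the corpus is too large to sweep in full.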
| 2025-09-15T18:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nhucwr/best_approach_for_generating_test_cases_from_a/ | KarimAbdelQader | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhucwr | false | null | t3_1nhucwr | /r/LocalLLaMA/comments/1nhucwr/best_approach_for_generating_test_cases_from_a/ | false | false | self | 6 | null |
Can someone explain this? | 0 | This chart is all weird, but some things are weirder than others. For example, how is Qwen 3 Coder Flash (30B A3B) worse on coding benchmarks than Qwen 3 30B A3B 2507? Like, how??? | 2025-09-15T18:44:10 | Brave-Hold-9389 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhubn9 | false | null | t3_1nhubn9 | /r/LocalLLaMA/comments/1nhubn9/can_someone_explain_this/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'f37mawt4jdpf1', 'resolutions': [{'height': 206, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=108&crop=smart&auto=webp&s=dad1f6e1aed88317e9f38c04b83539b0730a6253', 'width': 108}, {'height': 412, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=216&crop=smart&auto=webp&s=68b012b1daacf9bb59e1b5f6fc687e6f6568b233', 'width': 216}, {'height': 611, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=320&crop=smart&auto=webp&s=0ca0ed7ea0c3018c945401cc2e987b766f6b71d8', 'width': 320}, {'height': 1223, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=640&crop=smart&auto=webp&s=06f890fc83949c3496cf739f6de4c89093f5e8cb', 'width': 640}, {'height': 1834, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=960&crop=smart&auto=webp&s=08ec1a2df69ec1d8dfb1ad665e5717c5a2fe64f0', 'width': 960}, {'height': 2063, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?width=1080&crop=smart&auto=webp&s=02329e180636ed39fee24e32a94d9041fa2cb1db', 'width': 1080}], 'source': {'height': 7644, 'url': 'https://preview.redd.it/f37mawt4jdpf1.png?auto=webp&s=0520fc39128a1c4d15141c340f0093d798f51069', 'width': 4000}, 'variants': {}}]} |
Free Community LLM Exchange | 0 | **TL;DR:** Big gateways (OpenRouter, HF) are great for users, but challenging for new providers to get listed. Self-hosted gateways (LiteLLM, LLM Gateway) don’t solve discoverability. I built a **proof-of-concept** called **Inferline** that lets you list a self-hosted endpoint in **one command**. You keep running vLLM / llama.cpp / SGLang; a tiny sidecar polls a central queue and forwards requests to your runtime. Free and open-source.
# The problem
* Getting into popular marketplaces is tough for small teams and indie researchers.
* Self-hosted gateways still require users to add *your* endpoint manually, which kills discovery.
* Meanwhile, there’s a ton of cool tech out there—new models, KV-cache tricks, AI accelerators—that never reach users.
# What I built (POC)
* **Pull-based model:** instead of proxying traffic to you, your sidecar **polls** the exchange for available jobs.
* **Keep your stack:** run **vLLM**, **llama.cpp**, or **SGLang** as usual.
* **One extra container:** run an agent that registers your models, pulls requests, and streams responses back.
* Works behind NAT/CGNAT, lets you pause/scale without exposing new public ingress.
# Example
`PROVIDER_ID=my-server docker compose -f docker/docker-compose-tinyllama.yml up -d`
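For illustration, the sidecar's pull loop could look roughly like this; the endpoints, payload shapes, and local runtime URL below are hypothetical, and the real protocol lives in the repo:

```python
import time

import requests

EXCHANGE = "https://inferline.example/api"             # hypothetical exchange API
RUNTIME = "http://localhost:8000/v1/chat/completions"  # local vLLM / llama.cpp / SGLang server
PROVIDER_ID = "my-server"

while True:
    # long-poll the exchange for a queued request assigned to this provider
    job = requests.get(f"{EXCHANGE}/jobs", params={"provider": PROVIDER_ID}, timeout=30).json()
    if not job:
        time.sleep(1)  # nothing queued; natural backpressure
        continue
    # forward the OpenAI-style payload to the local runtime, then post the answer back
    answer = requests.post(RUNTIME, json=job["request"], timeout=600).json()
    requests.post(f"{EXCHANGE}/jobs/{job['id']}/result", json=answer, timeout=30)
```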
# Why is this different?
* **Instant discoverability:** listing = run the sidecar - no long partner process.
* **Provider control:** You own the infrastructure and keys, set caps, toggle availability, or route overflow elsewhere.
* **Pull-based model:** Provides natural backpressure, making load-balancing much easier.
# Monetization
For now, it is free to use, and providers can only list free endpoints. If you find it useful, I’ll hook it into metering + billing so providers can earn.
# Links
[Link to the service POC](https://inferline.cloudrift.ai/) \- free endpoints will be listed here. No credit card, no registration required.
[A deeper overview on Medium](https://medium.com/itnext/building-a-community-llm-exchange-3a37f75c6148)
[Non-medium link](https://www.cloudrift.ai/blog/building-a-community-llm-exchange)
[Github](https://github.com/cloudrift-ai/inferline)
**P.S. It is a proof-of-concept developed with heavy use of AI. It is not ready for production. I am evaluating the idea. Please give me some feedback.** | 2025-09-15T18:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/1nhu7cw/free_community_llm_exchange/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhu7cw | false | null | t3_1nhu7cw | /r/LocalLLaMA/comments/1nhu7cw/free_community_llm_exchange/ | false | false | self | 0 | null |
Some GPU (5090, 4090, 3090, A6000) idle power consumption, headless on Linux (Fedora 42), and some undervolt/overclock info. | 161 | Just a small post about the idle power consumption of these GPUs, for anyone interested.
As extra info, all the cards are both undervolted and power-limited, but that shouldn't affect idle power consumption.
The undervolts were done with LACT; the settings are as follows (a quick shell check for idle draw is sketched after the list):
* 3090s: 1875Mhz max core clock, +150Mhz core clock offset, +1700Mhz VRAM offset.
* A6000: 1740Mhz max core clock, +150Mhz core clock offset, +2000 Mhz VRAM offset.
* 4090 (1): 2850Mhz max core clock, +150Mhz core clock offset, +2700Mhz VRAM.
* 4090 (2): 2805Mhz max core clock, +180Mhz core clock offset, +1700Mhz VRAM offset.
* 5090s: 3010Mhz max core clock, +1000Mhz core clock offset, +4400Mhz VRAM offset.
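As mentioned above, here is a quick way to sanity-check idle draw (via Python's stdlib; assumes nvidia-smi is on PATH, and the five-poll loop is arbitrary):

```python
import subprocess
import time

# at true idle, clocks should be parked and board power should sit near its floor
query = ["nvidia-smi", "--query-gpu=index,name,power.draw", "--format=csv,noheader"]
for _ in range(5):
    print(subprocess.run(query, capture_output=True, text=True).stdout.strip())
    time.sleep(1)
```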
This mostly puts the 3090s, A6000, and 4090 (2) at 0.9V; 4090 (1) is at 0.915V, and the 5090s are at 0.895V. Also, these VRAM offsets are in MT/s, so the equivalent Windows value is half of that (+1700Mhz here = +850Mhz in MSI Afterburner, +1800 = +900, +2700 = +1350, +4400 = +2200) | 2025-09-15T18:27:22 | panchovix | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhtv5f | false | null | t3_1nhtv5f | /r/LocalLLaMA/comments/1nhtv5f/some_gpu_509040903090a600_idle_power_consumption/ | false | false | | 161 | {'enabled': True, 'images': [{'id': 'uqucxebVX8LbymOnWmzxVgxkoI65FVnqAZHVlAyL4SA', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/5difgej3fdpf1.png?width=108&crop=smart&auto=webp&s=714c5e0b038d01d536dad9b891bf7fa3673c46a8', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/5difgej3fdpf1.png?width=216&crop=smart&auto=webp&s=6cb832ed2840fad7bdeacb90d30686ae5b451407', 'width': 216}, {'height': 172, 'url': 'https://preview.redd.it/5difgej3fdpf1.png?width=320&crop=smart&auto=webp&s=1b99273ed2d80465bffa02d5f7bf21bce2612ec1', 'width': 320}, {'height': 345, 'url': 'https://preview.redd.it/5difgej3fdpf1.png?width=640&crop=smart&auto=webp&s=f699faed3067a46c354771c3653e38f77a492e56', 'width': 640}], 'source': {'height': 407, 'url': 'https://preview.redd.it/5difgej3fdpf1.png?auto=webp&s=1b961b8c046792a8230950416f7cefaf94ceff7c', 'width': 753}, 'variants': {}}]} |
Looking for a safe and GDPR-compliant web search API for LLM | 2 | Context: building an internal conversational agent for my company in Germany. Very concerned about safety and GDPR.
Using Mistral OSS, and now looking for a good SERP solution to plug it into the web.
So far, I’ve only found SearXNG and Linkup as “EU-compliant,” now that Bing has been deprecated. They might be good options, but for the sake of benchmarking, am I missing something? DuckDuckGo works well, but I don’t see any official API. | 2025-09-15T17:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nhszyl/looking_for_a_safe_and_gdprcompliant_web_search/ | MaleficentGoal9787 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhszyl | false | null | t3_1nhszyl | /r/LocalLLaMA/comments/1nhszyl/looking_for_a_safe_and_gdprcompliant_web_search/ | false | false | self | 2 | null |
Lots of hate I will get for this | 0 | https://youtu.be/6sJ50Ybp44I?si=BRMNcpfpWF7mOf-1 | 2025-09-15T17:24:36 | https://www.reddit.com/r/LocalLLaMA/comments/1nhs56i/lots_of_hate_i_will_get_for_this/ | theundertakeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhs56i | false | null | t3_1nhs56i | /r/LocalLLaMA/comments/1nhs56i/lots_of_hate_i_will_get_for_this/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'yQB7W8SSORf4DlbCZ_xuRKDzrp-lwimzXqGvPt1A5Yg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/yQB7W8SSORf4DlbCZ_xuRKDzrp-lwimzXqGvPt1A5Yg.jpeg?width=108&crop=smart&auto=webp&s=f6d92b26fb35b3ca7fbf7ce012271d4c5ed881de', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/yQB7W8SSORf4DlbCZ_xuRKDzrp-lwimzXqGvPt1A5Yg.jpeg?width=216&crop=smart&auto=webp&s=58bc3119c6dc5850a4da67e613d829e35cd8792e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/yQB7W8SSORf4DlbCZ_xuRKDzrp-lwimzXqGvPt1A5Yg.jpeg?width=320&crop=smart&auto=webp&s=fc4a6af14c7a7afe430272f9463cc9ada152e838', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/yQB7W8SSORf4DlbCZ_xuRKDzrp-lwimzXqGvPt1A5Yg.jpeg?auto=webp&s=8ccd5ef524f48d8fe5205946062cd7a19aa013bc', 'width': 480}, 'variants': {}}]} |
Is the Framework 385 32GB entry model enough? | 1 | I know it's not powerful, but it's half the price of the 395 64GB. Is this enough for MoE models and STT/TTS? I'm looking for inexpensive hardware that doesn't use much power. | 2025-09-15T16:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1nhr43x/is_the_framework_385_32gb_entry_model_enough/ | n1k0v | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhr43x | false | null | t3_1nhr43x | /r/LocalLLaMA/comments/1nhr43x/is_the_framework_385_32gb_entry_model_enough/ | false | false | self | 1 | null |
Why don’t we have tiny, single-purpose LLMs that just output search-and-replace rules? | 2 | Hi there,
Why can't I find any LLM fine-tuned solely to produce search-and-replace blocks (regex or structured patterns + replacement templates)? Almost every editing workflow comes down to some flavor of "find X, replace with Y," even if the syntax varies.
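As a toy illustration of the idea, the model's entire output could be a structured rule that the host applies mechanically; this exact schema is made up:

```python
import re

# hypothetical structured output from such a model: pattern + replacement template
rule = {"pattern": r"\bcolour\b", "replacement": "color", "flags": re.IGNORECASE}

text = "Colour schemes and colourful output."
print(re.sub(rule["pattern"], rule["replacement"], text, flags=rule["flags"]))
# -> "color schemes and colourful output."
```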
Is this simply not practical with smaller models, or am I missing something? | 2025-09-15T16:40:52 | https://www.reddit.com/r/LocalLLaMA/comments/1nhqxzl/why_dont_we_have_tiny_singlepurpose_llms_that/ | Round_Mixture_7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhqxzl | false | null | t3_1nhqxzl | /r/LocalLLaMA/comments/1nhqxzl/why_dont_we_have_tiny_singlepurpose_llms_that/ | false | false | self | 2 | null |
What drives you the most insane about local AI dev? | 0 | Running local models is awesome — you get freedom, privacy, and you’re not bleeding cash on API calls to the frontier labs. But man, some of the pain points make me want to yeet my GPU out the window.
For me, it’s the eternal VRAM juggling act. You see a shiny new model, get excited, then realize it needs 24GB and you’re rocking a 12GB card like 🥲. (Or, you're on Mac.) So you try the quantized version and it’s either a word salad generator or somehow still too big. The “will it fit?” calculator basically lives rent-free in my browser.
Close second: dependency chaos. One day your setup is perfect, the next day some package sneezes and suddenly nothing loads. Poetry, conda, pip, docker—pick your poison, it’ll betray you eventually.
And then there’s the analysis paralysis of choosing the “right” model. Do you go small and fast but meh quality? Giant and slow but amazing? Or roll the dice on some hot new architecture that only runs on a fork of a fork of a half-maintained repo?
What about y'all? Is it the endless model downloads eating your SSD? The wildly inconsistent inference speeds? Having to become a CUDA whisperer just to get hello world working? Or that every new model family needs a completely different runtime?
Let’s commiserate. What’s your personal “screw this, I’m going back to the cloud” moment? | 2025-09-15T16:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1nhqq8e/what_drives_you_the_most_insane_about_local_ai_dev/ | Prior-Consequence416 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhqq8e | false | null | t3_1nhqq8e | /r/LocalLLaMA/comments/1nhqq8e/what_drives_you_the_most_insane_about_local_ai_dev/ | false | false | self | 0 | null |
Qwen2.5-VL 7B: Why is Hugging Face Inference more accurate/faster than my local run? | 29 | I’ve been experimenting with **Qwen2.5-VL 7B** for image-based data extraction (e.g. receipts).
When I run it on the [Hugging Face Inference provider](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct), the results are **highly accurate and quite fast**.
But when I run the same model locally (16 GB VRAM, Q8 quantization, `max_new_tokens=512`), the output is noticeably **less accurate** (wrong digits/letters, small hallucinations) and much **slower** (\~3 tok/s despite FlashAttention 2 enabled).
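One sanity check worth running before blaming the serving stack: rule out quantization and sampling by doing an unquantized bf16 run with greedy decoding on the same image. A minimal transformers sketch follows; the file name is a placeholder, and a 7B VLM in bf16 wants roughly 16 GB+, so `device_map="auto"` may spill layers to CPU on a 16 GB card:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

messages = [{"role": "user", "content": [
    {"type": "image"},
    {"type": "text", "text": "Extract merchant, date, and total from this receipt as JSON."},
]}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[Image.open("receipt.jpg")], return_tensors="pt").to(model.device)

out = model.generate(**inputs, max_new_tokens=256, do_sample=False)  # greedy decoding
print(processor.batch_decode(out[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)[0])
```

If bf16 matches the hosted quality, the gap is the Q8 conversion; if it is still off, look at image preprocessing (resolution caps) and sampling settings instead.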
A few questions:
* Does HF Inference wrap Qwen-VL with any **extra preprocessing/decoding constraints** (e.g., image normalization, capped `max_new_tokens`, schema prompts)?
* Could the gap be mostly due to my local choices (Q8 quantization, large token budget), or are there known **optimizations in their serving stack** (BetterCUDA kernels, tensorRT, fp16/bf16 tuning)?
* Any tips for getting closer to HF inference performance locally? | 2025-09-15T16:28:41 | https://www.reddit.com/r/LocalLLaMA/comments/1nhqm7n/qwen25vl_7b_why_is_hugging_face_inference_more/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhqm7n | false | null | t3_1nhqm7n | /r/LocalLLaMA/comments/1nhqm7n/qwen25vl_7b_why_is_hugging_face_inference_more/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=108&crop=smart&auto=webp&s=0d0bf812fba94f9f50669a2e76037d0e7886bde2', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=216&crop=smart&auto=webp&s=99a214b39375ee0ec6cdcffc1958d0a4b34e4690', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=320&crop=smart&auto=webp&s=8e945b1c768d68948eaa7f830a3b219c2df4c13c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=640&crop=smart&auto=webp&s=b846b869885ffbeadf6199126a3c0fab1ed22be2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=960&crop=smart&auto=webp&s=c94fbe6213102c60c53f38bc3207e1f1bde9733a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?width=1080&crop=smart&auto=webp&s=e2dee8bb7abc532aeb2cfeec783420f84dba72c6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/FjlIBWa3fCz4O3VIWU3O9fF8Jb7iY6HD2x0Bcm3wfGI.png?auto=webp&s=17bf72c4d47d131612ab2f5b554d85da02a85539', 'width': 1200}, 'variants': {}}]} |
ExamSprint – Free AI Study Tool with Notes, Solutions & Formula Sheets 🚀📖 | 1 | [removed] | 2025-09-15T16:27:36 | examsprinter_dev | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhql6p | false | null | t3_1nhql6p | /r/LocalLLaMA/comments/1nhql6p/examsprint_free_ai_study_tool_with_notes/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'gjt6arwrucpf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=108&crop=smart&auto=webp&s=f4858c8ace0a745a9fd055756d9bf426fb143809', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=216&crop=smart&auto=webp&s=b0a2f7e863117845ba7a2693b45a9eb3afad9a32', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=320&crop=smart&auto=webp&s=1a000fecc8ffcb2c685def3dcadcdacb4990a638', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=640&crop=smart&auto=webp&s=e6756af98e8bb0863be5e4d44893cdc8d6021b67', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=960&crop=smart&auto=webp&s=838de508d6ee1d300c838f2b96bfc86d22e1a221', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?width=1080&crop=smart&auto=webp&s=02f0f91793d55ecd660cf35cb03a57aeadc14382', 'width': 1080}], 'source': {'height': 2712, 'url': 'https://preview.redd.it/gjt6arwrucpf1.png?auto=webp&s=b4ecd8a41c3f86bf4f7c545d60f3205b88f1b450', 'width': 1220}, 'variants': {}}]} | |
NVIDIA NEMO - Lack of OS community | 4 | Is there any channel for discussing topics related to training models in the NeMo 2.0 framework? I hear many labs train their LLMs in it.
There is no proper documentation for it.
| 2025-09-15T16:08:48 | https://www.reddit.com/r/LocalLLaMA/comments/1nhq31b/nvidia_nemo_lack_of_os_comminity/ | Interesting-Fish-542 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhq31b | false | null | t3_1nhq31b | /r/LocalLLaMA/comments/1nhq31b/nvidia_nemo_lack_of_os_comminity/ | false | false | self | 4 | null |
A lightweight and tunable Python chat interface to interact with LLMs, featuring persistent memory | 47 | I developed a lightweight Python tool that allows local LLMs to maintain persistent memory, and I'm sharing it here.
Local models are great for privacy and offline use, but they typically lose all context between sessions, unlike online services, as you all know.
Previously, I built a project that captured conversations from LM Studio and stored them in a database to enrich prompts sent to models. This new version is a direct chat interface (leveraging easy-llama by u/master-meal-77, many thanks to him) that makes the memory process completely seamless and invisible to the user.
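For readers new to the pattern, here is a bare-bones sketch of the general idea: store every turn, then prepend the most recent ones to the next prompt. This is purely illustrative and not LocalMind's actual schema:

```python
import sqlite3
import time

db = sqlite3.connect("memory.db")
db.execute("CREATE TABLE IF NOT EXISTS turns (ts REAL, role TEXT, text TEXT)")

def remember(role: str, text: str) -> None:
    db.execute("INSERT INTO turns VALUES (?, ?, ?)", (time.time(), role, text))
    db.commit()

def recall(n: int = 8) -> str:
    # fetch the n most recent turns, oldest first, to prepend to the next prompt
    rows = db.execute("SELECT role, text FROM turns ORDER BY ts DESC LIMIT ?", (n,)).fetchall()
    return "\n".join(f"{role}: {text}" for role, text in reversed(rows))

remember("user", "My project is called LocalMind.")
print("Context to prepend:\n" + recall())
```

Long-term memory then typically adds an embedding index over the same table, so older but semantically relevant turns can be pulled in as well.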
# Key features:
* Fully local, no external API dependencies
* Short-term and long-term memory for fluid conversations and contextually relevant responses
* Fully customizable depth of memory and model parameters
* Workspaces to separate different projects
* Built-in visualizations to track memory data and semantic indicators
# Upcoming developments:
* Document support (PDF, Word, Excel, images) for targeted queries
* Integrated web search to supplement local memory with the most recent information
* Selective import/export of personal memory through workspaces for sharing within a team
I think this project could be of interest to some users of this sub.
The code is here: [GitHub repository](https://github.com/victorcarre6/LocalMind)
Feel free to use it as you want and to share your feedback! :) | 2025-09-15T16:03:39 | Vicouille6 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhpy35 | false | null | t3_1nhpy35 | /r/LocalLLaMA/comments/1nhpy35/a_lightweight_and_tunable_python_chat_interface/ | false | false | default | 47 | {'enabled': True, 'images': [{'id': 'olzso2n2qcpf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=108&crop=smart&auto=webp&s=724062340ac3a661ff9fc222fbcf74fc8f5ec6bc', 'width': 108}, {'height': 127, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=216&crop=smart&auto=webp&s=389553fa041cde3d0d088a0ec70a096161e0ae07', 'width': 216}, {'height': 188, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=320&crop=smart&auto=webp&s=6ee11d30453889c0fccd11dfcefe5f76806b6340', 'width': 320}, {'height': 377, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=640&crop=smart&auto=webp&s=b502586d78f6bcec10593dca6d7a69c2f9f80094', 'width': 640}, {'height': 565, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=960&crop=smart&auto=webp&s=afa84908380aa0461fadcb96a090f741f28e666f', 'width': 960}, {'height': 636, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?width=1080&crop=smart&auto=webp&s=ff2fd73a77a0ff96933f5fc5015b124f5558b4ea', 'width': 1080}], 'source': {'height': 706, 'url': 'https://preview.redd.it/olzso2n2qcpf1.png?auto=webp&s=cd386784cbd3cdd082572e568ca946a612e863e2', 'width': 1198}, 'variants': {}}]} | |
ChatGPT-competitive local model/hardware that doesn't break the bank? | 0 | Hi all. I've struggled to find any local models that are even remotely as good as \~GPT-4o etc. at <=16GB. I have a couple of machines I'm using: an M2 Max Mac w/ 32GB RAM and an i7-12700 w/ an Arc A380.
I've been considering an upgrade to a 5070 Ti 16GB box, but I'm not having good enough results with the M2 box running local models right now, so the upgrade might just be a much faster version of mediocre results.
My goals are primarily log file analysis as well as some vibe coding.
Is this just too big of an ask for a 16GB VRAM system? Going with multiple cards or really anything higher is well out of budget. I'd love to test gpt-oss:120b, but it's impossibly slow in software, and I have no current path to a >=64GB VRAM system short of an exceptionally expensive Mac... and at a $3200 bill for 120GB, the '38 TOPS' of that machine just doesn't seem like a good value.
Is there a reasonable path to get 128GB of VRAM and \~1000TOPs (5070ti or so)?
Seems like all of the models I can utilize are just too dumb. gpt-oss:20b pales in comparison to openai cloud, so much so that it's essentially useless to me. | 2025-09-15T16:03:16 | https://www.reddit.com/r/LocalLLaMA/comments/1nhpxp7/chatgpt_competative_local_modelhardware_that/ | International_Pea500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhpxp7 | false | null | t3_1nhpxp7 | /r/LocalLLaMA/comments/1nhpxp7/chatgpt_competative_local_modelhardware_that/ | false | false | self | 0 | null |
A lightweight and tunable Python chat interface to interact with LLMs, featuring persistent memory | 1 | I developed a lightweight Python tool that allows local LLMs to maintain persistent memory, and I'm sharing it here.
Local models are great for privacy and offline use, but they typically lose all context between sessions, unlike online services, as you all know.
Previously, I built a project that captured conversations from LM Studio and stored them in a database to enrich prompts sent to models. This new version is a direct chat interface (leveraging easy-llama by u/master-meal-77, many thanks to him) that makes the memory process completely seamless and invisible to the user.
**Key features:**
* Fully local, no external API dependencies
* Short-term and long-term memory for fluid conversations and contextually relevant responses
* Fully customizable depth of memory and model parameters
* Workspaces to separate different projects
* Built-in visualizations to track memory data and semantic indicators
**Upcoming developments:**
* Document support (PDF, Word, Excel, images) for targeted queries
* Integrated web search to supplement local memory with the most recent information
* Selective import/export of personal memory through workspaces for sharing within a team
I think this project could be of interest to some users of this sub.
The code is here: [https://github.com/victorcarre6/LocalMind](https://github.com/victorcarre6/LocalMind)
Feel free to use it as you want and to share your feedback! :) | 2025-09-15T16:00:12 | https://www.reddit.com/r/LocalLLaMA/comments/1nhpufp/a_lightweight_and_tunable_python_chat_interface/ | Vicouille6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhpufp | false | null | t3_1nhpufp | /r/LocalLLaMA/comments/1nhpufp/a_lightweight_and_tunable_python_chat_interface/ | false | false | self | 1 | null |
MobileLLM-R1-950M meets Apple Silicon | 3 | MobileLLM-R1-950M meets Apple Silicon
New 1B model dropped → config lied → I wrote the missing MLX runtime. (j/k ❤️ [@meta](https://x.com/Meta))
Now MobileLLM-R1-950M runs native on Apple Silicon @ 4bit.
- https://huggingface.co/robbiemu/MobileLLM-R1-950M-MLX
- blog - https://selfenrichment.hashnode.dev/mobilellm-r1-950m-meets-apple-silicon
try it locally on your Mac tonight. | 2025-09-15T15:38:08 | https://www.reddit.com/r/LocalLLaMA/comments/1nhp8uq/mobilellmr1950m_meets_apple_silicon/ | robertotomas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhp8uq | false | null | t3_1nhp8uq | /r/LocalLLaMA/comments/1nhp8uq/mobilellmr1950m_meets_apple_silicon/ | false | false | self | 3 | null |
Has anyone connected LM Studio to OneNote? | 1 | I am wondering if anyone has connected LM Studio to OneNote.
I use OneNote as my second brain and would like to include its information in LM Studio queries.
Has anyone done this, or does anyone know how?
thanks
| 2025-09-15T15:31:00 | https://www.reddit.com/r/LocalLLaMA/comments/1nhp200/has_anyone_connected_lm_studio_to_onenote/ | rocky_balboa202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhp200 | false | null | t3_1nhp200 | /r/LocalLLaMA/comments/1nhp200/has_anyone_connected_lm_studio_to_onenote/ | false | false | self | 1 | null |
PCIE Backplane questions 2025 | 4 | Anyone use a PCIe backplane riser board? What are some gotchas, and will they work with consumer motherboards (with an appropriate miniSAS adapter, maybe)?
example:
[https://www.ebay.com/itm/135189657675?\_trkparms=amclksrc%3DITM%26aid%3D1110006%26algo%3DHOMESPLICE.SIM%26ao%3D1%26asc%3D275831%2C275537%2C276104%26meid%3D7a0f07a9c059413dac120f2711fd27a7%26pid%3D101196%26rk%3D1%26rkt%3D5%26sd%3D135039633701%26itm%3D135189657675%26pmt%3D1%26noa%3D0%26pg%3D2332490%26algv%3DSimplAMLv5PairwiseWebWithBBEV2bAndUBSourceDemotionWithUltimatelyBoughtOfCoviewV1%26brand%3DGIGABYTE&\_trksid=p2332490.c101196.m2219&itmprp=cksum%3A1351896576757a0f07a9c059413dac120f2711fd27a7%7Cenc%3AAQAJAAABQJh9BGsXvPG03pKg78mUhLLErCJ%252BXOEYDkzTGJ85B4rSRXG6DGHfiL9UFpXuaOk%252FmuXW6x51j8YJMfy7doeYuyk9WZaRPkl%252FLlHN84X3%252FeYgVG3iucUQjkVp9Lf5uEN8TjNNkavQeqKBTikJ7ybOxo00kkrBUoFfDIZJ5nrvFRJVnVmu3Odi4Kf0%252F1S%252BY0Y%252FOwcjk7CEhjQjvOo4Mo%252BsEYhQB3cQkFN6rGnS5LB0y86Qf0TZDA8hm0yH2vpJ6dS4WyYIeIJIWWE%252FcWcWnaChuEZj2Kh%252FS4ig3t%252FzeLFaMj0Zo6oLQws76EumQEOvqEAplWem5zMn3cnTbZyKrbUbnMms8NNNekcVI9kiCwMlGtpw3i0QgylABNkxEGFQJS9%252FFntZC%252FvLIg5tMBE28BH3zI9ntlxC6b%252BtP5CdaZble15k%7Campid%3APL\_CLK%7Cclp%3A2332490&epid=25062904712&itmmeta=01JC4NTVJ2JWHDAJ8ZZPP7XSCJ&autorefresh=true](https://www.ebay.com/itm/135189657675?_trkparms=amclksrc%3DITM%26aid%3D1110006%26algo%3DHOMESPLICE.SIM%26ao%3D1%26asc%3D275831%2C275537%2C276104%26meid%3D7a0f07a9c059413dac120f2711fd27a7%26pid%3D101196%26rk%3D1%26rkt%3D5%26sd%3D135039633701%26itm%3D135189657675%26pmt%3D1%26noa%3D0%26pg%3D2332490%26algv%3DSimplAMLv5PairwiseWebWithBBEV2bAndUBSourceDemotionWithUltimatelyBoughtOfCoviewV1%26brand%3DGIGABYTE&_trksid=p2332490.c101196.m2219&itmprp=cksum%3A1351896576757a0f07a9c059413dac120f2711fd27a7%7Cenc%3AAQAJAAABQJh9BGsXvPG03pKg78mUhLLErCJ%252BXOEYDkzTGJ85B4rSRXG6DGHfiL9UFpXuaOk%252FmuXW6x51j8YJMfy7doeYuyk9WZaRPkl%252FLlHN84X3%252FeYgVG3iucUQjkVp9Lf5uEN8TjNNkavQeqKBTikJ7ybOxo00kkrBUoFfDIZJ5nrvFRJVnVmu3Odi4Kf0%252F1S%252BY0Y%252FOwcjk7CEhjQjvOo4Mo%252BsEYhQB3cQkFN6rGnS5LB0y86Qf0TZDA8hm0yH2vpJ6dS4WyYIeIJIWWE%252FcWcWnaChuEZj2Kh%252FS4ig3t%252FzeLFaMj0Zo6oLQws76EumQEOvqEAplWem5zMn3cnTbZyKrbUbnMms8NNNekcVI9kiCwMlGtpw3i0QgylABNkxEGFQJS9%252FFntZC%252FvLIg5tMBE28BH3zI9ntlxC6b%252BtP5CdaZble15k%7Campid%3APL_CLK%7Cclp%3A2332490&epid=25062904712&itmmeta=01JC4NTVJ2JWHDAJ8ZZPP7XSCJ&autorefresh=true) | 2025-09-15T15:12:01 | https://www.reddit.com/r/LocalLLaMA/comments/1nhojdy/pcie_backplane_questions_2025/ | bennmann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhojdy | false | null | t3_1nhojdy | /r/LocalLLaMA/comments/1nhojdy/pcie_backplane_questions_2025/ | false | false | self | 4 | null |
Testers w/ 4th-6th Generation Xeon CPUs wanted to test changes to llama.cpp | 65 | Hey all,
I have been working on improving AMX acceleration in llama.cpp. Currently, even if you have a supported CPU and have built llama.cpp with all the required build flags, AMX acceleration is disabled if you have a GPU present.
I modified the way that llama.cpp exposes the "extra" CPU buffers so that AMX will remain functional in CPU/GPU hybrids, resulting in a 20-40% increase in performance for CPU offloaded layers / CPU offloaded experts.
Since I have limited hardware to test with, I made a temporary fork, and I am looking for testers to make sure everything is good before I open a PR to roll the changes into mainline llama.cpp.
Accelerations supported on 4th-6th generation Xeons: AVX-512VNNI, AMXInt8, AMXBF16
*Note: I have made the changes to AMX.cpp to implement AMXInt4, but since I don't have a 6th generation Xeon, I can't test it, so I left it out for now.*
To enable the new behavior, just place "--amx" in your launch command string; to revert to the base behavior, just remove the "--amx" flag.
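For reference, an A/B run could look roughly like this; the binary and model paths are placeholders, and the canonical commands are in the repo README:

```python
import subprocess

BENCH = "./llama-bench"             # built from the fork
MODEL = "models/model-q4_k_m.gguf"  # hypothetical GGUF path

for extra in ([], ["--amx"]):       # baseline run, then AMX-enabled run
    cmd = [BENCH, "-m", MODEL, "-t", "32", *extra]
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)
```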
If you test, please leave a comment in the discussions on the GitHub with your CPU/RAM/GPU hardware information and your results with and without the "--amx" flag, using the example llama-bench and llama-cli commands (each takes less than 1 min); it would be very helpful.
Huge thank you in advance!
Here is the GitHub; instructions and example commands are in the readme.
[https://github.com/Gadflyii/llama.cpp](https://github.com/Gadflyii/llama.cpp)
| 2025-09-15T14:19:58 | https://www.reddit.com/r/LocalLLaMA/comments/1nhn5sy/testers_w_4th6th_generation_xeon_cpus_wanted_to/ | DataGOGO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhn5sy | false | null | t3_1nhn5sy | /r/LocalLLaMA/comments/1nhn5sy/testers_w_4th6th_generation_xeon_cpus_wanted_to/ | false | false | self | 65 | {'enabled': False, 'images': [{'id': 'GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=108&crop=smart&auto=webp&s=f4f1d30d0ea52cb915096af89920811176424e6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=216&crop=smart&auto=webp&s=32b63284d993859081ef28234298db298eb092da', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=320&crop=smart&auto=webp&s=a28b674a5d424088fd8bdb76bf21baf56735ef6b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=640&crop=smart&auto=webp&s=750205857a4494271130c2968e3a7fc97fbda47a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=960&crop=smart&auto=webp&s=b3ae3f0983f4bd1b2afcec7a21ae775fdafa5637', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?width=1080&crop=smart&auto=webp&s=1e06a1bd0d4bab3304902b5d20656b8ddb885d72', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GaJUpN7TdUqNEpT9OVr0eSZpYLitkLLAYUZ604KkpvI.png?auto=webp&s=125635628ac9eef8f017ddf8f97c5c3c5bf24944', 'width': 1200}, 'variants': {}}]} |
Built a GPU rental platform for LLM hosting - tired of hardware limitations | 0 | Been hitting walls with my local setup trying to run larger models. 3090 handles 7B fine but anything bigger needs serious compromises or doesn't fit at all.
Looked into cloud options but the setup overhead is brutal - spend 2 hours configuring environments just to test a model for 20 minutes. Plus AWS pricing adds up fast when you're experimenting.
Built a marketplace where people can rent out pre-configured setups. Everything from Llama2 to CodeLlama already installed and ready to go. Deploy in under a minute instead of fighting with Docker configs and 50GB model downloads.
Early testing phase - expecting rough edges and would love feedback.
Looking for both GPU providers and renters to try the platform:
\- Provider setup should take \~10 minutes
\- Rental environments deploy in 30-60 seconds
\- Currently supporting Llama2, Mistral, Stable Diffusion configs
The technical challenge was making secure P2P connections work behind NATs and firewalls without exposing anyone's home network. Using WireGuard tunnels with smart contract escrow for payments.
Platform: [https://gpuflow.app](https://gpuflow.app)
Docs: [https://docs.gpuflow.app](https://docs.gpuflow.app)
Can provide test POL tokens for anyone willing to try it.
Fair warning: this is alpha software running on testnet. Things will break. What breaks and how is exactly what I need to know.
What models do you find yourself wanting to test but can't run locally? Trying to figure out which environments to prioritize next. | 2025-09-15T14:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1nhn54d/built_a_gpu_rental_platform_for_llm_hosting_tired/ | kixago | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhn54d | false | null | t3_1nhn54d | /r/LocalLLaMA/comments/1nhn54d/built_a_gpu_rental_platform_for_llm_hosting_tired/ | false | false | self | 0 | null |
Top 7 Small Language Models | 1 | Small language models (SLMs) are quickly becoming the practical face of AI. They are getting faster, smarter, and far more efficient, delivering strong results with a fraction of the compute, memory, and energy that large models require.
A growing trend in the AI community is to use large language models (LLMs) to generate synthetic datasets, which are then used to fine-tune SLMs for specific tasks or to adopt particular styles. As a result, SLMs are becoming smarter, faster, and more specialized, all while maintaining a compact size. This opens up exciting possibilities: you can now embed intelligent models directly into systems that don’t require a constant internet connection, enabling on-device intelligence for privacy, speed, and reliability.
Review some of the top small language models making waves in the AI world, here: [https://www.kdnuggets.com/top-7-small-language-models](https://www.kdnuggets.com/top-7-small-language-models) | 2025-09-15T13:40:34 | https://www.reddit.com/r/LocalLLaMA/comments/1nhm5nj/top_7_small_language_models/ | kingabzpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhm5nj | false | null | t3_1nhm5nj | /r/LocalLLaMA/comments/1nhm5nj/top_7_small_language_models/ | false | false | self | 1 | null |
Look at our boi go! WEBGEN-SMALL is a 4B model... | 35 | Although we have a lot to improve, it's crazy to see our latest iteration on the 4B beating our previous models on Design Arena. The models are the worst they're ever going to be.
You can find the exact weights here: [https://huggingface.co/Tesslate/WEBGEN-4B-Preview](https://huggingface.co/Tesslate/WEBGEN-4B-Preview)
More improvements are coming and we appreciate everyone who's been a part of the journey. We are looking for new teammates, so if you are into web development, finetuning, AI development, making open source software, etc then drop me a DM! Or even if you aren't as experienced and you just want to learn and get your hands dirty! | 2025-09-15T13:25:10 | United-Rush4073 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1nhls35 | false | null | t3_1nhls35 | /r/LocalLLaMA/comments/1nhls35/look_at_our_boi_go_webgensmall_is_a_4b_model/ | false | false | default | 35 | {'enabled': True, 'images': [{'id': 'xrexkzxkwbpf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=108&crop=smart&auto=webp&s=9dbac171cf7dda6081f3385fc0b7136dc7a6e84e', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=216&crop=smart&auto=webp&s=4146ae6736fbc59fe592202bb47692777309d513', 'width': 216}, {'height': 209, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=320&crop=smart&auto=webp&s=c290c4501b062da25c7c2f8f94b0d55a5ea28556', 'width': 320}, {'height': 419, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=640&crop=smart&auto=webp&s=074716c15bd860f1c7cf210915dc87ab73f0b823', 'width': 640}, {'height': 628, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=960&crop=smart&auto=webp&s=961de97e9a760f28351db5738aea8e7cfc3e5cfd', 'width': 960}, {'height': 707, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?width=1080&crop=smart&auto=webp&s=09c1e8f6d774bc9d0807bc3aadf645910de9ca48', 'width': 1080}], 'source': {'height': 800, 'url': 'https://preview.redd.it/xrexkzxkwbpf1.png?auto=webp&s=e48a05524ae7461553b6836f6bcb64a1b69e37cc', 'width': 1221}, 'variants': {}}]} | |
Anyone getting reliable handwriting-to-text with local VLMs or any other tools? | 0 | I’m trying to turn handwritten notes (PDF scans) into text **fully offline** on a Mac. I’ve dug through a bunch of Reddit threads and random blogs already, but nothing felt like a clear, current answer. So, asking here where people actually run this stuff.
I’d prefer a **VLM-first** pipeline if that’s realistic, or maybe some other OCR tools that might do the job more effectively? Models I’m eyeing: **Qwen2.5-VL, Mistral Small 3.2, InternVL, or Gemma (all under 32B params + 4-6 bit quantized)**. Since I'm short on VRAM and GPU power, I was looking for models that I can run under 20GB of VRAM. If there’s something newer people actually use for handwriting recognition, please do let me know.
I don't even know if the VLM-first approach is the right way to tackle this problem, so I'd appreciate some guidance if anyone has made progress in this area.
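For what it's worth, a minimal VLM-first sketch on a Mac could rasterize the PDF with pdf2image (needs poppler) and call a local OpenAI-compatible server such as LM Studio or llama.cpp's server; the port, model name, and file name here are assumptions:

```python
import base64
import io

from openai import OpenAI
from pdf2image import convert_from_path

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

pages = convert_from_path("notes.pdf", dpi=300)  # hypothetical scanned notes
for i, page in enumerate(pages):
    buf = io.BytesIO()
    page.save(buf, format="PNG")
    b64 = base64.b64encode(buf.getvalue()).decode()
    resp = client.chat.completions.create(
        model="qwen2.5-vl-7b-instruct",  # whichever vision model the server exposes
        messages=[{"role": "user", "content": [
            {"type": "text", "text": "Transcribe this handwritten page verbatim."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
    )
    print(f"--- page {i + 1} ---\n{resp.choices[0].message.content}")
```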
Thanks in advance! | 2025-09-15T13:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1nhl9vs/anyone_getting_reliable_handwritingtotext_with/ | IntroductionMoist974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhl9vs | false | null | t3_1nhl9vs | /r/LocalLLaMA/comments/1nhl9vs/anyone_getting_reliable_handwritingtotext_with/ | false | false | self | 0 | null |
Advice on building an enterprise-scale, privacy-first conversational assistant (local LLMs with Ollama vs fine-tuning) | 0 | Hi everyone,
I’m working on a project to design a **conversational AI assistant for employee well-being and productivity** inside a large enterprise (think thousands of staff, high compliance/security requirements). The assistant should provide personalized nudges, lightweight recommendations, and track anonymized engagement data — without sending sensitive data outside the organization.
**Key constraints:**
* Must be **privacy-first** (local deployment or private cloud — no SaaS APIs).
* Needs to support **personalized recommendations** and **ongoing employee state tracking**.
* Must handle **enterprise scale** (hundreds–thousands of concurrent users).
* Regulatory requirements: **PII protection, anonymization, auditability**.
**What I’d love advice on:**
1. **Local LLM deployment**
* Is using **Ollama with models like Gemma/MedGemma** a solid foundation for production at enterprise scale?
* What are the pros/cons of Ollama vs more MLOps-oriented solutions (vLLM, TGI, LM Studio, custom Dockerized serving)?
2. **Model strategy: RAG vs fine-tuning**
* For delivering contextual, evolving guidance: would you start with **RAG (vector DB + retrieval)** or jump straight into **fine-tuning a domain model**?
* Any rule of thumb on when fine-tuning becomes necessary in real-world enterprise use cases?
3. **Model choice**
* Experiences with **Gemma/MedGemma** or other open-source models for well-being / health-adjacent guidance?
* Alternatives you’d recommend (Mistral, LLaMA 3, Phi-3, Qwen, etc.) in terms of reasoning, safety, and multilingual support?
4. **Infrastructure & scaling**
* Minimum GPU/CPU/RAM targets to support **hundreds of concurrent chats**.
* Vector DB choices: FAISS, Milvus, Weaviate, Pinecone — what works best at enterprise scale?
* Monitoring, evaluation, and safe deployment patterns (A/B testing, hallucination mitigation, guardrails).
5. **Security & compliance**
* Best practices to prevent **PII leakage into embeddings/prompts**.
* Recommended architectures for **GDPR/HIPAA-like compliance** when dealing with well-being data.
* Any proven strategies to balance personalization with strict privacy requirements?
6. **Evaluation & KPIs**
* How to measure assistant effectiveness (safety checks, employee satisfaction, retention impact).
* Tooling for anonymized analytics dashboards at the org level. | 2025-09-15T12:58:28 | https://www.reddit.com/r/LocalLLaMA/comments/1nhl58m/advice_on_building_an_enterprisescale/ | jamalhassouni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhl58m | false | null | t3_1nhl58m | /r/LocalLLaMA/comments/1nhl58m/advice_on_building_an_enterprisescale/ | false | false | self | 0 | null |
Need a coding & general use model recommendation for my 16GB GPU | 0 | Hello everyone! I'm an SAP Basis consultant, and I'm also interested in coding. I'm looking for a model that I can use both for my daily tasks and for my work. A high context length would be better for me. I have a 16GB Nvidia RTX 4070 Ti Super graphics card. Which models would you use if you were in my place? | 2025-09-15T12:55:03 | https://www.reddit.com/r/LocalLLaMA/comments/1nhl2f0/need_a_coding_general_use_model_recommendation/ | sado361 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhl2f0 | false | null | t3_1nhl2f0 | /r/LocalLLaMA/comments/1nhl2f0/need_a_coding_general_use_model_recommendation/ | false | false | self | 0 | null |
I’ve created an AI Short Generator that turns AI research papers into short-form content.
What do you think about it? | 3 |
I’ve built an **AI Short Generator** using OpenAI and VibeVoice.
The system takes an AI research paper as input, automatically summarizes it, generates a podcast-style script, and even creates a short video.
I’m considering uploading some of these to YouTube, but I know there’s still plenty of room for improvement.
https://reddit.com/link/1nhknyl/video/npurnmv4pbpf1/player
I’d love to hear your thoughts—what do you think about this project?
If you watch one of the generated videos and notice things that could be improved, feel free to drop your feedback in the comments. | 2025-09-15T12:37:38 | https://www.reddit.com/r/LocalLLaMA/comments/1nhknyl/ive_created_an_ai_short_generator_that_turns_ai/ | Subject-Guitar4521 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1nhknyl | false | null | t3_1nhknyl | /r/LocalLLaMA/comments/1nhknyl/ive_created_an_ai_short_generator_that_turns_ai/ | false | false | self | 3 | null |