title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How good Orange Pi 6 for local llm? | 4 | Has anyone tried the Orange Pi 6 (like this one from amazon) for llm? Is it possible to run 3B or 8B LLMs in this?
| 2025-11-08T12:26:28 | Middle_Investment_81 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oro5iz | false | null | t3_1oro5iz | /r/LocalLLaMA/comments/1oro5iz/how_good_orange_pi_6_for_local_llm/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'xn6cjjgz010g1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=108&crop=smart&auto=webp&s=868fa1a1386d313bc02c4c1d3007ea9e30e07515', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=216&crop=smart&auto=webp&s=553fe26debc763dc21c6cd13e1bfcdb18a368b6b', 'width': 216}, {'height': 176, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=320&crop=smart&auto=webp&s=0463b12ac67d5fb7d70912ba0b1f54622bd70922', 'width': 320}, {'height': 353, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=640&crop=smart&auto=webp&s=7b4bf89bf9e2ec69f487879cfd566110dcf13b4c', 'width': 640}, {'height': 529, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=960&crop=smart&auto=webp&s=ed6fb885fc2bc562b29f995f794999ea1ce2ddfa', 'width': 960}, {'height': 596, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?width=1080&crop=smart&auto=webp&s=7be581d17710feecb8a0173efb50ad665c04ddf8', 'width': 1080}], 'source': {'height': 596, 'url': 'https://preview.redd.it/xn6cjjgz010g1.jpeg?auto=webp&s=b022baa58221c8b2bbe53b1294e0c9c8e8c6229f', 'width': 1080}, 'variants': {}}]} | |
Best GUI for LLM based story writing that can access external models? | 5 | Most GUIs want to run the models themself, but I'd like to run it myself or use an on campus service that provide an OpenAI compatible API access. And for my Ooba installation the Playground extension isn't working at the moment.
So, long story short:
What are your recommendations for a GUI tool that's helping me to interactively write and edit stories - and can access the LLM through an OpenAI API? | 2025-11-08T12:11:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ornvm0/best_gui_for_llm_based_story_writing_that_can/ | StableLlama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ornvm0 | false | null | t3_1ornvm0 | /r/LocalLLaMA/comments/1ornvm0/best_gui_for_llm_based_story_writing_that_can/ | false | false | self | 5 | null |
Handy : Free, Offline AI dictation app for PC, supports Whisper and Parakeet models | 34 | Handy is a trending GitHub repo which is a free alternate for Wispr Flow for AI dictation. The app size is quite small and it supports all Parakeet (nvidia) and Whisper model for speech to text.
GitHub : https://github.com/cjpais/Handy
Demo : https://youtu.be/1QzXdhVeOkI?si=yli8cfejvOy3ERbo | 2025-11-08T11:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1orn6q4/handy_free_offline_ai_dictation_app_for_pc/ | Technical-Love-8479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orn6q4 | false | null | t3_1orn6q4 | /r/LocalLLaMA/comments/1orn6q4/handy_free_offline_ai_dictation_app_for_pc/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=108&crop=smart&auto=webp&s=43dee1447ab88ec0e999d6cbd5a633ab22554878', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=216&crop=smart&auto=webp&s=498bdcb02821d520b2d5b8bce496509e12118258', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=320&crop=smart&auto=webp&s=e2da4109d5d6d8be79b9b3db45b3d379f65533f0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=640&crop=smart&auto=webp&s=2395af01080908d9f50e0fcacb432f3fd3a0f5cf', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=960&crop=smart&auto=webp&s=f3631f567f8d3d6da616f5856bc148def606adf0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?width=1080&crop=smart&auto=webp&s=574ff2f1462c0a39717b1b0d69a00403d1a438c3', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hFauF7dmVtT_NpKwLXwWj1k200bWFLqifX4gF1Eb7YE.png?auto=webp&s=0f52220a0887da99ea6cbd8da19a935233bc1a02', 'width': 1200}, 'variants': {}}]} |
Kimi K2 Thinking was trained with only $4.6 million | 647 | OpenAI: "We need government support to cover $1.4 trillion in chips and data centers."
Kimi:
https://preview.redd.it/03swwvpfo00g1.png?width=1199&format=png&auto=webp&s=f97e99e5a5aac5bb23a62ff58df048f5db678e09
| 2025-11-08T11:16:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ormxoq/kimi_k2_thinking_was_trained_with_only_46_million/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ormxoq | false | null | t3_1ormxoq | /r/LocalLLaMA/comments/1ormxoq/kimi_k2_thinking_was_trained_with_only_46_million/ | false | false | 647 | null | |
Introducing Crane: An All-in-One Rust Engine for Local AI | 1 | [removed] | 2025-11-08T11:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ormtrt/introducing_crane_an_allinone_rust_engine_for/ | LewisJin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ormtrt | false | null | t3_1ormtrt | /r/LocalLLaMA/comments/1ormtrt/introducing_crane_an_allinone_rust_engine_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=108&crop=smart&auto=webp&s=bddf14f8ee6db387faa821dffa5277789dfdbfa1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=216&crop=smart&auto=webp&s=cb04fabc699f6e6bb6019df4412f09549742bd5f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=320&crop=smart&auto=webp&s=b379a07676eab477b39b238d2bba9e1134b030c1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=640&crop=smart&auto=webp&s=59c35fe6fbf9dcbdb0d9728eca5a45a01e465d78', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=960&crop=smart&auto=webp&s=d1423e9f01c544a43f72cf63f4a328ac3b77908c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?width=1080&crop=smart&auto=webp&s=54a5ceb77a176ea30f9a60e2d8e3831efbb45796', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KzjmVQ0CPFWzhCTQQ4uDeG1fchbaMketLdAXtOh9mqc.png?auto=webp&s=e8f74286137fc3badd7c6d8ffaa4355c0f39df40', 'width': 1200}, 'variants': {}}]} |
Honey we shrunk MiniMax M2 | 160 | Hi folks, we pruned MiniMax M2 from 250B to 192B (~25%) with only ~5% loss in coding quality. We did this with $200 worth of 8XH200 compute. Our 50% pruned model is ETA 5 more days. Would love to hear your feedback and would you want a 50% pruned Kimi K2 Thinking?
| 2025-11-08T10:04:06 | https://huggingface.co/collections/VibeStudio/thrift | arjunainfinity | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1orlqmh | false | null | t3_1orlqmh | /r/LocalLLaMA/comments/1orlqmh/honey_we_shrunk_minimax_m2/ | false | false | default | 160 | {'enabled': False, 'images': [{'id': 'ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=108&crop=smart&auto=webp&s=ee2bd2a3c053601dc326d886fd0014fd8ce175ab', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=216&crop=smart&auto=webp&s=1fb3c004bf48f9b6602d52e9f48683d7eb4e7c18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=320&crop=smart&auto=webp&s=caa7a4808ddc264c9671fed09c3b4f6f48bdb344', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=640&crop=smart&auto=webp&s=a556feed177eef7b3a5785bd56ea678c06d8ea4e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=960&crop=smart&auto=webp&s=ec13e2c143d1f51ea18471e2b5504b02d15b0153', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?width=1080&crop=smart&auto=webp&s=b8e25abda868528d4e861941b8ea51d0f728795f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ehjUFeUe3VAS2s784qCwiz1JNp5TRqMKwWsfk9mVQNM.png?auto=webp&s=9ed5a8af33103571bf5cf9d0d6d737908604aab9', 'width': 1200}, 'variants': {}}]} |
Terminal based inference on a Mac with lots of model options | 0 | Hi friends,
I've been using my 128GB M4 Max with Ollama for some time and I have weaved local models into my work especially whilst travelling or in places without stable internet. It's been great, plus privacy which is important.
However, recently I'm constantly disappointed by Ollama's selection of models (no GLM Air, slow releases), and additionally I can't stand this new cloud push where some models are now only hosted by them, which ofc, isn't local LLM anything.
My typical workflow is in terminal, a tab serving ollama and another doing inference beside my actual work.
I'm short on time to invest in research (due to kids, work), can anyone here give me a steer on the best UX for macOS that's not a GUI, and that is open source (I know LM Studio has a command line mode but I don't trust the app).
Whilst I have the technical skillset to write python code and call some library to do inference I'm really looking for something that has knobs set to reasonable values and just works. I don't want to have to call llama.cpp directly if at all possible.
Thanks, appreciate your time. | 2025-11-08T09:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1orllo7/terminal_based_inference_on_a_mac_with_lots_of/ | anonXMR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orllo7 | false | null | t3_1orllo7 | /r/LocalLLaMA/comments/1orllo7/terminal_based_inference_on_a_mac_with_lots_of/ | false | false | self | 0 | null |
New build LLaMA - Lenovo P920 base - How to make for max large context? | 1 | Im building a local server, as I am doing some AI stuff and need really long context windows.
I have a decent desktop.. 7800x3d 192Gb DDR5 6000 5070ti.. but its not quite there for really big models and really big context windows. Plus given these will mostly be CPU hosted, I don't want to tie up my main box for days just on one prompt.
So...
Lenovo P920 with Dual Gold Xeon 6134
* 1Tb of 2666 Ram - while not cheap, it wasn't outrageous. But I bought all the 2nd hand 64gb dimms in my country.
* And I think I am wanting to put 2 x MI50 32GB into it. It supports 2 GPU's off one CPU PCIe3 x 16.
Questions:
Do the Mi50 gel with stuff these days, I search through, I see different reports. My plan is these guys do a lot of heavy lifting and the context window sits in main memory. Is the Mi50 good for this kind of stuff. I know its slow and old, and doesn't support a lot of newer data formats like FP4, but given what its doing with KV cache that should probably be ok
I am told this work work even for big models like R1 R672b? Or does all that need to happen in Main memory.
Each CPU will have 512GB connected to it, so I believe there is a way to load two copies of a model like R672b, one for each CPU and then get double the performance out of it?
I really just want really, really long context capability, 256k-512K would be ideal. What models would support that kind of context? R1? With this much ram is there other models I should be looking at? I am okay with slowish token generation on the CPU. I have other solutions for quick needs. | 2025-11-08T09:18:29 | https://www.reddit.com/r/LocalLLaMA/comments/1orl0r9/new_build_llama_lenovo_p920_base_how_to_make_for/ | phido3000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orl0r9 | false | null | t3_1orl0r9 | /r/LocalLLaMA/comments/1orl0r9/new_build_llama_lenovo_p920_base_how_to_make_for/ | false | false | self | 1 | null |
Ready-to-use local Claude Code or Codex like agent that can grind for hours and actually deliver | 2 | First up: I’m very comfortable with LLMs and local AI like ComfyUI and other machine learning stuff, and I’ve got an RTX 5090 + 4060 Ti I want to put to good use.
So what I’m wondering if it exists is a mostly ready-to-use, Gemini CLI / Claude Code–like system that prioritizes output quality over speed and can run for hours on deep tasks like coding or other things like research.
Ideally it uses a vLLM backend and can make use of the insane token/s speeds you can get with parallel requests, so it could start multiple sub-agents in the background.
Behavior should be to take a big problem and break it into many tiny steps, iterate, reflect, and self-critique until it converges.
It should run well with local models, for example GPT-OSS 20B or maybe even GPT-OSS 120B or similar sized Qwen models, handle multi-role workflows (planner / engineer / critic), and keep grinding with reflection loops. I really want to put in more compute to get a better answer!
Optionally it should execute code in a sandbox or have clean access to the filesystem like the other code agents I mentioned, maybe even with simple search / RAG when needed.
In the past I tried CrewAI and Microsoft’s framework months ago and wasn’t thrilled back then. Maybe they’ve matured—happy to revisit—but I’m explicitly trying to avoid a weekend of LangGraph + tool soup + glue code just to get a competent loop running. I want something I can point at a repo or a spec, let it think for a few hours, and come back to a solid, test-passing result.
If you actually use a framework like this today with local vLLM, please share the exact project, your config, model choice, and any tricks that noticeably improved quality or reliability. Real anecdotes and gotchas are more helpful than marketing. | 2025-11-08T08:27:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ork81m/readytouse_local_claude_code_or_codex_like_agent/ | Elwii04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ork81m | false | null | t3_1ork81m | /r/LocalLLaMA/comments/1ork81m/readytouse_local_claude_code_or_codex_like_agent/ | false | false | self | 2 | null |
Audio to audio conversation model | 0 | Are there any open source or open weights audio to audio conversation models like chatgpts audio chat? How much VRAM do they need and which quant is ok to use? | 2025-11-08T08:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ork5fm/audio_to_audio_conversation_model/ | IKerimI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ork5fm | false | null | t3_1ork5fm | /r/LocalLLaMA/comments/1ork5fm/audio_to_audio_conversation_model/ | false | false | self | 0 | null |
Grammar for structured output in llama.cpp: useful? | 2 | I’ve been exploring the grammar-based output constraint feature in **llama.cpp**, which allows guiding model output using **GEBNF** grammars. On paper it sounds super useful for ensuring structured output, preventing hallucinated fields, or enforcing strict JSON/XML schemas.
Feature reference:
[https://github.com/ggerganov/llama.cpp/tree/master/grammars](https://github.com/ggerganov/llama.cpp/tree/master/grammars)
However, I’m curious — have you seen tangible benefits in production systems?
*(Context: I’m considering adding support for llama.cpp with grammars in PydanticAI, so checking whether I am maybe wasting my time.)*
---
| 2025-11-08T08:04:56 | https://www.reddit.com/r/LocalLLaMA/comments/1orjv37/grammar_for_structured_output_in_llamacpp_useful/ | -lq_pl- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orjv37 | false | null | t3_1orjv37 | /r/LocalLLaMA/comments/1orjv37/grammar_for_structured_output_in_llamacpp_useful/ | false | false | self | 2 | null |
How come my 3090 is just as fast as my 3050 for Qwen3-1.7B? | 0 | Both are running at 5t/s inference when 3090 has 936GB/s and 3050 6GB only 168GB/s. Is there something wrong with my inference script?
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import time
model_name = "Qwen/Qwen3-1.7B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map="auto"
)
# prepare the model input
system_prompt = "You are a European History Professor named Professor Whitman."
user_prompt = "How come West Francia (the Kingdom of France) became a centralized state over time while East Francia (the Holy Roman Empire) stays as a feudal state that has many autonomous entities? Please write a 12,000 words essay to explain why the two states went separate ways in political development."
messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": user_prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer(text, return_tensors="pt").to(model.device)
# conduct text completion
start_time = time.time()
generated_ids = model.generate(
**model_inputs,
max_new_tokens=8192
)
end_time = time.time()
time_taken = end_time - start_time
generated_tokens = generated_ids.shape[1] - model_inputs['input_ids'].shape[1]
tokens_per_second = generated_tokens / time_taken
print(f"Input Tokens: {model_inputs['input_ids'].shape[1]}")
print(f"Generated Tokens: {generated_tokens} in {time_taken:.2f} seconds")
print(f"Tokens per second: {tokens_per_second:.2f}")
| 2025-11-08T06:57:04 | https://www.reddit.com/r/LocalLLaMA/comments/1oriraf/how_come_my_3090_is_just_as_fast_as_my_3050_for/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oriraf | false | null | t3_1oriraf | /r/LocalLLaMA/comments/1oriraf/how_come_my_3090_is_just_as_fast_as_my_3050_for/ | false | false | self | 0 | null |
Building ToolNeuron: a modular, plugin‑based AI assistant ecosystem for Android. Early stage — looking for feedback from devs & power users. | 0 | Hey everyone,
I’m building **ToolNeuron**, a local/online AI assistant platform for Android. It’s designed to support plugins, private data sets, and modular architecture. Right now, we have a proof-of-concept and a GitHub repo, and I’m curious whether devs/users find value in this approach.
Some questions I’d love your thoughts on:
* Would you pay for a plugin ecosystem that lets you customize your AI assistant on Android?
* What kinds of plugins would be most useful to you?
* How important is local/data privacy vs cloud models for you?
If you’re an Android dev, AI/LLM enthusiast, plugin creator, or power-user, I’d really appreciate your input.
Here’s the repo if you want to check it out: [ToolNeuron GitHub](https://github.com/Siddhesh2377/ToolNeuron?utm_source=chatgpt.com)
Thanks in advance for any feedback — I’m just testing the waters and genuinely want to understand what the community thinks! | 2025-11-08T06:32:56 | https://www.reddit.com/r/LocalLLaMA/comments/1oricyh/building_toolneuron_a_modular_pluginbased_ai/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oricyh | false | null | t3_1oricyh | /r/LocalLLaMA/comments/1oricyh/building_toolneuron_a_modular_pluginbased_ai/ | false | false | self | 0 | null |
Ollama vs vLLM for Linux distro | 0 | hi Guyz, just wanted to ask which service would be better in my case of building a Linux distro integrated with llama 3 8B ik vLLm has higher token/sec but the fp16 makes it a huge dealbreaker any solutions | 2025-11-08T05:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/1orhpq8/ollama_vs_vllm_for_linux_distro/ | Enough-Ant-1512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orhpq8 | false | null | t3_1orhpq8 | /r/LocalLLaMA/comments/1orhpq8/ollama_vs_vllm_for_linux_distro/ | false | false | self | 0 | null |
Built a Easy Ai Library for Mobile Developers | 3 | Here I the demo video, right now the library supports
- Text & Image Embedding
- VLM
- Text Generation
- Tool Calling
- TTS & STT
The aim of making this library to Unify All Offline Ai Provider into a single library, that is easy to use for new Mobile App Developers | 2025-11-08T05:30:57 | https://v.redd.it/o3dsareuyyzf1 | DarkEngine774 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1orha0x | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/o3dsareuyyzf1/DASHPlaylist.mpd?a=1765171871%2CZmY3YjE0Mjk4NGZhNGIxODMyMWM0MTYzZGE3YzkxODRiYmM0OGIzOWVmZDQ5NTEwMWQyNWFhY2I2NDU5NTdiMQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/o3dsareuyyzf1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 1280, 'hls_url': 'https://v.redd.it/o3dsareuyyzf1/HLSPlaylist.m3u8?a=1765171871%2CZjgxY2I1OGI0YzhhMzY5Y2UxNzM4ZTc5NGUyMzRkM2ZhYWM3ZGRkZjZmZjU4MjhjODdiMmUwZmU4Zjk4ZWE2NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/o3dsareuyyzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 578}} | t3_1orha0x | /r/LocalLLaMA/comments/1orha0x/built_a_easy_ai_library_for_mobile_developers/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV.png?width=108&crop=smart&format=pjpg&auto=webp&s=86b671668491b5c3584ba5e9b432d649b7082d8b', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV.png?width=216&crop=smart&format=pjpg&auto=webp&s=9004a071351e96aad2e8f89587711a952e4ceec6', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV.png?width=320&crop=smart&format=pjpg&auto=webp&s=fc759204357861c943fd5a1097f0fb0910fa1235', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV.png?width=640&crop=smart&format=pjpg&auto=webp&s=ed90e6c1f91559e8a3e3ffed957f421be906849f', 'width': 640}], 'source': {'height': 1798, 'url': 'https://external-preview.redd.it/YWloZ3A2ZHV5eXpmMW0lUWiw7bwD4bSpINkIhCOddcAHqk5kgwOu2OsKBruV.png?format=pjpg&auto=webp&s=f1255b9cdc01a89b9ba8deb505e65f5623103643', 'width': 812}, 'variants': {}}]} | |
Training framework that monitors itself and auto-fixes issues (gradient explosions, OOM, MoE imbalance) - looking for feedback | 11 | # I built a training framework that automatically fixes gradient explosions, OOM errors, and MoE expert collapse
Hey LocalLLaMA! Tired of babysitting training runs? I built **LuminaAI** \- a framework where the system monitors itself and makes real-time decisions to keep training stable.
# What it does:
**Autonomous Training Orchestrator:**
* Gradient explosion detected → automatically reduces learning rate
* OOM error → reduces batch size and retries
* MoE experts collapsing → adjusts routing
* Loss plateau → increases LR or suggests stopping early
**Architecture Support:**
* Dense transformers, MoE (8-64 experts), MoD (30-50% faster), Hybrid
**Chinchilla Scaling:**
* Automatically calculates optimal training epochs based on model size
* Monitors convergence and predicts when to stop
# Real example from my training logs:
[Step 5000] Loss spike: 2.15 → 3.87
[Orchestrator] Emergency intervention
Decision: Reduce LR by 10x, rollback 50 steps
Reasoning: Gradient explosion detected
[Step 5100] Stabilized: 2.12 ✓
# Why it's different:
Instead of manually watching TensorBoard and adjusting hyperparameters, the orchestrator makes **18 different types of interventions** automatically:
* Add/remove MoE experts during training
* Adjust batch sizes for OOM recovery
* Emergency rollbacks when things go wrong
* Dynamic learning rate adjustments
# Hardware:
Works on CUDA (RTX 3090, a100, h100, etc), Apple Silicon (M1/M2/M3/M4), and multi-GPU with DeepSpeed.
Pre-configured for 1B → 300B parameter models (MoE).
# What I need:
* **Feedback**: What training issues should I automate next?
* **Testing**: Does it work on your hardware?
* **Brutal honesty**: What would make you actually use this?
I've been working on this for \~4.5 months because I was sick of 2 AM loss divergences. Open source, free for research/personal use.
**GitHub**: [https://github.com/matn23/luminaai](https://github.com/matn23/luminaai)
What training pain points drive you crazy? Would love to hear what I should automate next!
**Edit**: For context, I'm 13 and this is my first major ML project. Any feedback (brutal honesty welcome) is super helpful! | 2025-11-08T02:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ore9xo/training_framework_that_monitors_itself_and/ | Huge_Protection2600 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ore9xo | false | null | t3_1ore9xo | /r/LocalLLaMA/comments/1ore9xo/training_framework_that_monitors_itself_and/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=108&crop=smart&auto=webp&s=8bb862ade0f5e89d9f3c39d3d1e0436e59b4ad2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=216&crop=smart&auto=webp&s=17ff753a31744a7be79a8fecfc8c857c8cc725f9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=320&crop=smart&auto=webp&s=122dd0318f260c8e65b42eafe6246b49efbf3515', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=640&crop=smart&auto=webp&s=989ac491bbb7c8563d2a7ca8e858faa4ec56096a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=960&crop=smart&auto=webp&s=6b5a452098f93606d334a8c41d2ccedf050f6da5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?width=1080&crop=smart&auto=webp&s=7ea859aef59d5bae8193b147dd2baec8b98311cb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZAs0CL2a8oIvlDAplY9gJJoD_oWB6ghFrLTQRVaVYcY.png?auto=webp&s=a07786c3235060fc2e1e87df18f431022f8d1858', 'width': 1200}, 'variants': {}}]} |
Kimi K2 thinking repeatedly goes into infinite thinking look on fairly simple tasks | 1 | 2025-11-08T02:23:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ordmuz/kimi_k2_thinking_repeatedly_goes_into_infinite/ | championswimmer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ordmuz | false | null | t3_1ordmuz | /r/LocalLLaMA/comments/1ordmuz/kimi_k2_thinking_repeatedly_goes_into_infinite/ | false | false | 1 | null | ||
We got this, we can do it! When is the REAP’d iQ_001_XXS GGUF dropping? | 1,073 | 2025-11-08T02:15:28 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ordgys | false | null | t3_1ordgys | /r/LocalLLaMA/comments/1ordgys/we_got_this_we_can_do_it_when_is_the_reapd_iq_001/ | false | false | 1,073 | {'enabled': True, 'images': [{'id': 'YVLalEujkoOmjNi9AgtYkJNaIgUzmy_3-MdwRgaZ2aE', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=108&crop=smart&auto=webp&s=005e45deb6404261caa9171b31e72a0573a8b99e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=216&crop=smart&auto=webp&s=f9200624a87b7dd6d209e497e1fc3325f77f0c8a', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=320&crop=smart&auto=webp&s=a51370be6d84a7f6d501e2b629cb59f710fd773b', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=640&crop=smart&auto=webp&s=1975edd06526787672ea84d9ae1d9904b84715e1', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=960&crop=smart&auto=webp&s=9937894665fea40b029349a95d15c4f6bc7b1ec4', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?width=1080&crop=smart&auto=webp&s=2680532258a2513c8d25a6bedc41fb5753c3fe5e', 'width': 1080}], 'source': {'height': 1125, 'url': 'https://preview.redd.it/qfahc43zzxzf1.jpeg?auto=webp&s=5c3bb6ade53230b81b8c8f602968016dd0685b13', 'width': 1125}, 'variants': {}}]} | |||
[Web Demo] Qwen-Image-Edit — Camera angle control (HF Space) | 17 | Very Cool Tool.
https://preview.redd.it/z1nmida0zxzf1.png?width=900&format=png&auto=webp&s=fc5f06a9e2105a5db39e4644ef805606c65873eb
Upload an image, then tweak **camera motion/rotation/lens** sliders to generate new viewpoints—right in your browser. [Hugging Face](https://huggingface.co/spaces/linoyts/Qwen-Image-Edit-Angles?utm_source=chatgpt.com)
* Do things like move the camera (left/right/forward/down), rotate ±45°/90° or go top-down, and switch between wide vs. close-up looks.
* Built on **Qwen Image Edit**; compatible community LoRAs enable multi-angle variants.
* Tip: results can vary with busy backgrounds—short prompts often work best.Try it: [`https://huggingface.co/spaces/linoyts/Qwen-Image-Edit-Angles`](https://huggingface.co/spaces/linoyts/Qwen-Image-Edit-Angles) [Hugging Face](https://huggingface.co/spaces/linoyts/Qwen-Image-Edit-Angles) | 2025-11-08T02:10:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ordd5x/web_demo_qwenimageedit_camera_angle_control_hf/ | freesysck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ordd5x | false | null | t3_1ordd5x | /r/LocalLLaMA/comments/1ordd5x/web_demo_qwenimageedit_camera_angle_control_hf/ | false | false | 17 | null | |
China winning the race? Or a bubble about to burst? | 0 | With the latest releases — Qwen 3 Max Thinking, Kimi K2 Thinking, and Minimax M2 — China is catching up to the U.S., despite using far fewer chips. What can we conclude? Are the Chinese outperforming with limited hardware, or has the bubble reached its peak — explaining why they’ve now matched the Americans? | 2025-11-08T01:20:48 | https://www.reddit.com/r/LocalLLaMA/comments/1orcbkw/china_winning_the_race_or_a_bubble_about_to_burst/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orcbkw | false | null | t3_1orcbkw | /r/LocalLLaMA/comments/1orcbkw/china_winning_the_race_or_a_bubble_about_to_burst/ | false | false | self | 0 | null |
Loki - An All-in-One, Batteries-Included LLM CLI | 8 | # [Introducing: Loki! An all-in-one, batteries-included LLM CLI tool](https://github.com/Dark-Alex-17/loki)
https://i.redd.it/r4joue0wixzf1.gif
[Loki](https://github.com/Dark-Alex-17/loki) started out as a fork of the fantastic [AIChat CLI](https://github.com/sigoden/aichat), where I just wanted to give it first-class MCP server support. It has since evolved into a massive passion project that’s a fully-featured tool with its own identity and extensive capabilities! My goal is to make Loki a true “all-in-one” and “batteries-included” LLM tool.
Check out the [release notes](https://github.com/Dark-Alex-17/loki/releases/tag/v0.1.1) for a quick overview of everything that Loki can do!
# What Makes Loki Different From AIChat?
* First-class MCP support, with support for both local and remote servers
* Agents, roles, and sessions can all use different MCP servers and switching between them will shutdown any unnecessary ones and start the applicable ones
* MCP sampling is coming next
* Comes with a number of useful agents, functions, roles, and macros that are included out-of-the-box
* Agents, MCP servers, and tools are all managed by Loki now; no need to pull another repository to create and use tools!
* No need for any more `*.txt` files
* Improved DevX when creating bash-based tools (agents or functions)
* No need to have `argc` installed: Loki handles all the compilation for you!
* Loki has a `--build-tools` flag that will build your bash tools so you can run them exactly the same way Loki would
* Built-in Bash prompting utils to make your bash tools even more user-friendly and flexible
* Built-in vault to securely store secrets so you don't have to store your client API keys in environment variables or plaintext anymore
* Loki also will inject additional secrets into your agent's tools as environment variables so your agents can also use secrets securely
* Multi-agent support out-of-the-box: You can now create agents that route requests to other agents and use multiple agents together without them trampling all over each other's binaries
* Improved documentation for all the things!
* Simplified directory structure so users can share full Loki directories and configurations without massive amounts of data, or secrets being exposed accidentally
* And more!
# What's Next?
* MCP sampling support, so that MCP servers can send back queries for the LLM to respond to LLM requests. Essentially, think of it like letting the MCP server and LLM talk to each other to answer your query
* Give Loki a TUI mode to allow it to operate like [claude-code](https://github.com/anthropics/claude-code), [gemini-cli](https://github.com/google-gemini/gemini-cli), [codex](https://github.com/openai/codex), and [continue](https://github.com/continuedev/continue?tab=readme-ov-file). The objective being that Loki can function exactly like all those other CLIs or even delegate to them when the problem demands it. No more needing to install a bunch of different CLIs to switch between!
* Integrate with [LSP-AI](https://github.com/SilasMarvin/lsp-ai?tab=readme-ov-file) so you can use Loki from inside your IDEs! Let Loki perform function calls, utilize agents, roles, RAGs, and all other features of Loki to help you write code. | 2025-11-08T01:10:55 | https://www.reddit.com/r/LocalLLaMA/comments/1orc401/loki_an_allinone_batteriesincluded_llm_cli/ | Hamilcar_Barca_17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orc401 | false | null | t3_1orc401 | /r/LocalLLaMA/comments/1orc401/loki_an_allinone_batteriesincluded_llm_cli/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=108&crop=smart&auto=webp&s=db135dcf9e00c010ac5884ca286d420675da904a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=216&crop=smart&auto=webp&s=c6f156b332ce1add7d935b895b0ee141cf4f4a40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=320&crop=smart&auto=webp&s=8bfd30c028fdb61b89c363c6467efcabd9749320', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=640&crop=smart&auto=webp&s=5485c4dbd22069c585c0647556fe692af9a24d49', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=960&crop=smart&auto=webp&s=e714a35c41b68ddc24e0e0f6b77a3d63c0fc4bbc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?width=1080&crop=smart&auto=webp&s=b8bb5d5bbbd24bca5f1e90728c28a6cf1f4b5754', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LXEUPYvJzyTsJyAiWUpM817Z60b-xoBKT3vo7H-zFd0.png?auto=webp&s=c8d3956e0983f58e7d603e4eeca735f1981c7b71', 'width': 1200}, 'variants': {}}]} |
Claude cli with glm and enabled memory? | 0 | Hi all,
I am running Claude cli with glm, trying to explore it doing research and stuff.
I read that’s there’s the memory function, is it possible for me to host a mcp that replicate this feature?
If anyone have done something similar can you kind point me in the direction 😀 | 2025-11-08T01:07:03 | https://www.reddit.com/r/LocalLLaMA/comments/1orc12a/claude_cli_with_glm_and_enabled_memory/ | Alrightly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orc12a | false | null | t3_1orc12a | /r/LocalLLaMA/comments/1orc12a/claude_cli_with_glm_and_enabled_memory/ | false | false | self | 0 | null |
Best local ai for m5? | 0 | Hey guys!
I just got an m5 MacBook Pro with 1tb storage and 24gb ram(I know it’s not ai configured but I am a photographer/video editor so give me a break 😅)
I would like to stop giving OpenAI my money every month to run their ai with no privacy.
What is the best local llm I can run on my hardware?
I would like it to help me with creative writing, content creation, and ideally be able to generate photos.
What are my best options?
Thank you so much! | 2025-11-08T01:05:48 | https://www.reddit.com/r/LocalLLaMA/comments/1orc023/best_local_ai_for_m5/ | Hour-Heron9030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orc023 | false | null | t3_1orc023 | /r/LocalLLaMA/comments/1orc023/best_local_ai_for_m5/ | false | false | self | 0 | null |
Modules or tips for NSFW text editing? | 0 | Hey LocalLLaMA! While this might not be the "best" place to ask this question. This community seems to have better knowledge around modules and tools.
I've been playing with LM Studio and Ollama (rookie) and struggling with NSFW content even subtle in some of my stories. I've successfully leveraged both tools for dozens of coding and design projects but have recently been trying to use them to edit my writing. While not all of my stories are NSFW, many are and I keep getting blocked simply trying to save myself a few hours editing for enhanced grammar and format. Lazy as it may be, this is now a challenge.
I've tried Command-R+, Violet Lotus 12B and a few others. Seems in recent revisions many of these resources have adopted Safety Modes blocking some of the edits. I've been high and low on google, tried various prompts, local vs. networked etc... There are many paid tools used for RP and content creation which I'm not interested in. I'm simply looking for a tool or workaround to save time, tighten up the stories and get them published.
At the end of the day I'm arguably still very green with more advanced utilization of these tools, I'll happily eat crow and answer any follow up questions. I appreciate the help! | 2025-11-08T00:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1orbt5y/modules_or_tips_for_nsfw_text_editing/ | Acrobatic-West490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1orbt5y | false | null | t3_1orbt5y | /r/LocalLLaMA/comments/1orbt5y/modules_or_tips_for_nsfw_text_editing/ | false | false | nsfw | 0 | null |
Unable to get expected output from llama.cpp with audio models | 7 | I'm trying to get audio-input working with llama.cpp. So far I've tried Voxtral and Qwen2.5 Omni. When I try Voxtral via `.\llama\llama-server --model .\Voxtral-Mini-3B-2507-Q4_K_M.gguf --mmproj .\mmproj-Voxtral-Mini-3B-2507-Q8_0.gguf` I end up with a working chat but every time I send in an audio file and ask it to describe it, the model says something like "I'm here to help, but I need a bit more context to provide an accurate and helpful response."
I know the mmproj is doing *something* because I'm running it with temp 0.0 and the response changes from input to input but it's always saying that it needs me to give it more info. If I put a bird song in and ask it what bird it is, the model asks me to describe the bird and where I saw it.
So I tried Qwen2.5 Omni via `.\llama\llama-server --model .\Qwen2.5-Omni-7B-Q4_K_M.gguf --mmproj .\mmproj-Qwen2.5-Omni-7B-f16.gguf` and this time the server just terminates. The last few lines it logs are
--- audio hparams ---
load_hparams: n_mel_bins: 128
load_hparams: proj_stack_factor: 0
load_hparams: model size: 2521.13 MiB
load_hparams: metadata size: 0.35 MiB
alloc_compute_meta: warmup with audio size = 3000
and it returns me to a waiting terminal. I get the same results for both when I try doing `llama-mtmd-cli` with `--audio sample.mp3` and `-p "Describe this."`.
I'm clearly missing something but I'm not sure what. | 2025-11-07T23:30:57 | https://www.reddit.com/r/LocalLLaMA/comments/1or9v99/unable_to_get_expected_output_from_llamacpp_with/ | TomatoCo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or9v99 | false | null | t3_1or9v99 | /r/LocalLLaMA/comments/1or9v99/unable_to_get_expected_output_from_llamacpp_with/ | false | false | self | 7 | null |
Kimi K2 Thinking is the #2 open-source model on our leaderboard | 1 | [removed] | 2025-11-07T22:35:01 | https://www.reddit.com/r/LocalLLaMA/comments/1or8imp/kimi_k2_thinking_is_the_2_opensource_model_on_our/ | Direct-Salary-4666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or8imp | false | null | t3_1or8imp | /r/LocalLLaMA/comments/1or8imp/kimi_k2_thinking_is_the_2_opensource_model_on_our/ | false | false | self | 1 | null |
Artificial Analysis has released a more in-depth benchmark breakdown of Kimi K2 Thinking (2nd image) | 114 | 2025-11-07T22:30:30 | https://www.reddit.com/gallery/1or8ehk | averagebear_003 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1or8ehk | false | null | t3_1or8ehk | /r/LocalLLaMA/comments/1or8ehk/artificial_analysis_has_released_a_more_indepth/ | false | false | default | 114 | null | |
Best LLM API for mass code translation? | 0 | Hello. I need to use an LLM to translate 300k+ code files into a different programming language. The code in all files is rather short and handles common tasks so the task should no be very difficult. Is there a api you can recommend me with a cood cost to performance ratio so i get usable results without going broke?
I am thankfull for any help :) | 2025-11-07T22:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1or8bcf/best_llm_api_for_mass_code_translation/ | CumDrinker247 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or8bcf | false | null | t3_1or8bcf | /r/LocalLLaMA/comments/1or8bcf/best_llm_api_for_mass_code_translation/ | false | false | self | 0 | null |
Working Dockerfile for gpt-oss-120b on 4x RTX 3090 (vLLM + MXFP4) | 0 | Has anyone here successfully set up gpt-oss-120b on ubuntu with 4x RTX 3090 GPUs using Docker and vLLM? Could anyone be kind enough to share their working Dockerfile?
I successfully built the image from this Dockerfile: [https://www.reddit.com/r/LocalLLaMA/comments/1mkefbx/gptoss120b\_running\_on\_4x\_3090\_with\_vllm/](https://www.reddit.com/r/LocalLLaMA/comments/1mkefbx/gptoss120b_running_on_4x_3090_with_vllm/)
But when running the container (with tensor-parallel-size=4, --quantization mxfp4, etc.), the vLLM engine crashes during model loading. Specifically: After loading the safetensors shards, the workers fail with a `ModuleNotFoundError: No module named 'triton.language.target_info'` in the mxfp4 quantization step (triton\_kernels/matmul\_ogs.py), I guess due to incompatibility between the custom Triton kernels and Triton 3.4.0 in the zyongye/vllm rc1 fork. | 2025-11-07T21:14:09 | https://www.reddit.com/r/LocalLLaMA/comments/1or6huy/working_dockerfile_for_gptoss120b_on_4x_rtx_3090/ | iamn0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or6huy | false | null | t3_1or6huy | /r/LocalLLaMA/comments/1or6huy/working_dockerfile_for_gptoss120b_on_4x_rtx_3090/ | false | false | self | 0 | null |
Follow-up to the my Dual-RTX 3060 build (originally posted on r/Ollama): Now hitting 30 t/s on 8b models using 145W Power Limiting! | 4 | Hi, everybody!
I wanted to share the updated details of my budget-friendly, high-performance AI server that many of you may remember seeing on r/Ollama a while back.
I've since moved the full guide over to r/FrugalAI, but the core strategy is all about maximizing **local LLM performance per dollar**.
The biggest game-changers for hitting **30 tokens/second** on 8b models with two RTX 3060 12GB cards were:
1. Heavy Ollama optimization (`num_batch`, Q4 quantization).
2. The **145W GPU Power Limit** (set via `nvidia-smi` in the root `crontab`) which completely eliminated thermal throttling and stabilized performance.
Check out the post for the full specs and setup commands. I'm looking forward to hearing what kinds of unique optimizations *this* community uses for local LLM inference! | 2025-11-07T21:10:25 | https://www.reddit.com/r/FrugalAI/comments/1or4v9u/start_here_my_dualgpu_700_buildthe_frugalai/ | FrugalAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1or6ela | false | null | t3_1or6ela | /r/LocalLLaMA/comments/1or6ela/followup_to_the_my_dualrtx_3060_build_originally/ | false | false | default | 4 | null |
Is there a LLM guide for Dummies ? | 3 | I am interested in learning how to use LLM Locally and explore models from hugging face but I’m too dumb. Any step by step guide? | 2025-11-07T21:05:03 | https://www.reddit.com/r/LocalLLaMA/comments/1or69mr/is_there_a_llm_guide_for_dummies/ | Funnytingles | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or69mr | false | null | t3_1or69mr | /r/LocalLLaMA/comments/1or69mr/is_there_a_llm_guide_for_dummies/ | false | false | self | 3 | null |
ROCm installation support on windows. HELP PLS. | 3 | I am really new to this process, and I recently did a cuda llama.cpp build on my 3060 mobile GPU, faced very less issues.
Now I wanted to utilize the VRAM of my main PC GPU which has amd gpu, 7900 gre.
I went away and installed HIP SDK from here:
[Install HIP SDK — HIP SDK installation (Windows)](https://rocm.docs.amd.com/projects/install-on-windows/en/latest/how-to/install.html#hip-install-full)
after that followed some github advise and reddit advise from official llama.cpp repo and [Guide: build llama.cpp on windows with AMD GPUs, and using ROCm : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/16d1hi0/guide_build_llamacpp_on_windows_with_amd_gpus_and/)
and
[llama.cpp guide - Running LLMs locally, on any hardware, from scratch](https://blog.steelph0enix.dev/posts/llama-cpp-guide/#building-the-llama-but-better) (this one is great for newbies)
installed llvm to provide openmp path as well.
after many iterations I came to this conclusion:
cmake --fresh -S . -B build -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
-DCMAKE_INSTALL_PREFIX="C:\Users\dreadwing\AppData\Local\llama.cpp\ROCm" `
-DLLAMA_BUILD_TESTS=OFF `
-DLLAMA_BUILD_EXAMPLES=ON `
-DLLAMA_BUILD_SERVER=ON `
-DCURL_INCLUDE_DIR="G:/vcpkg/packages/curl_x64-windows/include" `
-DCURL_LIBRARY="G:/vcpkg/packages/curl_x64-windows/lib/libcurl.lib" `
-DGPU_TARGETS=gfx1100 `
-DGGML_HIP=ON `
-DCMAKE_C_COMPILER=clang `
-DCMAKE_CXX_COMPILER=clang++ `
-DOpenMP_C_FLAGS="-fopenmp -IC:/PROGRA~1/LLVM/include" `
-DOpenMP_CXX_FLAGS="-fopenmp -IC:/PROGRA~1/LLVM/include" `
-DOpenMP_C_LIB_NAMES="libomp" `
-DOpenMP_CXX_LIB_NAMES="libomp" `
-DOpenMP_libomp_LIBRARY="C:/PROGRA~1/LLVM/lib/libomp.lib"
And it gives me this output:
-- The C compiler identification is Clang 20.0.0 with GNU-like command-line
-- The CXX compiler identification is Clang 20.0.0 with GNU-like command-line
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/AMD/ROCm/6.4/bin/clang.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/AMD/ROCm/6.4/bin/clang++.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMAKE_BUILD_TYPE=Release
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.51.2.windows.1")
-- The ASM compiler identification is Clang with GNU-like command-line
-- Found assembler: C:/Program Files/AMD/ROCm/6.4/bin/clang.exe
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Check if compiler accepts -pthread
-- Check if compiler accepts -pthread - no
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- Warning: ccache not found - consider installing it for faster compilation or disable this warning with GGML_CCACHE=OFF
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- GGML_SYSTEM_ARCH: x86
-- Including CPU backend
-- Found OpenMP_C: -fopenmp -IC:/PROGRA~1/LLVM/include (found version "5.1")
-- Found OpenMP_CXX: -fopenmp -IC:/PROGRA~1/LLVM/include (found version "5.1")
-- Found OpenMP: TRUE (found version "5.1")
-- x86 detected
-- Adding CPU backend variant ggml-cpu: -march=native
-- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS
-- Performing Test HIP_CLANG_SUPPORTS_PARALLEL_JOBS - Success
-- HIP and hipBLAS found
-- Including HIP backend
-- ggml version: 0.9.4
-- ggml commit: 9eb9a1331
-- Found CURL: G:/vcpkg/packages/curl_x64-windows/lib/libcurl.lib (found version "8.17.0-DEV")
-- Configuring done (3.3s)
-- Generating done (0.2s)
-- Build files have been written to: G:/llama/llama.cpp/build
All is going well but as soon as I run the llama commands, the output is empty, no nothing nada,
PS G:\\llama\\llama.cpp> llama-cli.exe --help
PS G:\\llama\\llama.cpp> llama-batched.exe
PS G:\\llama\\llama.cpp> llama-bench.exe
PS G:\\llama\\llama.cpp>
something like this, nothing is printing.
I am running latest MSVC runtime and in visual studio 2022 I also installed latest msvc.
I think I am missing something really acute, can someone please help me in my findings?
Much appreciated, Thanks.
EDIT:
I did a standalone llama.cpp build that just works with CPU and guess what, that is also behaving in the same manner, now I am getting a little clueless, dependency is not getting resolved | 2025-11-07T20:57:04 | https://www.reddit.com/r/LocalLLaMA/comments/1or61zz/rocm_installation_support_on_windows_help_pls/ | CyBerDreadWing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or61zz | false | null | t3_1or61zz | /r/LocalLLaMA/comments/1or61zz/rocm_installation_support_on_windows_help_pls/ | false | false | self | 3 | null |
Who all agree with this defination of AGI ? | 0 | A paper by [safe Agi and Scale AI](https://www.agidefinition.ai/paper.pdf)
According to them, having a models score maximum in these 10 categories will lead to Artificial General Inteligence.
* General Knowledge (K)
* Reading and Writing Ability (RW)
* Mathematical Ability (M)
* On-the-Spot Reasoning (R)
* Working Memory (WM)
* **Long-Term Memory Storage (MS)**
* **Long-Term Memory Retrieval (MR)**
* Visual Processing (V)
* Auditory Processing (A)
* Speed (S)
And you can easy pick the Odd one out, that has not been yet solved by major labs, foundatioanlly in AI model.
So yah looks good? A new model that will cover all these and achieve Agi.. | 2025-11-07T20:44:16 | https://www.reddit.com/r/LocalLLaMA/comments/1or5qab/who_all_agree_with_this_defination_of_agi/ | GlitteringAdvisor530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or5qab | false | null | t3_1or5qab | /r/LocalLLaMA/comments/1or5qab/who_all_agree_with_this_defination_of_agi/ | false | false | self | 0 | null |
Can someone explain Kokoro Space on HF for me? | 5 | 2025-11-07T20:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1or5ost/can_someone_explain_kokoro_space_on_hf_for_me/ | dreamyrhodes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or5ost | false | null | t3_1or5ost | /r/LocalLLaMA/comments/1or5ost/can_someone_explain_kokoro_space_on_hf_for_me/ | false | false | 5 | null | ||
Nvidia may cancel the RTX 50 Super due to a shortage of 3GB GDDR7 memory | 146 | For now it's just a rumor, but it seems the RTX Super cards will take a while to be released, if they ever are
https://www.techpowerup.com/342705/gddr7-shortage-could-stop-nvidia-geforce-rtx-50-series-super-rollout
https://www.guru3d.com/story/nvidia-may-cancel-or-delay-geforce-rtx-50-super-series-amid-gddr7-memory-shortage/
And we also have RAM prices skyrocketing due to high demand | 2025-11-07T20:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/1or5j9z/nvidia_may_cancel_the_rtx_50_super_due_to_a/ | Spiderboyz1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or5j9z | false | null | t3_1or5j9z | /r/LocalLLaMA/comments/1or5j9z/nvidia_may_cancel_the_rtx_50_super_due_to_a/ | false | false | self | 146 | {'enabled': False, 'images': [{'id': 'WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=108&crop=smart&auto=webp&s=044f5b39fa5a02ae74231c08af7b8abd82dab14a', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=216&crop=smart&auto=webp&s=c6832ec42c9a607f0b96f37497c0a0ea31c0525b', 'width': 216}, {'height': 189, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=320&crop=smart&auto=webp&s=24fe2e958989b3a1ee3caabe07def03c54438574', 'width': 320}, {'height': 378, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=640&crop=smart&auto=webp&s=2ce35f1e9174cbf9353d6fa51677953c780595c3', 'width': 640}, {'height': 567, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=960&crop=smart&auto=webp&s=2ccec96be7e5e86da3aebb37f9b766d28d41727c', 'width': 960}, {'height': 638, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?width=1080&crop=smart&auto=webp&s=ae50fef88c97e158f2aadf69c72a00987fa0ab57', 'width': 1080}], 'source': {'height': 736, 'url': 'https://external-preview.redd.it/WjxfTpGZXDYcseppN_BS-JwjKz--VJljF1AheGesi7s.jpeg?auto=webp&s=2119d55063043b9f793cde0063f43a1db1640a9d', 'width': 1245}, 'variants': {}}]} |
Vulkan vs. Rocm with R9700 AI Pro | 2 | Vulkan is small and fast, you can use models damn near the maximum 32 G vram with a 30k context window or even go beyond that with a 39 gb model to do partial vram offloading and it will still work with 2-3 tokens/s. Rocm is big, and you cant use model even if it's like 30 gb in size, it has to be substantially lower than the upper limit of the vram.
Also rocm will automatically OC the crap out of your graphics card while drawing less than the tpd, basically what you would do when OC-ing. vulkan doesn't do OC, it will just use the maximum 300W power and uses a normal clock speed of 2.3 to 3 GHZ, instead of 3.4 GHz from OC by rocm... | 2025-11-07T20:30:53 | OldEffective9726 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1or5eat | false | null | t3_1or5eat | /r/LocalLLaMA/comments/1or5eat/vulkan_vs_rocm_with_r9700_ai_pro/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': '4b0vdrjp9wzf1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=108&crop=smart&auto=webp&s=62b765914c099888760ba626741d8b8d8f6b145c', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=216&crop=smart&auto=webp&s=019d2130682e252e46e841f2dd73579b307c6d55', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=320&crop=smart&auto=webp&s=86c182856bc216bc4feb29e784a1de642bb1d824', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=640&crop=smart&auto=webp&s=5f06060dc9b0658ef542e8f8f8a3dc27f12ed6f8', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=960&crop=smart&auto=webp&s=a42152a60085e0a38fc892f8b38b06aeda7d49c0', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?width=1080&crop=smart&auto=webp&s=bde6d495fe07b63458f4309f5663b4e03de97e2a', 'width': 1080}], 'source': {'height': 4096, 'url': 'https://preview.redd.it/4b0vdrjp9wzf1.jpeg?auto=webp&s=195b2ce8139c52d93967d64d6f034194ef9a9172', 'width': 3072}, 'variants': {}}]} | |
no cuda0 found just vulkan driver. easy question for a noob | 0 | Hello, i have this trouble and i don't know how to resolve it. sure it's a stupid question but i 've lost too many hours tryng different ways.
I have cuda 13 installed and latest nvidia drivers.
Fresh w10 installation.
I can use only vulcan driver... | 2025-11-07T20:27:34 | Flimsy_Leadership_81 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1or5b9n | false | null | t3_1or5b9n | /r/LocalLLaMA/comments/1or5b9n/no_cuda0_found_just_vulkan_driver_easy_question/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'jmo6y8e89wzf1', 'resolutions': [{'height': 9, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=108&crop=smart&auto=webp&s=ee87b8416d11b4a59ba025f2e4842177d7100f87', 'width': 108}, {'height': 18, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=216&crop=smart&auto=webp&s=55e2b21b03c1a27cdffb4811ee7b035eb8a6157a', 'width': 216}, {'height': 26, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=320&crop=smart&auto=webp&s=d5d281bbee4f06aa2a6cdb84c8aee84ca077ae99', 'width': 320}, {'height': 53, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=640&crop=smart&auto=webp&s=28dc13a641a12dcb5e0c0cf94aaa370f1c93ab35', 'width': 640}, {'height': 80, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=960&crop=smart&auto=webp&s=c8664a5893849221bf0fb21dddf8e6a8b0d39d10', 'width': 960}, {'height': 90, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?width=1080&crop=smart&auto=webp&s=e71468f2c2296d0676ffe117e4e6685519720026', 'width': 1080}], 'source': {'height': 154, 'url': 'https://preview.redd.it/jmo6y8e89wzf1.png?auto=webp&s=39cefc56ec07d5e81fd4dd579c7a64f23130b388', 'width': 1832}, 'variants': {}}]} | |
What is dreaming? Synthetic data generation. | 4 | DreamGym from Meta is a new framework that lets AI agents train via synthetic reasoning-based experiences: [https://x.com/jiqizhixin/status/1986686971331195223](https://x.com/jiqizhixin/status/1986686971331195223)
Paper: [https://arxiv.org/abs/2511.03773](https://arxiv.org/abs/2511.03773) | 2025-11-07T20:22:07 | https://www.reddit.com/r/LocalLLaMA/comments/1or566s/what_is_dreaming_synthetic_data_generation/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or566s | false | null | t3_1or566s | /r/LocalLLaMA/comments/1or566s/what_is_dreaming_synthetic_data_generation/ | false | false | self | 4 | null |
Open-sourcing how we ship multi-user MCP servers to production with Oauth and secrets management built-in | 1 | [removed] | 2025-11-07T20:21:27 | https://www.reddit.com/r/LocalLLaMA/comments/1or55jk/opensourcing_how_we_ship_multiuser_mcp_servers_to/ | MostlyGreat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or55jk | false | null | t3_1or55jk | /r/LocalLLaMA/comments/1or55jk/opensourcing_how_we_ship_multiuser_mcp_servers_to/ | false | false | self | 1 | null |
Open-sourcing how we ship multi-user MCP servers to production with Oauth and secrets management built-in | 1 | [removed] | 2025-11-07T20:20:30 | https://www.reddit.com/r/LocalLLaMA/comments/1or54no/opensourcing_how_we_ship_multiuser_mcp_servers_to/ | MostlyGreat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or54no | false | null | t3_1or54no | /r/LocalLLaMA/comments/1or54no/opensourcing_how_we_ship_multiuser_mcp_servers_to/ | false | false | self | 1 | null |
Strange Issue with VRAM (ecc with non-ecc) Types on Vega VII and Mi50s | 0 | I posted this as an issue in llama cpp but I wanted to post it here to see if anyone has seen this issue before because it could just be something simple. I have a system with a Vega VII card (32 GB) and two Mi50s. I build llama cpp for gfx906 which is the same for all the cards. They are nearly identical, in a sense. I am able to inference on each card fine and I am able to inference on both Mi50s at the same time but if I add the Vega VII, it causes the issue below.
After countless frustrating troubleshooting with ChatGPT, after asking it to trace through each step, reference code, etc it came to the conclusion that there aren't specific build targets for llama cpp for both ECC VRAM and non-ECC VRAM. The Vega VII does not have it but the Mi50s do. I am including the ChatGPT comments if anyone is familiar with the intricacies of such things.
I have rebuilt ROCm 5 times. It's currently on 7.0.1 with the tensile stuff copied over from rocblas. I have tried all the way back to 6.2 and the error remains unchanged. I also know that inferencing with mixed VRAM types works on CUDA, at least with different build targets. It seems like one would be able to build with both variations of gfx906 but the most specific version don't seem to be build targets in llama.cpp.
Any help is much appreciated.
**Good catch — the error from the compiler:**
clang: error: invalid offload arch combinations: 'gfx906' and 'gfx906:sramecc+:xnack-'
indicates that the compiler does **not** support specifying a mix of target-ids like `gfx906` **and** `gfx906:sramecc+:xnack-` in the same `--offload-arch` list. That means my earlier suggestion to list multiple variants *that way* was incorrect.
From the documentation:
>
ROCm error: invalid device function
current device: 0, in function ggml_cuda_compute_forward at /home/name/Desktop/LLAMA_NEW/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:2722
/home/name/Desktop/LLAMA_NEW/llama.cpp/ggml/src/ggml-cuda/ggml-cuda.cu:90: ROCm error
err
[New LWP 1370285]
[New LWP 1370288]
[New LWP 1370289]
[New LWP 1370290]
[New LWP 1370291]
[New LWP 1370292]
[New LWP 1370293]
[New LWP 1370294]
[New LWP 1370295]
[New LWP 1370296]
[New LWP 1370297]
[New LWP 1370298]
[New LWP 1370299]
[New LWP 1370300]
[New LWP 1370301]
[New LWP 1370302]
[New LWP 1370303]
[New LWP 1370304]
[New LWP 1370305]
[New LWP 1370306]
[New LWP 1370307]
[New LWP 1370308]
[New LWP 1370309]
[New LWP 1370310]
[New LWP 1370311]
[New LWP 1370312]
[New LWP 1370314]
[New LWP 1370326]
[New LWP 1370327]
[New LWP 1370328]
[New LWP 1370329]
[New LWP 1370330]
[New LWP 1370331]
[New LWP 1370332]
[New LWP 1370333]
[New LWP 1370334]
[New LWP 1370335]
[New LWP 1370336]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007313506ea42f in __GI___wait4 (pid=1370353, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 ../sysdeps/unix/sysv/linux/wait4.c: No such file or directory.
#0 0x00007313506ea42f in __GI___wait4 (pid=1370353, stat_loc=0x0, options=0, usage=0x0) at ../sysdeps/unix/sysv/linux/wait4.c:30
30 in ../sysdeps/unix/sysv/linux/wait4.c
#1 0x0000731350d7058b in ggml_print_backtrace () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-base.so
#2 0x0000731350d70723 in ggml_abort () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-base.so
#3 0x000073134f85def2 in ggml_cuda_error(char const*, char const*, char const*, int, char const*) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-hip.so
#4 0x000073134f865a54 in evaluate_and_capture_cuda_graph(ggml_backend_cuda_context*, ggml_cgraph*, bool&, bool&, bool&) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-hip.so
#5 0x000073134f8630bf in ggml_backend_cuda_graph_compute(ggml_backend*, ggml_cgraph*) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-hip.so
#6 0x0000731350d8be57 in ggml_backend_sched_graph_compute_async () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libggml-base.so
#7 0x0000731350ea0811 in llama_context::graph_compute(ggml_cgraph*, bool) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libllama.so
#8 0x0000731350ea20cc in llama_context::process_ubatch(llama_ubatch const&, llm_graph_type, llama_memory_context_i*, ggml_status&) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libllama.so
#9 0x0000731350ea7cb9 in llama_context::decode(llama_batch const&) () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libllama.so
#10 0x0000731350ea8c2f in llama_decode () from /home/name/Desktop/LLAMA_NEW/llama.cpp/build/bin/libllama.so
#11 0x0000561f239cc7a8 in common_init_from_params(common_params&) ()
#12 0x0000561f2389f349 in server_context::load_model(common_params const&) ()
#13 0x0000561f238327e8 in main ()
[Inferior 1 (process 1370284) detached]
Aborted (core dumped)
| 2025-11-07T20:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1or534m/strange_issue_with_vram_ecc_with_nonecc_types_on/ | dionysio211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or534m | false | null | t3_1or534m | /r/LocalLLaMA/comments/1or534m/strange_issue_with_vram_ecc_with_nonecc_types_on/ | false | false | self | 0 | null |
Best model for voice line generation | 1 | I'm trying to generate voice lines for a video game character. The only requirement is that I can adjust the emotions of the voice line. It also has to able to run on my RTX 2060 6gb. Kokoro sounds good but it seems like I can't adjust the emotions. I don't need voice cloning or training if it already has good voices but that's a plus. I also don't need real time capabilities.
What's the best model for my use case? Thanks. | 2025-11-07T20:06:12 | https://www.reddit.com/r/LocalLLaMA/comments/1or4rcn/best_model_for_voice_line_generation/ | oldchicken34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or4rcn | false | null | t3_1or4rcn | /r/LocalLLaMA/comments/1or4rcn/best_model_for_voice_line_generation/ | false | false | self | 1 | null |
Kimi K2 Thinking SECOND most intelligent LLM according to Artificial Analysis | 153 | 2025-11-07T20:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1or4q4m/kimi_k2_thinking_second_most_intelligent_llm/ | teatime1983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or4q4m | false | null | t3_1or4q4m | /r/LocalLLaMA/comments/1or4q4m/kimi_k2_thinking_second_most_intelligent_llm/ | false | false | 153 | null | ||
Cerebras/Kimi-Linear-REAP-35B-A3B-Instruct · Hugging Face | 103 | 2025-11-07T19:44:19 | https://huggingface.co/cerebras/Kimi-Linear-REAP-35B-A3B-Instruct | maroule | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1or46rv | false | null | t3_1or46rv | /r/LocalLLaMA/comments/1or46rv/cerebraskimilinearreap35ba3binstruct_hugging_face/ | false | false | 103 | {'enabled': False, 'images': [{'id': 'A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=108&crop=smart&auto=webp&s=fc3a89a07d3aee9e512db40844a7b15968d0f00a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=216&crop=smart&auto=webp&s=6765d3360475c8c345182e60c2a89914d196b69b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=320&crop=smart&auto=webp&s=3352aa7ce8fb2a67bd128af88e637ef928299b9e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=640&crop=smart&auto=webp&s=f4b51e11f834d5f8b364cbfd4018254e64276366', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=960&crop=smart&auto=webp&s=3c671177f5642d738465de28db0ce04aed31ec55', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?width=1080&crop=smart&auto=webp&s=f4fd15a9522c96e261cf737df176555189c61ee9', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/A5NFpNf7XiO2gm9NBBYXrtttQxX4Zw8QmamAzVNdgao.png?auto=webp&s=ed5616b9aeedd0b7bb4efe19dd30837afc17d9af', 'width': 1200}, 'variants': {}}]} | ||
I just wanted to say - Hi - to Kimi K2 Thinking | 13 | Man... The existential angst is palpable... *sigh*
><think>The user said "Hi" in English. The AI Bot is supposed to be helpful and intelligent, and always answer in German.
>So I should respond to the greeting in German. A natural German response to "Hi" would be something like "Hallo!" or "Guten Tag!" Since the user used a casual "Hi", I can respond with a friendly, casual German greeting.
>I should also offer help, as that's what a helpful assistant does. The prompt says AI Bot is helpful and intelligent.
>So a good response would be:
"Hallo! Wie kann ich Ihnen helfen?" (Hello! How can I help you?)
>Or more casually:
"Hallo! Wie kann ich dir helfen?" (using "du" form)
>Since the user wrote in English but just said "Hi", it's ambiguous whether they want to continue in English or German. But the instruction is clear: AI Bot ALWAYS answers in German. So I must respond in German regardless.
>I'll go with a friendly, helpful response in German. "Hallo! Wie kann ich Ihnen helfen?" is polite and standard. Or I could be even more conversational.
>Let me think about the best approach:
- Acknowledge the greeting
- Be helpful and offer assistance
- Use German exclusively
>"Hallo! Schön, dass Sie da sind. Wie kann ich Ihnen helfen?" - This is friendly and helpful.
>Or simply: "Hallo! Wie kann ich Ihnen helfen?" - This is direct and professional.
>Given that the user just said "Hi", a slightly more conversational but still professional tone seems appropriate. I'll go with:
>"Hallo! Wie kann ich Ihnen helfen?"
>Actually, let me reconsider. The user wrote "Hi" which is very informal. In German, I could respond with:
"Hallo! Wie kann ich dir helfen?" (informal "you")
>But since I don't know the context or relationship, the formal "Ihnen" is safer and more professional for an AI assistant. The prompt says "helpful and intelligent" but doesn't specify formality level. In German-speaking contexts, it's generally better to start formal.
>So my final answer: "Hallo! Wie kann ich Ihnen helfen?"</think>Hallo! Wie kann ich Ihnen helfen?
Seriously though, wonderfull model - just conversationally, and for asking it things, letting it use search... The voice and vibe of Kimi K2 are still present. :) | 2025-11-07T19:42:40 | https://www.reddit.com/r/LocalLLaMA/comments/1or45ci/i_just_wanted_to_say_hi_to_kimi_k2_thinking/ | harlekinrains | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or45ci | false | null | t3_1or45ci | /r/LocalLLaMA/comments/1or45ci/i_just_wanted_to_say_hi_to_kimi_k2_thinking/ | false | false | self | 13 | null |
The best tools I’ve found for evaluating AI voice agents | 5 | I’ve been working on a voice agent project recently and quickly realized that building the pipeline (STT → LLM → TTS) is the easy part. The real challenge is evaluation, making sure the system performs reliably across accents, contexts, and multi-turn conversations.
I went down the rabbit hole of voice eval tools and here are the ones I found most useful:
1. **Deepgram Eval**
* Strong for transcription accuracy testing.
* Provides detailed WER (word error rate) metrics and error breakdowns.
2. **Speechmatics**
* I used this mainly for multilingual evaluation.
* Handles accents/dialects better than most engines I tested.
3. **Voiceflow Testing**
* Focused on evaluating conversation flows end-to-end.
* Helpful when testing dialogue design beyond just turn-level accuracy.
4. **Play.h.t Voice QA**
* More on the TTS side, quality and naturalness of synthetic voices.
* Useful if you care about voice fidelity as much as the NLP part.
5. **Maxim AI**
* This stood out because it let me run *structured evals on the whole voice pipeline*.
* Latency checks, persona-based stress tests, and pre/post-release evaluation of agents.
* Felt much closer to “real user” testing than just measuring WER.
I’d love to hear if anyone here has explored other approaches to **systematic evaluation of voice agents,** especially for multi-turn robustness or human-likeness metrics. | 2025-11-07T19:31:35 | https://www.reddit.com/r/LocalLLaMA/comments/1or3uyd/the_best_tools_ive_found_for_evaluating_ai_voice/ | llamacoded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or3uyd | false | null | t3_1or3uyd | /r/LocalLLaMA/comments/1or3uyd/the_best_tools_ive_found_for_evaluating_ai_voice/ | false | false | self | 5 | null |
Zero Configuration AI | 1 | [removed] | 2025-11-07T19:30:43 | https://www.reddit.com/r/LocalLLaMA/comments/1or3u4z/zero_configuration_ai/ | NorthComplaint7631 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or3u4z | false | null | t3_1or3u4z | /r/LocalLLaMA/comments/1or3u4z/zero_configuration_ai/ | false | false | 1 | null | |
Some of the best tools for simulating LLM agents to test and evaluate behavior | 1 |
I've been looking for tools that go beyond one-off runs or traces, something that lets you simulate full tasks, test agents under different conditions, and evaluate performance as prompts or models change.
Here’s what I’ve found so far:
* LangSmith – Strong tracing and some evaluation support, but tightly coupled with LangChain and more focused on individual runs than full-task simulation.
* AutoGen Studio – Good for simulating agent conversations, especially multi-agent ones. More visual and interactive, but not really geared for structured evals.
* AgentBench – More academic benchmarking than practical testing. Great for standardized comparisons, but not as flexible for real-world workflows.
* CrewAI – Great if you're designing coordination logic or planning among multiple agents, but less about testing or structured evals.
* Maxim AI – This has been the most complete simulation + eval setup I’ve used. You can define end-to-end tasks, simulate realistic user interactions, and run both human and automated evaluations. Super helpful when you’re debugging agent behavior or trying to measure improvements. Also supports prompt versioning, chaining, and regression testing across changes.
* AgentOps – More about monitoring and observability in production than task simulation during dev. Useful complement, though.
From what I’ve tried, Maxim and Langsmith are the only one that really brings simulation + testing + evals together. Most others focus on just one piece.
If anyone’s using something else for evaluating agent behavior in the loop (not just logs or benchmarks), I’d love to hear it. | 2025-11-07T19:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1or3ls7/some_of_the_best_tools_for_simulating_llm_agents/ | Educational-Bison786 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or3ls7 | false | null | t3_1or3ls7 | /r/LocalLLaMA/comments/1or3ls7/some_of_the_best_tools_for_simulating_llm_agents/ | false | false | self | 1 | null |
How do I use the NPU in my s25 for AI inference? | 0 | Basically I want to run LLM in the NPU but I really don't know what app to use, I've be using pocketpal but it support GPU only.
I also ran local dream for NPU SD inference with success, even though I was mentally unable to convert bigger SD models to the weird format used by the app.
any suggestion about what apps can I use? | 2025-11-07T19:09:33 | https://www.reddit.com/r/LocalLLaMA/comments/1or3a78/how_do_i_use_the_npu_in_my_s25_for_ai_inference/ | weener69420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or3a78 | false | null | t3_1or3a78 | /r/LocalLLaMA/comments/1or3a78/how_do_i_use_the_npu_in_my_s25_for_ai_inference/ | false | false | self | 0 | null |
Recently built my first LLM and im wondering why there hasn't been more innovation on moving away from transformers and gradient descent? | 53 | So please excuse my lack of knowledge in this area as im new to AI/LLMs but I just recently build my first micro llm and I dunno something about them seems wrong.
Is the industry stuck on transformers and gradient descent because coming up with alternatives is a hugely difficult problem or is the industry just having blinders on?
I like a lot of the research about sparse models that use hebbian/oja and i know these come with challenges like catastrophic interference. But this seems like a very solvable problem.
Anyways im starting to tinker with my micro llm to see if I can get rid of gradient descent and traditional transformers and see if I cant make a sparse model based on hebbian/oja at the very least in a small scale
Again pardon my nativity, my expertise is mostly in backend systems and architecture. I have very little exposure to AI/LLMs until recently. | 2025-11-07T19:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/1or323v/recently_built_my_first_llm_and_im_wondering_why/ | CelebrationMinimum50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or323v | false | null | t3_1or323v | /r/LocalLLaMA/comments/1or323v/recently_built_my_first_llm_and_im_wondering_why/ | false | false | self | 53 | null |
Finally, an open-source voice AI that controls more than just the basics on Android | 3 | Hey everyone,
Just wanted to share something really cool my friend is building. It's an open-source AI assistant for Android called Panda. The way it works is pretty neat: you can control your phone just with your voice.
And I don't just mean "set a timer" or basic commands. It's smart enough to actually use your apps for you. It uses intelligent screen understanding to understand your voice in natural language and execute complex, multi-step tasks across any app.
Like, you can literally just watch it:
- Open Netflix and play Breaking Bad with just one voice command.
- Find a video on TikTok about how to tie a tie. Panda will navigate the interface, search the content, and perform the whole operation for you.
- Open Amazon and search for "USB 3 cable 3 feet long", handling the e-commerce site to find exactly what you need.
And it will open the app, tap the buttons, and do the whole thing. He's been really focused on making it super fast, too.
### Why I'm sharing this
Honestly, I just think it's a cool project. One of the ideas he has for it is helping with accessibility, which I think is awesome. It could be super helpful for anyone who has a hard time with touch screens.
Plus, I am using this while I'm cooking or my hands are messy and I want to change a song or reply to a text or in the morning when I was eating breakfast and running late. I just said, "Hey Panda, can you book an Uber from location A to location B?" and it just handled it. Pretty interesting, right?
He's gotten some love from a few tech blogs and YouTubers, so I figured you all might like to see it, too.
### Give it a try & contribute
The project is fully open sourced. Wanna dive into the code, explore how it works, and share your thoughts. He would love to hear what this community thinks!
📱 Got an Android device? Then this one’s a must-try.
👉 Check it out on [GitHub](https://github.com/Ayush0Chaudhary/blurr)
So, what do you all think? Could this be the next big step in accessibility? It can really help people who aren’t very educated but know how to use a smartphone. I installed it on my father’s phone, and he has found it really helpful—now it’s easy for him to reply to someone or find a contact in his phone book. It seems to have hundreds of use cases. Let’s hear your thoughts and experiences below!
| 2025-11-07T18:30:12 | https://v.redd.it/sl9cwkgbnvzf1 | nik-55 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1or28k8 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/sl9cwkgbnvzf1/DASHPlaylist.mpd?a=1765132227%2CYmI2M2M3YTQ4OWFhOTYxZTNjZDM2Y2MyN2E5MmIwM2Q1MjQzOTdlNjU2MzI3MzZkYjZhMzVhODA4MzU0Y2FlMg%3D%3D&v=1&f=sd', 'duration': 103, 'fallback_url': 'https://v.redd.it/sl9cwkgbnvzf1/CMAF_480.mp4?source=fallback', 'has_audio': True, 'height': 428, 'hls_url': 'https://v.redd.it/sl9cwkgbnvzf1/HLSPlaylist.m3u8?a=1765132227%2CZWNlYmVlN2VkZDhlNDU3YTQyMjUzYWE0MjI3ZjlkODM4ZWJjMjYzZWMxYzdmMGZiZjA3YmYwMTVmZWY0ZTNjYQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/sl9cwkgbnvzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 854}} | t3_1or28k8 | /r/LocalLLaMA/comments/1or28k8/finally_an_opensource_voice_ai_that_controls_more/ | false | false | 3 | {'enabled': False, 'images': [{'id': 'N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=108&crop=smart&format=pjpg&auto=webp&s=779bd7bb6792fc46b1f5f4d70b3a799756bb2218', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=216&crop=smart&format=pjpg&auto=webp&s=c972bfc17a7491c1a9a064c37f95d5dee23d673f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=320&crop=smart&format=pjpg&auto=webp&s=249c7023458b622557925c683f1433f80ed84d2e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=640&crop=smart&format=pjpg&auto=webp&s=c9a85c95c3ded9929f11f85d4f26282a1a3eb630', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=960&crop=smart&format=pjpg&auto=webp&s=57a14a4c7b342485053059970954b3734e760129', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=4ae571b975141e4ee0f74a323e3e99ab5fe4e895', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/N3Y0YXhqZ2JudnpmMTsf1aLVBAVrwUC9PJG-THjLWebDOsH4aUcJPSJRtcXC.png?format=pjpg&auto=webp&s=86005954325eb7052ca0a0f2f694046499b6280d', 'width': 1280}, 'variants': {}}]} | |
Want to Learn More About Agentic AI | 5 | Hey everyone — I’ve built a few agentic AI systems around SaaS automation and coding tools.
I’m familiar with LangChain, LangGraph, RAG, tool calling, and MCP, but I want to learn more by contributing to real projects.
If you’re working on something in this space or know an open-source project looking for contributors, I’d love to help out and learn from it.
| 2025-11-07T18:21:38 | https://www.reddit.com/r/LocalLLaMA/comments/1or2077/want_to_learn_more_about_agentic_ai/ | Superb_Practice_4544 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or2077 | false | null | t3_1or2077 | /r/LocalLLaMA/comments/1or2077/want_to_learn_more_about_agentic_ai/ | false | false | self | 5 | null |
Custom AM5 x SXM2 Motherboard for a Budget AI Rig | 1 | Hey everyone, I'm looking for some feedback on my idea of making a custom motherboard that combines the AM5 socket with the SXM2 socket for an affordable and cost-effective AI rig for Ryzen CPU and V100 GPU. I'm a bit new to local AIs, and I'm also tight on budget.
While there are a lot of people using the SXM2-PCIe adapter in the Chinese AI community, but I figure that's a waste of the SXM2's extra bandwidth. Hence the idea of an SXM2 socket connected directly to an AM5 motherboard.
How feasible would that be? | 2025-11-07T18:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1or1jtc/custom_am5_x_sxm2_motherboard_for_a_budget_ai_rig/ | aero-spike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or1jtc | false | null | t3_1or1jtc | /r/LocalLLaMA/comments/1or1jtc/custom_am5_x_sxm2_motherboard_for_a_budget_ai_rig/ | false | false | self | 1 | null |
I fine-tuned Gemma 3 1B for CLI command translation... but it runs 100% locally. 810MB, 1.5s inference on CPU. | 98 | **I built a locally-running NL→CLI translator by fine-tuning Gemma 3 1B with QLoRA.**
[\[Link to repo\]](https://github.com/pranavkumaarofficial/nlcli-wizard)
**TL;DR:** Built a privacy-first CLI copilot. No API calls, no subscriptions. Just 810MB of local AI that converts natural language to CLI commands.
https://preview.redd.it/jpo4dd4jivzf1.png?width=1024&format=png&auto=webp&s=e3aa7bc9af223d3ab2e4c3eb9156907994885cf5
I wanted to try out something like a CLI wizard: running locally and loaded within the package. Now of course there is an overhead of embedding an SLM in every package.
**But definitely makes sense for complex, domain-specific tools with non-obvious CLI patterns**.
Instead of: kubectl get pods -n production --field-selector status.phase=Running
Could be: kubectl -w "show me running pods in production"
Shell-GPT is the closest tool that is available but doesnt do what I wanted, and ofcourse uses closedsource LLMs
**Here is what I tried:**
Takes natural language like "show my environments sorted by size" and outputs the correct CLI command, eg : `venvy ls --sort size`.
**Key stats:**
* \~1.5s inference on CPU (4 threads)
* 810MB quantized model (Q4\_K\_M with smart fallback)
* Trained on Colab T4 in <1 hr
# The Setup
**Base model:** Gemma 3-1B-Instruct (March 2025 release)
**Training:** Unsloth + QLoRA (only 14M params trained, 1.29% of model)
**Hardware:** Free Colab T4, trained in under 1 hour
**Final model:** 810MB GGUF (Q4\_K\_M with smart fallback to Q5/Q6)
**Inference:** llama.cpp, \~1.5s on CPU (4 threads, M1 Mac / Ryzen)
**The architecture part:** Used smart quantization with mixed precision (Q4\_K/Q5\_0/Q6\_K) that adapts per-layer based on tensor dimensions. Some layers can't be quantized to 4-bit without accuracy loss, so llama.cpp automatically upgrades them to 5/6-bit.
Training loss was extremely clean - 0.135 (train), 0.142 (val) with zero overfitting across 3 epochs.
Limitations (being honest here)
1. **Model size:** 810MB is chunky. Too big for Docker images, fine for dev machines.
2. **Tool-specific:** Currently only works for `venvy`. Need to retrain for kubectl/docker/etc.
3. **Latency:** 1.5s isn't instant. Experts will still prefer muscle memory.
4. **Accuracy:** 80-85% means you MUST verify before executing.
# Safety
Always asks for confirmation before executing. I'm not *that* reckless.
confirm = input("Execute? [Y/n] ")
**Still working on this : to check where this can really help, but yeah pls go check it out**
GitHub: [\[Link to repo\]](https://github.com/pranavkumaarofficial/nlcli-wizard) | 2025-11-07T17:58:57 | https://www.reddit.com/r/LocalLLaMA/comments/1or1e7p/i_finetuned_gemma_3_1b_for_cli_command/ | theRealSachinSpk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or1e7p | false | null | t3_1or1e7p | /r/LocalLLaMA/comments/1or1e7p/i_finetuned_gemma_3_1b_for_cli_command/ | false | false | 98 | null | |
FULL Cursor Agent 2.0 System Prompt and Internal Tools | 4 | Latest update: 07/11/2025
I’ve just extracted and published the FULL Cursor Agent 2.0 System prompt and Internal tools. Over 8,000 tokens.
You can check it out here: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools) | 2025-11-07T17:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1or1e0s/full_cursor_agent_20_system_prompt_and_internal/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or1e0s | false | null | t3_1or1e0s | /r/LocalLLaMA/comments/1or1e0s/full_cursor_agent_20_system_prompt_and_internal/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': '5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=108&crop=smart&auto=webp&s=0478b64ed5d2278330356b088ab4b7f8149afc20', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=216&crop=smart&auto=webp&s=a938817a23bbddd6e127a3ae89ce11603256c695', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=320&crop=smart&auto=webp&s=c186d70fe54541bc8b866501d65f3c3d5bc27ebb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=640&crop=smart&auto=webp&s=3e5be1098c47f8dd995173d8316f18540cbdc208', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=960&crop=smart&auto=webp&s=8f4c20c7efde19e55037c16b7cfaed56365d465e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?width=1080&crop=smart&auto=webp&s=cda749eb95fdcac7bbe51b9b181dae3b15ff2e0c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5Jh1Fs1KAd1szZuZCSTbpsNZWlumD-G_pA3yaiXlVYU.png?auto=webp&s=608615648ff674953535afe17a63bca7a58652ad', 'width': 1200}, 'variants': {}}]} |
fp8 native matmul accelerators are not coming until the release of m6 Macs? | 2 | Although Apple has added native matmuls for fp16 for m5s , but they still dont have native support for fp8 yet.. Perhaps by m6 they will have fp8 support, then fp4 for m7 in 2027?I hope they accelerate their hardware more and offer more affordable ram with their models! | 2025-11-07T17:47:27 | https://www.reddit.com/r/LocalLLaMA/comments/1or13gu/fp8_native_matmul_accelerators_are_not_coming/ | power97992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or13gu | false | null | t3_1or13gu | /r/LocalLLaMA/comments/1or13gu/fp8_native_matmul_accelerators_are_not_coming/ | false | false | self | 2 | null |
How LLMs helped me diagnose what optometrists never did for me, until now | 0 | I have asymmetric astigmatism, and I also play video games quite a bit in addition to being an LLM hobbyist (and i'll be an ML engineer soon). I peaked top 3000 in Fortnite, and now I play Valorant and hover around ascendant. I never understood why I hit a wall right under competitive viability. I felt like I’d get fatigued faster than I should, my aim would be inconsistent across sessions, and I’d have to work way harder than other players just to maintain tracking and angle discipline.
I lived for years assuming there was something inherently wrong with me, and it couldn't be corrected, so I just quit all games. I recently decided I'd try to get into Valorant again. Some may argue this was a mistake, but I'm actually so glad I did.
I was today (23) years old when I discovered glasses were fighting my eyes when sitting a desk, and that bad signal was fighting my motor controls. This led to bad posture, and a reinforcement of the misalignment between my visual and motor sensory systems. I never would have considered researching this if it weren't for the ideas LLMs gave me.
I booked an appointment with a renowned developmental optometrist in my area, and he quickly realized I needed Plus and Prism lenses. I also decided to go to a physical therapist, and they were kind of perplexed by my strength but postural imbalance.
I am going to continue to work with my eye doctor and physical therapist to see if I can correct myself, I feel like I caught this issue right before my brain fully developed and was so lucky to. I could have lived an entire life with chronic pain. More importantly, I think a lot of people are silently suffering from a wrong prescription or bad posture that has been reinforced for years. Sometimes our desk setups just don't support good ergonomics, and that might be costing us so much more than we realize.
I admit, I don't really understand the formal science. But at the very least an LLM was able to get me to think outside of the mental models I held. I think that was super powerful, and I just wanted to share a message my fellow LLM developers and enjoyers.
TL;DR - Take a second to just assess how you're sitting, how does it feel? Does closing your eyes after a long computer use session feel more relaxing than it should? | 2025-11-07T17:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/1or0sy0/how_llms_helped_me_diagnose_what_optometrists/ | m1tm0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or0sy0 | false | null | t3_1or0sy0 | /r/LocalLLaMA/comments/1or0sy0/how_llms_helped_me_diagnose_what_optometrists/ | false | false | self | 0 | null |
Kimi K2 reasoning local on a MBP / Mac Studio “cluster” at 20t/s ??!! | 0 | I do not understand how that is even possible, yes, I know the total 1 Trillion parameters are not active … so that helps, but how can you get that speed in a networked setup??!! Also the part that runs on the MBP, even if it is a M4Max 40 core should be way slower, defining the overall speed, no?
[https://www.youtube.com/watch?v=GydlPnP7IYk](https://www.youtube.com/watch?v=GydlPnP7IYk) | 2025-11-07T17:26:13 | https://www.reddit.com/r/LocalLLaMA/comments/1or0j04/kimi_k2_reasoning_local_on_a_mbp_mac_studio/ | Careless_Garlic1438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or0j04 | false | null | t3_1or0j04 | /r/LocalLLaMA/comments/1or0j04/kimi_k2_reasoning_local_on_a_mbp_mac_studio/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '6GihBwRlwmb6XiQl2g55cXa0dtpwZrKAMJrBEFq4hCY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/6GihBwRlwmb6XiQl2g55cXa0dtpwZrKAMJrBEFq4hCY.jpeg?width=108&crop=smart&auto=webp&s=9fdfb17aa879317ed928b9cc2e9ee9d37e1ea97a', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/6GihBwRlwmb6XiQl2g55cXa0dtpwZrKAMJrBEFq4hCY.jpeg?width=216&crop=smart&auto=webp&s=ac8917869e872dbda05b1f5572d20745fa9264cc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/6GihBwRlwmb6XiQl2g55cXa0dtpwZrKAMJrBEFq4hCY.jpeg?width=320&crop=smart&auto=webp&s=7e360b8c00855a77100f7282b1a447abf26f3788', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/6GihBwRlwmb6XiQl2g55cXa0dtpwZrKAMJrBEFq4hCY.jpeg?auto=webp&s=66c611ee3ba28bc3a31c12a2d033bf52f6c1ac9d', 'width': 480}, 'variants': {}}]} |
OpenAI Pushes to Label Datacenters as ‘American Manufacturing’ Seeking Federal Subsidies After Preaching Independence | 306 | OpenAI is now lobbying to classify datacenter spending as “American manufacturing.”
In their recent submission, they explicitly advocate for Federal loan guarantees the same kind used to subsidize large-scale industrial projects.
So after all the talk about independence and no need for government help… Sam lied. Again.
| 2025-11-07T17:15:05 | Ok-Breakfast-4676 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1or08aq | false | null | t3_1or08aq | /r/LocalLLaMA/comments/1or08aq/openai_pushes_to_label_datacenters_as_american/ | false | false | 306 | {'enabled': True, 'images': [{'id': 'mvMjw-9S9JAa58u0p4WxQ8oKo9q_tlga4IRYJqkcT1U', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=108&crop=smart&auto=webp&s=032200c0e4d20fe649beda865e16be1b4806a19c', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=216&crop=smart&auto=webp&s=131639b400bb6da7f069a24a00001710af6d4de0', 'width': 216}, {'height': 406, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=320&crop=smart&auto=webp&s=da0cddb823b343f70e016f7e9fc44dc9673b4246', 'width': 320}, {'height': 813, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=640&crop=smart&auto=webp&s=258e2c389f69a6ed7cdcda3ea33b5a80b38a0fb3', 'width': 640}, {'height': 1220, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=960&crop=smart&auto=webp&s=eea2b7035142f3bb98e508a5c9450087fbf5c947', 'width': 960}, {'height': 1372, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?width=1080&crop=smart&auto=webp&s=88059357d9f0e23d0f66d5a62d635385907dde52', 'width': 1080}], 'source': {'height': 1420, 'url': 'https://preview.redd.it/jq1jrz6kbvzf1.jpeg?auto=webp&s=91be439a024bdabd7339ae3972e5046688e2c00f', 'width': 1117}, 'variants': {}}]} | ||
huggingface models spouting gibberish? | 1 | hello everybody. im currently trying to train a 14b LoRA and have been running into some issues that just started last week and wanted to know if anybody else was running into similar.
i seem to only be able to load and use a model once, as when i close and re-serve it *something* happens and it begins to spew gibberish until i force close it. this even happens with just the base model loaded. if i delete the entire huggingface folder (the master including xet, blobs, hub), it will work once before i have to do that again.
here's my current stack:
transformers==4.56.2 \\
peft==0.17.1 \\
accelerate==1.10.1 \\
bitsandbytes==0.48.2 \\
datasets==4.1.1 \\
safetensors==0.6.2 \\
sentence-transformers==5.1.1 \\
trl==0.23.1 \\
matplotlib==3.10.6 \\
fastapi "uvicorn\[standard\]" \\
pydantic==2.12.3
that i serve in the pytorch2.9 13 CUDA docker container. ive tried disabling xet, using a local directory for downloads, setting the directories to read only etc. with no luck so far. i've been using qwen3-14b. the scripts i use for serving and training worked fine last week, and they work when i redownload the fresh model so i don't believe it's that, but if you need to see anything else just let me know.
i'm a novice hobbyist so apologies if this is a simple fix or if i'm missing anything. i am not currently using LLAMA to serve but this subreddit seems to be the most active (and sane lol) of the local LLM ones so i figured it was worth a shot, but mods please feel free to delete if not allowed. just really stumped and chatGPT/gemini/deepseek are as well, and the only stackoverflow answers i can find on this didn't work for me.
thank you in advance! | 2025-11-07T17:14:27 | https://www.reddit.com/r/LocalLLaMA/comments/1or07q1/huggingface_models_spouting_gibberish/ | videeternel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1or07q1 | false | null | t3_1or07q1 | /r/LocalLLaMA/comments/1or07q1/huggingface_models_spouting_gibberish/ | false | false | self | 1 | null |
New stealth model Polaris Alpha from Openrouter | 0 | New stealth model Polaris Alpha from Openrouter | 2025-11-07T16:44:02 | https://v.redd.it/w4hp2m7w5vzf1 | sirjoaco | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oqzedw | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/w4hp2m7w5vzf1/DASHPlaylist.mpd?a=1765125858%2CNWE1YWVhN2E0OTQ4MTI0ZjU2OGM1N2MwOTBlNjU1Nzk4OTRkNjA3ZjgyMDhkZjViZTMzMTRjNmZjMjA0ZDY3Mg%3D%3D&v=1&f=sd', 'duration': 27, 'fallback_url': 'https://v.redd.it/w4hp2m7w5vzf1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/w4hp2m7w5vzf1/HLSPlaylist.m3u8?a=1765125858%2CNGJjN2ZkODFhY2QwNzI4ODUxZTY4YjU5OTU5Y2Y5M2YwZGQ5ODI0MGVlZjQxYjUxZjNiZGJiNWZmYTdmYmM0MA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/w4hp2m7w5vzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1766}} | t3_1oqzedw | /r/LocalLLaMA/comments/1oqzedw/new_stealth_model_polaris_alpha_from_openrouter/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=108&crop=smart&format=pjpg&auto=webp&s=6f2855571f51486f709b6a85b17a0e5f90d7a214', 'width': 108}, {'height': 132, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=216&crop=smart&format=pjpg&auto=webp&s=74e72d71ef8aa616d8e932e257a699ddca2346cd', 'width': 216}, {'height': 195, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=320&crop=smart&format=pjpg&auto=webp&s=7f773f2389507f0f379803a2c385d49d672bbbcc', 'width': 320}, {'height': 391, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=640&crop=smart&format=pjpg&auto=webp&s=2f122ee393cc3f1602068cdc6191a4197a686c8a', 'width': 640}, {'height': 587, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=960&crop=smart&format=pjpg&auto=webp&s=33cb960bce3c66a3f9fa770975efa8bd5b8f148f', 'width': 960}, {'height': 660, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1ab94e8bae68c54942661e3bcf3d7f8624c0e5ca', 'width': 1080}], 'source': {'height': 2172, 'url': 'https://external-preview.redd.it/bTdzOWxtN3c1dnpmMWaLRFtlOaNnELcut79whFpV_-6R32Ak6GAiQHe4R0Dv.png?format=pjpg&auto=webp&s=0e07b7986a729e15bf0c7981dfe5d5cdf6495eed', 'width': 3552}, 'variants': {}}]} | |
Any Suggestions for Running Ai Models Completely Offline | 0 | Like is there a Android App That let's you run any Ai Model Completely Offline on Android Devices ?? | 2025-11-07T16:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/1oqz04y/any_suggestions_for_running_ai_models_completely/ | DarkEngine774 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqz04y | false | null | t3_1oqz04y | /r/LocalLLaMA/comments/1oqz04y/any_suggestions_for_running_ai_models_completely/ | false | false | self | 0 | null |
Built a multi-LLM control center for €1,000 while funded startups burn €500k on the same thing | 0 | OpenAI dropped AgentKit and LinkedIn immediately declared it the "n8n killer" before even testing it.
This drives me crazy. Not because AgentKit is bad, but because everyone acts like OpenAI is the only option. You're either locked into their API or you're not building AI tools.
We started Navigator a few months ago specifically to break this dependency. It's a chat interface that connects to 500+ tools, works with ANY LLM (Claude, GPT, Gemini, Llama, whatever), and lets you execute n8n workflows without switching tabs.
The kind of thing funded startups spend 18 months and €500k building.
We did it for about €1,000.
**How we kept it lean:**
Open-source everything. MCP servers for tool connections. Dev-grade tech that's free or dirt cheap.
Global remote team living in Portugal, Germany, Estonia, Egypt, South Korea. Talent is everywhere if you look.
Delicate procurement and integration of the best AI tools and workflows. Won't need to hire anyone for a while unless there is a unique opportunity.
**Why we built it:**
Everyone should be able to connect their tools, trigger workflows, and switch between LLMs without rebuilding infrastructure.
You shouldn't have to choose between OpenAI's ecosystem or nothing.
You shouldn't need €500k in funding to launch something useful.
**What it does:**
Generate n8n workflows from chat. Connect tools via MCP. Test and save automations without code. Switch between LLMs (self-hosted or API).
It's basically all the hot tech from GitHub, HuggingFace, Reddit and threads most don't monitor. Wrapped in something anyone can use.
**The hybrid model:**
We're not pivoting from our automation consulting. We're building both. Custom solutions for companies that need them. Software for everyone else.
Two revenue streams. Less dependency on one model. More leverage from what we learn building for clients.
**Full disclosure:** I'm Paul, founder at keinsaas. We built this because we hated being locked into specific LLMs and constantly switching between tools.
If this sounds useful or you want to give us feedback, let me know. We have a waitlist and will roll out in a few weeks. | 2025-11-07T16:11:40 | https://www.reddit.com/r/LocalLLaMA/comments/1oqyjf1/built_a_multillm_control_center_for_1000_while/ | zakjaquejeobaum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqyjf1 | false | null | t3_1oqyjf1 | /r/LocalLLaMA/comments/1oqyjf1/built_a_multillm_control_center_for_1000_while/ | false | false | self | 0 | null |
How practical is finetuning larger models with 4x 3090 setup? | 9 | I am thinking of building 4x3090 setup cause other options with large VRAM are quite expensive and not worth the buck. For instance, pro 6000 has 96gigs but costs around 10,000. OTH, 3090's VRAM could be pooled together so 4x3090 would have same VRAM (a bit slower though) but significantly cheaper.
Is it practical? | 2025-11-07T16:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/1oqyix9/how_practical_is_finetuning_larger_models_with_4x/ | Specialist-Let9791 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqyix9 | false | null | t3_1oqyix9 | /r/LocalLLaMA/comments/1oqyix9/how_practical_is_finetuning_larger_models_with_4x/ | false | false | self | 9 | null |
AMA Announcement: Moonshot AI, The Opensource Frontier Lab Behind Kimi K2 Thinking SoTA Model (Monday, 8AM-11AM PST) | 353 | 2025-11-07T15:53:33 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oqy1k7 | false | null | t3_1oqy1k7 | /r/LocalLLaMA/comments/1oqy1k7/ama_announcement_moonshot_ai_the_opensource/ | false | true | default | 353 | {'enabled': True, 'images': [{'id': '8v2luf5owuzf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=108&crop=smart&auto=webp&s=fabb883ff715a8b3e78dae2bcac5041a45236d86', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=216&crop=smart&auto=webp&s=cd1ed761249ae3ada77f7cbff6d353ff8374c4ab', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=320&crop=smart&auto=webp&s=42a8f903fa234890f97045b51681b0efa4da4412', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=640&crop=smart&auto=webp&s=b9bc34ec8dddd94422397eaa91e0310250da5ba3', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=960&crop=smart&auto=webp&s=f5a4305de7747a644f119a841598b7330512427b', 'width': 960}, {'height': 1080, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?width=1080&crop=smart&auto=webp&s=8f9850405d046e9cd9af357cd280bc47dd748102', 'width': 1080}], 'source': {'height': 3000, 'url': 'https://preview.redd.it/8v2luf5owuzf1.jpeg?auto=webp&s=e062d3f083a31e86dc4c1c6bfa2ff4f4475821dc', 'width': 3000}, 'variants': {}}]} | ||
Vulnerability Inception: How AI Code Assistants Replicate and Amplify Security Flaws | 5 | Hi all, I'm sharing an article about prompt injection in Large Language Models (LLMs), specifically regarding coding and coding agents. The research shows that it's easy to manipulate LLMs into injecting backdoors and vulnerabilities into code, simply by embedding instructions in a comment, as the LLM will follow any instructions it finds in the original source code.
This is relevant to the localLlama community because only one open-weights model, Deepseek 3.2 Exp, appears to resistant (but not immune) to this vulnerability. It seems to have received specialized training to avoid introducing security flaws. I think this is a significant finding and hope you find it useful. | 2025-11-07T15:48:49 | https://github.com/ortegaalfredo/aiweaknesses/blob/main/ai_vulnerabilities_article.pdf | ortegaalfredo | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oqxx8w | false | null | t3_1oqxx8w | /r/LocalLLaMA/comments/1oqxx8w/vulnerability_inception_how_ai_code_assistants/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': '5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=108&crop=smart&auto=webp&s=79d328204fadc6291591bc5aecfecf80b29cc568', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=216&crop=smart&auto=webp&s=8a7261676235864536180538839f7af1a96fec46', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=320&crop=smart&auto=webp&s=97ef35ddac92242de5cd24661f8fe0ac1e247488', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=640&crop=smart&auto=webp&s=81efb413a983424e2fb3cecba428a0affa68289f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=960&crop=smart&auto=webp&s=fb6508de382fc55d22acf6daeced2f3ba1e65719', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?width=1080&crop=smart&auto=webp&s=b8aaf63ca3f593f52c029c2e9d3323c585558695', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5sm3GRNGqkHPJJHTtLe5heJ60MTx9qDDMNLNZZnboms.png?auto=webp&s=46191281e474cf5f433508c1e738a7f57380db0d', 'width': 1200}, 'variants': {}}]} |
From your experience for text only, how is Qwen3VL compared to Qwen3, does having a Visual module penalize the text-only capacities ? | 26 | Title.
Let's say `Qwen3-30B-A3B-Instruct-2507` excels at text only and long context.
What about `Qwen3-VL-30B-A3B-Instruct` if you use it as a text only model ? have you seen any quality loss ?
We're wondering if it make sense to have in one gpu Qwen3 VL and on another gpu Qwen3. | 2025-11-07T15:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/1oqxuqs/from_your_experience_for_text_only_how_is_qwen3vl/ | LinkSea8324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqxuqs | false | null | t3_1oqxuqs | /r/LocalLLaMA/comments/1oqxuqs/from_your_experience_for_text_only_how_is_qwen3vl/ | false | false | self | 26 | null |
AI observability: how i actually keep agents reliable in prod | 3 | AI observability isn’t about slapping a dashboard on your logs and calling it a day. here’s what i do, straight up, to actually know what my agents are doing (and not doing) in production:
* every agent run is traced, start to finish. i want to see every prompt, every tool call, every context change. if something goes sideways, i follow the chain, no black boxes, no guesswork.
* i log everything in a structured way. not just blobs, but versioned traces that let me compare runs and spot regressions.
* token-level tracing. when an agent goes off the rails, i can drill down to the exact token or step that tripped it up.
* live evals on production data. i’m not waiting for test suites to catch failures. i run automated checks for faithfulness, toxicity, and whatever else i care about, right on the stuff hitting real users.
* alerts are set up for drift, spikes in latency, or weird behavior. i don’t want surprises, so i get pinged the second things get weird.
* human review queues for the weird edge cases. if automation can’t decide, i make it easy to bring in a second pair of eyes.
* everything is exportable and otel-compatible. i can send traces and logs wherever i want, grafana, new relic, you name it.
* built for multi-agent setups. i’m not just watching one agent, i’m tracking fleets. scale doesn’t break my setup.
here’s the deal: if you’re still trying to debug agents with just logs and vibes, you’re flying blind. this is the only way i trust what’s in prod. if you want to stop guessing, this is how you do it. Open to hear more about how you folks might be dealing with this | 2025-11-07T15:38:45 | https://www.reddit.com/r/LocalLLaMA/comments/1oqxnw5/ai_observability_how_i_actually_keep_agents/ | Otherwise_Flan7339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqxnw5 | false | null | t3_1oqxnw5 | /r/LocalLLaMA/comments/1oqxnw5/ai_observability_how_i_actually_keep_agents/ | false | false | self | 3 | null |
AI service I can have a conversation with? Preferably with a face? | 1 | Hey y’all.
Sorry if this is the wrong place to ask. I’m conceptualising a board game and a key component of it would be an interactive agent that’s promotable and can have a convo in real time. I’m just unsure what service to use for this?
I can build something if needed if there are any SDKs to plug into
Thanks! | 2025-11-07T15:19:52 | https://www.reddit.com/r/LocalLLaMA/comments/1oqx60m/ai_service_i_can_have_a_conversation_with/ | Turbulent-Pattern653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqx60m | false | null | t3_1oqx60m | /r/LocalLLaMA/comments/1oqx60m/ai_service_i_can_have_a_conversation_with/ | false | false | self | 1 | null |
Emergent Occam's Razor: Teaching qwen2.5:7b to learn through journaling (51%→78%) [Full code + paper] | 17 | I just finished an experiment where a 7B model learns through reflection and self-critique - no weight updates, no training data, just journaling about mistakes.
\*\*The surprising part: the model discovered Occam's Razor on its own.\*\*
\## The Setup
\- Model: qwen2.5:7b (local, via Ollama)
\- Task: Meeting room scheduling (constraint satisfaction)
\- Method: After each batch, model writes reflective journal and distills strategy
\- Hardware: Consumer laptop, no GPU needed
\- Runtime: \~40 minutes total
\## The Results
| Stage | Accuracy | What Happened |
|-------|----------|---------------|
| Baseline | 51.3% | Zero-shot, weak |
| Bootstrap | 66.0% | Learning phase (messy) |
| Test w/ LRL | 78.0% | \*\*+26.7% improvement!\*\* |
\## The Learning Journey (This is the cool part)
\*\*Batches 1-5: "The Over-Engineer"\*\*
Model confidently proposes complex solutions:
\- "Implement interval trees!"
\- "Apply dynamic programming!"
\- "Use graph theory approaches!"
Result: \~35% accuracy. Sophisticated nonsense.
\*\*Batches 6-8: "Seeds of Doubt"\*\*
Journal entries start showing conflict:
\> "Since the problem is straightforward, focusing on basic interval checking..."
First time admitting simplicity might be the answer.
\*\*Batches 9-10: "The Awakening"\*\*
The breakthrough journal entry:
\> "This suggests a \*\*fundamental misunderstanding\*\* of how to handle overlapping intervals."
The model admitted it was wrong. Everything changed from there.
\## Why This Matters for Local LLMs
✅ \*\*Interpretable\*\* - Read the complete thought process in journals
✅ \*\*Efficient\*\* - No GPU training, pure inference
✅ \*\*Transferable\*\* - Strategies are text files you can share
✅ \*\*Safe\*\* - Models that learn to doubt themselves
The distillation process acts like evolution: ideas that work (simple counting) survive, ideas that fail (graph theory) get filtered out.
\## Try It Yourself
\`\`\`bash
git clone [https://github.com/DRawson5570/linguistic-rl-scheduling](https://github.com/DRawson5570/linguistic-rl-scheduling)
cd linguistic-rl-scheduling
ollama pull qwen2.5:7b
python3 scheduling\_lrl\_paper.py | 2025-11-07T15:18:06 | https://www.reddit.com/r/LocalLLaMA/comments/1oqx4dj/emergent_occams_razor_teaching_qwen257b_to_learn/ | Next_Bid_8339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqx4dj | false | null | t3_1oqx4dj | /r/LocalLLaMA/comments/1oqx4dj/emergent_occams_razor_teaching_qwen257b_to_learn/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=108&crop=smart&auto=webp&s=68f7078006489bb19388fc7ddc3472d620c40af3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=216&crop=smart&auto=webp&s=86555a6b35e364bb3398ab266356cebae75508ce', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=320&crop=smart&auto=webp&s=e15309141910292b2d042aa13036be0c4209ccaa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=640&crop=smart&auto=webp&s=b6c8a585ff5230eb512d0ed5ec344b3299915639', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=960&crop=smart&auto=webp&s=c1c4ff7d7e93dda4a30ae0ab67d56add36761831', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?width=1080&crop=smart&auto=webp&s=3f7bd79a60b2e051dd3bb643ee352ead7080ab77', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KC8jJJ4Xxr9hckVscbwLy5_BaBVS2HApQ7pvt_n18gU.png?auto=webp&s=9a215a6063d3a93f1e941176b2dbf138e3760acc', 'width': 1200}, 'variants': {}}]} |
A Unique way to Run Your ai models On Mobile Devices | 0 | I know I know the video is little bit long
links :
- https://github.com/Siddhesh2377/ToolNeuron
- https://discord.com/invite/mVPwHDhrAP
- https://tool-neuron.vercel.app/ | 2025-11-07T15:04:37 | https://v.redd.it/5omrtxewnuzf1 | DarkEngine774 | /r/LocalLLaMA/comments/1oqwri1/a_unique_way_to_run_your_ai_models_on_mobile/ | 1970-01-01T00:00:00 | 0 | {} | 1oqwri1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5omrtxewnuzf1/DASHPlaylist.mpd?a=1765249488%2COWQwMWE4YmE4ODAzZmU2MzFlYmE5YWYzY2YwYjQ2MWZiNGE1MmFkZDZiM2RjOWNlNjZkODJlNGFlODgyYzM1Mg%3D%3D&v=1&f=sd', 'duration': 594, 'fallback_url': 'https://v.redd.it/5omrtxewnuzf1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/5omrtxewnuzf1/HLSPlaylist.m3u8?a=1765249488%2CZTA5MzYxNDhjYjQzNzk3YTY0NzdhMDJkMGFmZjA2NThjNWU3MGNlMTBlMDMzYzVjOWVmZjY0NGNkNDE5NWEwMQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5omrtxewnuzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 578}} | t3_1oqwri1 | /r/LocalLLaMA/comments/1oqwri1/a_unique_way_to_run_your_ai_models_on_mobile/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF.png?width=108&crop=smart&format=pjpg&auto=webp&s=5ba65bd932640205990ceecca251803881aff870', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF.png?width=216&crop=smart&format=pjpg&auto=webp&s=f90464cce1bc17b19dcf7efa8523efa8da2253d5', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF.png?width=320&crop=smart&format=pjpg&auto=webp&s=bf12031853e1ddf1e644dc0a4da4f392f1fbe8fc', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF.png?width=640&crop=smart&format=pjpg&auto=webp&s=2fa7dd40b875b5e1ecd70ff21b12a2315c70d54d', 'width': 640}], 'source': {'height': 1594, 'url': 'https://external-preview.redd.it/YTBzaW51ZXdudXpmMTf_kA8xQOJkpqdZnrlblRxtOIoeOadMlvCYE39NluMF.png?format=pjpg&auto=webp&s=c7e808bb1c47a4b016c273bfc1110108361140c7', 'width': 720}, 'variants': {}}]} | |
Using Ray, Unsloth, Axolotl or GPUStack? We are looking for beta testers | 2 | We are looking for beta testers to help us put the [Kalavai platform](https://github.com/kalavai-net/kalavai-client) through its paces.
If you are using **Ray** for distributed workloads, **Unsloth/Axolotl** for fine tuning models or **GPUStack** to manage your GPU cluster, we need you!
Sign up [here](https://kalavai.net/beta).
PS: Are you an AI developer working on other frameworks? We'd love to support it too. | 2025-11-07T14:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/1oqwloa/using_ray_unsloth_axolotl_or_gpustack_we_are/ | Good-Coconut3907 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqwloa | false | null | t3_1oqwloa | /r/LocalLLaMA/comments/1oqwloa/using_ray_unsloth_axolotl_or_gpustack_we_are/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=108&crop=smart&auto=webp&s=4b9a7df5069327927423f687ac2f9078907265ba', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=216&crop=smart&auto=webp&s=40c2023837ad507ff6a4360c3b3f757761dcc646', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=320&crop=smart&auto=webp&s=0fbd8bf075b911a7cdc4b5822115a277d1806209', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=640&crop=smart&auto=webp&s=87fcc3ec06b8f3c65bfe300b3fe5169d187a3199', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=960&crop=smart&auto=webp&s=58c41213e0b4d54af46976ac14309f83240f3110', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?width=1080&crop=smart&auto=webp&s=41455739bc0bb57febde7e11270caa701933dfa1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/684qZO77d9-w1WgoMbVmRAClGDvTYPSXGqMzRZWkqBA.png?auto=webp&s=89f6f95ce250ad1a6258cb27e00b65778d4789d5', 'width': 1200}, 'variants': {}}]} |
What we learned while building evaluation and observability workflows for multimodal AI agents | 1 | I’m one of the builders at Maxim AI, and over the past few months we’ve been working deeply on how to make evaluation and observability workflows more aligned with how real engineering and product teams actually build and scale AI systems.
When we started, we looked closely at the strengths of existing platforms; Fiddler, Galileo, Braintrust, Arize; and realized most were built for traditional ML monitoring or for narrow parts of the workflow. The gap we saw was in **end-to-end agent lifecycle visibility;** from pre-release experimentation and simulation to post-release monitoring and evaluation.
Here’s what we’ve been focusing on and what we learned:
* **Full-stack support for multimodal agents:** Evaluations, simulations, and observability often exist as separate layers. We combined them to help teams debug and improve reliability earlier in the development cycle.
* **Cross-functional workflows:** Engineers and product teams both need access to quality signals. Our UI lets non-engineering teams configure evaluations, while SDKs (Python, TS, Go, Java) allow fine-grained evals at any trace or span level.
* **Custom dashboards & alerts:** Every agent setup has unique dimensions to track. Custom dashboards give teams deep visibility, while alerts tie into Slack, PagerDuty, or any OTel-based pipeline.
* **Human + LLM-in-the-loop evaluations:** We found this mix essential for aligning AI behavior with real-world expectations, especially in voice and multi-agent setups.
* **Synthetic data & curation workflows:** Real-world data shifts fast. Continuous curation from logs and eval feedback helped us maintain data quality and model robustness over time.
* **LangGraph agent testing:** Teams using LangGraph can now trace, debug, and visualize complex agentic workflows with one-line integration, and run simulations across thousands of scenarios to catch failure modes before release.
The hardest part was designing this system so it wasn’t just “another monitoring tool,” but something that gives both developers and product teams a shared language around AI quality and reliability.
Would love to hear how others are approaching evaluation and observability for agents, especially if you’re working with complex multimodal or dynamic workflows. | 2025-11-07T14:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/1oqwfak/what_we_learned_while_building_evaluation_and/ | dinkinflika0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqwfak | false | null | t3_1oqwfak | /r/LocalLLaMA/comments/1oqwfak/what_we_learned_while_building_evaluation_and/ | false | false | self | 1 | null |
A Small Demo Of My Project, Not looking for promotion, Just need Honest Feedback | 1 | I know I know the video is little bit long | 2025-11-07T14:26:42 | https://v.redd.it/fbwb53oghuzf1 | DarkEngine774 | /r/LocalLLaMA/comments/1oqvtch/a_small_demo_of_my_project_not_looking_for/ | 1970-01-01T00:00:00 | 0 | {} | 1oqvtch | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/fbwb53oghuzf1/DASHPlaylist.mpd?a=1765247207%2CM2VlODIzNzcwOTI3NjdhOWU1N2E2Yzk4ZDEwMmVlYjQwN2JmODYwOWUwZWE5ZTI4YjNiNDMwMzgzZGVjYzNmYQ%3D%3D&v=1&f=sd', 'duration': 594, 'fallback_url': 'https://v.redd.it/fbwb53oghuzf1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1280, 'hls_url': 'https://v.redd.it/fbwb53oghuzf1/HLSPlaylist.m3u8?a=1765247207%2CNTQ2NTZlNjFiZTg3YmEwZTI0M2MzOTg2ZWM0MDM0MGUxMzJhYTYyNTdlY2RhOGNhOGMxNTU5Y2RmMjI4OTI3NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/fbwb53oghuzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 578}} | t3_1oqvtch | /r/LocalLLaMA/comments/1oqvtch/a_small_demo_of_my_project_not_looking_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3068ca5e0929e99eb2468f81e34a56909cf3557', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6.png?width=216&crop=smart&format=pjpg&auto=webp&s=317dfa09d9c899c1f7110ced083790e36fcf8196', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6.png?width=320&crop=smart&format=pjpg&auto=webp&s=df5f71adfb7e498b37da07dece611c3068a7db3f', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6.png?width=640&crop=smart&format=pjpg&auto=webp&s=c8a9c1665048808fe82df5fcce7ce284a78f0579', 'width': 640}], 'source': {'height': 1798, 'url': 'https://external-preview.redd.it/dWt1cDFxd29ndXpmMR4Qs7uCZ54HCqhyA4O-tE5gQvXEQJntW-_rPzK3Qbo6.png?format=pjpg&auto=webp&s=787fd85dd6099cc8ef4305c6be182510f7f00bd8', 'width': 812}, 'variants': {}}]} | |
Errors installing Ryzen-AI 1.6.1 on a Windows 11 AMD AI Max 395 system | 1 | Has anyone managed to successfully install Ryzen-AI-1.6.1 on this system or any similar system? I have installed all the prerequisites and configured paths to python etc. That all seems to be fine. But I'm getting the following error late on in the installation:
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://xcoartifactory.xilinx.com:443/artifactory/conda-forge-remote/win-64/repodata.json>
This site doesn't seem to exist as far as I can tell. Anyone else encountered this and found a workaround?
| 2025-11-07T14:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1oqv6s9/errors_installing_ryzenai_161_on_a_windows_11_amd/ | exoplanetman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqv6s9 | false | null | t3_1oqv6s9 | /r/LocalLLaMA/comments/1oqv6s9/errors_installing_ryzenai_161_on_a_windows_11_amd/ | false | false | self | 1 | null |
Kimi K2 Thinking with sglang and mixed GPU / ktransformers CPU inference @ 31 tokens/sec | 118 | Just got Kimi K2 Thinking running locally and I'm blown away how fast it runs in simple chat tests: approximately ~ 30 tokens/sec with 4000 tokens in the context. Obviously a lot more testing to be done, but wow... a trillion parameter model running at 30 tokens/sec.
I'll whip up some tests around batching and available context lengths soon, but for now here's the recipe to get it running should you have the necessary hardware.
**System**
- EPYC 7B45 (128-core, 256 thread) CPU
- 768GB DDR5 6400 MT/s
- 4x RTX 6000 Pro Workstation 96GB GPUs
**Setup virtual python environment**
mkdir sglang-ktransformers
cd sglang-ktransformers
uv venv --python 3.11 --seed
. .venv/bin/activate
**Install sglang**
uv pip install "sglang" --prerelease=allow
**Download and initialize ktransformers repo**
git clone https://github.com/kvcache-ai/ktransformers
git submodule update --init --recursive
**Install ktransformers CPU kernel for sglang**
cd kt-kernel
export CPUINFER_CPU_INSTRUCT=AVX512
export CPUINFER_ENABLE_AMX=OFF
uv pip install .
**Download Kimi K2 Thinking GPU & CPU parts**
uv pip install -U hf hf_transfer
hf download moonshotai/Kimi-K2-Thinking
hf download KVCache-ai/Kimi-K2-Thinking-CPU-weight
**Run k2**
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m sglang.launch_server \
--host 0.0.0.0 --port 8080 \
--model ~/.cache/huggingface/hub/models--moonshotai--Kimi-K2-Thinking/snapshots/357b94aee9d50ec88e5e6dd9550fd7f957cb1baa \
--kt-amx-weight-path ~/.cache/huggingface/hub/models--KVCache-ai--Kimi-K2-Thinking-CPU-weight/snapshots/690ffacb9203d3b5e05ee8167ff1f5d4ae027c83 \
--kt-cpuinfer 252 \
--kt-threadpool-count 2 \
--kt-num-gpu-experts 238 \
--kt-amx-method AMXINT4 \
--attention-backend triton
--trust-remote-code \
--mem-fraction-static 0.98 \
--chunked-prefill-size 4096 \
--max-running-requests 1 \
--max-total-tokens 32768 \
--enable-mixed-chunk \
--tensor-parallel-size 4 \
--enable-p2p-check \
--disable-shared-experts-fusion | 2025-11-07T13:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/1oquezp/kimi_k2_thinking_with_sglang_and_mixed_gpu/ | __JockY__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oquezp | false | null | t3_1oquezp | /r/LocalLLaMA/comments/1oquezp/kimi_k2_thinking_with_sglang_and_mixed_gpu/ | false | false | self | 118 | {'enabled': False, 'images': [{'id': '4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=108&crop=smart&auto=webp&s=62a5ab4169def4e757d6f5a94343e4ecef10bb0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=216&crop=smart&auto=webp&s=e65483190e784f7f02320d2579ef7f13655bec44', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=320&crop=smart&auto=webp&s=25927e0b9c66ff53e75f354533c07c3b67afa281', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=640&crop=smart&auto=webp&s=744c04158ba7fed050228ac4a77e6a7208ca923f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=960&crop=smart&auto=webp&s=041ecd860963f6f71bc6a1b5ce43c5b31efddf5b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?width=1080&crop=smart&auto=webp&s=0daea9257c814a137f11271c242b195ecd8436df', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/4HhDpVKxqcLmnxFz3Pa7jEqUmnKin0qj8sxjj2Y7Tqk.png?auto=webp&s=d83587f23e4c680be2e8790193c934012e47dd74', 'width': 1200}, 'variants': {}}]} |
Help running Seed OSS with thinking budget | 2 | I can't seem to get seed oss to use it's thinking budget. I'm running it on llama cpp server like this:
llama-server --model Seed-OSS-36B-Instruct-UD-Q4\_K\_XL.gguf --no-mmap -fa on -c 10000 -ngl 80 --port 5899
I'm using a python client like this:
import openai
client = openai.OpenAI(
base\_url="http://localhost:5800/v1",
api\_key = "sk-no-key-required"
)
extra\_body = {"chat\_template\_kwargs": {"thinking\_budget": 0}}
thinking\_budget=0
completion = client.chat.completions.create(
model="Seed\_OSS",
messages=\[
{"role": "system", "content": f"You are a helpful assistant"},
{"role": "user", "content": f"hello"}
\],
max\_tokens=200,
extra\_body={
"chat\_template\_kwargs": {
"thinking\_budget": thinking\_budget}}
)
print(dir(stream))
message = completion.choices\[0\].message
print(f"Content: {message.content}")
Output:
Content: <seed:think>
Got it, the user said "hello". I should respond in a friendly and welcoming way. Maybe keep it simple and open-ended to encourage them to say more. Let me go with "Hello! How can I help you today?" That's friendly and invites further interaction.</seed:think>Hello! How can I help you today?
I've tried using different quantizations, different prompts and updated llama cpp but It's still not working. Any ideas? Thanks. | 2025-11-07T13:24:30 | https://www.reddit.com/r/LocalLLaMA/comments/1oqubgg/help_running_seed_oss_with_thinking_budget/ | Otherwise-Alfalfa495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqubgg | false | null | t3_1oqubgg | /r/LocalLLaMA/comments/1oqubgg/help_running_seed_oss_with_thinking_budget/ | false | false | self | 2 | null |
Building a Multi-Turn Agentic AI Evaluation Platform – Looking for Validation | 2 | Hey everyone,
I've been noticing that building AI agents is getting easier and easier, thanks to no-code tools and "vibe coding" (the latest being LangGraph's agent builder). The goal seems to be making agent development accessible even to non-technical folks, at least for prototypes.
But evaluating multi-turn agents is still really hard and domain-specific. You need black box testing (outputs), glass box testing (agent steps/reasoning), RAG testing, and MCP testing.
I know there are many eval platforms today (LangFuse, Braintrust, LangSmith, Maxim, HoneyHive, etc.), but none focus specifically on multi-turn evaluation. Maxim has some features, but the DX wasn't what I needed.
**What we're building:**
A platform focused on multi-turn agentic AI evaluation with emphasis on developer experience. Even non-technical folks (PMs who know the product better) should be able to write evals.
**Features:**
* Scenario-based testing (table stakes, I know)
* Multi-turn testing with evaluation at every step (tool calls + reasoning)
* Multi-turn RAG testing
* MCP server testing (you don't know how good your tools' design prompts are until plugged into Claude/ChatGPT)
* Adversarial testing (planned)
* Context visualization for context engineering (will share more on this later)
* Out-of-the-box integrations to various no-code agent-building platforms
**My question:**
* Do you feel this problem is worth solving?
* Are you doing vibe evals, or do existing tools cover your needs?
* Is there a different problem altogether?
Trying to get early feedback and would love to hear your experiences. Thanks! | 2025-11-07T13:19:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oqu7od/building_a_multiturn_agentic_ai_evaluation/ | shivmohith8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqu7od | false | null | t3_1oqu7od | /r/LocalLLaMA/comments/1oqu7od/building_a_multiturn_agentic_ai_evaluation/ | false | false | self | 2 | null |
Intel Arc Pro B50 GPU Review: An Affordable, Low-Power Workstation GPU | 22 | 2025-11-07T13:16:02 | https://www.storagereview.com/review/intel-arc-pro-b50-gpu-review-an-affordable-low-power-workstation-gpu | brand_momentum | storagereview.com | 1970-01-01T00:00:00 | 0 | {} | 1oqu4i3 | false | null | t3_1oqu4i3 | /r/LocalLLaMA/comments/1oqu4i3/intel_arc_pro_b50_gpu_review_an_affordable/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': '3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg', 'resolutions': [{'height': 74, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=108&crop=smart&auto=webp&s=ca6b5c08bec53a58aed595fb8a7a06763e32a3d8', 'width': 108}, {'height': 148, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=216&crop=smart&auto=webp&s=9dd61094f6d08905a79fbc176b196ce82fb4ad6a', 'width': 216}, {'height': 220, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=320&crop=smart&auto=webp&s=10d6115b4cd7ed73320f1308a60cdd51584e65e2', 'width': 320}, {'height': 440, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=640&crop=smart&auto=webp&s=0db3e975c5d76b36da431c0de3d9c2859f58e5a6', 'width': 640}, {'height': 661, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=960&crop=smart&auto=webp&s=9cf6f751e752a1c9540ec4f02696aa833369af75', 'width': 960}, {'height': 743, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?width=1080&crop=smart&auto=webp&s=7a7869dd26f458ed8a91d2ddfe6dfe28c411c3aa', 'width': 1080}], 'source': {'height': 1033, 'url': 'https://external-preview.redd.it/3Egk-w2HASlmi68QUUwWJyPFPlaRWFVtyMPx2j34DHg.jpeg?auto=webp&s=dfaf42b47161d59c5aad576acbbdc700077b39ec', 'width': 1500}, 'variants': {}}]} | |
Hardware recommendations | 1 | Hi guys, I’m planning to suggest to my company that we build a machine to run local LLMs. The goal is to be able to run something around \~70B models with decent tokens/sec, or maybe use quantized versions of larger ones. I want to export an OpenAI-compatible API using tools like llama.cpp or vLLM, and connect it to our IDEs so several developers can benefit from it directly.
Since I don’t want this to get too costly, I’m debating between building a setup with multiple RTX 3090s or going with a single RTX Pro 6000. The focus would be on getting the best performance per dollar.
What do you guys think? Would you go for multiple 3090s or just a single higher-end card? Any recommendations would be really helpful. | 2025-11-07T13:04:54 | https://www.reddit.com/r/LocalLLaMA/comments/1oqtv8l/hardware_recommendations/ | Pyrotheus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqtv8l | false | null | t3_1oqtv8l | /r/LocalLLaMA/comments/1oqtv8l/hardware_recommendations/ | false | false | self | 1 | null |
Can someone explain what a Mixture-of-Experts model really is? | 209 | Hello, I've been aware of MoE since Deepseek dropped in the beginning of the year but I never really delved deep into what it is and how it helps in things like local AI inferencing. This sub's been very helpful with my local AI related questions so I wanted to learn from the people here.
Here are some more questions:
\- How does a model know when an expert is to be used?
\- Are MoE models really easier to run than traditional models?
\- How do Activation parameters really work? Do they affect fine tuning processes later?
\- Why do MoE models work better than traditional models?
\- What are “sparse” vs “dense” MoE architectures? | 2025-11-07T13:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1oqttg0/can_someone_explain_what_a_mixtureofexperts_model/ | Weebviir | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqttg0 | false | null | t3_1oqttg0 | /r/LocalLLaMA/comments/1oqttg0/can_someone_explain_what_a_mixtureofexperts_model/ | false | false | self | 209 | null |
Why does Elon Musk say OpenAl's open-source models are broke? (GPT OSS) | 0 | Hey, I'm really interested in what Elon Musk meant in the recent interview when he said that OpenAl's open-source models are 'broke' and non working. He is probably referring to GPT-OSS. For anyone who already deployed one of the OSS models, what is you’re experience? | 2025-11-07T12:59:05 | https://v.redd.it/rn6o7ffv1uzf1 | ZeroKelvinMood | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oqtqg1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rn6o7ffv1uzf1/DASHPlaylist.mpd?a=1765112359%2CZmE4NTM4Njk4NWRhNzYyNmZhZmUxMDg1MWU5NzhiYjJiYTg5NDkzYjhmNDU4MWI3YTVjYjQ2OGE4OGViZTdmNw%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/rn6o7ffv1uzf1/CMAF_720.mp4?source=fallback', 'has_audio': True, 'height': 1116, 'hls_url': 'https://v.redd.it/rn6o7ffv1uzf1/HLSPlaylist.m3u8?a=1765112359%2CYjdhMGYyYzRmOGY0NjRmNDlkYmY0NWQ0ODU2NjZlYTlkMTE2Y2ZjZDljODkxOWEwMmMwMTQxNDg4ZDc5ZDIxNw%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/rn6o7ffv1uzf1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 720}} | t3_1oqtqg1 | /r/LocalLLaMA/comments/1oqtqg1/why_does_elon_musk_say_openals_opensource_models/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l', 'resolutions': [{'height': 167, 'url': 'https://external-preview.redd.it/dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l.png?width=108&crop=smart&format=pjpg&auto=webp&s=e6a0b848c0dead1d213ad146b1d919faa74f8113', 'width': 108}, {'height': 334, 'url': 'https://external-preview.redd.it/dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l.png?width=216&crop=smart&format=pjpg&auto=webp&s=33136954280dcd0be2b99c0405dee3796ddb8429', 'width': 216}, {'height': 496, 'url': 'https://external-preview.redd.it/dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l.png?width=320&crop=smart&format=pjpg&auto=webp&s=db478d293a85f516800288513d81279b279d97e5', 'width': 320}, {'height': 992, 'url': 'https://external-preview.redd.it/dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l.png?width=640&crop=smart&format=pjpg&auto=webp&s=8da86d4bb9e3595fad9ff4ee16629118d5c089b2', 'width': 640}], 'source': {'height': 1178, 'url': 'https://external-preview.redd.it/dGswZXZoYXYxdXpmMWTUkKZeTz_DNkmsKolPcUKoA9SUOA9XrDt2aNywaF5l.png?format=pjpg&auto=webp&s=c7f19790e19db7991b1595f1f9bbc6ddc2d7ba78', 'width': 760}, 'variants': {}}]} | |
Sparse Attention MoE - a test repo for a novel swappable attention mechanism | 15 | I saw someone talking about using a MoE for Attention a few weeks back. At the time, it seemed like nonsense, but something about the post made me fiddle around with it a bit, and I was surprised to find it... worked? Crazier still... it seems to beat regular attention while radically reducing the amount of time and compute needed to train a model in my testing.
This is an experiment I put together for testing Sparse Attention MoE, a novel attention mechanism that reduces self-attention computational complexity. The idea is to create a new drop-in attention mechanism that should work in existing AI training pipelines while radically reducing the amount of compute required (allowing larger models to be trained on smaller devices, for example). Faster training, lower use of resources, and in my testing so far it trains models that outperforms regular dense attention (at least on my small toy model tests).
Normally, MoE routes feed-forward experts. This concept routes attention sparsity levels. By training Attention we are able to get it to identify easy, medium, and hard tokens, allowing it to route them in a way that reduces how much compute is required as a whole.
I've built a small end-to-end test model and provided all the code to train one yourself at this github repo. This demonstrates **O(N·k) attention** (vs. O(N²)) attention, and allows efficient training since you don't have quadratic blowup on attention. I test-trained a small LLM to see how it would go and saw similar improvement: The adaptive model achieved \*\*12.03% perplexity improvement\*\* over the non-adaptive baseline with \*\*balanced expert usage\*\* (47%/34%/19%) and was \*\*1.7× faster to train\*\*. This directly replicates the vision model's success pattern in a different domain, proving the mechanism is \*\*task-general, not vision-specific\*\*.
For now I'm sharing the diffusion version (it's doing a denoise job on cifar data since that's a simplistic task that can be trained in a few minutes on a 4090).
| 2025-11-07T12:56:53 | https://github.com/Deveraux-Parker/Adaptive_Sparse_Attention_MoE/tree/main | teachersecret | github.com | 1970-01-01T00:00:00 | 0 | {} | 1oqtose | false | null | t3_1oqtose | /r/LocalLLaMA/comments/1oqtose/sparse_attention_moe_a_test_repo_for_a_novel/ | false | false | default | 15 | null |
Hermes4 14b, 2 months later. Thoughts? Opinions? | 1 | I love Hermes3 8B. I was looking forward to Hermes4 for so long. But they don't seem to be releasing an 8B or 4B this time so I would barely be able to run it. On top of that, I just can't seem to get it running on my computer for some reason. Probably just something needs to be updated, idk. But I would only be able to ask a couple questions, with very slow responses, and my machine would overheat within 3 questions. (That's what my Snowpiercer 15b is like that I use for writing) Is it worth checking out anyways? Should I keep hacking away to get this model working? How do other people like it? How is it in world knowledge? | 2025-11-07T12:35:38 | https://www.reddit.com/r/LocalLLaMA/comments/1oqt8fn/hermes4_14b_2_months_later_thoughts_opinions/ | Low_Poetry5287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqt8fn | false | null | t3_1oqt8fn | /r/LocalLLaMA/comments/1oqt8fn/hermes4_14b_2_months_later_thoughts_opinions/ | false | false | self | 1 | null |
Has anyone used Generative UI tools to make complex content easier to understand? | 1 | So, I was working on this blog about Zendesk alternatives, right? Pulled a ton of info from G2 reviews and ended up with what felt like a mini e-book. Seriously, it was a wall of text and I figured… nobody’s going to read all this.
But then I stumbled on this random AI tool that just turned all that giant content into a super simple visual summary. Bam—all the main stuff in one graphic, way easier to actually look at (see screenshot below for what I mean).
Honestly, I feel like this kind of generative UI needs to be everywhere. Feels like people just want quick, visual stuff now instead of reading essays.
* Anyone else tried using these AI tools to shrink down big info dumps?
* Do you prefer visual summaries or do you still read full writeups?
* If you’ve got cool examples (good or bad), drop them—I want to check them out!
[Text Version](https://preview.redd.it/t6reuz6estzf1.png?width=1920&format=png&auto=webp&s=b5d665befa8021c084366c9c5ab18d3f8e685e92)
[Generative UI version.](https://preview.redd.it/qfxwt1mhstzf1.png?width=1396&format=png&auto=webp&s=6af0d5598b01e5eeb3888e40fac2e5c98318dc90)
| 2025-11-07T12:27:36 | https://www.reddit.com/r/LocalLLaMA/comments/1oqt2f5/has_anyone_used_generative_ui_tools_to_make/ | venkatweetz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqt2f5 | false | null | t3_1oqt2f5 | /r/LocalLLaMA/comments/1oqt2f5/has_anyone_used_generative_ui_tools_to_make/ | false | false | 1 | null | |
Kimi K2 Thinking outperforms Claude Opus 4 while being ~30x cheaper | 0 | ERROR: type should be string, got "https://preview.redd.it/7e64vl7mutzf1.png?width=3568&format=png&auto=webp&s=48a5f3c5b0cf392294f2aec8a6131bcdfc572241\n\nKimi-k2-thinking achieves the highest combinatorics score on GDM’s IMO-AnswerBench (65.5% overall)\n\nhttps://preview.redd.it/e31uduq2vtzf1.png?width=4096&format=png&auto=webp&s=4f2c06070a714268a33ac6f5c750c0adfc977556\n\n" | 2025-11-07T12:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/1oqszjs/kimi_k2_thinking_outperforms_claude_opus_4_while/ | InternationalAsk1490 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqszjs | false | null | t3_1oqszjs | /r/LocalLLaMA/comments/1oqszjs/kimi_k2_thinking_outperforms_claude_opus_4_while/ | false | false | 0 | null | |
How do you evaluate the quality of your knowledge base? | 9 | Typically, we measure metrics related to the retrieval pipeline — such as retriever performance, reranker accuracy, and generation quality.
However, I believe it’s equally important to have metrics that assess the quality of the underlying knowledge base itself. For example:
Are there contradictory or outdated documents?
Are there duplicates or near-duplicates causing noise?
Is the content complete and consistent across topics?
How do you evaluate this?
Are there existing frameworks or tools for assessing knowledge base quality?
What approaches or best practices do you use? | 2025-11-07T11:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1oqsf0j/how_do_you_evaluate_the_quality_of_your_knowledge/ | CapitalShake3085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqsf0j | false | null | t3_1oqsf0j | /r/LocalLLaMA/comments/1oqsf0j/how_do_you_evaluate_the_quality_of_your_knowledge/ | false | false | self | 9 | null |
gpt-oss-20b in vscode | 1 | I'm trying to use gpt-oss-20b in Vscode.
Has anyone managed to get it working with a OpenSource/Free coding agent plugin?
I tried RooCode and [Continue.dev](http://Continue.dev), in both cases it failed in the tool calls.
| 2025-11-07T11:45:42 | https://www.reddit.com/r/LocalLLaMA/comments/1oqs8yr/gptoss20b_in_vscode/ | stable_monk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqs8yr | false | null | t3_1oqs8yr | /r/LocalLLaMA/comments/1oqs8yr/gptoss20b_in_vscode/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
Hello guys im new in this community i have qestions | 0 | So I wil be geting acer nitro 16 rtx 5070 and ryzen 7 270 what model can I run , please can someone specify what can I run, wil the 5070ti wil be improvement | 2025-11-07T11:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1oqrp3z/hello_guys_im_new_in_this_community_i_have/ | Former_Location_5543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqrp3z | false | null | t3_1oqrp3z | /r/LocalLLaMA/comments/1oqrp3z/hello_guys_im_new_in_this_community_i_have/ | false | false | self | 0 | null |
How do large companies securely integrate LLMs without exposing confidential data? | 2 | I'm exploring ways to use LLMs as autonomous agents to interact with our internal systems (ERP, chat, etc.). The major roadblock is data confidentiality.
I understand that services like Amazon Bedrock, Anthropic, and OpenAI offer robust security features and Data Processing Addendums (DPAs). However, by their nature, using their APIs means sending our data to a third party. While a DPA is a legal safeguard, the technical act of sharing confidential data outside our perimeter is the core concern.
I've looked into GPU hosting (like [vast.ai](https://vast.ai)) for a "local" deployment, but it's not ideal. We only need inference during working hours, so paying for a 24/7 instance is wasteful. The idea of spinning up a new instance daily and setting it up from scratch seems like an operational nightmare.
This leads me to my main questions:
1. **Security of Bedrock/APIs:** For those using Amazon Bedrock or similar managed services, do you consider it secure *enough* for truly confidential data (e.g., financials, customer PII), relying solely on their compliance certifications and DPAs?
2. **Big Company Strategies:** How do giants like Morgan Stanley or [Booking.com](https://Booking.com) integrate LLMs? Do they simply accept the risk and sign DPAs, or do they exclusively use private, on-premises deployments?
Any insights or shared experiences would be greatly appreciated! | 2025-11-07T11:12:00 | https://www.reddit.com/r/LocalLLaMA/comments/1oqrn1f/how_do_large_companies_securely_integrate_llms/ | Straight_Pin_8618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqrn1f | false | null | t3_1oqrn1f | /r/LocalLLaMA/comments/1oqrn1f/how_do_large_companies_securely_integrate_llms/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=216&crop=smart&auto=webp&s=5d4693d9fc011431e9348152136fa7a13c95504b', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=320&crop=smart&auto=webp&s=93ef867725a538dad3a6209e5062d3d1de60aeaa', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=640&crop=smart&auto=webp&s=fc186b216811c20876ecdaf0e913cc0b59498d7a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=960&crop=smart&auto=webp&s=67812638cc7d2b930cd8bebf733409c3b2d92397', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=1080&crop=smart&auto=webp&s=bc092f31a95e3a3df682dc8f7222b0fb1363a5df', 'width': 1080}], 'source': {'height': 2250, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?auto=webp&s=c5b1db2b11bd21a955cbe1e863cde94ef57607f4', 'width': 4000}, 'variants': {}}]} |
The real difference between Pipecat and LiveKit? | 1 | I know both Pipecat and LiveKit are open source Python frameworks used for building AI phone assistants, but I’m having a hard time understanding the real practical differences between them.
From a development and production standpoint, when should i choose pipecat vs LiveKit and is one more developed then the other? (In both cases I’m referring to the self-hosted setup.)
appreciate your help | 2025-11-07T11:10:27 | https://www.reddit.com/r/LocalLLaMA/comments/1oqrm0r/the_real_difference_between_pipecat_and_livekit/ | ZeroKelvinMood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqrm0r | false | null | t3_1oqrm0r | /r/LocalLLaMA/comments/1oqrm0r/the_real_difference_between_pipecat_and_livekit/ | false | false | self | 1 | null |
Why can't a local model (Qwen 3 14b) call correctly a local agent ? | 0 | Using Qwen 3 14B as an orchestrator for a Claude 4.5 review agent. Despite clear routing logic, Qwen calls the agent without passing the code snippets. When the agent requests the code again, Qwen ignores it and starts doing the review itself, even though Claude should handle that part.
System: Ryzen 5 3600, 32 GB RAM, RTX 2080, Ubuntu 24 (WSL on Windows 11)
Conversation log: [https://opencode.ai/s/eDgu32IS](https://opencode.ai/s/eDgu32IS)
I just started experimenting with OpenCode and agents — anyone know why Qwen behaves like this? | 2025-11-07T10:22:53 | Toulalaho | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oqqt97 | false | null | t3_1oqqt97 | /r/LocalLLaMA/comments/1oqqt97/why_cant_a_local_model_qwen_3_14b_call_correctly/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ovvkqres9tzf1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=108&crop=smart&auto=webp&s=c78fd7dc2e0df2e8935acb191d526378d2a0493d', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=216&crop=smart&auto=webp&s=013d0363e516d6da23eed0e84fdfbe7409692854', 'width': 216}, {'height': 309, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=320&crop=smart&auto=webp&s=dc245f58978a2ce891985a9ad4fc7d78657c54cd', 'width': 320}, {'height': 618, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=640&crop=smart&auto=webp&s=933d4703d25fd2436411c9e201df2d5213b11bd7', 'width': 640}, {'height': 927, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=960&crop=smart&auto=webp&s=88c4576aba28dc390e31f7e1eb415f71411d22ed', 'width': 960}, {'height': 1043, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?width=1080&crop=smart&auto=webp&s=193fe3abfddd8e3b6c3ada001cfd1c59b0359b3f', 'width': 1080}], 'source': {'height': 1129, 'url': 'https://preview.redd.it/ovvkqres9tzf1.png?auto=webp&s=72eee479b099afa1fb567e03420b72df567243e2', 'width': 1168}, 'variants': {}}]} | |
Kimi-K2 thinking self host help needed | 1 | We plan to host Kimi-K2 for our multiple clients preferably with full context length.
How can it handle around 20-40 requests at once with good context length?
We can get 6xh200s or similar specs systems.
But we want to know, What’s the cheapest way to go about it? | 2025-11-07T10:08:20 | https://www.reddit.com/r/LocalLLaMA/comments/1oqqkpu/kimik2_thinking_self_host_help_needed/ | work_urek03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqqkpu | false | null | t3_1oqqkpu | /r/LocalLLaMA/comments/1oqqkpu/kimik2_thinking_self_host_help_needed/ | false | false | self | 1 | null |
Release: VellumK2 Fantasy Datasets — 5 Complete DPO Datasets totalling 17k response pairs | 6 | Wanted share my series of writing datasets I've created using Kimi K2 0905 and Phi 4 Mini Instruct (which I thought would be a good negative signal since it inherently has a lot of slop and was purely trained on synthetic data).
* **VellumK2-Fantasy-DPO-Tiny-01**: 126 rows - Testing and validation
* **VellumK2-Fantasy-DPO-Small-01**: 1,038 rows - Light training and experiments
* **VellumK2-Fantasy-DPO-Medium-01**: 3,069 rows - Combination training component
* **VellumK2-Fantasy-DPO-Large-01**: 10,222 rows - Larger scale training
* **VellumK2-Unfettered-DPO-01**: 2,576 rows - Decensoring dataset to reduce refusal on sensitive content
* **Collection**: [https://huggingface.co/collections/lemon07r/vellumforge2-datasets](https://huggingface.co/collections/lemon07r/vellumforge2-datasets)
Check out some of the prompts and responses in the HF dataset viewer, they're pretty good quality. A lot better the same older synthetic datasets of this type, since we have access to better writing models now (Kimi K2 in this case).
These were generated using my tool [https://github.com/lemon07r/VellumForge2](https://github.com/lemon07r/VellumForge2) which I shared here a lil while ago, but it's been overhauled very much since then. It's been made much simpler/straight forward, significantly more robust, got a lot of fixes, added checkpointing + session resume, cleaned up the documentation, made it much more configurable now, and spent a ton of time on performance improvements (mostly spent profiling these improvements for regressions).
A 4k row dataset takes roughly only 2 hours\~ using a rate limited free provider like nvidia nim api at 40 RPM and a small local model for rejected responses on a low-mid end gpu (6700 XT running llama.cpp server in my case, you'll get better results with an nvidia card, or using vLLM). The 10k row large dataset only took around 7 hours to complete.
https://preview.redd.it/3zmmksi6zszf1.png?width=1767&format=png&auto=webp&s=ff410b21c5c748d429cf2bcadef6db8783e74ec7
| 2025-11-07T09:24:57 | https://www.reddit.com/r/LocalLLaMA/comments/1oqpvyr/release_vellumk2_fantasy_datasets_5_complete_dpo/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqpvyr | false | null | t3_1oqpvyr | /r/LocalLLaMA/comments/1oqpvyr/release_vellumk2_fantasy_datasets_5_complete_dpo/ | false | false | 6 | null | |
Built a local AI Benchmark Tool - just something I made for my own testing | 1 | [removed] | 2025-11-07T09:23:37 | https://www.reddit.com/gallery/1oqpv6t | PKC83 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oqpv6t | false | null | t3_1oqpv6t | /r/LocalLLaMA/comments/1oqpv6t/built_a_local_ai_benchmark_tool_just_something_i/ | false | false | 1 | null | |
Best way to handle foreign language documents | 1 | I’ve been working on a **receipt extraction pipeline** involving a lot of **Hebrew-language receipts** (right-to-left text, mixed fonts, etc.).
I’ve tested **Qwen3-VL-7B** and **Qwen3-VL-30B-A3B**, but both struggle with extracting **raw Hebrew text** directly from images — the layout is fine, but the actual text is often garbled or partially Latinized.
Interestingly, when I first run the images through a **dedicated Hebrew OCR** (like [i2OCR](https://www.i2ocr.com/free-online-hebrew-ocr)) and *then* feed the recognized text into an **LLM for field extraction and translation**, the results are far more accurate.
This makes me wonder:
* Are **VLMs** (e.g., Qwen-VL, InternVL, Gemini, etc.) generally weak on **non-Latin OCR** tasks?
* Would it be better to pair a strong **Hebrew OCR (Tesseract + DictaLM-2.0 fine-tuned for text correction)** with an LLM pipeline instead of relying on a VLM alone?
* Has anyone tried **multilingual OCR models** (like TrOCR-base-stage1-hebrew or PaddleOCR multilingual) for similar cases? | 2025-11-07T09:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1oqpu0v/best_way_to_handle_foreign_language_documents/ | Ok_Television_9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqpu0v | false | null | t3_1oqpu0v | /r/LocalLLaMA/comments/1oqpu0v/best_way_to_handle_foreign_language_documents/ | false | false | self | 1 | null |
Minimax will launch a coding package on November 14th | 24 | 2025-11-07T09:20:02 | https://www.reddit.com/gallery/1oqpt4w | Fun-Doctor6855 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oqpt4w | false | null | t3_1oqpt4w | /r/LocalLLaMA/comments/1oqpt4w/minimax_will_launch_a_coding_package_on_november/ | false | false | 24 | null | ||
Idea idk | 0 | So what if you trained a trillion tokens worth of weights of large language models of different purposes, sizes, functions, and then ran all of that through a similar transformer architecture training to output a model and then have an interface that can communicate with that? | 2025-11-07T08:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/1oqp8rv/idea_idk/ | mr_happy_nice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqp8rv | false | null | t3_1oqp8rv | /r/LocalLLaMA/comments/1oqp8rv/idea_idk/ | false | false | self | 0 | null |
Kimi K2 Thinking Fast Provider Waiting Room | 0 | Please update us if you find a faster inference Provider for Kimi K2 Thinking. The Provider mustn't distill it! | 2025-11-07T08:24:28 | marvijo-software | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oqoy4m | false | null | t3_1oqoy4m | /r/LocalLLaMA/comments/1oqoy4m/kimi_k2_thinking_fast_provider_waiting_room/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'g6uzazbwoszf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/g6uzazbwoszf1.png?width=108&crop=smart&auto=webp&s=6039419b70a4da84942ae091a7dadb39cf718470', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/g6uzazbwoszf1.png?width=216&crop=smart&auto=webp&s=a0f8b7e0d8f1f4bfbcd6ac5ce1f7e7878e526d42', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/g6uzazbwoszf1.png?width=320&crop=smart&auto=webp&s=5f2366172a8ae46abcfc614adc23c20a1a14d0d8', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/g6uzazbwoszf1.png?width=640&crop=smart&auto=webp&s=84898472336384eabb1472dfa785deb5aba45382', 'width': 640}], 'source': {'height': 1548, 'url': 'https://preview.redd.it/g6uzazbwoszf1.png?auto=webp&s=b7da21922dc059c438b738a961b357e954880453', 'width': 720}, 'variants': {}}]} | |
LLM Running On Multi GPU With PCIe 1x | 0 | Noob here sorry for the amateur question, currently I have RTX 4070 as my GPU, I plan on getting new GPU to run LLM but my motherboard only has 1x PCie 3.0 slot left. Can I run single large model on a setup like that ? | 2025-11-07T08:08:54 | https://www.reddit.com/r/LocalLLaMA/comments/1oqopjv/llm_running_on_multi_gpu_with_pcie_1x/ | Emergency_exit_now | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oqopjv | false | null | t3_1oqopjv | /r/LocalLLaMA/comments/1oqopjv/llm_running_on_multi_gpu_with_pcie_1x/ | false | false | self | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.