Dataset schema (column, dtype, observed range):

- title: string, lengths 1–300
- score: int64, 0–8.54k
- selftext: string, lengths 0–41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, lengths 0–878
- author: string, lengths 3–20
- domain: string, lengths 0–82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, 0–2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, lengths 646–1.8k
- name: string, length 10
- permalink: string, lengths 33–82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, lengths 4–213
- ups: int64, 0–8.54k
- preview: string, lengths 301–5.01k

The records below follow this column order, one value per line.
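The flat rows that follow can be recovered into labeled records by zipping each run of values against the schema columns in order. A minimal sketch, not tied to any official loader: the helper name `row_to_record` and the treatment of the 1970 epoch as a "never edited" sentinel are assumptions, and the example values are taken from one of the rows below.

```python
# Columns in the order the dump emits them, per the schema above.
COLUMNS = [
    "title", "score", "selftext", "created", "url", "author", "domain",
    "edited", "gilded", "gildings", "id", "locked", "media", "name",
    "permalink", "spoiler", "stickied", "thumbnail", "ups", "preview",
]

# Assumption: the dump appears to use the Unix epoch for posts that were
# never edited.
EPOCH = "1970-01-01T00:00:00"

def row_to_record(values):
    """Zip one flat row of 20 values into a labeled dict."""
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} fields, got {len(values)}")
    rec = dict(zip(COLUMNS, values))
    rec["was_edited"] = rec["edited"] != EPOCH
    return rec

# Example row: the "Kimi k2 on cli ?" post from this dump.
example = row_to_record([
    "Kimi k2 on cli ?", 3,
    "Do you know if we can use the api key of kimi k2 in a cli like Claude code ?",
    "2025-07-13T20:09:37",
    "https://www.reddit.com/r/LocalLLaMA/comments/1lz2i5h/kimi_k2_on_cli/",
    "Equivalent-Fig1588", "self.LocalLLaMA", EPOCH, 0, "{}", "1lz2i5h",
    False, None, "t3_1lz2i5h",
    "/r/LocalLLaMA/comments/1lz2i5h/kimi_k2_on_cli/",
    False, False, "self", 3, None,
])
print(example["title"], example["was_edited"])  # Kimi k2 on cli ? False
```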
Can you add pacing control option in TTS ?
6
I'm trying Fish Speech Open Audio S1 mini. This one: [https://github.com/fishaudio/fish-speech](https://github.com/fishaudio/fish-speech) In the web ui, there is no pacing option. Is there anyway we can control the pacing? When you upload a referenced audio, put a text prompt and generate the audio, I...
2025-07-14T01:53:18
https://www.reddit.com/r/LocalLLaMA/comments/1lza5bu/can_you_add_pacing_control_option_in_tts/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lza5bu
false
null
t3_1lza5bu
/r/LocalLLaMA/comments/1lza5bu/can_you_add_pacing_control_option_in_tts/
false
false
self
6
{'enabled': False, 'images': [{'id': 'dGuJLatJajGp-nrEX3yz6d31fPziHLA-BG1PXpYICXk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dGuJLatJajGp-nrEX3yz6d31fPziHLA-BG1PXpYICXk.png?width=108&crop=smart&auto=webp&s=95436e0dba572c9aee2a59c53fd67015c882eebb', 'width': 108}, {'height': 108, 'url': 'h...
I want to create a no filter character.ai clone, which model should i use?
1
[removed]
2025-07-14T00:31:51
https://www.reddit.com/r/LocalLLaMA/comments/1lz8h37/i_want_to_create_a_no_filter_characterai_clone/
Giaochab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz8h37
false
null
t3_1lz8h37
/r/LocalLLaMA/comments/1lz8h37/i_want_to_create_a_no_filter_characterai_clone/
false
false
self
1
null
Which LLM should I use to generate high quality Q&A from physics textbook chapters?
26
I’m looking for LLMs to generate questions and answers from physics textbook chapters. The chapters I’ll provide can be up to 10 pages long and may include images. I’ve tried GPT, but the question quality is poor and often too similar to the examples I give. Claude didn’t work either as it rejects the input file, sayin...
2025-07-14T00:10:40
https://www.reddit.com/r/LocalLLaMA/comments/1lz81ea/which_llm_should_i_use_to_generate_high_quality/
WhiteTentacle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz81ea
false
null
t3_1lz81ea
/r/LocalLLaMA/comments/1lz81ea/which_llm_should_i_use_to_generate_high_quality/
false
false
self
26
null
Will this work?
0
Planning to build a budget local LLM server with 2 MI50s. Since they need a Radeon VII BIOS flashed onto them to provide display output, I was wondering if I can just use a CPU with an iGPU to skip that part. Something like a 5600G or an Intel CPU without the F suffix.
2025-07-14T00:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1lz7vh3/will_this_work/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz7vh3
false
null
t3_1lz7vh3
/r/LocalLLaMA/comments/1lz7vh3/will_this_work/
false
false
self
0
null
Safe methods of increasing Context Window of models?
8
Let's say we have a 30b, 24b, 14b, or 7b model that excels in quality, but the context window is like... 8k, or worse, 4k. What can you possibly do in this case? Back in 2022 I used an unknown GPT plugin that used PDF files as permanent memory without consuming the context window; even now it would be really useful if there...
2025-07-13T22:28:03
https://www.reddit.com/r/LocalLLaMA/comments/1lz5sm6/safe_methods_of_increasing_context_window_of/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz5sm6
false
null
t3_1lz5sm6
/r/LocalLLaMA/comments/1lz5sm6/safe_methods_of_increasing_context_window_of/
false
false
self
8
null
480mm wide multi GPU frame - can only find 500+mm frames
2
Similar in principle to the '6x GPU Build. 4x RTX 3090 and 2x MI60. Epyc 7002' someone posted several months ago. But that thing is ginormous (dual PSUs on either side of the motherboard), so it's likely the 636mm-wide version. I have a 12U (19-inch) rack I want to install this into. The total rack width is 520mm BUT with the T-...
2025-07-13T22:08:49
https://www.reddit.com/r/LocalLLaMA/comments/1lz5cwa/480mm_wide_multi_gpu_frame_can_only_find_500mm/
munkiemagik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz5cwa
false
null
t3_1lz5cwa
/r/LocalLLaMA/comments/1lz5cwa/480mm_wide_multi_gpu_frame_can_only_find_500mm/
false
false
https://b.thumbs.redditm…trLcegu0T6lk.jpg
2
null
Computing embeddings offline for Gemma 3 1B (on-device model)
7
Google has the on-device model Gemma 3 1B that I am using for my scam detection Android app. Google has instructions for RAG here - [https://ai.google.dev/edge/mediapipe/solutions/genai/rag/android](https://ai.google.dev/edge/mediapipe/solutions/genai/rag/android) But that gets too slow for loading even 1000 chunks. A...
2025-07-13T21:44:11
https://www.reddit.com/r/LocalLLaMA/comments/1lz4sk3/computing_embeddings_offline_for_gemma_3_1b/
Basic-Donut1740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz4sk3
false
null
t3_1lz4sk3
/r/LocalLLaMA/comments/1lz4sk3/computing_embeddings_offline_for_gemma_3_1b/
false
false
self
7
{'enabled': False, 'images': [{'id': 'iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/iaG91J8UPyw0LumfZ8FtQViH0YMYP9q-z6paL0E-fpE.png?width=108&crop=smart&auto=webp&s=a1cc13c1cb1062998d0e6a2cc88bc3272f2368f7', 'width': 108}, {'height': 135, 'url': 'h...
📢 [Paid Study] Interviewing Individual AI Agent Developers – Share Your Experience + $15/hr
0
📢 Paid Research Interview Opportunity for AI Agent Developers Hi everyone – I’m Mingyao, a researcher from the University of Washington, conducting a study on how **individual AI agent developers handle privacy and security** when building autonomous systems using tools like LangChain, GPT, AutoGPT, etc. 🧠 Why it m...
2025-07-13T21:28:07
https://www.reddit.com/r/LocalLLaMA/comments/1lz4f51/paid_study_interviewing_individual_ai_agent/
TalkComfortable9144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz4f51
false
null
t3_1lz4f51
/r/LocalLLaMA/comments/1lz4f51/paid_study_interviewing_individual_ai_agent/
false
false
self
0
null
I vibe coded an open source Rust server to standardize context serving for LLMs & AI agents 🚀
0
Hey everyone, I wanted to share a little side project I vibe coded recently: context-server-rs — a lightweight Rust server that implements the Model Context Protocol (MCP) to help with the context engineering problem in AI workflows. While building my own AI-powered tools, I kept running into the same issue: How do y...
2025-07-13T21:23:49
https://www.reddit.com/r/LocalLLaMA/comments/1lz4bg2/i_vibe_coded_an_open_source_rust_server_to/
hrirkslab
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz4bg2
false
null
t3_1lz4bg2
/r/LocalLLaMA/comments/1lz4bg2/i_vibe_coded_an_open_source_rust_server_to/
false
false
self
0
null
Added MCP Support to Kimi.com via MCP SuperAssistant
10
Now use MCP in [Kimi.com](http://Kimi.com) :) Log in to Kimi for the full experience and file support; without logging in, file support is not available. Support was added in version v0.5.3. Added a Settings panel with custom delays for auto execute, auto submit, and auto insert. Improved the system prompt for better performan...
2025-07-13T20:55:56
https://v.redd.it/jb41717pfpcf1
EfficientApartment52
v.redd.it
1970-01-01T00:00:00
0
{}
1lz3n8n
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jb41717pfpcf1/DASHPlaylist.mpd?a=1755032172%2CMDhiYWI4ZGUxYTc2MWU1ZmJjZDNiMjIyNGY4ODdiMzg0YzcxMzYzZTI3OTllYjVlNWIzODEyYzg1Y2ZmNjEwMQ%3D%3D&v=1&f=sd', 'duration': 89, 'fallback_url': 'https://v.redd.it/jb41717pfpcf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lz3n8n
/r/LocalLLaMA/comments/1lz3n8n/added_mcp_support_to_kimicom_via_mcp/
false
false
https://external-preview…57895e3b70702e7e
10
{'enabled': False, 'images': [{'id': 'N284YTEwN3BmcGNmMXWHl8ryF0G-UJ9iQeTnkvyOl6nNy-YVT3WaTg4-v78Y', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/N284YTEwN3BmcGNmMXWHl8ryF0G-UJ9iQeTnkvyOl6nNy-YVT3WaTg4-v78Y.png?width=108&crop=smart&format=pjpg&auto=webp&s=abf24aff4cb544a6fbc7cd15d2c121b4326c9...
Local free PDF parser for academic pdfs
2
So I've tried different (free) ways to parse academic PDFs \*locally\*, so I can get the author's name, publication year, and abbreviated title. The two approaches are: (1) GROBID (lightweight) (2) PyPDF2 + pytesseract + pdf2image. Neither of them is great, with a success rate of around 60% (full correctness). A...
2025-07-13T20:29:17
https://www.reddit.com/r/LocalLLaMA/comments/1lz2zt2/local_free_pdf_parser_for_academic_pdfs/
Objective_Science965
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz2zt2
false
null
t3_1lz2zt2
/r/LocalLLaMA/comments/1lz2zt2/local_free_pdf_parser_for_academic_pdfs/
false
false
self
2
null
Kimi k2 on cli ?
3
Do you know if we can use the api key of kimi k2 in a cli like Claude code ?
2025-07-13T20:09:37
https://www.reddit.com/r/LocalLLaMA/comments/1lz2i5h/kimi_k2_on_cli/
Equivalent-Fig1588
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz2i5h
false
null
t3_1lz2i5h
/r/LocalLLaMA/comments/1lz2i5h/kimi_k2_on_cli/
false
false
self
3
null
Some small PPL benchmarks on DeepSeek R1 0528 quants, from Unsloth and ubergarm, from 1.6bpw (IQ1_S_R4) to 4.7bpw (IQ4_KS_R4) (and Q8/FP8 baseline). Also a few V3 0324 ones.
85
Hi there guys, hoping you're doing fine. As always with PPL benchmarks, take them with a grain of salt, as they may not represent the quality of the model itself, but they may help as a guide to how much a model could be affected by quantization. As has been mentioned sometimes, and a bit of a spoiler: quantizatio...
2025-07-13T19:40:41
https://www.reddit.com/r/LocalLLaMA/comments/1lz1s8x/some_small_ppl_benchmarks_on_deepseek_r1_0528/
panchovix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz1s8x
false
null
t3_1lz1s8x
/r/LocalLLaMA/comments/1lz1s8x/some_small_ppl_benchmarks_on_deepseek_r1_0528/
false
false
https://b.thumbs.redditm…-V8PPj-Eb8sA.jpg
85
{'enabled': False, 'images': [{'id': '7ISepN1ZhP4X7ew10cBPIuuqsS75KZVYV_G0DmVulbM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7ISepN1ZhP4X7ew10cBPIuuqsS75KZVYV_G0DmVulbM.png?width=108&crop=smart&auto=webp&s=916df2f54c0cd22cd11ed7809b3c140706edfcee', 'width': 108}, {'height': 116, 'url': 'h...
We're all context for llms
0
The way llm agents are going, everything is going to be rebuilt for them.
2025-07-13T19:40:15
https://www.reddit.com/r/LocalLLaMA/comments/1lz1rv1/were_all_context_for_llms/
Proud-Victory2562
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz1rv1
false
null
t3_1lz1rv1
/r/LocalLLaMA/comments/1lz1rv1/were_all_context_for_llms/
false
false
self
0
null
Problems with LocalDocs on GPT4All
2
Hi folks, when I put a simple markdown (.md) file in the LocalDocs folder (it has full permissions) it tries to embed, but never moves off 0% -- I'm not sure if something is broken or I'm doing something wrong -- can anyone help?
2025-07-13T19:26:33
https://www.reddit.com/r/LocalLLaMA/comments/1lz1fjz/problems_with_localdocs_on_gpt4all/
slrg1968
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz1fjz
false
null
t3_1lz1fjz
/r/LocalLLaMA/comments/1lz1fjz/problems_with_localdocs_on_gpt4all/
false
false
self
2
null
Anyone else thinks that Kimi-K2 has the potential to self-bootstrap?
0
Kimi-K2 seems pretty capable, and its priors seem rich as well, which led to my little thought experiment and to me spending time looking across multiple other new research papers from UC Berkeley, Alibaba, NVIDIA, and DeepMind. Here are my notes and thoughts: Use ProRL (“ProRL: Prolonged Reinforcement Learning Expand...
2025-07-13T19:24:47
https://www.reddit.com/r/LocalLLaMA/comments/1lz1e20/anyone_else_thinks_that_kimik2_has_the_potential/
Key_Clerk_1431
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz1e20
false
null
t3_1lz1e20
/r/LocalLLaMA/comments/1lz1e20/anyone_else_thinks_that_kimik2_has_the_potential/
false
false
self
0
null
Madness, the ignorant's question. Would it be possible to lighten an LLM model?
4
Hello everyone, here is a question that has been in my head for some time: would it be possible to lighten an LLM by removing content? I know it's a question that will sound crazy and stupid to someone really knowledgeable. The idea would be, if possible, to remove information that is not relevant to the user on a ...
2025-07-13T19:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1lz17w8/madness_the_ignorants_question_would_it_be/
Macestudios32
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz17w8
false
null
t3_1lz17w8
/r/LocalLLaMA/comments/1lz17w8/madness_the_ignorants_question_would_it_be/
false
false
self
4
null
Need advice on search pipeline for retail products (BM25 + embeddings + reranking)
1
Hey everyone, I’m working on building a search engine for a retail platform with a product catalog that includes things like title, description, size, color, and categories (e.g., “men’s clothing > shirts” or “women’s shoes”). I'm still new to search, embeddings, and reranking, and I’ve got a bunch of questions. Wou...
2025-07-13T18:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1lz0hk3/need_advice_on_search_pipeline_for_retail/
zedeleyici3401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz0hk3
false
null
t3_1lz0hk3
/r/LocalLLaMA/comments/1lz0hk3/need_advice_on_search_pipeline_for_retail/
false
false
self
1
null
OpenAI’s announcement of their new Open Weights (Probably)
0
“We have discovered a novel method to lock Open Weights for models to prevent fine tuning, safety reversal with the only side effect being the weights cannot be quantized. This method builds off of quantization aware training, in effect reversing that process. Any attempt to fine tune, adjust safe guards or quantizatio...
2025-07-13T18:40:39
https://www.reddit.com/r/LocalLLaMA/comments/1lz0b1p/openais_announcement_of_their_new_open_weights/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lz0b1p
false
null
t3_1lz0b1p
/r/LocalLLaMA/comments/1lz0b1p/openais_announcement_of_their_new_open_weights/
false
false
self
0
{'enabled': False, 'images': [{'id': 'KJ7pkyeZcmLTiq2mOGe-ze2YhgYpuhFj7rAwbWEG8d8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/KJ7pkyeZcmLTiq2mOGe-ze2YhgYpuhFj7rAwbWEG8d8.png?width=108&crop=smart&auto=webp&s=49a3c770178c2a312b52ba291a11497cae75ca76', 'width': 108}, {'height': 113, 'url': 'h...
Kimi k2 not available on iPhone
0
I use the Kimi app on my iPhone, but it seems like the thinking option only offers Kimi 1.5. Am I doing something wrong here, or do I have to activate it somehow?
2025-07-13T17:41:23
https://www.reddit.com/r/LocalLLaMA/comments/1lyyu6i/kimi_k2_not_available_on_iphone/
ThatrandomGuyxoxo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyyu6i
false
null
t3_1lyyu6i
/r/LocalLLaMA/comments/1lyyu6i/kimi_k2_not_available_on_iphone/
false
false
self
0
null
i need the best local llm i can run on my gaming pc
0
I need a good LLM I can run on these specs. Should I wait for Grok 3? https://preview.redd.it/4ky5a383hocf1.png?width=993&format=png&auto=webp&s=a676584defa0b0894ca945493a2b5ca4413aa1f7
2025-07-13T17:38:59
https://www.reddit.com/r/LocalLLaMA/comments/1lyyryy/i_need_the_best_local_llm_i_can_run_on_my_gaming/
Interesting_Pay7816
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyyryy
false
null
t3_1lyyryy
/r/LocalLLaMA/comments/1lyyryy/i_need_the_best_local_llm_i_can_run_on_my_gaming/
false
false
https://a.thumbs.redditm…cklsmPU62Fs4.jpg
0
null
How to get LLM structured outputs in TS?
4
Hey everyone, I come from a Python background where I use Pydantic AI a lot, especially for handling structured data and validation. I’m starting a new project in TypeScript and I’m looking for libraries or frameworks that can help me achieve similar functionality, specifically for structured output and data validatio...
2025-07-13T17:35:07
https://www.reddit.com/r/LocalLLaMA/comments/1lyyoff/how_to_get_llm_structured_outputs_in_ts/
too_much_lag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyyoff
false
null
t3_1lyyoff
/r/LocalLLaMA/comments/1lyyoff/how_to_get_llm_structured_outputs_in_ts/
false
false
self
4
null
Never seen fastllm mentioned here, anyone using it? (kimi k2 local)
56
Got tired of waiting for K2 GGUFs and found this guy: [https://huggingface.co/fastllm/Kimi-K2-Instruct-INT4MIX/tree/main](https://huggingface.co/fastllm/Kimi-K2-Instruct-INT4MIX/tree/main) There is a typo in the commands, but it seems to work great and is really easy to get going: pip install ftllm ftllm server fas...
2025-07-13T17:27:42
https://www.reddit.com/r/LocalLLaMA/comments/1lyyhwz/never_seen_fastllm_mentioned_here_anyone_using_it/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyyhwz
false
null
t3_1lyyhwz
/r/LocalLLaMA/comments/1lyyhwz/never_seen_fastllm_mentioned_here_anyone_using_it/
false
false
self
56
{'enabled': False, 'images': [{'id': 'IHmG84k97laJH2U80Nq7hMuZfDwRJ7BBdwRt7MKEcMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IHmG84k97laJH2U80Nq7hMuZfDwRJ7BBdwRt7MKEcMw.png?width=108&crop=smart&auto=webp&s=1ae33582b4b0826302a3cd3ed7609af7df200f8d', 'width': 108}, {'height': 116, 'url': 'h...
What kind of hardware would I need to self-host a local LLM for coding (like Cursor)?
1
Hey everyone, I’m interested in running a self-hosted local LLM for coding assistance—something similar to what Cursor offers, but fully local for privacy and experimentation. Ideally, I’d like it to support code completion, inline suggestions, and maybe even multi-file context. What kind of hardware would I realistic...
2025-07-13T17:23:58
https://www.reddit.com/r/LocalLLaMA/comments/1lyyelr/what_kind_of_hardware_would_i_need_to_selfhost_a/
ClassicHabit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyyelr
false
null
t3_1lyyelr
/r/LocalLLaMA/comments/1lyyelr/what_kind_of_hardware_would_i_need_to_selfhost_a/
false
false
self
1
null
Jan doesn't show all available GGUF models from Hugging Face
13
I've noticed that when using Jan's built-in Hub, the list of available models seems very limited. Even though there are many GGUF models available on Hugging Face (with proper formatting and quantization), they often don't appear in the search results inside Jan. I can still download them manually from H...
2025-07-13T17:20:41
https://www.reddit.com/r/LocalLLaMA/comments/1lyybq8/jan_doesnt_show_all_available_gguf_models_from/
SensitiveDisk0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyybq8
false
null
t3_1lyybq8
/r/LocalLLaMA/comments/1lyybq8/jan_doesnt_show_all_available_gguf_models_from/
false
false
self
13
null
Easy way to log input/output in llama.cpp? (server and chat)
0
Hi. I've been trying to automatically log the inputs and outputs of both the CLI and the API web GUI in llama.cpp. Looking for an efficient way to do this.
2025-07-13T17:12:38
https://www.reddit.com/r/LocalLLaMA/comments/1lyy4k8/easy_way_to_log_inputoutput_in_llamacpp_server/
aayehh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyy4k8
false
null
t3_1lyy4k8
/r/LocalLLaMA/comments/1lyy4k8/easy_way_to_log_inputoutput_in_llamacpp_server/
false
false
self
0
null
IndexTTS2, the most realistic and expressive text-to-speech model so far, has leaked their demos ahead of the official launch! And... wow!
580
# IndexTTS2: A Breakthrough in Emotionally Expressive and Duration-Controlled Auto-Regressive Zero-Shot Text-to-Speech https://arxiv.org/abs/2506.21619 Features: - Fully local with open weights. - Zero-shot voice cloning. You just provide one audio file and it will extremely accurately clone the voice. - Zero-shot e...
2025-07-13T17:11:10
https://www.reddit.com/r/LocalLLaMA/comments/1lyy39n/indextts2_the_most_realistic_and_expressive/
pilkyton
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyy39n
false
null
t3_1lyy39n
/r/LocalLLaMA/comments/1lyy39n/indextts2_the_most_realistic_and_expressive/
false
false
self
580
null
dots.llm1 appears to be very sensitive to quantization?
23
With 64GB RAM I could run dots with `mmap` at Q4 with some hiccups (offloading a small part of the model to the SSD). I had [mixed feelings](https://www.reddit.com/r/LocalLLaMA/comments/1lqh55j/comment/n13cnzx/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) about the model: >I...
2025-07-13T17:08:35
https://www.reddit.com/r/LocalLLaMA/comments/1lyy0yi/dotsllm1_appears_to_be_very_sensitive_to/
Admirable-Star7088
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyy0yi
false
null
t3_1lyy0yi
/r/LocalLLaMA/comments/1lyy0yi/dotsllm1_appears_to_be_very_sensitive_to/
false
false
self
23
{'enabled': False, 'images': [{'id': 'GsxrVt2s0faAa6ahT3QXDdQ0OgidqVHBIx2G_Y7sxrA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GsxrVt2s0faAa6ahT3QXDdQ0OgidqVHBIx2G_Y7sxrA.png?width=108&crop=smart&auto=webp&s=ba3d79ad811c3a5dd1850b68eecf457670859c35', 'width': 108}, {'height': 116, 'url': 'h...
Benchmarking Qwen3 30B and 235B on dual RTX PRO 6000 Blackwell Workstation Edition
66
As promised in the banana thread. OP delivers. **Benchmarks** The following benchmarks were taken using official Qwen3 models from Huggingface's Qwen repo for consistency: * Qwen3 235B A22B GPTQ Int4 quant in Tensor Parallel * Qwen3 30B A3B BF16 in Tensor Parallel * Qwen3 30B A3B BF16 on a single GPU * Qwen3 30B A3B...
2025-07-13T16:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1lyxf1f/benchmarking_qwen3_30b_and_235b_on_dual_rtx_pro/
blackwell_tart
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyxf1f
false
null
t3_1lyxf1f
/r/LocalLLaMA/comments/1lyxf1f/benchmarking_qwen3_30b_and_235b_on_dual_rtx_pro/
false
false
self
66
null
MetaStone-S1-32B
7
2025-07-13T16:15:41
https://huggingface.co/MetaStoneTec/MetaStone-S1-32B
AaronFeng47
huggingface.co
1970-01-01T00:00:00
0
{}
1lywqae
false
null
t3_1lywqae
/r/LocalLLaMA/comments/1lywqae/metastones132b/
false
false
default
7
null
Audiobook Creator - v1.4 - Added support for Orpheus along with Kokoro
112
I'm releasing a new version of my [audiobook creator app](https://github.com/prakharsr/audiobook-creator) which now supports Kokoro and Orpheus. This release adds support for [Orpheus TTS](https://github.com/canopyai/Orpheus-TTS) which supports high-quality audio and more expressive speech. This version also adds suppo...
2025-07-13T15:52:39
https://www.reddit.com/r/LocalLLaMA/comments/1lyw5u2/audiobook_creator_v14_added_support_for_orpheus/
prakharsr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyw5u2
false
null
t3_1lyw5u2
/r/LocalLLaMA/comments/1lyw5u2/audiobook_creator_v14_added_support_for_orpheus/
false
false
self
112
{'enabled': False, 'images': [{'id': 'UoHhOkeVuwQG6KOjlcba2eN3oWFHe5ObpsY1_6Psfzk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UoHhOkeVuwQG6KOjlcba2eN3oWFHe5ObpsY1_6Psfzk.png?width=108&crop=smart&auto=webp&s=3f5d797fded6eb1be79f568494cf1d46669cab00', 'width': 108}, {'height': 108, 'url': 'h...
Would like some help setting up an MCP server for LM Studio
7
Hey guys, LM Studio recently added support for tool use with locally running LLMs. I want to add the option for my local LLM to search with my default browser for more up-to-date information, but I have no clue how. I want to keep it contained to the LM Studio UI if possible.
2025-07-13T15:44:14
https://www.reddit.com/r/LocalLLaMA/comments/1lyvyhq/like_some_help_setting_up_mcp_sever_for_lm_studio/
Night5124
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyvyhq
false
null
t3_1lyvyhq
/r/LocalLLaMA/comments/1lyvyhq/like_some_help_setting_up_mcp_sever_for_lm_studio/
false
false
self
7
null
Orpheus TTS FastAPI Server Release v1.0 (Async and Audio Issues Fixes)
46
I'm releasing a v1.0 of my [Orpheus TTS FastAPI Server](https://github.com/prakharsr/Orpheus-TTS-FastAPI). Its a high-performance FastAPI-based server that provides OpenAI-compatible Text-to-Speech (TTS) endpoints using the [Orpheus TTS](https://github.com/canopyai/Orpheus-TTS) model. The server supports async parallel...
2025-07-13T15:37:33
https://www.reddit.com/r/LocalLLaMA/comments/1lyvsqv/orpheus_tts_fastapi_server_release_v10_async_and/
prakharsr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyvsqv
false
null
t3_1lyvsqv
/r/LocalLLaMA/comments/1lyvsqv/orpheus_tts_fastapi_server_release_v10_async_and/
false
false
self
46
{'enabled': False, 'images': [{'id': 'IESpl2ifzDFrsnd4p7lWGb7dUcwjRO9ltHokb4hE5Co', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IESpl2ifzDFrsnd4p7lWGb7dUcwjRO9ltHokb4hE5Co.png?width=108&crop=smart&auto=webp&s=45d52cf1200f1189b715c76836164ff9cecf79b9', 'width': 108}, {'height': 108, 'url': 'h...
Let’s talk about models you believed are more Hyped than Hot
2
My suggestion for how to make this profitable is list the hyped model and explain what it is very bad at for you… then list one or two models and the environment you use them in daily that do a better job. I had multiple people gushing over how effective Reka was for creative writing, and so I tried it in a RP conver...
2025-07-13T15:27:53
https://www.reddit.com/r/LocalLLaMA/comments/1lyvkhr/lets_talk_about_models_you_believed_are_more/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyvkhr
false
null
t3_1lyvkhr
/r/LocalLLaMA/comments/1lyvkhr/lets_talk_about_models_you_believed_are_more/
false
false
self
2
null
Tried Kimi K2 for writing and reasoning, and was not impressed.
63
I tried using Kimi k2 to flesh out setting/plot ideas. E.G. I would say things like "here's a scenario, what do you think is the most realistic thing to happen?" or "what do you think would be a good solution to this issue?". I found it quite bad in this regard. * It frequently made things up, even when specifically i...
2025-07-13T15:16:28
https://www.reddit.com/r/LocalLLaMA/comments/1lyvah4/tried_kimi_k2_for_writing_and_reasoning_and_was/
GlompSpark
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyvah4
false
null
t3_1lyvah4
/r/LocalLLaMA/comments/1lyvah4/tried_kimi_k2_for_writing_and_reasoning_and_was/
false
false
self
63
null
Testing ChatGPT and Claude capabilities to "simple projects": Block Site extension for Google Chrome
1
Has anyone tried something like this? I just put: create a google chrome extension that blocks websites. it's just something that takes a list of websites and blocks them. The extension does not work with the code provided by either LLM.
2025-07-13T15:13:28
https://www.reddit.com/r/LocalLLaMA/comments/1lyv7s7/testing_chatgpt_and_claude_capabilities_to_simple/
helioscarbex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyv7s7
false
null
t3_1lyv7s7
/r/LocalLLaMA/comments/1lyv7s7/testing_chatgpt_and_claude_capabilities_to_simple/
false
false
self
1
null
How I use Gemma 3 to help me reply my texts
80
Ever since code completions became a thing, I've wished I could have something similar when texting people. Now there's finally a decent method for that. The app works with any OpenAI-compatible endpoint. Once you set it up, it gives you texting completions right inside WhatsApp, Signal, and some other texting apps. I test...
2025-07-13T15:12:40
https://v.redd.it/48w6qb1mincf1
sean01-eth
v.redd.it
1970-01-01T00:00:00
0
{}
1lyv750
false
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/48w6qb1mincf1/DASHPlaylist.mpd?a=1755011576%2COWM0NzJhZjM0YzIzNWM0NDEwNTk3MDIzNjUyMWRhNmRkNTBhYzUwODgzYWYyNzZhYWY4Zjg2ZWFjZjRkMWViMw%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/48w6qb1mincf1/DASH_270.mp4?source=fallback', 'has_...
t3_1lyv750
/r/LocalLLaMA/comments/1lyv750/how_i_use_gemma_3_to_help_me_reply_my_texts/
false
false
https://external-preview…62d1f7121ab5794f
80
{'enabled': False, 'images': [{'id': 'NnNqeWViMW1pbmNmMRSzfaNqfuwOOC92Xq4viycMqSPOW2XTomk7HN62InbQ', 'resolutions': [{'height': 26, 'url': 'https://external-preview.redd.it/NnNqeWViMW1pbmNmMRSzfaNqfuwOOC92Xq4viycMqSPOW2XTomk7HN62InbQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=ca55479fe3e543c483ca9fd1a6e1c7663b1e1...
LLM model for live translation into subtitles [RU-EN]
2
Hey guys, noobie here. I am using OBS and there is a plugin called 'localvocal'. I can choose there several LLMs etc. Which one should be the best for my use case? How can I add other LLMs from huggingface? Any help is appreciated, thank you!
2025-07-13T15:11:09
https://www.reddit.com/r/LocalLLaMA/comments/1lyv5uc/llm_model_for_live_translation_into_subtitles_ruen/
TuGuX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyv5uc
false
null
t3_1lyv5uc
/r/LocalLLaMA/comments/1lyv5uc/llm_model_for_live_translation_into_subtitles_ruen/
false
false
self
2
null
Built a plugin-based system automation layer for LLMs, safe, modular, and dead simple to extend
2
I’ve been building an AI assistant (Caelum) that can control a system using natural language, but I didn’t want it running raw shell commands or hallucinating `subprocess` calls. That’s unreliable and messy, so I built a structured `do()` system with plugin routing, safety flags, and argument parsing. Each command is a...
2025-07-13T15:01:55
https://www.reddit.com/r/LocalLLaMA/comments/1lyuxj5/built_a_pluginbased_system_automation_layer_for/
BlackBeardJW
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyuxj5
false
null
t3_1lyuxj5
/r/LocalLLaMA/comments/1lyuxj5/built_a_pluginbased_system_automation_layer_for/
false
false
self
2
{'enabled': False, 'images': [{'id': 'wGF2WkFacBqaY1u9I-h9qjml9wj3Hxc5p-hofX39V7U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wGF2WkFacBqaY1u9I-h9qjml9wj3Hxc5p-hofX39V7U.png?width=108&crop=smart&auto=webp&s=603529e7fcabe20144806792dd5e7c2476b13cfe', 'width': 108}, {'height': 108, 'url': 'h...
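The plugin-routed `do()` design the post above describes (structured commands, safety flags, argument parsing instead of raw shell calls) can be sketched as follows. This is a hypothetical illustration, not Caelum's actual code: all names and the safety-flag semantics are assumptions.

```python
# Hypothetical sketch of a plugin-routed do() dispatcher of the kind the
# post describes; registry, names, and safety semantics are assumptions.
PLUGINS = {}

def plugin(name, *, safe=True):
    """Register a handler under a command name, tagged with a safety flag."""
    def register(fn):
        PLUGINS[name] = {"fn": fn, "safe": safe}
        return fn
    return register

@plugin("volume", safe=True)
def set_volume(level: int):
    return f"volume set to {level}"

@plugin("shutdown", safe=False)
def shutdown():
    return "shutting down"

def do(command: str, allow_unsafe: bool = False, **kwargs):
    """Route a structured command to its plugin instead of a raw shell call."""
    entry = PLUGINS.get(command)
    if entry is None:
        raise KeyError(f"unknown command: {command}")
    if not entry["safe"] and not allow_unsafe:
        raise PermissionError(f"{command} requires allow_unsafe=True")
    return entry["fn"](**kwargs)

print(do("volume", level=30))  # volume set to 30
```

The point of the registry is that an LLM can only ever invoke whitelisted, argument-checked handlers, so a hallucinated command fails loudly instead of reaching a shell.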
AI fever D:
0
Hey folks, I’m getting serious AI fever. I know there are a lot of enthusiasts here, so I’m looking for advice on budget-friendly options. I am focused on running large LLMs, not training them. Is it currently worth investing in a Mac Studio M1 128GB RAM? Can it run 70B models with decent quantization and a reasonabl...
2025-07-13T14:30:48
https://www.reddit.com/r/LocalLLaMA/comments/1lyu7bf/ai_fever_d/
Czydera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyu7bf
false
null
t3_1lyu7bf
/r/LocalLLaMA/comments/1lyu7bf/ai_fever_d/
false
false
self
0
null
AI Agent Deployment Platform (Vercel for AI Agents)
1
I’ve been in the AI agent space for a while now, and one thing that’s still surprisingly painful is deployment. Most tools are locked into specific frameworks (LangChain, LangGraph, etc.), and if you’re using your custom stack, you’re basically on your own. Yesterday, I saw that an old team I worked with launched some...
2025-07-13T14:30:00
https://www.reddit.com/r/LocalLLaMA/comments/1lyu6nu/ai_agent_deployment_platform_vercel_for_ai_agents/
Great_Building1646
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyu6nu
false
null
t3_1lyu6nu
/r/LocalLLaMA/comments/1lyu6nu/ai_agent_deployment_platform_vercel_for_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo.png?width=108&crop=smart&auto=webp&s=830f474cb3e77ffd050befefb700569923b3ce36', 'width': 108}, {'height': 108, 'url': 'h...
AI Agent Deployment Platform (Vercel for AI Agents)
1
[removed]
2025-07-13T14:26:00
https://www.reddit.com/r/LocalLLaMA/comments/1lyu3do/ai_agent_deployment_platform_vercel_for_ai_agents/
Ok-Reflection-4049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyu3do
false
null
t3_1lyu3do
/r/LocalLLaMA/comments/1lyu3do/ai_agent_deployment_platform_vercel_for_ai_agents/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo.png?width=108&crop=smart&auto=webp&s=830f474cb3e77ffd050befefb700569923b3ce36', 'width': 108}, {'height': 108, 'url': 'h...
AI Agent Deployment Platform (Vercel for AI Agents)
1
[removed]
2025-07-13T14:25:03
https://www.reddit.com/r/LocalLLaMA/comments/1lyu2mb/ai_agent_deployment_platform_vercel_for_ai_agents/
Ok-Reflection-4049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyu2mb
false
null
t3_1lyu2mb
/r/LocalLLaMA/comments/1lyu2mb/ai_agent_deployment_platform_vercel_for_ai_agents/
false
false
https://b.thumbs.redditm…BQdGNrBvtwio.jpg
1
null
AI Agent Deployment Platform
1
[removed]
2025-07-13T14:18:08
https://www.reddit.com/r/LocalLLaMA/comments/1lytwwe/ai_agent_deployment_platform/
Ok-Reflection-4049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lytwwe
false
null
t3_1lytwwe
/r/LocalLLaMA/comments/1lytwwe/ai_agent_deployment_platform/
false
false
self
1
null
AI Agent Deployment Platform
1
[removed]
2025-07-13T14:12:57
https://www.reddit.com/r/LocalLLaMA/comments/1lytsle/ai_agent_deployment_platform/
Ok-Reflection-4049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lytsle
false
null
t3_1lytsle
/r/LocalLLaMA/comments/1lytsle/ai_agent_deployment_platform/
false
false
https://b.thumbs.redditm…gi3gYTgKklyQ.jpg
1
null
Looking for my next laptop soon
6
Hello all, Soon I will be looking for my next laptop. I am an industrial programmer, and sometimes asking AI for a specific algorithm implementation, or to check some code I've done... helps. Sending code to an internet service usually breaks the NDA, so I thought of using something like JAN to execute the models in my...
2025-07-13T14:00:28
https://www.reddit.com/r/LocalLLaMA/comments/1lytioc/looking_for_my_next_laptop_soon/
robotecnik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lytioc
false
null
t3_1lytioc
/r/LocalLLaMA/comments/1lytioc/looking_for_my_next_laptop_soon/
false
false
self
6
null
How can I figure out the speed in tokens per second that my model will run on the CPU?
2
I'm trying to figure out a formula to calculate the tokens/s when I run an LLM on a CPU. I always deploy small models on different devices, and I know that RAM MHz is the most important factor, but is it the only one? What about the CPU single/multi core benchmark? Does AMD's GPU have anything to do with this? Can I ju...
2025-07-13T13:40:28
https://www.reddit.com/r/LocalLLaMA/comments/1lyt372/how_can_i_figure_out_the_speed_in_tokens_per/
Holiday-Picture6796
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyt372
false
null
t3_1lyt372
/r/LocalLLaMA/comments/1lyt372/how_can_i_figure_out_the_speed_in_tokens_per/
false
false
self
2
null
Is anyone training a religion model?
0
With every religious text or practice of import, in all languages, etc.? Anyone know of any "godly AI"... or is that unnecessary because the current models already have all the texts?
2025-07-13T13:37:39
https://www.reddit.com/r/LocalLLaMA/comments/1lyt0zp/is_anyone_training_a_religion_model/
SeasonNo3107
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyt0zp
false
null
t3_1lyt0zp
/r/LocalLLaMA/comments/1lyt0zp/is_anyone_training_a_religion_model/
false
false
self
0
null
[Rumor] Huawei 920 accelerator coming H2 2026
29
So 6 months ago I discussed some information about the at the time not launched [910C accelerator here](https://www.reddit.com/r/LocalLLaMA/comments/1iadomi/rumor_huawei_910c_will_double_910b_performance/). Since then Huawei has been aggressively seeding the 910B accelerator (yes the prior gen 910B with 8 accelerators...
2025-07-13T13:23:53
https://www.reddit.com/r/LocalLLaMA/comments/1lysqk7/rumor_huawei_920_accelerator_coming_h2_2026/
44seconds
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lysqk7
false
null
t3_1lysqk7
/r/LocalLLaMA/comments/1lysqk7/rumor_huawei_920_accelerator_coming_h2_2026/
false
false
self
29
null
Qwen3-235B-A22B @ 0.7t/s. Hardware or configuration bottleneck?
8
Preface: Just a disclaimer that the machine this is running on was never intended to be an inference machine. I am using it (to the dismay of its actual at-the-keyboard user!) due to it being the only machine I could fit the GPU into. As per the title, I have attempted to run Qwen3-235B-A22B using `llama-server` on ...
2025-07-13T13:18:47
https://www.reddit.com/r/LocalLLaMA/comments/1lysmo9/qwen3235ba22b_07ts_hardware_or_configuration/
ConnectionOutside485
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lysmo9
false
null
t3_1lysmo9
/r/LocalLLaMA/comments/1lysmo9/qwen3235ba22b_07ts_hardware_or_configuration/
false
false
self
8
null
RunAgent - AI Agent Deployment Platform (Vercel for AI AGENTS)
1
[removed]
2025-07-13T13:12:30
https://www.reddit.com/r/LocalLLaMA/comments/1lyshxr/runagent_ai_agent_deployment_platform_vercel_for/
Ok-Reflection-4049
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyshxr
false
null
t3_1lyshxr
/r/LocalLLaMA/comments/1lyshxr/runagent_ai_agent_deployment_platform_vercel_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RSXa8yKKuzQHl7Ybk6IMVedhrzxrlSGR6194deh2Ruo.png?width=108&crop=smart&auto=webp&s=830f474cb3e77ffd050befefb700569923b3ce36', 'width': 108}, {'height': 108, 'url': 'h...
What does it take to run llms?
0
If there is any reference or if anyone has a clear idea, please do reply. I have a 64GB RAM, 8-core machine. A 3-billion-parameter model's response running via Ollama is slower than a 600GB model's API response. How insane is that? Question: how do you decide on infra? If a model is 600B params, each param is one byte so it go...
2025-07-13T11:50:30
https://www.reddit.com/r/LocalLLaMA/comments/1lyqwil/what_does_it_take_to_run_llms/
Impossible_Nose_2956
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyqwil
false
null
t3_1lyqwil
/r/LocalLLaMA/comments/1lyqwil/what_does_it_take_to_run_llms/
false
false
self
0
null
Why has Meta started throwing billions at AI now?
0
Could it be because V-JEPA2 gave them strong confidence? [https://arxiv.org/abs/2506.09985](https://arxiv.org/abs/2506.09985)
2025-07-13T11:26:44
https://www.reddit.com/r/LocalLLaMA/comments/1lyqhqq/why_has_meta_started_throwing_billions_at_ai_now/
VR-Person
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyqhqq
false
null
t3_1lyqhqq
/r/LocalLLaMA/comments/1lyqhqq/why_has_meta_started_throwing_billions_at_ai_now/
false
false
self
0
null
What are these random AI services?? Why are they so bad?
0
Working on a hackathon project and used 'exa' for AI web search. It's so dogwater, it literally kept making up sources and didn't even TRY to parse the output. If I have to put EXTRA work into LEARNING to use your damn service, what am i paying you for??? Like come on man... at least make it easier, if I knew it was li...
2025-07-13T11:21:14
https://www.reddit.com/r/LocalLLaMA/comments/1lyqefd/what_are_these_random_ai_services_why_are_they_so/
Affectionate-Divide8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyqefd
false
null
t3_1lyqefd
/r/LocalLLaMA/comments/1lyqefd/what_are_these_random_ai_services_why_are_they_so/
false
false
self
0
null
480mm multi GPU frame - can only find 500+mm
1
[removed]
2025-07-13T11:14:37
https://www.reddit.com/r/LocalLLaMA/comments/1lyqag7/480mm_multi_gpu_frame_can_only_find_500mm/
munkiemagik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyqag7
false
null
t3_1lyqag7
/r/LocalLLaMA/comments/1lyqag7/480mm_multi_gpu_frame_can_only_find_500mm/
false
false
self
1
null
Help Needed for MedGemma 27B
3
Tried Vertex: 35 tps. Hugging Face with Q6 from Unsloth: 48 tps. Original from Google: 35 tps. I need 100 tps... please help. I don't know much about inference infrastructure.
2025-07-13T11:09:40
https://www.reddit.com/r/LocalLLaMA/comments/1lyq7mc/help_needed_for_medgemma_27b/
FewOwl9332
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyq7mc
false
null
t3_1lyq7mc
/r/LocalLLaMA/comments/1lyq7mc/help_needed_for_medgemma_27b/
false
false
self
3
null
480mm wide mining frame for multiple GPU's - can only find 500+mm wide
1
[removed]
2025-07-13T11:02:56
https://www.reddit.com/r/LocalLLaMA/comments/1lyq3vq/480mm_wide_mining_frame_for_multiple_gpus_can/
munkiemagik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyq3vq
false
null
t3_1lyq3vq
/r/LocalLLaMA/comments/1lyq3vq/480mm_wide_mining_frame_for_multiple_gpus_can/
false
false
self
1
null
Local LLM to back Elastic AI
7
Hey all, I'm building a fully air-gapped deployment that integrates with Elastic Security and Observability, including Elastic AI Assistant via OpenInference API. My use case involves log summarisation, alert triage, threat intel enrichment (using MISP), and knowledge base retrieval. About 5000 users, about 2000 serve...
2025-07-13T11:00:03
https://www.reddit.com/r/LocalLLaMA/comments/1lyq22j/local_llm_to_back_elastic_ai/
OldManCyberNinja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyq22j
false
null
t3_1lyq22j
/r/LocalLLaMA/comments/1lyq22j/local_llm_to_back_elastic_ai/
false
false
self
7
{'enabled': False, 'images': [{'id': 'G2yA00beNF7t7h7F-Vm0zYQ1_GPWt2mKaYLvl77xrgc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/G2yA00beNF7t7h7F-Vm0zYQ1_GPWt2mKaYLvl77xrgc.png?width=108&crop=smart&auto=webp&s=8e7dcc983e13bd8aed1654a54d718d49f54cdaae', 'width': 108}, {'height': 121, 'url': 'h...
LLM evaluation in real life?
7
Hi everyone! Wanted to ask a question that's been on my mind recently. I've done LLM research in academia in various forms, each time I thought of a way to improve a certain aspect of LLMs for different tasks, and when asked to prove that my alteration actually improved upon something I almost always had a benchmark ...
2025-07-13T10:59:53
https://www.reddit.com/r/LocalLLaMA/comments/1lyq1yh/llm_evaluation_in_real_life/
Plastic-Bus-7003
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyq1yh
false
null
t3_1lyq1yh
/r/LocalLLaMA/comments/1lyq1yh/llm_evaluation_in_real_life/
false
false
self
7
null
480mm wide 'mining frame' for rack mounting mulitple GPU's - can only find 500+mm wide
1
[removed]
2025-07-13T10:51:01
https://www.reddit.com/r/LocalLLaMA/comments/1lypwzy/480mm_wide_mining_frame_for_rack_mounting/
munkiemagik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lypwzy
false
null
t3_1lypwzy
/r/LocalLLaMA/comments/1lypwzy/480mm_wide_mining_frame_for_rack_mounting/
false
false
self
1
null
Building an App That Builds Apps – Feedback Appreciated
0
Hi everyone, I’m developing a tool that allows you to create full applications by simply describing what you want in plain English—no complicated setup, no boilerplate code. Here’s what it currently offers: • Supports over 10 programming languages • Lets you connect your GitHub repository • Can fix bugs or make im...
2025-07-13T10:44:59
https://i.redd.it/0t2fav6bfmcf1.jpeg
Prestigious_Skin6507
i.redd.it
1970-01-01T00:00:00
0
{}
1lyptl7
false
null
t3_1lyptl7
/r/LocalLLaMA/comments/1lyptl7/building_an_app_that_builds_apps_feedback/
false
false
https://b.thumbs.redditm…CoGp77BH_QKk.jpg
0
{'enabled': True, 'images': [{'id': 'osOMZVrkvVKxIqXVX5EQh7Ig9gY66_BSUdZF017hArA', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/0t2fav6bfmcf1.jpeg?width=108&crop=smart&auto=webp&s=36351b492606d376ec84f6c3266514145d42d6c5', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/0t2fav6bfmcf1.jp...
Wrote a deep dive on LLM tool calling with step-by-step REST and Spring AI examples
9
2025-07-13T09:49:23
https://muthuishere.medium.com/understanding-tool-function-calling-in-llms-step-by-step-examples-in-rest-and-spring-ai-2149ecd6b18b
muthuishere2101
muthuishere.medium.com
1970-01-01T00:00:00
0
{}
1lyozcn
false
null
t3_1lyozcn
/r/LocalLLaMA/comments/1lyozcn/wrote_a_deep_dive_on_llm_tool_calling_with/
false
false
https://external-preview…5c5a36373a796f6a
9
{'enabled': False, 'images': [{'id': '1FwVKbMgCNO8FIf_E8SSF1AT1y6r8EhRo4JtU_kQIJY', 'resolutions': [{'height': 165, 'url': 'https://external-preview.redd.it/1FwVKbMgCNO8FIf_E8SSF1AT1y6r8EhRo4JtU_kQIJY.png?width=108&crop=smart&auto=webp&s=bc6872452b4829cdf316ac0294b8c4c189b660af', 'width': 108}, {'height': 331, 'url': '...
How are people actually able to get the system prompt of these AI companies?
14
While I am extremely grateful that people post leaked system prompts online for inspiration, I am also curious how it's actually possible. There are three things that come to my mind: 1. ***Using some prompt injection (re-iteratively)***: some kind of jailbreak prompt, then seeing if the same things are being repeated, as...
2025-07-13T09:26:26
https://www.reddit.com/r/LocalLLaMA/comments/1lyonb4/how_are_people_actually_able_to_get_the_system/
divyamchandel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyonb4
false
null
t3_1lyonb4
/r/LocalLLaMA/comments/1lyonb4/how_are_people_actually_able_to_get_the_system/
false
false
self
14
null
AI Ripper
0
# The Free Hidden/Invisible Unicode Character Cleaner https://preview.redd.it/1gg3eyw2ulcf1.png?width=1823&format=png&auto=webp&s=87deb6f34cdde491f9896984172fa0ca19305863 # # A simple Overview # # Why it is needed and its implications for the ethics dilemma of society In the digital age, text data is ubiquitous...
2025-07-13T08:46:25
https://www.reddit.com/r/LocalLLaMA/comments/1lyo2l7/ai_ripper/
AI-ripper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyo2l7
false
null
t3_1lyo2l7
/r/LocalLLaMA/comments/1lyo2l7/ai_ripper/
false
false
https://external-preview…68de9141d51ef7ed
0
{'enabled': False, 'images': [{'id': 'brF_GYNtXRD5XZ1ZFE5akjroldPSoIKa2ARLoFZOj74', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/brF_GYNtXRD5XZ1ZFE5akjroldPSoIKa2ARLoFZOj74.png?width=108&crop=smart&auto=webp&s=fdca545ba66b703ee39c7df953a58c7ca4d1f1f7', 'width': 108}], 'source': {'height': 19...
Need Help with Agents and AnythingLLM
2
So I finally have LM Studio hosting my models and AnythingLLM doing my RAG, so I thought I would extend to agents... I looked at YouTube, but nothing is working; it constantly says "I currently **don't have direct web browsing capabilities**". What am I doing wrong? https://preview.redd.it/1soone6wrlc...
2025-07-13T08:34:50
https://www.reddit.com/r/LocalLLaMA/comments/1lynwk4/need_help_with_agents_and_anythingllm/
uber-linny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lynwk4
false
null
t3_1lynwk4
/r/LocalLLaMA/comments/1lynwk4/need_help_with_agents_and_anythingllm/
false
false
https://b.thumbs.redditm…qV205hbtTOkE.jpg
2
null
Dark Arts: Speaker embedding gradient descent for local TTS models
15
[As with all my posts, the code and text are organic with no LLM involved. Note that I myself have not confirmed that this works in all cases--I personally have no interest in voice cloning--but in my head the theory is strong and I am confident it should work. Plus, there is historical precedent in soft prompting and...
2025-07-13T07:08:39
https://www.reddit.com/r/LocalLLaMA/comments/1lymlgp/dark_arts_speaker_embedding_gradient_descent_for/
rzvzn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lymlgp
false
null
t3_1lymlgp
/r/LocalLLaMA/comments/1lymlgp/dark_arts_speaker_embedding_gradient_descent_for/
false
false
self
15
null
I have a Laptop with 3050 Ti 4GB VRAM, will upgrading my RAM from 16 to 48 help?
1
I currently have an ASUS TUF Gaming F15, and before people start telling me to give up on local models, let me just say that I have currently been able to successfully run various LLMs and even Images Diffusion models locally with very little issues (mainly just speed and sometimes lag due to OOM). I can easily run 7B ...
2025-07-13T06:57:24
https://www.reddit.com/r/LocalLLaMA/comments/1lymewq/i_have_a_laptop_with_3050_ti_4gb_vram_will/
GamerWael
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lymewq
false
null
t3_1lymewq
/r/LocalLLaMA/comments/1lymewq/i_have_a_laptop_with_3050_ti_4gb_vram_will/
false
false
self
1
null
GPUHammer: New RowHammer Attack Variant Degrades AI Models on NVIDIA GPUs
1
[removed]
2025-07-13T06:44:45
https://thehackernews.com/2025/07/gpuhammer-new-rowhammer-attack-variant.html
tabspaces
thehackernews.com
1970-01-01T00:00:00
0
{}
1lym7r1
false
null
t3_1lym7r1
/r/LocalLLaMA/comments/1lym7r1/gpuhammer_new_rowhammer_attack_variant_degrades/
false
false
default
1
null
When a model is delayed because the boss isn't happy, is it doomed forever?
0
First Behemoth was "delayed" by Meta and it looks like it is never coming out. Now R2 is delayed by DeepSeek. Does that mean the end for DeepSeek too?
2025-07-13T06:19:38
https://www.reddit.com/r/LocalLLaMA/comments/1lyltyb/when_a_model_is_delayed_because_the_boss_isnt/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyltyb
false
null
t3_1lyltyb
/r/LocalLLaMA/comments/1lyltyb/when_a_model_is_delayed_because_the_boss_isnt/
false
false
self
0
null
Kimi-K2 takes top spot on EQ-Bench3 and Creative Writing
800
[https://eqbench.com/](https://eqbench.com/) Writing samples: [https://eqbench.com/results/creative-writing-v3/moonshotai__Kimi-K2-Instruct.html](https://eqbench.com/results/creative-writing-v3/moonshotai__Kimi-K2-Instruct.html) EQ-Bench responses: [https://eqbench.com/results/eqbench3_reports/moonshotai__kimi...
2025-07-13T06:09:23
https://www.reddit.com/gallery/1lylo75
_sqrkl
reddit.com
1970-01-01T00:00:00
0
{}
1lylo75
false
null
t3_1lylo75
/r/LocalLLaMA/comments/1lylo75/kimik2_takes_top_spot_on_eqbench3_and_creative/
false
false
https://b.thumbs.redditm…6BOfAQW9BupA.jpg
800
null
How do you make Loras for Qwen coder / devstral?
13
I am wondering if anyone did this before, at least I couldn't find information on it. I want to fine tune a coding model without changing the whole model (for hardware restriction reasons). Loras, in theory, would do that. But how? For image and video generation this is pretty much solved and common, but llms?
2025-07-13T05:39:00
https://www.reddit.com/r/LocalLLaMA/comments/1lyl697/how_do_you_make_loras_for_qwen_coder_devstral/
ComprehensiveBird317
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyl697
false
null
t3_1lyl697
/r/LocalLLaMA/comments/1lyl697/how_do_you_make_loras_for_qwen_coder_devstral/
false
false
self
13
null
SmolLM-3B when asked if it was Peter Griffin
64
I was testing the [SmolLM3-3B-WebGPU](https://huggingface.co/spaces/HuggingFaceTB/SmolLM3-3B-WebGPU) Hugging Face Space to check its token speed on my machine (a solid 46 t/s!) before downloading and running it locally. When I prompted it with: "Are you peter griffin?", it just generated a 4000-token list of "Key Takea...
2025-07-13T05:12:45
https://www.reddit.com/r/LocalLLaMA/comments/1lykqbu/smollm3b_when_asked_if_it_was_peter_griffin/
Humble_Hovercraft199
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lykqbu
false
null
t3_1lykqbu
/r/LocalLLaMA/comments/1lykqbu/smollm3b_when_asked_if_it_was_peter_griffin/
false
false
https://b.thumbs.redditm…CsTKS3NTVBUU.jpg
64
{'enabled': False, 'images': [{'id': 'tuLFQadP8tHP69V_jd2l5xXDX_8ASMZX4NQj9QhtyOg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/tuLFQadP8tHP69V_jd2l5xXDX_8ASMZX4NQj9QhtyOg.png?width=108&crop=smart&auto=webp&s=879705209dd5c5f6fab885d338990dee069409f5', 'width': 108}, {'height': 121, 'url': 'h...
Which model is best for translation?
1
I want to translate english text to various languages, these include European as well as Asian languages. But since models have problems with asian languages, I trying to make my project work best for European Languages like Spanish, French, German, etc. Could you guys suggest some open source models to me that can he...
2025-07-13T05:11:43
https://www.reddit.com/r/LocalLLaMA/comments/1lykpo6/which_model_is_best_for_translation/
slipped-and-fell
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lykpo6
false
null
t3_1lykpo6
/r/LocalLLaMA/comments/1lykpo6/which_model_is_best_for_translation/
false
false
self
1
null
What Causes Poor Long-Context Performance?
63
While some models (Gemini, MiniMax, Llama4) claim context lengths in the 1M+ token range, performance beyond ~100K tokens is usually quite poor. Beyond those lengths it is usually [better to do RAG](https://www.databricks.com/blog/long-context-rag-performance-llms). Why is that? Does the limit come from architectur...
2025-07-13T04:54:44
https://www.reddit.com/r/LocalLLaMA/comments/1lykf92/what_causes_poor_longcontext_performance/
simulated-souls
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lykf92
false
null
t3_1lykf92
/r/LocalLLaMA/comments/1lykf92/what_causes_poor_longcontext_performance/
false
false
self
63
{'enabled': False, 'images': [{'id': '88QOjYTKpMZIM86KNi6b42N407jrxbaPAjbmnwm1Z0A', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/88QOjYTKpMZIM86KNi6b42N407jrxbaPAjbmnwm1Z0A.png?width=108&crop=smart&auto=webp&s=afbb01428ea71e1677bafb9d67ec3dca25236c74', 'width': 108}, {'height': 113, 'url': 'h...
What LLMs work with VScode like copilot?
5
1. I want to stick to using VS Code 2. Currently using ChatGPT Plus for coding but don't like going back and forth between windows 3. Is there anything like Copilot (keep being told it sucks) but powered by an LLM of my choice, e.g. something by OpenAI or Anthropic? 4. I don't understand why Claude Code is the king now when...
2025-07-13T04:54:30
https://www.reddit.com/r/LocalLLaMA/comments/1lykf38/what_llms_work_with_vscode_like_copilot/
sprmgtrb
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lykf38
false
null
t3_1lykf38
/r/LocalLLaMA/comments/1lykf38/what_llms_work_with_vscode_like_copilot/
false
false
self
5
null
Best abliterated / uncensored model?
1
[removed]
2025-07-13T04:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1lykcde/best_abliterated_uncensored_model/
madmax_br5
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lykcde
false
null
t3_1lykcde
/r/LocalLLaMA/comments/1lykcde/best_abliterated_uncensored_model/
false
false
self
1
null
What providers are people using for GLM-4?
1
Any suggestions for providers to use for GLM-4? Tried OpenRouter, but it's very slow even with max tokens set to 8K. Need generation time to be <4 minutes ideally.
2025-07-13T04:08:49
https://www.reddit.com/r/LocalLLaMA/comments/1lyjm7t/what_providers_are_people_using_for_glm4/
adviceguru25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyjm7t
false
null
t3_1lyjm7t
/r/LocalLLaMA/comments/1lyjm7t/what_providers_are_people_using_for_glm4/
false
false
self
1
null
[Help] Fastest model for real-time UI automation? (Browser-Use too slow)
13
I’m working on a browser automation system that follows a planned sequence of UI actions, but needs an LLM to resolve which DOM element to click when there are multiple similar options. I’ve been using **Browser-Use**, which is solid for tracking state/actions, but execution is too slow — especially when an LLM is in t...
2025-07-13T04:00:54
https://www.reddit.com/r/LocalLLaMA/comments/1lyjgwv/help_fastest_model_for_realtime_ui_automation/
BulkyAd7044
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyjgwv
false
null
t3_1lyjgwv
/r/LocalLLaMA/comments/1lyjgwv/help_fastest_model_for_realtime_ui_automation/
false
false
self
13
null
Do you think an AI will achieve a gold medal in the 2025 International Math Olympiad (tomorrow)?
92
The International Math Olympiad will take place on 15th and 16th July in Australia. Google Deepmind will attempt to win a gold medal with their models AlphaProof and AlphaGeometry, after announcing a silver medal performance in 2024. Any open-source model that wins a gold medal will receive a $5 million AIMO prize from...
2025-07-13T03:47:18
https://www.reddit.com/r/LocalLLaMA/comments/1lyj81f/do_you_think_an_ai_will_achieve_gold_medal_in/
mathsTeacher82
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyj81f
false
null
t3_1lyj81f
/r/LocalLLaMA/comments/1lyj81f/do_you_think_an_ai_will_achieve_gold_medal_in/
false
false
self
92
{'enabled': False, 'images': [{'id': 'AcBUFtk7lDqLasMm0XdsCf0q3gy8tW9LCvLdK4SfhIc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/AcBUFtk7lDqLasMm0XdsCf0q3gy8tW9LCvLdK4SfhIc.jpeg?width=108&crop=smart&auto=webp&s=f46c2aab57d92285291510c11c6203406a312438', 'width': 108}, {'height': 162, 'url': '...
32g SXM2 V100s for $360, Good Deal for LLMs?
5
I come across many V100 32GB GPUs, ECC all intact, for $360 on the Chinese second-hand market (I live in China), and can easily get stuff like bifurcated 300G NVLink SXM2-to-PCIe adapters etc. for no more than $40. Also, if I get the 16GB version of the V100, it only costs $80 per card. Wouldn't this be a better deal than...
2025-07-13T03:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1lyiyvq/32g_sxm2_v100s_for_360_good_deal_for_llms/
starikari
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyiyvq
false
null
t3_1lyiyvq
/r/LocalLLaMA/comments/1lyiyvq/32g_sxm2_v100s_for_360_good_deal_for_llms/
false
false
self
5
null
Any suggestions for generating academic-style/advanced plots?
3
Hi LocalLLaMA community, I am a researcher, and recently I have noticed that LLMs such as OpenAI's and Google's are not good at generating academic-style and/or beautiful plots. Open sourced model also doesn’t work well. Beyond the simple plots which they can do just fine, anything more advanced that includes LaTex t...
2025-07-13T03:25:25
https://www.reddit.com/r/LocalLLaMA/comments/1lyitq9/any_suggestions_for_generating/
plsendfast
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyitq9
false
null
t3_1lyitq9
/r/LocalLLaMA/comments/1lyitq9/any_suggestions_for_generating/
false
false
self
3
null
Heaviest model that can be ran with RTX 3060 12Gb?
4
I finally got an RTX 3060 12GB to start using AI. Now I want to know what's the heaviest model it can run and whether there are new methods of increasing performance by now. I can't read at the speed of light, so models that run at 4-6 words per second are enough. I can't upgrade from 12GB to 32GB RAM yet, so what is t...
2025-07-13T02:33:18
https://www.reddit.com/r/LocalLLaMA/comments/1lyhuuq/heaviest_model_that_can_be_ran_with_rtx_3060_12gb/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyhuuq
false
null
t3_1lyhuuq
/r/LocalLLaMA/comments/1lyhuuq/heaviest_model_that_can_be_ran_with_rtx_3060_12gb/
false
false
self
4
null
Is there any book writing software that can utilize an local LLM?
5
Maybe it'd be more of an LLM tool designed for book writing than the other way around but I'm looking for software that can utilize a locally running LLM to help me write a book. Hoping for something where I can include descriptions of characters, set the scenes, basic outline and such. Then let the LLM do the bulk of...
2025-07-13T02:22:40
https://www.reddit.com/r/LocalLLaMA/comments/1lyhnhw/is_there_any_book_writing_software_that_can/
123android
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyhnhw
false
null
t3_1lyhnhw
/r/LocalLLaMA/comments/1lyhnhw/is_there_any_book_writing_software_that_can/
false
false
self
5
null
How do you keep up with all these things?
44
I feel like every day I come here someone mentions a new tool or a newly released model or software that I never heard of. Where on earth do you get your most up-to-date trusted news/info?
2025-07-13T00:39:07
https://www.reddit.com/r/LocalLLaMA/comments/1lyfngg/how_do_you_keep_up_with_all_these_things/
ontologicalmemes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyfngg
false
null
t3_1lyfngg
/r/LocalLLaMA/comments/1lyfngg/how_do_you_keep_up_with_all_these_things/
false
false
self
44
null
Anybody else broken Meta "Ai" yet?
0
https://preview.redd.it/…about it's role.
2025-07-13T00:18:09
https://www.reddit.com/r/LocalLLaMA/comments/1lyf8g5/anybody_else_broken_meta_ai_yet/
ChrisZavadil
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyf8g5
false
null
t3_1lyf8g5
/r/LocalLLaMA/comments/1lyf8g5/anybody_else_broken_meta_ai_yet/
false
false
https://b.thumbs.redditm…zW27dYrCU6_w.jpg
0
null
ScreenMonitorMCP
1
Hello guys, I've finally completed my MCP server that I've been working on for a long time and released it as open source on GitHub. MCP provides an advanced toolset for AI assistants and applications. In particular, you can: - Intelligently analyze screenshots - Understand and process visual content -...
2025-07-13T00:06:02
https://www.reddit.com/r/LocalLLaMA/comments/1lyezt8/screenmonitormcp/
inkbytefo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyezt8
false
null
t3_1lyezt8
/r/LocalLLaMA/comments/1lyezt8/screenmonitormcp/
false
false
self
1
null
Laptop GPU for Agentic Coding -- Worth it?
7
Anyone who actually codes with local LLM on their laptops, what's your setup and are you happy with the quality and speed? Should I even bother trying to code with an LLM that fits on a laptop GPU, or just tether back to my beefier home server or openrouter?
2025-07-12T23:48:50
https://www.reddit.com/r/LocalLLaMA/comments/1lyen05/laptop_gpu_for_agentic_coding_worth_it/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyen05
false
null
t3_1lyen05
/r/LocalLLaMA/comments/1lyen05/laptop_gpu_for_agentic_coding_worth_it/
false
false
self
7
null
DeepSeek R2 delayed
0
>Over the past several months, DeepSeek's engineers have been working to refine R2 until Liang gives the green light for release, according to The Information. However, a fast adoption of R2 could be difficult due to a shortage of Nvidia server chips in China as a result of U.S. export regulations, the report said, cit...
2025-07-12T23:04:13
https://i.redd.it/i6y02pp6yicf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1lydp3k
false
null
t3_1lydp3k
/r/LocalLLaMA/comments/1lydp3k/deepseek_r2_delayed/
false
false
default
0
{'enabled': True, 'images': [{'id': 'i6y02pp6yicf1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/i6y02pp6yicf1.jpeg?width=108&crop=smart&auto=webp&s=5b9ade7b934554601c33a4907a46f290f5035d6b', 'width': 108}, {'height': 130, 'url': 'https://preview.redd.it/i6y02pp6yicf1.jpeg?width=216&crop=smart&auto=w...
Looking for trusted websites with benchmark leaderboards to build LLM reranking — plus how to evaluate LLMs in production without ground truth?
1
hey, I’m working on a system that uses reranking to select the best LLM for each specific task. To do this, I want to use a trusted website as a knowledge base—ideally one that provides leaderboards across multiple benchmarks and tasks so I can retrieve reliable performance info for different models. Question 1: What...
2025-07-12T22:12:27
https://www.reddit.com/r/LocalLLaMA/comments/1lyckyk/looking_for_trusted_websites_with_benchmark/
Realistic_Force688
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lyckyk
false
null
t3_1lyckyk
/r/LocalLLaMA/comments/1lyckyk/looking_for_trusted_websites_with_benchmark/
false
false
self
1
null
Accept it Sam. If you can't protect a model behind an API, it will NEVER be "safe"
2
2025-07-12T21:54:57
https://i.redd.it/i30vh9nvlicf1.png
Longjumping_Spot5843
i.redd.it
1970-01-01T00:00:00
0
{}
1lyc7he
false
null
t3_1lyc7he
/r/LocalLLaMA/comments/1lyc7he/accept_it_sam_if_you_cant_protect_a_model_behind/
false
false
default
2
{'enabled': True, 'images': [{'id': 'i30vh9nvlicf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/i30vh9nvlicf1.png?width=108&crop=smart&auto=webp&s=2434be0aadb9f3f44d5dd374b81df1d522d76157', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/i30vh9nvlicf1.png?width=216&crop=smart&auto=web...
Accept it sam. If you can't protect a model behind an API, it will NEVER be safe
1
2025-07-12T21:54:01
https://i.redd.it/5nrj1l6qlicf1.png
Longjumping_Spot5843
i.redd.it
1970-01-01T00:00:00
0
{}
1lyc6s3
false
null
t3_1lyc6s3
/r/LocalLLaMA/comments/1lyc6s3/accept_it_sam_if_you_cant_protect_a_model_behind/
false
false
default
1
{'enabled': True, 'images': [{'id': '5nrj1l6qlicf1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/5nrj1l6qlicf1.png?width=108&crop=smart&auto=webp&s=fae81e5a3e6753d3aaa1566090585442c2331f82', 'width': 108}, {'height': 148, 'url': 'https://preview.redd.it/5nrj1l6qlicf1.png?width=216&crop=smart&auto=web...
Internal networking components for Nvidia’s System
2
https://preview.redd.it/…ese components?
2025-07-12T21:41:51
https://www.reddit.com/r/LocalLLaMA/comments/1lybx9x/internal_networking_components_for_nvidias_system/
250sunnyisles
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lybx9x
false
null
t3_1lybx9x
/r/LocalLLaMA/comments/1lybx9x/internal_networking_components_for_nvidias_system/
false
false
https://b.thumbs.redditm…LPzr66D7CQBY.jpg
2
null
Should I buy Tesla K80 for 70€ or Tesla M10 for 110€?
2
I've heard they are somewhat okay for LLMs, and at a little less than half the price of a 3060 they seem pretty enticing, but I just need some advice on whether I should buy one of these two or pass on them.
2025-07-12T21:33:34
https://www.reddit.com/r/LocalLLaMA/comments/1lybqtw/should_i_buy_tesla_k80_for_70_or_tesla_m10_for_110/
Similar-Republic149
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lybqtw
false
null
t3_1lybqtw
/r/LocalLLaMA/comments/1lybqtw/should_i_buy_tesla_k80_for_70_or_tesla_m10_for_110/
false
false
self
2
null
Unlocking AMD MI300X for High-Throughput, Low-Cost LLM Inference
5
2025-07-12T21:27:49
https://www.herdora.com/blog/the-overlooked-gpu
Upstairs-Fun8458
herdora.com
1970-01-01T00:00:00
0
{}
1lybm7b
false
null
t3_1lybm7b
/r/LocalLLaMA/comments/1lybm7b/unlocking_amd_mi300x_for_highthroughput_lowcost/
false
false
default
5
null
Runpod, Hugging Face, or what for super-simple uncensored LLM-in-the-cloud setup?
1
What's the simplest way to get an uncensored LLM with image generation set up in the cloud? If one doesn't need much customization or many options to play with, but just wants speed and ease of use, what's the best way?
2025-07-12T21:21:40
https://www.reddit.com/r/LocalLLaMA/comments/1lybh8e/runpod_hugging_face_or_what_for_supersimple/
goldenapple212
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lybh8e
false
null
t3_1lybh8e
/r/LocalLLaMA/comments/1lybh8e/runpod_hugging_face_or_what_for_supersimple/
false
false
self
1
null
Qwen3-30B-A3B aider polyglot score?
8
Why is there no aider polyglot benchmark result for Qwen3-30B-A3B? What would the numbers be if someone ran the benchmark?
2025-07-12T21:17:16
https://www.reddit.com/r/LocalLLaMA/comments/1lybdr2/qwen330ba3b_aider_polyglot_score/
LogicalSink1366
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lybdr2
false
null
t3_1lybdr2
/r/LocalLLaMA/comments/1lybdr2/qwen330ba3b_aider_polyglot_score/
false
false
self
8
null
Banana for scale
28
In time-honored tradition, we present the relative physical dimensions of the Workstation Pro 6000.
2025-07-12T21:11:10
https://i.redd.it/3gsbxg74eicf1.jpeg
blackwell_tart
i.redd.it
1970-01-01T00:00:00
0
{}
1lyb8tz
false
null
t3_1lyb8tz
/r/LocalLLaMA/comments/1lyb8tz/banana_for_scale/
false
false
default
28
{'enabled': True, 'images': [{'id': '3gsbxg74eicf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/3gsbxg74eicf1.jpeg?width=108&crop=smart&auto=webp&s=c4cff1cf527148cb67e28ceb515a52bec33ece4d', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/3gsbxg74eicf1.jpeg?width=216&crop=smart&auto=w...
Moonshot AI just made their moonshot
866
- Screenshot: https://openrouter.ai/moonshotai - Announcement: https://moonshotai.github.io/Kimi-K2/ - Model: https://huggingface.co/moonshotai/Kimi-K2-Instruct
2025-07-12T20:46:47
https://i.redd.it/95q67pnr9icf1.jpeg
Balance-
i.redd.it
1970-01-01T00:00:00
0
{}
1lyaozv
false
null
t3_1lyaozv
/r/LocalLLaMA/comments/1lyaozv/moonshot_ai_just_made_their_moonshot/
false
false
default
866
{'enabled': True, 'images': [{'id': '95q67pnr9icf1', 'resolutions': [{'height': 99, 'url': 'https://preview.redd.it/95q67pnr9icf1.jpeg?width=108&crop=smart&auto=webp&s=f5225d3e72ca9cbedd6a6f3b7456742c3f7a2d71', 'width': 108}, {'height': 198, 'url': 'https://preview.redd.it/95q67pnr9icf1.jpeg?width=216&crop=smart&auto=w...
Browser Use vs Model Context Protocol (MCP): Two Philosophies for AI Interaction with the Digital World
2
2025-07-12T20:22:37
https://www.linkedin.com/pulse/browser-use-vs-model-context-protocol-mcp-two-ai-interaction-wang-irqye/
Crafty_Read_6928
linkedin.com
1970-01-01T00:00:00
0
{}
1lya4ks
false
null
t3_1lya4ks
/r/LocalLLaMA/comments/1lya4ks/browser_use_vs_model_context_protocol_mcp_two/
false
false
default
2
null
K2-Mini: Successfully compressed Kimi-K2 from 1.07T to 32.5B parameters (97% reduction) - runs on single H100
115
Hey r/MachineLearning! 👋 I've been working on something that I think this community might find interesting - **K2-Mini**, an open-source project that compresses the massive Kimi-K2 model down to something actually usable. **TL;DR** - Compressed 1.07T parameter Kimi-K2 → 32.5B parameter K2-Mini - ...
2025-07-12T19:56:23
https://www.reddit.com/r/LocalLLaMA/comments/1ly9iqw/k2mini_successfully_compressed_kimik2_from_107t/
Important-Union-9128
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ly9iqw
false
null
t3_1ly9iqw
/r/LocalLLaMA/comments/1ly9iqw/k2mini_successfully_compressed_kimik2_from_107t/
false
false
self
115
null