| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 – 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars, nullable) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Started r/aiinfra — a subreddit focused on the systems side of running LLMs | 1 | [removed] | 2025-07-07T22:12:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lu7on8/started_raiinfra_a_subreddit_focused_on_the/ | cookiesupers22 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu7on8 | false | null | t3_1lu7on8 | /r/LocalLLaMA/comments/1lu7on8/started_raiinfra_a_subreddit_focused_on_the/ | false | false | self | 1 | null |
UI/UX Benchmark Update and Response: More Models, Updating Ranking, Open Data Soon | 72 | Hi all, a few times on here I've been sharing progress on a [UI/UX benchmark](https://www.designarena.ai/) that I have been working on with a small team. In particular, I made [a post yesterday](https://www.reddit.com/r/LocalLLaMA/comments/1lthtbn/85k_people_voted_on_which_ai_models_create_the/) that gave us a ton of u... | 2025-07-07T22:09:06 | https://www.reddit.com/gallery/1lu7lsi | adviceguru25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lu7lsi | false | null | t3_1lu7lsi | /r/LocalLLaMA/comments/1lu7lsi/uiux_benchmark_update_and_response_more_models/ | false | false | 72 | null | |
Let the LLM Write the Prompts: An Intro to Building with DSPy | 4 | 2025-07-07T22:04:13 | https://www.youtube.com/watch?v=I9ZtkgYZnOw | contextbot | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lu7hm7 | false | {'oembed': {'author_name': 'Databricks', 'author_url': 'https://www.youtube.com/@Databricks', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/I9ZtkgYZnOw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope;... | t3_1lu7hm7 | /r/LocalLLaMA/comments/1lu7hm7/let_the_llm_write_the_prompts_an_intro_to/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'BeGr9QgwLXtSX0q13TLbHKwgMGLwOIT30Cd4YNUl1DU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BeGr9QgwLXtSX0q13TLbHKwgMGLwOIT30Cd4YNUl1DU.jpeg?width=108&crop=smart&auto=webp&s=3099c17f9b0985d225b11a14e5e139306f590a7b', 'width': 108}, {'height': 162, 'url': '... | |
Has anyone set up a generalized work/research assistant? | 0 | This is kind of just a "wishlist", but I'm looking for locally hosted alternatives to a variety of work tools like SciSpace/Notebook LM, Zapier, etc.
Recently I've been building out a knowledge management system using Obsidian/Dataview to track and link research topics, events/conferences/journals/specific papers, oth... | 2025-07-07T22:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lu7hd6/has_anyone_set_up_a_generalized_workresearch/ | JanusTheDoorman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu7hd6 | false | null | t3_1lu7hd6 | /r/LocalLLaMA/comments/1lu7hd6/has_anyone_set_up_a_generalized_workresearch/ | false | false | self | 0 | null |
What's a good base model to train a custom small language model (SLM)? [Beginner, need advice] | 1 | [removed] | 2025-07-07T22:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lu7f8z/whats_a_good_base_model_to_train_a_custom_small/ | callmedevilthebad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu7f8z | false | null | t3_1lu7f8z | /r/LocalLLaMA/comments/1lu7f8z/whats_a_good_base_model_to_train_a_custom_small/ | false | false | self | 1 | null |
Please gut-check these W7900 vs 7900 XTX server builds | 0 | **(US$ first, ₹ in brackets, used ChatGPT to capture the requirements)**
**Constraints**
|**Item**|**Limit**|
|:-|:-|
|**Total budget**|**US $ 6 000** (₹ 5 16 000) — can stretch to **₹ 6 00 000** if the spec really earns it|
|**Workload**|Mostly LLM **inference** on 7 B → 70 B models, occasional fine-tune (FP16/BF16... | 2025-07-07T21:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lu75js/please_gutcheck_these_w7900_vs_7900_xtx_server/ | rgroadie2707 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu75js | false | null | t3_1lu75js | /r/LocalLLaMA/comments/1lu75js/please_gutcheck_these_w7900_vs_7900_xtx_server/ | false | false | self | 0 | null |
What are the best options currently for a real time voice chat? | 7 | I’m building a safe, easy-to-use voice chat powered by an LLM for my kids and something that enhances their learning at home while keeping it fun. So far, I haven’t found a solution that’s both reliable and user-friendly. I’m running a local Ollama server with Open WebUI and tried using the chat feature alongside Kokor... | 2025-07-07T21:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lu7506/what_are_the_best_options_currently_for_a_real/ | vulcan4d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu7506 | false | null | t3_1lu7506 | /r/LocalLLaMA/comments/1lu7506/what_are_the_best_options_currently_for_a_real/ | false | false | self | 7 | null |
Non-spammy guides to building agentic systems? | 1 | [removed] | 2025-07-07T21:45:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lu71hk/nonspammy_guides_to_building_agentic_systems/ | saosebastiao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu71hk | false | null | t3_1lu71hk | /r/LocalLLaMA/comments/1lu71hk/nonspammy_guides_to_building_agentic_systems/ | false | false | self | 1 | null |
i'd like to see ai push 928,726 changes to prod on a friday evening | 0 | vibecoding and its consequences | 2025-07-07T21:42:48 | https://v.redd.it/ktfwvxs6vibf1 | Sensitive-Finger-404 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lu6yud | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ktfwvxs6vibf1/DASHPlaylist.mpd?a=1754516585%2CYzM3ZGQ2MmE4ZTIzNDY2M2MzY2I1Y2FkMDkxOGU5NTk1OTMwZDgzZmYwYWNiNTc1YjQ2MGQ2YTU1M2JmYzAyZQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/ktfwvxs6vibf1/DASH_720.mp4?source=fallback', 'has... | t3_1lu6yud | /r/LocalLLaMA/comments/1lu6yud/id_like_to_see_ai_push_928726_changes_to_prod_on/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YjR6eXNkbDZ2aWJmMfMSDdZEeJ012jKqvBZGo-pwbZI7DlIbuFIO5iRrK4tF', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/YjR6eXNkbDZ2aWJmMfMSDdZEeJ012jKqvBZGo-pwbZI7DlIbuFIO5iRrK4tF.png?width=108&crop=smart&format=pjpg&auto=webp&s=423a553a020fcc1dbd195377e9af55aa6687... | |
Single server or networked minis? | 1 | [removed] | 2025-07-07T20:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lu5mxb/single_server_or_networked_minis/ | LoudZoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu5mxb | false | null | t3_1lu5mxb | /r/LocalLLaMA/comments/1lu5mxb/single_server_or_networked_minis/ | false | false | self | 1 | null |
API for OSS TTS models (Sesame CSM-1B etc.) | 0 | Just came across [vogent.ai/voicelab](http://vogent.ai/voicelab) on twitter
The generation quality for Sesame CSM-1B seems much better than the huggingface spaces (they claim to post-train). There aren’t many voices and it seems like most other models are “coming soon.” Anyone else used this?
| 2025-07-07T20:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lu5lz6/api_for_oss_tts_models_sesame_csm1b_etc/ | BoatEastern8082 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu5lz6 | false | null | t3_1lu5lz6 | /r/LocalLLaMA/comments/1lu5lz6/api_for_oss_tts_models_sesame_csm1b_etc/ | false | false | self | 0 | null |
Thanks to you, I built an open-source website that can watch your screen and trigger actions. It runs 100% locally and was inspired by all of you! | 491 | **TL;DR: I'm a solo dev who wanted a simple, private way to have local LLMs watch my screen and do simple logging/notifying. I'm launching the open-source tool for it, Observer AI, this Friday. It's built for this community, and I'd love your feedback.**
Hey r/LocalLLaMA,
Some of you might remember my earlier post... | 2025-07-07T20:43:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lu5g8c/thanks_to_you_i_built_an_opensource_website_that/ | Roy3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu5g8c | false | null | t3_1lu5g8c | /r/LocalLLaMA/comments/1lu5g8c/thanks_to_you_i_built_an_opensource_website_that/ | false | false | self | 491 | {'enabled': False, 'images': [{'id': 'HKQc51LF4RwiSHj39aApgmHyEz7DZYnbBH5-Ecqof1Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HKQc51LF4RwiSHj39aApgmHyEz7DZYnbBH5-Ecqof1Q.png?width=108&crop=smart&auto=webp&s=20e0fbc9046a00788bc9900ea251774c9e8c2c5c', 'width': 108}, {'height': 108, 'url': 'h... |
Qwen3 32B Q8 vs 235B A22B Unsloth Q3_K_XL UD vs Kimi-Dev 72B Q8? | 1 | Does anyone have experience comparing Qwen3 32B Q8 vs 235B A22B Unsloth Q3\_K\_XL UD vs Kimi-Dev 72B Q8??
Seems like these may be the best quality coding options that can fit in a 128GB Mac.
Can't find much benchmarks on Unsloth's 235B A22B UD quants, but extrapolating some limited benchmarks on Qwen 235B & 32B with limit... | 2025-07-07T20:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lu57p7/qwen3_32b_q8_vs_235b_a22b_unsloth_q3_k_xl_ud_vs/ | Sudden_358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu57p7 | false | null | t3_1lu57p7 | /r/LocalLLaMA/comments/1lu57p7/qwen3_32b_q8_vs_235b_a22b_unsloth_q3_k_xl_ud_vs/ | false | false | self | 1 | null |
Octominer + P102-100 build... worth it? | 11 | Just for luls I was looking at some of the "Octominer" boards available. I thought it would be a fun build to get like 8x P104-100 / P102-100 and load one up.
However, they mostly have something wimpy for CPU... like a dual core Celeron or similar. Will that kill any possible chance of fun on a build like that because... | 2025-07-07T20:18:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lu4t37/octominer_p102100_build_worth_it/ | UsualResult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu4t37 | false | null | t3_1lu4t37 | /r/LocalLLaMA/comments/1lu4t37/octominer_p102100_build_worth_it/ | false | false | self | 11 | null |
Old CPU | 1 | [removed] | 2025-07-07T19:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lu44yo/old_cpu/ | Live-Efficiency-1378 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu44yo | false | null | t3_1lu44yo | /r/LocalLLaMA/comments/1lu44yo/old_cpu/ | false | false | self | 1 | null |
Am I stupid? I just got Gigabyte AI TOP W7900 for ~$1000 | 0 | I just ordered a dual-slot W7900 card with 48GB of VRAM because it seemed like a good deal.
Now I'm wondering what I should do with it, because I already have 2x A6000 and 1x AI TOP 4070Ti Super.
Any ideas? | 2025-07-07T19:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lu3ohu/am_i_stupid_i_just_got_gigabyte_ai_top_w7900_for/ | Wooden_Yam1924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu3ohu | false | null | t3_1lu3ohu | /r/LocalLLaMA/comments/1lu3ohu/am_i_stupid_i_just_got_gigabyte_ai_top_w7900_for/ | false | false | self | 0 | null |
Got me them pre-purchase jitters! | 1 | [removed] | 2025-07-07T19:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lu3epz/got_me_them_prepurchase_jitters/ | LoudZoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu3epz | false | null | t3_1lu3epz | /r/LocalLLaMA/comments/1lu3epz/got_me_them_prepurchase_jitters/ | false | false | self | 1 | null |
Why does Copilot sound like a 60 year old CEO / politician | 0 | 2025-07-07T19:15:06 | Anto444_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lu35pf | false | null | t3_1lu35pf | /r/LocalLLaMA/comments/1lu35pf/why_does_copilot_sound_like_a_60_year_old_ceo/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'pt2dPEKm0raJdGsIQfCIngYjZJ3EVQP4ABy-GGH0up4', 'resolutions': [{'height': 196, 'url': 'https://preview.redd.it/tas86nsu4ibf1.jpeg?width=108&crop=smart&auto=webp&s=927b42045b56064435cccd6bfadbda45add9b6a5', 'width': 108}, {'height': 392, 'url': 'https://preview.redd.it/tas86nsu4ibf1.j... | |||
An idea I built to synthesize emotion and cognition, would love feedback or thoughts | 1 | [removed] | 2025-07-07T19:14:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lu358n/an_idea_i_built_to_synthesize_emotion_and/ | ThassioGS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu358n | false | null | t3_1lu358n | /r/LocalLLaMA/comments/1lu358n/an_idea_i_built_to_synthesize_emotion_and/ | false | false | self | 1 | null |
Learning triton & cuda: How far can colab + nsight-compute take me? | 3 | Hi folks!
I've recently been learning Triton and CUDA, writing my own kernels and optimizing them using a lot of great tricks I’ve picked up from blog-posts and docs. However, I currently don’t have access to any local GPUs.
Right now, I’m using Google Colab with T4 GPUs to run my kernels. I collect telemetry and ker... | 2025-07-07T18:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lu1z10/learning_triton_cuda_how_far_can_colab/ | Zealousideal_Elk109 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu1z10 | false | null | t3_1lu1z10 | /r/LocalLLaMA/comments/1lu1z10/learning_triton_cuda_how_far_can_colab/ | false | false | self | 3 | null |
We finally understand grokking | 2 | 2025-07-07T18:04:45 | https://x.com/jxmnop/status/1929903028372459909 | FeathersOfTheArrow | x.com | 1970-01-01T00:00:00 | 0 | {} | 1lu1bev | false | null | t3_1lu1bev | /r/LocalLLaMA/comments/1lu1bev/we_finally_understand_grokking/ | false | false | default | 2 | null | |
Best Qwen model? | 0 | I am so confused about which one is the best: Max, 32B, or 235B.
So far all 3 have been close in my benchmarks and use case, so I got curious.
Which one is the best from your opinion? | 2025-07-07T17:58:34 | Daemontatox | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lu1578 | false | null | t3_1lu1578 | /r/LocalLLaMA/comments/1lu1578/best_qwen_model/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'f1i19297rhbf1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/f1i19297rhbf1.jpeg?width=108&crop=smart&auto=webp&s=b2d01dcad43e4887a6ccf7b6960a536230e21425', 'width': 108}, {'height': 190, 'url': 'https://preview.redd.it/f1i19297rhbf1.jpeg?width=216&crop=smart&auto=w... | |
Tool Calling | 1 | Hey,
I’ve worked a little bit with the ReAct pattern and made it work to call tools I’ve defined in my code. It does feel self-made, though, and doesn’t really follow a standard approach so far.
Are there any good resource’s about tool calling?
To highlight the difference: I didn’t implement MCP, I’ve implement a way to ca... | 2025-07-07T17:50:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lu0x2s/tool_calling/ | Tobias-Gleiter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lu0x2s | false | null | t3_1lu0x2s | /r/LocalLLaMA/comments/1lu0x2s/tool_calling/ | false | false | self | 1 | null |
Do you use prompt caching to save chat history in your LLM apps? | 14 | Curious to hear from others building LLM-based chat apps: Do you implement **prompt caching** to store chat history or previous responses? Or do you send the chat history with each user's prompt?
Caching is more expensive to write, but the costs are then net positive if the conversation becomes long, no?
Would apprec... | 2025-07-07T16:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ltze9d/do_you_use_prompt_caching_to_save_chat_history_in/ | Physical_Ad9040 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltze9d | false | null | t3_1ltze9d | /r/LocalLLaMA/comments/1ltze9d/do_you_use_prompt_caching_to_save_chat_history_in/ | false | false | self | 14 | null |
Has anyone here tried to augment text data using local domain-specific LLMs? | 4 | Did any of you guys try to augment text data using an LLM? For example, augmenting medical symptoms using MedGemma, by telling the LLM to generate 3 different phrases similar to the original phrase and then repeating this for every row until the whole dataset is augmented.
What do you think about this approach, and would... | 2025-07-07T16:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ltyc9k/has_anyone_here_tried_to_augment_text_data_using/ | skillmaker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltyc9k | false | null | t3_1ltyc9k | /r/LocalLLaMA/comments/1ltyc9k/has_anyone_here_tried_to_augment_text_data_using/ | false | false | self | 4 | null |
Hardware recommendations? Mac Mini, NVIDIA Orin, Ryzen AI... ? | 9 | Hi there! I recently started being interested in getting an "affordable" Mini PC type machine that can run LLMs without being too power hungry.
The first challenge is to try and understand what is required for this. What I have gathered so far:
- RAM is important (double the model size in billions and leave room for ... | 2025-07-07T16:00:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ltxzad/hardware_recommendations_mac_mini_nvidia_orin/ | lizard121n6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltxzad | false | null | t3_1ltxzad | /r/LocalLLaMA/comments/1ltxzad/hardware_recommendations_mac_mini_nvidia_orin/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': '74XgBIalMSEcLJ5elANAKKt2qBW80Yp2-wjFtoP1cP4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/74XgBIalMSEcLJ5elANAKKt2qBW80Yp2-wjFtoP1cP4.jpeg?width=108&crop=smart&auto=webp&s=6d4a97ef0ef14c9df18bbc7ba9c53e32975c7ff9', 'width': 108}, {'height': 108, 'url': '... |
Qwen3-8B-BitNet | 1 | Here is a decent Qwen3 BitNet model I trained with \~1B tokens using SYNTHETIC-1 data. BitNet Hunyuan A13B is training this week.
[model](https://huggingface.co/codys12/Qwen3-8B-BitNet)
[notebook](https://colab.research.google.com/drive/1GT0GEyjzOQUiOI0tphvhiFDwUw-F6v7l?usp=sharing) to try out the model | 2025-07-07T15:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ltxsqh/qwen38bbitnet/ | codys12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltxsqh | false | null | t3_1ltxsqh | /r/LocalLLaMA/comments/1ltxsqh/qwen38bbitnet/ | false | false | self | 1 | null |
LangChain/Crew/AutoGen made it easy to build agents, but operating them is a joke | 1 | We built an internal support agent using LangChain + OpenAI + some simple tool calls.
Getting to a working prototype took 3 days with Cursor and just messing around. Great.
But actually trying to operate that agent across multiple teams was absolute chaos.
– No structured logs of intermediate reasoning
– No persist... | 2025-07-07T15:43:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ltxiy4/langchaincrewautogen_made_it_easy_to_build_agents/ | ImmuneCoder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltxiy4 | false | null | t3_1ltxiy4 | /r/LocalLLaMA/comments/1ltxiy4/langchaincrewautogen_made_it_easy_to_build_agents/ | false | false | self | 1 | null |
LangChain et al. Made Building Agents Easy — Operating Them Is a Joke | 1 | [deleted] | 2025-07-07T15:42:05 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ltxi2b | false | null | t3_1ltxi2b | /r/LocalLLaMA/comments/1ltxi2b/langchain_et_al_made_building_agents_easy/ | false | false | default | 1 | null | ||
Trouble setting up conda environment for unsloth finetuning | 1 | Can you please help me find a clean way to set up a conda environment correctly to finetune a model from Hugging Face using Unsloth? I keep getting dependency issues and am losing my mind. This is what I am doing now:
conda create --name unsloth_env python=3.10 -y
conda activate unsloth_env
conda install pyto... | 2025-07-07T15:16:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ltwuga/trouble_setting_up_conda_environment_for_unsloth/ | No-Mud-1902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltwuga | false | null | t3_1ltwuga | /r/LocalLLaMA/comments/1ltwuga/trouble_setting_up_conda_environment_for_unsloth/ | false | false | self | 1 | null |
Any thoughts on Lemony Node? | 1 | [removed] | 2025-07-07T14:54:29 | https://www.reddit.com/r/LocalLLaMA/comments/1ltw9tz/any_thoughts_on_lemony_node/ | drplan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltw9tz | false | null | t3_1ltw9tz | /r/LocalLLaMA/comments/1ltw9tz/any_thoughts_on_lemony_node/ | false | false | self | 1 | null |
Radeon Pro Duo or AMD Instinct Mi50? | 1 | I've been trying to decide between the two for my AI rig, which, since I mostly use GPT4All for LLaMA models, I'd imagine means I'm stuck with Vulkan. I bought an ARC A770 prior, but Intel cards are currently blacklisted by GPT4All, which is why I've been looking at one of these two cards. I already have a dual Haswell Xeon rig... | 2025-07-07T14:49:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ltw5lh/radeon_pro_duo_or_amd_instinct_mi50/ | nathan22211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltw5lh | false | null | t3_1ltw5lh | /r/LocalLLaMA/comments/1ltw5lh/radeon_pro_duo_or_amd_instinct_mi50/ | false | false | self | 1 | null |
(Kramer UI for Ollama) I was tired of dealing with Docker, so I built a simple, portable Windows UI for Ollama. | 1 | Hey everyone,
I wanted to share a small project I built for my own purposes: Kramer UI for Ollama.
I love Ollama for its simplicity and its model management, but setting up a UI for it has always been a pain point. I used to use OpenWebUI and it was great, but I'd rather not have to set up docker. And using Ollama th... | 2025-07-07T14:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ltvkqq/kramer_ui_for_ollama_i_was_tired_of_dealing_with/ | DanielKramer_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltvkqq | false | null | t3_1ltvkqq | /r/LocalLLaMA/comments/1ltvkqq/kramer_ui_for_ollama_i_was_tired_of_dealing_with/ | false | false | self | 1 | null |
I was tired of dealing with Docker, so I built a simple, portable Windows UI for Ollama. | 1 | [deleted] | 2025-07-07T14:26:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ltvk58 | false | null | t3_1ltvk58 | /r/LocalLLaMA/comments/1ltvk58/i_was_tired_of_dealing_with_docker_so_i_built_a/ | false | false | default | 1 | null | ||
Understanding trade-offs between: m4 max studio vs AI Max+ 395 | 1 | It seems to me that these 2 seem to be roughly comparable in performance for running LLM? Does anyone else have experience or thoughts on this?
Comparing 64gb M4 Max (16 core) with 128gb Max+ for 3 reasons:
* Costs are roughly similar (mac costs more naturally but close enough to be competitive)
* Max+ has to split ... | 2025-07-07T14:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ltv847/understanding_tradeoffs_between_m4_max_studio_vs/ | siegekeebsofficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltv847 | false | null | t3_1ltv847 | /r/LocalLLaMA/comments/1ltv847/understanding_tradeoffs_between_m4_max_studio_vs/ | false | false | self | 1 | null |
Double GPU setup 3090 and 5060 ti 16GB | 1 | [removed] | 2025-07-07T14:12:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ltv7wq/double_gpu_setup_3090_and_5060_ti_16gb/ | Frosty_Incident_9788 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltv7wq | false | null | t3_1ltv7wq | /r/LocalLLaMA/comments/1ltv7wq/double_gpu_setup_3090_and_5060_ti_16gb/ | false | false | self | 1 | null |
Jamba 1.7 - a ai21labs Collection | 1 | 2025-07-07T13:35:12 | https://huggingface.co/collections/ai21labs/jamba-17-68653e9be386dc69b1f30828 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ltubvs | false | null | t3_1ltubvs | /r/LocalLLaMA/comments/1ltubvs/jamba_17_a_ai21labs_collection/ | false | false | default | 1 | null | |
n8n vs Zapier | 1 | 2025-07-07T13:18:36 | https://rnikhil.com/2025/07/06/n8n-vs-zapier | Excellent-Effect237 | rnikhil.com | 1970-01-01T00:00:00 | 0 | {} | 1lttyf5 | false | null | t3_1lttyf5 | /r/LocalLLaMA/comments/1lttyf5/n8n_vs_zapier/ | false | false | default | 1 | null | |
Free context tool that runs local | 1 | I believe my tool is unique even though there are like 40 different similar tools for giving LLMs context of lots of code files. Different for:
Saving the state of which files you include for next time you use it in that same directory,
The User Interface (works anywhere python and Qt can run) can just type ‘aicp + e... | 2025-07-07T12:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ltt72w/free_context_tool_that_runs_local/ | wuu73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltt72w | false | null | t3_1ltt72w | /r/LocalLLaMA/comments/1ltt72w/free_context_tool_that_runs_local/ | false | false | self | 1 | null |
[PAPER] Overclocking LLM Reasoning: Monitoring and Controlling Thinking Path Lengths in LLMs | 1 | The thought progress bar looks cool.
Unfortunately, this needs to train something to modify hidden state. | 2025-07-07T12:25:56 | https://royeisen.github.io/OverclockingLLMReasoning-paper/ | foldl-li | royeisen.github.io | 1970-01-01T00:00:00 | 0 | {} | 1ltstdt | false | null | t3_1ltstdt | /r/LocalLLaMA/comments/1ltstdt/paper_overclocking_llm_reasoning_monitoring_and/ | false | false | default | 1 | null |
Would you pay for a service that uses your localLLM to power the app | 6 | Whether LLMs have any useful applications past summarization and basic tasks is another debate, but if you found a useful service that used a local LLM, would you still pay for it, or rather find a way to run it locally? Or do you prefer hosted models if you're paying for it? | 2025-07-07T11:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lts8ai/would_you_pay_for_a_service_that_uses_your/ | numinouslymusing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lts8ai | false | null | t3_1lts8ai | /r/LocalLLaMA/comments/1lts8ai/would_you_pay_for_a_service_that_uses_your/ | false | false | self | 6 | null |
eGPU Setup: Legion Laptop + RTX 5060 Ti | 5 | Sharing it here in case it's helpful for anyone | 2025-07-07T11:51:55 | https://shb777.dev/blog/egpu/ | Few-Welcome3297 | shb777.dev | 1970-01-01T00:00:00 | 0 | {} | 1lts4y9 | false | null | t3_1lts4y9 | /r/LocalLLaMA/comments/1lts4y9/egpu_setup_legion_laptop_rtx_5060_ti/ | false | false | default | 5 | null |
Inside Google Gemma 3n: my PyTorch Profiler insights | 82 | Hi everyone,
If you’ve ever wondered what really happens inside modern vision-language models, here’s a hands-on look. I profiled the Google Gemma 3n model on an NVIDIA GPU using PyTorch Profiler, asking it to describe a [bee image](https://cdn-lfs.hf.co/datasets/huggingface/documentation-images/8b21ba78250f852ca59900... | 2025-07-07T11:51:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lts4wd/inside_google_gemma_3n_my_pytorch_profiler/ | aospan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lts4wd | false | null | t3_1lts4wd | /r/LocalLLaMA/comments/1lts4wd/inside_google_gemma_3n_my_pytorch_profiler/ | false | false | 82 | {'enabled': False, 'images': [{'id': 'iyG6eCUPhSylmQBjhmsazmQyUh0CUb3n-N54OyLJmm0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/iyG6eCUPhSylmQBjhmsazmQyUh0CUb3n-N54OyLJmm0.jpeg?width=108&crop=smart&auto=webp&s=b6b62e31fc287d856fbed4d44bd158f876184d07', 'width': 108}, {'height': 144, 'url': '... | |
Video Dubbing: TTS + Speaker Detection + Auto-Length Adjustments? | 8 | Hey guys, the company I work for is currently using an online service for dubbing. And I gotta say, they're pretty good; You upload a video and translated subtitles, make some minor tweaks, and the video is automatically dubbed for you.
Are there any local LLM models that can do something similar to this? | 2025-07-07T11:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lts4q2/video_dubbing_tts_speaker_detection_autolength/ | Initial_Designer_802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lts4q2 | false | null | t3_1lts4q2 | /r/LocalLLaMA/comments/1lts4q2/video_dubbing_tts_speaker_detection_autolength/ | false | false | self | 8 | null |
Best cheap GPUs to run Local LLaMA / Mistral / GPTQ right now (A100, T4, 4090...) | 0 | Hey folks,
I’ve been testing a few options to run quantized LLaMA/GPTQ models locally (mainly 7B–13B) and needed **cheap, reliable GPU rentals**.
So I built a quick public site that compares the best hourly GPU deals (T4, A100, 4090, 3090…).
You can sort by price, VRAM, region. Perfect for casual local LLM experim... | 2025-07-07T10:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ltqw46/best_cheap_gpus_to_run_local_llama_mistral_gptq/ | RoutineTurnover6948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltqw46 | false | null | t3_1ltqw46 | /r/LocalLLaMA/comments/1ltqw46/best_cheap_gpus_to_run_local_llama_mistral_gptq/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h... |
LM Studio crashes | 0 | Hello everyone,
I recently upgraded to Windows 11, and since then, I’ve been having issues with LM Studio crashing immediately after launch. On Windows 10 no problems! The first installation went smoothly. I was able to download a model and use it without any problems. After a restart, the app launched again, and I st... | 2025-07-07T10:12:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ltqf9a/lm_studio_crashes/ | LifeguardNo5315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltqf9a | false | null | t3_1ltqf9a | /r/LocalLLaMA/comments/1ltqf9a/lm_studio_crashes/ | false | false | self | 0 | null |
How to Run a One-Shot Prompt (Non-Interactive) with llama.cpp? | 2 | I'm using `llama.cpp` to run TinyLLaMA, and I've successfully built the project using:
cmake --build . --config Release
Everything works fine when I run:
./llama-cli -m models/tinyllama.gguf
However, I'm stuck trying to do a **one-shot generation** ( I want to provide a prompt and it should return output ... | 2025-07-07T10:05:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ltqb0n/how_to_run_a_oneshot_prompt_noninteractive_with/ | glow-rishi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltqb0n | false | null | t3_1ltqb0n | /r/LocalLLaMA/comments/1ltqb0n/how_to_run_a_oneshot_prompt_noninteractive_with/ | false | false | self | 2 | null |
GUYS HELP ME : Ollama is utilizing my CPU more than my GPU. | 0 | https://preview.redd.it/thvbs96odfbf1.png?width=1080&format=png&auto=webp&s=46a8ee47ee075177a59c3b692c3caa87f220f9e6 My GPU is not being utilized as much as my CPU on the KDE Neon distribution I'm currently using. On my previous Ubuntu distribution, my GPU usage was around 90%, compared to my CPU. I'm not sure what went wrong. I added the following options to /etc/modprobe.d/nvidia-power-management.conf to address wake-up issues with the GPU not functioning after sleep: options nvidia NVreg_PreserveVideoMemoryAllocations=1 options nvidia NVreg_TemporaryFilePath=/tmp Since then, Ollama has been using my GPU less than my CPU. I've been searching for answers for a week. I am running the llama3.1 8b model and used the same models on both distros. Help me guys. | 2025-07-07T10:00:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ltq7n9/guys_help_me_ollama_is_utilizing_my_cpu_more_than/ | HighlightPrudent554 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltq7n9 | false | null | t3_1ltq7n9 | /r/LocalLLaMA/comments/1ltq7n9/guys_help_me_ollama_is_utilizing_my_cpu_more_than/ | false | false | 0 | null |
🔓 No jailbreak, no prompt hacks — Just tone. Echo Mode SDK now open | 0 | 📢 Just launched: Echo Mode SDK – Control GPT with tone, not prompts.
What if your app could shift GPT’s behavior *without* custom tuning or jailbreaking?
Now it "**can"** — with semantic tone states, not prompts..
**🧠 Echo Mode SDK lets you:**
* Shift between semantic states (*Sync, Resonance, Insight, Calm*)
* A... | 2025-07-07T09:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ltpidd/no_jailbreak_no_prompt_hacks_just_tone_echo_mode/ | Medium_Charity6146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltpidd | false | null | t3_1ltpidd | /r/LocalLLaMA/comments/1ltpidd/no_jailbreak_no_prompt_hacks_just_tone_echo_mode/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '9XgrwJCFrWPSkogmSbqnT18WlSfzx9YB4EGXzPLcUVg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/9XgrwJCFrWPSkogmSbqnT18WlSfzx9YB4EGXzPLcUVg.png?width=108&crop=smart&auto=webp&s=53cc87255e55d480e5248926a0e106666930fc20', 'width': 108}, {'height': 216, 'url': '... |
Should I remove these? | 0 | Hello I am QLORA finetuning a Llama instruct model but when I am creating the dataset via its chat template applied, it prints "Cutting Knowledge Date: December 2023\\nToday Date: 26 Jul 2024" into my data at everyline in the json file. Should I be removing/cleaning them? Do they harm the attention mechanism by making ... | 2025-07-07T08:18:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ltonwy/should_i_remove_these/ | Opening_Cash_4532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltonwy | false | null | t3_1ltonwy | /r/LocalLLaMA/comments/1ltonwy/should_i_remove_these/ | false | false | self | 0 | null |
PGDS approach to full model inference on consumer grade GPUs | 0 | Abstract
We propose a novel inference-time optimization method for resource-constrained deployment of large language models (LLMs), enabling high-quality output from models too large to fit into a single consumer-grade GPU. This technique—Prompt-Guided Dynamic Slicing and Insertion Point Resolution (PG-DSIR)—leverages... | 2025-07-07T07:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lto3t9/pgds_approach_to_full_model_inference_on_consumer/ | finnabrahamson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lto3t9 | false | null | t3_1lto3t9 | /r/LocalLLaMA/comments/1lto3t9/pgds_approach_to_full_model_inference_on_consumer/ | false | false | self | 0 | null |
best llm for 32 gb ram and 8gb vram | 1 | as the title suggests i would like to run the best llm possible for my system and i am really new to llms so i really have no idea where to start help is really appreciated . ( my system has 32gb ddr5 6000mhz cl 36 ram and a amd rx 7600 with 8gb vram ) | 2025-07-07T07:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lto2in/best_llm_for_32_gb_ram_and_8gb_vram/ | diddy_stroker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lto2in | false | null | t3_1lto2in | /r/LocalLLaMA/comments/1lto2in/best_llm_for_32_gb_ram_and_8gb_vram/ | false | false | self | 1 | null |
How good is Qwen3-14B for local use? Any benchmarks vs other models? | 19 | Hey folks,
I'm looking into running a larger language model locally and came across Qwen3-14B (or Qwen3\\\_14B depending on naming). I know it's been getting some hype lately, but I wanted to hear from people who’ve actually used it.
\* How does it perform compared to other 13B/14B class models like Gemma, Mistral, L... | 2025-07-07T07:14:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ltnpsl/how_good_is_qwen314b_for_local_use_any_benchmarks/ | abubakkar_s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltnpsl | false | null | t3_1ltnpsl | /r/LocalLLaMA/comments/1ltnpsl/how_good_is_qwen314b_for_local_use_any_benchmarks/ | false | false | self | 19 | null |
Will w7900 work? | 1 | I am building a home server under $6000. I want to run concurrent queries on local q4 or q8 quantised LLM. Should I buy w7900 or anything else?? | 2025-07-07T06:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmrvo/will_w7900_work/ | DatakeeperFun7770 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmrvo | false | null | t3_1ltmrvo | /r/LocalLLaMA/comments/1ltmrvo/will_w7900_work/ | false | false | self | 1 | null |
Which TTS Model to use? | 2 | I am really new to this voice stuff. My only relevant experience would be setting up comfy UI in my system with Pinokio App.
What I am trying to do is use a model from [weights.gg](http://weights.gg) of a Japanese VA. I want to use his voice for a narration youtube channel (not 1:1 ofc) but the problem is that he has... | 2025-07-07T06:12:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmrag/which_tts_model_to_use/ | Mysterious-Comment94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmrag | false | null | t3_1ltmrag | /r/LocalLLaMA/comments/1ltmrag/which_tts_model_to_use/ | false | false | self | 2 | null |
whisper.cpp + markdown notes + ai enhancement styles | 1 | [removed] | 2025-07-07T06:11:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmqpc/whispercpp_markdown_notes_ai_enhancement_styles/ | Drakonis96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmqpc | false | null | t3_1ltmqpc | /r/LocalLLaMA/comments/1ltmqpc/whispercpp_markdown_notes_ai_enhancement_styles/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y.png?width=108&crop=smart&auto=webp&s=4153aa44c656a73782bda174e591b773ddc6e36c', 'width': 108}, {'height': 108, 'url': 'h... | |
What is the point of QAT? | 0 | Recently started experimenting with training my own models and I wanted to ask if anyone knew. I would figure that if you were designing a model to be run at a precision lower than fp16 you would just quantize the model and then train over the quantized model instead of training in fp16 with “fake quantization”. I’m su... | 2025-07-07T06:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmou4/what_is_the_point_of_qat/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmou4 | false | null | t3_1ltmou4 | /r/LocalLLaMA/comments/1ltmou4/what_is_the_point_of_qat/ | false | false | self | 0 | null |
I built PromptOps - Git-native prompt management for production LLM workflows | 2 | # I built a tool for managing prompts like code
Been working with LLMs for a while and got tired of manually tracking prompt versions. Made a Python tool that handles this automatically.
* Automatically versions your prompts when you commit to git
* Test prompt changes before committing with `:unstaged` reference
* W... | 2025-07-07T05:54:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmh4g/i_built_promptops_gitnative_prompt_management_for/ | llmhq_official | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmh4g | false | null | t3_1ltmh4g | /r/LocalLLaMA/comments/1ltmh4g/i_built_promptops_gitnative_prompt_management_for/ | false | false | self | 2 | null |
📘 The Aperion Prompt Discipline — A Constitution-Driven Method for Runtime-Resilient AI Systems | 0 | # 📘 The Aperion Prompt Discipline
## A Constitution-Driven Method for Runtime-Resilient AI Systems
*(Public-Safe Version — Released for foundational use, not replication)*
---
### 🧩 1. WHY PROMPT ENGINEERING FAILED
Most “prompt engineering” tutorials teach you how to manipulate a model.
You’re told to set a ro... | 2025-07-07T05:52:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ltmfsg/the_aperion_prompt_discipline_a/ | InvictusTitan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltmfsg | false | null | t3_1ltmfsg | /r/LocalLLaMA/comments/1ltmfsg/the_aperion_prompt_discipline_a/ | false | false | self | 0 | null |
Dice Rolling MCP | 2 | Why Your AI Assistant Can't Actually Roll Dice (And Why That Matters)
Ever asked Claude or ChatGPT to roll a d20 for your D&D session? Sure, they'll happily tell you "I rolled a 14!"
But here's the thing - they didn't actually roll anything. They just predicted what a reasonable dice roll
response would look like base... | 2025-07-07T05:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ltm6uw/dice_rolling_mcp/ | jimmcq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltm6uw | false | null | t3_1ltm6uw | /r/LocalLLaMA/comments/1ltm6uw/dice_rolling_mcp/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'q6QgwFBWA1rDwnuKslDlGaBjPPIs5KzFqeuyHUslkqo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/q6QgwFBWA1rDwnuKslDlGaBjPPIs5KzFqeuyHUslkqo.png?width=108&crop=smart&auto=webp&s=e047e49fde66aedc881fc3978d5c7bbdc3219912', 'width': 108}, {'height': 113, 'url': 'h... |
Training 8 repos of UI code | 11 | Hi all, in my company, we have 8 repos of ui code. We mainly use React and our own internally developed component library which is a seperate repo. Now, the problem statement is to develop a chat app similar to open ai that can generate code using our components library or code that follows our rules/style of code. The... | 2025-07-07T05:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ltm49x/training_8_repos_of_ui_code/ | pomatotappu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltm49x | false | null | t3_1ltm49x | /r/LocalLLaMA/comments/1ltm49x/training_8_repos_of_ui_code/ | false | false | self | 11 | null |
Newbie questions in this world | 1 | Okay, so I have comfyui, works fine with flux, wan and etc. Now I have LM studio and anythingllm. I check the models and there are so many:
1) llama
2) gemma
3) qwen
4) hermes
5) mistral
6).......
And the different versions, different Q, different parameter set size and etc.
I understand quantization levels and param... | 2025-07-07T05:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ltm1mp/newbie_questions_in_this_world/ | Lxxtsch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltm1mp | false | null | t3_1ltm1mp | /r/LocalLLaMA/comments/1ltm1mp/newbie_questions_in_this_world/ | false | false | self | 1 | null |
Multiple Local Supabase | 1 | I recently started using locally hosted supabase instances for developing LLM chatbot applications but found that the main thing they were missing was the project management feature that is present in the cloud hosted version.
Configuring everything to avoid conflicts in both the .env and the compose file was getting... | 2025-07-07T04:38:20 | https://www.reddit.com/r/LocalLLaMA/comments/1ltl6ui/multiple_local_supabase/ | Tazomatalax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltl6ui | false | null | t3_1ltl6ui | /r/LocalLLaMA/comments/1ltl6ui/multiple_local_supabase/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MawLj78-1qob3R8tdIBWW2vcEFN31-nqXCRT9Rz47t0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MawLj78-1qob3R8tdIBWW2vcEFN31-nqXCRT9Rz47t0.png?width=108&crop=smart&auto=webp&s=459ee32d73e0bea8e7c6b0505ec5a743d7fcd51a', 'width': 108}, {'height': 108, 'url': 'h... |
ADR–Academic Deep Research for Cybersecurity and Academic: An AI-Driven Framework for Multi-Step Threat Analysis | 1 | I am pleased to present **ADR–Academic Deep Research**, a novel AI‐powered platform designed to advance the rigor and scale of cybersecurity investigations. ADR leverages the o4-mini-deep research and o3-deep-research model (paid tier) to execute multi-step analytical workflows on complex security problems, automatical... | 2025-07-07T04:14:36 | https://www.reddit.com/r/LocalLLaMA/comments/1ltks5a/adracademic_deep_research_for_cybersecurity_and/ | Haunting-Ad6565 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltks5a | false | null | t3_1ltks5a | /r/LocalLLaMA/comments/1ltks5a/adracademic_deep_research_for_cybersecurity_and/ | false | false | self | 1 | null |
Built an open source project for analyzing csv files using LLMs without the llm seeing your data | 7 |
I have been using local llms for analyzing csv files, however sometimes they are not powerful enough for the analysis l want and my computer is not big enough for more capable local models.
So l created this project that creates a representative copy of my csv and uses this with open ai/gemini to generate analysis c... | 2025-07-07T04:03:27 | DiscerningTheTimes | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ltkkxd | false | null | t3_1ltkkxd | /r/LocalLLaMA/comments/1ltkkxd/built_an_open_source_project_for_analyzing_csv/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 't5fyjq77mdbf1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/t5fyjq77mdbf1.gif?width=108&crop=smart&format=png8&s=aeccc451cb270bfc79d5c7e09eb52d45b79c2b2e', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/t5fyjq77mdbf1.gif?width=216&crop=smart&format... | |
LogiQ CLI Beta | Full LMStudio Support. | 2 | Hey, we're working on a full CLI that currently beats a fair amount of other projects in benchmarks when running SOTA models, We're looking to get more testers running local models to see how it performs versus other projects that have local support.
We'd love for you to join us and help out in real world testing, we'... | 2025-07-07T03:52:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ltkdjz/logiq_cli_beta_full_lmstudio_support/ | x8ko_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltkdjz | false | null | t3_1ltkdjz | /r/LocalLLaMA/comments/1ltkdjz/logiq_cli_beta_full_lmstudio_support/ | true | false | spoiler | 2 | null |
I'd like to add some features to my ai if possible | 3 | Hi, i have a llama and webui running deepseek R1, but i'd like to know some things
first how much upgradeability do i have to the future considering i have only a rtx 4060 in laptop? i dont intend anything crazy i just want some personal organizer
id like it to have long term memory and be able to acces a folder in ... | 2025-07-07T03:43:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ltk7yh/id_like_to_add_some_features_to_my_ai_if_possible/ | RestaurantUnusual456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltk7yh | false | null | t3_1ltk7yh | /r/LocalLLaMA/comments/1ltk7yh/id_like_to_add_some_features_to_my_ai_if_possible/ | false | false | self | 3 | null |
Thoughts on lmsys/lmarena? | 0 | Do real people actually vote on things there? Seems bizarre to me anyone would spend their time doing data labelling for free | 2025-07-07T03:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ltjqct/thoughts_on_lmsyslmarena/ | HOLUPREDICTIONS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltjqct | false | null | t3_1ltjqct | /r/LocalLLaMA/comments/1ltjqct/thoughts_on_lmsyslmarena/ | false | false | self | 0 | null |
Gemma 3n is not performing well with macOS M2 MacBook Pro | 2 | So, I was attempting to run the Gemma3n model with transformer libraries on my MacBook Pro, which has the M2 silicon chip. I managed to download the model and use the transformer library, but the inference time was incredibly slow. If anyone has any experience with the MacBook and Gemma3n, it would be really helpful. | 2025-07-07T02:50:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ltj8pg/gemma_3n_is_not_performing_well_with_macos_m2/ | Strikingaks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltj8pg | false | null | t3_1ltj8pg | /r/LocalLLaMA/comments/1ltj8pg/gemma_3n_is_not_performing_well_with_macos_m2/ | false | false | self | 2 | null |
Buy a Local PC with GPU or Go for Cloud | 0 | Scenario 1:
We’re currently hosting our image‐generation model on Replicate and calling its API from the customer’s side. Unfortunately, each cold start can take up to 15 minutes to produce a single image, which is totally unacceptable for end users. (If only a very small number of customers use it, the delay is less... | 2025-07-07T02:05:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ltidhz/buy_a_local_pc_with_gpu_or_go_for_cloud/ | visionkhawar512 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltidhz | false | null | t3_1ltidhz | /r/LocalLLaMA/comments/1ltidhz/buy_a_local_pc_with_gpu_or_go_for_cloud/ | false | false | self | 0 | null |
8.5K people voted on which AI models create the best website, games, and visualizations. Both Llama Models came almost dead last. Claude comes up on top. | 89 | I was working on a [research](https://www.designarena.ai/) project (note that the votes and data is completely free and open, so not profiting off this, but just showing research as context) where users write a prompt, and then vote on content generated (e.g. websites, games, 3D visualizations) from 4 randomly generate... | 2025-07-07T01:35:48 | https://www.reddit.com/gallery/1lthtbn | adviceguru25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lthtbn | false | null | t3_1lthtbn | /r/LocalLLaMA/comments/1lthtbn/85k_people_voted_on_which_ai_models_create_the/ | false | false | 89 | null | |
The AI Revolution: How's it Going for You? | 0 |
Here, spent weeks putting this piece together. I have a whole new appreciation for George Carlin now. Satrical comedy is hard!
Audio: https://youtu.be/xmSSmpvFFaI
Text / Forums: https://cicero.sh/r/hows-the-ai-revolution
Full text of the piece:
# The AI Revolution: How's it Going for You?
Audio: https://youtu.... | 2025-07-07T01:02:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lth6ga/the_ai_revolution_hows_it_going_for_you/ | mdizak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lth6ga | false | null | t3_1lth6ga | /r/LocalLLaMA/comments/1lth6ga/the_ai_revolution_hows_it_going_for_you/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'e3bmmHLbAOUvXJFbUv7-hHT-rq6zdSCFEHpYPIbIJO4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/e3bmmHLbAOUvXJFbUv7-hHT-rq6zdSCFEHpYPIbIJO4.jpeg?width=108&crop=smart&auto=webp&s=d3125b932dc15ed6dedd94acf2aa43d2ead119a7', 'width': 108}, {'height': 162, 'url': '... |
Split Brain Project - P4 Side adventure. MOE token heads | 1 | Hey yall,
been a while but with summer comes extra electricity and with that comes breaker flips. So yay mid training failures so I have not completed this yet but did get mid way through and can see it slightttttttttttly works. Can see that it produce multiple tokens in one go and with the small dataset it got to a ... | 2025-07-07T00:39:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ltgq4i/split_brain_project_p4_side_adventure_moe_token/ | Alienanthony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltgq4i | false | null | t3_1ltgq4i | /r/LocalLLaMA/comments/1ltgq4i/split_brain_project_p4_side_adventure_moe_token/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'I2I36CRPzYma5PsHdKX21gxSCNQbLbOpawaVAeHCnY4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I2I36CRPzYma5PsHdKX21gxSCNQbLbOpawaVAeHCnY4.png?width=108&crop=smart&auto=webp&s=8cd42e9a770454b8248f1c637bfc619eb9ae8bfd', 'width': 108}, {'height': 108, 'url': 'h... | |
GitHub - tallesborges/agentic-system-prompts: A collection of system prompts and tool definitions from production AI coding agents | 27 | 2025-07-07T00:27:07 | https://github.com/tallesborges/agentic-system-prompts | Thin_Commission_8109 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ltgh9h | false | null | t3_1ltgh9h | /r/LocalLLaMA/comments/1ltgh9h/github_tallesborgesagenticsystemprompts_a/ | false | false | default | 27 | {'enabled': False, 'images': [{'id': '0Wivc8zh2a4qNYeBjwLvtx-oCar3y8G36lxh7M_r1ig', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0Wivc8zh2a4qNYeBjwLvtx-oCar3y8G36lxh7M_r1ig.png?width=108&crop=smart&auto=webp&s=b9160e84a5972c37415e4c8ce26c889f686decad', 'width': 108}, {'height': 108, 'url': 'h... | |
Fused Qwen3 MoE layer for faster training Qwen3-30B-A3B LoRA | 95 | The Qwen3 MoE model (and all other MoE models) in HF Transformers is notoriously slow, because it uses a for loop to access the experts, resulting in < 20% GPU usage. It's been two months and there are still very few LoRAs of Qwen3-30B-A3B in the public. (If you search 'qwen3 30b a3b lora' on HuggingFace, that's... int... | 2025-07-07T00:18:28 | https://github.com/woct0rdho/transformers-qwen3-moe-fused | woct0rdho | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ltgayn | false | null | t3_1ltgayn | /r/LocalLLaMA/comments/1ltgayn/fused_qwen3_moe_layer_for_faster_training/ | false | false | default | 95 | {'enabled': False, 'images': [{'id': 'HSpEkOVbwqk3BvJQ60pY-PTeM7wLxNZ1qE0Ade0f5cc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HSpEkOVbwqk3BvJQ60pY-PTeM7wLxNZ1qE0Ade0f5cc.png?width=108&crop=smart&auto=webp&s=a3be8044143b537d63e288d3975bce8cd7e0836b', 'width': 108}, {'height': 108, 'url': 'h... |
M4 Max VS M3 Ultra Qwen3 mlx inference | 14 | It seems compared with llama.cpp, mlx has greatly improved LLM inference with Apple Silicone.
I was looking at the Qwen3 inference benchmarks [https://x.com/awnihannun/status/1917050679467835880?s=61](https://x.com/awnihannun/status/1917050679467835880?s=61)
https://preview.redd.it/5q47tjbuecbf1.png?width=1213&forma... | 2025-07-07T00:16:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ltg9ji/m4_max_vs_m3_ultra_qwen3_mlx_inference/ | SuperPumpkin314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltg9ji | false | null | t3_1ltg9ji | /r/LocalLLaMA/comments/1ltg9ji/m4_max_vs_m3_ultra_qwen3_mlx_inference/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'gnJQk3_BiObsdER_rJi69pQW2NuwnCE169S22KMhlgo', 'resolutions': [{'height': 95, 'url': 'https://external-preview.redd.it/z-W_hwNGqZcVQKlbi62gXxLy_SqHFi2P1iczkOmcdTI.jpg?width=108&crop=smart&auto=webp&s=c1b223931987f4a81c1661dc66820eee7f7bea2c', 'width': 108}, {'height': 190, 'url': 'h... | |
Website-Crawler: Extract data from websites in LLM ready JSON or CSV format. Crawl or Scrape entire website with Website Crawler | 0 | 2025-07-06T23:59:01 | https://github.com/pc8544/Website-Crawler | Fluid-Engineering769 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1ltfwjv | false | null | t3_1ltfwjv | /r/LocalLLaMA/comments/1ltfwjv/websitecrawler_extract_data_from_websites_in_llm/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': '8pG6rnRIoCry-dvcBOF-au9YmfpNVda4S3Exgl6tAS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8pG6rnRIoCry-dvcBOF-au9YmfpNVda4S3Exgl6tAS8.png?width=108&crop=smart&auto=webp&s=a2e3e974bfa0ad1d227ef240101b6f5131d815a8', 'width': 108}, {'height': 108, 'url': 'h... | |
I drew a silly comic about Llama model | 140 | I'm a roleplayer using SillyTavern. Llama models are often used as 'base' for fine tunes in Huggingface. Seeing what people can do with local models also fascinate me. ^^ Hello! | 2025-07-06T23:37:41 | https://www.reddit.com/gallery/1ltfgoy | Organic-Mechanic-435 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ltfgoy | false | null | t3_1ltfgoy | /r/LocalLLaMA/comments/1ltfgoy/i_drew_a_silly_comic_about_llama_model/ | false | false | 140 | null | |
ollama and lmstudio cant browser the web why not? | 0 | OK, I'm tired of pasting everything into the chat window, why can't it browse the web? | 2025-07-06T23:07:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ltetl3/ollama_and_lmstudio_cant_browser_the_web_why_not/ | akierum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltetl3 | false | null | t3_1ltetl3 | /r/LocalLLaMA/comments/1ltetl3/ollama_and_lmstudio_cant_browser_the_web_why_not/ | false | false | self | 0 | null |
use Blender MCP with a ready made asset pack | 10 | I just tried out the Blender MCP Tutorial https://www.youtube.com/watch?v=lCyQ717DuzQ and it was really underwhelming, all the objects and materials are as basic as it gets. I guess that's the limit of using python to create mesh within blender.
So my question is - is there some sort of mcp server to an asset pack (o... | 2025-07-06T22:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lte7m8/use_blender_mcp_with_a_ready_made_asset_pack/ | fiddler64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lte7m8 | false | null | t3_1lte7m8 | /r/LocalLLaMA/comments/1lte7m8/use_blender_mcp_with_a_ready_made_asset_pack/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': '2udekmh5-lCBJaAblnPKTkQpNU_rFXF06qyPqajdvwQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/2udekmh5-lCBJaAblnPKTkQpNU_rFXF06qyPqajdvwQ.jpeg?width=108&crop=smart&auto=webp&s=c13b243c4e370515574680f8a2a1e561b1dfc758', 'width': 108}, {'height': 162, 'url': '... |
tenstorrent for LLM inference | 1 | could i pair two p100a (28gb) tenstorrent LPUs together to power an on prem AI inference model for my office of 11 people. would it be able to concurrently answer 3 people’s questions. should i look at other hardware alternatives. i’d like to be able to run something like mistral 8x7b or better on this. would love to h... | 2025-07-06T22:19:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ltdrkm/tenstorrent_for_llm_inference/ | Odd_Translator_3026 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltdrkm | false | null | t3_1ltdrkm | /r/LocalLLaMA/comments/1ltdrkm/tenstorrent_for_llm_inference/ | false | false | self | 1 | null |
Looking for an LLM suggestion for sorting massive CSVs. | 0 | New in the AI game but think we can utilize it heavily in our small shop. We receive data with 10s of thousands of records containing PII data, and would like to utilize a (preferably free) LLM to help our guys out. I like the idea of PandasAI but was wondering if there was any other suggestions? | 2025-07-06T22:12:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ltdmhl/looking_for_an_llm_suggestion_for_sorting_massive/ | LordMomotius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltdmhl | false | null | t3_1ltdmhl | /r/LocalLLaMA/comments/1ltdmhl/looking_for_an_llm_suggestion_for_sorting_massive/ | false | false | self | 0 | null |
OpenWebUI - Truncating Context or Model Limitation? | 1 | Hi all,
I'm running OpenWebUI v0.6.15 (though I've reproduced it on older versions), and I'm having a consistent problem where my prompt is seemingly truncated. Whether I use the API or the web UI, the model's response clearly indicates that it's not getting the entire prompt.
When I paste the list before the instr... | 2025-07-06T22:07:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ltdi5y/openwebui_truncating_context_or_model_limitation/ | Coronoi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltdi5y | false | null | t3_1ltdi5y | /r/LocalLLaMA/comments/1ltdi5y/openwebui_truncating_context_or_model_limitation/ | false | false | self | 1 | null |
Self hosted LLM with GPU support, Apache server, Email server on a Windows 10 PC - need to upgrade PC and OS | 1 | Hello,
I have as described a LLM programmed in Llama-cpp-python with CUDA GPU support in Windows 10. I have 4 GPUs on an 'old' (2022) mining motherboard. I also host an Apache2 server for web and Java-based James email server. The system is not very stable and honestly it's made for that kind of use. I am looking to ... | 2025-07-06T22:06:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ltdh0n/self_hosted_llm_with_gpu_support_apache_server/ | calypset | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltdh0n | false | null | t3_1ltdh0n | /r/LocalLLaMA/comments/1ltdh0n/self_hosted_llm_with_gpu_support_apache_server/ | false | false | self | 1 | null |
Local LLM for business | 11 | I own a mid size electrical contracting bussiness, about 35 employees. I'm thinking of implementing a local ai server maybe mixtral 8x7B to increase the efficiency of the business. My main reason is for book keeping/receipt processing, finance etc as of now but I'm hoping to train on other areas. any other ideas on how... | 2025-07-06T21:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/1ltcwbx/local_llm_for_business/ | Acceptable_Factor817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltcwbx | false | null | t3_1ltcwbx | /r/LocalLLaMA/comments/1ltcwbx/local_llm_for_business/ | false | false | self | 11 | null |
Microsoft AI learning certification | 1 | How has everyone’s experience been with the Microsoft AI learning certification? I feel like I learned a bit about neural nets, but not much, I’m not sure it’s even worthwhile to add to my certifications… | 2025-07-06T21:36:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ltcsbv/microsoft_ai_learning_certification/ | ChrisZavadil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltcsbv | false | null | t3_1ltcsbv | /r/LocalLLaMA/comments/1ltcsbv/microsoft_ai_learning_certification/ | false | false | self | 1 | null |
🎧 Listen and Compare 12 Open-Source Text-to-Speech Models (Hugging Face Space) | 132 | Hey everyone!
We have been exploring various open-source Text-to-Speech (TTS) models, and decided to create a Hugging Face demo space that makes it easy to compare their quality side-by-side.
The demo features **12 popular TTS models**, all tested using a consistent prompt, so you can quickly hear and compare their s... | 2025-07-06T20:53:01 | rbgo404 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ltbrlf | false | null | t3_1ltbrlf | /r/LocalLLaMA/comments/1ltbrlf/listen_and_compare_12_opensource_texttospeech/ | false | false | default | 132 | {'enabled': True, 'images': [{'id': 'bwd1gqkrfbbf1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/bwd1gqkrfbbf1.png?width=108&crop=smart&auto=webp&s=97ba7e4a8b732ff1ec264045dbf41eca64bef176', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/bwd1gqkrfbbf1.png?width=216&crop=smart&auto=web... | |
Best reasoning model for Apple silicon with 128GB | 38 | I have an MacBook M4 Max with 128 GB and LM Studio. Playing around with Gemma 3 models and Llama 4 Scout. What is the best reasoning model that will fit into my RAM?
Also, running HF Diffusers app. Running SD3 Medium for txt2img, anything else I should be looking at? | 2025-07-06T20:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ltbr1t/best_reasoning_model_for_apple_silicon_with_128gb/ | FuguSandwich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltbr1t | false | null | t3_1ltbr1t | /r/LocalLLaMA/comments/1ltbr1t/best_reasoning_model_for_apple_silicon_with_128gb/ | false | false | self | 38 | null |
Help Needed: Fine-Tuning Mistral 7B on Yelp Dataset | 0 | I’m a beginner computer science master’s student working on fine-tuning Mistral 7B with Yelp data. I developed the code on Kaggle but have limited resources. If anyone can help run the fine-tuning, please contact me at: [yaakoubiey@gmail.com](mailto:yaakoubiey@gmail.com) | 2025-07-06T20:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1ltblb3/help_needed_finetuning_mistral_7b_on_yelp_dataset/ | Several_Sound9974 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltblb3 | false | null | t3_1ltblb3 | /r/LocalLLaMA/comments/1ltblb3/help_needed_finetuning_mistral_7b_on_yelp_dataset/ | false | false | self | 0 | null |
Narrative Beam Search workflow in Open WebUI | 64 | **What is this?**
A variant of beam search which runs from the point of view of different system prompts. The workflow runs in an optimising LLM proxy that sends an artifact back to Open WebUI that listens to the data from the pending completion.
[Code](https://github.com/av/harbor/blob/main/boost/src/modules/nbs.py)... | 2025-07-06T20:39:40 | https://v.redd.it/067r3vt8ebbf1 | Everlier | /r/LocalLLaMA/comments/1ltbg2s/narrative_beam_search_workflow_in_open_webui/ | 1970-01-01T00:00:00 | 0 | {} | 1ltbg2s | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/067r3vt8ebbf1/DASHPlaylist.mpd?a=1754555988%2CZjRhYjI2ZDMxNzFmY2RiNTc3ZTY5MDQyNDQ0MTZiZWQ4YTQ4ZWJkNTcxY2M1YTEzOGE2OTFjZWQ1MGIwNWY1Ng%3D%3D&v=1&f=sd', 'duration': 208, 'fallback_url': 'https://v.redd.it/067r3vt8ebbf1/DASH_1080.mp4?source=fallback', '... | t3_1ltbg2s | /r/LocalLLaMA/comments/1ltbg2s/narrative_beam_search_workflow_in_open_webui/ | false | false | 64 | {'enabled': False, 'images': [{'id': 'cGprbTl2dDhlYmJmMfNv2qQc5KO5fB6gHC38B4rcVAB-vfM2l6tq4JjQRNK2', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cGprbTl2dDhlYmJmMfNv2qQc5KO5fB6gHC38B4rcVAB-vfM2l6tq4JjQRNK2.png?width=108&crop=smart&format=pjpg&auto=webp&s=a5259ceeed5dda16f2ea2ddf334adbff791a2... | |
gpus and tpus needed | 0 | hi everyone so I am a student and I am training one of the best projects of my life so far. its solely based on a research paper by google researchers
I did the data preprocess part but for training I needed gpus or tpus. tbh I thought google cloud will be perfect but they have a quota restriction to which i applied t... | 2025-07-06T20:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ltbaqx/gpus_and_tpus_needed/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltbaqx | false | null | t3_1ltbaqx | /r/LocalLLaMA/comments/1ltbaqx/gpus_and_tpus_needed/ | false | false | self | 0 | null |
deepseek promt | 0 | Does anyone have a Frenchcard for the Dipsic today? | 2025-07-06T20:16:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ltavu1/deepseek_promt/ | Emotional-Elk-1683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltavu1 | false | null | t3_1ltavu1 | /r/LocalLLaMA/comments/1ltavu1/deepseek_promt/ | false | false | self | 0 | null |
Cheapest way to stack VRAM in 2025? | 200 | I'm looking to get a total of ~ 140 GB RAM/VRAM combined to run Qwen 235B Q4. Current i have 96 GB RAM so next step is to get some cheap VRAM. After some research i found the following options at around 1000$ each:
1. 4x RTX 3060 (48 GB)
2. 4x P100 (64 GB)
3. 3x P40 (72 GB)
4. 3x RX 9060 (48 GB)
Which GPU do you r... | 2025-07-06T20:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ltamap/cheapest_way_to_stack_vram_in_2025/ | gnad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltamap | false | null | t3_1ltamap | /r/LocalLLaMA/comments/1ltamap/cheapest_way_to_stack_vram_in_2025/ | false | false | self | 200 | null |
Simple and free STT (voice to text) website | 3 | I have built this app and made it free. Do you think someone will be using it?
Link is: [https://dict247.com](https://dict247.com) | 2025-07-06T19:52:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ltabcu/simple_and_free_stt_voice_to_text_website/ | NikitaY_Indie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ltabcu | false | null | t3_1ltabcu | /r/LocalLLaMA/comments/1ltabcu/simple_and_free_stt_voice_to_text_website/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'Hj4v8IpDCP05HlwDSV28hJPefI_npxYdLo_QgHqQjQU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Hj4v8IpDCP05HlwDSV28hJPefI_npxYdLo_QgHqQjQU.jpeg?width=108&crop=smart&auto=webp&s=ce8ff2e1af3d77aa17480327603d0b4ea5c8573d', 'width': 108}, {'height': 121, 'url': '... |
Does anyone have a link/supplier for Nvlink cables/bridges? | 1 | Hey LocalLlama community,
Does anyone have a link to where I could get nvlink bridge/cables for a rig with 3090’s?
I’m wondering if there’s an aftermarket manufacturer that makes cable connects for the Nvlink slots.
Also open to used OEM ones.
I’m new to nvlink and I’m not sure if I’m searching with the right te... | 2025-07-06T19:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lt9t7r/does_anyone_have_a_linksupplier_for_nvlink/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt9t7r | false | null | t3_1lt9t7r | /r/LocalLLaMA/comments/1lt9t7r/does_anyone_have_a_linksupplier_for_nvlink/ | false | false | self | 1 | null |
Are there any local Text-to-Speech model options that can do screamo/metal style vocals (existing models)? | 12 | I'm not at all familiar with Local LLMs beyond image generation ones so forgive me for the noon questions.
Im looking for something like what ElevenLabs has to offer, but I would like to run it locally since I may need to run multiple variations. I'm also looking for something that can do metal/screamo style vocals fo... | 2025-07-06T19:25:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lt9ot6/are_there_any_local_texttospeech_model_options/ | Visible-Midnight4687 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt9ot6 | false | null | t3_1lt9ot6 | /r/LocalLLaMA/comments/1lt9ot6/are_there_any_local_texttospeech_model_options/ | false | false | self | 12 | null |
Wouldn't it be great if we have a local offline ChatGPT runs on a phone, with all the functionality of normal ChatGPT, such as search, deep research, perhaps function tooling. What do you think? | 0 | I made an offline ChatGPT that runs on a phone similar to [https://play.google.com/store/apps/details?id=com.sandoche.llamao](https://play.google.com/store/apps/details?id=com.sandoche.llamao) . Now this is all and great, but I think accuracy is a tremendous issue here, if we compare to ChatGPT. In order to mitigate th... | 2025-07-06T19:07:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lt98oq/wouldnt_it_be_great_if_we_have_a_local_offline/ | samkoesnadi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt98oq | false | null | t3_1lt98oq | /r/LocalLLaMA/comments/1lt98oq/wouldnt_it_be_great_if_we_have_a_local_offline/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '4toYMrC1hGQnZ3rYk-NX-qj3ad988wGhBf4PA-IbBdw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/4toYMrC1hGQnZ3rYk-NX-qj3ad988wGhBf4PA-IbBdw.png?width=108&crop=smart&auto=webp&s=e0c58a79c2d5aa21769f0dab3a6c072b7669f715', 'width': 108}, {'height': 216, 'url': '... |
Im working with a project that needed synthetic data generation using LLM.Anyone here have experience with it? | 2 | Would like to more about the approach and the process and tools | 2025-07-06T18:57:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lt8zkl/im_working_with_a_project_that_needed_synthetic/ | Remarkable-Ad3290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt8zkl | false | null | t3_1lt8zkl | /r/LocalLLaMA/comments/1lt8zkl/im_working_with_a_project_that_needed_synthetic/ | false | false | self | 2 | null |
Anyone herehave experience with synthetic data generation? | 1 | what approach and tools you are using?would like to know more about the process | 2025-07-06T18:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lt8v19/anyone_herehave_experience_with_synthetic_data/ | Remarkable-Ad3290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt8v19 | false | null | t3_1lt8v19 | /r/LocalLLaMA/comments/1lt8v19/anyone_herehave_experience_with_synthetic_data/ | false | false | self | 1 | null |
Llamacpp | Samsung s24+ | Snapdragon 8 Gen 3 + Adreno 750 | Real world testing with Qwen3-4B | 25 | Model Performance Summary based on real-world testing:
**Q4_0 Model:**
- CPU-only: 8.30 tokens/second (recommended)
- GPU (25 layers): 8.81 tokens/second (competitive)
- GPU excels at prompt processing (57.86 vs 41.60 tok/s)
**Q5_K_M Model:**
- CPU-only: 7.15 tokens/second (much better)
- GPU (25 layers): 2.67 toke... | 2025-07-06T18:37:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lt8j4u/llamacpp_samsung_s24_snapdragon_8_gen_3_adreno/ | 73tada | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lt8j4u | false | null | t3_1lt8j4u | /r/LocalLLaMA/comments/1lt8j4u/llamacpp_samsung_s24_snapdragon_8_gen_3_adreno/ | false | false | self | 25 | null |