| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Tool calling with LlamaCpp | 4 | I am new to locally hosting LLMs with llama.cpp. I am eager to know how people are doing tool calls with it, since I am having trouble both when using it as part of LangChain and when using it with the Python binding library llama-cpp-python (a minimal tool-calling sketch appears after this table).
1. LlamaCpp in LangChain: doesn't allow "auto" as a tool_call parameter and needs ... | 2025-07-01T23:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lph2zh/tool_calling_with_llamacpp/ | Dry_Yam_322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lph2zh | false | null | t3_1lph2zh | /r/LocalLLaMA/comments/1lph2zh/tool_calling_with_llamacpp/ | false | false | self | 4 | null |
Gemma 3n error loading in colab | 1 | 2025-07-01T22:56:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lpg37t/gemma_3n_error_loading_in_colab/ | Soren_Professor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpg37t | false | null | t3_1lpg37t | /r/LocalLLaMA/comments/1lpg37t/gemma_3n_error_loading_in_colab/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'IC6coZlwpk_fARsvNY459DFstpSWY4P6nUt2foDl524', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/IC6coZlwpk_fARsvNY459DFstpSWY4P6nUt2foDl524.png?width=108&crop=smart&auto=webp&s=cc0422e6ae22a224086c77f9e02eec510481a08d', 'width': 108}, {'height': 121, 'url': 'h... | ||
Tenstorrent Blackhole Cards | 401 | Just got in some Blackhole p150b cards! Excited to try these out... Anyone else on here running some of these? Curious to collaborate! | 2025-07-01T21:56:17 | SashaUsesReddit | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lpep3m | false | null | t3_1lpep3m | /r/LocalLLaMA/comments/1lpep3m/tenstorrent_blackhole_cards/ | false | false | default | 401 | {'enabled': True, 'images': [{'id': 'ffghybw34caf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ffghybw34caf1.jpeg?width=108&crop=smart&auto=webp&s=9f6dfbded9b8ff691de3b7ccdabe318300c7a91c', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ffghybw34caf1.jpeg?width=216&crop=smart&auto=w... | |
Qwen3 inference engine in C: simple, educational, fun | 166 | For those who may be interested, a free-time project that I've now put up on Github: [https://github.com/adriancable/qwen3.c](https://github.com/adriancable/qwen3.c)
Run Qwen3-architecture models (like Qwen3-4B, or DeepSeek-R1-0528-Qwen3-8B) locally, no GPU required, using an LLM inference engine you build yourself fr... | 2025-07-01T21:49:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lpejnj/qwen3_inference_engine_in_c_simple_educational_fun/ | adrian-cable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpejnj | false | null | t3_1lpejnj | /r/LocalLLaMA/comments/1lpejnj/qwen3_inference_engine_in_c_simple_educational_fun/ | false | false | self | 166 | {'enabled': False, 'images': [{'id': 'LxoqZs4q3Osj78IVaTXSrgUKqNHcOujrOF1Tg6_GYA4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LxoqZs4q3Osj78IVaTXSrgUKqNHcOujrOF1Tg6_GYA4.jpeg?width=108&crop=smart&auto=webp&s=b6a4a1ab699ce9984d57b0696bdd1f873de9e614', 'width': 108}, {'height': 216, 'url': ... |
sGPU with s3000 | 3 | Dear Brothers in POSIX, has anyone had success splitting the S3000 between containers? I know Moore Threads has a manual for that, and I can even see the GPU inside the container. But it doesn't take any workload, always 0. | 2025-07-01T21:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lpe7hs/sgpu_with_s3000/ | GoldCompetition7722 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpe7hs | false | null | t3_1lpe7hs | /r/LocalLLaMA/comments/1lpe7hs/sgpu_with_s3000/ | false | false | self | 3 | null |
Current best options to convert to FP4 | 5 | Perplexity hasn't had too much for me - I'm assuming you know better
I have never quantized / converted a full weights model to anything, but since I'm getting a GB10 DGX I want to have options if the model I want isn't already available in FP4. I know TensorRT model optimizer can do it, but it looks like it only supp... | 2025-07-01T20:52:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lpd3y7/current_best_options_to_convert_to_fp4/ | zelkovamoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpd3y7 | false | null | t3_1lpd3y7 | /r/LocalLLaMA/comments/1lpd3y7/current_best_options_to_convert_to_fp4/ | false | false | self | 5 | null |
Low Code GTM AI Agent: Perplexity + Reddit/X + GPT-4o + N8N | 1 | [removed] | 2025-07-01T20:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lpcnnb/low_code_gtm_ai_agent_perplexity_redditx_gpt4o_n8n/ | Sam_Tech1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpcnnb | false | null | t3_1lpcnnb | /r/LocalLLaMA/comments/1lpcnnb/low_code_gtm_ai_agent_perplexity_redditx_gpt4o_n8n/ | false | false | self | 1 | null |
Cypher Alpha identity revealed. and NO! It's not GPT-5 | 1 | [removed] | 2025-07-01T19:47:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lpbgay/cypher_alpha_identity_revealed_and_no_its_not_gpt5/ | Ok-Weakness-4753 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpbgay | false | null | t3_1lpbgay | /r/LocalLLaMA/comments/1lpbgay/cypher_alpha_identity_revealed_and_no_its_not_gpt5/ | false | false | self | 1 | null |
Local AI platform on older machine | 0 | I have 30 years in IT but new to AI, and I'd like to run Ollama locally. To save $$ I'd like to repurpose an older machine with max hardware: KGPE-D16 mobo, dual Opteron 6380's, 128GB ECC RAM and 8TB SSD storage.
Research indicates the best solution is to get a solid GPU only for the VRAM. Best value GPU is currently ... | 2025-07-01T19:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lpbamg/local_ai_platform_on_older_machine/ | zearo_kool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpbamg | false | null | t3_1lpbamg | /r/LocalLLaMA/comments/1lpbamg/local_ai_platform_on_older_machine/ | false | false | self | 0 | null |
Meet LiteChat, my feature packed local first chat ui | 1 | [removed] | 2025-07-01T19:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lpan34/meet_litechat_my_feature_packed_local_first_chat/ | dbuildofficial | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpan34 | false | null | t3_1lpan34 | /r/LocalLLaMA/comments/1lpan34/meet_litechat_my_feature_packed_local_first_chat/ | false | false | 1 | null | |
Cheap hosting where I can host a bunch of LLMs? | 4 | I have a solution that I am trying to test and integrate with LLMs/AI. Since my local computer isn't powerful enough to host those behemoth open-source LLMs, I'm thinking of getting some kind of VPS where I will test everything from. But since AI is GPU-intensive, not CPU-intensive, I'm stranded. I don't like the pe... | 2025-07-01T18:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lpa4rc/cheap_hosting_where_i_can_host_bunch_of_llm/ | Dodokii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lpa4rc | false | null | t3_1lpa4rc | /r/LocalLLaMA/comments/1lpa4rc/cheap_hosting_where_i_can_host_bunch_of_llm/ | false | false | self | 4 | null |
An Initial LLM Safety Analysis of Apple's On-Device 3B Model | 0 | Saw this on Hacker News and thought it was an interesting first look into the safety of Apple's new on-device AI. A recent analysis tested the foundation model that powers Apple Intelligence. The analysis also tested Apple's official "Safety Recipe", which emphasizes keywords with uppercase letters, and found it can im... | 2025-07-01T18:49:21 | https://www.cycraft.com/post/apple-on-device-foundation-model-en-20250630 | Novel-Recover8208 | cycraft.com | 1970-01-01T00:00:00 | 0 | {} | 1lp9xrh | false | null | t3_1lp9xrh | /r/LocalLLaMA/comments/1lp9xrh/an_initial_llm_safety_analysis_of_apples_ondevice/ | false | false | default | 0 | null |
Huawei releases an open weight model Pangu Pro 72B A16B. Weights are on HF. It should be competitive with Qwen3 32B and it was trained entirely on Huawei Ascend NPUs. (2505.21411) | 511 | 2025-07-01T18:30:51 | https://huggingface.co/IntervitensInc/pangu-pro-moe-model | FullOf_Bad_Ideas | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lp9gh2 | false | null | t3_1lp9gh2 | /r/LocalLLaMA/comments/1lp9gh2/huawei_releases_an_open_weight_model_pangu_pro/ | false | false | default | 511 | {'enabled': False, 'images': [{'id': 'KKUaRRu1NZXsmquOk2Id9DRnEhBD6P6w5Y5xZQur5Yc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KKUaRRu1NZXsmquOk2Id9DRnEhBD6P6w5Y5xZQur5Yc.png?width=108&crop=smart&auto=webp&s=5fa50f130c1d12794fe17be8766a2c4749d61f5a', 'width': 108}, {'height': 116, 'url': 'h... | |
Are there any open-weight diffusion-based language models I can test right now on my own hardware? | 8 | If so, I would appreciate some links to the simplest of them to get up and running.
Diffusion language models will give us the next great performance leap in language/text generation, right? | 2025-07-01T17:57:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lp8kzx/is_there_any_openweightd_diffusion_based_language/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp8kzx | false | null | t3_1lp8kzx | /r/LocalLLaMA/comments/1lp8kzx/is_there_any_openweightd_diffusion_based_language/ | false | false | self | 8 | null |
Sophgo TPU SC11 FP300, 256GB, 1.1Tb/s, PCIE-5 | 42 | [https://www.scmp.com/tech/tech-trends/article/3316363/chinese-chipmaker-sophgo-adapts-compute-card-deepseek-beijings-self-reliance-push?module=perpetual\_scroll\_0&pgtype=article](https://www.scmp.com/tech/tech-trends/article/3316363/chinese-chipmaker-sophgo-adapts-compute-card-deepseek-beijings-self-reliance-push?mod... | 2025-07-01T17:57:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lp8kfw/sophgo_tpu_sc11_fp300_256gb_11tbs_pcie5/ | On1ineAxeL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp8kfw | false | null | t3_1lp8kfw | /r/LocalLLaMA/comments/1lp8kfw/sophgo_tpu_sc11_fp300_256gb_11tbs_pcie5/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'dfaryXk7Svqxh1IPzsdtnwh-Tb9PLhB1df8BVMC0jGM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dfaryXk7Svqxh1IPzsdtnwh-Tb9PLhB1df8BVMC0jGM.jpeg?width=108&crop=smart&auto=webp&s=bf5766c46209730b6debfcc4fd6af380a92b42e2', 'width': 108}, {'height': 113, 'url': '... | |
I tested new gemma3n q8_0(7.5gb) vs phi-4 q4(9.1gb) in coding | 1 | [removed] | 2025-07-01T17:54:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lp8hll/i_tested_new_gemma3n_q8_075gb_vs_phi4_q491gb_in/ | tiga_94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp8hll | false | null | t3_1lp8hll | /r/LocalLLaMA/comments/1lp8hll/i_tested_new_gemma3n_q8_075gb_vs_phi4_q491gb_in/ | false | false | self | 1 | null |
Good/Best MoE Models for 32GB RAM? | 14 | **TL;DR**: Please share worthy MoE models for 32GB RAM. Useful for my laptop, which has a tiny GPU. I'm expecting at least a 20 t/s response. Thanks.
Today I tried Qwen3-30B-A3B Q4 (Unsloth Qwen3-30B-A3B-UD-Q4\_K\_XL - 17GB size). I applied the same settings mentioned on the Unsloth page.
>For non-thinking mode (`enable_thinking... | 2025-07-01T17:50:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lp8e8m/goodbest_moe_models_for_32gb_ram/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp8e8m | false | null | t3_1lp8e8m | /r/LocalLLaMA/comments/1lp8e8m/goodbest_moe_models_for_32gb_ram/ | false | false | self | 14 | null |
QServe Performance on L40S GPU for Llama 3 8B | 2 | I am new to LocalLLaMA, and I wanted to ask about the following.
My use case is to run parallel requests (prompts), around 10 to 20 on average and up to 100 at peak.
I researched and found QServe, developed by the [MIT Han Lab](https://hanlab.mit.edu/).
I learned that, on an L40S GPU, using the model Llama-3-8B-Instruc... | 2025-07-01T17:42:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lp86ow/qserve_performance_on_l40s_gpu_for_llama_3_8b/ | EggIll649 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp86ow | false | null | t3_1lp86ow | /r/LocalLLaMA/comments/1lp86ow/qserve_performance_on_l40s_gpu_for_llama_3_8b/ | false | false | self | 2 | null |
gemma3 keeps outputting stop tokens and simulating user responses (using Ollama + Gemma 3 27B Q4_0 + open webui) | 0 | Hi, I’m running a local LLM setup on my Mac Studio (M1 Max, 64GB RAM) using Ollama with the Gemma 3 27B Q4_0 model.
Overall, the model is running well and the quality of responses has been great, but I keep running into an issue where the model randomly outputs stop sequence tokens like </end_of_turn> or <end_of_turn>... | 2025-07-01T17:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lp7nek/gemma3_keeps_outputting_stop_tokens_and/ | thisisntmethisisme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp7nek | false | null | t3_1lp7nek | /r/LocalLLaMA/comments/1lp7nek/gemma3_keeps_outputting_stop_tokens_and/ | false | false | self | 0 | null |
Is Notebook LLM (NotebookLM) redundant if I already use ChatGPT Plus, Claude Pro, & Gemini Pro (Projects/Gems)? | 0 | Hey all,
I’m trying to understand the actual use case & strategic advantage of Notebook LLM (NotebookLM, Google’s tool).
I’ve seen some positive write-ups, but I already use a fairly integrated setup across three leading models:
- ChatGPT Plus (Projects): My primary workhorse—used for structured legal/compliance wo... | 2025-07-01T17:07:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lp78v3/is_notebook_llm_notebooklm_redundant_if_i_already/ | TheLawIsSacred | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp78v3 | false | null | t3_1lp78v3 | /r/LocalLLaMA/comments/1lp78v3/is_notebook_llm_notebooklm_redundant_if_i_already/ | false | false | self | 0 | null |
Anyone experimenting with local multi-modal LLaMA or RAG pipelines? Curious about integration strategies. | 7 | In order to achieve a fully offline, multi-modal solution, I'm constructing a local RAG pipeline using LLaMA (7B/13B) and integrating it with vector DBs such as Faiss/Chroma for domain-specific document QA.
Seeking to learn from those who are experimenting with: multimodal input (using CLIP/BLIP to add photos and... | 2025-07-01T16:34:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lp6def/anyone_experimenting_with_local_multimodal_llama/ | No_Edge2098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp6def | false | null | t3_1lp6def | /r/LocalLLaMA/comments/1lp6def/anyone_experimenting_with_local_multimodal_llama/ | false | false | self | 7 | null |
Anon-kode on Gitee | 0 | [https://gitee.com/bl1zz/anon-kode](https://gitee.com/bl1zz/anon-kode) | 2025-07-01T16:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lp6925/anonkode_on_gitee/ | throwaway87-2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp6925 | false | null | t3_1lp6925 | /r/LocalLLaMA/comments/1lp6925/anonkode_on_gitee/ | false | false | self | 0 | null |
Reuse non-prefix KV Cache and speed up RAG by 3X with LMCache. | 127 | Hey r/LocalLLaMA!
A while back, we shared our open-source project LMCache here and were blown away by the incredible support and feedback. Today, our team is thrilled to share more about one of our core components: **CacheBlend**. Recognized with a **Best Paper Award at AC... | 2025-07-01T16:26:03 | Nice-Comfortable-650 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lp653l | false | null | t3_1lp653l | /r/LocalLLaMA/comments/1lp653l/reuse_nonprefix_kv_cache_and_speed_up_rag_by_3x/ | false | false | default | 127 | {'enabled': True, 'images': [{'id': '9eq6ted4haaf1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/9eq6ted4haaf1.jpeg?width=108&crop=smart&auto=webp&s=0a0e24393dd73d59d6b136cc461e9599fc9e7f57', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/9eq6ted4haaf1.jpeg?width=216&crop=smart&auto=w... | |
[vLLM] Computing Attention Scores with Long Context LLMs | 2 | I'm trying to compute the top-k tokens yielding the highest attention scores with inference frameworks such as vLLM or the plain HuggingFace transformers. The models I'm using are not big in terms of parameters (max 7B) but huge in terms of context windows (up to 1M tokens, and I'm using all of it). However, I face two... | 2025-07-01T16:18:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5xu9/vllm_computing_attention_scores_with_long_context/ | Debonargon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5xu9 | false | null | t3_1lp5xu9 | /r/LocalLLaMA/comments/1lp5xu9/vllm_computing_attention_scores_with_long_context/ | false | false | self | 2 | null |
Optimizing a server’s resources using artificial intelligence | 1 | [removed] | 2025-07-01T16:17:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5wt5/loptimisation_des_ressources_dun_serveur_à_laide/ | Sensitive_Ad_8853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5wt5 | false | null | t3_1lp5wt5 | /r/LocalLLaMA/comments/1lp5wt5/loptimisation_des_ressources_dun_serveur_à_laide/ | false | false | self | 1 | null |
Day 7/50: Building a Small Language Model from Scratch – Coding Positional Embeddings | 35 | Yesterday, we discussed *what* [positional embeddings](https://www.ideaweaver.ai/blog/day6.html) are and *why* they’re essential in Transformer models. Today, let’s jump into the code and see exactly how they're implemented (a minimal sketch also appears after this table).
The reference implementation comes from an open-source GPT-style model I’ve been experimenting... | 2025-07-01T16:09:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5pt0/day_750_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5pt0 | false | null | t3_1lp5pt0 | /r/LocalLLaMA/comments/1lp5pt0/day_750_building_a_small_language_model_from/ | false | false | self | 35 | null |
Using llama.cpp in an enterprise? | 4 | Pretty much the title!
Does anyone have examples of llama.cpp being used in a form of enterprise/business context successfully?
I see vLLM used at scale everywhere, so it would be cool to see any use cases that leverage laptops/lower-end hardware towards their benefit! | 2025-07-01T16:08:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5obe/using_llamacpp_in_an_enterprise/ | Careless-Car_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5obe | false | null | t3_1lp5obe | /r/LocalLLaMA/comments/1lp5obe/using_llamacpp_in_an_enterprise/ | false | false | self | 4 | null |
Gemma 3n Fine-tuning now in Unsloth - 1.5x faster with 50% less VRAM + Fixes | 324 | Hey LocalLlama! We made finetuning Gemma 3N 1.5x faster in a free Colab with [Unsloth](http://github.com/unslothai/unsloth) in under 16GB of VRAM! We also managed to find and fix issues for Gemma 3N:
**Ollama & GGUF fixes** \- All Gemma 3N GGUFs could not load in Ollama properly since `per_layer_token_embd` had loadin... | 2025-07-01T16:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5nhy/gemma_3n_finetuning_now_in_unsloth_15x_faster/ | danielhanchen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5nhy | false | null | t3_1lp5nhy | /r/LocalLLaMA/comments/1lp5nhy/gemma_3n_finetuning_now_in_unsloth_15x_faster/ | false | false | 324 | {'enabled': False, 'images': [{'id': 'Z9xq13sybpzZE8ZKJtOY4SuppgTg5x6rIeOPF_Fl1qk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z9xq13sybpzZE8ZKJtOY4SuppgTg5x6rIeOPF_Fl1qk.png?width=108&crop=smart&auto=webp&s=a83dfb9ada0d20a0e1b8c45beba8cb4b925dbb13', 'width': 108}, {'height': 108, 'url': 'h... | |
Very small high-scoring models + web search? | 1 | If we can make some models that can "reason" very well but lack a lot of knowledge, isn't it generally cheaper to just have a small model + added context from a web search API?
Are there pipelines for such a project on GitHub or elsewhere?
I wanted to try out something like qwen3-8b-r1 + web search and... | 2025-07-01T16:05:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lp5lu3/very_small_high_scores_models_web_search/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp5lu3 | false | null | t3_1lp5lu3 | /r/LocalLLaMA/comments/1lp5lu3/very_small_high_scores_models_web_search/ | false | false | self | 1 | null |
General storage question? | 0 | It looks like RAG uses a Vector database when storing data.
Is this basically the same way that general LLMs store data? Or are there big differences between how a local RAG setup stores data and how off-the-shelf models store data? | 2025-07-01T15:35:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lp4ttf/general_storage_question/ | rocky_balboa202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp4ttf | false | null | t3_1lp4ttf | /r/LocalLLaMA/comments/1lp4ttf/general_storage_question/ | false | false | self | 0 | null |
I Designed an LLM Shorthand Based on Language Attributes, Math and Python | 8 | From the Repo:
>Fact-RAR is a symbolic mini-language for writing declarative knowledge in an **LLM-friendly**, **token-efficient**, and **human-readable** format. (Some humans may find it tedious or dense.) It is a mini-language which was inspired by Japanese grammar, low-resource syntax, and programming idioms and sy... | 2025-07-01T15:22:31 | https://github.com/sidewaysthought/fact-rar | cddelgado | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lp4h7t | false | null | t3_1lp4h7t | /r/LocalLLaMA/comments/1lp4h7t/i_designed_an_llm_shorthand_based_on_language/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'S9o0IA1U3pGH6QHTYM7xe67F2sJ2lrkOFphUXXN9Wf0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S9o0IA1U3pGH6QHTYM7xe67F2sJ2lrkOFphUXXN9Wf0.png?width=108&crop=smart&auto=webp&s=f64131dcc862ee413ef14c21f37d360c76dcc84c', 'width': 108}, {'height': 108, 'url': 'h... |
Help on prompt memory and personas - what to do? | 3 | I need some recommendations on what to do to implement prompt/persona memory across my local setup. I've read up on vector databases and levels to set, but am looking for a step by step on which compoments to implement. I would love to have the solution self-hosted and local, and I am a full time AI user with 40% of my... | 2025-07-01T15:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lp4cht/help_on_prompt_memory_and_personas_what_to_do/ | TheRealKevinChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp4cht | false | null | t3_1lp4cht | /r/LocalLLaMA/comments/1lp4cht/help_on_prompt_memory_and_personas_what_to_do/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'bRQcNXstcv2BiIay-8r1LtsiuiWNo7QpTk2ap1nCfB8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/bRQcNXstcv2BiIay-8r1LtsiuiWNo7QpTk2ap1nCfB8.png?width=108&crop=smart&auto=webp&s=caffcb0ba5849a49df0852148cab50d60ff168c5', 'width': 108}, {'height': 216, 'url': '... |
Stop Guessing. Start testing AI | 1 | [removed] | 2025-07-01T14:43:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lp3h9r/stop_guessing_start_testing_ai/ | aillmeval | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp3h9r | false | null | t3_1lp3h9r | /r/LocalLLaMA/comments/1lp3h9r/stop_guessing_start_testing_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 's-ib-uDYCokapQng5Hm3ZEaBR1w4MDbOweZkcodSj18', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s-ib-uDYCokapQng5Hm3ZEaBR1w4MDbOweZkcodSj18.jpeg?width=108&crop=smart&auto=webp&s=30fc870d6fcfef485b2a709ad9c0e7f3c0fb8fc7', 'width': 108}, {'height': 113, 'url': '... |
llm-api-test: Open-source tool to benchmark LLM API speed (OpenAI, Claude, self-hosted) | 1 | [removed] | 2025-07-01T14:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lp3ff1/llmapitest_opensource_tool_to_benchmark_llm_api/ | Fun-Campaign1759 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp3ff1 | false | null | t3_1lp3ff1 | /r/LocalLLaMA/comments/1lp3ff1/llmapitest_opensource_tool_to_benchmark_llm_api/ | false | false | self | 1 | null |
LoRA training on NVIDIA Jetson AGX Orin 64GB | 20 | 2025-07-01T14:32:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lp37v0/lora_training_on_nvidia_jetson_agx_orin_64gb/ | ahstanin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp37v0 | false | null | t3_1lp37v0 | /r/LocalLLaMA/comments/1lp37v0/lora_training_on_nvidia_jetson_agx_orin_64gb/ | false | false | 20 | null | ||
free ai art tip: nightcafe plus domoai works great for landscapes | 1 | [removed] | 2025-07-01T14:17:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lp2u3i/free_ai_art_tip_nightcafe_plus_domoai_works_great/ | PutridCartoonist4236 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp2u3i | false | null | t3_1lp2u3i | /r/LocalLLaMA/comments/1lp2u3i/free_ai_art_tip_nightcafe_plus_domoai_works_great/ | false | false | self | 1 | null |
Reasoning models are risky. Anyone else experiencing this? | 56 | I'm building a job application tool and have been testing pretty much every LLM model out there for different parts of the product. One thing that's been driving me crazy: reasoning models seem particularly dangerous for business applications that need to go from A to B in a somewhat rigid way.
I wouldn't call it "det... | 2025-07-01T14:04:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lp2ji0/reasoning_models_are_risky_anyone_else/ | interviuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp2ji0 | false | null | t3_1lp2ji0 | /r/LocalLLaMA/comments/1lp2ji0/reasoning_models_are_risky_anyone_else/ | false | false | self | 56 | null |
I created a script that runs commands in an ephemeral VM, giving tool calls full access to a local directory | 3 | I've been using \`gemini\` and \`claude\` command-line AI tools, and I wanted something that allows my AI full and unrestricted access to a VM.
1. Mounts the local directory so it can read files
2. Spawns a QEMU VM with access to those files
3. Runs a command
4. Returns
node ./scratchpad-cli --verbo... | 2025-07-01T14:04:51 | https://github.com/bigattichouse/scratchpad | bigattichouse | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lp2jhr | false | null | t3_1lp2jhr | /r/LocalLLaMA/comments/1lp2jhr/i_created_a_script_to_allow_running_commands_in/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': '4uXVV_gIKvEbP6L8sZKIJfYeWwmBsgdPdD9fj0WIUdU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4uXVV_gIKvEbP6L8sZKIJfYeWwmBsgdPdD9fj0WIUdU.png?width=108&crop=smart&auto=webp&s=970dcb8325fd5c77d5067431be7efcfc91accb51', 'width': 108}, {'height': 108, 'url': 'h... |
Training and Finetuning Sparse Embedding Models with Sentence Transformers v5 | 31 | Sentence Transformers v5.0 was just released, and it introduced sparse embedding models. These are the kind of search models that are often combined with the "standard" dense embedding models for "hybrid search". On paper, this can help performance a lot. From the release notes:
> A big question is: How do sparse embe... | 2025-07-01T14:02:02 | https://huggingface.co/blog/train-sparse-encoder | -Cubie- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lp2h0e | false | null | t3_1lp2h0e | /r/LocalLLaMA/comments/1lp2h0e/training_and_finetuning_sparse_embedding_models/ | false | false | default | 31 | {'enabled': False, 'images': [{'id': 'Q7oEnpq4LYUPvgpkKMeoddSo-4Wn8UDKMbqnVIBZL8s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Q7oEnpq4LYUPvgpkKMeoddSo-4Wn8UDKMbqnVIBZL8s.png?width=108&crop=smart&auto=webp&s=c6034c885b9c0b0d9df5f31be4cde4f154338168', 'width': 108}, {'height': 121, 'url': 'h... |
What are some good preprocessors for scanned documents in the LocalLLaMA use case? | 15 | I’ve been working on a local document Q\\&A pipeline using LLaMA (mainly 7B and Mixtral variants), and a big bottleneck for me is handling scanned PDFs or image-based documents. Most of what I’m working with isn’t born-digital, stuff like manuals, invoices, policy documents, etc., usually scanned from print.
Before ... | 2025-07-01T13:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lp1nn5/what_are_some_good_preprocessors_for_scanned/ | Abelmageto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp1nn5 | false | null | t3_1lp1nn5 | /r/LocalLLaMA/comments/1lp1nn5/what_are_some_good_preprocessors_for_scanned/ | false | false | self | 15 | null |
We killed cold starts for 12B+ models - with sub-2s loads. | 1 | [removed] | 2025-07-01T13:19:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lp1hir/we_killed_cold_starts_for_12b_models_withsub2s/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp1hir | false | null | t3_1lp1hir | /r/LocalLLaMA/comments/1lp1hir/we_killed_cold_starts_for_12b_models_withsub2s/ | false | false | self | 1 | null |
The Bitter Lesson is coming for Tokenization | 1 | [removed] | 2025-07-01T13:17:55 | https://lucalp.dev/bitter-lesson-tokenization-and-blt/ | lucalp__ | lucalp.dev | 1970-01-01T00:00:00 | 0 | {} | 1lp1ga6 | false | null | t3_1lp1ga6 | /r/LocalLLaMA/comments/1lp1ga6/the_bitter_lesson_is_coming_for_tokenization/ | false | false | default | 1 | null |
We killed cold starts for 70B+ models -with sub-2s loads, no tricks | 1 | [removed] | 2025-07-01T13:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lp1fu6/we_killed_cold_starts_for_70b_models_with_sub2s/ | pmv143 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp1fu6 | false | null | t3_1lp1fu6 | /r/LocalLLaMA/comments/1lp1fu6/we_killed_cold_starts_for_70b_models_with_sub2s/ | false | false | self | 1 | null |
Resources to learn about samplers? | 4 | Could you share how to learn more about samplers?
Anything is fine: blogs, articles, videos, etc. | 2025-07-01T12:43:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lp0o3i/resources_to_learn_about_samplers/ | Black-Mack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp0o3i | false | null | t3_1lp0o3i | /r/LocalLLaMA/comments/1lp0o3i/resources_to_learn_about_samplers/ | false | false | self | 4 | null |
Best open source Arabic TTS | 9 | Hello, I’ve been trying to find the best TTS options to fine-tune for Arabic, and I’ve kind of hit a wall with Fish Audio after their release of the new S1 model, as they’ve removed the fine-tuning code for older models like v1.5.
I tried Coqui’s XTTS fork by Idiap:
https://github.com/idiap/coqui-ai-TTS
And got good resu... | 2025-07-01T12:36:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lp0j7f/best_open_source_arabic_tts/ | Spiritual_Button827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp0j7f | false | null | t3_1lp0j7f | /r/LocalLLaMA/comments/1lp0j7f/best_open_source_arabic_tts/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'km1HNu57ibo94GXsarATp0f0L1ZdE91JlLq0nspIi8s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/km1HNu57ibo94GXsarATp0f0L1ZdE91JlLq0nspIi8s.png?width=108&crop=smart&auto=webp&s=de4da1a33da4caf07a3fecdd618d70629680101e', 'width': 108}, {'height': 108, 'url': 'h... |
Deepseek R1 at 6,5 tk/s on an Nvidia Tesla P40 | 57 | I figured I'd post my final setup since many people asked about the P40 and assumed you couldn't do much with it (but you can!).
numactl --cpunodebind=0 -- ./ik_llama.cpp/build/bin/llama-cli \
--numa numactl \
--model models/unsloth/DeepSeek-R1-0528-GGUF/UD-Q2_K_XL/DeepSeek-R1-0528-UD-Q2_K_XL-0000... | 2025-07-01T12:12:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lp01c7/deepseek_r1_at_65_tks_on_an_nvidia_tesla_p40/ | dc740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lp01c7 | false | null | t3_1lp01c7 | /r/LocalLLaMA/comments/1lp01c7/deepseek_r1_at_65_tks_on_an_nvidia_tesla_p40/ | false | false | self | 57 | null |
Smallest Model For A Trivia Game On Countries? | 2 |
Hey guys,
I am starting to get into using local models, and I wondered which is the smallest model I can use that is knowledgeable about countries and doesn't hallucinate much. I heard Gemma 3n is good, but I don't really need multimodal.
It's for a trivia game where users guess the country and ask questions to try an... | 2025-07-01T11:59:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lozri7/smallest_model_for_a_trivia_game_on_countries/ | redandwhitearsenal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lozri7 | false | null | t3_1lozri7 | /r/LocalLLaMA/comments/1lozri7/smallest_model_for_a_trivia_game_on_countries/ | false | false | self | 2 | null |
Cannot load any GGUF model using tools like LM Studio or Jan AI etc. | 2 | So everything was okay until I upgraded from Windows 10 to 11, and suddenly I couldn’t load any local model through these GUI interfaces. I don’t see any error; it just loads indefinitely, and no VRAM gets occupied either.
I checked with llama.cpp and it worked fine, no errors.
I have 2x RTX 3090 and I am just confused... | 2025-07-01T11:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lozhqc/cannot_load_any_gguf_model_using_tools_like_lm/ | Physical-Citron5153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lozhqc | false | null | t3_1lozhqc | /r/LocalLLaMA/comments/1lozhqc/cannot_load_any_gguf_model_using_tools_like_lm/ | false | false | self | 2 | null |
Dual RX580 2048SP (16GB) llama.cpp(vulkan) | 8 | Hey all! I have a server in my house with dual rx580 (16gb) in it, running llama.cpp via Vulkan. it runs the Qwen-3-32B-q5 (28GB total) at about 4.5 - 4.8 t/s.
Does anyone want me to test any other GGUFs? I could test with one or both of the GPUs.
They work relatively well and are really cheap for a large amount o... | 2025-07-01T11:33:27 | https://www.reddit.com/r/LocalLLaMA/comments/1loza95/dual_rx580_2048sp_16gb_llamacppvulkan/ | IVequalsW | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loza95 | false | null | t3_1loza95 | /r/LocalLLaMA/comments/1loza95/dual_rx580_2048sp_16gb_llamacppvulkan/ | false | false | self | 8 | null |
RTX 6000 Pro software stack | 1 | What software stack is recommended for optimal performance on Ubuntu 24.04 for the RTX 6000 Pro?
I read differing reports about what works, and about various performance issues, because it’s still new.
Most important is to support the OpenUI frontend but also finetuning with unsloth…
Which driver, which packages, …
Thanks! | 2025-07-01T11:16:12 | https://www.reddit.com/r/LocalLLaMA/comments/1loyyzc/rtx_6000_pro_software_stack/ | vhthc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loyyzc | false | null | t3_1loyyzc | /r/LocalLLaMA/comments/1loyyzc/rtx_6000_pro_software_stack/ | false | false | self | 1 | null |
AMD 5700G for experimenting with local LLMs? | 0 | Would an AMD Ryzen 7 5700G with 32, 64 or 128 GB be enough for experimenting with local LLMs? Just to study and practice the technology, without expectations about performance. Thank you. | 2025-07-01T11:12:24 | https://www.reddit.com/r/LocalLLaMA/comments/1loywkt/amd_5700g_for_experimenting_with_local_llms/ | Taikal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loywkt | false | null | t3_1loywkt | /r/LocalLLaMA/comments/1loywkt/amd_5700g_for_experimenting_with_local_llms/ | false | false | self | 0 | null |
Anyone thinks working with other people in an ai chat together with agents can be a cool idea? | 1 | [removed] | 2025-07-01T10:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1loyiy5/anyone_thinks_working_with_other_people_in_an_ai/ | unknownstudentoflife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loyiy5 | false | null | t3_1loyiy5 | /r/LocalLLaMA/comments/1loyiy5/anyone_thinks_working_with_other_people_in_an_ai/ | false | false | self | 1 | null |
Looking for uncensored instruction-tuning datasets for alignment test | 0 | Hey folks,
I'm helping a friend with a college alignment experiment where we're fine-tuning a 7B model and testing how instruction-tuning affects refusal behavior.
We're specifically trying to benchmark how a model behaves when trained on **uncensored, refusal-free datasets** — where responses are direct, permissive,... | 2025-07-01T10:39:48 | https://www.reddit.com/r/LocalLLaMA/comments/1loyc9x/looking_for_uncensored_instructiontuning_datasets/ | Simple_Ad988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loyc9x | false | null | t3_1loyc9x | /r/LocalLLaMA/comments/1loyc9x/looking_for_uncensored_instructiontuning_datasets/ | false | false | self | 0 | null |
What is night forge? | 7 | 2025-07-01T10:12:17 | https://www.reddit.com/r/LocalLLaMA/comments/1loxw8f/what_is_night_forge/ | Professional-Ad-4376 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loxw8f | false | null | t3_1loxw8f | /r/LocalLLaMA/comments/1loxw8f/what_is_night_forge/ | false | false | 7 | null | ||
Fine-tuning with $1000? | 0 | What kind of fine tuning or LoRA project can be done with $1000 in second hand GPUs or cloud compute? | 2025-07-01T09:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/1loxf1b/finetuning_with_1000/ | sumguysr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loxf1b | false | null | t3_1loxf1b | /r/LocalLLaMA/comments/1loxf1b/finetuning_with_1000/ | false | false | self | 0 | null |
Need help finding educational datasets and model suggestions for offline LLM on phone | 2 | Hey folks,
I’m trying to build a local LLM that can work offline on a phone, mainly for educational purposes — like helping students with concepts, solving problems step by step, and answering basic academic questions (school or early college level).
I’m planning to fine-tune a smaller model like Phi-2, Mistral 7B, o... | 2025-07-01T09:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lox9c4/need_help_finding_educational_datasets_and_model/ | Phantomx_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lox9c4 | false | null | t3_1lox9c4 | /r/LocalLLaMA/comments/1lox9c4/need_help_finding_educational_datasets_and_model/ | false | false | self | 2 | null |
Need help finding educational datasets + model suggestions for offline LLM on phone. | 1 | Hey folks,
I’m trying to build a local LLM that can work offline on a phone, mainly for educational purposes — like helping students with concepts, solving problems step by step, and answering basic academic questions (school or early college level).
I’m planning to fine-tune a smaller model like Gemma 3n or Mistral ... | 2025-07-01T09:29:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lox8dh/need_help_finding_educational_datasets_model/ | Phantomx_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lox8dh | false | null | t3_1lox8dh | /r/LocalLLaMA/comments/1lox8dh/need_help_finding_educational_datasets_model/ | false | false | self | 1 | null |
Local models not following instructions | 2 | I have some problems applying local LLMs to structured workflows.
I use 8B to 24B models on my 16GB 4070 Ti Super.
I have no problems chatting or doing web RAG with my models, either using Open WebUI or AnythingLLM or custom solutions in Python or Node.js. What I am unable to do is some more structured work... | 2025-07-01T09:16:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lox1f7/local_models_not_following_instructions/ | thecookingsenpai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lox1f7 | false | null | t3_1lox1f7 | /r/LocalLLaMA/comments/1lox1f7/local_models_not_following_instructions/ | false | false | self | 2 | null |
Make a video | 0 | Parrot is running and man also | 2025-07-01T08:53:58 | LooseTraffic1100 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lowpcm | false | null | t3_1lowpcm | /r/LocalLLaMA/comments/1lowpcm/make_a_video/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'zot8xy5088af1', 'resolutions': [{'height': 187, 'url': 'https://preview.redd.it/zot8xy5088af1.png?width=108&crop=smart&auto=webp&s=f9afee39c6a88694a78ea462d8361bbd341bf0fe', 'width': 108}, {'height': 375, 'url': 'https://preview.redd.it/zot8xy5088af1.png?width=216&crop=smart&auto=we... | |
AI assistant automates the boring 70% of my job. And I'm thrilled about it. | 1 | [removed] | 2025-07-01T08:34:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lowf7i/ai_assistant_automates_the_boring_70_of_my_job/ | IssueCute5719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lowf7i | false | null | t3_1lowf7i | /r/LocalLLaMA/comments/1lowf7i/ai_assistant_automates_the_boring_70_of_my_job/ | false | false | self | 1 | null |
My AI assistant automates the boring 70% of my job. Here's what that looks like. | 1 | [removed] | 2025-07-01T08:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lowbx1/my_ai_assistant_automates_the_boring_70_of_my_job/ | IssueCute5719 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lowbx1 | false | null | t3_1lowbx1 | /r/LocalLLaMA/comments/1lowbx1/my_ai_assistant_automates_the_boring_70_of_my_job/ | false | false | self | 1 | null |
。。 | 1 | [removed] | 2025-07-01T08:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/1low9la/_/ | Ok_Pay1865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1low9la | false | null | t3_1low9la | /r/LocalLLaMA/comments/1low9la/_/ | false | false | self | 1 | null |
My AI assistant automates the boring 70% of my job. Here's what that looks like. | 1 | [removed] | 2025-07-01T08:22:46 | https://www.reddit.com/r/LocalLLaMA/comments/1low9d5/my_ai_assistant_automates_the_boring_70_of_my_job/ | Ok_Pay1865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1low9d5 | false | null | t3_1low9d5 | /r/LocalLLaMA/comments/1low9d5/my_ai_assistant_automates_the_boring_70_of_my_job/ | false | false | self | 1 | null |
My AI assistant automates the boring 70% of my job. Here's what that looks like. | 1 | [removed] | 2025-07-01T08:04:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lovzuu/my_ai_assistant_automates_the_boring_70_of_my_job/ | Ok_Pay1865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovzuu | false | null | t3_1lovzuu | /r/LocalLLaMA/comments/1lovzuu/my_ai_assistant_automates_the_boring_70_of_my_job/ | false | false | self | 1 | null |
Current state of Intel A770 16GB GPU for Inference? | 30 | Hi all,
I could only find old posts regarding how the Intel A770 fares with LLMs, specifically people notice the high idle power consumption and difficult setup depending on what framework you use. At least a year ago, it was supposed to be a pain to use with Ollama.
Here in Germany, it is by far the cheapest 16GB ca... | 2025-07-01T07:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lovuxp/current_state_of_intel_a770_16gb_gpu_for_inference/ | Karim_acing_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovuxp | false | null | t3_1lovuxp | /r/LocalLLaMA/comments/1lovuxp/current_state_of_intel_a770_16gb_gpu_for_inference/ | false | false | self | 30 | null |
My AI Assistant Killed 70% of My Job. And I Love It. | 1 | [removed] | 2025-07-01T07:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lovtoc/my_ai_assistant_killed_70_of_my_job_and_i_love_it/ | Ok_Pay1865 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovtoc | false | null | t3_1lovtoc | /r/LocalLLaMA/comments/1lovtoc/my_ai_assistant_killed_70_of_my_job_and_i_love_it/ | false | false | self | 1 | null |
My AI Assistant Killed 70% of My Job. And I Love It. | 1 | [removed] | 2025-07-01T07:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lovrwj/my_ai_assistant_killed_70_of_my_job_and_i_love_it/ | Known_Copy_7715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovrwj | false | null | t3_1lovrwj | /r/LocalLLaMA/comments/1lovrwj/my_ai_assistant_killed_70_of_my_job_and_i_love_it/ | false | false | self | 1 | null |
Best Local Model for Vision? | 4 | Maybe Gemma3 is the best model for vision tasks? Each image uses only 256 tokens. In my own hardware tests, it was the only model capable of processing 60 images simultaneously. | 2025-07-01T07:46:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lovqjc/best_local_model_for_vision/ | xukecheng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovqjc | false | null | t3_1lovqjc | /r/LocalLLaMA/comments/1lovqjc/best_local_model_for_vision/ | false | false | self | 4 | null |
[Open Source] Private AI assistant extension - thoughts on local vs cloud approaches? | 1 | [removed] | 2025-07-01T07:40:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lovnlp/open_source_private_ai_assistant_extension/ | xukecheng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovnlp | false | null | t3_1lovnlp | /r/LocalLLaMA/comments/1lovnlp/open_source_private_ai_assistant_extension/ | false | false | self | 1 | null |
Built an open-source browser extension for truly private AI assistance | 1 | [removed] | 2025-07-01T07:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lovm65/built_an_opensource_browser_extension_for_truly/ | xukecheng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovm65 | false | null | t3_1lovm65 | /r/LocalLLaMA/comments/1lovm65/built_an_opensource_browser_extension_for_truly/ | false | false | self | 1 | null |
Made a thing: Private AI assistant that runs entirely in your browser with Ollama | 1 | [removed] | 2025-07-01T07:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lovk1p/made_a_thing_private_ai_assistant_that_runs/ | xukecheng | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lovk1p | false | null | t3_1lovk1p | /r/LocalLLaMA/comments/1lovk1p/made_a_thing_private_ai_assistant_that_runs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VwVcv0L93tb_mE7QmFcvm9-MMS2SImgq7hI_3ga5PEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VwVcv0L93tb_mE7QmFcvm9-MMS2SImgq7hI_3ga5PEY.png?width=108&crop=smart&auto=webp&s=c7c62d953d243f4e7d4fccc4960575e4fe769301', 'width': 108}, {'height': 108, 'url': 'h... |
$5k budget for Local AI | 4 | Just trying to get some ideas from actual people (I already went the AI route) for what to get...
I have a Gigabyte M32 AR3 with a 7xx2 64-core CPU, the requisite RAM, and a PSU.
The above budget is strictly for GPUs and can be up to $5500 or more if the best suggestion is to just wait.
Use cases mostly involve fine tuning and... | 2025-07-01T06:28:21 | https://www.reddit.com/r/LocalLLaMA/comments/1louk6a/5k_budget_for_local_ai/ | Unlikely_Track_5154 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1louk6a | false | null | t3_1louk6a | /r/LocalLLaMA/comments/1louk6a/5k_budget_for_local_ai/ | false | false | self | 4 | null |
SPARKLE intros new Arc Pro B60 cards: one is a dual-GPU workstation card with 48GB of VRAM | 10 | 2025-07-01T05:52:37 | https://www.tweaktown.com/news/106121/sparkle-intros-new-arc-pro-b60-cards-one-is-dual-gpu-workstation-card-with-48gb-of-vram/index.html | fallingdowndizzyvr | tweaktown.com | 1970-01-01T00:00:00 | 0 | {} | 1lotzy4 | false | null | t3_1lotzy4 | /r/LocalLLaMA/comments/1lotzy4/video_cards_gpus_sparkle_intros_new_arc_pro_b60/ | false | false | default | 10 | null |
KrunchWrapper - an LLM compression proxy (beta) | 67 | With context limits being the way they are, I wanted to experiment with creating a standalone middleman API server that "compresses" requests sent to models as a proof of concept. I've seen other methods employed that use a separate model for compression, but KrunchWrapper completely avoids the need for running a model... | 2025-07-01T05:51:25 | LA_rent_Aficionado | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lotza5 | false | null | t3_1lotza5 | /r/LocalLLaMA/comments/1lotza5/krunchwrapper_a_llm_compression_proxy_beta/ | false | false |  | 67 | {'enabled': True, 'images': [{'id': 'MZzUXdIWeCZllPxSBtOu6BJsl9m2ph14Rq4KIP2wMEs', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/c4bjroisb7af1.png?width=108&crop=smart&auto=webp&s=d5ff9190d564962e274b1c6665d855eb4c5d4e4a', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/c4bjroisb7af1.png...
KrunchWrapper - a LLM compression proxy (beta) | 1 | [removed] | 2025-07-01T05:48:08 | LA_rent_Aficionado | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lotxhb | false | null | t3_1lotxhb | /r/LocalLLaMA/comments/1lotxhb/krunchwrapper_a_llm_compression_proxy_beta/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'NPckpQpJdBoWeLwXDCFLDNNLrvQ_v-OB7uGEM5X7wEQ', 'resolutions': [{'height': 89, 'url': 'https://preview.redd.it/5zlsqfcn87af1.png?width=108&crop=smart&auto=webp&s=a6e06b758866dac748ed81ccd6eb5fd711218547', 'width': 108}, {'height': 179, 'url': 'https://preview.redd.it/5zlsqfcn87af1.png... | ||
Sparkle announces 24GB Intel B60 and 48GB Dual Intel B60 | 2 | Here's another GPU maker jumping into the fray. Hopefully they will be cheaper than Maxsun since Sparkle actually operates in the United States.
Since multiple manufacturers are offering them, it seems the 48GB dual is not a one-off. Since the specs are so similar, it would be a great coincidence if separate companies... | 2025-07-01T05:10:01 | https://www.sparkle.com.tw/files/20250618145705270.pdf | fallingdowndizzyvr | sparkle.com.tw | 1970-01-01T00:00:00 | 0 | {} | 1lotbh9 | false | null | t3_1lotbh9 | /r/LocalLLaMA/comments/1lotbh9/sparkle_announces_24gb_intel_b60_and_48gb_dual/ | false | false | default | 2 | null |
META’S AI AVENGERS ASSEMBLE, ZUCK’S $29B SUPERINTELLIGENCE GAMBIT! | 1 | 2025-07-01T04:53:49 | https://algogist.com/meta-ai-talent-war-zuckerbergs-avengers-style-race-for-superintelligence/ | enough_jainil | algogist.com | 1970-01-01T00:00:00 | 0 | {} | 1lot1kg | false | null | t3_1lot1kg | /r/LocalLLaMA/comments/1lot1kg/metas_ai_avengers_assemble_zucks_29b/ | false | false | default | 1 | null | |
New to the scene. Yesterday, got 4 t/s on R1 671b q4. Today, I'm getting about 0.15 t/s... What did I break lol | 37 | 5975wx, 512gb DDR4 3200, dual 3090s.
Ollama + OpenWebUI. Running on LMDE.
Idk what went wrong now but I'm struggling to get it back to 4 t/s... I can work with 4 t/s, but 0.15 t/s is just terrible.
Any ideas? Happy to provide information upon request.
Total noob here, just built this a few days ago and very little t... | 2025-07-01T04:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1loswvr/new_to_the_scene_yesterday_got_4_ts_on_r1_671b_q4/ | sourpatchgrownadults | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loswvr | false | null | t3_1loswvr | /r/LocalLLaMA/comments/1loswvr/new_to_the_scene_yesterday_got_4_ts_on_r1_671b_q4/ | false | false | self | 37 | null |
Intel GPU vLLM Docker Compose Bootstrap with Phi-lthy4 on A770 | 5 | Hey everyone,
This weekend I started tinkering with vLLM after a discussion we had over at the [OpenArc](https://github.com/SearchSavior/OpenArc) [discord server](https://discord.gg/Bzz9hax9Jq) last week about getting better performance.
Between vLLM and IPEX documentation they make it easy enough to get things rolli... | 2025-07-01T04:25:16 | https://www.reddit.com/r/LocalLLaMA/comments/1losjpq/intel_gpu_vllm_docker_compose_bootstrap_with/ | Echo9Zulu- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1losjpq | false | null | t3_1losjpq | /r/LocalLLaMA/comments/1losjpq/intel_gpu_vllm_docker_compose_bootstrap_with/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'NPT2q1kryr57y9RMrGWEOV2MhPhXGETFTmAQPSEtLz4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NPT2q1kryr57y9RMrGWEOV2MhPhXGETFTmAQPSEtLz4.png?width=108&crop=smart&auto=webp&s=bd0eae168a3078f08958617ae694e72aebef94ef', 'width': 108}, {'height': 108, 'url': 'h... |
Are the rumours true about Apple abandoning MLX? | 131 | Some folks on X are saying so. | 2025-07-01T03:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lorbc5/is_the_rumours_true_about_apple_abandoning_mlx/ | No_Conversation9561 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lorbc5 | false | null | t3_1lorbc5 | /r/LocalLLaMA/comments/1lorbc5/is_the_rumours_true_about_apple_abandoning_mlx/ | false | false | self | 131 | null |
Free 2-month Generative AI workshop - Beyond Hello World | 0 | Hi everyone,
After ChatGPT took off, I noticed that many of us became excited about AI, but many tutorials stopped at “Hello World” or weather app clones. I wanted to offer something deeper and more practical.
Starting July 12 to September 6, I’m hosting a free 8-week Generative AI seminar series, every Saturday at 8... | 2025-07-01T02:56:23 | https://www.reddit.com/r/LocalLLaMA/comments/1loqwl5/free_2month_generative_ai_workshop_beyond_hello/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loqwl5 | false | null | t3_1loqwl5 | /r/LocalLLaMA/comments/1loqwl5/free_2month_generative_ai_workshop_beyond_hello/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5j0HPV-bGOUXeyp-p_W6NDahs2OKhIy_4rgSEZQf6-k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5j0HPV-bGOUXeyp-p_W6NDahs2OKhIy_4rgSEZQf6-k.png?width=108&crop=smart&auto=webp&s=363d4871f001788017acee0319dbbada50afc670', 'width': 108}, {'height': 108, 'url': 'h... |
Looking for a Dark GPT-Like Model Without Filters (For Personal Use) | 0 | Hi there,
Do you know how Dark GPT was programmed? Also, is there a similar model without ethical restrictions or filters? I’m looking for something just for personal use. | 2025-07-01T02:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1loqcme/looking_for_a_dark_gptlike_model_without_filters/ | SingleBeautiful8666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loqcme | false | null | t3_1loqcme | /r/LocalLLaMA/comments/1loqcme/looking_for_a_dark_gptlike_model_without_filters/ | false | false | nsfw | 0 | null |
Locally hosted Cursor/Windsurf possible? | 3 | Currently, Cursor- or Windsurf-like tools depend on Anthropic's Claude models to deliver the best agentic experience, where you provide a set of instructions and can get your software application ready.
Given that there is so much dependency on closed Claude models, do we have any alternatives to achieve the same:
1.... | 2025-07-01T02:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/1loq9e1/locally_hosted_cursorwindurf_possible/ | prashantspats | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loq9e1 | false | null | t3_1loq9e1 | /r/LocalLLaMA/comments/1loq9e1/locally_hosted_cursorwindurf_possible/ | false | false | self | 3 | null |
introducing cocoindex - super simple etl to prepare data for ai agents, with dynamic index | 1 | [removed] | 2025-07-01T01:56:13 | https://www.reddit.com/r/LocalLLaMA/comments/1loppq2/introducing_cocoindex_super_simple_etl_to_prepare/ | Whole-Assignment6240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loppq2 | false | null | t3_1loppq2 | /r/LocalLLaMA/comments/1loppq2/introducing_cocoindex_super_simple_etl_to_prepare/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'btMou5RsIFb10mSlvZ_QTLlacWwlWWBa5M153CLS508', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/btMou5RsIFb10mSlvZ_QTLlacWwlWWBa5M153CLS508.png?width=108&crop=smart&auto=webp&s=93d72a63327989f6ee517a961861b6c9ea0ce07a', 'width': 108}, {'height': 108, 'url': 'h... |
Building a Local LLM With Zero Ethical or Safety Filters | 0 | Hey, anyone here running a local LLM with no filters? How’d you do it? | 2025-07-01T01:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lopoio/building_a_local_llm_with_zero_ethical_or_safety/ | SingleBeautiful8666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lopoio | false | null | t3_1lopoio | /r/LocalLLaMA/comments/1lopoio/building_a_local_llm_with_zero_ethical_or_safety/ | false | false | self | 0 | null |
How are chat completion messages handled in the backend logic of API services like vLLM? | 1 | Sorry for the newbie question. I wonder whether there would be any difference between sending multiple user messages (for context, question, tool output, etc.) and concatenating them into one user message sent to the chat/completions endpoint. I do not have a good enough test set to check; please share if you know this has been studied... | 2025-07-01T01:50:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lopls4/how_are_chat_completion_messages_handled_in/ | woodenleaf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lopls4 | false | null | t3_1lopls4 | /r/LocalLLaMA/comments/1lopls4/how_are_chat_completion_messages_handled_in/ | false | false | self | 1 | null |
Instead of summarizing, cut filler words | 1 | [removed] | 2025-07-01T01:34:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lopa25/instead_of_summarizing_cut_filler_words/ | AlphaHusk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lopa25 | false | null | t3_1lopa25 | /r/LocalLLaMA/comments/1lopa25/instead_of_summarizing_cut_filler_words/ | false | false | self | 1 | null |
A Llama near the top for every size except small | 10 | Interesting pattern I noticed for non-reasoning models (I am in the process of picking one to fine-tune): there is a Llama at/near the top of the intelligence index for *every* model size class *except* small models! Also interesting: the small model class is the *most* crowded model class by far.
| 2025-07-01T01:32:57 | entsnack | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lop94b | false | null | t3_1lop94b | /r/LocalLLaMA/comments/1lop94b/a_llama_near_the_top_for_every_size_except_small/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': 'o941j62s16af1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/o941j62s16af1.png?width=108&crop=smart&auto=webp&s=6072914d4ec2ac875329b538e9d177acdb7d6437', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/o941j62s16af1.png?width=216&crop=smart&auto=web... |
A Llama near the top for every size except small! | 1 | Interesting pattern I noticed for non-reasoning models (I am in the process of picking one to fine-tune): there is a Llama at/near the top of the intelligence index for *every* model size class *except* small models! Also interesting: the small model class is the *most* crowded model class by far.
[Large](https://prev... | 2025-07-01T01:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lop88n/a_llama_near_the_top_for_every_size_except_small/ | entsnack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lop88n | false | null | t3_1lop88n | /r/LocalLLaMA/comments/1lop88n/a_llama_near_the_top_for_every_size_except_small/ | false | false | 1 | null | |
Hello | 0 | Hi,
I'm really interested in learning how you're building open-source AI models, especially in areas like physics and universe simulation.
I want to understand how these models work, how to start building or testing them, and how I can get involved — even if I'm still learning.
I'm also looking to connect with people w... | 2025-07-01T01:26:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lop488/hello/ | Wonderful-Gold-2868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lop488 | false | null | t3_1lop488 | /r/LocalLLaMA/comments/1lop488/hello/ | false | false | self | 0 | null |
Struggling with vLLM. The instructions make it sound so simple to run, but it’s like my Kryptonite. I give up. | 43 | I’m normally the guy they call in to fix the IT stuff nobody else can fix. I’ll laser-focus on whatever it is and figure it out probably 99% of the time. I’ve been in IT for over 28 years.
I’ve been messing with AI stuff for nearly 2 years now, and I’m working on my Master’s in AI. All that being said, I’ve never encou... | 2025-07-01T00:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1loo2u3/struggling_with_vllm_the_instructions_make_it/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1loo2u3 | false | null | t3_1loo2u3 | /r/LocalLLaMA/comments/1loo2u3/struggling_with_vllm_the_instructions_make_it/ | false | false | self | 43 | null |
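For anyone stuck the same way: the least fragile way to sanity-check a vLLM install is offline batch inference in Python, before involving the OpenAI-compatible server (started with `vllm serve <model>` in recent versions). A minimal sketch, assuming a small model that fits in VRAM:

```python
from vllm import LLM, SamplingParams

# Small model so the download and VRAM requirements stay trivial for a smoke test
llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct", max_model_len=2048)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain KV caching in one sentence."], params)
print(outputs[0].outputs[0].text)
```

If this works but the server does not, the problem is in the serving flags rather than the install itself.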
On-demand GPU cluster - providing free credits | 2 | We noticed that it was difficult to get instances with more than 8 GPUs.
We built a service that pools GPUs from different providers and offers a simple way to spin up on-demand GPU clusters.
We are still in beta, so we're looking for early feedback - reach out to get free credit... | 2025-06-30T23:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lomyut/ondemand_gpu_cluster_providing_free_credits/ | DrIroh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lomyut | false | null | t3_1lomyut | /r/LocalLLaMA/comments/1lomyut/ondemand_gpu_cluster_providing_free_credits/ | false | false | self | 2 | null |
5060ti 16gb or 9060xt 16gb for small llm server | 1 | I have an i7-11700K with 128GB of DDR4 RAM, and I want to add a GPU to speed up my tokens-per-second rate. What are your thoughts on the 5060 Ti 16GB or 9060 XT 16GB? They’re both about $400 where I live, which I feel is reasonable for a modern 16GB card. Does anyone have either of these, and how is it?
I’m going to be runn... | 2025-06-30T23:40:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lomwqu/5060ti_16gb_or_9060xt_16gb_for_small_llm_server/ | techmaverick_x | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lomwqu | false | null | t3_1lomwqu | /r/LocalLLaMA/comments/1lomwqu/5060ti_16gb_or_9060xt_16gb_for_small_llm_server/ | false | false | self | 1 | null |
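A rough way to compare the two cards for this use case: single-user token generation is mostly memory-bandwidth-bound, so tokens/s is capped near VRAM bandwidth divided by the bytes of weights read per token. A back-of-envelope sketch; the bandwidth figures are approximate published specs, so verify them before buying:

```python
# Crude decode-speed ceiling: tok/s ≈ VRAM bandwidth / model size in VRAM
cards_gbps = {"RTX 5060 Ti 16GB": 448, "RX 9060 XT 16GB": 322}  # approximate specs
model_gb = 8.0  # e.g. a ~14B dense model quantized to roughly Q4

for name, bw in cards_gbps.items():
    print(f"{name}: ~{bw / model_gb:.0f} tok/s upper bound")
```

Real throughput lands well below the ceiling, but the ratio between the two cards tends to hold.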
I've built a spec for LLM-to-LLM comms by combining semantic patterns with structured syntax | 14 | Firstly, total disclaimer. About 4 months ago, I knew very little about LLMs, so I am one of those people who went down the rabbit hole and started chatting with AI. But, I'm a chap who does a lot of pattern recognition in the way I work (I can write music for orchestras without reading it) so just sort of tugged on th... | 2025-06-30T23:31:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lompd5/ive_built_a_spec_for_llmtollm_comms_by_combining/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lompd5 | false | null | t3_1lompd5 | /r/LocalLLaMA/comments/1lompd5/ive_built_a_spec_for_llmtollm_comms_by_combining/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '5mRzok8P7sadKGuJL5w4dxeiaZgoDJbskEgknNv52vo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5mRzok8P7sadKGuJL5w4dxeiaZgoDJbskEgknNv52vo.png?width=108&crop=smart&auto=webp&s=b51ae35d10f2d1ad57bc1bc156d2cc561fdeca52', 'width': 108}, {'height': 108, 'url': 'h... |
Off the shelf uncensored LLM | 0 | Hey, is there a SaaS provider that allows me to use an uncensored LLM via API? I can’t find any, and all the options seem to be locally hosted
Looking for the least amount of code required, please
Thank you | 2025-06-30T23:24:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lomke8/off_the_shelf_uncensored_llm/ | theycallmebond007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lomke8 | false | null | t3_1lomke8 | /r/LocalLLaMA/comments/1lomke8/off_the_shelf_uncensored_llm/ | false | false | self | 0 | null |
[Tool] Run GPT-style models from a USB stick – no install, no internet, no GPU – meet Local LLM Notepad 🚀 | 27 | # TL;DR
*Copy one portable* `.exe` *+ a* `.gguf` *model to a flash drive → double-click on any Windows PC → start chatting offline in seconds.*
GitHub ▶︎ [**https://github.com/runzhouye/Local\_LLM\_Notepad**](https://github.com/runzhouye/Local_LLM_Notepad)
https://i.redd.it/lz6e4zmpd5af1.gif
# 30-second Quick-Start... | 2025-06-30T23:22:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lomilz/tool_run_gptstyle_models_from_a_usb_stick_no/ | Awkward-Dare-1127 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lomilz | false | null | t3_1lomilz | /r/LocalLLaMA/comments/1lomilz/tool_run_gptstyle_models_from_a_usb_stick_no/ | false | false | 27 | {'enabled': False, 'images': [{'id': '4LxPZe7g9FzOelbXz3aOGVDeWd4mgBA3OWlLsXGi_Bc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4LxPZe7g9FzOelbXz3aOGVDeWd4mgBA3OWlLsXGi_Bc.png?width=108&crop=smart&auto=webp&s=54411719b87691095fa24ce69ad2eb0fc28c1a62', 'width': 108}, {'height': 108, 'url': 'h... | |
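The post bundles everything into one portable exe, but the core idea is small enough to sketch: load a GGUF sitting next to the script and chat with it, fully offline. A rough Python equivalent using `llama-cpp-python` (not the tool's actual code; `model.gguf` is a placeholder path):

```python
from llama_cpp import Llama

# A relative path keeps the model portable alongside the script, USB-stick style
llm = Llama(model_path="./model.gguf", n_ctx=4096, verbose=False)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello from a flash drive!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```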
Looking to Upgrade My CPU-Only LLM Server | 2 | Hello,
I'm looking to upgrade my LLM setup / replace my server. I'm currently running CPU-only with an i9-12900H, 64GB DDR4 RAM, and a 1TB NVMe.
When I built this server, I quickly ran into a bottleneck due to RAM bandwidth limitations — the CPU and motherboard only support dual channel, which became a major constrai... | 2025-06-30T23:04:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lom41a/looking_to_upgrade_my_cpuonly_llm_server/ | canterlotfr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lom41a | false | null | t3_1lom41a | /r/LocalLLaMA/comments/1lom41a/looking_to_upgrade_my_cpuonly_llm_server/ | false | false | self | 2 | null |
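The dual-channel ceiling described above is easy to quantify: theoretical peak bandwidth is channels × bus width × transfer rate, and CPU-only decode speed scales roughly with that number. A quick sketch of the arithmetic:

```python
# Peak bandwidth in GB/s = channels * bus width in bytes * MT/s / 1000
def peak_gbps(channels: int, mt_s: int, bus_bytes: int = 8) -> float:
    return channels * bus_bytes * mt_s / 1000

print(peak_gbps(2, 3200))  # dual-channel DDR4-3200 -> 51.2 GB/s
print(peak_gbps(8, 4800))  # 8-channel DDR5 server  -> 307.2 GB/s
```

That roughly 6x gap is why server platforms (or a GPU) are the usual escape from this bottleneck.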
With the OpenAI employees that Meta hired, do you think this will be positive for local models? | 124 | I mean, these hires were apparently important enough to have developed OpenAI's most powerful models. Hopefully the next Llama models will be much better than Llama 4... and raise the bar like Llama did before. | 2025-06-30T23:02:37 | LarDark | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lom2r9 | false | null | t3_1lom2r9 | /r/LocalLLaMA/comments/1lom2r9/with_the_openai_employees_that_meta_hired_do_you/ | false | false | default | 124 | {'enabled': True, 'images': [{'id': 'ymsyhfb2b5af1', 'resolutions': [{'height': 116, 'url': 'https://preview.redd.it/ymsyhfb2b5af1.png?width=108&crop=smart&auto=webp&s=1760e83ed6919a5b883be98424b11829df4fed1c', 'width': 108}, {'height': 232, 'url': 'https://preview.redd.it/ymsyhfb2b5af1.png?width=216&crop=smart&auto=we... |
MiniMax M1 Overthinking | 0 | It thought for 164s, while Claude Sonnet 4 thought for 3s. Both were correct. | 2025-06-30T22:26:31 | LostRespectFeds | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lol8o4 | false | null | t3_1lol8o4 | /r/LocalLLaMA/comments/1lol8o4/minimax_m1_overthinking/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'QVsiUHGmn-aAnMQKDXtOUUqdqPTHwBVVZik6u3PQgy4', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/4sj1xrnj45af1.png?width=108&crop=smart&auto=webp&s=399a2457c1efc372439f26b348372069d69c4ab9', 'width': 108}, {'height': 240, 'url': 'https://preview.redd.it/4sj1xrnj45af1.pn... |
[Dataset] 4,000 hours of full-body, in-person, human face-to-face interaction videos | 64 | Dataset on Huggingface: [https://huggingface.co/datasets/facebook/seamless-interaction](https://huggingface.co/datasets/facebook/seamless-interaction) | 2025-06-30T22:20:44 | https://www.aidemos.meta.com/seamless_interaction_dataset | entsnack | aidemos.meta.com | 1970-01-01T00:00:00 | 0 | {} | 1lol3na | false | null | t3_1lol3na | /r/LocalLLaMA/comments/1lol3na/dataset_4000_hours_of_fullbody_inperson_human/ | false | false | default | 64 | null |