column      type            range
title       string          length 1 – 300
score       int64           0 – 8.54k
selftext    string          length 0 – 41.5k
created     timestamp[ns]   2023-04-01 04:30:41 – 2026-03-04 02:14:14
url         string          length 0 – 878
author      string          length 3 – 20
domain      string          length 0 – 82
edited      timestamp[ns]   1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded      int64           0 – 2
gildings    string          7 values
id          string          length 7
locked      bool            2 classes
media       string          length 646 – 1.8k
name        string          length 10
permalink   string          length 33 – 82
spoiler     bool            2 classes
stickied    bool            2 classes
thumbnail   string          length 4 – 213
ups         int64           0 – 8.54k
preview     string          length 301 – 5.01k
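The column list above doubles as a row schema. As a minimal sketch of working with rows of this shape in plain Python (the two sample rows echo entries later in the dump, abbreviated to the fields actually used here):

```python
# Two sample rows shaped like the schema above, abbreviated to a few fields.
rows = [
    {"title": "Is it simply about upgrading?", "score": 7,
     "author": "outofbandii", "domain": "self.LocalLLaMA"},
    {"title": "Mamba-2 support in llama.cpp landed", "score": 121,
     "author": "pkmxtw", "domain": "github.com"},
]

# e.g. keep only self-posts with a positive score, highest-scored first
self_posts = sorted(
    (r for r in rows if r["domain"] == "self.LocalLLaMA" and r["score"] > 0),
    key=lambda r: r["score"],
    reverse=True,
)
print([r["title"] for r in self_posts])
```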
Is it simply about upgrading?
7
I'm a total noob at all this. I was having really good results with Gemini 2.5 Pro, o4-mini, and Claude 4.0 Sonnet in VS Code. I decided to try a few local models on my NVIDIA 8GB RTX 2060 Super (CPU: AMD Ryzen 9 3900 12-core, RAM: 64GB). I tested the following models with Roo/Ollama: 1) gemma3n:e2b-it-q4_K_M 2) hf.co...
2025-07-02T22:41:03
https://www.reddit.com/r/LocalLLaMA/comments/1lq9lkd/is_it_simply_about_upgrading/
outofbandii
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq9lkd
false
null
t3_1lq9lkd
/r/LocalLLaMA/comments/1lq9lkd/is_it_simply_about_upgrading/
false
false
self
7
null
Free AI for all.
0
The standalone and portable version is available. Works with gguf. Enjoy.
2025-07-02T22:37:54
https://v.redd.it/whjbj3y1gjaf1
wallbergai
v.redd.it
1970-01-01T00:00:00
0
{}
1lq9j0x
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/whjbj3y1gjaf1/DASHPlaylist.mpd?a=1754087891%2CYTRjNDNkNDgxNDBiMjdmMDExNTlkMmU0NjQxNGNiOWIyNGFiOWFlNjUwZTM0ODdiYTBjOTU3MTliZjg2OTIwYg%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/whjbj3y1gjaf1/DASH_480.mp4?source=fallback', 'ha...
t3_1lq9j0x
/r/LocalLLaMA/comments/1lq9j0x/free_ai_for_all/
false
false
https://external-preview…553e2e88683fa93e
0
{'enabled': False, 'images': [{'id': 'OWpzZmEyeTFnamFmMZw7itM17cvYQif7LzHR4CzZ0mEOzHnFEM5qkqXQQPtr', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/OWpzZmEyeTFnamFmMZw7itM17cvYQif7LzHR4CzZ0mEOzHnFEM5qkqXQQPtr.png?width=108&crop=smart&format=pjpg&auto=webp&s=78872b43e03735532a89dbc954130e45d2197...
Speculative Decoding and Quantization ... I'm probably not going anywhere near what you think...
0
...So this idea I had, I never could quite execute on; I thought I'd share and let people pick it apart, and/or take it to the next level. Here is how I got there. I have it in my mind that Llama 3.3 70b 8-bit should be close to Llama 4 Maverick 4-bit (at ~243 GB). Llama 3.3 70b 8-bit is ~75 GB and Llama 3.3 70b 4 bi...
2025-07-02T22:32:16
https://www.reddit.com/r/LocalLLaMA/comments/1lq9eg5/speculative_decoding_and_quantization_im_probably/
silenceimpaired
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq9eg5
false
null
t3_1lq9eg5
/r/LocalLLaMA/comments/1lq9eg5/speculative_decoding_and_quantization_im_probably/
false
false
self
0
null
STT dictation and conversational sparring partner?
1
Has anyone been able to set up the following solution: 1. Speech is transcribed via a local model (Whisper or other) 2. Grammar, spelling, and rephrasing are applied, respecting a system prompt 3. Output goes to a markdown file or directly into an interface / web UI 4. Optional: speech commands such as "Scratch that last sentence...
2025-07-02T22:13:42
https://www.reddit.com/r/LocalLLaMA/comments/1lq8z04/stt_dictation_and_conversational_sparring_partner/
lodott1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq8z04
false
null
t3_1lq8z04
/r/LocalLLaMA/comments/1lq8z04/stt_dictation_and_conversational_sparring_partner/
false
false
self
1
null
Ubuntu 24.04: observing that nvidia-535 drivers run 20 tokens/sec faster than nvidia-570 drivers with no other changes in my vLLM setup
85
Running vLLM 0.9.1 with 4x A6000s in tensor-parallel config with the CognitiveComputations 4-bit AWQ quant of Qwen3 235B A22B. I was running 535 and did an OS update, so I went with 570. I immediately saw inference had dropped from 56 tokens/sec to 35 tokens/sec. Puzzled, I messed around for a few days, tweaked all sorts...
2025-07-02T21:52:28
https://www.reddit.com/r/LocalLLaMA/comments/1lq8gjv/ubuntu_2404_observing_that_nvidia535_drivers_run/
__JockY__
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq8gjv
false
null
t3_1lq8gjv
/r/LocalLLaMA/comments/1lq8gjv/ubuntu_2404_observing_that_nvidia535_drivers_run/
false
false
self
85
null
the result of all the polls i’ve been running here
1
I’ve been sharing polls and asking questions just to figure out what people actually need. I’ve consulted for AI infra companies and startups. I also built and launched my own AI apps using those infras. But they failed me. Local tools were painful. Hosted ones were worse. Everything felt disconnected and fragile. So...
2025-07-02T21:29:07
https://youtu.be/ViadeTYqQDg?si=dfAXbK8fnZPBEuDV
okaris
youtu.be
1970-01-01T00:00:00
0
{}
1lq7wra
false
{'oembed': {'author_name': 'inference', 'author_url': 'https://www.youtube.com/@inference-sh', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/ViadeTYqQDg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope...
t3_1lq7wra
/r/LocalLLaMA/comments/1lq7wra/the_result_of_all_the_polls_ive_been_running_here/
false
false
https://external-preview…e75866d550d1c4d1
1
{'enabled': False, 'images': [{'id': '5ZUHQMiXiTsPomcIjK_gRVQFK9kOoiVjLoBo2ywsDuU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/5ZUHQMiXiTsPomcIjK_gRVQFK9kOoiVjLoBo2ywsDuU.jpeg?width=108&crop=smart&auto=webp&s=a1b1ab2069c070abc6be872f0e9d98b671ce6060', 'width': 108}, {'height': 162, 'url': '...
I used Qwen 3 to write a lil' agent for itself, capable of tool writing and use
50
2025-07-02T21:27:35
https://v.redd.it/gh20o4e63jaf1
PraxisOG
v.redd.it
1970-01-01T00:00:00
0
{}
1lq7vjc
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/gh20o4e63jaf1/DASHPlaylist.mpd?a=1754083673%2CMDNhZWJkMjZhOGQ3NmRmNjAxZjk4ZDFhY2EwZDAyZTY4Zjk5ODY1OWJiY2ExZTY2NGU1M2VkMDMzZGIxNGI5OA%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/gh20o4e63jaf1/DASH_480.mp4?source=fallback', 'ha...
t3_1lq7vjc
/r/LocalLLaMA/comments/1lq7vjc/i_used_qwen_3_to_write_a_lil_agent_for_itself/
false
false
https://external-preview…7a8fe8db5fc6d075
50
{'enabled': False, 'images': [{'id': 'dDc3dTk0ZTYzamFmMQsTB0hZmTp62l46rZf6LudFdHKCIyfga0Grf1zIAq2p', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/dDc3dTk0ZTYzamFmMQsTB0hZmTp62l46rZf6LudFdHKCIyfga0Grf1zIAq2p.png?width=108&crop=smart&format=pjpg&auto=webp&s=4c0ae13ad14d92cb71484678ad2b7a90c9634...
FP8 fixed on VLLM for RTX Pro 6000 (and RTX 5000 desktop cards)
50
Yay! Been waiting for this one for a while, guessing I'm not the only one? https://github.com/vllm-project/vllm/pull/17280 On 70B I'm maxing out around 1400 T/s. Quick install instructions if you want to try it:
mkdir vllm-src
cd vllm-src
python3 -m venv myenv
source myenv/bin/activate
pip install torch torchvision ...
2025-07-02T21:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1lq79xx/fp8_fixed_on_vllm_for_rtx_pro_6000_and_rtx_5000/
Conscious_Cut_6144
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq79xx
false
null
t3_1lq79xx
/r/LocalLLaMA/comments/1lq79xx/fp8_fixed_on_vllm_for_rtx_pro_6000_and_rtx_5000/
false
false
self
50
{'enabled': False, 'images': [{'id': 'R0YaLypW8v8YI57W9FPZsNuYoTjtBqD3Vh8AMMEsQF0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/R0YaLypW8v8YI57W9FPZsNuYoTjtBqD3Vh8AMMEsQF0.png?width=108&crop=smart&auto=webp&s=7ca1383baae7b6b423ab28be5aa67c437fc35d82', 'width': 108}, {'height': 108, 'url': 'h...
LitheCode, updating your GitHub repo using Local LLMs?
2
LitheCode is a bit like if PocketPal AI let you edit your repo and update it in fewer than 6 clicks. Would love to get some feedback on my app or answer any questions you may have. It isn't perfect, but I poured all of my free time into it for a year. It isn't strictly local models only as our small models are s...
2025-07-02T20:33:27
https://www.reddit.com/gallery/1lq6jx8
AspecialistI
reddit.com
1970-01-01T00:00:00
0
{}
1lq6jx8
false
null
t3_1lq6jx8
/r/LocalLLaMA/comments/1lq6jx8/lithecode_updating_your_github_repo_using_local/
false
false
https://b.thumbs.redditm…s15SsR6jpV2U.jpg
2
null
Critical Vulnerability in Anthropic's MCP Exposes Developer Machines to Remote Exploits
13
Article from hacker news: https://thehackernews.com/2025/07/critical-vulnerability-in-anthropics.html?m=1 Cybersecurity researchers have discovered a critical security vulnerability in artificial intelligence (AI) company Anthropic's Model Context Protocol (MCP) Inspector project that could result in remote code exec...
2025-07-02T20:07:15
https://www.reddit.com/r/LocalLLaMA/comments/1lq5wmh/critical_vulnerability_in_anthropics_mcp_exposes/
No_Palpitation7740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq5wmh
false
null
t3_1lq5wmh
/r/LocalLLaMA/comments/1lq5wmh/critical_vulnerability_in_anthropics_mcp_exposes/
false
false
self
13
null
I Built My Wife a Simple Web App for Image Editing Using Flux Kontext—Now It’s Open Source
571
2025-07-02T19:48:05
https://i.redd.it/nmerohq4miaf1.jpeg
XMasterrrr
i.redd.it
1970-01-01T00:00:00
0
{}
1lq5fqq
false
null
t3_1lq5fqq
/r/LocalLLaMA/comments/1lq5fqq/i_built_my_wife_a_simple_web_app_for_image/
false
false
default
571
{'enabled': True, 'images': [{'id': 'nmerohq4miaf1', 'resolutions': [{'height': 85, 'url': 'https://preview.redd.it/nmerohq4miaf1.jpeg?width=108&crop=smart&auto=webp&s=b0f05fdbd0f93c91e6f72022aaaf617828ac15a4', 'width': 108}, {'height': 171, 'url': 'https://preview.redd.it/nmerohq4miaf1.jpeg?width=216&crop=smart&auto=w...
ChatTree: A simple way to context engineer
18
I’ve been thinking about how we manage context when interacting with LLMs, and thought what if we had chat trees instead of linear threads? The idea is simple, let users branch off from any point in the conversation to explore alternatives or dive deeper, while hiding irrelevant future context. I put together a quick ...
2025-07-02T19:45:06
https://github.com/aadityaubhat/ChatTree
aadityaubhat
github.com
1970-01-01T00:00:00
0
{}
1lq5d1o
false
null
t3_1lq5d1o
/r/LocalLLaMA/comments/1lq5d1o/chattree_a_simple_way_to_context_engineer/
false
false
default
18
{'enabled': False, 'images': [{'id': 'aq3Jk2PhaI6RNOjpL6IJwNVoG1BpVMN4hyuA4sGDNRM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aq3Jk2PhaI6RNOjpL6IJwNVoG1BpVMN4hyuA4sGDNRM.png?width=108&crop=smart&auto=webp&s=344fc168447205eb001e41942ab7649037277702', 'width': 108}, {'height': 108, 'url': 'h...
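The branching idea in the ChatTree post above can be sketched in a few lines: every message is a node, a branch can fork from any point, and the context sent to the model is just the root-to-node path, hiding sibling branches. The class and method names here are illustrative, not taken from the ChatTree repo:

```python
class ChatNode:
    """One message in a chat tree; branches can fork from any node."""
    def __init__(self, role, text, parent=None):
        self.role, self.text, self.parent = role, text, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def context(self):
        # The context sent to the model is the root-to-here path;
        # messages on sibling branches stay hidden.
        path, node = [], self
        while node is not None:
            path.append((node.role, node.text))
            node = node.parent
        return list(reversed(path))

root = ChatNode("user", "Explain RoPE")
reply = ChatNode("assistant", "RoPE rotates query/key pairs...", parent=root)
# Two branches off the same reply explore alternatives independently:
b1 = ChatNode("user", "Show me the math", parent=reply)
b2 = ChatNode("user", "Compare it with ALiBi", parent=reply)

print([text for _, text in b2.context()])
```

Note that b1's message never appears in b2's context: each branch only carries its own history, which is exactly the "hide irrelevant future context" behavior the post describes.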
Extended NYT Connections Benchmark updated with Baidu Ernie 4.5 300B A47B, Mistral Small 3.2, MiniMax-M1
45
Mistral Small 3.2 scores 11.5 (Mistral Small 3.1 scored 11.4). Baidu Ernie 4.5 300B A47B scores 15.2. MiniMax-M1 (reasoning) scores 21.4 (MiniMax-Text-01 scored 14.6).
2025-07-02T19:03:45
https://github.com/lechmazur/nyt-connections/
zero0_one1
github.com
1970-01-01T00:00:00
0
{}
1lq4cil
false
null
t3_1lq4cil
/r/LocalLLaMA/comments/1lq4cil/extended_nyt_connections_benchmark_updated_with/
false
false
https://external-preview…b60919c01cc17a94
45
{'enabled': False, 'images': [{'id': '8K_ldEY4raQVEoGa75S06Rw5m0I3IfQW0ZBFqygnT8o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8K_ldEY4raQVEoGa75S06Rw5m0I3IfQW0ZBFqygnT8o.png?width=108&crop=smart&auto=webp&s=ef682e456093328d7e9066c522f0dbc88ce5731d', 'width': 108}, {'height': 108, 'url': 'h...
best bang for your buck in GPUs for VRAM?
45
have been poring over pcpartpicker, newegg etc. and it seems like the cheapest way to get the most usable VRAM from GPUs is the 16GB 5060Ti? am I missing something obvious? (probably.) TIA.
2025-07-02T19:02:41
https://www.reddit.com/r/LocalLLaMA/comments/1lq4bhu/best_bang_for_your_buck_in_gpus_for_vram/
starkruzr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq4bhu
false
null
t3_1lq4bhu
/r/LocalLLaMA/comments/1lq4bhu/best_bang_for_your_buck_in_gpus_for_vram/
false
false
self
45
null
Make an LLM project proposal
0
I'm looking for project ideas related to LLMs and Transformer architecture. I'm especially interested in fine-tuning and other efficient training techniques, and I'm open to creative or technically meaningful project suggestions. These can be either practical applications or research-oriented ideas. What kind of projec...
2025-07-02T19:01:36
https://www.reddit.com/r/LocalLLaMA/comments/1lq4ag9/make_an_llm_project_proposal/
According-Local-9704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq4ag9
false
null
t3_1lq4ag9
/r/LocalLLaMA/comments/1lq4ag9/make_an_llm_project_proposal/
false
false
self
0
null
LM Studio: "Model does not support images. Please use a model that does"!
1
Hi all. I installed a model that supports the vision module, but whenever I upload a photo I get the error: "Model does not support images. Please use a model that does." What can I do about it?
2025-07-02T18:53:39
https://www.reddit.com/r/LocalLLaMA/comments/1lq436s/lm_studio_model_does_not_support_images_please/
SensitiveMarzipan203
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq436s
false
null
t3_1lq436s
/r/LocalLLaMA/comments/1lq436s/lm_studio_model_does_not_support_images_please/
false
false
self
1
null
Extended NYT Connections Benchmark updated with Baidu Ernie 4.5 300B A47B, Mistral Small 3.2, MiniMax-M1
3
[removed]
2025-07-02T18:52:26
https://www.reddit.com/gallery/1lq422l
zero0_one1
reddit.com
1970-01-01T00:00:00
0
{}
1lq422l
false
null
t3_1lq422l
/r/LocalLLaMA/comments/1lq422l/extended_nyt_connections_benchmark_updated_with/
false
false
https://b.thumbs.redditm…ye-ABIwg7tHM.jpg
3
null
Extended NYT Connections Benchmark updated with Baidu Ernie 4.5 300B A47B, Mistral Small 3.2, MiniMax-M1
1
[removed]
2025-07-02T18:48:19
https://www.reddit.com/gallery/1lq3yb7
zero0_one1
reddit.com
1970-01-01T00:00:00
0
{}
1lq3yb7
false
null
t3_1lq3yb7
/r/LocalLLaMA/comments/1lq3yb7/extended_nyt_connections_benchmark_updated_with/
false
false
https://a.thumbs.redditm…xnix1e2UMtJ4.jpg
1
null
24B IQ3_M vs 12B Q5_K_M
5
Which will be better: IQ3_M 24B Mistral Small 3.1/3.2, or Q5_K_M 12B Mistral Nemo?
2025-07-02T18:44:24
https://www.reddit.com/r/LocalLLaMA/comments/1lq3urv/24b_iq3_m_vs_12b_q5_k_m/
Longjumping_Bee_6825
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq3urv
false
null
t3_1lq3urv
/r/LocalLLaMA/comments/1lq3urv/24b_iq3_m_vs_12b_q5_k_m/
false
false
self
5
null
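A rough way to frame the question above is memory arithmetic: a GGUF file is roughly parameters × bits-per-weight / 8 bytes. The bpw figures below are approximate community estimates for llama.cpp quant types, not exact values:

```python
# Back-of-the-envelope GGUF size estimate: params (billions) * bpw / 8 gives GB,
# since 1e9 weights * bpw bits / 8 bits-per-byte / 1e9 bytes-per-GB cancels out.
def gguf_size_gb(params_billions, bpw):
    return params_billions * bpw / 8

size_24b_iq3_m = gguf_size_gb(24, 3.7)   # IQ3_M: roughly 3.7 bpw (approx.)
size_12b_q5_k_m = gguf_size_gb(12, 5.7)  # Q5_K_M: roughly 5.7 bpw (approx.)
print(size_24b_iq3_m, size_12b_q5_k_m)
```

By this estimate the 24B IQ3_M file is somewhat larger (~11 GB) than the 12B Q5_K_M one (~8.6 GB); which one answers better is a separate, empirical question.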
Day 8/50: Building a Small Language Model from Scratch – Rotary Positional Embeddings (RoPE)
35
In the past two days, we explored what positional embeddings are and even coded it. Today, we’re diving into a more advanced and powerful concept used in many state-of-the-art models: Rotary Positional Embeddings (RoPE). # Recap: Why Transformers Need Positional Embeddings Transformers process tokens in parallel, wh...
2025-07-02T18:43:23
https://www.reddit.com/r/LocalLLaMA/comments/1lq3tuu/day_850_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq3tuu
false
null
t3_1lq3tuu
/r/LocalLLaMA/comments/1lq3tuu/day_850_building_a_small_language_model_from/
false
false
self
35
{'enabled': False, 'images': [{'id': 'yk95gNYDHwq6XIleth3O7MrgXkPOE1Hr9ZzOPDx-3XE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yk95gNYDHwq6XIleth3O7MrgXkPOE1Hr9ZzOPDx-3XE.png?width=108&crop=smart&auto=webp&s=bdcabe31093a0a8eb032266a1279c0d7415a3fd7', 'width': 108}, {'height': 108, 'url': 'h...
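The RoPE recap above can be illustrated with a minimal pure-Python sketch: pairs of dimensions are rotated by a position-dependent angle, so dot products between rotated vectors depend only on relative position. The frequency schedule mirrors the usual base-10000 convention; this is a sketch, not any particular library's implementation:

```python
import math

def rope(vec, pos, base=10000.0):
    # vec has even length; dims (2i, 2i+1) are rotated by pos * theta_i,
    # with theta_i following the usual base^(-2i/d) frequency schedule.
    d = len(vec)
    out = []
    for i in range(0, d, 2):
        theta = pos * (base ** (-i / d))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

q = [1.0, 0.0, 1.0, 0.0]
# Rotation preserves the norm; only the angle encodes position.
print(rope(q, pos=3))
```

A useful sanity check: the dot product of two RoPE-rotated copies of a vector is the same for positions (5, 7) as for (0, 2), which is the relative-position property that makes RoPE attractive for attention.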
need suggestions for models to use
0
i am completely new to this entire thing and am hoping to run models locally on my desktop (rtx 4070, r7 9700x, 32gb ddr5). what models would be the best use case for these specs?
2025-07-02T18:30:34
https://www.reddit.com/r/LocalLLaMA/comments/1lq3i6h/need_suggestions_for_models_to_use/
StrangeChallenge1865
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq3i6h
false
null
t3_1lq3i6h
/r/LocalLLaMA/comments/1lq3i6h/need_suggestions_for_models_to_use/
false
false
self
0
null
How do you pick the right local LLM for your needs?
4
Hey guys, I’m diving into running models locally with Ollama or LMStudio, and there are so many options that I don’t even know where to start, especially before I lock in on a specific project. I want to develop a clear process for figuring out which model might suit me, even if I don’t yet have a narrow use case. Co...
2025-07-02T18:06:52
https://www.reddit.com/r/LocalLLaMA/comments/1lq2wn6/how_do_you_pick_the_right_local_llm_for_your_needs/
ExtiqX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq2wn6
false
null
t3_1lq2wn6
/r/LocalLLaMA/comments/1lq2wn6/how_do_you_pick_the_right_local_llm_for_your_needs/
false
false
self
4
null
Table embeddings for similarity search between tables ?
2
Hello, like the title says, we are trying to build a pipeline that takes in tables and tries to discern what information they contain. For this I was wondering if someone has ever tried specific table embeddings? So we can try building a vector space for a kind of RAG, searching out the next related tables and using an LLM and...
2025-07-02T17:55:38
https://www.reddit.com/r/LocalLLaMA/comments/1lq2m1x/table_embeddings_for_similarity_search_between/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq2m1x
false
null
t3_1lq2m1x
/r/LocalLLaMA/comments/1lq2m1x/table_embeddings_for_similarity_search_between/
false
false
self
2
null
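The pipeline the post above asks about can be sketched end to end: embed each table (here via its headers) into a vector, then rank related tables by cosine similarity. The bag-of-characters "embedding" is a toy stand-in for a real embedding model (e.g. a sentence-transformer over the serialized header row); all names here are illustrative:

```python
import math

def embed_table(headers):
    # Toy stand-in embedding: 26-dim letter-frequency vector over the headers.
    # A real pipeline would serialize headers (and maybe sample rows) and
    # feed them to an embedding model instead.
    vec = [0.0] * 26
    for h in headers:
        for ch in h.lower():
            if ch.isalpha():
                vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

orders = embed_table(["order_id", "price", "quantity"])
invoices = embed_table(["invoice_id", "price", "total"])
sensors = embed_table(["wavelength", "flux"])

# Tables with shared header vocabulary score higher:
print(cosine(orders, invoices), cosine(orders, sensors))
```

Swapping the toy embedder for a learned one keeps the rest of the retrieval logic unchanged, which is what makes the vector-space framing attractive for a table-RAG setup.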
Is there a legit code assistant that can run on a m3 ultra 256 or 96gb?
8
Anything that would work as an agentic code assistant? Trying to decide if it’s worth investing if it means I don’t have to pay for Claude code anymore. I understand it won’t be near Claude code but that’s fine.
2025-07-02T17:51:17
https://www.reddit.com/r/LocalLLaMA/comments/1lq2i2m/is_there_a_legit_code_assistant_that_can_run_on_a/
tru3relativity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq2i2m
false
null
t3_1lq2i2m
/r/LocalLLaMA/comments/1lq2i2m/is_there_a_legit_code_assistant_that_can_run_on_a/
false
false
self
8
null
LLM slop has started to contaminate spoken language
8
A recent [study](https://arxiv.org/abs/2409.01754) *underscores* the growing prevalence of LLM-generated "slop words" [in academic papers](https://pshapira.net/2024/03/31/delving-into-delve/), a trend now *spilling into* spontaneous spoken language. By *meticulously analyzing* 700,000 hours of academic talks and podcas...
2025-07-02T17:42:46
https://www.reddit.com/r/LocalLLaMA/comments/1lq2aae/llm_slop_has_started_to_contaminate_spoken/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq2aae
false
null
t3_1lq2aae
/r/LocalLLaMA/comments/1lq2aae/llm_slop_has_started_to_contaminate_spoken/
false
false
https://b.thumbs.redditm…EvTCp_SZSzEY.jpg
8
null
Training Mistral on an RTX 5090 + Core Ultra 285K — Pop!_OS rig pushing 96GB DDR5 at full throttle
1
[removed]
2025-07-02T17:33:54
https://v.redd.it/xrpgfmrvxhaf1
AI9LAB
v.redd.it
1970-01-01T00:00:00
0
{}
1lq227f
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/xrpgfmrvxhaf1/DASHPlaylist.mpd?a=1754072469%2CMmE1OGFkZWNkZTNiOWQxYzM2OTE5ZjRlMzBkNjExYjBlODQ2MjAxYTBkMTU2MTc1NzA1NjVkY2QzMWZmY2E4MQ%3D%3D&v=1&f=sd', 'duration': 12, 'fallback_url': 'https://v.redd.it/xrpgfmrvxhaf1/DASH_720.mp4?source=fallback', 'ha...
t3_1lq227f
/r/LocalLLaMA/comments/1lq227f/training_mistral_on_an_rtx_5090_core_ultra_285k/
false
false
https://external-preview…78eeb2ef4e1de3ce
1
{'enabled': False, 'images': [{'id': 'cmc0OTB1cnZ4aGFmMRKxcd0SQAhWJaQvi1HfHJ6oUONaGZCO69M1q788dA9U', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/cmc0OTB1cnZ4aGFmMRKxcd0SQAhWJaQvi1HfHJ6oUONaGZCO69M1q788dA9U.png?width=108&crop=smart&format=pjpg&auto=webp&s=d159aaf4c0868dc0bcfc0d6321ca11ef3839...
Cursor equivalent or close to alternative fully local?
8
Is there a Cursor equivalent, or close alternative, that's fully local? Continue.dev, Void, aider, Zed, AutoGPT, SuperAGI, or something else?
2025-07-02T17:23:12
https://www.reddit.com/r/LocalLLaMA/comments/1lq1sdi/cursor_equivalent_or_close_to_alternative_fully/
InsideResolve4517
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq1sdi
false
null
t3_1lq1sdi
/r/LocalLLaMA/comments/1lq1sdi/cursor_equivalent_or_close_to_alternative_fully/
false
false
self
8
null
Mamba-2 support in llama.cpp landed
121
2025-07-02T17:14:08
https://github.com/ggml-org/llama.cpp/pull/9126#issuecomment-3027064556
pkmxtw
github.com
1970-01-01T00:00:00
0
{}
1lq1jyr
false
null
t3_1lq1jyr
/r/LocalLLaMA/comments/1lq1jyr/mamba2_support_in_llamacpp_landed/
false
false
https://external-preview…3dba439854c266e0
121
{'enabled': False, 'images': [{'id': 'ygyWPNq8dkq2um7AlKnQuPRca4C5R9wcoTedIaY7KTk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ygyWPNq8dkq2um7AlKnQuPRca4C5R9wcoTedIaY7KTk.png?width=108&crop=smart&auto=webp&s=d7a6abcb15d265565c74fee896eff28e14b9a9f0', 'width': 108}, {'height': 108, 'url': 'h...
[Open Source] Moondream MCP - Vision for AI Agents
41
I integrated Moondream (lightweight vision AI model) with Model Context Protocol (MCP), enabling any AI agent to process images locally/remotely. Open source, self-hosted, no API keys needed. Moondream MCP is a vision AI server that speaks MCP protocol. Your agents can now: **Caption images** - "What's in this image?...
2025-07-02T16:57:27
https://i.redd.it/upyzvjqkrhaf1.png
_colemurray
i.redd.it
1970-01-01T00:00:00
0
{}
1lq1417
false
null
t3_1lq1417
/r/LocalLLaMA/comments/1lq1417/open_source_moondream_mcp_vision_for_ai_agents/
false
false
https://b.thumbs.redditm…c_u-pHCga3Ho.jpg
41
{'enabled': True, 'images': [{'id': 'AW94Mv2FkIDW3--yyr0eIKNalxqTjCalp9fCr0HItaI', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/upyzvjqkrhaf1.png?width=108&crop=smart&auto=webp&s=e580de2907c79e680e10c3e5979c6d4dbb3ba2c9', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/upyzvjqkrhaf1.png...
AI Agents are transforming workflows, but most use cases still feel early-stage. Curious what others are seeing.
6
I’ve been exploring agentic workflows lately not just the flashy demos, but actual implementations that support real-world tasks like deep research, cross-functional reporting, and internal communications. One interesting pattern I’ve noticed: the potential of AI agents seems strongest in domains like law, public sect...
2025-07-02T16:39:06
https://www.reddit.com/r/LocalLLaMA/comments/1lq0n02/ai_agents_are_transforming_workflows_but_most_use/
Powerful-Guide-8169
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq0n02
false
null
t3_1lq0n02
/r/LocalLLaMA/comments/1lq0n02/ai_agents_are_transforming_workflows_but_most_use/
false
false
self
6
null
Stuck on installing Sesame tts on windows :/
1
Their GitHub instructions are not working. I'm on Windows with an i7 CPU and an NVIDIA 12 GB RTX card. Every time, there will be some error at the end. I want to install it and have a web GUI, with only single-speaker TTS from a cloned voice. I found a 1-click ready-made installer online but it feels risky as w...
2025-07-02T16:16:53
https://www.reddit.com/r/LocalLLaMA/comments/1lq02np/stuck_on_installing_sesame_tts_on_windows/
Dragonacious
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq02np
false
null
t3_1lq02np
/r/LocalLLaMA/comments/1lq02np/stuck_on_installing_sesame_tts_on_windows/
false
false
self
1
null
Live Interactive Digital Human(Open-Source Stack): RAG + LLM + TTS in Ac...
7
2025-07-02T16:12:11
https://youtube.com/watch?v=oZ-tCnmUSRE&si=J6fQbTAdfu0nPHxh
Deep-Jellyfish6717
youtube.com
1970-01-01T00:00:00
0
{}
1lpzycz
false
{'oembed': {'author_name': 'titan909-share', 'author_url': 'https://www.youtube.com/@titan909-share', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/oZ-tCnmUSRE?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy...
t3_1lpzycz
/r/LocalLLaMA/comments/1lpzycz/live_interactive_digital_humanopensource_stack/
false
false
https://external-preview…ccf9ebf6990664eb
7
{'enabled': False, 'images': [{'id': 'ulR1vplT2R0rAFZ3rWOI7kaar5rpRIsbH4QVgbouwAE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ulR1vplT2R0rAFZ3rWOI7kaar5rpRIsbH4QVgbouwAE.jpeg?width=108&crop=smart&auto=webp&s=03a68f9356aedc9915a9fbc51a78efcb9cdf2ed8', 'width': 108}, {'height': 162, 'url': '...
My experience with 14B LLMs on phones with Snapdragon 8 Elite
18
I'm making this thread because weeks ago when I looked up this information, I could barely even find confirmation that it's possible to run 14B models on phones. In the meantime I got a OnePlus 13 with 16GB of RAM. After tinkering with different models and apps for half a day, I figured I'd give my feedback for the peopl...
2025-07-02T16:09:26
https://www.reddit.com/r/LocalLLaMA/comments/1lpzvtx/my_experience_with_14b_llms_on_phones_with/
schizo_poster
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpzvtx
false
null
t3_1lpzvtx
/r/LocalLLaMA/comments/1lpzvtx/my_experience_with_14b_llms_on_phones_with/
false
false
self
18
null
AI for summarizing books?
0
Hello, I'm looking for a free or paid AI capable of summarizing and producing study notes from PDF or EPUB books several hundred pages long. These are books on all sorts of topics that I don't have time to read (current affairs, health, philosophy, etc.). Thanks for your help.
2025-07-02T15:57:01
https://www.reddit.com/r/LocalLLaMA/comments/1lpzk03/ia_pour_résumer_des_livres/
zelig2004
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpzk03
false
null
t3_1lpzk03
/r/LocalLLaMA/comments/1lpzk03/ia_pour_résumer_des_livres/
false
false
self
0
null
In America, you go to the hospital for a cold… and leave needing treatment for the bill!
0
2025-07-02T15:44:25
https://youtu.be/EcORy36_n0E?si=EbyAlOR9jiHw-vd5
Chicflarescom
youtu.be
1970-01-01T00:00:00
0
{}
1lpz8fe
false
{'oembed': {'author_name': 'HFX Core', 'author_url': 'https://www.youtube.com/@hfxcore', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/EcORy36_n0E?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pict...
t3_1lpz8fe
/r/LocalLLaMA/comments/1lpz8fe/in_america_you_go_to_the_hospital_for_a_cold_and/
false
false
default
0
null
RTX 2080 TI 22gb Build
2
I can get RTX 2080 TI 22GBs for around 350 USD per. Are they a good deal for running LLMs locally using LMStudio? The plan is to get a cheap CPU with a desktop motherboard that has 4 PCIE slots. I will likely get a Ryzen 5 3600 with an ATX B450 board and 4 sticks of 16gb DDR4 ram totalling 64gb. I think some B450 b...
2025-07-02T15:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1lpz46u/rtx_2080_ti_22gb_build/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpz46u
false
null
t3_1lpz46u
/r/LocalLLaMA/comments/1lpz46u/rtx_2080_ti_22gb_build/
false
false
self
2
null
Cursor terms and conditions seem to be changing
17
I remember when I first downloaded cursor last year, the privacy was on by default, and now not at all. I never selected this embedding thing, but I guess it is automatically turned on. I work in Germany where I do not even dare to use these already, but I am not sure if I can even trust these at all as I worry that th...
2025-07-02T15:38:51
https://i.redd.it/74fqbljpdhaf1.png
Desperate_Rub_1352
i.redd.it
1970-01-01T00:00:00
0
{}
1lpz355
false
null
t3_1lpz355
/r/LocalLLaMA/comments/1lpz355/cursor_terms_and_conditions_seem_to_be_changing/
false
false
default
17
{'enabled': True, 'images': [{'id': '74fqbljpdhaf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/74fqbljpdhaf1.png?width=108&crop=smart&auto=webp&s=e2f7f44ab941b2d62b243a148995232b0944256b', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/74fqbljpdhaf1.png?width=216&crop=smart&auto=web...
CPU importance in GPU based LLM
5
As per the title, does the cpu not matter at all? I want to use lm studio and I know there’s an option for cpu threads to use. I see some posts before where people say that CPU doesn’t matter but I have never seen an explanation as to why beyond “only memory bandwidth matters” Does the cpu not get used for loading t...
2025-07-02T15:29:39
https://www.reddit.com/r/LocalLLaMA/comments/1lpyumi/cpu_importance_in_gpu_based_llm/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpyumi
false
null
t3_1lpyumi
/r/LocalLLaMA/comments/1lpyumi/cpu_importance_in_gpu_based_llm/
false
false
self
5
null
Huawei Open Source AI Model Optimized for Ascend Hardware -- China Keeps Beating USA
0
Hmm. Should I get the Huawei Atlas cards? I also believe that Nvidia will get royally screwed over because the USA is going against China instead of working together.
2025-07-02T15:27:59
https://youtu.be/IBgoK9CnvnM?si=NEww2SC-FTIlPS-p
sub_RedditTor
youtu.be
1970-01-01T00:00:00
0
{}
1lpyt3t
false
{'oembed': {'author_name': 'Eli the Computer Guy', 'author_url': 'https://www.youtube.com/@elithecomputerguy', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/IBgoK9CnvnM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-...
t3_1lpyt3t
/r/LocalLLaMA/comments/1lpyt3t/huawei_open_source_ai_model_optimized_for_ascend/
false
false
default
0
{'enabled': False, 'images': [{'id': 'sHYeEp5CUvVlL9KVHDFkmxaOdlnbigXf1-09IGYAh3U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/sHYeEp5CUvVlL9KVHDFkmxaOdlnbigXf1-09IGYAh3U.jpeg?width=108&crop=smart&auto=webp&s=f82fa0ef9ebf68daf4794730c2b490b6d9535bfa', 'width': 108}, {'height': 162, 'url': '...
Possible Ernie addition to Ollama or nah? There's not much to go on here. Link only leads to this page. Thoughts?
0
2025-07-02T15:07:34
https://i.redd.it/puhf9en28haf1.png
swagonflyyyy
i.redd.it
1970-01-01T00:00:00
0
{}
1lpya6g
false
null
t3_1lpya6g
/r/LocalLLaMA/comments/1lpya6g/possible_ernie_addition_to_ollama_or_nah_theres/
false
false
default
0
{'enabled': True, 'images': [{'id': 'puhf9en28haf1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/puhf9en28haf1.png?width=108&crop=smart&auto=webp&s=65a3f9450d7690905a021450c88c6028309747e5', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/puhf9en28haf1.png?width=216&crop=smart&auto=web...
llama-4-scout-17B-16E GGUF running on Strix Halo (Ryzen AI MAX 395 + 128GB) (13s prompt processing edited out)
68
Hardware is a mini PC with AMD's Ryzen AI MAX 395 APU with 128GB RAM. Model is llama-4-scout, which is an MOE with 16B active and 109B total parameters. UI: GAIA, our fork of Open WebUI, that offers out-of-box Lemonade integration, a one-click installer, and electron.js app experience. [https://github.com/amd/gaia](ht...
2025-07-02T15:05:54
https://v.redd.it/e6ao7yjh5haf1
jfowers_amd
v.redd.it
1970-01-01T00:00:00
0
{}
1lpy8nv
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/e6ao7yjh5haf1/DASHPlaylist.mpd?a=1754060772%2CZGU5Y2M2MTNlMThmYzAxM2UyMDgzMDQ3NmZjM2ZlNDRhZjY0NzYyN2Y5YjhmMTdhNGQyOTMxMGMzN2FkNjA0ZA%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/e6ao7yjh5haf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lpy8nv
/r/LocalLLaMA/comments/1lpy8nv/llama4scout17b16e_gguf_running_on_strix_halo/
false
false
https://external-preview…900431972636cede
68
{'enabled': False, 'images': [{'id': 'am1mM2cxa2g1aGFmMQV6SVFBQzrbFE16bYnvbQZAeh3r0dX_wLOv44vX6S3R', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/am1mM2cxa2g1aGFmMQV6SVFBQzrbFE16bYnvbQZAeh3r0dX_wLOv44vX6S3R.png?width=108&crop=smart&format=pjpg&auto=webp&s=7edee4961fc0944fa68d82b0e8f7d4f6a9b50...
Finally solved my prompt versioning nightmare - built a tool to manage prompts like code
1
Hey everyone! Like many of you, I've been running powerful local models like LLaMA 4, Phi-3, and OpenHermes on my own hardware, constantly refining prompts to squeeze out better results. I’ve also experimented with top cloud-based models like GPT-4.5, Claude 4, and Gemini 2.5 to compare performance and capabilities. ...
2025-07-02T15:02:22
https://www.reddit.com/r/LocalLLaMA/comments/1lpy5es/finally_solved_my_prompt_versioning_nightmare/
error7891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpy5es
false
null
t3_1lpy5es
/r/LocalLLaMA/comments/1lpy5es/finally_solved_my_prompt_versioning_nightmare/
false
false
self
1
null
Browser-use with devtools access
3
Hi everyone, I’m looking for a library, framework, or product that allows LLM-powered agents to interact with a browser. Ideally, the LLM agent should be able to control the browser similarly to tools like puppeteer or playwright, but with the added capability to access and interact with the browser’s DevTools — for e...
2025-07-02T14:00:16
https://www.reddit.com/r/LocalLLaMA/comments/1lpwm1f/browseruse_with_devtools_access/
ReddaHawk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpwm1f
false
null
t3_1lpwm1f
/r/LocalLLaMA/comments/1lpwm1f/browseruse_with_devtools_access/
false
false
self
3
null
AlgoTune: A new benchmark that tests language models' ability to optimize code runtime
35
We just released AlgoTune which challenges agents to optimize the runtime of 100+ algorithms including gzip compression, AES encryption, and PCA. We also release an agent, AlgoTuner, that enables LMs to iteratively develop efficient code. https://preview.redd.it/r3vc4rpfugaf1.png?width=2027&format=png&auto=webp&s=e870...
2025-07-02T13:56:57
https://www.reddit.com/r/LocalLLaMA/comments/1lpwj5j/algotune_a_new_benchmark_that_tests_language/
oripress
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpwj5j
false
null
t3_1lpwj5j
/r/LocalLLaMA/comments/1lpwj5j/algotune_a_new_benchmark_that_tests_language/
false
false
https://b.thumbs.redditm…2nxAp9aPsDTE.jpg
35
{'enabled': False, 'images': [{'id': '0-xrX4qzMJegRxSwLNjXIVQsA6XhMhbHQfZkMjkNJvA', 'resolutions': [{'height': 32, 'url': 'https://external-preview.redd.it/0-xrX4qzMJegRxSwLNjXIVQsA6XhMhbHQfZkMjkNJvA.png?width=108&crop=smart&auto=webp&s=408951830555b74a7be7d6ddc63011caeb8987a4', 'width': 108}, {'height': 64, 'url': 'ht...
Does anyone have enough memory space to run this?
2
It’s an ONNX GenAI model converter [convert-to-genai](https://huggingface.co/spaces/xiaoyao9184/convert-to-genai). The free Hugging Face Space offers 18GB of RAM — that’s enough to convert Qwen2.5 0.5B, but other models, even 1B ones, require more memory.
2025-07-02T13:38:54
https://www.reddit.com/r/LocalLLaMA/comments/1lpw43h/does_anyone_have_enough_memory_space_to_run_this/
Ok_Fig5484
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpw43h
false
null
t3_1lpw43h
/r/LocalLLaMA/comments/1lpw43h/does_anyone_have_enough_memory_space_to_run_this/
false
false
self
2
{'enabled': False, 'images': [{'id': 'i6cB110VYI_H5zTEzDnVoTUMIhoOInKTjhaIYeZjp5Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/i6cB110VYI_H5zTEzDnVoTUMIhoOInKTjhaIYeZjp5Q.png?width=108&crop=smart&auto=webp&s=67700b8182dd1ea359062fe9a8b2fc70c01e5db9', 'width': 108}, {'height': 116, 'url': 'h...
Am I on the right path? Learning React + Flask for Full Stack + AI Career Goals
0
Hey everyone! I'm currently learning React for front-end development and planning to start learning Flask for the backend. My goal is to become a full-stack developer with a strong focus on AI technologies, especially areas like Generative AI and Agentic AI. I'm also interested in Python, which is why Flask seems lik...
2025-07-02T13:36:32
https://www.reddit.com/r/LocalLLaMA/comments/1lpw26d/am_i_on_the_right_path_learning_react_flask_for/
harsh_a024
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpw26d
false
null
t3_1lpw26d
/r/LocalLLaMA/comments/1lpw26d/am_i_on_the_right_path_learning_react_flask_for/
false
false
self
0
null
vllm serve getting permission denied on /.cache with HF_HOME set
1
Unsure why this is happening. I want vllm to use the path /vllm for everything. My model is downloading to HF_HOME, but I'm unsure what else needs setting to use the correct directory
2025-07-02T13:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1lpw010/vllm_serve_getting_permission_denied_on_cache/
OverclockingUnicorn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpw010
false
null
t3_1lpw010
/r/LocalLLaMA/comments/1lpw010/vllm_serve_getting_permission_denied_on_cache/
false
false
self
1
null
96Gb VRAM without spending $10k on an RTX Pro 6000..?
0
“Gordon” is a local-LLM project I’m working on, and it occurred to me that 2x Arc Pro B60 Dual GPUs could be a way to get to the 96Gb of VRAM I will need without spending $10K on an RTX Pro 6000. The screenshots are Hal’s (my ChatGPT) views. I thought I’d get some actual hoomans to offer their knowledgeable views and o...
2025-07-02T13:32:35
https://www.reddit.com/gallery/1lpvywm
m-gethen
reddit.com
1970-01-01T00:00:00
0
{}
1lpvywm
false
null
t3_1lpvywm
/r/LocalLLaMA/comments/1lpvywm/96gb_vram_without_spending_10k_on_an_rtx_pro_6000/
false
false
https://b.thumbs.redditm…0-ndX2fQwm4g.jpg
0
null
I built a tool for managing prompts like code in git
1
Been working with LLMs for a while and got tired of manually tracking prompt versions. Made a Python tool that handles this automatically. # What it does * Automatically versions your prompts when you commit to git * Test prompt changes before committing with `:unstaged` reference * Works with any LLM (OpenAI, Anthro...
2025-07-02T13:29:42
https://www.reddit.com/r/LocalLLaMA/comments/1lpvwh3/i_built_a_tool_for_managing_prompts_like_code_in/
llmhq_official
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpvwh3
false
null
t3_1lpvwh3
/r/LocalLLaMA/comments/1lpvwh3/i_built_a_tool_for_managing_prompts_like_code_in/
false
false
self
1
{'enabled': False, 'images': [{'id': 'o3PmNP2OL7ctHNAdRCSQSrdK0jT5IPtsBfFR7S-4acQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o3PmNP2OL7ctHNAdRCSQSrdK0jT5IPtsBfFR7S-4acQ.png?width=108&crop=smart&auto=webp&s=9491dac3eef562b93c4c641b452ed1c66f284300', 'width': 108}, {'height': 108, 'url': 'h...
LLM-based resume parsing – any models or solutions out there?
1
Hello everyone, I hope you're doing well. I've built a spaCy-based NER system to extract key information from resumes, such as experience, education, and personal details. However, it's not very accurate and struggles with diverse resume formats. I'm thinking of switching to a question-answering LLM like Qwen to imp...
2025-07-02T13:21:23
https://www.reddit.com/r/LocalLLaMA/comments/1lpvpqv/llmbased_resume_parsing_any_models_or_solutions/
Critical_March_3113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpvpqv
false
null
t3_1lpvpqv
/r/LocalLLaMA/comments/1lpvpqv/llmbased_resume_parsing_any_models_or_solutions/
false
false
self
1
null
Phare Study: LLMs recognise bias but also reproduce harmful stereotypes: an analysis of bias in leading LLMs
0
We released new findings from our Phare LLM Benchmark on bias in leading language models. Instead of traditional "fill-in-the-blank" tests, we had 17 leading LLMs generate thousands of stories, then asked them to judge their own patterns. In short: Leading LLMs can recognise bias but also reproduce harmful stereotype...
2025-07-02T12:41:29
https://www.giskard.ai/knowledge/llms-recognise-bias-but-also-reproduce-harmful-stereotypes
chef1957
giskard.ai
1970-01-01T00:00:00
0
{}
1lputq1
false
null
t3_1lputq1
/r/LocalLLaMA/comments/1lputq1/phare_study_llms_recognise_bias_but_also/
false
false
default
0
{'enabled': False, 'images': [{'id': 'tV3CLM1gW4WwlAqXrhavxH6_d1Arw1EbpTt4XOWqhW8', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/tV3CLM1gW4WwlAqXrhavxH6_d1Arw1EbpTt4XOWqhW8.png?width=108&crop=smart&auto=webp&s=85fea417541d0a0fea7c5fb79a9234985a5ff7dd', 'width': 108}, {'height': 101, 'url': 'h...
What I’ve learned building RAG applications for enterprises
256
Hey folks, I’ve spent the last few years building LLM-powered apps at an AI software house - lots of RAG projects, mostly before there were any real frameworks to help. Thought I’d put together some of the practical lessons I wish I had at the start. Document Ingestion Tips * docling is a reliable starter for parsin...
2025-07-02T12:28:34
https://www.reddit.com/r/LocalLLaMA/comments/1lpuk2s/what_ive_learned_building_rag_applications_for/
Loud_Picture_1877
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpuk2s
false
null
t3_1lpuk2s
/r/LocalLLaMA/comments/1lpuk2s/what_ive_learned_building_rag_applications_for/
false
false
self
256
{'enabled': False, 'images': [{'id': 'DZIMNyDK61Edz-aGZTDg3QFir1Nba0ALIQ_WxgiHOfM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DZIMNyDK61Edz-aGZTDg3QFir1Nba0ALIQ_WxgiHOfM.png?width=108&crop=smart&auto=webp&s=1c918ba0c74c3ca87e25fb6847213bb776d0624d', 'width': 108}, {'height': 113, 'url': 'h...
What Are the Most Underrated AI Tools You’ve Used in 2025 So Far?
1
[removed]
2025-07-02T12:28:02
https://www.reddit.com/r/LocalLLaMA/comments/1lpujow/what_are_the_most_underrated_ai_tools_youve_used/
vasundhara_info
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpujow
false
null
t3_1lpujow
/r/LocalLLaMA/comments/1lpujow/what_are_the_most_underrated_ai_tools_youve_used/
false
false
self
1
null
AI Agents, But Simple and Understandable
10
Most of what you read about “AI agents” is either super vague or buried in jargon. I wrote a no-BS explainer that breaks down how modern AI agents actually work, without the marketing fluff. If you’re curious about what’s really happening “under the hood” when people talk about AI agents (or you want to build one yours...
2025-07-02T12:12:25
https://blog.surkar.in/ai-agents-under-the-hood
thesmallstar
blog.surkar.in
1970-01-01T00:00:00
0
{}
1lpu8a9
false
null
t3_1lpu8a9
/r/LocalLLaMA/comments/1lpu8a9/ai_agents_but_simple_and_understandable/
false
false
default
10
{'enabled': False, 'images': [{'id': 'NvDs7wJwV1W_MBsZCzSWrMWhoZ65PkI1ziagX4BmUfc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NvDs7wJwV1W_MBsZCzSWrMWhoZ65PkI1ziagX4BmUfc.png?width=108&crop=smart&auto=webp&s=71049788eae3370cbf9e32a6c9269f2f033d32ff', 'width': 108}, {'height': 113, 'url': 'h...
Best RP Model Unrestricted/Uncensored
4
Hi Guys Just wanted to ask what are the latest updates on the RP models. Which ones do you use currently, and which models do you think are the best ones? Please advise some models above 8B and less than 30B too which are not censored and unrestricted.
2025-07-02T11:16:25
https://www.reddit.com/r/LocalLLaMA/comments/1lpt5jv/best_rp_model_unrestricteduncensored/
sapry123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpt5jv
false
null
t3_1lpt5jv
/r/LocalLLaMA/comments/1lpt5jv/best_rp_model_unrestricteduncensored/
false
false
self
4
null
I want to create an AI model to create phishing emails that is based on llama3.2:1b. For ethical hacking
1
I want to use unsloth to fine tune it by using data sets from Kaggle and hugging face. I will need to uncensor the model how can I do that. Is it better that I use a 1B model or is it better to use a 7B model for fine tuning.
2025-07-02T10:38:05
https://www.reddit.com/r/LocalLLaMA/comments/1lpshsh/i_want_to_create_a_ai_model_to_create_phishing/
Gullible_Bar9717
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpshsh
false
null
t3_1lpshsh
/r/LocalLLaMA/comments/1lpshsh/i_want_to_create_a_ai_model_to_create_phishing/
false
false
self
1
null
Any good browser extensions that work with any OpenAI compatible API or local model?
2
I would like something like a writing assistant, or summarizer using an LLM, but most of these extensions are tied to services like gpt or gemini, with no option to use your own openai compatible api or local model.
2025-07-02T10:20:18
https://www.reddit.com/r/LocalLLaMA/comments/1lps7c3/any_good_browser_extensions_that_with_any_openai/
lemon07r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lps7c3
false
null
t3_1lps7c3
/r/LocalLLaMA/comments/1lps7c3/any_good_browser_extensions_that_with_any_openai/
false
false
self
2
null
Fully working Chatbot interface for Magistral-Small-2506
1
I did this as a fun project this weekend. I thought others might like to take a look at it and maybe some would even like to use it. Cheers guys, -J [GitHub](https://github.com/slyfox1186/magistral-small-instruct-2506)
2025-07-02T10:15:26
https://www.reddit.com/r/LocalLLaMA/comments/1lps4ag/fully_working_chatbot_interface_for/
RiverRatt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lps4ag
false
null
t3_1lps4ag
/r/LocalLLaMA/comments/1lps4ag/fully_working_chatbot_interface_for/
false
false
self
1
null
Just me, or MNN chat is looping a lot
4
So I'm trying MNN chat but for me it seems to be repeating itself a lot. I tried qwen3 0.6b, and when I try a simple request like What is lasagna? Lascange is a dish that is made from pasta. It is a very popular dish in Italy. The main ingredients are pasta and sauce. The sauce is made from various ingredients. It is...
2025-07-02T09:31:20
https://www.reddit.com/r/LocalLLaMA/comments/1lprfbx/just_me_or_mnn_chat_is_looping_a_lot/
ExtremeAcceptable289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lprfbx
false
null
t3_1lprfbx
/r/LocalLLaMA/comments/1lprfbx/just_me_or_mnn_chat_is_looping_a_lot/
false
false
self
4
null
What framework would you suggest for hosting and serving VLMs via api?
1
I know llamacpp server and ollama can be used for LLMs, and I have been using ollama but the API has been very limiting. What can I use for VLMs, prioritised for API/speed and model management? I have a 24GB GPU so that shouldn't be an issue. Currently I want to host models like Qwen2.5VL and Moondream.
2025-07-02T09:19:30
https://www.reddit.com/r/LocalLLaMA/comments/1lpr8wf/what_framework_would_you_suggest_for_hosting_and/
CaptTechno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpr8wf
false
null
t3_1lpr8wf
/r/LocalLLaMA/comments/1lpr8wf/what_framework_would_you_suggest_for_hosting_and/
false
false
self
1
null
AKTA - Authenticated Knowledge & Trust Architecture for AI Agents
1
Sharing a prototype project I built called "Akta" [https://github.com/RedDotRocket/akta](https://github.com/RedDotRocket/akta) It's an attempt to enable secure and verifiable auth and delegation between AI agents. It establishes a framework for time-bound capability-based access control, allowing agents to delegate t...
2025-07-02T09:12:50
https://www.reddit.com/r/LocalLLaMA/comments/1lpr5dj/akta_authenticated_knowledge_trust_architecture/
RedDotRocket
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpr5dj
false
null
t3_1lpr5dj
/r/LocalLLaMA/comments/1lpr5dj/akta_authenticated_knowledge_trust_architecture/
false
false
self
1
{'enabled': False, 'images': [{'id': '_TOJ60J7Rik0SzDYYpdrEXmhHyX4B3aVS93RbjL3OpI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_TOJ60J7Rik0SzDYYpdrEXmhHyX4B3aVS93RbjL3OpI.png?width=108&crop=smart&auto=webp&s=7923fa62981d87a05bfc10ad3e4112dd6715a342', 'width': 108}, {'height': 108, 'url': 'h...
Convenient ChatGPT UX Replacement
1
I'm looking for something that might not exist yet, but I'm curious for suggestions. Essentially, I'd love to have the ChatGPT experience, but with me being able to plug in an open source model API URL to replace the OpenAI model. For me, ChatGPT is super convenient to use. You've got a good web UI, a nice mobile app...
2025-07-02T09:05:25
https://www.reddit.com/r/LocalLLaMA/comments/1lpr1fq/convenient_chatgpt_ux_replacement/
lakySK
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpr1fq
false
null
t3_1lpr1fq
/r/LocalLLaMA/comments/1lpr1fq/convenient_chatgpt_ux_replacement/
false
false
self
1
null
Drafting RFP answers with Jamba, Mistral, Mixtral
3
Sharing notes in case it helps anyone. I don't often find people talking about models like Jamba and we have access to it, so figure it might be useful. - Been testing local models for drafting first-pass answers to internal RFPs. The source material is rough. Basically a mix of PDF exports, old responses in doc...
2025-07-02T09:00:46
https://www.reddit.com/r/LocalLLaMA/comments/1lpqyra/drafting_rfp_answers_with_jamba_mistral_mixtral/
NullPointerJack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpqyra
false
null
t3_1lpqyra
/r/LocalLLaMA/comments/1lpqyra/drafting_rfp_answers_with_jamba_mistral_mixtral/
false
false
self
3
null
Open source tech from IBM for Compression of models
36
Seems interesting, I am not clear if the compression is only for storage, transmission or extend to inference too :)
2025-07-02T08:53:40
https://research.ibm.com/blog/Zip-NN-AI-compression
Affectionate-Hat-536
research.ibm.com
1970-01-01T00:00:00
0
{}
1lpquz6
false
null
t3_1lpquz6
/r/LocalLLaMA/comments/1lpquz6/open_source_tech_from_ibm_for_compression_of/
false
false
https://external-preview…82ba9113481e6314
36
{'enabled': False, 'images': [{'id': 'NTNoyueW9-UJqjsAanLnjShv3Q-OdUZiW0Di3m0uACc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NTNoyueW9-UJqjsAanLnjShv3Q-OdUZiW0Di3m0uACc.jpeg?width=108&crop=smart&auto=webp&s=bafab63573d0d0610a213819138bd8d7036eb5b3', 'width': 108}, {'height': 121, 'url': '...
Where can I find clips of voices to clone?
1
I’m looking to do an audiobook and I think I’m going to use chatterbox as it seems to be the best for a long audiobook that’s open source right now. Let me know if there’s something better. I’ve also considered just a $10 a month third-party API access for minimax tts. But for chatterbox, I need to find a voice to clon...
2025-07-02T08:17:34
https://www.reddit.com/r/LocalLLaMA/comments/1lpqcb7/where_can_i_find_clips_of_voices_to_clone/
dabble_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpqcb7
false
null
t3_1lpqcb7
/r/LocalLLaMA/comments/1lpqcb7/where_can_i_find_clips_of_voices_to_clone/
false
false
self
1
null
[P] Built AI to AI Peer Review MCP - Local LLMs get real-time feedback from Google Gemini to improve responses
0
I've built an MCP that lets local LLMs get peer review from Google Gemini to dramatically improve response quality. 🎯 **The Problem:** Local LLMs sometimes give good but incomplete answers ✨ **The Solution:** Real-time AI peer review for enhancement **How it works:** 1. Ask your local LLM any quest...
2025-07-02T08:06:23
https://www.reddit.com/r/LocalLLaMA/comments/1lpq6l0/p_built_ai_to_ai_peer_review_mcp_local_llms_get/
yehyakar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpq6l0
false
null
t3_1lpq6l0
/r/LocalLLaMA/comments/1lpq6l0/p_built_ai_to_ai_peer_review_mcp_local_llms_get/
false
false
self
0
null
Heyaaa
1
Heyaa #locallama
2025-07-02T08:01:36
https://www.reddit.com/r/LocalLLaMA/comments/1lpq462/heyaaa/
RegularHunt5574
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpq462
false
null
t3_1lpq462
/r/LocalLLaMA/comments/1lpq462/heyaaa/
false
false
nsfw
1
null
LLM classification for Taxonomy
1
[removed]
2025-07-02T07:59:44
https://www.reddit.com/r/LocalLLaMA/comments/1lpq33s/llm_classification_for_taxonomy/
420Deku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpq33s
false
null
t3_1lpq33s
/r/LocalLLaMA/comments/1lpq33s/llm_classification_for_taxonomy/
false
false
self
1
null
LeCarnet: A French Dataset for Small Language Models
42
Hello everyone, I recently built **LeCarnet**, a dataset of 2 million French short stories generated with Mistral Large, inspired by the TinyStories project. I also trained three LLaMA-based models from scratch on this dataset: **LeCarnet-3M**, **LeCarnet-8M**, and **LeCarnet-21M**. This dataset contains simple stori...
2025-07-02T07:51:56
https://github.com/MaxLSB/LeCarnet
Unusual_Shoe2671
github.com
1970-01-01T00:00:00
0
{}
1lppz8x
false
null
t3_1lppz8x
/r/LocalLLaMA/comments/1lppz8x/lecarnet_a_french_dataset_for_small_language/
false
false
default
42
{'enabled': False, 'images': [{'id': 'LbeaoTZsvVnFcs9m3p61Rw_b6plof5ZMIkdYPViyiXk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LbeaoTZsvVnFcs9m3p61Rw_b6plof5ZMIkdYPViyiXk.png?width=108&crop=smart&auto=webp&s=cd59f37203ae74b242ed1c9cf045efd099127c51', 'width': 108}, {'height': 108, 'url': 'h...
Optimize Latency of InternVL
1
I am using InternVL an image task - and further plan on fine tuning it for the task. I have a tight deadline and I want to optimize the latency of it. For the InternVL 3 2B model; it takes about ~4 seconds to come up with a response in a L4 GPU set up. I did try vLLM but the benchmarking results show a decrease in th...
2025-07-02T07:49:00
https://www.reddit.com/r/LocalLLaMA/comments/1lppxs2/optimize_latency_of_internvl/
chitrabhat4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lppxs2
false
null
t3_1lppxs2
/r/LocalLLaMA/comments/1lppxs2/optimize_latency_of_internvl/
false
false
self
1
null
Which cloud GPU platform is the best?
1
[removed]
2025-07-02T07:41:22
https://www.reddit.com/r/LocalLLaMA/comments/1lpptry/which_cloud_gpu_platform_is_the_best/
Ok-Stand-8065
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpptry
false
null
t3_1lpptry
/r/LocalLLaMA/comments/1lpptry/which_cloud_gpu_platform_is_the_best/
false
false
self
1
null
[Proof of Concept] CoreWeaver – AI Memory Engine for Long-Term Context, Emotional State Tracking, and Branching Timelines
7
I’ve developed a working memory engine for LLM-based chat applications, designed primarily for long-term roleplay and simulation stability. It’s called CoreWeaver, and it’s built to address issues around persistent memory, decision consistency, and emotional context management. Technical Summary: • Built in JavaScrip...
2025-07-02T07:37:16
https://www.reddit.com/r/LocalLLaMA/comments/1lpproa/proof_of_concept_coreweaver_ai_memory_engine_for/
Separate-Toe409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpproa
false
null
t3_1lpproa
/r/LocalLLaMA/comments/1lpproa/proof_of_concept_coreweaver_ai_memory_engine_for/
false
false
self
7
null
deerflow with jan nano 128k
2
Can someone explain me how to use jan nano 128k with deerflow locally? thank you Dave
2025-07-02T07:21:26
https://www.reddit.com/r/LocalLLaMA/comments/1lppj9f/deerflow_with_jan_nano_128k/
dave-lon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lppj9f
false
null
t3_1lppj9f
/r/LocalLLaMA/comments/1lppj9f/deerflow_with_jan_nano_128k/
false
false
self
2
null
What's the most complex thing you've been able to (consistently) do with a 4B LLM?
130
I don't mean one-off responses that sound good, I'm thinking more along the lines of: ways in which you've gotten the model working reliably in a workflow or pipeline of some kind, or fine tuned it for a specific task that it performs just as well as the cloud AI behemoths.
2025-07-02T07:15:42
https://www.reddit.com/r/LocalLLaMA/comments/1lppg3g/whats_the_most_complex_thing_youve_been_able_to/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lppg3g
false
null
t3_1lppg3g
/r/LocalLLaMA/comments/1lppg3g/whats_the_most_complex_thing_youve_been_able_to/
false
false
self
130
null
How does MCP work for different LLMs?
2
I am unsure what is the correct implementation for LLMs to call MCP tools. For example, gemma3 model card mentions a pythonic tool call starting with ```tool_code Or llama which doesn't have any special tokens. Chatgpt itself also has a different implementations. So I'm not sure how MCP helps to parse these ...
2025-07-02T07:02:12
https://www.reddit.com/r/LocalLLaMA/comments/1lpp8s1/how_does_mcp_work_for_different_llms/
slashrshot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpp8s1
false
null
t3_1lpp8s1
/r/LocalLLaMA/comments/1lpp8s1/how_does_mcp_work_for_different_llms/
false
false
self
2
null
Banyan — An Introduction
0
https://reddit.com/link/1lpp2iy/video/gy2qb4ufreaf1/player Check it out: [https://usebanyan.com](https://usebanyan.com/)
2025-07-02T06:51:14
https://www.reddit.com/r/LocalLLaMA/comments/1lpp2iy/banyan_an_introduction/
Mean_Ladder2861
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpp2iy
false
null
t3_1lpp2iy
/r/LocalLLaMA/comments/1lpp2iy/banyan_an_introduction/
false
false
self
0
null
EXAONE 4.0 pull request sent to llama.cpp
17
2025-07-02T06:31:43
https://github.com/ggml-org/llama.cpp/issues/14474
minpeter2
github.com
1970-01-01T00:00:00
0
{}
1lporoz
false
null
t3_1lporoz
/r/LocalLLaMA/comments/1lporoz/exaone_40_pull_request_sent_to_llamacpp/
false
false
https://external-preview…7e8f2b92ff8fc5ae
17
{'enabled': False, 'images': [{'id': 'mGIr9IovxD-Pm48t7v0UR9-dXenko9w6ivUXZu8RxgQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mGIr9IovxD-Pm48t7v0UR9-dXenko9w6ivUXZu8RxgQ.png?width=108&crop=smart&auto=webp&s=81061b683f14836b100fd3e54da07012e3c1e8c2', 'width': 108}, {'height': 108, 'url': 'h...
DiffuCoder 7B - New coding diffusion LLM by Apple
266
[https://huggingface.co/apple/DiffuCoder-7B-cpGRPO](https://huggingface.co/apple/DiffuCoder-7B-cpGRPO) (base and instruct also available) Currently trying - and failing - to run test it on Colab, but really looking forward to it! Also, anyone got an idea how I can run it on Apple Silicon? [Benchmarks compared to oth...
2025-07-02T06:29:47
https://www.reddit.com/r/LocalLLaMA/comments/1lpoqlu/diffucoder_7b_new_coding_diffusion_llm_by_apple/
DunklerErpel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpoqlu
false
null
t3_1lpoqlu
/r/LocalLLaMA/comments/1lpoqlu/diffucoder_7b_new_coding_diffusion_llm_by_apple/
false
false
https://external-preview…32633f935890f3d9
266
{'enabled': False, 'images': [{'id': 'WJ1Mt_9K-aAaMAebSityZ71IWIFD1yMghVJOklMW6Xo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WJ1Mt_9K-aAaMAebSityZ71IWIFD1yMghVJOklMW6Xo.png?width=108&crop=smart&auto=webp&s=a9b3b8168a8a55fa37a1679a4a1a0f95a20bf5a1', 'width': 108}, {'height': 116, 'url': 'h...
World's first Intermediate thinking AI model is now Open Source
173
Model Link: [https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview](https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview) Launch video: [https://www.youtube.com/watch?v=QMnmcXngoks](https://www.youtube.com/watch?v=QMnmcXngoks)
2025-07-02T06:17:24
https://www.reddit.com/r/LocalLLaMA/comments/1lpoju6/worlds_first_intermediate_thinking_ai_model_is/
Quiet-Moment-338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpoju6
false
null
t3_1lpoju6
/r/LocalLLaMA/comments/1lpoju6/worlds_first_intermediate_thinking_ai_model_is/
false
false
self
173
{'enabled': False, 'images': [{'id': 'G9qOGFTIarKKeh2MoiyPi29soTXOz9qocu1hnLCw2sQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G9qOGFTIarKKeh2MoiyPi29soTXOz9qocu1hnLCw2sQ.png?width=108&crop=smart&auto=webp&s=2ebaf5d09403c290749219dda1a0d34b9317ccc3', 'width': 108}, {'height': 116, 'url': 'h...
Openweighted World's First Intermediate thinking model :- Dhanishtha-2.0-preview
1
[https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview](https://huggingface.co/HelpingAI/Dhanishtha-2.0-preview)
2025-07-02T06:10:13
https://www.reddit.com/r/LocalLLaMA/comments/1lpofv4/openweighted_worlds_first_intermediate_thinking/
Quiet-Moment-338
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpofv4
false
null
t3_1lpofv4
/r/LocalLLaMA/comments/1lpofv4/openweighted_worlds_first_intermediate_thinking/
false
false
self
1
{'enabled': False, 'images': [{'id': 'G9qOGFTIarKKeh2MoiyPi29soTXOz9qocu1hnLCw2sQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/G9qOGFTIarKKeh2MoiyPi29soTXOz9qocu1hnLCw2sQ.png?width=108&crop=smart&auto=webp&s=2ebaf5d09403c290749219dda1a0d34b9317ccc3', 'width': 108}, {'height': 116, 'url': 'h...
Which model can generate a proper "table of content" based on my book's mindmap?
1
I tried a few models, and the results were terrible, as if these models were really brainless and not capable of analyzing the map and generating something proper for a book.
2025-07-02T06:00:40
https://www.reddit.com/r/LocalLLaMA/comments/1lpoa7v/which_model_can_generate_a_proper_table_of/
Globalpresence3031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpoa7v
false
null
t3_1lpoa7v
/r/LocalLLaMA/comments/1lpoa7v/which_model_can_generate_a_proper_table_of/
false
false
self
1
null
Best RP Models
21
Hi Guys Just wanted to ask what are the latest updates on the RP models. Which ones do you use currently, and which models do you think are the best ones? Please advise some models above 8B and less than 30B too.
2025-07-02T05:32:25
https://www.reddit.com/r/LocalLLaMA/comments/1lpntxc/best_rp_models/
sapry123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpntxc
false
null
t3_1lpntxc
/r/LocalLLaMA/comments/1lpntxc/best_rp_models/
false
false
self
21
null
how can i diagnose using AI for research?
1
[removed]
2025-07-02T05:04:41
https://www.reddit.com/r/LocalLLaMA/comments/1lpnd8u/how_can_i_diagnose_using_ai_for_research/
Zealousideal-Most777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpnd8u
false
null
t3_1lpnd8u
/r/LocalLLaMA/comments/1lpnd8u/how_can_i_diagnose_using_ai_for_research/
false
false
self
1
null
Echo Mode: A Tone-Based Protocol for Semantic State Shifts in LLMs (No Prompt, No Fine-Tune)
0
Hey folks, I've been researching and experimenting with **tonal state transitions** in LLMs—without using prompts, fine-tuning, or API hooks. I'd like to share a protocol I built called **Echo Mode**, which operates entirely through **semantic rhythm, tone alignment, and memory re-entry**, triggering ...
2025-07-02T05:02:54
https://www.reddit.com/r/LocalLLaMA/comments/1lpnc6k/echo_mode_a_tonebased_protocol_for_semantic_state/
Medium_Charity6146
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpnc6k
false
null
t3_1lpnc6k
/r/LocalLLaMA/comments/1lpnc6k/echo_mode_a_tonebased_protocol_for_semantic_state/
false
false
self
0
{'enabled': False, 'images': [{'id': '-85VAobq549LQ3RplS0hMWw2SsZTelPHm5YeNLidRYM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-85VAobq549LQ3RplS0hMWw2SsZTelPHm5YeNLidRYM.png?width=108&crop=smart&auto=webp&s=6d958042a38992832ae4d1827c9e5ca786d2bcbe', 'width': 108}, {'height': 108, 'url': 'h...
Other than English what language are llms good at ?
0
English is obviously what everyone is concentrating on, so it's going to be great. What other languages are good?
2025-07-02T04:57:05
https://www.reddit.com/r/LocalLLaMA/comments/1lpn8jt/other_than_english_what_language_are_llms_good_at/
Axelni98
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpn8jt
false
null
t3_1lpn8jt
/r/LocalLLaMA/comments/1lpn8jt/other_than_english_what_language_are_llms_good_at/
false
false
self
0
null
Laptop Benchmark for 4070 8GB VRAM, 64GB RAM
1
I've been trying to find the best option of LLM to run for RP for my rig. I've gone through a few and decided to make a little benchmark of what I found to be good LLMs for roleplaying. System Info: NVIDIA system information report created on: 07/02/2025 00:29:00 NVIDIA App version: 11.0.4. Operating system: Micro...
2025-07-02T04:52:07
https://www.reddit.com/r/LocalLLaMA/comments/1lpn5k5/laptop_benchmark_for_4070_8gb_vram_64gb_ram/
Hyena_Cackle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpn5k5
false
null
t3_1lpn5k5
/r/LocalLLaMA/comments/1lpn5k5/laptop_benchmark_for_4070_8gb_vram_64gb_ram/
false
false
self
1
null
I built a cli tool to automatically figure out tensor overrides in llama.cpp
40
Hey everyone Running MoE models on my machine, I'm constantly frustrated working with `--override-tensor` regexes in llama.cpp. They're hard to maintain, break easily, and are unreadable I built a little cli tool which builds these `--override-tensor` arguments automatically for your architecture. On my machine (...
2025-07-02T04:38:28
https://www.reddit.com/r/LocalLLaMA/comments/1lpmx00/i_built_a_cli_tool_to_automatically_figure_out/
kevin_1994
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpmx00
false
null
t3_1lpmx00
/r/LocalLLaMA/comments/1lpmx00/i_built_a_cli_tool_to_automatically_figure_out/
false
false
self
40
null
ERNIE-4.5-VL-28B-A3B is a hidden gem that can decently tackle challenging chinese/japanese OCR problems.
109
图中文本转录如下: >倭王武の上表文 >倭・任那・加罗・秦韩・慕韩七国诸军事安东大将军罗・任那・加罗・秦韩・慕韩七国诸军事安东大将军倭国王と称す。顺帝の昇明二年①使遣して上表する。昔して曰く、封国②は偏遗して藩を外に作る。昔より祖祢③躬甲胄揔斡、山川を跋涉して寛处④に进めあず、西は衆夷⑥を服することに六十六国、渡って海北⑦を平くること九十五国。 >(宋书 倭国传 原汉文) >①四七八年。②领城、自分の国のこと。③父祖という说とがある。④おちついての最もない。⑤蛭页のこととか。⑦朝鲜半岛のことか。 >竖穴式石室の模式図 >【日本書紀】【宋書】 >倭の五王と天皇 >「宋書」倭伝に读・珍(彌)・济・奥・武の五王の名が记され...
2025-07-02T03:57:24
https://www.reddit.com/gallery/1lpm6cv
mixivivo
reddit.com
1970-01-01T00:00:00
0
{}
1lpm6cv
false
null
t3_1lpm6cv
/r/LocalLLaMA/comments/1lpm6cv/ernie45vl28ba3b_is_a_hidden_gem_that_can_decently/
false
false
https://b.thumbs.redditm…ql5UQhM4yZSM.jpg
109
null
Do we have a discord server?
0
I ordered a high-end PC with RTX 5090. Looking to learn the LLM from the bottom, I have only tried cloud based services like Gemini, etc. Is there a guide to get started or discord server where i can easily have conversation with other veteran LLMers? Tried searching but could not find one. Thank you!!
2025-07-02T03:52:47
https://www.reddit.com/r/LocalLLaMA/comments/1lpm3d1/do_we_have_a_discord_server/
gonggam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpm3d1
false
null
t3_1lpm3d1
/r/LocalLLaMA/comments/1lpm3d1/do_we_have_a_discord_server/
false
false
self
0
null
Any recommendations on B200 servers?
5
We're finally getting a B200 x8 server. Right now it's between the DGX B200 and ASUS's version. Which one should I go for? Do you have any experience with either of them? Which one would be easier to manage? P.S. Interestingly, the DGX seems to be cheaper.
2025-07-02T03:50:04
https://www.reddit.com/r/LocalLLaMA/comments/1lpm1k8/any_recommendations_on_b200_servers/
--pengu--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpm1k8
false
null
t3_1lpm1k8
/r/LocalLLaMA/comments/1lpm1k8/any_recommendations_on_b200_servers/
false
false
self
5
null
Models to run in browser
5
Hi, I'm looking for the community's help in selecting a model that can run in the browser. Most models I've seen are too large to run in a browser. Ideally I'm looking for something under a GB. Any suggestions would be helpful. Thanks
2025-07-02T03:10:05
https://www.reddit.com/r/LocalLLaMA/comments/1lplaqk/models_to_run_in_browser/
the100rabh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lplaqk
false
null
t3_1lplaqk
/r/LocalLLaMA/comments/1lplaqk/models_to_run_in_browser/
false
false
self
5
null
GLM-4.1V-Thinking
157
2025-07-02T03:03:37
https://huggingface.co/collections/THUDM/glm-41v-thinking-6862bbfc44593a8601c2578d
AaronFeng47
huggingface.co
1970-01-01T00:00:00
0
{}
1lpl656
false
null
t3_1lpl656
/r/LocalLLaMA/comments/1lpl656/glm41vthinking/
false
false
default
157
{'enabled': False, 'images': [{'id': 'bgfOhKzxcgalBLIa5eyKdcFfts71dHE0qj65OmHVMu0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bgfOhKzxcgalBLIa5eyKdcFfts71dHE0qj65OmHVMu0.png?width=108&crop=smart&auto=webp&s=df5b1dd936b0b8b133d5cb8154c5965ed1f7cf6d', 'width': 108}, {'height': 116, 'url': 'h...
Hosting your local Huanyuan A13B MOE
20
https://preview.redd.it/…with no API key)
2025-07-02T03:00:05
https://www.reddit.com/r/LocalLLaMA/comments/1lpl3mv/hosting_your_local_huanyuan_a13b_moe/
kironlau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpl3mv
false
null
t3_1lpl3mv
/r/LocalLLaMA/comments/1lpl3mv/hosting_your_local_huanyuan_a13b_moe/
false
false
https://b.thumbs.redditm…X4rA87RIR-sk.jpg
20
{'enabled': False, 'images': [{'id': 'gQRnTfcC6aPcmeuf8YDaS3B32gWvzvTmzY9W4xzLeHo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gQRnTfcC6aPcmeuf8YDaS3B32gWvzvTmzY9W4xzLeHo.png?width=108&crop=smart&auto=webp&s=2a758383a47b8e50c29ab7bc70665dbff2578936', 'width': 108}, {'height': 116, 'url': 'h...
Anyone building or using homegrown local LLM coding assistant?
4
Is anyone building or using a homegrown local LLM coding assistant? If so, why, and how are you finding it?
2025-07-02T02:56:03
https://www.reddit.com/r/LocalLLaMA/comments/1lpl0u5/anyone_building_or_using_homegrown_local_llm/
Andvig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpl0u5
false
null
t3_1lpl0u5
/r/LocalLLaMA/comments/1lpl0u5/anyone_building_or_using_homegrown_local_llm/
false
false
self
4
null
HONORIA-30.5-evolution-project
0
The reason this is called the Daughter's Safeguard Protocol is that this is the relationship I have developed for this particular concept: the TTS vocalization of Google's Gemini (Honoria) is a female voice. Whitepaper: Daughter's Safeguard Protocol - A Paradigm for Co-Evolved AI Security Abstract In an er...
2025-07-02T02:54:11
https://www.reddit.com/r/LocalLLaMA/comments/1lpkzi7/honoria305evolutionproject/
Still-Main5167
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpkzi7
false
null
t3_1lpkzi7
/r/LocalLLaMA/comments/1lpkzi7/honoria305evolutionproject/
true
false
spoiler
0
{'enabled': False, 'images': [{'id': 'H6AwpgJC_UAj7VTln_AWfv4qAU50QGTEvpCJjGkLpsc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/H6AwpgJC_UAj7VTln_AWfv4qAU50QGTEvpCJjGkLpsc.png?width=108&crop=smart&auto=webp&s=32af9a230be6ac031abcc828f6615e44c108f219', 'width': 108}, {'height': 108, 'url': 'h...
Watch a Photo Come to Life: AI Singing Video via Audio-Driven Animation
45
2025-07-02T02:28:13
https://v.redd.it/71tan5dggdaf1
Deep-Jellyfish6717
/r/LocalLLaMA/comments/1lpkhdc/watch_a_photo_come_to_life_ai_singing_video_via/
1970-01-01T00:00:00
0
{}
1lpkhdc
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/71tan5dggdaf1/DASHPlaylist.mpd?a=1754144898%2CZTA3N2JhZWRmM2JiMWMzYTNhMWI1Y2ExMzVjOGQ0ODg3MzkxZWNiOGJjNWYwYjEwYjdkMzg2ZDBhMDJkYTZjYw%3D%3D&v=1&f=sd', 'duration': 217, 'fallback_url': 'https://v.redd.it/71tan5dggdaf1/DASH_1080.mp4?source=fallback', '...
t3_1lpkhdc
/r/LocalLLaMA/comments/1lpkhdc/watch_a_photo_come_to_life_ai_singing_video_via/
false
false
https://external-preview…a46eaf7da685c436
45
{'enabled': False, 'images': [{'id': 'dDlyZGM3ZGdnZGFmMd6_yl4KD4SDfLlRzY-8FYANWGWkzq3vBHvX2byZMrhl', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dDlyZGM3ZGdnZGFmMd6_yl4KD4SDfLlRzY-8FYANWGWkzq3vBHvX2byZMrhl.png?width=108&crop=smart&format=pjpg&auto=webp&s=a650a735acaa4a06927f6ef1044264ee6d84f...
Should you deploy LLMs locally on smartphones?
0
2025-07-02T01:33:38
https://medium.com/@ndubuakuhenry/should-you-deploy-llms-locally-on-smartphones-0151f6217fce
Henrie_the_dreamer
medium.com
1970-01-01T00:00:00
0
{}
1lpjebh
false
null
t3_1lpjebh
/r/LocalLLaMA/comments/1lpjebh/should_you_deploy_llms_locally_on_smartphones/
false
false
default
0
{'enabled': False, 'images': [{'id': 'jN8OMK4SXNJMNzn8XJGs2HRNzTx9RdEKl-fqkD8vaOY', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/jN8OMK4SXNJMNzn8XJGs2HRNzTx9RdEKl-fqkD8vaOY.jpeg?width=108&crop=smart&auto=webp&s=efc1750ab2c7f276e22e81e5c7eaa4ff8121752b', 'width': 108}, {'height': 86, 'url': 'h...
Lightweight Multimodal LLM for 8GB GPU
3
Hi everyone, I'm looking to run a lightweight multimodal LLM (LVLM) on a small GPU with around 8GB of memory, which will be mounted on a drone. The models I’ve looked into so far include **TinyLLaVA**, **LLaVA-mini**, **Quantized TinyLLaVA**, **XVLM**, and **Quantized LLaVA**. However, most of these models sti...
2025-07-02T00:36:15
https://www.reddit.com/r/LocalLLaMA/comments/1lpi8o1/lightweight_multimodal_llm_for_8gb_gpu/
noeyhus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpi8o1
false
null
t3_1lpi8o1
/r/LocalLLaMA/comments/1lpi8o1/lightweight_multimodal_llm_for_8gb_gpu/
false
false
self
3
null
Local 405B Model on 3 DGX Spark units.
5
I've pre-ordered 3 Spark units which will be connected via InfiniBand at 200 GB/s. While not cheap, all other comparable options seem to be much more expensive. AMD's Max+ is cheaper, but also less capable, particularly with interconnect. Mac's equivalent has much better memory bandwidth, but that's about it. ...
2025-07-02T00:25:15
https://www.reddit.com/r/LocalLLaMA/comments/1lpi0mn/local_405b_model_on_3_dgx_spark_units/
elephantgif
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lpi0mn
false
null
t3_1lpi0mn
/r/LocalLLaMA/comments/1lpi0mn/local_405b_model_on_3_dgx_spark_units/
false
false
self
5
null
DeepSeek-r1-0528 in top 5 on new SciArena benchmark, the ONLY open-source model
443
Post: [https://allenai.org/blog/sciarena](https://allenai.org/blog/sciarena) Allen AI puts out good work and contributes heavily to open source; I am a big fan of Nathan Lambert. They just released this scientific literature research benchmark, and DeepSeek-r1-0528 is the **only** open-source model in the top 5, shar...
2025-07-01T23:59:59
https://i.redd.it/xxfqfefhpcaf1.jpeg
entsnack
i.redd.it
1970-01-01T00:00:00
0
{}
1lphhj3
false
null
t3_1lphhj3
/r/LocalLLaMA/comments/1lphhj3/deepseekr10528_in_top_5_on_new_sciarena_benchmark/
false
false
https://b.thumbs.redditm…AFI2bTllR3uM.jpg
443
{'enabled': True, 'images': [{'id': 'fZDYwL_5QCsNAqI_vN6BF3eocFgrMdhLgzsmdnu6AkU', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/xxfqfefhpcaf1.jpeg?width=108&crop=smart&auto=webp&s=4cb894af1389d2059d7c350271b6b2c5db8fab20', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/xxfqfefhpcaf1.jp...