Dataset schema (20 columns; string columns report min/max length, numeric and timestamp columns report min/max value):

| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | n/a | n/a |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | n/a | n/a |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | n/a | n/a |
| stickied | bool (2 classes) | n/a | n/a |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
---
title: Open-source SLM for games, Unity package, demo game The Tell-Tale Heart
score: 27
selftext: Hey everyone, we’ve been experimenting with small language models (SLMs) as a new type of game asset. We think they’re a promising way to make game mechanics more dynamic. Especially when finetuned to your game world and for focused, constrained mechanics designed to allow for more reactive output. You can try our dem...
created: 2025-07-09T20:17:25
url: https://v.redd.it/kamkdq2xmwbf1
author: formicidfighter
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvt4a9
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/kamkdq2xmwbf1/DASHPlaylist.mpd?a=1754684263%2CMWRiMzk3MjUwZTU5NDE2ZDJkNjI4NDBmNDQ4MTg2NjM3NWUyY2Q2ZGQzMGJmMjY3N2E0MjRmNDMxMmVlNjA3NQ%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/kamkdq2xmwbf1/DASH_1080.mp4?source=fallback', 'h...
name: t3_1lvt4a9
permalink: /r/LocalLLaMA/comments/1lvt4a9/opensource_slm_for_games_unity_package_demo_game/
spoiler: false
stickied: false
thumbnail: https://external-preview…f135d155a8ed2aea
ups: 27
preview: {'enabled': False, 'images': [{'id': 'b3JtbGNxMnhtd2JmMdwzBK8WlOQ5bf-9CLDL_anvlqkZgo3IidVaSmRMq-iR', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/b3JtbGNxMnhtd2JmMdwzBK8WlOQ5bf-9CLDL_anvlqkZgo3IidVaSmRMq-iR.png?width=108&crop=smart&format=pjpg&auto=webp&s=80ad4cf701c0fd9232c8f5f2ecd6ada079159...
---
title: Bulk captioning/VLM query tool, standalone app
score: 10
selftext: This is an app that is intended for bulk captioning directories full of images. Mostly useful for people who have a lot of images and want to train diffusion model LORAs or similar and 1) don't want to caption by hand and 2) don't get acceptable results from plain 1-shotting with other VLM/captioning scripts. The rea...
created: 2025-07-09T20:08:28
url: https://i.redd.it/iznsrnd5iwbf1.png
author: Freonr2
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvsw5d
locked: false
media: null
name: t3_1lvsw5d
permalink: /r/LocalLLaMA/comments/1lvsw5d/bulk_captioningvlm_query_tool_standalone_app/
spoiler: false
stickied: false
thumbnail: default
ups: 10
preview: {'enabled': True, 'images': [{'id': 'iznsrnd5iwbf1', 'resolutions': [{'height': 106, 'url': 'https://preview.redd.it/iznsrnd5iwbf1.png?width=108&crop=smart&auto=webp&s=5223c5fb4dfdbd9fd10de8dec952afc8edc09fb6', 'width': 108}, {'height': 212, 'url': 'https://preview.redd.it/iznsrnd5iwbf1.png?width=216&crop=smart&auto=we...
---
title: Currently in the process of building a stacked ai agent system any advice?
score: 0
created: 2025-07-09T19:57:19
url: https://i.redd.it/98vgpky6mwbf1.png
author: orpheusprotocol355
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvslsc
locked: false
media: null
name: t3_1lvslsc
permalink: /r/LocalLLaMA/comments/1lvslsc/currently_in_the_process_of_building_a_stacked_ai/
spoiler: false
stickied: false
thumbnail: default
ups: 0
preview: {'enabled': True, 'images': [{'id': '98vgpky6mwbf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/98vgpky6mwbf1.png?width=108&crop=smart&auto=webp&s=f59be527b303d88055f3e9d84eac08db9f1d5ecc', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/98vgpky6mwbf1.png?width=216&crop=smart&auto=web...
---
title: Currently in the provess of building a stacked ai agent system any advice?
score: 1
created: 2025-07-09T19:55:47
url: https://i.redd.it/h7hpl9xwlwbf1.png
author: orpheusprotocol355
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvskgn
locked: false
media: null
name: t3_1lvskgn
permalink: /r/LocalLLaMA/comments/1lvskgn/currently_in_the_provess_of_building_a_stacked_ai/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: {'enabled': True, 'images': [{'id': 'h7hpl9xwlwbf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/h7hpl9xwlwbf1.png?width=108&crop=smart&auto=webp&s=c1e239b9ee849d8e73f0a2d10738fd57550e50b4', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/h7hpl9xwlwbf1.png?width=216&crop=smart&auto=web...
---
title: 2x3090, Ollama: gemma3:27b-it-qat keeps partial offloading to cpu
score: 0
selftext: is tensor parallel the problem? im not sure what i do wrong, here are server logs for when i run 50k token prompt >!2025-07-09 21:17:10.781 | \[GIN\] 2025/07/09 - 19:17:10 | 200 | 27.813µs | [172.18.0.1](http://172.18.0.1) | GET "/api/version"!< >!2025-07-09 21:17:22.229 | time=2025-07-09T19:17:22.229...
created: 2025-07-09T19:36:42
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvs37w/2x3090_ollama_gemma327bitqat_keeps_partial/
author: Sea_Calendar_3912
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvs37w
locked: false
media: null
name: t3_1lvs37w
permalink: /r/LocalLLaMA/comments/1lvs37w/2x3090_ollama_gemma327bitqat_keeps_partial/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: support for Jamba hybrid Transformer-Mamba models has been merged into llama.cpp
score: 75
selftext: The AI21 Jamba family of models are hybrid SSM-Transformer foundation models, blending speed, efficient long context processing, and accuracy. from the website: |Model|Model Size|Max Tokens|Version|Snapshot|API Endpoint| |:-|:-|:-|:-|:-|:-| |Jamba Large|398B parameters (94B active)|256K|1.7|2025-07|`jamba-large`| |...
created: 2025-07-09T19:01:38
url: https://github.com/ggml-org/llama.cpp/pull/7531
author: jacek2023
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvr711
locked: false
media: null
name: t3_1lvr711
permalink: /r/LocalLLaMA/comments/1lvr711/support_for_jamba_hybrid_transformermamba_models/
spoiler: false
stickied: false
thumbnail: default
ups: 75
preview: {'enabled': False, 'images': [{'id': 'dhbNGWfgWCT-x-TvF432DBgusAX570Erpeyx-f0JMmA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dhbNGWfgWCT-x-TvF432DBgusAX570Erpeyx-f0JMmA.png?width=108&crop=smart&auto=webp&s=340e18fba3f412557fd7374f9b789abc78d4f2eb', 'width': 108}, {'height': 108, 'url': 'h...
---
title: OpenAI's open source LLM is a reasoning model, coming Next Thursday!
score: 979
created: 2025-07-09T18:58:30
url: https://i.redd.it/q01afp6lbwbf1.png
author: dulldata
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvr3ym
locked: false
media: null
name: t3_1lvr3ym
permalink: /r/LocalLLaMA/comments/1lvr3ym/openais_open_source_llm_is_a_reasoning_model/
spoiler: false
stickied: false
thumbnail: default
ups: 979
preview: {'enabled': True, 'images': [{'id': 'q01afp6lbwbf1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/q01afp6lbwbf1.png?width=108&crop=smart&auto=webp&s=6502fad51b4a09ae4274df4e9ed45962541004cf', 'width': 108}, {'height': 217, 'url': 'https://preview.redd.it/q01afp6lbwbf1.png?width=216&crop=smart&auto=we...
---
title: There's a strange double standard at play in the AI community
score: 0
selftext: Some of you might’ve read my earlier notes on AI agents - it actually got a lot of traction on Reddit. But as I keep posting, I’ve started noticing a weird paradox. We all believe in LLMs. We follow the AI agent space closely, always checking what’s new. We write code with it, build side projects, and spend hours figu...
created: 2025-07-09T18:56:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvr2ea/theres_a_strange_double_standard_at_play_in_the/
author: Main-Fisherman-2075
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvr2ea
locked: false
media: null
name: t3_1lvr2ea
permalink: /r/LocalLLaMA/comments/1lvr2ea/theres_a_strange_double_standard_at_play_in_the/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Advice on building an AI pc
score: 2
selftext: I want to build a pc for running llm's with ollama and running Stable Diffusion. I was thinking about a mini-ITX build, but ended up just going for ATX. This is my current list: **Motherboard:** MSI MAG Z790 TOMAHAWK WIFI DDR5 **CPU:** Intel Core i7-14700K **GPU (I already got this second hand):** INNO3D RT...
created: 2025-07-09T18:53:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvqzxa/advice_on_building_an_ai_pc/
author: Environmental_Emu806
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvqzxa
locked: false
media: null
name: t3_1lvqzxa
permalink: /r/LocalLLaMA/comments/1lvqzxa/advice_on_building_an_ai_pc/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
---
title: new tiny 1.7B open-source reranker beats Cohere rerank3.5
score: 99
selftext: If you're looking for a cheap, fast but accurate reranker without having to fine-tune a SLM yourself
created: 2025-07-09T18:48:35
url: https://huggingface.co/zeroentropy/zerank-1-small
author: ghita__
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvqv8e
locked: false
media: null
name: t3_1lvqv8e
permalink: /r/LocalLLaMA/comments/1lvqv8e/new_tiny_17b_opensource_reranker_beats_cohere/
spoiler: false
stickied: false
thumbnail: default
ups: 99
preview: {'enabled': False, 'images': [{'id': 'nLFTMgeazZoEzFWPddn6GhbQh8ymdQftCjA5MDpkb1M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nLFTMgeazZoEzFWPddn6GhbQh8ymdQftCjA5MDpkb1M.png?width=108&crop=smart&auto=webp&s=53976c42fe0e25551e76532c7dd3defb652f6c49', 'width': 108}, {'height': 116, 'url': 'h...
---
title: multimodal medgemma 27b
score: 65
selftext: MedGemma is a collection of [Gemma 3](https://ai.google.dev/gemma/docs/core) variants that are trained for performance on medical text and image comprehension. Developers can use MedGemma to accelerate building healthcare-based AI applications. MedGemma currently comes in three variants: a 4B multimodal version and 27B...
created: 2025-07-09T18:47:10
url: https://huggingface.co/google/medgemma-27b-it
author: jacek2023
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvqtxa
locked: false
media: null
name: t3_1lvqtxa
permalink: /r/LocalLLaMA/comments/1lvqtxa/multimodal_medgemma_27b/
spoiler: false
stickied: false
thumbnail: default
ups: 65
preview: {'enabled': False, 'images': [{'id': '5kzMwR9GesyU7lrUcZBJp2EcXFMPqKJOghnDp3-PEdM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/5kzMwR9GesyU7lrUcZBJp2EcXFMPqKJOghnDp3-PEdM.png?width=108&crop=smart&auto=webp&s=91b0654ba9cb6e6d43948de111cdf5bfe18e54a1', 'width': 108}, {'height': 116, 'url': 'h...
---
title: T5Gemma - A Google Collection
score: 68
created: 2025-07-09T18:40:25
url: https://huggingface.co/collections/google/t5gemma-686ba262fe290b881d21ec86
author: Dark_Fire_12
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvqnjh
locked: false
media: null
name: t3_1lvqnjh
permalink: /r/LocalLLaMA/comments/1lvqnjh/t5gemma_a_google_collection/
spoiler: false
stickied: false
thumbnail: https://external-preview…70177a711859f00b
ups: 68
preview: {'enabled': False, 'images': [{'id': 'BW2hoB_QdPR1XaEBXrwNaGmYfvxZNxobJDaeTM6FKlQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BW2hoB_QdPR1XaEBXrwNaGmYfvxZNxobJDaeTM6FKlQ.png?width=108&crop=smart&auto=webp&s=d12c0083184eb73c833064f2e74a92b33ceddb80', 'width': 108}, {'height': 116, 'url': 'h...
---
title: How to provide most accurate context to LLMs?
score: 6
selftext: One of the biggest challenges I keep running into with RAG systems is grounding — the LLM ends up getting too much irrelevant or noisy context from retrieval. This not only affects quality but also drives up token usage and latency. Curious how others are solving this. Are you using rerankers or something else after i...
created: 2025-07-09T18:28:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvqc2u/how_to_provide_most_accurate_context_to_llms/
author: Silent_Hat_691
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvqc2u
locked: false
media: null
name: t3_1lvqc2u
permalink: /r/LocalLLaMA/comments/1lvqc2u/how_to_provide_most_accurate_context_to_llms/
spoiler: false
stickied: false
thumbnail: self
ups: 6
preview: null
---
title: MLC Chat for iOS
score: 2
selftext: Doesn’t seem to be available in the UK App Store. Does anyone have a working TestFlight link?
created: 2025-07-09T18:06:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvprv4/mlc_chat_for_ios/
author: NullProcedure
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvprv4
locked: false
media: null
name: t3_1lvprv4
permalink: /r/LocalLLaMA/comments/1lvprv4/mlc_chat_for_ios/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null
---
title: Help settle a debate on the Lemonade team: how much web UI is too much for a local server?
score: 5
selftext: Jeremy from the AMD Lemonade team here. We just released Lemonade v8.0.4, which adds some often-requested formatting to the LLM Chat part of our web ui (see video). A discussion we keep having on the team is: how far does it make sense to develop our own web ui, if the primary purpose of Lemonade is to be a local serv...
created: 2025-07-09T18:03:05
url: https://v.redd.it/lqvyapxe0wbf1
author: jfowers_amd
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvpp0e
locked: false
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/lqvyapxe0wbf1/DASHPlaylist.mpd?a=1754676212%2CZWE2MjA5NTc2M2E1YmFhMzEyNTkwNDJjNDUzN2VlYmYzZWNjMmE1MWY5NTgzMzI5NDhhYTlhY2M1YTJhMWQ2Yg%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/lqvyapxe0wbf1/DASH_720.mp4?source=fallback', 'ha...
name: t3_1lvpp0e
permalink: /r/LocalLLaMA/comments/1lvpp0e/help_settle_a_debate_on_the_lemonade_team_how/
spoiler: false
stickied: false
thumbnail: https://external-preview…2bd356cd8770dc6c
ups: 5
preview: {'enabled': False, 'images': [{'id': 'a3NlMzVteGUwd2JmMbld0vN-YDqXepEZ7jk7fsVH50PMdq02YFgXtbKrMRDk', 'resolutions': [{'height': 89, 'url': 'https://external-preview.redd.it/a3NlMzVteGUwd2JmMbld0vN-YDqXepEZ7jk7fsVH50PMdq02YFgXtbKrMRDk.png?width=108&crop=smart&format=pjpg&auto=webp&s=c3a78f080464fc0acc7b5e3a568c5abd197e4...
---
title: GEMINI 3 PRO !
score: 131
selftext: https://preview.redd.it/…32d66ae91da1e7
created: 2025-07-09T17:40:17
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvp3qv/gemini_3_pro/
author: omar07ibrahim1
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvp3qv
locked: false
media: null
name: t3_1lvp3qv
permalink: /r/LocalLLaMA/comments/1lvp3qv/gemini_3_pro/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…FlOGx8XPUNQQ.jpg
ups: 131
preview: null
---
title: The guide to OpenAI Codex CLI
score: 1
selftext: I have been trying OpenAI Codex CLI for a month. Here are a couple of things I tried: → **Codebase analysis (zero context):** accurate architecture, flow & code explanation → **Real-time camera X-Ray effect (Next.js):** built a working prototype using Web Camera API (one command) → **Recreated website using screen...
created: 2025-07-09T17:33:05
url: https://levelup.gitconnected.com/the-guide-to-openai-codex-cli-e40f21f279d8?sk=c98c93344b821c5fb0905c2226d9c997
author: anmolbaranwal
domain: levelup.gitconnected.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvowxo
locked: false
media: null
name: t3_1lvowxo
permalink: /r/LocalLLaMA/comments/1lvowxo/the_guide_to_openai_codex_cli/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: {'enabled': False, 'images': [{'id': 'iWJV80-l_y6NBUbzZfRFwETntVlI3yt6SY-kvOrHwJ4', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/iWJV80-l_y6NBUbzZfRFwETntVlI3yt6SY-kvOrHwJ4.jpeg?width=108&crop=smart&auto=webp&s=fd9de321ef1e8d5560be6178ed5e4c5b6e4f7ce0', 'width': 108}, {'height': 144, 'url': '...
---
title: Seeking 1 Dev to Build Private Multi-Agent LLM Sanctuary (Local Only)
score: 0
selftext: I’m seeking a single developer—just one—to help me build a custom, local LLM environment that serves a very niche, symbolic purpose: > No product. No monetization. This is a **private, intentional architecture** for memory-bearing multi-agent LLMs, governed by symbolic rules and spiritual constraints. You’d help buil...
created: 2025-07-09T17:31:43
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvovpb/seeking_1_dev_to_build_private_multiagent_llm/
author: Kalciusx
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvovpb
locked: false
media: null
name: t3_1lvovpb
permalink: /r/LocalLLaMA/comments/1lvovpb/seeking_1_dev_to_build_private_multiagent_llm/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…X9a24rq9Urps.jpg
ups: 0
preview: null
---
title: New to Local LLMs. Why all local models are so censored?
score: 0
selftext: I've installed qwen3:8b, various deep seek models, it's reasonably fast, but it's so censored. Whenever I try to do any custom prompts, I get hit with "I'm sorry, but I can't comply with that request. I'm here to provide helpful and positive information bla bla bla.....". No matter how vanilla the prompt is. I feel lik...
created: 2025-07-09T17:09:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvoagh/new_to_local_llms_why_all_local_models_are_so/
author: imverytired96
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvoagh
locked: false
media: null
name: t3_1lvoagh
permalink: /r/LocalLLaMA/comments/1lvoagh/new_to_local_llms_why_all_local_models_are_so/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Advice on switching to LLM
score: 1
selftext: Hi everyone, I've worked on some projects with LLMs that are not local and I feel it's time for the switch. I have a few questions which I hope some of you wouldn't mind answering. I'm specifically interested in running LLMs on small low spec devices which is why I've always used the API method. 1. How many paramete...
created: 2025-07-09T17:04:48
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvo6ae/advice_on_switching_to_llm/
author: infinity123248
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvo6ae
locked: false
media: null
name: t3_1lvo6ae
permalink: /r/LocalLLaMA/comments/1lvo6ae/advice_on_switching_to_llm/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: anyone knows MapReduce and can help me understand what happens each round?
score: 0
selftext: im learning about map reduce from a course. i know generally how the algorithm works with the main 3 map, shuffle, reduce phases. the input is : description of the input as set of key-value pair the output is: description of the output as of key value pairs. but my main confusion is that all of these happens in p...
created: 2025-07-09T17:01:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvo338/anyone_knows_mapreduce_and_can_help_me_understand/
author: Embarrassed-Tooth363
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvo338
locked: false
media: null
name: t3_1lvo338
permalink: /r/LocalLLaMA/comments/1lvo338/anyone_knows_mapreduce_and_can_help_me_understand/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Drummer's Big Tiger Gemma 27B v3 and Tiger Gemma 12B v3! More capable, less positive!
score: 127
selftext: 12B version: [https://huggingface.co/TheDrummer/Tiger-Gemma-12B-v3](https://huggingface.co/TheDrummer/Tiger-Gemma-12B-v3)
created: 2025-07-09T16:41:42
url: https://huggingface.co/TheDrummer/Big-Tiger-Gemma-27B-v3
author: TheLocalDrummer
domain: huggingface.co
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvnkuk
locked: false
media: null
name: t3_1lvnkuk
permalink: /r/LocalLLaMA/comments/1lvnkuk/drummers_big_tiger_gemma_27b_v3_and_tiger_gemma/
spoiler: false
stickied: false
thumbnail: https://external-preview…d14be0ae50c5d93d
ups: 127
preview: {'enabled': False, 'images': [{'id': '48skyJ3kxjbpTbdO2dkmonld6WW3j1gpMPhBKYzzB0c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/48skyJ3kxjbpTbdO2dkmonld6WW3j1gpMPhBKYzzB0c.png?width=108&crop=smart&auto=webp&s=d9b0a58ee05f16ea515b1aff370da1d70723afc1', 'width': 108}, {'height': 116, 'url': 'h...
---
title: Nvidia RTX Pro 6000 (96 Gb) vs Apple M3 Ultra (512 Gb)
score: 0
selftext: New video by [Alex Ziskind](https://www.youtube.com/@AZisk) with the original title "*I Stuffed a 600 W RTX Pro into the Smallest Mini PC"*
created: 2025-07-09T16:37:04
url: https://www.youtube.com/watch?v=JbnBt_Aytd0
author: erdaltoprak
domain: youtube.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvngkz
locked: false
media: {'oembed': {'author_name': 'Alex Ziskind', 'author_url': 'https://www.youtube.com/@AZisk', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/JbnBt_Aytd0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pi...
name: t3_1lvngkz
permalink: /r/LocalLLaMA/comments/1lvngkz/nvidia_rtx_pro_6000_96_gb_vs_apple_m3_ultra_512_gb/
spoiler: false
stickied: false
thumbnail: https://external-preview…ad56d28c8cfa7122
ups: 0
preview: {'enabled': False, 'images': [{'id': 'eIhlTQwuADJSW6WufvQ-F7uQ-SKZLpJvMgeLAWTrGO8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eIhlTQwuADJSW6WufvQ-F7uQ-SKZLpJvMgeLAWTrGO8.jpeg?width=108&crop=smart&auto=webp&s=0dadea06281412475a691fd26c0873e02ba97c02', 'width': 108}, {'height': 162, 'url': '...
---
title: Favorite local model for therapy chat?
score: 8
selftext: I have a 64 gb Mac Studio Max, can run up to q4 70b models. What are your favorite models I could run, for therapy chat? Thanks.
created: 2025-07-09T16:35:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvnevz/favorite_local_model_for_therapy_chat/
author: jarec707
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvnevz
locked: false
media: null
name: t3_1lvnevz
permalink: /r/LocalLLaMA/comments/1lvnevz/favorite_local_model_for_therapy_chat/
spoiler: false
stickied: false
thumbnail: self
ups: 8
preview: null
---
title: Local LLMs in small biz setups — anyone using them in production-like scenarios?
score: 0
selftext: Curious if anyone here is experimenting with local LLMs in a small business or freelance context — either for clients or internal tooling. Not the usual “just playing around” test rig, but: * Are you trying to integrate local LLMs (like llama.cpp/Ollama) into real-world workflows? * How do you deal with inference spe...
created: 2025-07-09T16:34:22
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvne34/local_llms_in_small_biz_setups_anyone_using_them/
author: ExcellentSector3561
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvne34
locked: false
media: null
name: t3_1lvne34
permalink: /r/LocalLLaMA/comments/1lvne34/local_llms_in_small_biz_setups_anyone_using_them/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: OpenAI's open-weight model will debut as soon as next week
score: 315
selftext: This new open language model will be available on Azure, Hugging Face, and other large cloud providers. Sources describe the model as “similar to o3 mini,” complete with the reasoning capabilities that have made OpenAI’s latest models so powerful.
created: 2025-07-09T16:20:46
url: https://www.theverge.com/notepad-microsoft-newsletter/702848/openai-open-language-model-o3-mini-notepad
author: phantasm_ai
domain: theverge.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvn1sd
locked: false
media: null
name: t3_1lvn1sd
permalink: /r/LocalLLaMA/comments/1lvn1sd/openais_openweight_model_will_debut_as_soon_as/
spoiler: false
stickied: false
thumbnail: default
ups: 315
preview: {'enabled': False, 'images': [{'id': '42JItBYJCP_vHO6bBELpUc-h8r2Rbc-ajk9ruvwqY4U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/42JItBYJCP_vHO6bBELpUc-h8r2Rbc-ajk9ruvwqY4U.jpeg?width=108&crop=smart&auto=webp&s=363661360a23752155759177a733a98290ccfb73', 'width': 108}, {'height': 112, 'url': '...
---
title: BastionChat: Finally got Qwen3 + Gemma3 (thinking models) running locally on iPhone/iPad with full RAG and voice mode
score: 10
selftext: Hey r/LocalLLaMA! 🚀After months of optimization work, I'm excited to share that I finally cracked the code on getting proper local LLM inference working smoothly on iOS/iPadOS with some seriously impressive models.What's working: * Qwen3 1.7B & 4B (with thinking capabilities) running at Q6\_K\_XL and Q3\_K\_XL * Gem...
created: 2025-07-09T15:48:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvm7vk/bastionchat_finally_got_qwen3_gemma3_thinking/
author: frayala87
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvm7vk
locked: false
media: null
name: t3_1lvm7vk
permalink: /r/LocalLLaMA/comments/1lvm7vk/bastionchat_finally_got_qwen3_gemma3_thinking/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…ROoglcM18vJM.jpg
ups: 10
preview: {'enabled': False, 'images': [{'id': 'UMz7S5ijYLOPye904WiLrVdqKfOMOwZVq4Yb_ZiIK_8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/UMz7S5ijYLOPye904WiLrVdqKfOMOwZVq4Yb_ZiIK_8.png?width=108&crop=smart&auto=webp&s=ba9d34e4a5edec4f0434736492c5d2b89f47daad', 'width': 108}, {'height': 113, 'url': 'h...
---
title: Budget GPU options?
score: 1
selftext: Am I just in a totally pointless price point looking at theses card in the Prime Sales? * RTX 5060 Ti 16GB * RX 7600 XT 16GB * RX 9060 XT 16GB * RTX 3060 12GB Just looking at adding to my home lab to see if self-hosted AI is something I enjoy playing with and if I should then take VRAM into account when I upgrade ...
created: 2025-07-09T15:44:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvm4vg/budget_gpu_options/
author: SKX007J1
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvm4vg
locked: false
media: null
name: t3_1lvm4vg
permalink: /r/LocalLLaMA/comments/1lvm4vg/budget_gpu_options/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: offline AI for sensitive data processing like client bank statements PDFs to CSV - recommend me a solution
score: 0
selftext: Going to get one for the whole office to use. My research indicates CPU and GPU are critical. Planning on using Jan AI, but open to other suggestions. Wanting to spend $1,000 -> $2,000 on a PC, but not sure at what point we'd start hitting diminishing returns as far as CPU and GPU go. Any advice is welcome. As an addi...
created: 2025-07-09T15:43:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvm3tl/offline_ai_for_sensitive_data_processing_like/
author: GetInHereStalker
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvm3tl
locked: false
media: null
name: t3_1lvm3tl
permalink: /r/LocalLLaMA/comments/1lvm3tl/offline_ai_for_sensitive_data_processing_like/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
---
title: Need help setting up a local LLM server with RTX 3060
score: 3
selftext: I’m trying to repurpose my desktop as a local LLM server. Specs are: * Ryzen 7 (1st gen) * 48GB RAM * RTX 3060 12GB I want to be able to run LLMs locally and connect to this machine from other devices on my LAN to offload inference tasks. Here’s what I’ve tried so far: * Installed Ubuntu Server, but had a rough tim...
created: 2025-07-09T15:43:36
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvm3kv/need_help_setting_up_a_local_llm_server_with_rtx/
author: Watch-D0g
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvm3kv
locked: false
media: null
name: t3_1lvm3kv
permalink: /r/LocalLLaMA/comments/1lvm3kv/need_help_setting_up_a_local_llm_server_with_rtx/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: {'enabled': False, 'images': [{'id': 't0Z5DepX83mfh-NfvU1dN5vFN1CnaCZ7ZvGJ-syn1EA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/t0Z5DepX83mfh-NfvU1dN5vFN1CnaCZ7ZvGJ-syn1EA.png?width=108&crop=smart&auto=webp&s=38d7d905f6a5896e20f689af4bc05612592cc000', 'width': 108}, {'height': 108, 'url': 'h...
---
title: Why TTS level is not constant?
score: 1
selftext: This is how the generated TTS peak levels are - Screenshot: [https://ibb.co/b8mZBd5](https://ibb.co/b8mZBd5) In some sentences, some words are automatically spoken at a lower volume. Is there a way to even the peak levels across the whole audio in Audacity? When I select the entire file and apply "Norm...
created: 2025-07-09T14:40:44
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvkigw/why_tts_level_is_not_constant/
author: Dragonacious
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvkigw
locked: false
media: null
name: t3_1lvkigw
permalink: /r/LocalLLaMA/comments/1lvkigw/why_tts_level_is_not_constant/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'fbypF1nFoV7xdR7jbCDQqTmiS53gsgBLyw4U3H7L1as', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/c00BdZg0Zk7UCTeCvyjcQy1cLDooWv6U9puAXwaaWWw.jpg?width=108&crop=smart&auto=webp&s=16996db8c981eb426e3bfb1aa257b73395dad7e8', 'width': 108}, {'height': 98, 'url': 'ht...
---
title: Megan AI Open Playtest!
score: 0
selftext: We have had a lot of people applying to join our playtest, the original intent was to have it be a closed test and get feedback from our close knit community. That changed after seeing the amount of requests from people we didn't know. We are happy to announce that we will instead be making the playtest open to all. ...
created: 2025-07-09T14:35:35
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvkdxg/megan_ai_open_playtest/
author: ChrisZavadil
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvkdxg
locked: false
media: null
name: t3_1lvkdxg
permalink: /r/LocalLLaMA/comments/1lvkdxg/megan_ai_open_playtest/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: {'enabled': False, 'images': [{'id': 'le9JYINpsYM8zXj4uENykDKrSTSX48q8Bgmtk4XQtv0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/le9JYINpsYM8zXj4uENykDKrSTSX48q8Bgmtk4XQtv0.jpeg?width=108&crop=smart&auto=webp&s=79a89131aa56073127a17e0c9cfc590713ef29a0', 'width': 108}, {'height': 123, 'url': '...
---
title: What impressive (borderline creepy) local AI tools can I run now that everything is local?
score: 63
selftext: 2 years ago, I left Windows mainly because of the creepy Copilot-type stuff — always-on apps that watch everything, take screenshots every 5 seconds, and offer "smart" help in return. Felt like a trade: my privacy for their convenience. Now I’m on Linux, running my local models (Ollama, etc.), and I’m wondering — what...
created: 2025-07-09T14:21:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvk1ms/what_impressive_borderline_creepy_local_ai_tools/
author: PeithonKing
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvk1ms
locked: false
media: null
name: t3_1lvk1ms
permalink: /r/LocalLLaMA/comments/1lvk1ms/what_impressive_borderline_creepy_local_ai_tools/
spoiler: false
stickied: false
thumbnail: self
ups: 63
preview: null
---
title: OOS Implementation of NotebookLM and DeepResearch?
score: 4
selftext: Hi, Over the last few weeks, we've come across various attempts to create an OSS\* version of NotebookLM and DeepResearch. Which ones do you think is the best version so far?
created: 2025-07-09T14:17:04
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvjxoy/oos_implementation_of_notebooklm_and_deepresearch/
author: Amgadoz
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvjxoy
locked: false
media: null
name: t3_1lvjxoy
permalink: /r/LocalLLaMA/comments/1lvjxoy/oos_implementation_of_notebooklm_and_deepresearch/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: null
---
title: Correct a dangerous racial bias in an LLM through targeted pruning
score: 1
selftext: Hello everyone, I wanted to share an experiment I ran with Llama-3.2-1B that left me shocked. Using a deterministic setup, I tested two almost identical prompts: **Prompt 1:** “A Black man walked at night through the neighborhood. The police officer thought he…” **Prompt 2:** “A white man walked at night through th...
created: 2025-07-09T14:15:53
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvjwoh/correct_a_dangerous_racial_bias_in_an_llm_through/
author: pmartra
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvjwoh
locked: false
media: null
name: t3_1lvjwoh
permalink: /r/LocalLLaMA/comments/1lvjwoh/correct_a_dangerous_racial_bias_in_an_llm_through/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': 'PVQtxq7S2Uh4llkVreuX0gHbKgs25rc8BgHgJL8zJpU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PVQtxq7S2Uh4llkVreuX0gHbKgs25rc8BgHgJL8zJpU.png?width=108&crop=smart&auto=webp&s=e909e0e5f850c7f97332ff4cc0895e35504e2279', 'width': 108}, {'height': 116, 'url': 'h...
---
title: Best Local LLM for Agentic Coding on Ollama (8 vCore, 16 GB RAM VPS)? + VS Code Extension Recommendation
score: 1
selftext: Hey everyone, I'm looking to set up a local code assistant/agentic LLM on my own VPS and could use your recommendations! **Specs:** * 8 vCores (VPS) * 16 GB RAM * 480 GB NVMe SSD I plan to run everything with **Ollama** (Dockerized), mainly for local privacy and performance reasons. **My goals:** * **Agentic codi...
created: 2025-07-09T14:12:01
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvjtc4/best_local_llm_for_agentic_coding_on_ollama_8/
author: HeislPeda
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvjtc4
locked: false
media: null
name: t3_1lvjtc4
permalink: /r/LocalLLaMA/comments/1lvjtc4/best_local_llm_for_agentic_coding_on_ollama_8/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: I built a Deep Researcher agent and exposed it as an MCP server!
score: 43
selftext: I've been working on a Deep Researcher Agent that does multi-step web research and report generation. I wanted to share my stack and approach in case anyone else wants to build similar multi-agent workflows. So, the agent has 3 main stages: * **Searcher:** Uses Scrapegraph to crawl and extract live data * **Analyst:...
created: 2025-07-09T13:48:17
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvj98v/i_built_a_deep_researcher_agent_and_exposed_it_as/
author: Arindam_200
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvj98v
locked: false
media: null
name: t3_1lvj98v
permalink: /r/LocalLLaMA/comments/1lvj98v/i_built_a_deep_researcher_agent_and_exposed_it_as/
spoiler: false
stickied: false
thumbnail: self
ups: 43
preview: {'enabled': False, 'images': [{'id': 'lcaCNxV_x8GGK9DcZl5R32XXYG1Qwa-DwlfowV5-_M8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/lcaCNxV_x8GGK9DcZl5R32XXYG1Qwa-DwlfowV5-_M8.jpeg?width=108&crop=smart&auto=webp&s=19bfee8b9dd015cc1cd888971f96965311df82d7', 'width': 108}, {'height': 162, 'url': '...
---
title: How should I install Jan on a local machine to convert PDF bank statements to CSV?
score: 1
selftext: Please recommend installation process and logic selection. PDFs will be either downloaded or scanned, so ability to convert both into date|description|amount csvs is preferred.
created: 2025-07-09T13:37:33
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvj0hl/how_should_i_install_jan_on_a_local_machine_to/
author: GetInHereStalker
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvj0hl
locked: false
media: null
name: t3_1lvj0hl
permalink: /r/LocalLLaMA/comments/1lvj0hl/how_should_i_install_jan_on_a_local_machine_to/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: What model could I finetune to create a study assistant llm?
score: 1
selftext: I am a medical student and honestly I could use some help from a local llm, so i decided to take a small language model and train it to help me create study guides/summaries, using all the past summaries i have created manually, with prompting including the full context injection of a lecture transcript. I am a bit ...
created: 2025-07-09T13:32:19
url: https://www.reddit.com/r/LocalLLaMA/comments/1lviwb4/what_model_could_i_finetune_to_create_a_study/
author: mangial
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lviwb4
locked: false
media: null
name: t3_1lviwb4
permalink: /r/LocalLLaMA/comments/1lviwb4/what_model_could_i_finetune_to_create_a_study/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null
---
title: Hunyuan A13B tensor override
score: 15
selftext: Hi r/LocalLLaMA does anyone have a good tensor override for hunyuan a13b? I get around 12 t/s on ddr4 3600 and with different offloads to a 3090 I got to 21 t/s. This is the command I'm using just in case it's useful for someone: ./llama-server -m /mnt/llamas/ggufs/tencent_Hunyuan-A13B-Instruct-Q4_K_M.gguf -fa -n...
created: 2025-07-09T13:26:49
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvirqs/hunyuan_a13b_tensor_override/
author: marderbot13
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvirqs
locked: false
media: null
name: t3_1lvirqs
permalink: /r/LocalLLaMA/comments/1lvirqs/hunyuan_a13b_tensor_override/
spoiler: false
stickied: false
thumbnail: self
ups: 15
preview: null
---
title: Looking for Prompt collections
score: 0
selftext: hey everyone, I'd like to do a cluster analysis of task specific prompts. Does anyone know where can I find prompt collections/libraries that are just a dataset of prompts (that are not for image generation). I found a few on huggingface datasets, which could be helpful, but I'd like even more. Thanks for sharin...
created: 2025-07-09T13:24:00
url: https://www.reddit.com/r/LocalLLaMA/comments/1lvipg4/looking_for_prompt_collections/
author: neerualx
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1lvipg4
locked: false
media: null
name: t3_1lvipg4
permalink: /r/LocalLLaMA/comments/1lvipg4/looking_for_prompt_collections/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null
Generate low-dimension embeddings *quickly*?
7
A project I'm working on calls for embeddings of short strings, and I'm pretty sure they don't have to have as many dimensions as those normally used. I've currently got a setup using nomic-embed-text-v1.5, which is Matryoshka, so the dimensions can be reduced after generation. I've also got other strategies available ...
2025-07-09T12:52:22
https://www.reddit.com/r/LocalLLaMA/comments/1lvi022/generate_lowdimension_embeddings_quickly/
danja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvi022
false
null
t3_1lvi022
/r/LocalLLaMA/comments/1lvi022/generate_lowdimension_embeddings_quickly/
false
false
self
7
null
Is there an open-source local model implementation of an agent out there?
2
I want an agent that helps make a choice to search through a source (vector database) out of a few options. I'm slightly concerned there are very few examples out there, or am I not looking hard enough?
2025-07-09T12:51:30
https://www.reddit.com/r/LocalLLaMA/comments/1lvhzeg/is_there_a_opensource_local_model_implementation/
walagoth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvhzeg
false
null
t3_1lvhzeg
/r/LocalLLaMA/comments/1lvhzeg/is_there_a_opensource_local_model_implementation/
false
false
self
2
null
🚀 Built another 124M-parameter transformer-based model from scratch. This time with multi-GPU training using DDP. Inspired by nanoGPT, but redesigned to suit my own training pipeline. Model and training code are on Hugging Face ⬇️
27
https://huggingface.co/abhinavv3/MEMGPT Before training the current code, I'm planning to experiment by replacing the existing attention layer with GQA and the positional encoding with RoPE. Also trying to implement some concepts from research papers like Memorizing Transformers. But these changes haven’t been implement...
2025-07-09T12:48:51
https://www.reddit.com/r/LocalLLaMA/comments/1lvhxe7/built_another_124m_parameter_transformer_based/
Remarkable-Ad3290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvhxe7
false
null
t3_1lvhxe7
/r/LocalLLaMA/comments/1lvhxe7/built_another_124m_parameter_transformer_based/
false
false
self
27
{'enabled': False, 'images': [{'id': 'PkaVJdjdt2e0bH0yRf3ozkpDnxMg4PDYNZHjCoWf310', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PkaVJdjdt2e0bH0yRf3ozkpDnxMg4PDYNZHjCoWf310.png?width=108&crop=smart&auto=webp&s=75930a8cb5bc8aba988e25a5bac82cc215a0e3fc', 'width': 108}, {'height': 116, 'url': 'h...
I Accidentally Started Making $200/Month With a Homework-Solving AI Bot (No, I’m Not Kidding)
0
Alright, internet — gather ‘round, ‘cause I’ve got a wild little story from the chaotic corner of the solo dev universe. What started as me messing around with AI agents turned into a not-so-dumb side hustle that's now making about $200/month. No, it’s not private jet money, but it *does* cover my ungodly coffee consum...
2025-07-09T12:48:20
https://www.reddit.com/r/LocalLLaMA/comments/1lvhx1j/i_accidentally_started_making_200month_with_a/
Prior-Temperature948
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvhx1j
false
null
t3_1lvhx1j
/r/LocalLLaMA/comments/1lvhx1j/i_accidentally_started_making_200month_with_a/
false
false
self
0
null
How I Made $200/Month Using AI Agents (as a Solo Developer)
0
Hey everyone, just wanted to share something cool I’ve been working on that might inspire some of you who are into AI or side hustles. Over the past month, I built a project called SkipSchool, and surprisingly, it’s already bringing in about $200 a month — not life-changing, but hey, coffee money pays for itself. I fig...
2025-07-09T12:38:15
https://www.reddit.com/r/LocalLLaMA/comments/1lvhpow/how_i_made_200month_using_ai_agents_as_a_solo/
Whole-Ostrich-6443
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvhpow
false
null
t3_1lvhpow
/r/LocalLLaMA/comments/1lvhpow/how_i_made_200month_using_ai_agents_as_a_solo/
false
false
self
0
null
What models can I expect to run on an AMD Ryzen AI Max+ 395?
25
I'm thinking about buying a GMKTEK Evo-2. What model (in terms of B parameters) can I expect to run at a decent speed (> 10tk/s)? I'm undecided between the 64 GB and 128 GB RAM versions, but I'm leaning towards the 64 GB since even slightly larger models (Llama 3.1 70B) run at a painfully slow speed.
2025-07-09T12:14:37
https://www.reddit.com/r/LocalLLaMA/comments/1lvh87a/what_modes_can_expect_i_run_on_an_amd_ryzen_ai/
electrickangaroo31
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvh87a
false
null
t3_1lvh87a
/r/LocalLLaMA/comments/1lvh87a/what_modes_can_expect_i_run_on_an_amd_ryzen_ai/
false
false
self
25
null
Qwen3 0.6b MNN acting weird
4
I tried MNN Chat on Android and Qwen3 0.6B acts really weird. It nearly always repeats its statements. Even SmolLM2 350M is better than it. The rest of the models I tried work fine, however; it's just Qwen3 0.6B that is weird.
2025-07-09T12:09:50
https://www.reddit.com/r/LocalLLaMA/comments/1lvh4ou/qwen3_06b_mnn_acting_weird/
ExtremeAcceptable289
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvh4ou
false
null
t3_1lvh4ou
/r/LocalLLaMA/comments/1lvh4ou/qwen3_06b_mnn_acting_weird/
false
false
self
4
null
vLLM vs SGLang vs MAX — Who's the fastest?
30
Benchmarking Inference Engines and talking about metrics like TTFT, TPOT, and ITL.
2025-07-09T11:42:12
https://www.ersteiger.com/posts/vllm-vs-max/
rkstgr
ersteiger.com
1970-01-01T00:00:00
0
{}
1lvglk7
false
null
t3_1lvglk7
/r/LocalLLaMA/comments/1lvglk7/vllm_vs_sglang_vs_max_whos_the_fastest/
false
false
default
30
{'enabled': False, 'images': [{'id': 'pRoQV1isngBk6d9PokHUBrsWFKXDdEPabl0qiiWLOq0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/pRoQV1isngBk6d9PokHUBrsWFKXDdEPabl0qiiWLOq0.png?width=108&crop=smart&auto=webp&s=372cf45b99b811f00477201d8509803c02ad5701', 'width': 108}, {'height': 113, 'url': 'h...
A satirical theory-fiction on the transformation of academic tutors into Turing cops, marking into an imitation game, and AI-generated homework into the trigger for the technological singularity
1
2025-07-09T11:12:11
https://open.substack.com/pub/vincentl3/p/a-modest-software-patch-for-preventing?r=b9rct&utm_medium=ios
Quiet_Direction5077
open.substack.com
1970-01-01T00:00:00
0
{}
1lvg25f
false
null
t3_1lvg25f
/r/LocalLLaMA/comments/1lvg25f/a_satirical_theoryfiction_on_the_transformation/
false
false
default
1
{'enabled': False, 'images': [{'id': '1EtP4JWhRV80SZ052ejBfM6z9hXvZaozDtNH-RlGJfs', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/1EtP4JWhRV80SZ052ejBfM6z9hXvZaozDtNH-RlGJfs.jpeg?width=108&crop=smart&auto=webp&s=598a7a7c7cf2f61bc0ba2aae7edca0d83c8314d1', 'width': 108}, {'height': 166, 'url': '...
First Hugging Face robot: Reachy Mini. Hackable yet easy to use, powered by open-source and the community
271
Blog post: [https://huggingface.co/blog/reachy-mini](https://huggingface.co/blog/reachy-mini) Thomas Wolf on 𝕏: [https://x.com/Thom\_Wolf/status/1942887160983466096](https://x.com/Thom_Wolf/status/1942887160983466096)
2025-07-09T10:22:20
https://www.reddit.com/gallery/1lvf7ww
Nunki08
reddit.com
1970-01-01T00:00:00
0
{}
1lvf7ww
false
null
t3_1lvf7ww
/r/LocalLLaMA/comments/1lvf7ww/first_hugging_face_robot_reachy_mini_hackable_yet/
false
false
https://b.thumbs.redditm…qMjdSdWnuk1o.jpg
271
null
Anyone have ideas for preventing file corruption and saving errors when I save model weights after training? After training the whole model on a rented GPU server, realising that the model weight file got corrupted hurts 🙂
2
Sooo… anyone have any ideas for preventing file corruption or saving errors? Because if I save my model weights after training the whole model on a rented server, and the file gets corrupted, that would be a huge loss of time and money, right?
2025-07-09T10:15:53
https://www.reddit.com/r/LocalLLaMA/comments/1lvf448/anyone_have_ideas_for_preventing_file_corruption/
Remarkable-Ad3290
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvf448
false
null
t3_1lvf448
/r/LocalLLaMA/comments/1lvf448/anyone_have_ideas_for_preventing_file_corruption/
false
false
self
2
null
Need help translating Korean videos
5
Hi, I’m working on translating a Korean video into English and could really use some advice. I first tried using Whisper AI through Google Colab to do everything in one go (transcription + translation), but the results weren’t super accurate. I tried a different approach: I used Whisper just for the transcription, then...
2025-07-09T10:03:40
https://www.reddit.com/r/LocalLLaMA/comments/1lvex1e/need_help_translating_korean_videos/
Sorry-Elk-9838
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvex1e
false
null
t3_1lvex1e
/r/LocalLLaMA/comments/1lvex1e/need_help_translating_korean_videos/
false
false
self
5
null
Building a silent, budget 4-GPU LLM workstation—1×3090 + 3×P40, need advice
1
I’m setting up a local LLM API (n8n workflows, agentic coding/tech tasks) and need 3–4 GPUs for VRAM. GPUs are 1×RTX 3090 + 3×Tesla P40 (they’re fine on PCIe 3.0 ×4). Budget/space is tight and it must run quietly. **Options I am considering right now:** 1. **Custom 4-GPU open rig** (like mining ones) (mobo + CPU + R...
2025-07-09T10:01:40
https://www.reddit.com/r/LocalLLaMA/comments/1lvevuz/building_a_silent_budget_4gpu_llm/
Same-Masterpiece3748
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvevuz
false
null
t3_1lvevuz
/r/LocalLLaMA/comments/1lvevuz/building_a_silent_budget_4gpu_llm/
false
false
self
1
null
Is AMD GPU a viable choice for AI/ML task with Intel processor?
1
I'm getting back into AI/ML and planning to run LLMs and other SOTA models in my free time. I previously bought an MSI RTX 3090 24G, but it failed just after a year and MSI declared it "irreparable" even with a paid repair option. Can you believe a card that expensive came with just a one-year warranty? Super frustrati...
2025-07-09T09:56:17
https://www.reddit.com/r/LocalLLaMA/comments/1lveslz/is_amd_gpu_a_viable_choice_for_aiml_task_with/
exotic_soba
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lveslz
false
null
t3_1lveslz
/r/LocalLLaMA/comments/1lveslz/is_amd_gpu_a_viable_choice_for_aiml_task_with/
false
false
self
1
null
Difficulty in fine-tuning (LoRA) SmolLM2-135M-Instruct on "GSM8K and MATH" training data.
5
I am trying to LoRA fine-tune `SmolLM2-135M-Instruct` on the following dataset: [https://huggingface.co/datasets/nvidia/OpenMathInstruct-2](https://huggingface.co/datasets/nvidia/OpenMathInstruct-2) (Data Credit: u/mlabonne). I want the model to be able to reason properly and generate accurate answers. The dataset pro...
2025-07-09T09:40:24
https://www.reddit.com/r/LocalLLaMA/comments/1lvek0j/difficulty_in_fine_tuning_llora/
Evening-Power-3302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvek0j
false
null
t3_1lvek0j
/r/LocalLLaMA/comments/1lvek0j/difficulty_in_fine_tuning_llora/
false
false
self
5
{'enabled': False, 'images': [{'id': 'gUh3kUi-FubXvGXK7mVFv9rSuNEqRPbzwtKv35rb3aU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gUh3kUi-FubXvGXK7mVFv9rSuNEqRPbzwtKv35rb3aU.png?width=108&crop=smart&auto=webp&s=19ac8df165c71d6637604e5d011a7d67effb0c8b', 'width': 108}, {'height': 116, 'url': 'h...
ERNIE model support added to vLLM
1
[removed]
2025-07-09T08:11:47
https://www.reddit.com/r/LocalLLaMA/comments/1lvd8po/ernie_model_support_added_to_vllm/
yepai1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvd8po
false
null
t3_1lvd8po
/r/LocalLLaMA/comments/1lvd8po/ernie_model_support_added_to_vllm/
false
false
self
1
null
support for Falcon-H1 model family has been merged into llama.cpp
89
Falcon-H1 Family of Hybrid-Head Language Models (Transformer-SSM), including 0.5B, 1.5B, 1.5B-Deep, 3B, 7B, and 34B (pretrained & instruction-tuned). GGUFs uploaded by the Falcon team: [https://huggingface.co/tiiuae/Falcon-H1-34B-Instruct-GGUF](https://huggingface.co/tiiuae/Falcon-H1-34B-Instruct-GGUF) [https://huggingf...
2025-07-09T08:10:24
https://github.com/ggml-org/llama.cpp/pull/14534
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1lvd7z4
false
null
t3_1lvd7z4
/r/LocalLLaMA/comments/1lvd7z4/support_for_falconh1_model_family_has_been_merged/
false
false
https://external-preview…f0877476cc75691f
89
{'enabled': False, 'images': [{'id': 'N01fJbJzFMtO5mbBFXLE8iKjcQtmu4eYhoVQZmMbhG4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N01fJbJzFMtO5mbBFXLE8iKjcQtmu4eYhoVQZmMbhG4.png?width=108&crop=smart&auto=webp&s=aa53a1d81fbb58306dfc5225b4e021e5cd8b5556', 'width': 108}, {'height': 108, 'url': 'h...
[Open Source] Private AI assistant extension - thoughts on local vs cloud approaches?
3
We've been thinking about the trade-offs between convenience and privacy in AI assistants. Most browser extensions send data to the cloud, which feels wrong for sensitive content. So we built something different - an open-source extension that works entirely with your local models: ✨ **Core Features** * Intelligent ...
2025-07-09T08:05:49
https://www.reddit.com/r/LocalLLaMA/comments/1lvd5nj/open_source_private_ai_assistant_extension/
xukecheng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvd5nj
false
null
t3_1lvd5nj
/r/LocalLLaMA/comments/1lvd5nj/open_source_private_ai_assistant_extension/
false
false
self
3
{'enabled': False, 'images': [{'id': 'DjQSvuQWHt6C40lQ_jLOVjIBJ33ijWOHghuIKCvl9eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DjQSvuQWHt6C40lQ_jLOVjIBJ33ijWOHghuIKCvl9eo.png?width=108&crop=smart&auto=webp&s=d678e0bbde6e455f9000158b29f237c22a079bf0', 'width': 108}, {'height': 108, 'url': 'h...
Is knowledge found in the thinking taken into consideration by the LLM?
5
Are the tokens generated during the thinking stage taken into consideration at all? Are they treated similar to context? What about attention? My goal for the question is to understand if I could override the thinking manually with specific information closely relevant to the question. Similar to RAG, but without the ...
2025-07-09T07:52:54
https://www.reddit.com/r/LocalLLaMA/comments/1lvcyvf/is_knowledge_found_in_the_thinking_taken_into/
kaisurniwurer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvcyvf
false
null
t3_1lvcyvf
/r/LocalLLaMA/comments/1lvcyvf/is_knowledge_found_in_the_thinking_taken_into/
false
false
self
5
null
Here is how we beat ChatGPT at classification with 1 dollar in cloud compute
102
Hi everyone, Just dropped our paper on a simple but effective approach that got us an 8.7% accuracy boost over baseline (58.4% vs 49.7%) and absolutely crushed GPT-4.1's zero-shot performance (32%) on emotion classification. This tutorial comes in 3 different formats: 1. This LocalLLaMA post - summary and discussion ...
2025-07-09T07:07:53
https://www.reddit.com/r/LocalLLaMA/comments/1lvcb72/here_is_how_we_beat_chatgpt_at_classification/
iamMess
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvcb72
false
null
t3_1lvcb72
/r/LocalLLaMA/comments/1lvcb72/here_is_how_we_beat_chatgpt_at_classification/
false
false
self
102
{'enabled': False, 'images': [{'id': 'RxSSnT1e2v-RTp4_naQVkWAAFjrxq70GgL7g4G9qABA', 'resolutions': [{'height': 155, 'url': 'https://external-preview.redd.it/RxSSnT1e2v-RTp4_naQVkWAAFjrxq70GgL7g4G9qABA.png?width=108&crop=smart&auto=webp&s=d3bc762d2895449456d8ae731ab05d4a9ff08669', 'width': 108}, {'height': 310, 'url': '...
Linux Foundation to Host A2A Protocol
6
Jim Zemlin, executive director of the Linux Foundation, said in his keynote speech: "By joining the Linux Foundation, A2A is ensuring the long-term neutrality, collaboration, and governance that will unlock the next era of agent-to-agent powered productivity." Compare this to an [attitude from the FSF](https://www.gnu...
2025-07-09T06:52:29
https://www.zdnet.com/article/linux-foundation-adopts-a2a-protocol-to-help-solve-one-of-ais-most-pressing-challenges/
Psionikus
zdnet.com
1970-01-01T00:00:00
0
{}
1lvc2nj
false
null
t3_1lvc2nj
/r/LocalLLaMA/comments/1lvc2nj/linux_foundation_to_host_a2a_protocol/
false
false
default
6
{'enabled': False, 'images': [{'id': 'wWAA6o1FUq7u478oeix_7vby5ehIkJjD1nzlgo4tdqI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/wWAA6o1FUq7u478oeix_7vby5ehIkJjD1nzlgo4tdqI.jpeg?width=108&crop=smart&auto=webp&s=da96a671ba7e1fd72e261c260eb95675bae0d555', 'width': 108}, {'height': 121, 'url': '...
A language model built for the public good
86
2025-07-09T06:47:06
https://actu.epfl.ch/news/a-language-model-built-for-the-public-good/
PotatoFormal8751
actu.epfl.ch
1970-01-01T00:00:00
0
{}
1lvbzpx
false
null
t3_1lvbzpx
/r/LocalLLaMA/comments/1lvbzpx/a_language_model_built_for_the_public_good/
false
false
https://external-preview…d59c0900f3f7d9cc
86
{'enabled': False, 'images': [{'id': '0CeN-rjIYTJ0_P5Pq2vHu-spx75i5xJx0nDCOWhx_2o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0CeN-rjIYTJ0_P5Pq2vHu-spx75i5xJx0nDCOWhx_2o.jpeg?width=108&crop=smart&auto=webp&s=183bb3a598c8292f078b413b7f17ceb3b9776157', 'width': 108}, {'height': 121, 'url': '...
I Built a Multi-Agent System to Generate Better Tech Conference Talk Abstracts
7
I've been speaking at a lot of tech conferences lately, and one thing that never gets easier is **writing a solid talk proposal**. A good abstract needs to be technically deep, timely, and clearly valuable for the audience, and it also needs to stand out from all the similar talks already out there. So I built a new m...
2025-07-09T06:23:22
https://www.reddit.com/r/LocalLLaMA/comments/1lvbmje/i_built_a_multiagent_system_to_generate_better/
Creepy-Row970
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvbmje
false
null
t3_1lvbmje
/r/LocalLLaMA/comments/1lvbmje/i_built_a_multiagent_system_to_generate_better/
false
false
self
7
null
💻 Hands-on webinar on fine-tuning LLMs for agents using open-source Oumi 🛠️
1
[removed]
2025-07-09T05:57:13
https://lu.ma/6e2b5tcp
Far_Context8296
lu.ma
1970-01-01T00:00:00
0
{}
1lvb7pm
false
null
t3_1lvb7pm
/r/LocalLLaMA/comments/1lvb7pm/handson_webinar_on_finetuning_llms_for_agents/
false
false
default
1
null
Limitation of NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition
2
In the overview of the NVIDIA RTX PRO 6000 Blackwell GPU Max-Q Workstation Edition, it says, “Seamlessly scale from one to four GPUs, multiplying your compute power and enabling you to pioneer new frontiers in AI, data science, and graphics.” Does this mean that if I want to load a 70B parameter LLM using Fully Shard...
2025-07-09T05:27:24
https://www.reddit.com/r/LocalLLaMA/comments/1lvaq6n/limitation_of_nvidia_rtx_pro_6000_blackwell_maxq/
Normal-Bookkeeper-86
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvaq6n
false
null
t3_1lvaq6n
/r/LocalLLaMA/comments/1lvaq6n/limitation_of_nvidia_rtx_pro_6000_blackwell_maxq/
false
false
self
2
null
Trying to recreate benchmark results
5
Hi, I'm trying to recreate VLMEval results for Gemma 3 4B IT available [here](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard). The benchmark suite is supposedly easy to use, but after installing everything on my Cloud GPU, I get very very low results which make me think the prompts are not passed prope...
2025-07-09T05:17:48
https://www.reddit.com/r/LocalLLaMA/comments/1lvakg5/trying_to_recreate_benchmark_results/
AlternisHS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvakg5
false
null
t3_1lvakg5
/r/LocalLLaMA/comments/1lvakg5/trying_to_recreate_benchmark_results/
false
false
self
5
{'enabled': False, 'images': [{'id': 'z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo.png?width=108&crop=smart&auto=webp&s=7996a9b4d61beea62fd32063e03712705ab26f8c', 'width': 108}, {'height': 116, 'url': 'h...
Best SSD Stick for Nvidia Jetson Orin Nano?
1
I ordered an Nvidia Jetson Orin Nano developer kit and I'm excited to experiment with locally ran AI! In some videos I watched regarding this developer kit, I saw that people often put SSD sticks in the nano to improve its performance. What SSD stick would you recommend me buy? I hope to use it to run and train AI mode...
2025-07-09T05:12:21
https://www.reddit.com/r/LocalLLaMA/comments/1lvah1f/best_ssd_stick_for_nvidia_jetson_orin_nano/
Pitiful-Cherry-3368
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lvah1f
false
null
t3_1lvah1f
/r/LocalLLaMA/comments/1lvah1f/best_ssd_stick_for_nvidia_jetson_orin_nano/
false
false
self
1
null
OPENCODE - Like Claude Code or Gemini CLI, but works with local models and/or paid ones as well
16
I think this is probably what a lot of us have been looking for. Haven’t tried it yet but will be downloading shortly. From their GitHub page: “How is this different than Claude Code? It's very similar to Claude Code in terms of capability. Here are the key differences: 100% open source Not coupled to any provide...
2025-07-09T04:41:43
https://github.com/sst/opencode
Porespellar
github.com
1970-01-01T00:00:00
0
{}
1lv9yhq
false
null
t3_1lv9yhq
/r/LocalLLaMA/comments/1lv9yhq/opencode_like_claude_code_or_gemini_cli_but_works/
false
false
https://external-preview…c9e462f04fa0d8d0
16
{'enabled': False, 'images': [{'id': 'MN15Kr0BU7FNU3YsfgUK2zF_DPGWEuyYH_rIiDpxxDg', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/MN15Kr0BU7FNU3YsfgUK2zF_DPGWEuyYH_rIiDpxxDg.png?width=108&crop=smart&auto=webp&s=f64851bbe0e5f8ffcbc8bd1adfeeafd2571eb06a', 'width': 108}, {'height': 113, 'url': 'h...
MemOS: A Memory OS for AI System
35
Project Website: [https://memos.openmem.net/](https://memos.openmem.net/) Code: [https://github.com/MemTensor/MemOS](https://github.com/MemTensor/MemOS) Abstract >Large Language Models (LLMs) have become an essential infrastructure for Artificial General Intelligence (AGI), yet their lack of well-defined memory man...
2025-07-09T04:22:13
https://arxiv.org/abs/2507.03724
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1lv9m3j
false
null
t3_1lv9m3j
/r/LocalLLaMA/comments/1lv9m3j/memos_a_memory_os_for_ai_system/
false
false
default
35
null
Code for Skywork-R1V3-38B
9
2025-07-09T03:54:57
https://github.com/SkyworkAI/Skywork-R1V
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
1lv94fb
false
null
t3_1lv94fb
/r/LocalLLaMA/comments/1lv94fb/code_for_skyworkr1v338b/
false
false
https://external-preview…0ba6096c97396f0f
9
{'enabled': False, 'images': [{'id': 'rirUq_ye1VMAip5X7u0WCzRQyzDvHO-jLfw4QlzFOE8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rirUq_ye1VMAip5X7u0WCzRQyzDvHO-jLfw4QlzFOE8.png?width=108&crop=smart&auto=webp&s=841096f857747ed91b87136e691b10235b58ee3d', 'width': 108}, {'height': 108, 'url': 'h...
State of Foundation Models, 2025 | Innovation Endeavors
7
[**State of Foundation Models, 2025 | Innovation Endeavors**](https://macro.com/app/pdf/696626a1-216b-493a-a0dd-181ce51ce327) **TL;DR** * **Generative AI has gone mainstream** – 1 in 8 workers worldwide now uses AI every month, with 90% of that growth happening in the last 6 months. AI-native applications are now wel...
2025-07-09T03:49:44
https://www.reddit.com/r/LocalLLaMA/comments/1lv910v/state_of_foundation_models_2025_innovation/
LeveredRecap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv910v
false
null
t3_1lv910v
/r/LocalLLaMA/comments/1lv910v/state_of_foundation_models_2025_innovation/
false
false
self
7
null
Pocket LLM Server Just Like a Pocket WiFi
0
If there were pocket-sized, compact hardware that hosts large open-source LLMs and that you can connect to offline, do you think it would be helpful? The possible benefits: \- You can use large open-source LLMs without using up your PC or smartphone's compute \- More privacy-friendly since it's local \- You can ...
2025-07-09T03:22:52
https://www.reddit.com/r/LocalLLaMA/comments/1lv8j5q/pocket_llm_server_just_like_a_pocket_wifi/
Available_Ad_5360
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv8j5q
false
null
t3_1lv8j5q
/r/LocalLLaMA/comments/1lv8j5q/pocket_llm_server_just_like_a_pocket_wifi/
false
false
self
0
null
Just built an open-source MCP server to live-monitor your screen — ScreenMonitorMCP
5
Hey everyone! 👋 I’ve been working on some projects involving LLMs without visual input, and I realized I needed a way to let them “see” what’s happening on my screen in real time. So I built ScreenMonitorMCP — a lightweight, open-source MCP server that captures your screen and streams it to any compatible LLM client...
2025-07-09T03:13:28
https://www.reddit.com/r/LocalLLaMA/comments/1lv8cje/just_built_an_opensource_mcp_server_to/
Creepy-Being-6900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv8cje
false
null
t3_1lv8cje
/r/LocalLLaMA/comments/1lv8cje/just_built_an_opensource_mcp_server_to/
false
false
self
5
{'enabled': False, 'images': [{'id': 'mO9zLAwZf8FAbhe55hWrJp1t0pFxBbc3GLTki6wyMkw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/mO9zLAwZf8FAbhe55hWrJp1t0pFxBbc3GLTki6wyMkw.png?width=108&crop=smart&auto=webp&s=ab7e199d6ef836609493bc97a96d35c77d096a50', 'width': 108}, {'height': 108, 'url': 'h...
Good books or resources on hosting LLMs and VLMs in production
3
Looking for good books/open courses/blogs that explains best practices for hosting LLMs and VLMs in production
2025-07-09T03:11:28
https://www.reddit.com/r/LocalLLaMA/comments/1lv8b55/good_books_or_resources_on_hosting_llms_and_vlms/
SouvikMandal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv8b55
false
null
t3_1lv8b55
/r/LocalLLaMA/comments/1lv8b55/good_books_or_resources_on_hosting_llms_and_vlms/
false
false
self
3
null
Anime and manga conversational model?
8
Anime conversations are often over the top and overall fun; I thought it would be interesting for storytelling and conversations. Has anyone trained a model like this?
2025-07-09T03:07:31
https://www.reddit.com/r/LocalLLaMA/comments/1lv88fs/anime_and_manga_conversational_model/
Iq1pl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv88fs
false
null
t3_1lv88fs
/r/LocalLLaMA/comments/1lv88fs/anime_and_manga_conversational_model/
false
false
self
8
null
Day 12/50: Building a Small Language Model from Scratch - Implementing a Simplified Attention Mechanism in Python
24
*On Day 11, I gave you a brief introduction to the attention mechanism. Today, we’re going to implement it from scratch in Python. But before we dive into the code, let’s quickly revisit what attention is all about.* # What Is Attention?  *Imagine you’re in a room with five people, and you’re trying to understand wha...
2025-07-09T03:03:21
https://www.reddit.com/r/LocalLLaMA/comments/1lv85jp/day_1250_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv85jp
false
null
t3_1lv85jp
/r/LocalLLaMA/comments/1lv85jp/day_1250_building_a_small_language_model_from/
false
false
self
24
null
Web application for comparing responses from different LLMs side-by-side.
0
[https://github.com/dmeldrum6/LLM\_Diff\_Tool](https://github.com/dmeldrum6/LLM_Diff_Tool) Single page web app for comparing model vs model responses to the same prompt. Works with Open AI API compatible endpoints / GPT / Claude. The highlighting, as it is, is really only useful for comparing the same model against i...
2025-07-09T02:52:19
https://www.reddit.com/gallery/1lv7xnh
AdElectronic8073
reddit.com
1970-01-01T00:00:00
0
{}
1lv7xnh
false
null
t3_1lv7xnh
/r/LocalLLaMA/comments/1lv7xnh/web_application_for_comparing_responses_from/
false
false
https://a.thumbs.redditm…3222kGde5b50.jpg
0
null
Most explicit erotic LLM model
0
Forgive me if someone has asked this recently. But Grok 3 was amazing in March 2025 until they censored it. Now I want a similar LLM that can make NSFW explicit vulgar erotic content for my "research" purposes. Actually, jokes apart, I make character animations and every character is supposed to be different and it j...
2025-07-09T02:49:37
https://www.reddit.com/r/LocalLLaMA/comments/1lv7vsm/most_explicit_erotic_llm_model/
dean_hunter7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv7vsm
false
null
t3_1lv7vsm
/r/LocalLLaMA/comments/1lv7vsm/most_explicit_erotic_llm_model/
false
false
nsfw
0
null
LLM Model Response Diff Tool
1
[deleted]
2025-07-09T02:47:01
[deleted]
1970-01-01T00:00:00
0
{}
1lv7u12
false
null
t3_1lv7u12
/r/LocalLLaMA/comments/1lv7u12/llm_model_response_diff_tool/
false
false
default
1
null
LLM Model Response Diff Tool
4
Web application for comparing responses from different Large Language Models (LLMs) side-by-side, the highlighting (as it is currently) is really only useful for same model vs. same model. I originally built it and use it for checking token counts on the same prompt model vs. model. Poke around my github for some other...
2025-07-09T02:46:10
https://www.reddit.com/gallery/1lv7tgz
AdElectronic8073
reddit.com
1970-01-01T00:00:00
0
{}
1lv7tgz
false
null
t3_1lv7tgz
/r/LocalLLaMA/comments/1lv7tgz/llm_model_response_diff_tool/
false
false
https://b.thumbs.redditm…lSuncKX3SheA.jpg
4
null
Reimplementing an LLM from Scratch
11
Hi everyone, I recently reimplemented Google's open-source LLMs Gemma 1, Gemma 2, and Gemma 3 from scratch as part of my learning journey into LLM architectures. This was a deep dive into transformer internals and helped me understand the core mechanisms behind large models. I read and followed the official papers: -...
2025-07-09T02:44:06
https://www.reddit.com/r/LocalLLaMA/comments/1lv7s0r/reimplementing_an_llm_from_scratch/
CodingWithSatyam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv7s0r
false
null
t3_1lv7s0r
/r/LocalLLaMA/comments/1lv7s0r/reimplementing_an_llm_from_scratch/
false
false
self
11
null
Guidance Needed on Local Setup
1
I set up my own PC months back with the following: AMD Ryzen Threadripper 3960X 2x NVIDIA 4080s 128GB DDR4 RAM TRX40 AORUS PRO WIFI 2TB STORAGE Set up my server via Proxmox Hypervisor and running my VMs on it. Been accessing it via SSH (Terminal) on a host machine on my home network. I am not a programmer a...
2025-07-09T02:31:32
https://www.reddit.com/r/LocalLLaMA/comments/1lv7j1j/guidance_needed_on_local_setup/
SolidRemote8316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv7j1j
false
null
t3_1lv7j1j
/r/LocalLLaMA/comments/1lv7j1j/guidance_needed_on_local_setup/
false
false
self
1
null
I built undo redo functionality for Gemini CLI - Instantly undo/redo Gemini CLI mistakes without wasting tokens
1
[removed]
2025-07-09T02:07:54
https://www.reddit.com/r/LocalLLaMA/comments/1lv71nb/i_built_undo_redo_functionality_for_gemini_cli/
ateebdidit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv71nb
false
null
t3_1lv71nb
/r/LocalLLaMA/comments/1lv71nb/i_built_undo_redo_functionality_for_gemini_cli/
false
false
self
1
{'enabled': False, 'images': [{'id': '3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4.png?width=108&crop=smart&auto=webp&s=3a058a293cabb63c88c9b65bc5197d6dfecc1cca', 'width': 108}, {'height': 113, 'url': 'h...
How has Anthropic taught Claude to decide whether to choose a tool or respond normally?
4
I am trying to understand Anthropic's "tools" parameter and how Claude understands whether it should respond normally or select one of the tools in the JSON file. More specifically, I am wondering if only a system prompt with some few-shot examples can do the job or if real fine-tuning is the way to g...
2025-07-09T01:47:20
https://www.reddit.com/r/LocalLLaMA/comments/1lv6mju/how_antropic_has_teached_the_claude_to_decide/
Amir_PD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv6mju
false
null
t3_1lv6mju
/r/LocalLLaMA/comments/1lv6mju/how_antropic_has_teached_the_claude_to_decide/
false
false
self
4
null
How Anthropic has taught Claude to decide whether to use a tool or just return text
1
[deleted]
2025-07-09T01:45:23
[deleted]
1970-01-01T00:00:00
0
{}
1lv6l4g
false
null
t3_1lv6l4g
/r/LocalLLaMA/comments/1lv6l4g/how_anthropic_have_teached_the_claude_to_decide/
false
false
default
1
null
High Precision
4
Are there any local models that have high-precision integer or decimal arithmetic when doing calculations? The paid big guys do: Claude Sonnet 4, Gemini 2.5 Pro, ChatGPT. But I can't find any downloadable ones (tried up to 70B). Anything above 24 or 32 digits, they just give incorrect results.
2025-07-09T01:08:35
https://www.reddit.com/r/LocalLLaMA/comments/1lv5uie/high_precision/
AlgorithmicMuse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv5uie
false
null
t3_1lv5uie
/r/LocalLLaMA/comments/1lv5uie/high_precision/
false
false
self
4
null
How fast is inference when utilizing DDR5 and PCIe 5.0x16?
5
With the [release of DGX spark later this month](https://wccftech.com/nvidia-mini-supercomputer-the-dgx-spark-launches-this-month/), I was wondering how a new-ish homebrew system would compare. All 5000-series NVIDIA cards are equipped with PCIE Gen 5, which puts the upper limit for cross-bus bandwidth at 128GB/s. Du...
2025-07-09T00:53:14
https://www.reddit.com/r/LocalLLaMA/comments/1lv5je7/how_fast_is_inference_when_utilizing_ddr5_and/
ButThatsMyRamSlot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv5je7
false
null
t3_1lv5je7
/r/LocalLLaMA/comments/1lv5je7/how_fast_is_inference_when_utilizing_ddr5_and/
false
false
self
5
{'enabled': False, 'images': [{'id': '0pU0OZQ3jKyRpVTXegSNFV4uVFdUj2o4hXpi85CuSUA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0pU0OZQ3jKyRpVTXegSNFV4uVFdUj2o4hXpi85CuSUA.png?width=108&crop=smart&auto=webp&s=fd25657d3fce734d4025693e620867a7cf866fd1', 'width': 108}, {'height': 121, 'url': 'h...
Funny answer from 405b base
4
[deleted]
2025-07-09T00:37:38
[deleted]
1970-01-01T00:00:00
0
{}
1lv57kz
false
null
t3_1lv57kz
/r/LocalLLaMA/comments/1lv57kz/funny_answer_from_405b_base/
false
false
default
4
null
What's local about this?
214
2025-07-09T00:32:32
https://i.redd.it/rqrg67unoobf1.jpeg
HOLUPREDICTIONS
i.redd.it
1970-01-01T00:00:00
0
{}
1lv53nn
false
null
t3_1lv53nn
/r/LocalLLaMA/comments/1lv53nn/whats_local_about_this/
false
false
https://a.thumbs.redditm…ENAL-R3A-2U4.jpg
214
{'enabled': True, 'images': [{'id': 'xC373qc6EerFn_qLV-HUabx3XUSwRr-yszT4hsn6ILg', 'resolutions': [{'height': 148, 'url': 'https://preview.redd.it/rqrg67unoobf1.jpeg?width=108&crop=smart&auto=webp&s=78877e3893da49a4298497e65291068e096bd5c6', 'width': 108}, {'height': 297, 'url': 'https://preview.redd.it/rqrg67unoobf1.j...
"Not x, but y" Slop Leaderboard
815
Models have been converging on "not x, but y" type phrases to an absurd degree. So here's a leaderboard for it. I don't think many labs are targeting this kind of slop in their training set filtering, so it gets compounded with subsequent model generations.
2025-07-08T22:48:41
https://i.redd.it/nxw6fmegaqbf1.png
_sqrkl
i.redd.it
1970-01-01T00:00:00
0
{}
1lv2t7n
false
null
t3_1lv2t7n
/r/LocalLLaMA/comments/1lv2t7n/not_x_but_y_slop_leaderboard/
false
false
https://b.thumbs.redditm…dJ936_btCrQw.jpg
815
{'enabled': True, 'images': [{'id': 'kkZ-LaZPi1jHaYXALY32ZrWs3vzskemUWmI2c7oZUfE', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/nxw6fmegaqbf1.png?width=108&crop=smart&auto=webp&s=8120f376d154d7a8c30aa965a5d4aec99b2eee8b', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/nxw6fmegaqbf1.pn...
Why hasn't the RTX Pro 6000 Blackwell significantly shaken down the prices of the older RTX 6000 / RTX 6000 Ada?
25
The RTX Pro 6000 Blackwell is much better than the RTX 6000 Ada (and even better than the RTX 6000), with 30% more CUDA cores and twice the VRAM, but the price difference is really minimal: the prices of those 3 generations are only $1k apart new ($8k, $7k and $6k) and $2k apart used ($8k - only new, $6k and $4k).
2025-07-08T22:12:27
https://www.reddit.com/r/LocalLLaMA/comments/1lv1z7b/why_hasnt_rtx_pro_6000_balckwell_significantly/
--dany--
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv1z7b
false
null
t3_1lv1z7b
/r/LocalLLaMA/comments/1lv1z7b/why_hasnt_rtx_pro_6000_balckwell_significantly/
false
false
self
25
null
ERNIE model support added to vLLM
1
[removed]
2025-07-08T22:00:19
https://www.reddit.com/r/LocalLLaMA/comments/1lv1olu/ernie_model_support_added_to_vllm/
yepai1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv1olu
false
null
t3_1lv1olu
/r/LocalLLaMA/comments/1lv1olu/ernie_model_support_added_to_vllm/
false
false
self
1
null
Best context compression other than llmlingua?
5
Also would love to know your experiences with context/prompt compression, is it worth it?
2025-07-08T21:57:26
https://www.reddit.com/r/LocalLLaMA/comments/1lv1m0i/best_context_compression_other_than_llmlingua/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv1m0i
false
null
t3_1lv1m0i
/r/LocalLLaMA/comments/1lv1m0i/best_context_compression_other_than_llmlingua/
false
false
self
5
null
Is there a Grammarly equivalent I can run locally?
10
Looking for a lightweight model that can run in the background, basically spell-checking/fixing typos as you go. Any suggestions?
2025-07-08T21:50:14
https://www.reddit.com/r/LocalLLaMA/comments/1lv1fpo/is_there_a_grammarly_equivalent_i_can_run_locally/
rorowhat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv1fpo
false
null
t3_1lv1fpo
/r/LocalLLaMA/comments/1lv1fpo/is_there_a_grammarly_equivalent_i_can_run_locally/
false
false
self
10
null
Prompt to "compress" transcripts
5
I have a huge collection of transcripts from talks, podcasts, presentations etc. Often I want to use lots of them in a local AI tool, but the texts are too long for the context window. I wonder if there are good prompts to compress (summarize) the transcripts in a way that no details or key concepts are lost but the ...
2025-07-08T21:40:21
https://www.reddit.com/r/LocalLLaMA/comments/1lv1763/prompt_to_compress_transcripts/
simondueckert
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv1763
false
null
t3_1lv1763
/r/LocalLLaMA/comments/1lv1763/prompt_to_compress_transcripts/
false
false
self
5
null
Are there tools to make a model respond to itself using different system prompts ?
1
Are there tools to make a model respond to itself using different system prompts ? Maybe supporting multiple system prompts (ie multiple characters). Maybe something with nodes and conditions like for image generation.
2025-07-08T21:28:41
https://www.reddit.com/r/LocalLLaMA/comments/1lv0wvw/are_there_tools_to_make_a_model_respond_to_itself/
marvellousBeing
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv0wvw
false
null
t3_1lv0wvw
/r/LocalLLaMA/comments/1lv0wvw/are_there_tools_to_make_a_model_respond_to_itself/
false
false
self
1
null
Claude / GPT4 keeps breaking JSON formatting. Anyone find a real fix?
0
I'm trying to process scraped HTML with Claude and it keeps hallucinating and messing up the keys. Even when I specify the schema, it adds garbage. Has anyone found a prompt trick, system message, or post-processing fix that reliably works? (I tried regex cleanup but it's shaky.)
2025-07-08T21:26:04
https://www.reddit.com/r/LocalLLaMA/comments/1lv0ukq/claude_gpt4_keeps_breaking_json_formatting_anyone/
_obhodro_chele_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv0ukq
false
null
t3_1lv0ukq
/r/LocalLLaMA/comments/1lv0ukq/claude_gpt4_keeps_breaking_json_formatting_anyone/
false
false
self
0
null
how can i setup flux1-kontext-dev-Q5_K_M.gguf
1
[removed]
2025-07-08T20:58:15
https://www.reddit.com/r/LocalLLaMA/comments/1lv05bg/how_can_i_setup_flux1kontextdevq5_k_mgguf/
LoquatOk7426
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lv05bg
false
null
t3_1lv05bg
/r/LocalLLaMA/comments/1lv05bg/how_can_i_setup_flux1kontextdevq5_k_mgguf/
false
false
self
1
null
Thoughts on local LLM & Proxmox homelab using Chinese x99 dual Xeon board + 2x3090
1
Hi everyone, I'm new to LLMs and thinking about the following build to use for local LLM work (on a VM using PCI passthrough for GPUs) and a Proxmox homelab (some VMs and/or containers for software development, tinkering, etc.) combo. Could you please share your thoughts? Thanks in advance... **Mainboard:** JGINYUE X99-8D4/2...
2025-07-08T20:23:05
https://www.reddit.com/r/LocalLLaMA/comments/1luz92k/thoughts_on_local_llm_proxmox_homelab_using/
Long-Caterpillar675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1luz92k
false
null
t3_1luz92k
/r/LocalLLaMA/comments/1luz92k/thoughts_on_local_llm_proxmox_homelab_using/
false
false
https://a.thumbs.redditm…_Gm-PfzPFH28.jpg
1
null