| column | dtype | min | max |
|---|---|---|---|
| title | stringlengths | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | stringlengths | 0 | 41.5k |
| created | timestamp[ns]date | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | stringlengths | 0 | 878 |
| author | stringlengths | 3 | 20 |
| domain | stringlengths | 0 | 82 |
| edited | timestamp[ns]date | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | stringclasses | 7 values | |
| id | stringlengths | 7 | 7 |
| locked | bool | 2 classes | |
| media | stringlengths | 646 | 1.8k |
| name | stringlengths | 10 | 10 |
| permalink | stringlengths | 33 | 82 |
| spoiler | bool | 2 classes | |
| stickied | bool | 2 classes | |
| thumbnail | stringlengths | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | stringlengths | 301 | 5.01k |

title: The new design in DeepSeek V3.1
score: 205 · ups: 205 · author: nekofneko · domain: self.LocalLLaMA
created: 2025-08-19T16:47:11 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1munvj6 · name: t3_1munvj6
url: https://www.reddit.com/r/LocalLLaMA/comments/1munvj6/the_new_design_in_deepseek_v31/
permalink: /r/LocalLLaMA/comments/1munvj6/the_new_design_in_deepseek_v31/
thumbnail: self
selftext: I just pulled the [V3.1-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base) configs and compared to V3-Base They add four new special tokens <|search▁begin|> (id: 128796) <|search▁end|> (id: 128797) <think> (id: 128798) </think> (id: 128799) And I noticed that V3.1 on the web version actively searc...
media: null
preview: {'enabled': False, 'images': [{'id': 'TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y.png?width=108&crop=smart&auto=webp&s=4188a7c062c65c9a6a20eb12a38b07612d8b8590', 'width': 108}, {'height': 116, 'url': 'h...

title: Local Potato Llama 3.2 3B vibes on my rx 5500 xt 💀
score: 2 · ups: 2 · author: Afraid-Subject5822 · domain: /r/LocalLLaMA/comments/1munu35/local_potato_llama_32_3b_vibes_on_my_rx_5500_xt/
created: 2025-08-19T16:45:46 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1munu35 · name: t3_1munu35
url: https://v.redd.it/vztbtsa980kf1
permalink: /r/LocalLLaMA/comments/1munu35/local_potato_llama_32_3b_vibes_on_my_rx_5500_xt/
thumbnail: https://external-preview…65cff1eb76883827
selftext: I need help with my UI text stream 😂 does anybody know what should I set the type writer at... I set my pause "..." "!" "?" All at 500ms.. I don't want to machine gun my web browser with my rx 5500 xt 436Gt/s and 49 tokens per second.. feels jittery.. oh yeah this is my own custom engine tho.. I just need suggestions i...
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vztbtsa980kf1/DASHPlaylist.mpd?a=1758343553%2CYmY0NDZmY2NhMGZkMTc5NDVjZjBhMDM2YmI5ZGZiMWI5N2Q4MmY5YTJlNzVkY2Y4YTUyZmEyYTFmYjlmMjNiMg%3D%3D&v=1&f=sd', 'duration': 141, 'fallback_url': 'https://v.redd.it/vztbtsa980kf1/DASH_1080.mp4?source=fallback', '...
preview: {'enabled': False, 'images': [{'id': 'dXZ0OW5zYTk4MGtmMUqhYGZkDjZlONcqdTCXCIUzuYrWpiNBW1KtXKmvYOEe', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dXZ0OW5zYTk4MGtmMUqhYGZkDjZlONcqdTCXCIUzuYrWpiNBW1KtXKmvYOEe.png?width=108&crop=smart&format=pjpg&auto=webp&s=529f8ae398a45e46025c29064551c0df43f3...

title: azzurra-voice is a new State-of-the-Art Italian Text-to-Speech model
score: 18 · ups: 18 · author: poppear · domain: blog.cartesia.one
created: 2025-08-19T16:43:16 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1munrls · name: t3_1munrls
url: https://blog.cartesia.one/posts/introducing-azzurra-voice/
permalink: /r/LocalLLaMA/comments/1munrls/azzurravoice_is_a_new_stateoftheart_italian/
thumbnail: default
media: null
preview: null

title: Added Emotional Reactions to My Chatbot — Here’s How It Looks
score: 12 · ups: 12 · author: RIPT1D3_Z · domain: i.redd.it
created: 2025-08-19T16:42:47 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1munr40 · name: t3_1munr40
url: https://i.redd.it/z3jjvs7t80kf1.gif
permalink: /r/LocalLLaMA/comments/1munr40/added_emotional_reactions_to_my_chatbot_heres_how/
thumbnail: default
selftext: I’ve been building my own AI chatbot platform solo, and just added a fun new feature - prompts that can dynamically change a character’s emotions. In the GIF, you’ll see a chat where the character’s avatar changes from neutral → surprised → happy depending on the flow of the conversation. This opens up a lot ...
media: null
preview: {'enabled': True, 'images': [{'id': 'z3jjvs7t80kf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/z3jjvs7t80kf1.gif?width=108&crop=smart&format=png8&s=d1072278ee36a2f7b7f9cb1d3084180e8de77a08', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/z3jjvs7t80kf1.gif?width=216&crop=smart&format...

title: is this a new META or fake META: META AI AI by a_n_k_i_t_1_5
score: 1 · ups: 1 · author: ExcitementCold2098 · domain: reddit.com
created: 2025-08-19T16:26:52 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1munb7r · name: t3_1munb7r
url: https://www.reddit.com/r/humanresources/comments/1l9fqtn/na_ai_generated_grievances_how_are_you_dealing/n9jxr27/
permalink: /r/LocalLLaMA/comments/1munb7r/is_this_a_new_meta_or_fake_meta_meta_ai_ai_by_a_n/
thumbnail: default
selftext: [removed]
media: null
preview: null

title: Generating code with gpt-oss-120b on Strix Halo with ROCm
score: 78 · ups: 78 · author: jfowers_amd · domain: /r/LocalLLaMA/comments/1mumpub/generating_code_with_gptoss120b_on_strix_halo/
created: 2025-08-19T16:05:34 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mumpub · name: t3_1mumpub
url: https://v.redd.it/pnap0vvk10kf1
permalink: /r/LocalLLaMA/comments/1mumpub/generating_code_with_gptoss120b_on_strix_halo/
thumbnail: https://external-preview…c117a370f516025b
selftext: I’ve seen a few posts asking about how to get gpt-oss models running on AMD devices. This guide gives a quick 3-minute overview of how it works on Strix Halo (Ryzen AI MAX 395). The same steps work for gpt-oss-20b, and many other models, on Radeon 7000/9000 GPUs as well. ## Detailed Instructions 1. Install and run L...
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pnap0vvk10kf1/DASHPlaylist.mpd?a=1758341139%2CN2RlZjNjNGRjNmY3MDA2NmFhNmViZGRlMzJmMzFkYjMyODA3ZDE0YmExZjk5YmJjNmZkYTJiMGIyMjZiNTdmMQ%3D%3D&v=1&f=sd', 'duration': 180, 'fallback_url': 'https://v.redd.it/pnap0vvk10kf1/DASH_1080.mp4?source=fallback', '...
preview: {'enabled': False, 'images': [{'id': 'MTBtNjc4d2sxMGtmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MTBtNjc4d2sxMGtmMVpKx18zu2Al60haJHCIipoecy1_uH38KnrawZp01IuI.png?width=108&crop=smart&format=pjpg&auto=webp&s=c24f1f6e6483dd7dea84624d38e86bc4ed905...

title: DeepSeek V3.1 Blogpost from another site but older
score: 0 · ups: 0 · author: ghgi_ · domain: self.LocalLLaMA
created: 2025-08-19T16:04:39 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mumowd · name: t3_1mumowd
url: https://www.reddit.com/r/LocalLLaMA/comments/1mumowd/deepseek_v31_blogpost_from_another_site_but_older/
permalink: /r/LocalLLaMA/comments/1mumowd/deepseek_v31_blogpost_from_another_site_but_older/
thumbnail: self
selftext: [https://deepseek.ai/blog/deepseek-v31](https://deepseek.ai/blog/deepseek-v31) "up to 43% improvement in multi-step reasoning expanded context window of 1 million tokens now supports over 100 languages with near-native proficiency 38% reduction in hallucinations compared to our previous model 560 billion param...
media: null
preview: null

title: Don't think Cloudflare's AI pay-per-crawl will succeed
score: 0 · ups: 0 · author: ReditusReditai · domain: developerwithacat.com
created: 2025-08-19T15:59:04 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mumj3w · name: t3_1mumj3w
url: https://developerwithacat.com/blog/202507/cloudflare-pay-per-crawl/
permalink: /r/LocalLLaMA/comments/1mumj3w/dont_think_cloudflares_ai_paypercrawl_will_succeed/
thumbnail: default
selftext: Saw there were discussions here about this product release from Cloudflare, so I figured I should share what I wrote about it on my blog. The TLDR reasons I don't think it'll work are... * hard to fully block scrapers * pricing dynamics (charge too high -> LLM devs either bypass or ignore, but publishers won't use it ...
media: null
preview: {'enabled': False, 'images': [{'id': 'vvpILw2SOV7L7p2TySD15mbIgcxTJce8zyOoizjVfhA', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/vvpILw2SOV7L7p2TySD15mbIgcxTJce8zyOoizjVfhA.jpeg?width=108&crop=smart&auto=webp&s=002cc845000846ff5a13411606b35c5885f870cd', 'width': 108}, {'height': 115, 'url': '...

title: Tried mixing local LLM + face recognition just for fun (wild results)
score: 258 · ups: 258 · author: yeahiiiiiii · domain: self.LocalLLaMA
created: 2025-08-19T15:54:45 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mumext · name: t3_1mumext
url: https://www.reddit.com/r/LocalLLaMA/comments/1mumext/tried_mixing_local_llm_face_recognition_just_for/
permalink: /r/LocalLLaMA/comments/1mumext/tried_mixing_local_llm_face_recognition_just_for/
thumbnail: self
selftext: So I’ve been tinkering a lot with running models locally (mostly LLaMA variants + some vision stuff). I like keeping things offline when possible, just feels better knowing my data isn’t flying around random servers. Over the weekend I got curious… what if I combine face matching with a local LLM? Like, have the LLM ...
media: null
preview: null

title: gpt-oss-20b-surviveV1
score: 13 · ups: 13 · author: lolzinventor · domain: self.LocalLLaMA
created: 2025-08-19T15:41:28 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mum1fb · name: t3_1mum1fb
url: https://www.reddit.com/r/LocalLLaMA/comments/1mum1fb/gptoss20bsurvivev1/
permalink: /r/LocalLLaMA/comments/1mum1fb/gptoss20bsurvivev1/
thumbnail: self
selftext: A fine-tuned version of GPT-OSS-20B specifically for survival-related discussions. **Base model:** huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated **Link:** [https://huggingface.co/lolzinventor/gpt-oss-surviveV1](https://huggingface.co/lolzinventor/gpt-oss-surviveV1) **Key findings:** * Still provides refusals on con...
media: null
preview: {'enabled': False, 'images': [{'id': 'ywJWp9xUtVcS0_-rkcC3iNz2U1U9b8hPNfnPvHyPCyM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ywJWp9xUtVcS0_-rkcC3iNz2U1U9b8hPNfnPvHyPCyM.png?width=108&crop=smart&auto=webp&s=b54bf9210101656baa6b62e9ec8931da14c606d2', 'width': 108}, {'height': 116, 'url': 'h...

title: Google is also untrustworthy
score: 28 · ups: 28 · author: Charuru · domain: i.redd.it
created: 2025-08-19T15:28:08 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mulnzj · name: t3_1mulnzj
url: https://i.redd.it/exvjbuqdvzjf1.png
permalink: /r/LocalLLaMA/comments/1mulnzj/google_is_also_untrustworthy/
thumbnail: default
media: null
preview: {'enabled': True, 'images': [{'id': 'exvjbuqdvzjf1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/exvjbuqdvzjf1.png?width=108&crop=smart&auto=webp&s=ecf39aa86c9ac72f5f39ff13603f68b4f72253b4', 'width': 108}, {'height': 150, 'url': 'https://preview.redd.it/exvjbuqdvzjf1.png?width=216&crop=smart&auto=web...

title: Which is better? 4B 4Q or 12B 2Q
score: 1 · ups: 1 · author: MihinMUD · domain: self.LocalLLaMA
created: 2025-08-19T15:14:06 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mul9r4 · name: t3_1mul9r4
url: https://www.reddit.com/r/LocalLLaMA/comments/1mul9r4/which_is_better_4b_4q_or_12b_2q/
permalink: /r/LocalLLaMA/comments/1mul9r4/which_is_better_4b_4q_or_12b_2q/
thumbnail: self
selftext: I know that higher parameters with lower Q is better than lower parameters with higher Q. but also anything under 4Q quickly becomes unusable. I want to know if someone have tried a 12B 2Q. I don't want to download one if it's not any better than a 4B 4Q. I'm kind of a hobbyist when comes to local AI, and my knowledge ...
media: null
preview: null

title: how i use ai to study fine art (and it’s more helpful than it sounds)
score: 1 · ups: 1 · author: Own_View3337 · domain: self.LocalLLaMA
created: 2025-08-19T15:10:01 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mul5nj · name: t3_1mul5nj
url: https://www.reddit.com/r/LocalLLaMA/comments/1mul5nj/how_i_use_ai_to_study_fine_art_and_its_more/
permalink: /r/LocalLLaMA/comments/1mul5nj/how_i_use_ai_to_study_fine_art_and_its_more/
thumbnail: self
selftext: [removed]
media: null
preview: {'enabled': False, 'images': [{'id': 'qiQHsEpGJo06MEuSRT3ug0ywVHvdcnjUauSn8Bd_7AE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/qiQHsEpGJo06MEuSRT3ug0ywVHvdcnjUauSn8Bd_7AE.png?width=108&crop=smart&auto=webp&s=1284b066fcc7adb996fe13fb37dacf82c67d971e', 'width': 108}, {'height': 122, 'url': 'h...

title: Deepseek-V3.1-Base released
score: 75 · ups: 75 · author: Fun-Doctor6855 · domain: huggingface.co
created: 2025-08-19T15:09:09 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mul4sx · name: t3_1mul4sx
url: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base
permalink: /r/LocalLLaMA/comments/1mul4sx/deepseekv31base_released/
thumbnail: https://external-preview…c9e54f5e01fbcafa
selftext: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base
media: null
preview: {'enabled': False, 'images': [{'id': 'TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y.png?width=108&crop=smart&auto=webp&s=4188a7c062c65c9a6a20eb12a38b07612d8b8590', 'width': 108}, {'height': 116, 'url': 'h...

title: Any way to give the model the ability to send me a message unprompted?
score: 0 · ups: 0 · author: aaaaaaeeea · domain: self.LocalLLaMA
created: 2025-08-19T15:08:37 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mul4b3 · name: t3_1mul4b3
url: https://www.reddit.com/r/LocalLLaMA/comments/1mul4b3/any_way_to_give_the_model_the_ability_to_send_me/
permalink: /r/LocalLLaMA/comments/1mul4b3/any_way_to_give_the_model_the_ability_to_send_me/
thumbnail: self
selftext: Like every minute there is a x% chance that it sends it by itself. Using LM Studio.
media: null
preview: null

title: 🤗 DeepSeek-V3.1-Base
score: 300 · ups: 300 · author: newsletternew · domain: self.LocalLLaMA
created: 2025-08-19T15:01:07 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mukwq6 · name: t3_1mukwq6
url: https://www.reddit.com/r/LocalLLaMA/comments/1mukwq6/deepseekv31base/
permalink: /r/LocalLLaMA/comments/1mukwq6/deepseekv31base/
thumbnail: self
selftext: The v3.1 base model is here: [https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base](https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base)
media: null
preview: {'enabled': False, 'images': [{'id': 'TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y.png?width=108&crop=smart&auto=webp&s=4188a7c062c65c9a6a20eb12a38b07612d8b8590', 'width': 108}, {'height': 116, 'url': 'h...

title: Which PSU for a dual RTX Pro 6000 build?
score: 2 · ups: 2 · author: kitgary · domain: self.LocalLLaMA
created: 2025-08-19T14:51:31 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muknd5 · name: t3_1muknd5
url: https://www.reddit.com/r/LocalLLaMA/comments/1muknd5/which_psu_for_a_dual_rtx_pro_6000_build/
permalink: /r/LocalLLaMA/comments/1muknd5/which_psu_for_a_dual_rtx_pro_6000_build/
thumbnail: self
selftext: I am building an AI workstation with dual RTX Pro 6000? I already have ASRock 1650w PSU, my CPU is 9950x3d, is it enough to power? Or I should use a 2000w or 2200w PSU?
media: null
preview: null

title: deepseek-ai/DeepSeek-V3.1-Base · Hugging Face
score: 799 · ups: 799 · author: xLionel775 · domain: huggingface.co
created: 2025-08-19T14:49:14 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mukl2a · name: t3_1mukl2a
url: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Base
permalink: /r/LocalLLaMA/comments/1mukl2a/deepseekaideepseekv31base_hugging_face/
thumbnail: default
media: null
preview: {'enabled': False, 'images': [{'id': 'TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TF0v-SFT5DAKs6neF39KH5oR_BZ__J6Srmsxz1t_P1Y.png?width=108&crop=smart&auto=webp&s=4188a7c062c65c9a6a20eb12a38b07612d8b8590', 'width': 108}, {'height': 116, 'url': 'h...

title: Just open-sourced RightNow CLI :)
score: 1 · ups: 1 · author: Sure_Storm_9129 · domain: self.LocalLLaMA
created: 2025-08-19T14:48:27 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mukkah · name: t3_1mukkah
url: https://www.reddit.com/r/LocalLLaMA/comments/1mukkah/just_opensourced_rightnow_cli/
permalink: /r/LocalLLaMA/comments/1mukkah/just_opensourced_rightnow_cli/
thumbnail: https://b.thumbs.redditm…CAhCVb0hFTvM.jpg
selftext: [removed]
media: null
preview: null

title: AI query handling
score: 1 · ups: 1 · author: Yusso_17 · domain: self.LocalLLaMA
created: 2025-08-19T14:44:12 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mukg3q · name: t3_1mukg3q
url: https://www.reddit.com/r/LocalLLaMA/comments/1mukg3q/ai_query_handling/
permalink: /r/LocalLLaMA/comments/1mukg3q/ai_query_handling/
thumbnail: self
selftext: How do you make AI break a query into smaller bits? so its doesnt violate tokens?
media: null
preview: null

title: anything better than gemma-3-27b-it for analyzing and summarizing long texts ?
score: 3 · ups: 3 · author: greenreddits · domain: self.LocalLLaMA
created: 2025-08-19T14:39:00 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mukayg · name: t3_1mukayg
url: https://www.reddit.com/r/LocalLLaMA/comments/1mukayg/anything_better_than_gemma327bit_for_analyzing/
permalink: /r/LocalLLaMA/comments/1mukayg/anything_better_than_gemma327bit_for_analyzing/
thumbnail: self
selftext: hi have been running this 16GB model for a while now on my ARM M1 max mac through LM Studio and I'm rather satisfied with the results. Was wondering whether there would be any better and up to date model to tackle the job, especially in non-English texts (mainly French) ? I host the model on an external SSD, so it ...
media: null
preview: null

title: Looking for an online service for training without surprising charges
score: 1 · ups: 1 · author: ResponsibleTruck4717 · domain: self.LocalLLaMA
created: 2025-08-19T14:11:06 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mujjtp · name: t3_1mujjtp
url: https://www.reddit.com/r/LocalLLaMA/comments/1mujjtp/looking_for_an_online_service_for_training/
permalink: /r/LocalLLaMA/comments/1mujjtp/looking_for_an_online_service_for_training/
thumbnail: self
selftext: I want to mess around with fine tuning / training, what I'm looking for is reliable service that I can charge money to it, and I can not be over charged no matter what. I want something simple like using Colab or Jupiter notebook.
media: null
preview: null

title: Lightweight Open-Source Models for Document and Email Data Extraction
score: 2 · ups: 2 · author: Technical-Ocelot366 · domain: self.LocalLLaMA
created: 2025-08-19T13:57:17 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muj6mu · name: t3_1muj6mu
url: https://www.reddit.com/r/LocalLLaMA/comments/1muj6mu/lightweight_opensource_models_for_document_and/
permalink: /r/LocalLLaMA/comments/1muj6mu/lightweight_opensource_models_for_document_and/
thumbnail: self
selftext: Can you suggest an open-source model for document and email data extraction that is lightweight (small in size), easy to run locally and take to production, and suitable for structured information extraction (e.g., JSON output
media: null
preview: null

title: I built real-time course correction for Claude Code... and it's also a Tamagotchi
score: 3 · ups: 3 · author: Standard_Excuse7988 · domain: i.redd.it
created: 2025-08-19T13:54:23 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muj3z8 · name: t3_1muj3z8
url: https://i.redd.it/zq6azo9qezjf1.jpeg
permalink: /r/LocalLLaMA/comments/1muj3z8/i_built_realtime_course_correction_for_claude/
thumbnail: default
selftext: I built a system that actually **BLOCKS** Claude from doing things you didn't ask for. Real-time violation detection with immediate intervention. **How it works:** EVERY interaction gets analyzed - every "I'll help you with that", every tool call, every single message. Whether Claude is thinking, reading a file, or tr...
media: null
preview: {'enabled': True, 'images': [{'id': 'zq6azo9qezjf1', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/zq6azo9qezjf1.jpeg?width=108&crop=smart&auto=webp&s=f4cd57f3cc946ff05c84607c3de216147290487f', 'width': 108}, {'height': 49, 'url': 'https://preview.redd.it/zq6azo9qezjf1.jpeg?width=216&crop=smart&auto=we...

title: If your MCP is an API wrapper you are doing it wrong
score: 0 · ups: 0 · author: juanviera23 · domain: self.LocalLLaMA
created: 2025-08-19T13:45:13 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muivf3 · name: t3_1muivf3
url: https://www.reddit.com/r/LocalLLaMA/comments/1muivf3/if_your_mcp_is_an_api_wrapper_you_are_doing_it/
permalink: /r/LocalLLaMA/comments/1muivf3/if_your_mcp_is_an_api_wrapper_you_are_doing_it/
thumbnail: self
selftext: Originally saw this post on [r/mcp](https://www.reddit.com/r/mcp/) by r/[WallabyInDisguise](https://www.reddit.com/user/WallabyInDisguise/) and agreed so much I wanted to share it here *I've been building with MCP since it launched, and I keep seeing the same mistakes everywhere. Most companies are taking the easy pat...
media: null
preview: null

title: GPT OSS 20B through ollama with codex cli has really low performance
score: 9 · ups: 9 · author: Markronom · domain: self.LocalLLaMA
created: 2025-08-19T13:41:01 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muirjt · name: t3_1muirjt
url: https://www.reddit.com/r/LocalLLaMA/comments/1muirjt/gpt_oss_20b_through_ollama_with_codex_cli_has/
permalink: /r/LocalLLaMA/comments/1muirjt/gpt_oss_20b_through_ollama_with_codex_cli_has/
thumbnail: self
selftext: I feel like I'm missing something here. So it's clear to me that gpt 20B is a small model. But it seems completely useless in codex cli. I even struggle to make it create a test file. I was hoping for it to be able to make simple, clearly defined file changes at least, as it runs very fast on my machine. The bad output...
media: null
preview: null

title: Nvidia nemotron gguf
score: 4 · ups: 4 · author: Just_Investment6769 · domain: self.LocalLLaMA
created: 2025-08-19T13:22:19 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muiar8 · name: t3_1muiar8
url: https://www.reddit.com/r/LocalLLaMA/comments/1muiar8/nvidia_nemotron_gguf/
permalink: /r/LocalLLaMA/comments/1muiar8/nvidia_nemotron_gguf/
thumbnail: self
selftext: Will there be nvidia nemotron gguf version? Do we need to wait?
media: null
preview: null

title: Phi-Omni-ST Implementation
score: 0 · ups: 0 · author: Omar_Alsaabi · domain: self.LocalLLaMA
created: 2025-08-19T13:19:13 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mui7xu · name: t3_1mui7xu
url: https://www.reddit.com/r/LocalLLaMA/comments/1mui7xu/phiomnist_implimentation/
permalink: /r/LocalLLaMA/comments/1mui7xu/phiomnist_implimentation/
thumbnail: self
selftext: Does anyone know what happened to the Phi-Omni-ST model? I can't find any weights or implementation code for the mode
media: null
preview: null

title: Is there a project that is basically an open source version of lovable, v0 be vercel...
score: 0 · ups: 0 · author: anedisi · domain: self.LocalLLaMA
created: 2025-08-19T13:19:09 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mui7vm · name: t3_1mui7vm
url: https://www.reddit.com/r/LocalLLaMA/comments/1mui7vm/is_there_a_project_that_is_basically_an_open/
permalink: /r/LocalLLaMA/comments/1mui7vm/is_there_a_project_that_is_basically_an_open/
thumbnail: self
selftext: I don't see here any mention of models/projects that are able to replicate, basically uploading a wireframe and text and building a prototype with it live with a preview on the right side. And then that can be prompted to change things in the design. Im a dev turned product person without (proper) design skills so thi...
media: null
preview: null

title: Best model to fine-tune with my JSONL dataset?
score: 0 · ups: 0 · author: RedoHawk · domain: self.LocalLLaMA
created: 2025-08-19T13:07:38 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muhxgi · name: t3_1muhxgi
url: https://www.reddit.com/r/LocalLLaMA/comments/1muhxgi/best_model_to_finetune_with_my_jsonl_dataset/
permalink: /r/LocalLLaMA/comments/1muhxgi/best_model_to_finetune_with_my_jsonl_dataset/
thumbnail: self
selftext: Hey guys, I’ve got a JSONL file ready for fine-tuning. it’s basically a bunch of prompts like *“Explain Super Saiyan 4 from Dragon Ball”* paired with detailed explanations. The idea is to train a model that not only gives accurate info but also explains things in a specific style (more narrative and engaging, like I’d...
media: null
preview: null

title: Creating Pixel Art Video Scenes
score: 1 · ups: 1 · author: Wild_Wafer313 · domain: self.LocalLLaMA
created: 2025-08-19T12:55:53 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muhn1o · name: t3_1muhn1o
url: https://www.reddit.com/r/LocalLLaMA/comments/1muhn1o/creating_pixel_art_video_scenes/
permalink: /r/LocalLLaMA/comments/1muhn1o/creating_pixel_art_video_scenes/
thumbnail: self
selftext: Does anybody have experience in creating pixel art video scenes using llama or any other model and can recommend a good model for this task ?
media: null
preview: null

title: Why do you use Local LLMs?
score: 0 · ups: 0 · author: j0j0n4th4n · domain: self.LocalLLaMA
created: 2025-08-19T12:46:44 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muhfcv · name: t3_1muhfcv
url: https://www.reddit.com/r/LocalLLaMA/comments/1muhfcv/why_do_you_use_local_llms/
permalink: /r/LocalLLaMA/comments/1muhfcv/why_do_you_use_local_llms/
thumbnail: self
selftext: I'm not gonna lie, I don't really understand how they are graded and how good each LLM model is in relation to others. Personally I use small models (7B or less, that is what I can run locally) for roleplay and for a personal project to implement a bot to play Minetest in singleplayer (a game similar to Minecraft). ...
media: null
preview: null

title: AMD GPU suspend/resume, preserve loaded model (Linux)?
score: 5 · ups: 5 · author: morphles · domain: self.LocalLLaMA
created: 2025-08-19T12:39:25 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muh9g0 · name: t3_1muh9g0
url: https://www.reddit.com/r/LocalLLaMA/comments/1muh9g0/amd_gpu_suspendresume_preserve_loaded_model_linux/
permalink: /r/LocalLLaMA/comments/1muh9g0/amd_gpu_suspendresume_preserve_loaded_model_linux/
thumbnail: self
selftext: So for some time I have dual 7900 xtx pc, it works ok for models, whatever. But I generally do not use it much, cause startup, and just keeping it on for rather occasional model use is also not cost efficient for me. I'm wondering I can setup wake on lan from suspend. As far as I understand likely I can. But the questi...
media: null
preview: null

title: How MCP Connects AI Models to Edge Devices
score: 2 · ups: 2 · author: No-Abies7108 · domain: glama.ai
created: 2025-08-19T12:38:05 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muh8co · name: t3_1muh8co
url: https://glama.ai/blog/2025-08-19-bringing-ai-to-the-edge-mcp-for-iot
permalink: /r/LocalLLaMA/comments/1muh8co/how_mcp_connects_ai_models_to_edge_devices/
thumbnail: default
selftext: As developers, we all know the pain of wiring LLMs into real-world systems: endless glue code, brittle vendor APIs, and debugging nightmares every time something changes. The Model Context Protocol (MCP) is a new standard designed to solve that. It lets us expose sensors, APIs, or devices as schema-defined tools that m...
media: null
preview: {'enabled': False, 'images': [{'id': 'dm_f9yGb4tj80I93GxscHH5r9iQUP9PjnCWvl5wsN_0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/dm_f9yGb4tj80I93GxscHH5r9iQUP9PjnCWvl5wsN_0.png?width=108&crop=smart&auto=webp&s=23b92c5a7f12d2420619642f825114eba6a4a4b2', 'width': 108}, {'height': 113, 'url': 'h...

title: Need help with 5090 + 4090 dual GPU setup
score: 1 · ups: 1 · author: ate50eggs · domain: self.LocalLLaMA
created: 2025-08-19T12:27:58 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muh048 · name: t3_1muh048
url: https://www.reddit.com/r/LocalLLaMA/comments/1muh048/need_help_with_5090_4090_dual_gpu_setup/
permalink: /r/LocalLLaMA/comments/1muh048/need_help_with_5090_4090_dual_gpu_setup/
thumbnail: self
selftext: Hey everyone, I'm hoping someone here can help me get clarity on whether my current motherboard will actually support a two GPU setup, or if I'm just chasing a ghost here. Here's the deal: * Current Board: ASUS ROG STRIX X870 * CPU: AMD Ryzen 9 9950X3D * GPUs: RTX 5090 + RTX 4090 * Drives: 2x M.2 NVME (2TB and 4TB) ...
media: null
preview: null

title: llama.cpp + ngrok
score: 5 · ups: 5 · author: Illustrious-Swim9663 · domain: self.LocalLLaMA
created: 2025-08-19T12:17:17 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mugrii · name: t3_1mugrii
url: https://www.reddit.com/r/LocalLLaMA/comments/1mugrii/llamacpp_ngrok/
permalink: /r/LocalLLaMA/comments/1mugrii/llamacpp_ngrok/
thumbnail: https://b.thumbs.redditm…QXs_04QijMek.jpg
selftext: For llama.cpp to work on any device with ngrok or programs, it must have the http configuration. 1 - run llama.cpp https://preview.redd.it/w9ue9yzotyjf1.png?width=1320&format=png&auto=webp&s=543fb957b35de8cbdaa11f2ec6a922b16ba26bd2 2 - Follow Ngrok's instructions on their website. https://preview.redd.it/010alpy8w...
media: null
preview: {'enabled': False, 'images': [{'id': 'gPMH5N9P7LrlLK9XBWYWsOwYsEUDqDGG215n2qqPFuo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gPMH5N9P7LrlLK9XBWYWsOwYsEUDqDGG215n2qqPFuo.png?width=108&crop=smart&auto=webp&s=6b537c62b936b0a9242eb25a6a29f6f3a8df2640', 'width': 108}, {'height': 108, 'url': 'h...

title: Application form
score: 0 · ups: 0 · author: Boring-Jackfruit2962 · domain: self.LocalLLaMA
created: 2025-08-19T12:10:35 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mugm7a · name: t3_1mugm7a
url: https://www.reddit.com/r/LocalLLaMA/comments/1mugm7a/application_form/
permalink: /r/LocalLLaMA/comments/1mugm7a/application_form/
thumbnail: self
selftext: Anyone working grape 🍇 solution company
media: null
preview: null

title: How do the results from VRAM calculators you find on the web compare to the actual VRAM needed for inference on optimization heavy frameworks like vllm, SGlang, llama.cpp?
score: 0 · ups: 0 · author: Ashamed-Stretch-1675 · domain: self.LocalLLaMA
created: 2025-08-19T12:05:31 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mugia9 · name: t3_1mugia9
url: https://www.reddit.com/r/LocalLLaMA/comments/1mugia9/how_do_the_results_from_vram_calculators_you_find/
permalink: /r/LocalLLaMA/comments/1mugia9/how_do_the_results_from_vram_calculators_you_find/
thumbnail: self
selftext: I've been using VRAM calculators like this one [Can You Run This LLM? VRAM Calculator (Nvidia GPU and Apple Silicon)](https://apxml.com/tools/vram-calculator) to estimate the resource requirements for running LLMs. The calculators provide a great overview of VRAM, generation speed, and throughput for various configurat...
media: null
preview: null

title: The Factors That Make Indirect Prompt Injections Attacks Succeed
score: 1 · ups: 1 · author: CardanoMoon · domain: fogel.dev
created: 2025-08-19T11:44:30 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mug2i9 · name: t3_1mug2i9
url: https://www.fogel.dev/prompt_injection_cfs_framework
permalink: /r/LocalLLaMA/comments/1mug2i9/the_factors_that_make_indirect_prompt_injections/
thumbnail: default
media: null
preview: null

title: DeepSeek v3.1
score: 528 · ups: 528 · author: Just_Lifeguard_5033 · domain: i.redd.it
created: 2025-08-19T11:31:24 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muft1w · name: t3_1muft1w
url: https://i.redd.it/143veukbpyjf1.jpeg
permalink: /r/LocalLLaMA/comments/1muft1w/deepseek_v31/
thumbnail: default
selftext: It’s happening! DeepSeek online model version has been updated to V3.1, context length extended to 128k, welcome to test on the official site and app. API calling remains the same.
media: null
preview: {'enabled': True, 'images': [{'id': '143veukbpyjf1', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/143veukbpyjf1.jpeg?width=108&crop=smart&auto=webp&s=68377f55b764837a72a0837f69e1cdd0f9cbeef0', 'width': 108}, {'height': 55, 'url': 'https://preview.redd.it/143veukbpyjf1.jpeg?width=216&crop=smart&auto=we...

title: 5 Lessons from Evaluating AI Voice Agents
score: 29 · ups: 29 · author: Otherwise_Flan7339 · domain: self.LocalLLaMA
created: 2025-08-19T11:30:46 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mufslp · name: t3_1mufslp
url: https://www.reddit.com/r/LocalLLaMA/comments/1mufslp/5_lessons_from_evaluating_ai_voice_agents/
permalink: /r/LocalLLaMA/comments/1mufslp/5_lessons_from_evaluating_ai_voice_agents/
thumbnail: self
selftext: 1. **Latency matters more than anything** \- A 500ms delay feels tolerable in text. In voice, it feels broken. Testing latency across providers is a must. 2. **Edge cases are the real test** \- Scripted happy-path calls make every agent look good. The moment you throw in “interruptions” or background noise, big gaps ap...
media: null
preview: null

title: Local version of our CLI now processes multiple files and exploring privacy friendly options
score: 3 · ups: 3 · author: Interesting-Area6418 · domain: self.LocalLLaMA
created: 2025-08-19T11:05:41 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mufans · name: t3_1mufans
url: https://www.reddit.com/r/LocalLLaMA/comments/1mufans/local_version_of_our_cli_now_processes_multiple/
permalink: /r/LocalLLaMA/comments/1mufans/local_version_of_our_cli_now_processes_multiple/
thumbnail: self
selftext: Hi everyone A few days ago I shared the local version of my CLI that turns PDFs and docs into fine tuning datasets. The response was really cool around 50 stars and so many thoughtful suggestions. Really appreciate everyone who checked it out Based on your feedback we added multi file support. Now you can just point ...
media: null
preview: {'enabled': False, 'images': [{'id': 'DvZT8E-hjT8MKDjg9MvnMdmGKRIz8aRniojME9t9-DU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DvZT8E-hjT8MKDjg9MvnMdmGKRIz8aRniojME9t9-DU.png?width=108&crop=smart&auto=webp&s=83b50f5d3ff34cc43c1b71f93bec5fa950b8f090', 'width': 108}, {'height': 108, 'url': 'h...

title: NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
score: 33 · ups: 33 · author: lomero · domain: huggingface.co
created: 2025-08-19T11:00:19 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muf6ry · name: t3_1muf6ry
url: https://huggingface.co/stepfun-ai/NextStep-1-Large
permalink: /r/LocalLLaMA/comments/1muf6ry/nextstep1_toward_autoregressive_image_generation/
thumbnail: https://external-preview…17df9cf608594a2f
media: null
preview: {'enabled': False, 'images': [{'id': 'ojIYaD1O8xYRW9Q-A7BHJQx5N3b1m-3M6OVRuLj2lzI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ojIYaD1O8xYRW9Q-A7BHJQx5N3b1m-3M6OVRuLj2lzI.png?width=108&crop=smart&auto=webp&s=3d16f0a01f30cb9d09bfb4f495fa04df12c04bc0', 'width': 108}, {'height': 116, 'url': 'h...

title: NextStep-1: Toward Autoregressive Image Generation with Continuous Tokens at Scale
score: 1 · ups: 1 · author: lomero · domain: huggingface.co
created: 2025-08-19T10:58:42 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muf5n3 · name: t3_1muf5n3
url: https://huggingface.co/stepfun-ai/NextStep-1-Large-Pretrain
permalink: /r/LocalLLaMA/comments/1muf5n3/nextstep1_toward_autoregressive_image_generation/
thumbnail: default
media: null
preview: null

title: Backend for GLM 4.5 Air and the 96Gb Blackwell
score: 13 · ups: 13 · author: UltrMgns · domain: self.LocalLLaMA
created: 2025-08-19T10:54:35 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muf2zr · name: t3_1muf2zr
url: https://www.reddit.com/r/LocalLLaMA/comments/1muf2zr/backend_for_glm_45_air_and_the_96gb_blackwell/
permalink: /r/LocalLLaMA/comments/1muf2zr/backend_for_glm_45_air_and_the_96gb_blackwell/
thumbnail: self
selftext: Hi all, I've been struggling with this for quite a bit of time now. Last week I got my RTX Pro 6000 after \~ 7 months of saving cash. All I managed to compile as a backend is llama cpp, but I really want to get a proper backend working on it. Llama cpp struggles heavily with parallel requests, it accepts ANY api key...
media: null
preview: null

title: I built a tool to replace static API keys with short-lived credentials for agents
score: 3 · ups: 3 · author: HeyItsFudge · domain: agentvisa.dev
created: 2025-08-19T10:49:50 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1muezst · name: t3_1muezst
url: https://agentvisa.dev/
permalink: /r/LocalLLaMA/comments/1muezst/i_built_a_tool_to_replace_static_api_keys_with/
thumbnail: default
selftext: Hey everyone, Like many of you I've been experimenting a lot with local models and building agents. One thing that kept bothering me was the security around API keys. It feels like we're all just throwing secrets into .env files and hoping for the best which doesn't scale and is risky if an agent ever touches an exter...
media: null
preview: null

title: Save backup, safetensors or gguf?
score: 3 · ups: 3 · author: Macestudios32 · domain: self.LocalLLaMA
created: 2025-08-19T10:41:35 · edited: 1970-01-01T00:00:00 · gilded: 0 · gildings: {} · locked: false · spoiler: false · stickied: false
id: 1mueues · name: t3_1mueues
url: https://www.reddit.com/r/LocalLLaMA/comments/1mueues/save_backup_safetensors_or_gguf/
permalink: /r/LocalLLaMA/comments/1mueues/save_backup_safetensors_or_gguf/
thumbnail: self
selftext: Hello everyone, In order to save a backup, taking into account the use of LM Studio, llama.cpp and the like, which is better safetensors or gguf?. Safetensors is the way models are published, if you have enough machine it is not a problem to convert them to gguf, but if it exceeds your machine they can only be downl...
media: null
preview: null
And people ask why you need local models.
0
2025-08-19T10:36:00
https://i.redd.it/5p9wf01efyjf1.png
SuddenWerewolf7041
i.redd.it
1970-01-01T00:00:00
0
{}
1mueqxd
false
null
t3_1mueqxd
/r/LocalLLaMA/comments/1mueqxd/and_people_ask_why_you_need_local_models/
false
false
https://a.thumbs.redditm…2jCxS4wsYqb8.jpg
0
{'enabled': True, 'images': [{'id': '7jy9uManBGq2rUi4bOr4Of0WrAahdvu6Q_O0Om67YUg', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/5p9wf01efyjf1.png?width=108&crop=smart&auto=webp&s=906b836393fa1e6f5e447564c65223eaaf864bd4', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/5p9wf01efyjf1.png?...
When will low-cost Chinese GPUs hit the market?
147
I've heard of some Chinese GPUs, but I'm curious when they'll release low-cost alternatives that can seriously challenge NVIDIA 50xx dominance. Are there any indications that this will happen anytime soon? I'd love the hardware equivalent of a "deepseek moment" for OpenAI earlier this year.
2025-08-19T10:35:22
https://www.reddit.com/r/LocalLLaMA/comments/1mueqhs/when_will_lowcost_chinese_gpus_hit_the_market/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mueqhs
false
null
t3_1mueqhs
/r/LocalLLaMA/comments/1mueqhs/when_will_lowcost_chinese_gpus_hit_the_market/
false
false
self
147
null
Analyzed 10,000+ Reddit discussions about GPT-5: r/LocalLLaMA had a more measured response than other AI subs
1
[removed]
2025-08-19T10:22:46
https://www.reddit.com/r/LocalLLaMA/comments/1mueiln/analyzed_10000_reddit_discussions_about_gpt5/
feconroses
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mueiln
false
null
t3_1mueiln
/r/LocalLLaMA/comments/1mueiln/analyzed_10000_reddit_discussions_about_gpt5/
false
false
self
1
null
PC Build Inquiry for First-Time Fine-Tuner: 128GB vs 256GB RAM & The ECC Dilemma
0
Hello everyone, As I mentioned in my previous posts, I'm a first-time fine-tuner (and not a CS major, so I know it'll be tough lol). I'm building a PC and have run into a few issues. # My Build & Problem I currently own an **AMD Ryzen Threadripper 1950X** CPU and an **X399 Gaming 7** motherboard. I'm having a l...
2025-08-19T10:22:39
https://www.reddit.com/r/LocalLLaMA/comments/1mueij0/pc_build_inquiry_for_firsttime_finetuner_128gb_vs/
Patience2277
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mueij0
false
null
t3_1mueij0
/r/LocalLLaMA/comments/1mueij0/pc_build_inquiry_for_firsttime_finetuner_128gb_vs/
false
false
self
0
null
The Factors That Make Indirect Prompt Injections Attacks Succeed
1
I wrote a blog post breaking down which factors lead to successful indirect prompt injections. It builds off of work by Simon Willison, in which he identified which factors are necessary in the environment for prompt injections to succeed (the "lethal trifecta"). In this blog post, I specifically focus how the promp...
2025-08-19T09:52:40
https://www.reddit.com/r/LocalLLaMA/comments/1mudzwc/the_factors_that_make_indirect_prompt_injections/
CardanoMoon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mudzwc
false
null
t3_1mudzwc
/r/LocalLLaMA/comments/1mudzwc/the_factors_that_make_indirect_prompt_injections/
false
false
self
1
null
Would it be technically possible to set up a cloud service for llm where the provider cannot see user data?
1
Are there solutions for something similar like this via encryption? It would need to be impossible for the cloud provider to see the calculations on its own server without the user's private key and the solution would have to be open source and independently verified and most probably monitored. Would be very interes...
2025-08-19T09:02:31
https://www.reddit.com/r/LocalLLaMA/comments/1mud7hs/would_it_be_technically_possible_to_set_up_a/
Original_Alps23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mud7hs
false
null
t3_1mud7hs
/r/LocalLLaMA/comments/1mud7hs/would_it_be_technically_possible_to_set_up_a/
false
false
self
1
null
Working on a Metacognition System for my Small LLM!
0
I'm currently adding a metacognition system to my model. As I've been fleshing it out, it looks like it'll be able to help with the model's memory system, too! https://preview.redd.it/jlrdj6p6wxjf1.png?width=3400&format=png&auto=webp&s=236c2abb0826bbaf978d6108419b0f0429f13794 Yeah, it was intentional lol. It's des...
2025-08-19T08:51:44
https://www.reddit.com/r/LocalLLaMA/comments/1mud186/working_on_a_metacognition_system_for_my_small_llm/
Patience2277
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mud186
false
null
t3_1mud186
/r/LocalLLaMA/comments/1mud186/working_on_a_metacognition_system_for_my_small_llm/
false
false
https://b.thumbs.redditm…kTREEHMZqdmY.jpg
0
null
The Factors That Make Indirect Prompt Injections Succeed
0
2025-08-19T08:51:37
https://www.fogel.dev/prompt_injection_cfs_framework
CardanoMoon
fogel.dev
1970-01-01T00:00:00
0
{}
1mud160
false
null
t3_1mud160
/r/LocalLLaMA/comments/1mud160/the_factors_that_make_indirect_prompt_injections/
false
false
https://external-preview…f063615f4c1008c3
0
{'enabled': False, 'images': [{'id': 'bbs0DpZb_Jt7Az1Xtj697yOndch4o42nUvjkH4Ajtso', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/bbs0DpZb_Jt7Az1Xtj697yOndch4o42nUvjkH4Ajtso.jpeg?width=108&crop=smart&auto=webp&s=30fb53ae4b4e296ffe4e9c0050ed1ddc351fc188', 'width': 108}, {'height': 144, 'url': '...
GLM 4.5 Air Suddenly running 5-6x Slower on Hybrid CPU/RoCM inference.
9
I have a pc of the specs... CPU: 7900x RAM: 2x32gb 6000 mhz cl 30 GPU: 7900XTX I'm loading up a quant of GLM 4.5 air in llama cpp with.. `./build/bin/llama-cli -ngl 99 -sm none -m ~/models/unsloth/GLM-4.5-Air-GGUF/GLM-4.5-Air-IQ4_XS-00001-of-00002.gguf --flash-attn  --n-cpu-moe 34 -c 32000 -p " Hello"` This is ta...
2025-08-19T08:42:41
https://www.reddit.com/r/LocalLLaMA/comments/1mucw6t/glm_45_air_suddenly_running_56x_slower_on_hybrid/
ROS_SDN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mucw6t
false
null
t3_1mucw6t
/r/LocalLLaMA/comments/1mucw6t/glm_45_air_suddenly_running_56x_slower_on_hybrid/
false
false
self
9
null
ovis2.5 gguf when >:(
0
I have recently been made aware that despite [Ovis2.5](https://huggingface.co/AIDC-AI/Ovis2.5-9B) just coming out, there are still no GGUFs available for even the previous generation Ovis2 models. Apparently, Ovis2 or Ovis2.5 support has never been added to llama.cpp, despite the models being one of the best models for...
2025-08-19T08:34:34
https://www.reddit.com/r/LocalLLaMA/comments/1mucrov/ovis25_gguf_when/
airbus_a360_when
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mucrov
false
null
t3_1mucrov
/r/LocalLLaMA/comments/1mucrov/ovis25_gguf_when/
false
false
self
0
{'enabled': False, 'images': [{'id': 'WVvDuoktVm3xyjGE3ZPklLa6iGYoHWa6en0V0XmaQeE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WVvDuoktVm3xyjGE3ZPklLa6iGYoHWa6en0V0XmaQeE.png?width=108&crop=smart&auto=webp&s=1804803cb4754d66432727848aecc091d74ec7d8', 'width': 108}, {'height': 116, 'url': 'h...
SCAPO: community-scraped, practical tips for local LLMs
1
[removed]
2025-08-19T08:34:34
https://i.redd.it/ptj6nxuotxjf1.gif
Emergency_Little
i.redd.it
1970-01-01T00:00:00
0
{}
1mucroi
false
null
t3_1mucroi
/r/LocalLLaMA/comments/1mucroi/scapo_communityscraped_practical_tips_for_local/
false
false
default
1
{'enabled': True, 'images': [{'id': 'ptj6nxuotxjf1', 'resolutions': [{'height': 63, 'url': 'https://preview.redd.it/ptj6nxuotxjf1.gif?width=108&crop=smart&format=png8&s=6cfdc246ba5608a1b7ec0a11e1b8d242889f92fa', 'width': 108}, {'height': 126, 'url': 'https://preview.redd.it/ptj6nxuotxjf1.gif?width=216&crop=smart&format...
Which models are suitable for websearch?
7
I am using LibreChat, together with a local only Seaxng (search), firecrawl (scrape) and jina (reranker). I see that search, scraping, and reranking is active, but my current model ( qwen3:30b with 16k context window ) gets the data, but the results are missing my initial questions. To ensure that the model is not ...
2025-08-19T08:18:57
https://www.reddit.com/r/LocalLLaMA/comments/1mucj1p/which_models_are_suitable_for_websearch/
runsleeprepeat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mucj1p
false
null
t3_1mucj1p
/r/LocalLLaMA/comments/1mucj1p/which_models_are_suitable_for_websearch/
false
false
self
7
null
Building a RAG-based Bot with a large knowledge base.
5
Hello everyone, Recently I received some data from a website and am asked to develop a bot that can go through that data and answer questions. The data is a single json file (~1MB) and contains information related to projects initiated throughout the company. Each object in the json file has information like name of t...
2025-08-19T07:38:10
https://www.reddit.com/r/LocalLLaMA/comments/1mubvpf/building_a_ragbased_bot_with_a_large_knowledge/
champ_undisputed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mubvpf
false
null
t3_1mubvpf
/r/LocalLLaMA/comments/1mubvpf/building_a_ragbased_bot_with_a_large_knowledge/
false
false
self
5
null
MyKB — An On-Prem AI Knowledge Engine (OSS, self-healing, zero-trust)
1
[removed]
2025-08-19T07:26:17
https://www.reddit.com/r/LocalLLaMA/comments/1mubp2q/mykb_an_onprem_ai_knowledge_engine_oss/
One_Milk_7025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mubp2q
false
null
t3_1mubp2q
/r/LocalLLaMA/comments/1mubp2q/mykb_an_onprem_ai_knowledge_engine_oss/
false
false
self
1
null
Qwen Image Edit First Test (FREE) - This Just Changed AI Editing Forever
0
https://preview.redd.it/….be/yjY1YahSeBc)
2025-08-19T06:56:40
https://www.reddit.com/r/LocalLLaMA/comments/1mub7ep/qwen_image_edit_first_test_free_this_just_changed/
bipin_25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mub7ep
false
{'oembed': {'author_name': 'Codedigipt', 'author_url': 'https://www.youtube.com/@codedigiptbiplab', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/yjY1YahSeBc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyro...
t3_1mub7ep
/r/LocalLLaMA/comments/1mub7ep/qwen_image_edit_first_test_free_this_just_changed/
false
false
https://b.thumbs.redditm…AOOfe8jxxo1k.jpg
0
null
My PR that adds Mikupad (with extra features) as an alternative webUI for ik_llama.cpp
18
2025-08-19T06:03:36
https://github.com/ikawrakow/ik_llama.cpp/pull/558
AdventLogin2021
github.com
1970-01-01T00:00:00
0
{}
1muabfm
false
null
t3_1muabfm
/r/LocalLLaMA/comments/1muabfm/my_pr_that_adds_mikupad_with_extra_features_as_an/
false
false
default
18
null
GPT OSS quality on Nebius - fixed (update)
116
2025-08-19T05:47:34
https://www.reddit.com/gallery/1mua1k4
ai_devrel_eng
reddit.com
1970-01-01T00:00:00
0
{}
1mua1k4
false
null
t3_1mua1k4
/r/LocalLLaMA/comments/1mua1k4/gpt_oss_quality_on_nebius_fixed_update/
false
false
https://b.thumbs.redditm…Dc2Udh4wNqoE.jpg
116
null
GPT OSS quality on Nebius - fixed
2
(I work at Nebius) Just a quick update. Our GPT-OSS deployment underperformed on Artificial Analysis’ accuracy benchmarks (GPQA×16, AIME25×32, IFBench×8) [link to benchmark](https://artificialanalysis.ai/models/gpt-oss-120b/providers) | [X](https://x.com/ArtificialAnlys/status/1955102409044398415) GPT-OSS has ...
2025-08-19T05:40:53
https://www.reddit.com/r/LocalLLaMA/comments/1mu9xhq/gpt_oss_quality_on_nebius_fixed/
ai_devrel_eng
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9xhq
false
null
t3_1mu9xhq
/r/LocalLLaMA/comments/1mu9xhq/gpt_oss_quality_on_nebius_fixed/
false
false
self
2
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h...
I just want to run a server that can run all my GGUFs
0
Why does everything require me to sit and type commands into the terminal for an hour?! I know coders really like to make things unnecessarily complex and gatekeep technology. But come on, it doesn't have to be this complicated. I have an extra PC and want to set it up as a server so i don't bog down my everyday compu...
2025-08-19T05:40:26
https://www.reddit.com/r/LocalLLaMA/comments/1mu9x72/i_just_want_to_run_a_server_that_can_run_all_my/
OK-ButLikeWhy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9x72
false
null
t3_1mu9x72
/r/LocalLLaMA/comments/1mu9x72/i_just_want_to_run_a_server_that_can_run_all_my/
false
false
self
0
null
Generating json data with local llm
0
I want to generate a json based data which might contain around 5k entries. I was wondering if anyone has suggestions for open source models that I can run with llama cpp or ollama that are good for generating json? I have an Intel mac book pro 2017 version with 16Gb of RAM. Also,as far as I know, ollama doesn’t su...
2025-08-19T05:32:24
https://www.reddit.com/r/LocalLLaMA/comments/1mu9s9c/generating_json_data_with_local_llm/
IterationStation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9s9c
false
null
t3_1mu9s9c
/r/LocalLLaMA/comments/1mu9s9c/generating_json_data_with_local_llm/
false
false
self
0
null
Independent researcher built efficient attention mechanism - seeking feedback and arXiv help
13
Hey LocalLLaMA community! I know this isn't exactly about running models locally, but I figured you folks would appreciate efficiency improvements in AI models. I have been working on making Transformers more efficient. I developed something called "Condor" - a new attention mechanism that's way more computationally e...
2025-08-19T05:29:06
https://www.reddit.com/r/LocalLLaMA/comments/1mu9q27/independent_researcher_built_efficient_attention/
Perfect_Power815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9q27
false
null
t3_1mu9q27
/r/LocalLLaMA/comments/1mu9q27/independent_researcher_built_efficient_attention/
false
false
self
13
null
Seems like many open source models struggle with this.
39
Many open source models, except for the GPT-OSS model, fail to genuinely grasp the details of **recently published articles**. For example, if an article states, "last week this event happened...". When asked about the date of the event, it becomes lost, even though the news articles on websites show the dates when the...
2025-08-19T05:15:55
https://www.reddit.com/r/LocalLLaMA/comments/1mu9hpi/seems_like_many_open_source_models_struggle_with/
Neither_Egg_4773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9hpi
false
null
t3_1mu9hpi
/r/LocalLLaMA/comments/1mu9hpi/seems_like_many_open_source_models_struggle_with/
false
false
self
39
null
Difference Between AI Models
1
It's a basic research paper I did, I got suspended on Medium after uploading, so using GitHub for right now. I'd like some feedback, if possible. I'm new to writing and would really appreciate it. Thanks. [https://github.com/UrMagma/The-Difference-Between-AI-Models/blob/main/README.md](https://github.com/UrMagma/The-D...
2025-08-19T05:11:38
https://www.reddit.com/r/LocalLLaMA/comments/1mu9f31/difference_between_ai_models/
Melodic-Emphasis-707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9f31
false
null
t3_1mu9f31
/r/LocalLLaMA/comments/1mu9f31/difference_between_ai_models/
false
false
self
1
{'enabled': False, 'images': [{'id': 'Ty4PwpfJXy_YOoa9wtpyIw9t6zwRIpDBi5SjcRScWmE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ty4PwpfJXy_YOoa9wtpyIw9t6zwRIpDBi5SjcRScWmE.png?width=108&crop=smart&auto=webp&s=25b7f349ddfc48e54b6a23083afcd197343f3558', 'width': 108}, {'height': 108, 'url': 'h...
The best OCR for a machine like mine?
1
NVIDIA GeForce RTX 4060 Ti 8GB + 32gb Ram I won't be able to put everything in VRAM, but is there one that I can get working even if it's slow but is as accurate as possible? I tried Olmocr, but it seems there's no way to get it working with less than 15GB of VRAM, as if it couldn't be shared. A pain.
2025-08-19T05:07:49
https://www.reddit.com/r/LocalLLaMA/comments/1mu9coc/the_best_ocr_for_a_machine_like_mine/
9acca9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu9coc
false
null
t3_1mu9coc
/r/LocalLLaMA/comments/1mu9coc/the_best_ocr_for_a_machine_like_mine/
false
false
self
1
null
I am new to local Llama. Is there a way I get get an open source image generator or a video generator
0
All I have is Meta's Llama 3.2 with ollama. Is there a way to get a video generator and an image generator for local AI?
2025-08-19T04:50:47
https://www.reddit.com/r/LocalLLaMA/comments/1mu91qq/i_am_new_to_local_llama_is_there_a_way_i_get_get/
MomentumAndValue
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu91qq
false
null
t3_1mu91qq
/r/LocalLLaMA/comments/1mu91qq/i_am_new_to_local_llama_is_there_a_way_i_get_get/
false
false
self
0
null
New nvidia models 9B-v2-Base vs 9B-v2???
9
What is the difference between "9B-v2-Base" and "9B-v2"?? Here are their links respectively: [https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2](https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2) [https://huggingface.co/nvidia/NVIDIA-Nemotron-Nano-9B-v2-Base](https://huggingface.co/nvidia/NVIDIA-Ne...
2025-08-19T04:19:30
https://www.reddit.com/r/LocalLLaMA/comments/1mu8h32/new_nvidia_models_9bv2base_vs_9bv2/
Mr-Barack-Obama
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu8h32
false
null
t3_1mu8h32
/r/LocalLLaMA/comments/1mu8h32/new_nvidia_models_9bv2base_vs_9bv2/
false
false
self
9
{'enabled': False, 'images': [{'id': 'GiqzTuyH_eElt0yVAuFWAuvHSRjIIaLz2aN8rPQ0Z8s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GiqzTuyH_eElt0yVAuFWAuvHSRjIIaLz2aN8rPQ0Z8s.png?width=108&crop=smart&auto=webp&s=6ccbff12981d45a1e1ec4bde04a4cdbafc25ac0e', 'width': 108}, {'height': 116, 'url': 'h...
How to customize endpoints in Librechat when deployed using Railway?
0
I added the following commands in the Dockerfile: ``` COPY --chown=node:node ./librechat.yaml /app/librechat.yaml ``` I think this should be equivalent to what the documentation says: ``` services: api: volumes: - type: bind source: ./librechat.yaml target: /app/librechat.yaml ``` then Railway ...
2025-08-19T03:39:00
https://www.reddit.com/r/LocalLLaMA/comments/1mu7opl/how_to_customize_endpoints_in_librechat_when/
Ok-Finger280
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu7opl
false
null
t3_1mu7opl
/r/LocalLLaMA/comments/1mu7opl/how_to_customize_endpoints_in_librechat_when/
false
false
self
0
null
Dual RX 7900XTX GPUs for "AAA" 4K Gaming
0
Hello, I'm about to built my new gaming rig. The specs are below. You can see that I am pretty max out all component as possible as I can. Please kindly see and advise about GPU. **CPU - Ryzen 9 9950X3D** **RAM - G.Skill trident Z5 neo 4x48Gb Expo 6000Mhz** **Mobo - MSI MEG X870e Godlike** **PSU - Corsair AXi1600W...
2025-08-19T03:33:57
https://www.reddit.com/r/LocalLLaMA/comments/1mu7l2u/dual_rx_7900xtx_gpus_for_aaa_4k_gaming/
RunFit4976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu7l2u
false
null
t3_1mu7l2u
/r/LocalLLaMA/comments/1mu7l2u/dual_rx_7900xtx_gpus_for_aaa_4k_gaming/
false
false
self
0
null
LLM progress hasn't slowed down
0
2025-08-19T03:22:25
https://i.redd.it/r3b4py7i9wjf1.jpeg
auradragon1
i.redd.it
1970-01-01T00:00:00
0
{}
1mu7cnc
false
null
t3_1mu7cnc
/r/LocalLLaMA/comments/1mu7cnc/llm_progress_hasnt_slowed_down/
false
false
https://b.thumbs.redditm…qpD8aLMl9W9U.jpg
0
{'enabled': True, 'images': [{'id': '8MIfAsh_NV_cx1a0goajhVduQQMBIRgbjZTTP2N1H88', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/r3b4py7i9wjf1.jpeg?width=108&crop=smart&auto=webp&s=df1866c932526e16982328996a8e521bf46c77e0', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/r3b4py7i9wjf1.jp...
I'm running GPT-OSS-20b on Arc A770, LFM2VL on CPU and SmallThinker 4B on CPU
2
GPT-OSS-20b gets 40+ tps, LFM2VL and SmallThinker both get 30+ tps...Is there a better way or what should I try to do with these three models running at the same time? I'm perfectly content with the three running this way. I tried consolidating them all and just running Gemma 3 12b but it only ran at 10 tps with a litt...
2025-08-19T03:05:05
https://www.reddit.com/r/LocalLLaMA/comments/1mu6zd6/im_running_gptoss20b_on_arc_a770_lfm2vl_on_cpu/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu6zd6
false
null
t3_1mu6zd6
/r/LocalLLaMA/comments/1mu6zd6/im_running_gptoss20b_on_arc_a770_lfm2vl_on_cpu/
false
false
self
2
null
how i use domoai to batch upscale ai visuals for reels and tiktoks
0
for content creators juggling multiple posts per week, quality can drop fast. i used to post ai visuals as is, but compression made them blurry. now i batch upscale everything using [domo](https://www.domoai.app/home?referrer=website) before publishing. i generate base art in [mage](https://www.mage.space/) or [niji](h...
2025-08-19T02:43:22
https://www.reddit.com/r/LocalLLaMA/comments/1mu6j0f/how_i_use_domoai_to_batch_upscale_ai_visuals_for/
Gold_Negotiation9518
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu6j0f
false
null
t3_1mu6j0f
/r/LocalLLaMA/comments/1mu6j0f/how_i_use_domoai_to_batch_upscale_ai_visuals_for/
false
false
self
0
null
Help Running Gemma 3n Locally (Insufficient Memory Error?)
0
I'm trying to run Gemma 3n locally on a dev's app that comes with Gemma 3n. Every time I try to run it and use the model, after one use, there's an insufficient memory error that pops up along with attach failed. I checked my Console app (on Mac) but I can't find the issue and no idea how to fix it. Others don't seem t...
2025-08-19T02:33:12
https://www.reddit.com/r/LocalLLaMA/comments/1mu6bd0/help_running_gemma_3n_locally_insufficient_memory/
solarsflare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu6bd0
false
null
t3_1mu6bd0
/r/LocalLLaMA/comments/1mu6bd0/help_running_gemma_3n_locally_insufficient_memory/
false
false
self
0
null
Agentic coding tools with smaller system prompts?
9
Hey folks! Wondering if anyone here has thought about this I've been playing a bit recently with the new Qwen3 Coder 30B locally, and for being so small it's really impressive. Have even tried it with some of the Claude-Code-like agentic coding tools, like qwen's own, Claude Code Router, and opencode/crush, all with s...
2025-08-19T02:31:47
https://www.reddit.com/r/LocalLLaMA/comments/1mu6a9s/agentic_coding_tools_with_smaller_system_prompts/
Carbonite1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu6a9s
false
null
t3_1mu6a9s
/r/LocalLLaMA/comments/1mu6a9s/agentic_coding_tools_with_smaller_system_prompts/
false
false
self
9
null
AI Comparison
2
Hey guys, I did a little research paper comparing all the AI models to see if there was a real difference (yes, of course). I'd just like some feedback for it, I'm very new to writing publicly and could use some help. [https://medium.com/@MichaelsGuy/the-difference-between-ais-746f39c75569](https://medium.com/@Michaels...
2025-08-19T02:24:12
https://www.reddit.com/r/LocalLLaMA/comments/1mu64hr/ai_comparison/
Melodic-Emphasis-707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu64hr
false
null
t3_1mu64hr
/r/LocalLLaMA/comments/1mu64hr/ai_comparison/
false
false
self
2
null
Local model to generate STL files?
0
I'd like to generate STL files locally to assist with 3d printing fun toys for the kids. There are some paid services out there but I would prefer to keep everything local, is that even possible with current tech?
2025-08-19T02:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1mu62uo/local_model_to_generate_stl_files/
_cronic_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu62uo
false
null
t3_1mu62uo
/r/LocalLLaMA/comments/1mu62uo/local_model_to_generate_stl_files/
false
false
self
0
null
Data Safety with Llama Cpp Python
0
If I run an embedding model with llama-cpp-python, as long as I’m using the latest version of llama-cpp-python, it should be safe, right? For example, even if the model file itself was tampered with or injected with malicious code by someone (such as the people who quantized it), it still wouldn’t be able to upload my ...
2025-08-19T02:17:07
https://www.reddit.com/r/LocalLLaMA/comments/1mu5ytz/data_safety_with_llama_cpp_python/
Dazzling-Albatross72
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu5ytz
false
null
t3_1mu5ytz
/r/LocalLLaMA/comments/1mu5ytz/data_safety_with_llama_cpp_python/
false
false
self
0
null
Buying a second hand GPU
14
Thinking of buying a second hand GPU for local AI, and I’d like some advice. If the seller is okay with lending the GPU before selling it, what’s the best way to make sure it’s in good condition and will handle AI training/inference well? Would running common gaming/3D benchmarks be enough, and which ones would you r...
2025-08-19T02:16:20
https://www.reddit.com/r/LocalLLaMA/comments/1mu5y7q/buying_a_second_hand_gpu/
V0dros
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu5y7q
false
null
t3_1mu5y7q
/r/LocalLLaMA/comments/1mu5y7q/buying_a_second_hand_gpu/
false
false
self
14
null
Fine-tuning a Code Generation LLM on Bengali Dataset - Need Model & Resource Recommendations
2
I want to fine-tune a code generation LLM on a dataset I created that looks like this: id,instruction,response,test_list 1,প্রথম n সংখ্যার ক্ষুদ্রতম গুণিতক খুঁজে বের করার জন্য একটি ফাংশন লিখুন।,"def smallest_multiple(n): if (n<=2): return n i = n * 2 factors = [number for num...
2025-08-19T02:00:10
https://www.reddit.com/r/LocalLLaMA/comments/1mu5l1c/finetuning_a_code_generation_llm_on_bengali/
Background_Front5937
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu5l1c
false
null
t3_1mu5l1c
/r/LocalLLaMA/comments/1mu5l1c/finetuning_a_code_generation_llm_on_bengali/
false
false
self
2
null
Local STT → LLM: whisper.cpp + llama.cpp (no cloud, no keys)
1
[removed]
2025-08-19T01:57:42
https://www.reddit.com/r/LocalLLaMA/comments/1mu5j1l/local_stt_llm_whispercpp_llamacpp_no_cloud_no_keys/
Eastern_Strategy_932
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu5j1l
false
null
t3_1mu5j1l
/r/LocalLLaMA/comments/1mu5j1l/local_stt_llm_whispercpp_llamacpp_no_cloud_no_keys/
false
false
self
1
null
llama.cpp running slower than ollama?
4
Hi everyone! Lately i've been trying to make the jump from ollama to llama.cpp but I've been running into some issues. The biggest one being that models are for some reason running *much* *slower* in llama.cpp than they are in ollama (Ollama: 20+ tokens per second vs llama.cpp 1-3 tokens per second). I think this is ...
2025-08-19T01:30:33
https://www.reddit.com/r/LocalLLaMA/comments/1mu4wvl/llamacpp_running_slower_than_ollama/
Sharp-Strawberry8911
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu4wvl
false
null
t3_1mu4wvl
/r/LocalLLaMA/comments/1mu4wvl/llamacpp_running_slower_than_ollama/
false
false
self
4
null
We're Updating the Wiki To Be More Current, And We Want Your Feedback
68
The r/LocalLLaMA subreddit has long had a wiki: [https://www.reddit.com/r/LocalLLaMA/wiki/wiki/](https://www.reddit.com/r/LocalLLaMA/wiki/wiki/). However, the wiki hadn't been updated in a year or two (it still was mainly focused on LLaMA 2)! So we renovated the FAQ, Resources, and Models sections to reflect the prese...
2025-08-19T01:01:14
https://www.reddit.com/r/LocalLLaMA/comments/1mu49q3/were_updating_the_wiki_to_be_more_current_and_we/
N8Karma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu49q3
false
null
t3_1mu49q3
/r/LocalLLaMA/comments/1mu49q3/were_updating_the_wiki_to_be_more_current_and_we/
false
false
self
68
{'enabled': False, 'images': [{'id': 'huAqX1NBS4qSJvNdvrfFdjyVlc903dmo6z6Xw6Vm7_0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/huAqX1NBS4qSJvNdvrfFdjyVlc903dmo6z6Xw6Vm7_0.png?width=108&crop=smart&auto=webp&s=4c5c06fdf24f1da53a518e0ac38a08134db07684', 'width': 108}, {'height': 108, 'url': 'h...
Why does Qwen3-Coder not work in Qwen-Code aka what's going on with tool calling?
17
These issues are driving me nuts. So, my config is with using llama.cpp. Let's assume that is a requirement because of the need to do partial offloading. Of course, we use the very latest from git. Same for qwen-code. We get a nice GGUF from [https://huggingface.co/unsloth/Qwen3-Coder-30B-A3B-Instruct-GGUF](https://h...
2025-08-19T00:40:48
https://www.reddit.com/r/LocalLLaMA/comments/1mu3tln/why_does_qwen3coder_not_work_in_qwencode_aka/
Pristine-Woodpecker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu3tln
false
null
t3_1mu3tln
/r/LocalLLaMA/comments/1mu3tln/why_does_qwen3coder_not_work_in_qwencode_aka/
false
false
self
17
{'enabled': False, 'images': [{'id': 'cy-9p63w74w2Wsh_XdxWUC4aNr1WfOGqoNbvrUXxtCo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cy-9p63w74w2Wsh_XdxWUC4aNr1WfOGqoNbvrUXxtCo.png?width=108&crop=smart&auto=webp&s=b858451b750eab889b9ebb40dc87b8742e42c132', 'width': 108}, {'height': 116, 'url': 'h...
Quantization breakthrough: 4x compression with <2% performance loss - looking for testers
0
Hey r/LocalLLaMA! I've been working on a quantization approach that might interest folks running models locally. **The Problem We've All Faced**: You want to run larger models locally, but: * INT4 quantization destroys performance * Good quantization tools are proprietary * Cross-language support is terrible * Edge d...
2025-08-19T00:36:27
https://www.reddit.com/r/LocalLLaMA/comments/1mu3q2c/quantization_breakthrough_4x_compression_with_2/
Silver_Raspberry_811
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu3q2c
false
null
t3_1mu3q2c
/r/LocalLLaMA/comments/1mu3q2c/quantization_breakthrough_4x_compression_with_2/
false
false
self
0
null
Best setup for local general LLM for M2 Air 8GB RAM?
2
Things change so fast, and it is hard to keep up. I’m wondering what the best setup for general LLM usage is for an M2 MacBook Air with 8GB of RAM. Is the local Jan model the best and easiest right now?
2025-08-19T00:19:45
https://www.reddit.com/r/LocalLLaMA/comments/1mu3cfr/best_setup_for_local_general_llm_for_m2_air_8gb/
Socratesticles_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu3cfr
false
null
t3_1mu3cfr
/r/LocalLLaMA/comments/1mu3cfr/best_setup_for_local_general_llm_for_m2_air_8gb/
false
false
self
2
null
Nice grok, nice
0
2025-08-19T00:12:24
https://i.redd.it/089i6wk6cvjf1.jpeg
gpu_mamba
i.redd.it
1970-01-01T00:00:00
0
{}
1mu361l
false
null
t3_1mu361l
/r/LocalLLaMA/comments/1mu361l/nice_grok_nice/
false
false
default
0
{'enabled': True, 'images': [{'id': '089i6wk6cvjf1', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/089i6wk6cvjf1.jpeg?width=108&crop=smart&auto=webp&s=0420e2aa2d2bb6611c7efef5c9a298e767c48f83', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/089i6wk6cvjf1.jpeg?width=216&crop=smart&auto=...
Nice grok, nice.
1
2025-08-19T00:11:37
https://i.redd.it/4u39rui1cvjf1.jpeg
tensorpool_tycho
i.redd.it
1970-01-01T00:00:00
0
{}
1mu35dj
false
null
t3_1mu35dj
/r/LocalLLaMA/comments/1mu35dj/nice_grok_nice/
false
false
default
1
{'enabled': True, 'images': [{'id': '4u39rui1cvjf1', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/4u39rui1cvjf1.jpeg?width=108&crop=smart&auto=webp&s=74d26f23e445c4d064d869e7a89bb81b37ec5a9b', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/4u39rui1cvjf1.jpeg?width=216&crop=smart&auto=...
An Alternative to Text-to-SQL
7
We quickly built pxt.retrieval_udf(), a feature that transforms any [Pixeltable](https://github.com/pixeltable/pixeltable) table into an AI-queryable tool for agentic workflows. Traditional RAG excels at unstructured data (at least Pixeltable does) but struggles with structured information. Now your AI agents can que...
2025-08-18T23:59:34
https://i.redd.it/8vca849s9vjf1.png
Norqj
i.redd.it
1970-01-01T00:00:00
0
{}
1mu2v5g
false
null
t3_1mu2v5g
/r/LocalLLaMA/comments/1mu2v5g/an_alternative_to_texttosql/
false
false
default
7
{'enabled': True, 'images': [{'id': '8vca849s9vjf1', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/8vca849s9vjf1.png?width=108&crop=smart&auto=webp&s=2d9aac84c2ed66361f9b87dbb456df2d57e7d3b9', 'width': 108}, {'height': 234, 'url': 'https://preview.redd.it/8vca849s9vjf1.png?width=216&crop=smart&auto=we...
How do I optimize my dual GPU set up consisting of 3070 mobile (8GB) + external GTX1080 (8GB)?
1
**System**: - 2021 MSI Laptop - Internal RTX 3070 mobile GPU (8GB) - External GTX 1080 GPU (8GB) connected thru Thunderbolt 4 **Goal**: I want to run models between 9GB - 15GB in size, preferably within LM Studio. Open to other engines / front end suggestions. **Issue**: Whole laptop crashes trying to load anything l...
2025-08-18T23:44:17
https://www.reddit.com/r/LocalLLaMA/comments/1mu2id1/how_do_i_optimize_my_dual_gpu_set_up_consisting/
sourpatchgrownadults
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu2id1
false
null
t3_1mu2id1
/r/LocalLLaMA/comments/1mu2id1/how_do_i_optimize_my_dual_gpu_set_up_consisting/
false
false
self
1
null
Need help deploying a model (offering $200)
10
Hey everyone! I'm trying to get a finetuned version of this model running at high speed for my app. I've: 1. Made a LoRA for `OpenGVLab/InternVL3-14B-Instruct` 2. Merged it with the base model 3. Quantized to AWQ 4. Deployed with LMDeploy However, the inference is slow, it's over a second for a simple prompt with a 40 t...
2025-08-18T23:37:16
https://www.reddit.com/r/LocalLLaMA/comments/1mu2ccx/need_help_deploying_a_model_offering_200/
909GagMan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu2ccx
false
null
t3_1mu2ccx
/r/LocalLLaMA/comments/1mu2ccx/need_help_deploying_a_model_offering_200/
false
false
self
10
{'enabled': False, 'images': [{'id': 'MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MV8WQnKGiSSypEI5QKJe4g08BAeccM6KobeueLMOJdY.png?width=108&crop=smart&auto=webp&s=a08158a2ec290c8157b492f314bfb148408be1fc', 'width': 108}, {'height': 121, 'url': 'h...
bilbo.high.reasoning.medium.mini.3lightbulbs.ultra
330
2025-08-18T22:47:26
https://i.redd.it/bfdlovjpvujf1.png
Comfortable-Rock-498
i.redd.it
1970-01-01T00:00:00
0
{}
1mu15vr
false
null
t3_1mu15vr
/r/LocalLLaMA/comments/1mu15vr/bilbohighreasoningmediummini3lightbulbsultra/
false
false
default
330
{'enabled': True, 'images': [{'id': 'bfdlovjpvujf1', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/bfdlovjpvujf1.png?width=108&crop=smart&auto=webp&s=c4b70cf410be6bd77cf08d544a21faec8d8bf79f', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/bfdlovjpvujf1.png?width=216&crop=smart&auto=we...
Anyone deploying this?
8
So by complete chance, I just found out about this: https://github.com/gpustack/llama-box This seems incredibly powerful and useful! I do plan to use LocalAI to build the backbone for my AI server - but damn, this is genuinely awesome. Had never heard of this before. Anyone have this deployed? What's your experience ...
2025-08-18T22:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1mu0gsq/anyone_deploying_this/
IngwiePhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu0gsq
false
null
t3_1mu0gsq
/r/LocalLLaMA/comments/1mu0gsq/anyone_deploying_this/
false
false
self
8
{'enabled': False, 'images': [{'id': 'xexqCsRpRwBZZFFbkOJBA6i0CuT3hI7KrdH8UHLmlS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xexqCsRpRwBZZFFbkOJBA6i0CuT3hI7KrdH8UHLmlS8.png?width=108&crop=smart&auto=webp&s=de05678287f2062bf76340251ca4506ab18ae31c', 'width': 108}, {'height': 108, 'url': 'h...
Qwen Code CLI has generous FREE Usage option
178
For those who didn't know, Qwen-Code, which is a clone of Gemini CLI, has a good [free usage plan](https://github.com/QwenLM/qwen-code?tab=readme-ov-file#-free-options-available): - 2,000 requests per day with no token limits - 60 requests per minute rate limit It allows us to use Qwen3-Coder for FREE. Made a small video ...
2025-08-18T22:15:33
https://www.reddit.com/r/LocalLLaMA/comments/1mu0djr/qwen_code_cli_has_generous_free_usage_option/
NoobMLDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu0djr
false
null
t3_1mu0djr
/r/LocalLLaMA/comments/1mu0djr/qwen_code_cli_has_generous_free_usage_option/
false
false
self
178
null
GPT-OSS 20B New Language Fine Tuning
7
Hi. Does anyone have experience fine-tuning GPT-OSS 20B to think and respond in a new language that it originally barely knew (although it had approximately 2 letters per token)? For those who have, I would like to know how bad it was before your fine-tune, how good it became, and what you would recommend. How big of a dat...
2025-08-18T22:10:24
https://www.reddit.com/r/LocalLLaMA/comments/1mu08pc/gptoss_20b_new_language_fine_tuning/
AustinFirstAndOnly
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mu08pc
false
null
t3_1mu08pc
/r/LocalLLaMA/comments/1mu08pc/gptoss_20b_new_language_fine_tuning/
false
false
self
7
null