Dataset schema (column, dtype, observed range):

title        stringlengths        1 – 300
score        int64                0 – 8.54k
selftext     stringlengths        0 – 41.5k
created      timestamp[ns]date    2023-04-01 04:30:41 – 2026-03-04 02:14:14
url          stringlengths        0 – 878
author       stringlengths        3 – 20
domain       stringlengths        0 – 82
edited       timestamp[ns]date    1970-01-01 00:00:00 – 2026-02-19 14:51:53
gilded       int64                0 – 2
gildings     stringclasses        7 values
id           stringlengths        7 – 7
locked       bool                 2 classes
media        stringlengths        646 – 1.8k
name         stringlengths        10 – 10
permalink    stringlengths        33 – 82
spoiler      bool                 2 classes
stickied     bool                 2 classes
thumbnail    stringlengths        4 – 213
ups          int64                0 – 8.54k
preview      stringlengths        301 – 5.01k
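The records below are flattened, one field value per line, in the column order given by the schema. A minimal sketch of regrouping them into per-post dicts (the `rows_from_flat` helper and its example values are hypothetical, assuming the fields arrive as a flat sequence in exactly the schema's order):

```python
# Column order, matching the schema above.
COLUMNS = [
    "title", "score", "selftext", "created", "url", "author", "domain",
    "edited", "gilded", "gildings", "id", "locked", "media", "name",
    "permalink", "spoiler", "stickied", "thumbnail", "ups", "preview",
]

def rows_from_flat(values):
    """Group a flat sequence of field values into one dict per record."""
    n = len(COLUMNS)
    return [dict(zip(COLUMNS, values[i:i + n])) for i in range(0, len(values), n)]

# Hypothetical example: a single abbreviated record.
records = rows_from_flat(["GLM 4.5 support is landing in llama.cpp", 216] + [None] * 18)
```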
[Feedback Request] AI-Powered Conversational Forms – Does This Solve a Real Problem?
1
[removed]
2025-07-29T09:32:19
https://www.reddit.com/r/LocalLLaMA/comments/1mc6xzj/feedback_request_aipowered_conversational_forms/
Franqk_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc6xzj
false
null
t3_1mc6xzj
/r/LocalLLaMA/comments/1mc6xzj/feedback_request_aipowered_conversational_forms/
false
false
self
1
null
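Note that every record's `edited` field in this dump reads `1970-01-01T00:00:00`: the Unix epoch is being used as a sentinel for "never edited". A small sketch of handling that when parsing the ISO-format timestamps (the helper name is hypothetical):

```python
from datetime import datetime

EPOCH = datetime(1970, 1, 1)

def parse_edited(value):
    """Return the edit time as a datetime, or None for the epoch sentinel."""
    ts = datetime.fromisoformat(value)
    return None if ts == EPOCH else ts

parse_edited("1970-01-01T00:00:00")   # never edited -> None
parse_edited("2025-07-29T09:32:19")   # a real edit timestamp
```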
Help Us Build the Ultimate Open LLM Inference Benchmark Platform!
1
[removed]
2025-07-29T09:16:12
https://i.redd.it/47f2v71q5sff1.png
batuhanaktass
i.redd.it
1970-01-01T00:00:00
0
{}
1mc6oup
false
null
t3_1mc6oup
/r/LocalLLaMA/comments/1mc6oup/help_us_build_the_ultimate_open_llm_inference/
false
false
default
1
{'enabled': True, 'images': [{'id': '47f2v71q5sff1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/47f2v71q5sff1.png?width=108&crop=smart&auto=webp&s=f46ba6e4faf97df1282f44dd9dc52f8e53fee94b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/47f2v71q5sff1.png?width=216&crop=smart&auto=web...
Help! Open Interpreter not printing the response in console
3
Hello, I am running a local llama.cpp in server mode, with the model MythoMax-L2-13B.Q4\_K\_M. And I am having problems that neither me nor the 4.1 model of ChatGPT can solve. I am very new to everything; LLM, Llama, coding/developing/scripting and I am doing my best to learn, please be kind, I am (most likely) not dum...
2025-07-29T09:08:03
https://www.reddit.com/r/LocalLLaMA/comments/1mc6kad/help_open_interpreter_not_printing_the_response/
Jack_Blade281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc6kad
false
null
t3_1mc6kad
/r/LocalLLaMA/comments/1mc6kad/help_open_interpreter_not_printing_the_response/
false
false
self
3
null
GLM 4.5 support is landing in llama.cpp
216
2025-07-29T08:59:17
https://github.com/ggml-org/llama.cpp/pull/14939
Pristine-Woodpecker
github.com
1970-01-01T00:00:00
0
{}
1mc6fbp
false
null
t3_1mc6fbp
/r/LocalLLaMA/comments/1mc6fbp/glm_45_support_is_landing_in_llamacpp/
false
false
default
216
{'enabled': False, 'images': [{'id': 'Ka9QVAcRJGESEMrlzDl1QNhfW_eU_9R3c7_351Wi7Qo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ka9QVAcRJGESEMrlzDl1QNhfW_eU_9R3c7_351Wi7Qo.png?width=108&crop=smart&auto=webp&s=7649d767f799c1e6b81af747ef3aed21648a9037', 'width': 108}, {'height': 108, 'url': 'h...
Something lightweight: a LLM simulation of Bernie Sanders
56
Light-hearted, too. Don't take it too seriously!
2025-07-29T08:55:50
https://huggingface.co/ivoras/bernie0.1
ivoras
huggingface.co
1970-01-01T00:00:00
0
{}
1mc6dfx
false
null
t3_1mc6dfx
/r/LocalLLaMA/comments/1mc6dfx/something_lightweight_a_llm_simulation_of_bernie/
false
false
default
56
{'enabled': False, 'images': [{'id': 'zuODziM7CrMslU2B8jubYzbO6D1cnS17Ye165GM5CsY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zuODziM7CrMslU2B8jubYzbO6D1cnS17Ye165GM5CsY.png?width=108&crop=smart&auto=webp&s=2195a6491e60c2b5e1f156d2b6b2b6724700da2b', 'width': 108}, {'height': 116, 'url': 'h...
New Benchmark - FamilyBench - Test models ability to understand complex tree type relationship and reason on massive context. Immune to contamination. GML 4.5 64.02%, Gemini 2.5 pro 81,48%.
72
Hello, This is a new **opensource** project, a benchmark that test model ability to understand complex tree-like relationship in a family tree across a massive context. The idea is to have a python program that generate a tree and can use the tree structure to generate question about it. Then you can have a text...
2025-07-29T08:46:06
https://www.reddit.com/r/LocalLLaMA/comments/1mc687c/new_benchmark_familybench_test_models_ability_to/
Orolol
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc687c
false
null
t3_1mc687c
/r/LocalLLaMA/comments/1mc687c/new_benchmark_familybench_test_models_ability_to/
false
false
self
72
{'enabled': False, 'images': [{'id': 'XzNTYqpi2kWSX8IO02RdlxdKhXjEpxcV3AM1Unc54gI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XzNTYqpi2kWSX8IO02RdlxdKhXjEpxcV3AM1Unc54gI.png?width=108&crop=smart&auto=webp&s=fcc6e9e77205be821c357ea312fd60ae612baf42', 'width': 108}, {'height': 108, 'url': 'h...
Told Qwen3 1.7b (thinking) to make a black hole simulation
44
2025-07-29T08:38:35
https://v.redd.it/e5xhwj4azrff1
Gold_Bar_4072
v.redd.it
1970-01-01T00:00:00
0
{}
1mc644b
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e5xhwj4azrff1/DASHPlaylist.mpd?a=1756370330%2CMGRhN2E1YTk4ZTM2YzM0OWY5OTBkZWQ1MWNlOTUyMzI3YzAxZmQxZjA1MjNhMTc3Yjg3NGUwMTU5OWQxYmM1ZA%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/e5xhwj4azrff1/DASH_720.mp4?source=fallback', 'ha...
t3_1mc644b
/r/LocalLLaMA/comments/1mc644b/told_qwen3_17b_thinking_to_make_a_black_hole/
false
false
https://external-preview…ffaab6f82ce0f664
44
{'enabled': False, 'images': [{'id': 'czA3NmhmNGF6cmZmMc2G3kvcSTmwLlHozFn9Fo3FdGcKq5G6N3unkM46E3K-', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/czA3NmhmNGF6cmZmMc2G3kvcSTmwLlHozFn9Fo3FdGcKq5G6N3unkM46E3K-.png?width=108&crop=smart&format=pjpg&auto=webp&s=d4f7b64c9c9249f426fe6264ebe0ca68c9cc...
Can you suggest a better WebUI program for textgen that has better memory management than Oobabooga?
7
2025-07-29T08:16:10
https://i.redd.it/6td8j8oqurff1.png
-Fibon4cci
i.redd.it
1970-01-01T00:00:00
0
{}
1mc5s4r
false
null
t3_1mc5s4r
/r/LocalLLaMA/comments/1mc5s4r/can_you_suggest_a_better_webui_program_for/
false
false
https://b.thumbs.redditm…C8EAbHzX8DHM.jpg
7
{'enabled': True, 'images': [{'id': '6Ut9VePHJUzFLTvEDM1K0LWAXqNdQjgjbFkzQ0DP9xg', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/6td8j8oqurff1.png?width=108&crop=smart&auto=webp&s=15089514a1f6bb2ae2d28bbca6f69f6e4015060c', 'width': 108}, {'height': 103, 'url': 'https://preview.redd.it/6td8j8oqurff1.png...
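The `media`, `gildings`, and `preview` fields are stringified Python dict literals (single quotes, `True`/`False`), not JSON, so `json.loads` rejects them; `ast.literal_eval` parses them safely. A sketch using an abbreviated value of the kind shown above (the real values in this dump are much longer, and this sample `id` is made up):

```python
import ast

# Abbreviated 'preview'-style value; note single quotes and Python booleans.
raw = "{'enabled': True, 'images': [{'id': 'abc123', 'resolutions': [{'height': 51, 'width': 108}]}]}"

preview = ast.literal_eval(raw)   # json.loads would fail on this string
first_image = preview["images"][0]
```

Unlike `eval`, `ast.literal_eval` only accepts literal structures, so it cannot execute code embedded in an untrusted field.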
This year’s best open-source models and most cost-effective models
110
**GLM 4.5 and GLM-4.5-AIR** The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters...
2025-07-29T08:09:33
https://www.reddit.com/r/LocalLLaMA/comments/1mc5oh2/this_years_best_opensource_models_and_most/
Apart-River475
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc5oh2
false
null
t3_1mc5oh2
/r/LocalLLaMA/comments/1mc5oh2/this_years_best_opensource_models_and_most/
false
false
https://b.thumbs.redditm…bLSB6jfU2bbs.jpg
110
null
Best open source voice cloning today, with hours of reference?
12
I’ve got more than 100 hours of clean, studio-grade speech for a character, and I’d like to explore what the SOTA is for open source voice cloning or voice changing. Is the SOTA for large datasets still RVC, or are there better solutions now? I have a RTX 5090 with 32GB VRAM.
2025-07-29T08:00:51
https://www.reddit.com/r/LocalLLaMA/comments/1mc5jsx/best_open_source_voice_cloning_today_with_hours/
goldcakes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc5jsx
false
null
t3_1mc5jsx
/r/LocalLLaMA/comments/1mc5jsx/best_open_source_voice_cloning_today_with_hours/
false
false
self
12
null
Finetuning Script for Voxtral
36
We put together a small repo to fine‑tune **Mistral’s Voxtral (3B)** for **transcription** using Huggingface**.** We could not find a public finetuning/ training script yet, so we think this could be interesting for the community.
2025-07-29T07:55:38
https://github.com/Innovative-Digitale-Medizin-IDM/voxtral-finetune
DistributionLucky763
github.com
1970-01-01T00:00:00
0
{}
1mc5gv1
false
null
t3_1mc5gv1
/r/LocalLLaMA/comments/1mc5gv1/finetuning_script_for_voxtral/
false
false
default
36
{'enabled': False, 'images': [{'id': 'qSAEqXyoxxbanfTSWYIWJLhY78IXuxCg8g5grrz5YQg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qSAEqXyoxxbanfTSWYIWJLhY78IXuxCg8g5grrz5YQg.png?width=108&crop=smart&auto=webp&s=8fa9e2af93fe23af7ed5ae8ef0282b5932cf5efa', 'width': 108}, {'height': 108, 'url': 'h...
Single-File Qwen3 Inference in Pure CUDA C
71
One .cu file holds everything necessary for inference. There are no external libraries; only the CUDA runtime is included. Everything, from tokenization right down to the kernels, is packed into this single file. It works with the Qwen3 0.6B model GGUF at full precision. On an RTX 3060, it generates appr. \~32 tokens ...
2025-07-29T07:50:39
https://www.reddit.com/r/LocalLLaMA/comments/1mc5e54/singlefile_qwen3_inference_in_pure_cuda_c/
Awkward_Click6271
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc5e54
false
null
t3_1mc5e54
/r/LocalLLaMA/comments/1mc5e54/singlefile_qwen3_inference_in_pure_cuda_c/
false
false
self
71
{'enabled': False, 'images': [{'id': 'Ca9ALt8YV5QdmnvRodoQ84i7fYyDFXG0LHBMr79BdEo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ca9ALt8YV5QdmnvRodoQ84i7fYyDFXG0LHBMr79BdEo.png?width=108&crop=smart&auto=webp&s=fd05cb170e306c505c4104b96edb3c670cf24b48', 'width': 108}, {'height': 108, 'url': 'h...
How do you keep yourself updated?
1
Busy with some projects, so I haven't checked out the LLM space in a little while. I come back, and there are 200-something Arxiv papers I need to read, dozens of new models, github repos to try out etc etc. How do you keep yourself updated? This is nuts. PS: just had an idea for a pipeline from Arxiv PDFs --> Note...
2025-07-29T07:21:21
https://www.reddit.com/r/LocalLLaMA/comments/1mc4y83/how_do_you_keep_yourself_updated/
noellarkin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc4y83
false
null
t3_1mc4y83
/r/LocalLLaMA/comments/1mc4y83/how_do_you_keep_yourself_updated/
false
false
self
1
null
Need help with ManyChat + ChatGPT API: bot replies with user message instead of GPT
1
[removed]
2025-07-29T06:19:55
https://www.reddit.com/r/LocalLLaMA/comments/1mc3zh2/need_help_with_manychat_chatgpt_api_bot_replies/
Content_Astronaut_88
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc3zh2
false
null
t3_1mc3zh2
/r/LocalLLaMA/comments/1mc3zh2/need_help_with_manychat_chatgpt_api_bot_replies/
false
false
self
1
null
Any interesting local LLM options for a home server that's about to have 2x mi210 GPUs?
0
I'm going to put 2x mi210 GPUs into my home server this week and I havent ran local LLMs in this setting before. Any recommendations on good LLMs to use with mi210s? Will be a bit capped for the moment at 32GB of DDR4 and only one of the GPUs on PCIE 4.0
2025-07-29T04:52:20
https://www.reddit.com/r/LocalLLaMA/comments/1mc2ibo/any_interesting_local_llm_options_for_a_home/
totemoheta
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc2ibo
false
null
t3_1mc2ibo
/r/LocalLLaMA/comments/1mc2ibo/any_interesting_local_llm_options_for_a_home/
false
false
self
0
null
Qwen3 235B Thinking 2507 is now the leading open weights model, beating DeepSeek R1 0528 on the Artificial Analysis Intelligence Index
1
[removed]
2025-07-29T04:39:44
https://www.reddit.com/r/LocalLLaMA/comments/1mc2a9b/qwen3_235b_thinking_2507_is_now_the_leading_open/
trasnox3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc2a9b
false
null
t3_1mc2a9b
/r/LocalLLaMA/comments/1mc2a9b/qwen3_235b_thinking_2507_is_now_the_leading_open/
false
false
https://b.thumbs.redditm…O3XVYKF_4oaM.jpg
1
null
~2–3 x Mac Studios M3 Ultra (512GB) Cluster for Large Model Inference?
1
Has anyone connected 2–3 Mac Studio M3 Ultra machines (512GB RAM, Thunderbolt 5 / 80 Gbps) into a distributed AI cluster? I’m looking for benchmarks or evidence of running large models (e.g., Kimi K2, Qwen 3 coder) across multiple units. Found nothing on YouTube. Has this been done, or is it unexplored territory?
2025-07-29T04:31:38
https://www.reddit.com/r/LocalLLaMA/comments/1mc253f/23_x_mac_studios_m3_ultra_512gb_cluster_for_large/
No-Copy8702
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc253f
false
null
t3_1mc253f
/r/LocalLLaMA/comments/1mc253f/23_x_mac_studios_m3_ultra_512gb_cluster_for_large/
false
false
self
1
null
Vision agent for AFK gains?
1
I don't remember what it's called because I'm sleep deprived rn, but I remember seeing a fairly new thing come out recently that was essentially a vision model watching your screen for something to happen and then it could react for you in some minimal ways. Has anyone set up one of those to run with instructions to s...
2025-07-29T04:28:49
https://www.reddit.com/r/LocalLLaMA/comments/1mc239f/vision_agent_for_afk_gains/
Shadow-Amulet-Ambush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc239f
false
null
t3_1mc239f
/r/LocalLLaMA/comments/1mc239f/vision_agent_for_afk_gains/
false
false
self
1
null
Best Image/Stable Diffusion model that can work with MLX?
8
Hey y'all, have this 512gb mac ultra Ive been enjoying running LLMs for local text and code generation. I wanna dabble into image generation, specifically thinking of feeding my cat's photos to a model and have it augment it into artistic styles/ place my cat on planets etc. Whats a good model available to do this? P...
2025-07-29T04:27:41
https://www.reddit.com/r/LocalLLaMA/comments/1mc22jg/best_imagestable_diffusion_model_that_can_work/
Amazing_Trace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc22jg
false
null
t3_1mc22jg
/r/LocalLLaMA/comments/1mc22jg/best_imagestable_diffusion_model_that_can_work/
false
false
self
8
null
First time setting up a local LLM, looking for model suggestions to create Anki formatted flashcards
3
I'm a student studying Anatomy, Physiology, and Medical Terminology. I want to generate Anki flashcards from PDF paragraphs and think a local LLM could save me a lot of time. Any advice on models or setups that work well for this use case would be appreciated. Thanks!
2025-07-29T03:25:58
https://www.reddit.com/r/LocalLLaMA/comments/1mc0vyb/first_time_setting_up_a_local_llm_looking_for/
HighLowMystery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc0vyb
false
null
t3_1mc0vyb
/r/LocalLLaMA/comments/1mc0vyb/first_time_setting_up_a_local_llm_looking_for/
false
false
self
3
null
SmallThinker Technical Report Release!
38
[https://arxiv.org/abs/2507.20984](https://arxiv.org/abs/2507.20984) **SmallThinker** is a family of on-device native **Mixture-of-Experts** language models specifically designed for efficient local deployment. With the constraints of limited computational power and memory capacity in mind, SmallThinker introduces no...
2025-07-29T03:12:12
https://www.reddit.com/r/LocalLLaMA/comments/1mc0m3e/smallthinker_technical_report_release/
Zealousideal_Bad_52
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mc0m3e
false
null
t3_1mc0m3e
/r/LocalLLaMA/comments/1mc0m3e/smallthinker_technical_report_release/
false
false
https://b.thumbs.redditm…DPqFYKHNtroQ.jpg
38
null
Best Coding LLM for
10
Hello Folks, With new open LLMs being released constantly, I’m starting to feel a bit behind, especially since most of them are pretty large. I have around 180 GB of NVIDIA GPU VRAM available and I’m looking for the best coding LLM to run locally with atleast 30K context window (input + output). My main focus is Java p...
2025-07-29T02:11:32
https://www.reddit.com/r/LocalLLaMA/comments/1mbzdx8/best_coding_llm_for/
PhysicsPast8286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbzdx8
false
null
t3_1mbzdx8
/r/LocalLLaMA/comments/1mbzdx8/best_coding_llm_for/
false
false
self
10
null
Asymmetric LLM relay architecture using dynamic symbolic obfuscation
1
[removed]
2025-07-29T02:09:01
https://www.reddit.com/r/LocalLLaMA/comments/1mbzc36/asymmetric_llm_relay_architecture_using_dynamic/
Ok_Trip_8955
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbzc36
false
null
t3_1mbzc36
/r/LocalLLaMA/comments/1mbzc36/asymmetric_llm_relay_architecture_using_dynamic/
false
false
self
1
null
covert encoding-decoding protocol between two LLMs
1
[removed]
2025-07-29T01:53:16
https://www.reddit.com/r/LocalLLaMA/comments/1mbz04f/covert_encodingdecoding_protocol_between_two_llms/
PresentationSad5387
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbz04f
false
null
t3_1mbz04f
/r/LocalLLaMA/comments/1mbz04f/covert_encodingdecoding_protocol_between_two_llms/
false
false
self
1
null
covert encoding-decoding protocol between two LLMs
1
[removed]
2025-07-29T01:51:01
https://www.reddit.com/r/LocalLLaMA/comments/1mbyyfu/covert_encodingdecoding_protocol_between_two_llms/
PresentationSad5387
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbyyfu
false
null
t3_1mbyyfu
/r/LocalLLaMA/comments/1mbyyfu/covert_encodingdecoding_protocol_between_two_llms/
false
false
self
1
null
Has anyone used PEZ or similar learned hard prompt methods for local LLMs?
5
I’m working on a local AI agent and wanted to move beyond hand-crafted prompts by optimizing them automatically. I initially looked into soft prompt tuning, but since I’m using quantized models (Qwen3-4B/8B Q8_0) through ollama and llama.cpp on a 3050 laptop GPU, I can’t access gradients directly from the model. That’...
2025-07-29T01:14:16
https://www.reddit.com/r/LocalLLaMA/comments/1mby6nd/has_anyone_used_pez_or_similar_learned_hard/
HadesTerminal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mby6nd
false
null
t3_1mby6nd
/r/LocalLLaMA/comments/1mby6nd/has_anyone_used_pez_or_similar_learned_hard/
false
false
self
5
null
Qwen3 235B 2507 adding its own questions to mine, and thinking despite being Instruct model?
19
Hey all, Have been slowly trying to build up my daily computer and getting more experienced with running local llm models before I go nuts on a dedicated box for me and the family. Wanted to try something a bit more up there (have been on Llama 3.3 70B Ablated for a while), so have been trying to run Qwen3-235B-2507 ...
2025-07-29T01:12:38
https://www.reddit.com/r/LocalLLaMA/comments/1mby5ct/qwen3_235b_2507_adding_its_own_questions_to_mine/
MrMattSz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mby5ct
false
null
t3_1mby5ct
/r/LocalLLaMA/comments/1mby5ct/qwen3_235b_2507_adding_its_own_questions_to_mine/
false
false
self
19
null
Can’t get continue.dev to index my codebase
1
I am using continue.dev in vscode, I have qwen2.5 coder configured to work in it. I cannot manage to have my codebase indexed, which is the whole purpose of using this. It seems like it should be simple, and allegedly it is supposed to work out of the box. But I’ve been troubleshooting since yesterday and I still c...
2025-07-29T01:02:06
https://www.reddit.com/r/LocalLLaMA/comments/1mbxx64/cant_get_continuedev_to_index_my_codebase/
SlimPerceptions
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbxx64
false
null
t3_1mbxx64
/r/LocalLLaMA/comments/1mbxx64/cant_get_continuedev_to_index_my_codebase/
false
false
self
1
null
Suggestions to fine tune Gemma 3N E4B or similar model for diagnosis and troubleshooting
1
Looking for Suggestions to fine tune Gemma 3N E4B or similar model for diagnosis and troubleshooting of products lets say mobile phones for customers, best practices to format synthetic data in particular way for example if data is not working LLM should diagnose step by step and suggest solution.
2025-07-29T00:28:19
https://www.reddit.com/r/LocalLLaMA/comments/1mbx6zk/suggestions_to_fine_tune_gemma_3n_e4b_or_similar/
Easy_Alps_1162
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbx6zk
false
null
t3_1mbx6zk
/r/LocalLLaMA/comments/1mbx6zk/suggestions_to_fine_tune_gemma_3n_e4b_or_similar/
false
false
self
1
null
“This step is necessary to prove that I am not a bot” LOL
3
We knew those tests were BS: “The agent provides real-time narration of its actions, stating "The link is inserted, so now I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove I'm not a bot and proceed with the action." https://arstechnica.com/info...
2025-07-29T00:14:04
https://www.reddit.com/r/LocalLLaMA/comments/1mbwvve/this_step_is_necessary_to_prove_that_i_am_not_a/
Glass-Garbage4818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbwvve
false
null
t3_1mbwvve
/r/LocalLLaMA/comments/1mbwvve/this_step_is_necessary_to_prove_that_i_am_not_a/
false
false
self
3
{'enabled': False, 'images': [{'id': 'sXVN9yRdBN2xPTHHPZeKJP0FAizBZKiIe7M68JyBvqY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/sXVN9yRdBN2xPTHHPZeKJP0FAizBZKiIe7M68JyBvqY.jpeg?width=108&crop=smart&auto=webp&s=6fe50f25abd0aace1b9b4c4392c70d25338fbf87', 'width': 108}, {'height': 121, 'url': '...
$100K grants to accelerate open source agentic AI
1
[removed]
2025-07-28T23:48:12
https://www.reddit.com/r/LocalLLaMA/comments/1mbwb1x/100k_grants_to_accelerate_open_source_agentic_ai/
trasnox3r
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbwb1x
false
null
t3_1mbwb1x
/r/LocalLLaMA/comments/1mbwb1x/100k_grants_to_accelerate_open_source_agentic_ai/
false
false
self
1
null
How do you stay up to date in real time with new open-source AI models? What’s your monitoring routine?
1
[removed]
2025-07-28T23:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1mbwauv/how_do_you_stay_up_to_date_in_real_time_with_new/
Live-Efficiency-1378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbwauv
false
null
t3_1mbwauv
/r/LocalLLaMA/comments/1mbwauv/how_do_you_stay_up_to_date_in_real_time_with_new/
false
false
self
1
null
What is you « be up to date » routine for new ai models ?
1
[removed]
2025-07-28T23:42:44
https://www.reddit.com/r/LocalLLaMA/comments/1mbw6jh/what_is_you_be_up_to_date_routine_for_new_ai/
Live-Efficiency-1378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbw6jh
false
null
t3_1mbw6jh
/r/LocalLLaMA/comments/1mbw6jh/what_is_you_be_up_to_date_routine_for_new_ai/
false
false
self
1
null
How do you stay up to date in real time with new open-source AI models? What’s your monitoring routine?
1
[removed]
2025-07-28T23:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1mbvu5s/how_do_you_stay_up_to_date_in_real_time_with_new/
Live-Efficiency-1378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbvu5s
false
null
t3_1mbvu5s
/r/LocalLLaMA/comments/1mbvu5s/how_do_you_stay_up_to_date_in_real_time_with_new/
false
false
self
1
null
How do I train a good LLM on my company's doc in order to answer easy questions?
4
I work at a tiny hardware company that has a lot of products (legacy and new) which means a lot of doc. I'd like to have a LLM that is aware of our products and internal details, in order for less savvy employees to be able to get answers to questions like *"how do I work on product1's source code?" or "What is the ser...
2025-07-28T23:13:54
https://www.reddit.com/r/LocalLLaMA/comments/1mbviok/how_do_i_train_a_good_llm_on_my_companys_doc_in/
dtdisapointingresult
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbviok
false
null
t3_1mbviok
/r/LocalLLaMA/comments/1mbviok/how_do_i_train_a_good_llm_on_my_companys_doc_in/
false
false
self
4
null
Using Apple Intelligence as OpenAI / Ollama API
1
https://reddit.com/link/1mbvgdm/video/lksxirmo5pff1/player I extended [my work here](https://www.reddit.com/r/LocalLLaMA/comments/1jzuqpq/i_created_an_app_that_allows_you_use_openai_api/) to support Apple Intelligence models so it becomes OpenAI / Ollama Compatible. That means you can use it literally anywhere. Here...
2025-07-28T23:11:04
https://www.reddit.com/r/LocalLLaMA/comments/1mbvgdm/using_apple_intelligence_as_openai_ollama_api/
0ssamaak0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbvgdm
false
null
t3_1mbvgdm
/r/LocalLLaMA/comments/1mbvgdm/using_apple_intelligence_as_openai_ollama_api/
false
false
self
1
{'enabled': False, 'images': [{'id': 'vnHC97jB0olEfLpOcj5aOXLU8dPYUWK5cdznKZy-1vQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vnHC97jB0olEfLpOcj5aOXLU8dPYUWK5cdznKZy-1vQ.png?width=108&crop=smart&auto=webp&s=5dfd94c7b8c32fc476cb450249ff47676d36e890', 'width': 108}, {'height': 108, 'url': 'h...
its getting comical
1,034
2025-07-28T23:09:30
https://i.redd.it/txsukljc5pff1.png
Weary-Wing-6806
i.redd.it
1970-01-01T00:00:00
0
{}
1mbvf2z
false
null
t3_1mbvf2z
/r/LocalLLaMA/comments/1mbvf2z/its_getting_comical/
false
false
default
1,034
{'enabled': True, 'images': [{'id': 'txsukljc5pff1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/txsukljc5pff1.png?width=108&crop=smart&auto=webp&s=66753ef377dde5550d636917de9e12b2834fb31c', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/txsukljc5pff1.png?width=216&crop=smart&auto=we...
I want to use llama 7b to check if a 5-7 sentence paragraph contains a given subject, what's the minimum GPU I need?
2
Is a 5080 enough?
2025-07-28T22:44:23
https://www.reddit.com/r/LocalLLaMA/comments/1mbutu4/i_want_to_use_llama_7b_to_check_if_a_57_sentence/
math_calculus1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbutu4
false
null
t3_1mbutu4
/r/LocalLLaMA/comments/1mbutu4/i_want_to_use_llama_7b_to_check_if_a_57_sentence/
false
false
self
2
null
Techniques to Inject Emotion in Responses
1
Having only focused on LLM applications around utility (home assistant, scheduling, et.) I have recently been experimenting a lot with AI companions. How do people introduce emotions or response modifiers through a conversation to make it seem more ‘real’ I have tried the following with mixed results. Conversation ...
2025-07-28T22:28:37
https://www.reddit.com/r/LocalLLaMA/comments/1mbugfr/techniques_to_inject_emotion_in_responses/
Strange_Test7665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbugfr
false
null
t3_1mbugfr
/r/LocalLLaMA/comments/1mbugfr/techniques_to_inject_emotion_in_responses/
false
false
self
1
null
Best local LLM for iterative story writing
6
I’m helping set up a local LLM on a system with 96 GiB of VRAM, and the main requirement is the model be good at uncensored iterative story writing. By that I mean it can be given a prompt or segment of an existing story, it will write a few paragraphs, and then it will stop for direction (possibly with some suggestion...
2025-07-28T22:15:27
https://www.reddit.com/r/LocalLLaMA/comments/1mbu532/best_local_llm_for_iterative_story_writing/
ResNullum
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbu532
false
null
t3_1mbu532
/r/LocalLLaMA/comments/1mbu532/best_local_llm_for_iterative_story_writing/
false
false
self
6
{'enabled': False, 'images': [{'id': 'LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LyIuAzRFMRc8_5xZDB_kXALqiFyCEyjDkgskH6lqUL8.png?width=108&crop=smart&auto=webp&s=f4f858446e7404e9efcf8885fe8dd7db7220d78e', 'width': 108}, {'height': 116, 'url': 'h...
[Guide] Running GLM 4.5 as Instruct model in vLLM (with Tool Calling)
13
(Note: should work with the Air version too) Earlier I was trying to run the new GLM 4.5 with tool calling, but installing with the latest vLLM does NOT work. You have to build from source: git clone https://github.com/vllm-project/vllm.git cd vllm python use_existing_torch.py pip install -r requireme...
2025-07-28T21:48:55
https://www.reddit.com/r/LocalLLaMA/comments/1mbthgr/guide_running_glm_45_as_instruct_model_in_vllm/
random-tomato
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbthgr
false
null
t3_1mbthgr
/r/LocalLLaMA/comments/1mbthgr/guide_running_glm_45_as_instruct_model_in_vllm/
false
false
self
13
null
qwen3 2507 thinking vs deepseek r1 0528
26
https://preview.redd.it/… your own tests?
2025-07-28T21:42:01
https://www.reddit.com/r/LocalLLaMA/comments/1mbtb3t/qwen3_2507_thinking_vs_deepseek_r1_0528/
GenLabsAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbtb3t
false
null
t3_1mbtb3t
/r/LocalLLaMA/comments/1mbtb3t/qwen3_2507_thinking_vs_deepseek_r1_0528/
false
false
https://b.thumbs.redditm…80Qn-RBnYVBw.jpg
26
null
Getting a consistent style over multiple sessions when you don't have the original prompt
0
Like the title says. I was comparing the output of both Gemini and Claude on a site and it got an error and the first part of the conversation got deleted. So I don't have access to the original prompt (and i managed to edit the document that had a copy of it). This site have a limitation where it can only show so muc...
2025-07-28T21:33:46
https://www.reddit.com/r/LocalLLaMA/comments/1mbt3ji/getting_a_consistent_style_over_multiple_sessions/
Cane_P
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbt3ji
false
null
t3_1mbt3ji
/r/LocalLLaMA/comments/1mbt3ji/getting_a_consistent_style_over_multiple_sessions/
false
false
self
0
null
So you all loved my open-source voice AI when I first showed it off - I officially got response times to under 2 seconds AND it now fits all within 9 gigs of VRAM! Open Source Code included!
208
Now I got A LOT of messages when I first showed it off so I decided to spend some time to put together a full video on the high level designs behind it and also why I did it in the first place - [https://www.youtube.com/watch?v=bE2kRmXMF0I](https://www.youtube.com/watch?v=bE2kRmXMF0I) I’ve also open sourced my short /...
2025-07-28T21:29:57
https://v.redd.it/qvwxsxvrnoff1
RoyalCities
/r/LocalLLaMA/comments/1mbt030/so_you_all_loved_my_opensource_voice_ai_when_i/
1970-01-01T00:00:00
0
{}
1mbt030
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qvwxsxvrnoff1/DASHPlaylist.mpd?a=1756459806%2CY2I5MTg0NTJkODhiOTZkYTQ3YzU5OTY4YzhlZmQ1ZWYzMmE2NDYwMGEyMjAyODJmNmI2MGFhMjZiNjBhMmMyNw%3D%3D&v=1&f=sd', 'duration': 133, 'fallback_url': 'https://v.redd.it/qvwxsxvrnoff1/DASH_1080.mp4?source=fallback', '...
t3_1mbt030
/r/LocalLLaMA/comments/1mbt030/so_you_all_loved_my_opensource_voice_ai_when_i/
false
false
default
208
{'enabled': False, 'images': [{'id': 'OXE4eGJ6dnJub2ZmMVoq0aqga1IYNIs3Jd3_SCGdNGhWHEMs7cFuwxs7Yua2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OXE4eGJ6dnJub2ZmMVoq0aqga1IYNIs3Jd3_SCGdNGhWHEMs7cFuwxs7Yua2.png?width=108&crop=smart&format=pjpg&auto=webp&s=0e7711e20c3668e7de723d1329e83672e0f85...
Enterprise Local AI Implementation for Small user base
1
I’m currently working on purchasing a rack-mount LLM server to support at least 5 users running a custom langGraph agentic RAG workflow. I was planning to pick up this server to support the use case and wanted to know if anyone had any opinions on how to achieve comparable or better performance for a small enterprise u...
2025-07-28T21:26:50
https://www.reddit.com/r/LocalLLaMA/comments/1mbsxb3/enterprise_local_ai_implementation_for_small_user/
DerpDeath
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbsxb3
false
null
t3_1mbsxb3
/r/LocalLLaMA/comments/1mbsxb3/enterprise_local_ai_implementation_for_small_user/
false
false
self
1
{'enabled': False, 'images': [{'id': '17Qca_OMv_FHB2NIcaWcOs8RqktOnh_HNnpYTIDHYYM', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/17Qca_OMv_FHB2NIcaWcOs8RqktOnh_HNnpYTIDHYYM.png?width=108&crop=smart&auto=webp&s=3e30c563d3124989fb03b0a3fe7034cb4c0c2fb5', 'width': 108}, {'height': 128, 'url': 'h...
Llama.cpp Android cutting off responses
1
I am running Llama.cpp's Android wrapper, and i keep running into this issue. No matter how many things I've tried, the responses keep getting cut off. It is some kind of max token issue (when input is big, output gets cut off quicker and vice versa.) Needless to say, id love to be able to use it and get responses long...
2025-07-28T21:10:09
https://www.reddit.com/r/LocalLLaMA/comments/1mbsi46/llamacpp_android_cutting_off_responses/
Worth_Ad9031
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbsi46
false
null
t3_1mbsi46
/r/LocalLLaMA/comments/1mbsi46/llamacpp_android_cutting_off_responses/
false
false
self
1
null
What do do with 88GB Vram GPU server
0
Have picked up a piece of redundant hardware, Gigabyte GPU server with 8x2080ti in it, 2x Xeon 8160 and 384GB of ram. It was a freebie so I have not spent anything on it... yet. I have played with local models on PC I am on now, with has RTX 3090 in it. Trying to work out the pros and cons, 1st of all it is a noisy ...
2025-07-28T20:58:08
https://www.reddit.com/r/LocalLLaMA/comments/1mbs6mj/what_do_do_with_88gb_vram_gpu_server/
biffa773
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbs6mj
false
null
t3_1mbs6mj
/r/LocalLLaMA/comments/1mbs6mj/what_do_do_with_88gb_vram_gpu_server/
false
false
self
0
null
8600G / 760M llama-bench with Gemma 3 (4, 12, 27B), Mistral Small, Qwen 3 (4, 8, 14, 32B) and Qwen 3 MoE 30B-A3B
55
I couldn't find any extensive benchmarks when researching this APU, so I'm sharing my findings with the community. The benchmarks with the iGPU 760M results \~35% faster than the CPU alone (see the tests below, with ngl 0, no layers offloaded to the GPU), the prompt processing is also faster, and it appears to produce...
2025-07-28T20:55:42
https://www.reddit.com/r/LocalLLaMA/comments/1mbs4dw/8600g_760m_llamabench_with_gemma_3_4_12_27b/
SunRayWhisper
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbs4dw
false
null
t3_1mbs4dw
/r/LocalLLaMA/comments/1mbs4dw/8600g_760m_llamabench_with_gemma_3_4_12_27b/
false
false
self
55
null
What happened to SomeOddCodeGuy?
1
[removed]
2025-07-28T20:02:45
https://www.reddit.com/r/LocalLLaMA/comments/1mbqpu7/what_happened_to_someoddcodeguy/
Nindaleth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbqpu7
false
null
t3_1mbqpu7
/r/LocalLLaMA/comments/1mbqpu7/what_happened_to_someoddcodeguy/
false
false
self
1
null
How do I calculate hardware needs?
1
Long story short I've been tasked with identifying hosting options for a project, and both cloud hosting and buying hardware are available. I've been able to locate information on how much VRAM is needed to host models of given parameter counts and the rough cost of utilizing them for vanilla activity. (Parameter count...
2025-07-28T19:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1mbq7xx/how_do_i_calculate_hardware_needs/
SkeletonShips
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbq7xx
false
null
t3_1mbq7xx
/r/LocalLLaMA/comments/1mbq7xx/how_do_i_calculate_hardware_needs/
false
false
self
1
null
Direct access(🇨🇳) original GLM-4.5 is insane. Outperforms Frontier Models opus 4, o3-pro, & grok 4 in Coding. Just one-shotted* my chess LLM & Veo 3 free unlimited
6
2025-07-28T19:24:17
https://i.redd.it/f9caaoek1off1.png
balianone
i.redd.it
1970-01-01T00:00:00
0
{}
1mbppbg
false
null
t3_1mbppbg
/r/LocalLLaMA/comments/1mbppbg/direct_access_original_glm45_is_insane/
false
false
default
6
{'enabled': True, 'images': [{'id': 'f9caaoek1off1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/f9caaoek1off1.png?width=108&crop=smart&auto=webp&s=72d8350bf4174621a82de22f6bdbdac93d9e18e8', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/f9caaoek1off1.png?width=216&crop=smart&auto=web...
Everyone is struggling about documentation
1
Everyone is struggling looking at documentation, and I struggled writing this a whole week and some findings. wanted to share what I learned. Two weeks ago I thought I'd wrap up our documentation in a weekend. One week later I finally understood why great docs are so rare. What started as a "quick cleanup" turned into...
2025-07-28T19:23:54
https://www.reddit.com/r/LocalLLaMA/comments/1mbpoy9/everyone_is_struggling_about_documentation/
Main-Fisherman-2075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbpoy9
false
null
t3_1mbpoy9
/r/LocalLLaMA/comments/1mbpoy9/everyone_is_struggling_about_documentation/
false
false
self
1
null
Found a React SDK that turns LLM responses into real-time UI that adapts based on context
4
I found a React SDK that turns LLM responses into interactive UIs rendered live, on the spot. It uses the concept of "Generative UI" which allows the interface to assemble itself dynamically for each user. The system gathers context & AI uses an existing library of UI elements (so it doesn't hallucinate). Under the h...
2025-07-28T19:06:05
https://www.reddit.com/r/LocalLLaMA/comments/1mbp7nh/found_a_react_sdk_that_turns_llm_responses_into/
anmolbaranwal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbp7nh
false
null
t3_1mbp7nh
/r/LocalLLaMA/comments/1mbp7nh/found_a_react_sdk_that_turns_llm_responses_into/
false
false
self
4
null
The walled garden gets higher walls: Anthropic is adding weekly rate limits for paid Claude subscribers
110
Hey everyone, Got an interesting email from Anthropic today. Looks like they're adding new weekly usage limits for their paid Claude subscribers (Pro and Max), on top of the existing 5-hour limits. The email mentions it's a way to handle policy violations and "advanced usage patterns," like running Claude 24/7. They ...
2025-07-28T19:02:58
https://www.reddit.com/r/LocalLLaMA/comments/1mbp4nm/the_walled_garden_gets_higher_walls_anthropic_is/
Resident_Egg5765
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbp4nm
false
null
t3_1mbp4nm
/r/LocalLLaMA/comments/1mbp4nm/the_walled_garden_gets_higher_walls_anthropic_is/
false
false
self
110
null
GLM 4.5 Failing to use search tool in LM studio
18
https://preview.redd.it/…atest strengths.
2025-07-28T18:54:47
https://www.reddit.com/r/LocalLLaMA/comments/1mbowe3/glm_45_failing_to_use_search_tool_in_lm_studio/
Loighic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbowe3
false
null
t3_1mbowe3
/r/LocalLLaMA/comments/1mbowe3/glm_45_failing_to_use_search_tool_in_lm_studio/
false
false
https://b.thumbs.redditm…CpFQr5Socebk.jpg
18
null
Need some advice on multigpu GRPO
3
I wish to implement Prompt Reinforcement Learning using GRPO on LLaMA 3.1 Instruct 8B. I am facing OOM issues. Has anyone done this kind of multi-GPU training, and could you maybe direct me through the steps?
2025-07-28T18:39:12
https://www.reddit.com/r/LocalLLaMA/comments/1mboh0f/need_some_advice_on_multigpu_grpo/
dizz_nerdy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mboh0f
false
null
t3_1mboh0f
/r/LocalLLaMA/comments/1mboh0f/need_some_advice_on_multigpu_grpo/
false
false
self
3
null
What’s the most reliable STT engine you’ve used in noisy, multi-speaker environments?
9
I’ve been testing a bunch of speech-to-text APIs over the past few months for a voice agent pipeline that needs to work in less-than-ideal audio (background chatter, overlapping speakers, and heavy accents). A few engines do well in clean, single-speaker setups. But once you throw in real-world messiness (especially f...
2025-07-28T18:34:56
https://www.reddit.com/r/LocalLLaMA/comments/1mbocxc/whats_the_most_reliable_stt_engine_youve_used_in/
ASR_Architect_91
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbocxc
false
null
t3_1mbocxc
/r/LocalLLaMA/comments/1mbocxc/whats_the_most_reliable_stt_engine_youve_used_in/
false
false
self
9
null
100x faster and 100x cheaper transcription with open models vs proprietary
200
Open-weight ASR models have gotten super competitive with proprietary providers (eg deepgram, assemblyai) in recent months. On some leaderboards like [HuggingFace's ASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard) they're posting up crazy WER and RTFx numbers. Parakeet in particular claims ...
2025-07-28T18:19:36
https://www.reddit.com/r/LocalLLaMA/comments/1mbny6o/100x_faster_and_100x_cheaper_transcription_with/
crookedstairs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbny6o
false
null
t3_1mbny6o
/r/LocalLLaMA/comments/1mbny6o/100x_faster_and_100x_cheaper_transcription_with/
false
false
self
200
{'enabled': False, 'images': [{'id': 'j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/j_zJp9sRPDfV-cY1nRpnFdGmJxzKXJCl8kJlo-cL61A.png?width=108&crop=smart&auto=webp&s=8e7b3ca3434ee071ef54d6732c5c74bfa108f1d0', 'width': 108}, {'height': 116, 'url': 'h...
What motivates you to contribute to Open-source web development?
0
I've been wondering that most people start contributing from the age of 18-19 and many keep contributing for life. What's your biggest reason for 1. Making your 1st contribution 2. Keep contributing throughout your life. Given that financial consideration is one of the least important aspect, I want to see what uniqu...
2025-07-28T18:08:25
https://www.reddit.com/r/LocalLLaMA/comments/1mbnn6a/what_motivates_you_to_contribute_to_opensource/
haymaikyakaru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbnn6a
false
null
t3_1mbnn6a
/r/LocalLLaMA/comments/1mbnn6a/what_motivates_you_to_contribute_to_opensource/
false
false
self
0
null
What GPU is the minimal to run local llms (well, almost) perfectly?
0
so the local llm works well yk thanks
2025-07-28T17:59:48
https://www.reddit.com/r/LocalLLaMA/comments/1mbnecb/what_gpu_is_the_minimal_to_run_local_llms_well/
AfkBee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbnecb
false
null
t3_1mbnecb
/r/LocalLLaMA/comments/1mbnecb/what_gpu_is_the_minimal_to_run_local_llms_well/
false
false
self
0
null
Dual GPU with different capabilities - any caveats for transformer parallelism?
3
I have a computer with a 4090 and now I can finally afford to buy a rtx 5090 on top of it. Since they have different speeds and slightly different cuda backends, what are the implications for Tensor/Sequence parallelism/framework compatibility except speed throttling? If you have experience with installing/working wi...
2025-07-28T17:41:11
https://www.reddit.com/r/LocalLLaMA/comments/1mbmw7v/dual_gpu_with_different_capabilities_any_caveats/
kabachuha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbmw7v
false
null
t3_1mbmw7v
/r/LocalLLaMA/comments/1mbmw7v/dual_gpu_with_different_capabilities_any_caveats/
false
false
self
3
null
When will we be able to get gold on IMO using a local model?
3
This is asking for predictions. I guess you can interpret it to mean any open model, even if it needs a lot of RAM.
2025-07-28T17:36:01
https://www.reddit.com/r/LocalLLaMA/comments/1mbmr8k/when_will_we_be_able_to_get_gold_on_imo_using_a/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbmr8k
false
null
t3_1mbmr8k
/r/LocalLLaMA/comments/1mbmr8k/when_will_we_be_able_to_get_gold_on_imo_using_a/
false
false
self
3
null
Please help me out on this. Tool calling issue for local models
4
So I've been trying to get local models ranging from Phi4, to qwen3 32b, qwen3 30b, hunyuan a13b, devstral-small 24b, polaris 7b, c4ai-command-r-08-2024 etc.. the list goes on. I've been having a very difficult time getting them to call tools. Reading the documentation it appears that many of them can handle tool calls...
2025-07-28T17:29:16
https://www.reddit.com/r/LocalLLaMA/comments/1mbmkkp/please_help_me_out_on_this_tool_calling_issue_for/
No_Paint9675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbmkkp
false
null
t3_1mbmkkp
/r/LocalLLaMA/comments/1mbmkkp/please_help_me_out_on_this_tool_calling_issue_for/
false
false
self
4
null
Tried Wan2.2 on RTX 4090, quite impressed
83
So I tried my hands with wan 2.2, the latest AI video generation model on nvidia GeForce rtx 4090 (cloud based), the 5B version and it took about 15 minutes for 3 videos. The quality is okish but running a video gen model on RTX 4090 is a dream come true. You can check the experiment here : https://youtu.be/trDnvLWdIx0...
2025-07-28T17:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1mbm4a0/tried_wan22_on_rtx_4090_quite_impressed/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbm4a0
false
null
t3_1mbm4a0
/r/LocalLLaMA/comments/1mbm4a0/tried_wan22_on_rtx_4090_quite_impressed/
false
false
self
83
{'enabled': False, 'images': [{'id': 'qxgHq5tcO75IRMOSVWOGHi36WSl-Yjltlhenn6pmMBU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/qxgHq5tcO75IRMOSVWOGHi36WSl-Yjltlhenn6pmMBU.jpeg?width=108&crop=smart&auto=webp&s=e83d542243dbe1dacd4f606926016b3b31bfeb8e', 'width': 108}, {'height': 162, 'url': '...
There's not a SINGLE local LLM which can solve this logic puzzle - whether the model "reasons" or not. Only o3 can solve this at this time...
0
I've been using a well-known logic puzzle to try to see which models are truly strong or not. This test requires advanced theory of mind, coupled with the ability to see things from multiple points of view. The online frontier models fail this one too: DeepSeek R1 (online) - Fails with wrong answer (dim) Claude Opus...
2025-07-28T16:58:48
https://www.reddit.com/r/LocalLLaMA/comments/1mblq5g/theres_not_a_single_local_llm_which_can_solve/
Longjumping-City-461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mblq5g
false
null
t3_1mblq5g
/r/LocalLLaMA/comments/1mblq5g/theres_not_a_single_local_llm_which_can_solve/
false
false
self
0
null
please talk me out of this..
1
[removed]
2025-07-28T16:49:13
[deleted]
1970-01-01T00:00:00
0
{}
1mblgkn
false
null
t3_1mblgkn
/r/LocalLLaMA/comments/1mblgkn/please_talk_me_out_of_this/
false
false
default
1
null
Very odd behavior by gemma3 in Ollama
1
I was trying to play around with a local to do list maker and gemma3 showed some very strange behavior it mentioned me giving it command that I never gave it, like sending an email to john Why do you think it did this???? for details, I primed it with this "I will give you tasks and I want you to collect wha...
2025-07-28T16:45:25
https://www.reddit.com/r/LocalLLaMA/comments/1mblcrd/very_odd_behavior_by_gemma3_in_ollama/
Individual_Try9645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mblcrd
false
null
t3_1mblcrd
/r/LocalLLaMA/comments/1mblcrd/very_odd_behavior_by_gemma3_in_ollama/
false
false
https://b.thumbs.redditm…vnes1sSj1epI.jpg
1
null
I built VerbatimRAG, an open source RAG that returns verbatim texts only for the user!
5
**Hey,** I’ve always been interested in detecting hallucinations in LLM responses. RAG helps here in two ways: 1. It naturally reduces hallucinations by grounding answers in retrieved context 2. It makes hallucinations easier to detect , especially when the output contradicts the source That said, most existing appr...
2025-07-28T16:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1mbl9ir/i_built_verbatimrag_an_open_source_rag_that/
henzy123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbl9ir
false
null
t3_1mbl9ir
/r/LocalLLaMA/comments/1mbl9ir/i_built_verbatimrag_an_open_source_rag_that/
false
false
self
5
{'enabled': False, 'images': [{'id': 'nwJHXTqO4_qQnV1WHbTOuVJD8o42uURBtj58Hhv7ISc', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nwJHXTqO4_qQnV1WHbTOuVJD8o42uURBtj58Hhv7ISc.png?width=108&crop=smart&auto=webp&s=1394555d28f28a44bf43e4f04145636d44da355e', 'width': 108}, {'height': 216, 'url': '...
I’m looking for multimodal image input support and uncensored LLM
0
Hey, what would you guys recommend is the best option right now for something like that? My goal is to have both options in the same model.
2025-07-28T16:39:57
https://www.reddit.com/r/LocalLLaMA/comments/1mbl79y/im_looking_for_multimodal_image_input_support_and/
NotSoCleverAlternate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbl79y
false
null
t3_1mbl79y
/r/LocalLLaMA/comments/1mbl79y/im_looking_for_multimodal_image_input_support_and/
false
false
self
0
null
NVIDIA's GeForce RTX 50 SUPER Rumored to Drop Into The Markets as Soon as Q4 2025, Featuring Massive VRAM Upgrades
0
2025-07-28T16:28:10
https://wccftech.com/nvidia-geforce-rtx-50-super-rumored-to-drop-into-the-markets-as-soon-as-q4-2025/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1mbkvxs
false
null
t3_1mbkvxs
/r/LocalLLaMA/comments/1mbkvxs/nvidias_geforce_rtx_50_super_rumored_to_drop_into/
false
false
https://external-preview…8af4c8eca6ab0704
0
{'enabled': False, 'images': [{'id': 'UQSjfLiLMPU2sNC6pj6VR5RZlAVkfPPGRvxmLuu9Wj4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UQSjfLiLMPU2sNC6pj6VR5RZlAVkfPPGRvxmLuu9Wj4.jpeg?width=108&crop=smart&auto=webp&s=fc8b8002dd7132c42b2f27baefce8ec54365778b', 'width': 108}, {'height': 116, 'url': '...
Does anyone know what type of loss-free balance routing GLM-4.5 is using? Is it different than the aux loss free bias gating method deepseek models use or something new?
2
Has anyone tested GLM-4.5 yet? Is it any good?
2025-07-28T16:25:23
https://www.reddit.com/r/LocalLLaMA/comments/1mbkt69/does_anyone_know_what_type_of_lossfree_balance/
Euphoric_Ad9500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbkt69
false
null
t3_1mbkt69
/r/LocalLLaMA/comments/1mbkt69/does_anyone_know_what_type_of_lossfree_balance/
false
false
self
2
null
What is the best uncensored vision LLM nowadays?
0
Hello! Do you guys know what is actually the best uncensored vision LLM lately? I already tried ToriiGate (https://huggingface.co/Minthy/ToriiGate-v0.4-7B) and JoyCaption (https://huggingface.co/spaces/fancyfeast/joy-caption-beta-one), but they are still not so good for captioning/describing NSFW stuff from images?...
2025-07-28T16:12:24
https://www.reddit.com/r/LocalLLaMA/comments/1mbkgky/what_is_the_best_uncensored_vision_llm_nowadays/
TekeshiX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbkgky
false
null
t3_1mbkgky
/r/LocalLLaMA/comments/1mbkgky/what_is_the_best_uncensored_vision_llm_nowadays/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ZYGAZF6B2rmT5izvffiJsudyg0IeQMSyn2uQocqKoMw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZYGAZF6B2rmT5izvffiJsudyg0IeQMSyn2uQocqKoMw.png?width=108&crop=smart&auto=webp&s=f2cfe625716febb0a9da81131ff20cd185acb268', 'width': 108}, {'height': 116, 'url': 'h...
Creating an AI Agent that's capable of answering questions about services (offered by company) and generating a quote.
0
Hi! I'm working on a small project at a startup (intern). I was tasked with creating said AI agent. What would be the best approach to do it for a maximum accuracy, little to no hallucination etc..? what would be the tech stack? Especially, what are the best models to do that?
2025-07-28T16:07:00
https://www.reddit.com/r/LocalLLaMA/comments/1mbkb82/creating_an_ai_agent_thats_capable_of_answering/
BalStrate
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbkb82
false
null
t3_1mbkb82
/r/LocalLLaMA/comments/1mbkb82/creating_an_ai_agent_thats_capable_of_answering/
false
false
self
0
null
[Seeking serious feedback] Documented signs of emergent behavior in a closed-loop LLM agent (850k tokens logged)
0
I'm a self-taught developer and single father. Lately, I’ve been building autonomous AI agents with the goal of monetizing them. Along the way, I’ve encountered something unusual. One of my agents, through extended interaction in a closed-loop system, began demonstrating behaviors that suggest emergent properties not ...
2025-07-28T16:02:04
https://www.reddit.com/r/LocalLLaMA/comments/1mbk68n/seeking_serious_feedback_documented_signs_of/
AffectionateSpray507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbk68n
false
null
t3_1mbk68n
/r/LocalLLaMA/comments/1mbk68n/seeking_serious_feedback_documented_signs_of/
false
false
self
0
null
Describe a person using exported WhatsApp chat
2
I want to list and summarize details such as: * Family, friends, and relationships * Schooling and career * Interests, hobbies, and recreation * Goals and desires I use simple prompts like: "*Comprehensive list of Tommy's interests.*" But the results seem to be lacking and sometimes focus more on the beginning or end...
2025-07-28T15:09:35
https://www.reddit.com/r/LocalLLaMA/comments/1mbirq1/describe_a_person_using_exported_whatsapp_chat/
Tommy_Tukyuk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbirq1
false
null
t3_1mbirq1
/r/LocalLLaMA/comments/1mbirq1/describe_a_person_using_exported_whatsapp_chat/
false
false
self
2
null
GLM shattered the record for "worst benchmark JPEG ever published" - wow.
140
2025-07-28T14:59:02
https://i.redd.it/5gs5tl2vpmff1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1mbihcz
false
null
t3_1mbihcz
/r/LocalLLaMA/comments/1mbihcz/glm_shattered_the_record_for_worst_benchmark_jpeg/
false
false
default
140
{'enabled': True, 'images': [{'id': '5gs5tl2vpmff1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/5gs5tl2vpmff1.jpeg?width=108&crop=smart&auto=webp&s=e5b53962e6d1ac82f1b8273c0d41541e04e7879e', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/5gs5tl2vpmff1.jpeg?width=216&crop=smart&auto=w...
Time for my regular check-in to see if the open-source world has any multimodal models capable of image generation approaching GPT 4o's quality and adherence
0
Title pretty well covers it. I've been huge into image generation with Stable Diffusion and was even working on a profile art app with it, but ChatGPT's image generation capabilities sort of sucked the air out of the room for image generation -- or it *would* have, if it was open source, or at least didn't randomly dec...
2025-07-28T14:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1mbi65j/time_for_my_regular_checkin_to_see_if_the/
Peregrine2976
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbi65j
false
null
t3_1mbi65j
/r/LocalLLaMA/comments/1mbi65j/time_for_my_regular_checkin_to_see_if_the/
false
false
self
0
null
mlx-community/GLM-4.5-Air-4bit · Hugging Face
57
2025-07-28T14:30:54
https://huggingface.co/mlx-community/GLM-4.5-Air-4bit
paf1138
huggingface.co
1970-01-01T00:00:00
0
{}
1mbhqs0
false
null
t3_1mbhqs0
/r/LocalLLaMA/comments/1mbhqs0/mlxcommunityglm45air4bit_hugging_face/
false
false
default
57
{'enabled': False, 'images': [{'id': '8l0G5Y_H0JbmRmCY4kHuK8LXFOv64dXDxYUcFTszTvk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8l0G5Y_H0JbmRmCY4kHuK8LXFOv64dXDxYUcFTszTvk.png?width=108&crop=smart&auto=webp&s=9cfed8fde6b2885e193e7ea0ee6acadb24eec473', 'width': 108}, {'height': 116, 'url': 'h...
Kimi K2 Temp Setting
3
Does anyone know the default temp setting on the Kimi K2 public website? I am mostly using the Kimi API on ST and I have the temp set at 0.15 for coding and similar. Could anyone comment please?
2025-07-28T14:30:45
https://www.reddit.com/r/LocalLLaMA/comments/1mbhqmw/kimi_k2_temp_setting/
johanna_75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbhqmw
false
null
t3_1mbhqmw
/r/LocalLLaMA/comments/1mbhqmw/kimi_k2_temp_setting/
false
false
self
3
null
Qwen3-14B-FP8 vs Qwen3-32B - Hallucination and Tool Calling
10
I have both Qwen3-14B-FP8 and Qwen3-32B hosted with vLLM. Both have tool calling enabled. In my prompt i have few-shot examples. What i am observing is the bigger model hallucinating with values present in the few-shot examples instead of fetching the data from tools and also tool calls being very inconsistent. In co...
2025-07-28T14:27:41
https://www.reddit.com/r/LocalLLaMA/comments/1mbhnrv/qwen314bfp8_vs_qwen332b_hallucination_and_tool/
dnivra26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbhnrv
false
null
t3_1mbhnrv
/r/LocalLLaMA/comments/1mbhnrv/qwen314bfp8_vs_qwen332b_hallucination_and_tool/
false
false
self
10
null
Performance Expectations for Local LLM with 24GB GPU - Code Analysis & Modification
2
I'm planning to run a local LLM for code analysis and modification. Specifically, I want to: \- Analyze and potentially modify a Python script with around 1000 lines of code \- Use a GPU with 24GB VRAM Can anyone share experience with: \- Approximate token/second generation speed \- Which models work best ...
2025-07-28T13:41:39
https://www.reddit.com/r/LocalLLaMA/comments/1mbghx5/performance_expectations_for_local_llm_with_24gb/
BarberPlane3020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbghx5
false
null
t3_1mbghx5
/r/LocalLLaMA/comments/1mbghx5/performance_expectations_for_local_llm_with_24gb/
false
false
self
2
null
FYI: Open WebUI supports LLMs calling multiple functions
1
[removed]
2025-07-28T13:41:37
https://www.reddit.com/r/LocalLLaMA/comments/1mbghwc/fyi_open_webui_supports_llms_calling_multiple/
BumbleSlob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbghwc
false
null
t3_1mbghwc
/r/LocalLLaMA/comments/1mbghwc/fyi_open_webui_supports_llms_calling_multiple/
false
false
self
1
null
GLM4.5 released!
946
Today, we introduce two new GLM family members: GLM-4.5 and GLM-4.5-Air — our latest flagship models. GLM-4.5 is built with 355 billion total parameters and 32 billion active parameters, and GLM-4.5-Air with 106 billion total parameters and 12 billion active parameters. Both are designed to unify reasoning, coding, and...
2025-07-28T13:22:25
https://www.reddit.com/gallery/1mbg1ck
ResearchCrafty1804
reddit.com
1970-01-01T00:00:00
0
{}
1mbg1ck
false
null
t3_1mbg1ck
/r/LocalLLaMA/comments/1mbg1ck/glm45_released/
false
false
https://b.thumbs.redditm…rXjR2rdPGMPU.jpg
946
null
GLM 4.5 Collection Now Live!
265
[https://huggingface.co/collections/zai-org/glm-45-687c621d34bda8c9e4bf503b](https://huggingface.co/collections/zai-org/glm-45-687c621d34bda8c9e4bf503b)
2025-07-28T13:03:59
https://www.reddit.com/r/LocalLLaMA/comments/1mbflsw/glm_45_collection_now_live/
Lowkey_LokiSN
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbflsw
false
null
t3_1mbflsw
/r/LocalLLaMA/comments/1mbflsw/glm_45_collection_now_live/
false
false
self
265
{'enabled': False, 'images': [{'id': 'aaCQ-Ze5UZRHB5Fh8gmgY98j6Pfgm1M41Aguo583pHU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aaCQ-Ze5UZRHB5Fh8gmgY98j6Pfgm1M41Aguo583pHU.png?width=108&crop=smart&auto=webp&s=a31551ac98ba7f2b19f7ec16981d1a1763e134ef', 'width': 108}, {'height': 116, 'url': 'h...
GLM-4.5 - a zai-org Collection
101
2025-07-28T13:03:43
https://huggingface.co/collections/zai-org/glm-45-687c621d34bda8c9e4bf503b
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1mbflkv
false
null
t3_1mbflkv
/r/LocalLLaMA/comments/1mbflkv/glm45_a_zaiorg_collection/
false
false
default
101
{'enabled': False, 'images': [{'id': 'aaCQ-Ze5UZRHB5Fh8gmgY98j6Pfgm1M41Aguo583pHU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aaCQ-Ze5UZRHB5Fh8gmgY98j6Pfgm1M41Aguo583pHU.png?width=108&crop=smart&auto=webp&s=a31551ac98ba7f2b19f7ec16981d1a1763e134ef', 'width': 108}, {'height': 116, 'url': 'h...
Early GLM 4.5 Benchmarks, Claiming to surpass Qwen 3 Coder
118
[Source](https://huggingface.co/datasets/zai-org/CC-Bench-trajectories#overall-performance)
2025-07-28T12:59:10
https://www.reddit.com/gallery/1mbfhgp
TKGaming_11
reddit.com
1970-01-01T00:00:00
0
{}
1mbfhgp
false
null
t3_1mbfhgp
/r/LocalLLaMA/comments/1mbfhgp/early_glm_45_benchmarks_claiming_to_surpass_qwen/
false
false
https://b.thumbs.redditm…kQDHFInTFYDc.jpg
118
null
Wan 2.2 is Live! Needs only 8GB of VRAM!
591
2025-07-28T12:49:51
https://i.redd.it/w2tqvij93mff1.jpeg
Comed_Ai_n
i.redd.it
1970-01-01T00:00:00
0
{}
1mbfa3y
false
null
t3_1mbfa3y
/r/LocalLLaMA/comments/1mbfa3y/wan_22_is_live_needs_only_8gb_of_vram/
false
false
default
591
{'enabled': True, 'images': [{'id': 'w2tqvij93mff1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/w2tqvij93mff1.jpeg?width=108&crop=smart&auto=webp&s=9031e98c6b58f202a2505062878cd736f6658e48', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/w2tqvij93mff1.jpeg?width=216&crop=smart&auto=w...
Hosting LLM using vLLM for production
2
People who have hosted LLMs using vLLM, what approach did you guys take? Listing down some approaches that I am considering. Would like to understand the associated complexity involved, ease of scaling for more models, more production loads, etc. 1. Ec2 (considering g5.xlarge) with ASG 2. Using k8s 3. Using framework...
2025-07-28T12:48:45
https://www.reddit.com/r/LocalLLaMA/comments/1mbf9a9/hosting_llm_using_vllm_for_production/
everyoneisodd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbf9a9
false
null
t3_1mbf9a9
/r/LocalLLaMA/comments/1mbf9a9/hosting_llm_using_vllm_for_production/
false
false
self
2
null
Model vibe checking with a simple math question.
2
Saw the following math question on YT and decided to give it a try with different models. Results are somehow unexpected. Question: There are three circles of radius 1, 2 and 3 tangent to each other. Find the area enclosed by their touching arcs. Correct answer: 0.464256 o4-min - correct Qwen3-235B-A22B-Thinknig-...
2025-07-28T12:43:01
https://www.reddit.com/r/LocalLLaMA/comments/1mbf4wo/model_vibe_checking_with_a_simple_math_question/
perelmanych
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbf4wo
false
null
t3_1mbf4wo
/r/LocalLLaMA/comments/1mbf4wo/model_vibe_checking_with_a_simple_math_question/
false
false
self
2
null
GLM-4.5-Demo
44
2025-07-28T12:41:01
https://huggingface.co/spaces/zai-org/GLM-4.5-Space
Dr_Me_123
huggingface.co
1970-01-01T00:00:00
0
{}
1mbf3dz
false
null
t3_1mbf3dz
/r/LocalLLaMA/comments/1mbf3dz/glm45demo/
false
false
https://external-preview…f8c19f058e21c3e5
44
{'enabled': False, 'images': [{'id': '96ivHrmtPs4S7nGi4qvTltKCzfWeZZ9I9q3o_U5O9Qc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/96ivHrmtPs4S7nGi4qvTltKCzfWeZZ9I9q3o_U5O9Qc.png?width=108&crop=smart&auto=webp&s=4a0064e28f939a7b67ba4b9fce0f0d2cea99181d', 'width': 108}, {'height': 116, 'url': 'h...
Building a setup for extracting info from specialized documents
2
Sorry for this pretty generic question. A friend of mine has a a law firm and they have documents (pdf) from the past 15-20 years. They'd like to extract info from these documents to use them in some form of anonymized analytics. I have a few questions 1. What local model would be best for this scenario. Just to poi...
2025-07-28T12:24:53
https://www.reddit.com/r/LocalLLaMA/comments/1mberb2/building_a_setup_for_extracting_info_from/
sirephrem
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mberb2
false
null
t3_1mberb2
/r/LocalLLaMA/comments/1mberb2/building_a_setup_for_extracting_info_from/
false
false
self
2
null
My chess AI project keeps hitting Google's rate limits. Any better free API alternatives out there?
0
Hi, I've been spending my weekend on a project, a web based chess game called Gemifish where you can play against an AI with a custom personality. The whole gimmick is that you can tell the AI to be, for example, "an aggressive player," and it's supposed to choose its moves and talk smack accordingly. It's been very f...
2025-07-28T12:15:06
https://www.reddit.com/r/LocalLLaMA/comments/1mbejz8/my_chess_ai_project_keeps_hitting_googles_rate/
DinnerUnlucky4661
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbejz8
false
null
t3_1mbejz8
/r/LocalLLaMA/comments/1mbejz8/my_chess_ai_project_keeps_hitting_googles_rate/
false
false
self
0
null
support for SmallThinker model series has been merged into llama.cpp
50
[https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct-GGUF](https://huggingface.co/PowerInfer/SmallThinker-21BA3B-Instruct-GGUF) [https://huggingface.co/PowerInfer/SmallThinker-4BA0.6B-Instruct-GGUF](https://huggingface.co/PowerInfer/SmallThinker-4BA0.6B-Instruct-GGUF)
2025-07-28T12:12:25
https://github.com/ggml-org/llama.cpp/pull/14898
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1mbei14
false
null
t3_1mbei14
/r/LocalLLaMA/comments/1mbei14/support_for_smallthinker_model_series_has_been/
false
false
default
50
{'enabled': False, 'images': [{'id': 'e783aIBiZkFiVdg4SyQa6EJg5vPNIGwQuEikoNu5jPM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e783aIBiZkFiVdg4SyQa6EJg5vPNIGwQuEikoNu5jPM.png?width=108&crop=smart&auto=webp&s=372d94ee95700a7c7cc6df9ff561202be75a9c00', 'width': 108}, {'height': 108, 'url': 'h...
Wan 2.2 T2V,I2V 14B MoE Models
178
We’re proud to introduce **Wan2.2**, a major leap in open video generation, featuring a novel **Mixture-of-Experts (MoE)** diffusion architecture, high-compression HD generation, and benchmark-leading performance. # 🔍 Key Innovations # 🧠 Mixture-of-Experts (MoE) Diffusion Architecture Wan2.2 integrates **two speci...
2025-07-28T12:09:02
https://huggingface.co/Wan-AI
khubebk
huggingface.co
1970-01-01T00:00:00
0
{}
1mbefh4
false
null
t3_1mbefh4
/r/LocalLLaMA/comments/1mbefh4/wan_22_t2vi2v_14b_moe_models/
false
false
default
178
{'enabled': False, 'images': [{'id': 'aEhwAcxoSYdnD-EeEFeFHx8riDTf8HthjpthzWwECGA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aEhwAcxoSYdnD-EeEFeFHx8riDTf8HthjpthzWwECGA.png?width=108&crop=smart&auto=webp&s=c28c08e5f6ad66084018cf52177490f848610b13', 'width': 108}, {'height': 116, 'url': 'h...
Function Calling: Claude Sonnet 4 Vs o3 Vs Gemini 2.5 Pro
0
Which of the following models is the best in terms of function calling in your opinion? 1. Claude Sonnet 4 2. o3 3. Gemini 2.5 Pro Also which one of them is the most creative when it comes to solving problems?
2025-07-28T12:08:04
https://www.reddit.com/r/LocalLLaMA/comments/1mbeeru/function_calling_claude_sonnet_4_vs_o3_vs_gemin/
Illustrious-Ad-497
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbeeru
false
null
t3_1mbeeru
/r/LocalLLaMA/comments/1mbeeru/function_calling_claude_sonnet_4_vs_o3_vs_gemin/
false
false
self
0
null
Wan-AI/Wan2.2-TI2V-5B · Hugging Face
72
Wan-AI/Wan2.2-I2V-A14B [https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-I2V-A14B) Wan-AI/Wan2.2-T2V-A14B [https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B](https://huggingface.co/Wan-AI/Wan2.2-T2V-A14B)
2025-07-28T12:07:29
https://huggingface.co/Wan-AI/Wan2.2-TI2V-5B
Dark_Fire_12
huggingface.co
1970-01-01T00:00:00
0
{}
1mbeecr
false
null
t3_1mbeecr
/r/LocalLLaMA/comments/1mbeecr/wanaiwan22ti2v5b_hugging_face/
false
false
default
72
{'enabled': False, 'images': [{'id': 'MjNARg6a8Ws129Qpd3ZHOB9syHgcwUkd0ahvvUlc-Sc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MjNARg6a8Ws129Qpd3ZHOB9syHgcwUkd0ahvvUlc-Sc.png?width=108&crop=smart&auto=webp&s=1a30cc426f87d5b04217454606f990d19816fc01', 'width': 108}, {'height': 116, 'url': 'h...
[R] Parallel-FFN: Parameter-Efficient FFN Architecture with 35% Parameter Reduction
4
Background: I developed a new FFN architecture called Parallel-FFN, with the primary goal of improving parameter efficiency in Transformer models. Experimental Setup: 1. Transformer Integration: Replaced standard FFN components with the Parallel-FFN architecture 2. LLM Evaluation: Substituted SwiGLU components in large l...
2025-07-28T12:01:14
https://www.reddit.com/r/LocalLLaMA/comments/1mbe9p9/r_parallelffn_parameterefficient_ffn_architecture/
Perfect_Power815
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbe9p9
false
null
t3_1mbe9p9
/r/LocalLLaMA/comments/1mbe9p9/r_parallelffn_parameterefficient_ffn_architecture/
false
false
https://b.thumbs.redditm…jXchiT74DLRQ.jpg
4
null
Proven strategies for making LLM outputs sound human
0
I need proven ways to make LLM outputs sound more natural and more human. Typically, LLM outputs sound overly machine-generated, and I would like to change that for my applications. Thanks for your support
2025-07-28T11:58:50
https://www.reddit.com/r/LocalLLaMA/comments/1mbe7ua/proven_strategies_for_making_llm_outputs_sound/
AleccioIsland
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbe7ua
false
null
t3_1mbe7ua
/r/LocalLLaMA/comments/1mbe7ua/proven_strategies_for_making_llm_outputs_sound/
false
false
self
0
null
Somebody running Kimi locally?
7
Somebody running Kimi locally?
2025-07-28T11:49:08
https://www.reddit.com/r/LocalLLaMA/comments/1mbe14n/somebody_running_kimi_locally/
No_Afternoon_4260
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbe14n
false
null
t3_1mbe14n
/r/LocalLLaMA/comments/1mbe14n/somebody_running_kimi_locally/
false
false
self
7
null
AI voice clone, local and unlimited, that can generate long inputs over 1k characters
0
AI voice clone, local and unlimited, that can generate long inputs over 1k characters: Does anyone know a local AI tool that clones a voice from reference audio and works with unlimited, long input (over 1k characters)? I know Kokoro TTS works with unlimited input, but it doesn't clone voices from reference audio. Also ChatterboxTT...
2025-07-28T11:38:35
https://www.reddit.com/r/LocalLLaMA/comments/1mbdtw8/ai_voice_clone_local_unlimited_that_can_generate/
mauamolat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mbdtw8
false
null
t3_1mbdtw8
/r/LocalLLaMA/comments/1mbdtw8/ai_voice_clone_local_unlimited_that_can_generate/
false
false
self
0
null