Dataset schema (one entry per column: name, dtype, observed value range):

title      string, length 1-300
score      int64, 0-8.54k
selftext   string, length 0-41.5k
created    timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url        string, length 0-878
author     string, length 3-20
domain     string, length 0-82
edited     timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded     int64, 0-2
gildings   string, 7 classes
id         string, length 7
locked     bool, 2 classes
media      string, length 646-1.8k
name       string, length 10
permalink  string, length 33-82
spoiler    bool, 2 classes
stickied   bool, 2 classes
thumbnail  string, length 4-213
ups        int64, 0-8.54k
preview    string, length 301-5.01k
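The schema above can be sketched as a typed record. This is a minimal illustrative sketch, not an official loader: the `RedditPost` dataclass name is my own, the nullable `media` and `preview` columns are omitted for brevity, and the sample values are copied from the first row of the dump (with the selftext shortened).

```python
# Illustrative sketch of one record in this schema (dataclass name is hypothetical;
# nullable media/preview columns omitted). Values come from the dump's first row.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class RedditPost:
    title: str
    score: int
    selftext: str
    created: datetime
    url: str
    author: str
    domain: str
    edited: datetime   # the 1970-01-01 epoch sentinel means "never edited"
    gilded: int
    gildings: str      # serialized dict, e.g. "{}"
    id: str            # 7-character base36 post id
    locked: bool
    name: str          # Reddit "fullname": "t3_" + id for link posts
    permalink: str
    spoiler: bool
    stickied: bool
    thumbnail: str     # "self"/"default" sentinel or a thumbnail URL
    ups: int


row = RedditPost(
    title='"Failed to initialize the context: failed to allocate compute pp buffers"',
    score=0,
    selftext="I get this error when I'm trying to load Qwen3 8B (shortened)",
    created=datetime.fromisoformat("2025-08-09T10:48:33"),
    url="https://www.reddit.com/r/LocalLLaMA/comments/1mlmhtg/failed_to_initialize_the_context_failed_to/",
    author="Remarkable-Yard-6939",
    domain="self.LocalLLaMA",
    edited=datetime.fromisoformat("1970-01-01T00:00:00"),
    gilded=0,
    gildings="{}",
    id="1mlmhtg",
    locked=False,
    name="t3_1mlmhtg",
    permalink="/r/LocalLLaMA/comments/1mlmhtg/failed_to_initialize_the_context_failed_to/",
    spoiler=False,
    stickied=False,
    thumbnail="self",
    ups=0,
)

# Invariant visible throughout the rows below: name is always "t3_" + id.
assert row.name == f"t3_{row.id}"
```

Note the `edited` column reuses the Unix epoch as a null sentinel, so "never edited" must be detected by comparing against `datetime(1970, 1, 1)` rather than treating the value as a real timestamp.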
"Failed to initialize the context: failed to allocate compute pp buffers"
0
I get this error when I'm trying to load Qwen3 8B. I would appreciate any help on why this is happening.
2025-08-09T10:48:33
https://www.reddit.com/r/LocalLLaMA/comments/1mlmhtg/failed_to_initialize_the_context_failed_to/
Remarkable-Yard-6939
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlmhtg
false
null
t3_1mlmhtg
/r/LocalLLaMA/comments/1mlmhtg/failed_to_initialize_the_context_failed_to/
false
false
self
0
null
Book translation Llm. HELP
4
Hi everybody, I'm looking to fine-tune an LLM to translate books from Arabic to French. What LLM would be best for this, and how would you go about it? I already have a dataset of 25,000 instructions in JSONL for Arabic-to-French translation. I'm fine-tuning Qwen 32B using Google Colab; is there any other bet...
2025-08-09T10:45:07
https://www.reddit.com/r/LocalLLaMA/comments/1mlmfvz/book_translation_llm_help/
Ok-Positive1446
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlmfvz
false
null
t3_1mlmfvz
/r/LocalLLaMA/comments/1mlmfvz/book_translation_llm_help/
false
false
self
4
null
Best open source and cross platform local whisper STT assistant
0
I recently found a tool called WhisperTyping. It’s free, works great, and I *love* the features: * Push-to-talk with a keyboard shortcut (walkie-talkie style) * Another shortcut to quickly type anywhere (then disable) * Works instantly in any app The UX is excellent, honestly one of the smoothest I’ve seen on Windows...
2025-08-09T10:15:52
https://www.reddit.com/r/LocalLLaMA/comments/1mllzm7/best_open_source_and_cross_platform_local_whisper/
Specific_Dimension51
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mllzm7
false
null
t3_1mllzm7
/r/LocalLLaMA/comments/1mllzm7/best_open_source_and_cross_platform_local_whisper/
false
false
self
0
null
How do you all keep up
0
How do you keep up with these models? There are soooo many models, their updates, and so many GGUFs or mixed models. I literally tried downloading 5, found 2 decent and 3 were bad. They differ in performance, efficiency, technique, and feature integration. I tried but it's so hard to track them, ...
2025-08-09T10:15:14
https://www.reddit.com/r/LocalLLaMA/comments/1mllz9c/how_do_you_all_keep_up/
ParthProLegend
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mllz9c
false
null
t3_1mllz9c
/r/LocalLLaMA/comments/1mllz9c/how_do_you_all_keep_up/
false
false
self
0
null
Imagine an open source code model that in the same level of claude code
2,009
2025-08-09T10:04:00
https://i.redd.it/diwwcslbwyhf1.png
Severe-Awareness829
i.redd.it
1970-01-01T00:00:00
0
{}
1mllt5x
false
null
t3_1mllt5x
/r/LocalLLaMA/comments/1mllt5x/imagine_an_open_source_code_model_that_in_the/
false
false
default
2,009
{'enabled': True, 'images': [{'id': 'diwwcslbwyhf1', 'resolutions': [{'height': 114, 'url': 'https://preview.redd.it/diwwcslbwyhf1.png?width=108&crop=smart&auto=webp&s=076263be2e3327506a1db1a4491bb31f136ccbb0', 'width': 108}, {'height': 228, 'url': 'https://preview.redd.it/diwwcslbwyhf1.png?width=216&crop=smart&auto=we...
Can I add another GPU to my B450M Steel Legend just to increase context window in LM Studio?
0
I currently have a PC with a B450M Steel Legend motherboard, a Ryzen 3600 CPU, 96 GB RAM, and an RTX 3090. My goal isn’t to load bigger models; I just want to increase the context window in LM Studio. *(Yes, I know it’s a bit of an unusual and unbalanced setup, I originally built this pc for simpler tasks, but with ai w...
2025-08-09T09:56:25
https://www.reddit.com/r/LocalLLaMA/comments/1mllozf/can_i_add_another_gpu_to_my_b450m_steel_legend/
mambalorda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mllozf
false
null
t3_1mllozf
/r/LocalLLaMA/comments/1mllozf/can_i_add_another_gpu_to_my_b450m_steel_legend/
false
false
self
0
null
Can Framework Desktop be effectively clustered for MOE?
4
I’m looking at the Framework Desktop Max+ 395 with 128GB RAM. The bandwidth of 256 GB/s is a bottleneck for any dense model, but it looks decent for MOE models. As someone not familiar with these details, I’m wondering: is there any realistic chance to cluster multiple of these machines and get reasonable performance ...
2025-08-09T09:39:53
https://www.reddit.com/r/LocalLLaMA/comments/1mllg9w/can_framework_desktop_be_effectively_clustered/
Admirable_Reality281
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mllg9w
false
null
t3_1mllg9w
/r/LocalLLaMA/comments/1mllg9w/can_framework_desktop_be_effectively_clustered/
false
false
self
4
null
12gb vs 16gb vram difference.
0
Hello all, I've been wanting to experiment with agentic AI and LLMs in general. I know that 16 GB VRAM is better, but is it a massive difference, or relatively medium or small? My use cases are as follows: agentic AI to scrape the web, file management, automation in sending assignments, and navigating my univ website. ...
2025-08-09T09:31:22
https://www.reddit.com/r/LocalLLaMA/comments/1mllbxg/12gb_vs_16gb_vram_difference/
Dhonnan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mllbxg
false
null
t3_1mllbxg
/r/LocalLLaMA/comments/1mllbxg/12gb_vs_16gb_vram_difference/
false
false
self
0
null
Is it possible to use Local Model + continue.dev AND copilot at the same time?
0
Hi everyone. I’m very new to setting up local LMs and haven’t done a lot of solo dev work either. I wanted to work on a project, whose base code is there on GitHub. I need to go through the whole repo, understand the code and then understand where I have to write my own code. Now after a week of struggling to make se...
2025-08-09T09:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1mll6zt/is_it_possible_to_use_local_model_continuedev_and/
_SKETCHBENDER_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mll6zt
false
null
t3_1mll6zt
/r/LocalLLaMA/comments/1mll6zt/is_it_possible_to_use_local_model_continuedev_and/
false
false
self
0
null
Noob question
0
Hey! I’m considering upgrading my PC to have 2 GPUs (Nvidia 2080 + 2070), ~19 GB VRAM total, compared to my current single 2080 with 11 GB. I need to buy a better PSU to drive both GPUs. Question is, is it worth it? How much better local models could I run with both GPUs compared to a single GPU? Is it worth “investing” into a be...
2025-08-09T09:06:42
https://www.reddit.com/r/LocalLLaMA/comments/1mlkzag/noob_question/
DomoLeshi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlkzag
false
null
t3_1mlkzag
/r/LocalLLaMA/comments/1mlkzag/noob_question/
false
false
self
0
null
Why is this happening 😔😭😭😭
0
Is Qwen 3 specifically made for story-writing with accurate details? I'm pissed because Qwen 3 235B somehow did not do as well as I expected. I just wanted Qwen to be good enough to replace GLM 4.5 and Gemini 2.5 Pro, even in thinking. It failed so much. 😭😭😭. How did this go so badly.
2025-08-09T09:02:46
https://www.reddit.com/r/LocalLLaMA/comments/1mlkx6x/why_is_this_happening/
Ambitious-a4s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlkx6x
false
null
t3_1mlkx6x
/r/LocalLLaMA/comments/1mlkx6x/why_is_this_happening/
false
false
self
0
null
Update for Maestro - A Self-Hosted Research Assistant. Now with Windows/macOS support, Word/MD files support, and a smarter writing agent
103
Hey r/LocalLLaMA! A few days ago I posted my project, Maestro, a self-hosted RAG pipeline to assist with deep research and writing with your local models and documents. I've been working on an update based on feedback from the community and I'm very excited to share some new features with you all! Here's what's new: ...
2025-08-09T08:42:32
https://i.redd.it/suz9rdhvhyhf1.png
hedonihilistic
i.redd.it
1970-01-01T00:00:00
0
{}
1mlkmlt
false
null
t3_1mlkmlt
/r/LocalLLaMA/comments/1mlkmlt/update_for_maestro_a_selfhosted_research/
false
false
https://b.thumbs.redditm…PFr0H2RnB1_Y.jpg
103
{'enabled': True, 'images': [{'id': '9nedtnEYnVxHBlz0Ixxcl8hQsORQ4znutJI9YtCfUiU', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/suz9rdhvhyhf1.png?width=108&crop=smart&auto=webp&s=ee38c7f49a303aacc76ee36e8a46bbe583d6e22c', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/suz9rdhvhyhf1.png...
text to dsl (opensearch)
0
Looking for small, open-source LLMs (<7B params) that can convert natural language to DSL (OpenSearch DSL preferred, SQL also fine). Also — any public datasets for this, or anyone here tried fine-tuning a model for text-to-DSL? Would love to hear your experience or see repos.
2025-08-09T08:39:24
https://www.reddit.com/r/LocalLLaMA/comments/1mlkkyd/text_to_dsl_opensearch/
ReporterRemote6713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlkkyd
false
null
t3_1mlkkyd
/r/LocalLLaMA/comments/1mlkkyd/text_to_dsl_opensearch/
false
false
self
0
null
GPT-5 fails class and rollout to o4
0
While all this is happening, I am getting more comfortable using software that runs locally with great LLM models (like Qwen, Magistral, DeepSeek, Seek Coder etc.). These models do not refuse, do not expire, and run like a mule. 2 simple examples where I created in seconds a Voice Cloning App and a Protein Structure 3d ...
2025-08-09T07:47:43
https://i.redd.it/ppseuoq58yhf1.png
Trilogix
i.redd.it
1970-01-01T00:00:00
0
{}
1mljtb0
false
null
t3_1mljtb0
/r/LocalLLaMA/comments/1mljtb0/gpt5_fails_class_and_rollout_to_o4/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ppseuoq58yhf1', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/ppseuoq58yhf1.png?width=108&crop=smart&auto=webp&s=30d560b46f94b4deb72a12f6d5965db189e3af2a', 'width': 108}, {'height': 197, 'url': 'https://preview.redd.it/ppseuoq58yhf1.png?width=216&crop=smart&auto=web...
For Qwen3:4b, do people prefer instruct or thinking?
4
One benefit of the older version of Qwen3:4b was that you could switch on thinking or not depending on the query. Now you'd have to download both models and switch. Are people doing this or do they tend to just prefer one model for everything? [View Poll](https://www.reddit.com/poll/1mljt8j)
2025-08-09T07:47:35
https://www.reddit.com/r/LocalLLaMA/comments/1mljt8j/for_qwen34b_do_people_prefer_instruct_or_thinking/
Clipbeam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mljt8j
false
null
t3_1mljt8j
/r/LocalLLaMA/comments/1mljt8j/for_qwen34b_do_people_prefer_instruct_or_thinking/
false
false
self
4
null
‘bumpy’ GPT-5 rollout to 4
1
[removed]
2025-08-09T07:35:51
https://i.redd.it/ttgvt0x46yhf1.png
Trilogix
i.redd.it
1970-01-01T00:00:00
0
{}
1mljmtz
false
null
t3_1mljmtz
/r/LocalLLaMA/comments/1mljmtz/bumpy_gpt5_rollout_to_4/
false
false
https://a.thumbs.redditm…DVyGoK4f0OU8.jpg
1
{'enabled': True, 'images': [{'id': '2236LzwdXbewSSMhs9muRFDR2nIPimxTIBuCVg4bNYA', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/ttgvt0x46yhf1.png?width=108&crop=smart&auto=webp&s=307741cc047be9409c27a44673567430c9b67137', 'width': 108}, {'height': 115, 'url': 'https://preview.redd.it/ttgvt0x46yhf1.png...
Another uncensored gpt-oss to try
47
Original post: https://www.reddit.com/r/LocalLLaMA/s/BN3K0zDR4B Links: https://huggingface.co/huizimao/gpt-oss-20b-uncensored-bf16 https://huggingface.co/huizimao/gpt-oss-20b-uncensored-mxfp4 GGUFs not yet available (I will update later)
2025-08-09T07:33:38
https://www.reddit.com/r/LocalLLaMA/comments/1mljlmu/another_uncensored_gptoss_to_try/
jacek2023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mljlmu
false
null
t3_1mljlmu
/r/LocalLLaMA/comments/1mljlmu/another_uncensored_gptoss_to_try/
false
false
self
47
{'enabled': False, 'images': [{'id': 'RqYr_GrS-_pJqdWdPtPDQdGMZJzeQoztIuD9gG4HNPA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/RqYr_GrS-_pJqdWdPtPDQdGMZJzeQoztIuD9gG4HNPA.png?width=108&crop=smart&auto=webp&s=ba1aa506fc21df0e30e36bdb95520f0b3db9d486', 'width': 108}, {'height': 116, 'url': 'h...
OpenAI returns old models to ChatGPT as Sam Altman admits ‘bumpy’ GPT-5 rollout but...
1
[removed]
2025-08-09T07:31:23
https://www.reddit.com/gallery/1mljkds
Trilogix
reddit.com
1970-01-01T00:00:00
0
{}
1mljkds
false
null
t3_1mljkds
/r/LocalLLaMA/comments/1mljkds/openai_returns_old_models_to_chatgpt_as_sam/
false
false
https://b.thumbs.redditm…CSn2cRyMm5dQ.jpg
1
null
New GLM-4.5 models soon
643
I hope we get to see smaller models. The current models are amazing but quite a bit too big for a lot of people. But it looks like the teaser image implies vision capabilities. Image posted by Z.ai on X.
2025-08-09T07:28:32
https://i.redd.it/x7nklkjv4yhf1.jpeg
adrgrondin
i.redd.it
1970-01-01T00:00:00
0
{}
1mljip4
false
null
t3_1mljip4
/r/LocalLLaMA/comments/1mljip4/new_glm45_models_soon/
false
false
default
643
{'enabled': True, 'images': [{'id': 'x7nklkjv4yhf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/x7nklkjv4yhf1.jpeg?width=108&crop=smart&auto=webp&s=c6431bc3b134fb6d68162e69e9003164da19b2c9', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/x7nklkjv4yhf1.jpeg?width=216&crop=smart&auto=...
Horizon Beta vs GPT-5's creative writing quality.
0
I used Horizon Beta to do some light creative writing during its availability, and was blown away by its succinct prose. According to everyone here it's GPT-5, but when I used GPT-5 the writing quality was a step below Horizon Beta. Not bad, but definitely not the level I experienced before. Does anyone have any succ...
2025-08-09T07:21:30
https://www.reddit.com/r/LocalLLaMA/comments/1mljeso/horizon_beta_vs_gpt5s_creative_writing_quality/
Sea-Complex831
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mljeso
false
null
t3_1mljeso
/r/LocalLLaMA/comments/1mljeso/horizon_beta_vs_gpt5s_creative_writing_quality/
false
false
self
0
null
Can you run local llm on laptops?
0
I’ve been running tts models like kokoro-tts (really fast, 2-3 s to produce 60 sec speech) and chatterbox-tts (a bit slower but still good), wan 2.1 via comfy-ui (slow, between 5-10 min to generate 4s video) and various other generative ai on my laptop with ”laptop grade” rtx 3060 gpu (note: not the same as the desktop...
2025-08-09T06:55:39
https://www.reddit.com/r/LocalLLaMA/comments/1mlizyq/can_you_run_local_llm_on_laptops/
AI-On-A-Dime
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlizyq
false
null
t3_1mlizyq
/r/LocalLLaMA/comments/1mlizyq/can_you_run_local_llm_on_laptops/
false
false
self
0
null
Local model recommendations for lightweight, repeated screenshot analysis on macOS?
3
I’m thinking of making a simple Mac OS desktop app that takes full-desktop screenshots every few minutes, does local analysis, and compiles periodic reports for the user. I want it to be privacy-first, fully on-device, but I’m new to local models (only used cloud LLM APIs), so not sure if I'm thinking about it the rig...
2025-08-09T06:36:13
https://www.reddit.com/r/LocalLLaMA/comments/1mlioxa/local_model_recommendations_for_lightweight/
shock_and_awful
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlioxa
false
null
t3_1mlioxa
/r/LocalLLaMA/comments/1mlioxa/local_model_recommendations_for_lightweight/
false
false
self
3
null
Titan RTX vs 3090?
0
I was able to find Titan RTX cards for \~ $460 USD equivalent in Chinese markets and 3090s for $640 USD equivalent. Is it better value to go with Titan RTX cards for LLM inference mainly? Thanks in advance!
2025-08-09T06:21:14
https://www.reddit.com/r/LocalLLaMA/comments/1mligbe/titan_rtx_vs_3090/
AssociationAdept4052
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mligbe
false
null
t3_1mligbe
/r/LocalLLaMA/comments/1mligbe/titan_rtx_vs_3090/
false
false
self
0
null
Which might be the best model for my use case??
0
So, I'm trying to build a terminal shell backed by a LLM. The model should be able to reply with extremely relevant commands and explanation for it. I can then strip it down and execute the command. My additional use case is, I want it to be conversational and roleplay competent. I want to add a personality to the t...
2025-08-09T06:17:26
https://www.reddit.com/r/LocalLLaMA/comments/1mlie4i/which_might_be_the_best_model_for_my_use_case/
zanyfker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlie4i
false
null
t3_1mlie4i
/r/LocalLLaMA/comments/1mlie4i/which_might_be_the_best_model_for_my_use_case/
false
false
self
0
null
Impressive. Has anyone benchmarked the long context performance?
1
2025-08-09T06:16:21
https://i.redd.it/wzvzzuxzrxhf1.jpeg
Sad_Cardiologist_835
i.redd.it
1970-01-01T00:00:00
0
{}
1mlidha
false
null
t3_1mlidha
/r/LocalLLaMA/comments/1mlidha/impressive_has_anyone_benchmarked_the_long/
false
false
https://b.thumbs.redditm…vgL7nJaEQK9Y.jpg
1
{'enabled': True, 'images': [{'id': 'ClK9cKSotKuODa9IryqpX5lPT8AMS2HQ_ooi4hSaZR8', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/wzvzzuxzrxhf1.jpeg?width=108&crop=smart&auto=webp&s=6de3696a238df92ae311e59be41754f287911466', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/wzvzzuxzrxhf1.jpe...
uncensored gpt-oss-20b, bf16 and mxfp4 both available
30
gpt-oss-20b's refusal rate is super high, \~70% on the Amazon FalseReject dataset. I also tested it with a subset of WildChat 1M and saw about a 5-10% refusal rate, which is almost intolerable. Unfortunately, the current PTQ method hurts the LoRA adapter quite a bit (but still better than nothing). We already got MXFP4 QAT workin...
2025-08-09T06:02:00
https://www.reddit.com/r/LocalLLaMA/comments/1mli4za/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
Ralph_mao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mli4za
false
null
t3_1mli4za
/r/LocalLLaMA/comments/1mli4za/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
false
false
self
30
null
uncensored gpt-oss-20b, bf16 and mxfp4 both available
1
gpt-oss-20b's refusal rate is super high, \~70% on the Amazon FalseReject dataset. I also tested it with a subset of WildChat 1M and saw about a 5-10% refusal rate, which is almost intolerable. Unfortunately, the current PTQ method hurts the LoRA adapter quite a bit (but still better than nothing). We already got MXFP4 QAT workin...
2025-08-09T05:57:57
https://www.reddit.com/r/LocalLLaMA/comments/1mli2h1/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
Ralph_mao
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mli2h1
false
null
t3_1mli2h1
/r/LocalLLaMA/comments/1mli2h1/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
false
false
self
1
null
uncensored gpt-oss-20b, bf16 and mxfp4 both available
1
[removed]
2025-08-09T05:56:55
https://i.redd.it/a0xonxzhoxhf1.png
Ralph_mao
i.redd.it
1970-01-01T00:00:00
0
{}
1mli1vu
false
null
t3_1mli1vu
/r/LocalLLaMA/comments/1mli1vu/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
false
false
default
1
{'enabled': True, 'images': [{'id': 'a0xonxzhoxhf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/a0xonxzhoxhf1.png?width=108&crop=smart&auto=webp&s=5cabd8cd4168c3137b7a57ad946fc1f13d0e5642', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/a0xonxzhoxhf1.png?width=216&crop=smart&auto=web...
uncensored gpt-oss-20b, bf16 and mxfp4 both available
1
[removed]
2025-08-09T05:54:57
https://i.redd.it/8407acsenxhf1.png
Ralph_mao
i.redd.it
1970-01-01T00:00:00
0
{}
1mli0r9
false
null
t3_1mli0r9
/r/LocalLLaMA/comments/1mli0r9/uncensored_gptoss20b_bf16_and_mxfp4_both_available/
false
false
default
1
{'enabled': True, 'images': [{'id': '8407acsenxhf1', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/8407acsenxhf1.png?width=108&crop=smart&auto=webp&s=bea2fc7627a5c07755af4c93d2cb38e7276f2f4b', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/8407acsenxhf1.png?width=216&crop=smart&auto=web...
Finally, I Wrote a 600-Page Book About My Mad LLM fine-tuning experiments
100
You may or may not be aware that I wrote Training Pro and Playground and Virtual Lora and a lot of other insane code that some of you use every day to muck about with LLMs or to idly goof off. And not only that, but I have also created, in my own pathetic home, thousands and thousands of LoRAs and all kinds of strange,...
2025-08-09T05:40:07
https://www.reddit.com/r/LocalLLaMA/comments/1mlhryw/finally_i_wrote_a_600page_book_about_my_mad_llm/
FPham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlhryw
false
null
t3_1mlhryw
/r/LocalLLaMA/comments/1mlhryw/finally_i_wrote_a_600page_book_about_my_mad_llm/
false
false
self
100
null
How can I automate my NotebookLM → Video Overview workflow?
0
# How can I automate my NotebookLM → Video Overview workflow? I’m looking for advice from people who’ve done automation with local LLM setups, browser scripting, or RPA tools. Here’s my current manual workflow: 1. I source all the important questions from previous years’ exam papers. 2. I feed these questions ...
2025-08-09T05:15:28
https://www.reddit.com/r/LocalLLaMA/comments/1mlhd85/how_can_i_automate_my_notebooklm_video_overview/
ILoveDeepWork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlhd85
false
null
t3_1mlhd85
/r/LocalLLaMA/comments/1mlhd85/how_can_i_automate_my_notebooklm_video_overview/
false
false
self
0
null
how do I learn prompting?
0
I was testing horizon beta using roocode in vscode and it produced absolutely useless unnecessary code. Gemini cli free was producing much better code. I am wondering if it's my prompts that caused it to write bad code? By bad code I mean, it writes typescript code like function somefunction(myvar: (unknown as { someke...
2025-08-09T05:01:50
https://www.reddit.com/r/LocalLLaMA/comments/1mlh4oc/how_do_i_learn_prompting/
zowpi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlh4oc
false
null
t3_1mlh4oc
/r/LocalLLaMA/comments/1mlh4oc/how_do_i_learn_prompting/
false
false
self
0
null
Which LLM ?
0
What is the best locally running (offline) LLM for coding that does not send any data to a server?
2025-08-09T04:27:30
https://www.reddit.com/r/LocalLLaMA/comments/1mlgi56/which_llm/
DEV-Innovation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlgi56
false
null
t3_1mlgi56
/r/LocalLLaMA/comments/1mlgi56/which_llm/
false
false
self
0
null
Best model for coding?
1
[removed]
2025-08-09T04:13:09
https://www.reddit.com/r/LocalLLaMA/comments/1mlg8tv/best_model_for_coding/
Exact_Tip_8497
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlg8tv
false
null
t3_1mlg8tv
/r/LocalLLaMA/comments/1mlg8tv/best_model_for_coding/
false
false
self
1
null
Fine tuning
0
Does anyone know if there's a small, decent LLM worth fine-tuning (training in a specific language), a model I would be able to fine-tune on Google Colab's free tier? Any recommendations?
2025-08-09T04:08:22
https://www.reddit.com/r/LocalLLaMA/comments/1mlg5li/fine_tuning/
0y0s
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlg5li
false
null
t3_1mlg5li
/r/LocalLLaMA/comments/1mlg5li/fine_tuning/
false
false
self
0
null
Is anything better than gemma-3-27b for handwritten text recognition?
237
I'm a contributor of an open source project that is trying to automate the process of getting ballot initiatives (like ranked choice voting) approved to be put on ballots. Signatures are gathered and compared to a voter registration to make sure they live in the jurisdiction. Multimodal with vision like ChatGPT and Gem...
2025-08-09T04:01:20
https://www.reddit.com/gallery/1mlg0sk
votecatcher
reddit.com
1970-01-01T00:00:00
0
{}
1mlg0sk
false
null
t3_1mlg0sk
/r/LocalLLaMA/comments/1mlg0sk/is_anything_better_than_gemma327b_for_handwritten/
false
false
https://b.thumbs.redditm…GiAMW6MuLKdo.jpg
237
null
Is anything better than gemma-3-27b for handwritten text recognition?
1
I'm a contributor of an open source project that is trying to automate the process of getting ballot initiatives (like ranked choice voting) approved to be put on ballots. Signatures are gathered and compared to a voter registration to make sure they live in the jurisdiction. Multimodal with vision like ChatGPT and Gem...
2025-08-09T03:59:21
https://www.reddit.com/gallery/1mlfzb9
votecatcher
reddit.com
1970-01-01T00:00:00
0
{}
1mlfzb9
false
null
t3_1mlfzb9
/r/LocalLLaMA/comments/1mlfzb9/is_anything_better_than_gemma327b_for_handwritten/
false
false
https://a.thumbs.redditm…UrghUWwvXxr0.jpg
1
null
Gamers Nexus did an investigation into the videocard blackmarket in China.
142
2025-08-09T03:51:32
https://youtu.be/ltgyS8oJC8g
MrWeirdoFace
youtu.be
1970-01-01T00:00:00
0
{}
1mlftxf
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ltgyS8oJC8g?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco...
t3_1mlftxf
/r/LocalLLaMA/comments/1mlftxf/gamers_nexus_did_an_investigation_into_the/
false
false
default
142
{'enabled': False, 'images': [{'id': 'kuBTlUBt-VJKMlkA3iq90XNuL9L7HP85dkd9iGjm14Q', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/kuBTlUBt-VJKMlkA3iq90XNuL9L7HP85dkd9iGjm14Q.jpeg?width=108&crop=smart&auto=webp&s=7c057efbcd685a537782306036623afdfe39d774', 'width': 108}, {'height': 162, 'url': '...
In Praise of Qwen 3 4B Instruct 2507
1
[removed]
2025-08-09T03:44:37
https://www.reddit.com/r/LocalLLaMA/comments/1mlfp2w/in_praise_of_qwen_3_4b_instruct_2507/
StrategicHarmony
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlfp2w
false
null
t3_1mlfp2w
/r/LocalLLaMA/comments/1mlfp2w/in_praise_of_qwen_3_4b_instruct_2507/
false
false
self
1
null
Docker as local model server?
1
[removed]
2025-08-09T03:35:17
https://www.reddit.com/r/LocalLLaMA/comments/1mlfiq9/docker_as_local_model_server/
rm-rf-rm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlfiq9
false
null
t3_1mlfiq9
/r/LocalLLaMA/comments/1mlfiq9/docker_as_local_model_server/
false
false
self
1
null
GLM-4.5-X, GLM-4.5-AirX & GLM-4.5-Flash?
3
Just saw some interesting models in their documentation, haven't seen those around before, are they new releases? Source: [https://docs.z.ai/guides/llm/glm-4.5](https://docs.z.ai/guides/llm/glm-4.5) OBS: It would be great if you could include the recommended temperature and other parameters in the documentation. I wa...
2025-08-09T03:34:59
https://i.redd.it/upuipad9wwhf1.png
AMOVCS
i.redd.it
1970-01-01T00:00:00
0
{}
1mlfiji
false
null
t3_1mlfiji
/r/LocalLLaMA/comments/1mlfiji/glm45x_glm45airx_glm45flash/
false
false
default
3
{'enabled': True, 'images': [{'id': 'upuipad9wwhf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/upuipad9wwhf1.png?width=108&crop=smart&auto=webp&s=62acf8da0ce28208ea82f87ae014c0b50c40f997', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/upuipad9wwhf1.png?width=216&crop=smart&auto=web...
Github
0
Hi there, I'm new to all of this. If I go through GitHub, does it take all my information? I'm using an LLM but I only have a laptop with no GPU; I have an iGPU. I found some models on GitHub that use the iGPU rather than my CPU. But I understand that it may steal my code? Or the content that I'm trying to use the LLM to do for...
2025-08-09T03:12:02
https://www.reddit.com/r/LocalLLaMA/comments/1mlf2om/github/
Ordinary_Hope_2113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlf2om
false
null
t3_1mlf2om
/r/LocalLLaMA/comments/1mlf2om/github/
false
false
self
0
null
Miro ODR: Another Deep Research Agent model just went open source
142
Hey r/LocalLLaMA! 👋 We just dropped MiroMind Open Deep Research v0.1 - and we mean ACTUALLY open this time So we've been grinding on this deep research project for months, and we're finally ready to share what we've built. Unlike the usual "open source*" (*terms and conditions apply) releases, we're giving you liter...
2025-08-09T03:11:34
https://v.redd.it/iqh4s2cyuwhf1
MiroMindAI
v.redd.it
1970-01-01T00:00:00
0
{}
1mlf2ch
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/iqh4s2cyuwhf1/DASHPlaylist.mpd?a=1757301108%2CMWE2OWVkZTI2MDQ0NDVlYTgwNzhlOTZiYTcyMDY4ZmY0MDYwYWQ4YmMzNmZlNTM2NTk5MjMwOGNmZWU4MDhjOQ%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/iqh4s2cyuwhf1/DASH_480.mp4?source=fallback', 'ha...
t3_1mlf2ch
/r/LocalLLaMA/comments/1mlf2ch/miro_odr_another_deep_research_agent_model_just/
false
false
https://external-preview…eaa572839118afba
142
{'enabled': False, 'images': [{'id': 'Z3llNzlpYXl1d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Z3llNzlpYXl1d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM.png?width=108&crop=smart&format=pjpg&auto=webp&s=5dac5b22b08fae2793af5476104d8cedfcd4...
Miro ODR: Another Deep Research Agent model just went open source
1
Hey r/LocalLLaMA! 👋 We just dropped MiroMind Open Deep Research v0.1 - and we mean ACTUALLY open this time So we've been grinding on this deep research project for months, and we're finally ready to share what we've built. Unlike the usual "open source*" (*terms and conditions apply) releases, we're giving you liter...
2025-08-09T03:10:23
https://www.reddit.com/r/LocalLLaMA/comments/1mlf1hc/miro_odr_another_deep_research_agent_model_just/
MiroMindAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlf1hc
false
null
t3_1mlf1hc
/r/LocalLLaMA/comments/1mlf1hc/miro_odr_another_deep_research_agent_model_just/
false
false
self
1
null
Miro ODR: Another Deep Research Agent model just went open source
1
[removed]
2025-08-09T03:09:21
https://v.redd.it/d77akmuiuwhf1
MiroMindAI
v.redd.it
1970-01-01T00:00:00
0
{}
1mlf0r1
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/d77akmuiuwhf1/DASHPlaylist.mpd?a=1757300977%2CMzI1ODU0YzdmMjc2MWI3Yzc5M2Q4MjYyNmEyZWJkNGRiNjBhNDQ3ZDJmMjc2ZTM4ZDBhODkyZTRmNjI4MmI0Mg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/d77akmuiuwhf1/DASH_480.mp4?source=fallback', 'ha...
t3_1mlf0r1
/r/LocalLLaMA/comments/1mlf0r1/miro_odr_another_deep_research_agent_model_just/
false
false
https://external-preview…683324a516e9c8c0
1
{'enabled': False, 'images': [{'id': 'N25tdGdudWl1d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/N25tdGdudWl1d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM.png?width=108&crop=smart&format=pjpg&auto=webp&s=8482d7ba249da9c53320e05ac5cedf910efc...
Can someone help me understand the difference in these models?
0
Ok so I wrote a pretty useful application for my own purposes. I am basically using something like the instructor library to get JSON output and run my own application off these inputs. I was just following a quickstart guide which recommended qwen2.5 with ollama, so thats what I had built it on. It works pretty flawle...
2025-08-09T03:07:10
https://www.reddit.com/r/LocalLLaMA/comments/1mlez7u/can_someone_help_me_understand_the_difference_in/
dr-uuid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlez7u
false
null
t3_1mlez7u
/r/LocalLLaMA/comments/1mlez7u/can_someone_help_me_understand_the_difference_in/
false
false
self
0
null
Miro ODR: Another Deep Research Agent model just went open source
2
Hey r/LocalLLaMA! 👋 We just dropped MiroMind Open Deep Research v0.1 - and we mean ACTUALLY open this time So we've been grinding on this deep research project for months, and we're finally ready to share what we've built. Unlike the usual "open source*" (*terms and conditions apply) releases, we're giving you liter...
2025-08-09T03:02:30
https://v.redd.it/xcdedgt7twhf1
MiroMindAI
v.redd.it
1970-01-01T00:00:00
0
{}
1mlevwy
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/xcdedgt7twhf1/DASHPlaylist.mpd?a=1757300563%2CNjRmODc1ODI4NzkxNzU0MWEyODdkYjY1MzdlYWU3OTYwZDEyOGI1OTUzMjNlN2U1N2QwZDAxOGU1ZjJhMTQ4Yg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/xcdedgt7twhf1/DASH_480.mp4?source=fallback', 'ha...
t3_1mlevwy
/r/LocalLLaMA/comments/1mlevwy/miro_odr_another_deep_research_agent_model_just/
false
false
https://external-preview…00d031f0d80802ae
2
{'enabled': False, 'images': [{'id': 'M3N0cWVodDd0d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/M3N0cWVodDd0d2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM.png?width=108&crop=smart&format=pjpg&auto=webp&s=59d7e6debcf20dc23ad261ab438f45e7eed2...
Hey! Sorry if stupid question, but is localLLaMA an offline AI you can run on your computer?
0
If so, would it be a good choice for someone who values privacy and wants an offline AI to run on MacBook?
2025-08-09T02:56:53
https://www.reddit.com/r/LocalLLaMA/comments/1mlerxv/hey_sorry_if_stupid_question_but_is_localllama_an/
SeeItSayItKnowIt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlerxv
false
null
t3_1mlerxv
/r/LocalLLaMA/comments/1mlerxv/hey_sorry_if_stupid_question_but_is_localllama_an/
false
false
self
0
null
Miro ODR: Another Deep Research Agent model just went open source
1
[removed]
2025-08-09T02:56:02
https://v.redd.it/d6zcd0o5swhf1
MiroMindAI
v.redd.it
1970-01-01T00:00:00
0
{}
1mlerb2
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/d6zcd0o5swhf1/DASHPlaylist.mpd?a=1757300177%2CMmI3NGMyNDQwZTc2Mzk2MzYzNTliZDdkMTA1MGFjNmQ3MDdmN2IzZTMwNzdiMWU0MjljYmQ2MjU4NGJkZGY5NQ%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/d6zcd0o5swhf1/DASH_480.mp4?source=fallback', 'ha...
t3_1mlerb2
/r/LocalLLaMA/comments/1mlerb2/miro_odr_another_deep_research_agent_model_just/
false
false
https://external-preview…abe57ca81a92d100
1
{'enabled': False, 'images': [{'id': 'dDU0cnVxbjVzd2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/dDU0cnVxbjVzd2hmMeEv19zbIXINheGDldYeDNIrs4cqZFopneM09cd1xdoM.png?width=108&crop=smart&format=pjpg&auto=webp&s=9cf36f2661c352f4e680c07690481a6da9d0...
How to learn the AI agent based on a practical case
0
How do we understand AI agents from a code perspective? What exactly is the paradigm of an agent, and where do the challenges lie? An AI agent includes planning, orchestration, memory, and generation. But this is just a conceptual understanding - how can we implement it in specific business scenarios?
2025-08-09T02:49:32
https://www.reddit.com/r/LocalLLaMA/comments/1mlemlq/how_to_learn_the_ai_agent_based_on_a_practical/
tangbasky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlemlq
false
null
t3_1mlemlq
/r/LocalLLaMA/comments/1mlemlq/how_to_learn_the_ai_agent_based_on_a_practical/
false
false
self
0
null
Miro ODR: Another Deep Research Agent model just went open source
1
[removed]
2025-08-09T02:43:32
[deleted]
1970-01-01T00:00:00
0
{}
1mleid4
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/zfz17cjymwhf1/DASHPlaylist.mpd?a=1757299428%2CMTNlYWI2NzkzNjgzNjg1YjBiYjIzODRmYTY0YmRkNTliYWY4N2MxN2VjZDE0ZjA0MDE1OWM2MjNkOTU1YmZiNg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/zfz17cjymwhf1/DASH_480.mp4?source=fallback', 'ha...
t3_1mleid4
/r/LocalLLaMA/comments/1mleid4/miro_odr_another_deep_research_agent_model_just/
false
false
default
1
null
Org wide deployment
0
For anyone deploying chat experiences for internal use (e.g. RAG on knowledge bases, etc.): how do you do it? Subscribe to a vendor? Multi-tenant deployments / shared infrastructure? Single tenant, vendor managed? Single tenant, customer managed? Fully OSS, self managed? What are the trade-offs you've run into in c...
2025-08-09T02:35:43
https://www.reddit.com/r/LocalLLaMA/comments/1mlecrp/org_wide_deployment/
2berghains
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlecrp
false
null
t3_1mlecrp
/r/LocalLLaMA/comments/1mlecrp/org_wide_deployment/
false
false
self
0
null
Your AI is a mirror.
0
Your GPT is a reflection of you. If the majority of your conversations are for self-help, don't expect it to become a master at logic in your next prompt. If you use it for waifu smut, don't expect it to magically replicate the good results others have in platform engineering. You folks need to understand that there are...
2025-08-09T02:03:14
https://i.redd.it/o3m6m6g3fwhf1.png
LocoMod
i.redd.it
1970-01-01T00:00:00
0
{}
1mldowu
false
null
t3_1mldowu
/r/LocalLLaMA/comments/1mldowu/your_ai_is_a_mirror/
false
false
default
0
{'enabled': True, 'images': [{'id': 'o3m6m6g3fwhf1', 'resolutions': [{'height': 24, 'url': 'https://preview.redd.it/o3m6m6g3fwhf1.png?width=108&crop=smart&auto=webp&s=36dc6773eb17c3f0a840c01cc66af77731f3e045', 'width': 108}, {'height': 48, 'url': 'https://preview.redd.it/o3m6m6g3fwhf1.png?width=216&crop=smart&auto=webp...
🏠 Built 17 Production-Ready Text Classifiers You Can Run Locally (90-120ms inference, 83% cheaper than APIs)
0
[removed]
2025-08-09T02:00:47
https://www.reddit.com/r/LocalLLaMA/comments/1mldn3x/built_17_productionready_text_classifiers_you_can/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mldn3x
false
null
t3_1mldn3x
/r/LocalLLaMA/comments/1mldn3x/built_17_productionready_text_classifiers_you_can/
false
false
self
0
null
I asked ChatGPT-5 an easy question. And it fails.
4
What else could you expect from a model based only on text? It's still just a stochastic parrot. If we want progress, we need: 1) reinforcement learning, and 2) interaction with the real world through sensory input (a robot body, or at least virtual reality).
2025-08-09T01:48:42
https://i.redd.it/lcznv6qofwhf1.jpeg
Economy_Apple_4617
i.redd.it
1970-01-01T00:00:00
0
{}
1mlde7i
false
null
t3_1mlde7i
/r/LocalLLaMA/comments/1mlde7i/i_asked_chatgpt5_an_easy_question_and_it_fails/
false
false
default
4
{'enabled': True, 'images': [{'id': 'lcznv6qofwhf1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/lcznv6qofwhf1.jpeg?width=108&crop=smart&auto=webp&s=baceff7704270eb57c16cf1520d304a247ac1efc', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/lcznv6qofwhf1.jpeg?width=216&crop=smart&auto=...
Suggest me something for Rust Programs
0
Hi, I am looking for a GPT that will help me build a bot in the Rust language, as well as TypeScript if possible. I am trying the paid version of ChatGPT but it's not giving me accurate results (it mostly uses old libraries and methods that don't match the current ones, not picking the latest classes from libraries, and not ...
2025-08-09T01:48:08
https://www.reddit.com/r/LocalLLaMA/comments/1mlddsg/suggest_me_something_for_rust_programs/
CheapCaptain
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlddsg
false
null
t3_1mlddsg
/r/LocalLLaMA/comments/1mlddsg/suggest_me_something_for_rust_programs/
false
false
self
0
null
Model size equivalent of free LLMs?
0
When you use the free or non paid online version of ChatGPT, Gemini, Deepseek… and so on… How many Billions of parameters and context size would a local LLM need to have to be comparable to the free online models?
2025-08-09T01:08:41
https://www.reddit.com/r/LocalLLaMA/comments/1mlcknj/model_size_equivalent_of_free_llms/
nemuro87
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlcknj
false
null
t3_1mlcknj
/r/LocalLLaMA/comments/1mlcknj/model_size_equivalent_of_free_llms/
false
false
self
0
null
Beginner needs a good, easy coding LLM guided platform. Yes, but...What's the best ?
0
Hi, all. I'm a beginner at Python coding, but I want to both improve my skills and use a helpful, guided, easy, colorful interface for coding, using my OpenRouter API as the LLM helper, on Windows. What are the best approaches for a guided coding framework? I tried to use ChatGPT itself for that, but there are limitat...
2025-08-09T00:55:21
https://i.redd.it/tvzypoaq6whf1.jpeg
Current-Stop7806
i.redd.it
1970-01-01T00:00:00
0
{}
1mlcamd
false
null
t3_1mlcamd
/r/LocalLLaMA/comments/1mlcamd/beginner_needs_a_good_easy_coding_llm_guided/
false
false
https://b.thumbs.redditm…S6boGjxxAcHw.jpg
0
{'enabled': True, 'images': [{'id': '-5aQNIsctz_OFjGz_aetKA8d6ceJ3814mW9dMEswOq8', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/tvzypoaq6whf1.jpeg?width=108&crop=smart&auto=webp&s=8d491a6ef095b347699caa117ece117b443c83d8', 'width': 108}, {'height': 143, 'url': 'https://preview.redd.it/tvzypoaq6whf1.jp...
Beginner needs a good, easy coding LLM guided platform. What's the best ?
1
Hi, all. I'm a beginner at Python coding, but I want to both improve my skills and use a helpful, easy interface for coding, using my OpenRouter API as the LLM helper, on Windows. What are the best approaches for a guided coding framework? Would it be VS Code with some API extension? I need something easy to install...
2025-08-09T00:37:29
https://www.reddit.com/r/LocalLLaMA/comments/1mlbx6i/beginner_needs_a_good_easy_coding_llm_guided/
Current-Stop7806
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlbx6i
false
null
t3_1mlbx6i
/r/LocalLLaMA/comments/1mlbx6i/beginner_needs_a_good_easy_coding_llm_guided/
false
false
self
1
null
Let's see which chatbot can calculate token generation speed
0
The prompt is rather ambiguous but the model should have enough information to figure it out. \> calculate speed of token generation 6504 24828 152s |Model|Status|Calculation| |:-|:-|:-| |ChatGPT 5 no think|✅|206.00| |Claude Opus 4.1|❌|120.55| |Claude Sonnet 4|❌|42.80| |Deepseek R1 (together chat)|❌|163.34| |Deepseek...
2025-08-09T00:32:49
https://www.reddit.com/r/LocalLLaMA/comments/1mlbtni/lets_see_which_chatbot_can_calculate_token/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlbtni
false
null
t3_1mlbtni
/r/LocalLLaMA/comments/1mlbtni/lets_see_which_chatbot_can_calculate_token/
false
false
self
0
null
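The disagreements in the table above come down to which reading of the three numbers a model picks. A minimal sketch (the variable names are assumptions; the prompt never labels 6504, 24828, or 152):

```python
# Candidate readings of "calculate speed of token generation 6504 24828 152s".
prompt_toks, gen_toks, seconds = 6504, 24828, 152

combined = (prompt_toks + gen_toks) / seconds    # ~206.13 tok/s
generated = gen_toks / seconds                   # ~163.34 tok/s
difference = (gen_toks - prompt_toks) / seconds  # ~120.55 tok/s

print(f"{combined:.2f} {generated:.2f} {difference:.2f}")
```

The answers marked ✅ correspond to the first reading (~206 tok/s); the ❌ entries each match one of the other two.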
Let's see which chatbot can calculate token generation speed
1
The prompt is rather ambiguous but the model should have enough information to figure it out. \> calculate speed of token generation 6504 24828 152s || || |Model|Status|Calculation| |ChatGPT 5 no think|✅|206| |Grok 3|❌|42| |Grok 3 think|❌|120.55| |Gemini 2.5 Flash|❌|163.34| |Gemini 2.5 Pro|✅|206.13| |Mistral & Mistra...
2025-08-09T00:29:37
https://www.reddit.com/r/LocalLLaMA/comments/1mlbr6c/lets_see_which_chatbot_can_calculate_token/
robertpiosik
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlbr6c
false
null
t3_1mlbr6c
/r/LocalLLaMA/comments/1mlbr6c/lets_see_which_chatbot_can_calculate_token/
false
false
self
1
null
AMD Strix Halo & Windows questions.
2
Hi, I have recently started looking into local LLMs to test and get an understanding of using them for video and audio analysis. I have also just received my new laptop, which is an AMD Strix Halo 395+ with 128GB of memory (32GB allocated to VRAM) running Windows 11. I have tried WSL but keep running into issues with torch ...
2025-08-08T23:40:54
https://www.reddit.com/r/LocalLLaMA/comments/1mlaox6/amd_srix_halo_windows_questions/
techie_msp
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mlaox6
false
null
t3_1mlaox6
/r/LocalLLaMA/comments/1mlaox6/amd_srix_halo_windows_questions/
false
false
self
2
null
I too can calculate Bs
0
I picked a different berry. Its self-correction made me chuckle.
2025-08-08T23:25:57
https://www.reddit.com/gallery/1mlacy4
CantankerousOrder
reddit.com
1970-01-01T00:00:00
0
{}
1mlacy4
false
null
t3_1mlacy4
/r/LocalLLaMA/comments/1mlacy4/i_too_can_calculate_bs/
false
false
https://b.thumbs.redditm…kbxA_0ntfuDw.jpg
0
null
Local LLM Deployment for 50 Users
18
Hey all, looking for advice on scaling local LLMs to support 50 concurrent users. The decision to run fully local comes down to using the LLM on classified data. Truly open to any and all advice, novice to expert level, from those with experience doing such a task. A few things: 1. I have the funding to purchase...
2025-08-08T23:20:00
https://www.reddit.com/r/LocalLLaMA/comments/1mla86p/local_llm_deployment_for_50_users/
NoobLLMDev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mla86p
false
null
t3_1mla86p
/r/LocalLLaMA/comments/1mla86p/local_llm_deployment_for_50_users/
false
false
self
18
null
We need to spread awareness
1
[deleted]
2025-08-08T23:19:18
[deleted]
1970-01-01T00:00:00
0
{}
1mla7ky
false
null
t3_1mla7ky
/r/LocalLLaMA/comments/1mla7ky/we_need_to_spread_awareness/
false
false
default
1
null
Is there a Local LLM that I can run off LM Studio that has web search capabilities?
1
Getting sick of ChatGPT's bullshit and want to try running things myself. I've dabbled with LM Studio before, but only for localized tasks. If I want to do something like take a list of names and have the AI run a web search for me and summarize the results, is that something that could be achieved with LM Studio? Is there a...
2025-08-08T23:16:37
https://www.reddit.com/r/LocalLLaMA/comments/1mla5c5/is_there_a_local_llm_that_i_can_run_off_lm_studio/
cuoreesitante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1mla5c5
false
null
t3_1mla5c5
/r/LocalLLaMA/comments/1mla5c5/is_there_a_local_llm_that_i_can_run_off_lm_studio/
false
false
self
1
null
Best TTS for long form audiobooks?
6
Sorry to ask yet another question so recently after my last one, but the other thing I'm looking for is a good text to speech model that can relatively quickly convert entire full-size books to audiobooks. Kokoro's speed and sound was perfect for me, but I kept running into a problem where it would drop a word here or ...
2025-08-08T22:37:56
https://www.reddit.com/r/LocalLLaMA/comments/1ml99nf/best_tts_for_long_form_audiobooks/
annakhouri2150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml99nf
false
null
t3_1ml99nf
/r/LocalLLaMA/comments/1ml99nf/best_tts_for_long_form_audiobooks/
false
false
self
6
null
How do you run LLMs locally?
0
I'm new to this, and checking [gpt-oss's README](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#inference-examples), I see the following ways of running an LLM locally: 1. Run a script that uses the HuggingFace Transformers library and the built-in pipelines. 2. Use vllm and run `vllm serve <model nam...
2025-08-08T22:18:33
https://www.reddit.com/r/LocalLLaMA/comments/1ml8tdp/how_do_you_run_llms_locally/
ryanguo99
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml8tdp
false
null
t3_1ml8tdp
/r/LocalLLaMA/comments/1ml8tdp/how_do_you_run_llms_locally/
false
false
self
0
null
Can anyone recommend me a "small" model that is good with WebGL shaders?
5
Made this local LLM powered music visualizer that lets me generate effects on the fly. It works great with new Qwen 30b coder model and the new GPT open source model, but noticed quite a difference in quality, so I am curious if there are better models out there that will run on my system? Got 16gigs of vram and 32gig...
2025-08-08T21:55:02
https://www.reddit.com/r/LocalLLaMA/comments/1ml89g5/can_anyone_recommend_me_a_small_model_that_is/
Flaky_Comedian2012
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml89g5
false
null
t3_1ml89g5
/r/LocalLLaMA/comments/1ml89g5/can_anyone_recommend_me_a_small_model_that_is/
false
false
self
5
null
Vision LLM Image Text Issue
1
Hey y'all, I'm having some issues using LLM vision models regarding text in images. They constantly repeat words inaccurately. For instance - I have fed an image with the phrase "DO YOU EVEN READ?!" through Qwen 2.5 VL 32b, Gemma 3 27b, etc and they will put out "DO YOU EVEN EVEN EVEN READ?!" or describe a phrase as bei...
2025-08-08T21:48:52
https://www.reddit.com/r/LocalLLaMA/comments/1ml849p/vision_llm_image_text_issue/
DrRoughFingers
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml849p
false
null
t3_1ml849p
/r/LocalLLaMA/comments/1ml849p/vision_llm_image_text_issue/
false
false
self
1
null
LEANN – Local RAG with 97% smaller index and Claude Code–compatible semantic search
104
**We’re building** [**LEANN**](https://github.com/yichuan-w/LEANN) **at Berkeley Sky Lab — a local vector index for RAG that’s:** * 🔒 Privacy-first * 📦 97% smaller * 🧠 Fully compatible with **Claude Code**, **Ollama**, and **GPT-OSS** **Run semantic search on your laptop — fast, lightweight, and cloud-free.** # �...
2025-08-08T21:48:46
https://www.reddit.com/r/LocalLLaMA/comments/1ml846a/leann_local_rag_with_97_smaller_index_and_claude/
Lanky-District9096
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml846a
false
null
t3_1ml846a
/r/LocalLLaMA/comments/1ml846a/leann_local_rag_with_97_smaller_index_and_claude/
false
false
self
104
{'enabled': False, 'images': [{'id': 'zA-x9-Xnbobnzb8wWKhq3cY7WsFTeKF8wG3641BfpOk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zA-x9-Xnbobnzb8wWKhq3cY7WsFTeKF8wG3641BfpOk.png?width=108&crop=smart&auto=webp&s=479bb677e5a2753e733827f854bffacf0980fa89', 'width': 108}, {'height': 108, 'url': 'h...
ChatGPT 5 destroyed its AI chat llm service
0
Is it just me, or did OpenAI cripple its AI? I used to enjoy the selection of models for my daily tasks, and particularly trusted the 3.2 model as a fallback. Now it's gone. The latest version is, to me, complete garbage.
2025-08-08T21:36:45
https://www.reddit.com/r/LocalLLaMA/comments/1ml7tt2/chatgpt_5_destroyed_its_ai_chat_llm_service/
pjconnect
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml7tt2
false
null
t3_1ml7tt2
/r/LocalLLaMA/comments/1ml7tt2/chatgpt_5_destroyed_its_ai_chat_llm_service/
false
false
self
0
null
Are there open source frontier models that can even be run locally or via a cloud service? What does "local" mean for you?
0
I stopped using ChatGPT a while ago; 4o was truly awful, to the point where I forgot I hadn't cancelled and was pleasantly surprised that o3 was... pretty good for science/code a week ago. But obviously, that's gone. I use Gemini 2.5 and Claude, but obviously the same thing can happen. I feel I am reasonably computer lit...
2025-08-08T21:34:13
https://www.reddit.com/r/LocalLLaMA/comments/1ml7rqe/are_there_open_source_frontier_models_that_can/
NewspaperPossible210
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml7rqe
false
null
t3_1ml7rqe
/r/LocalLLaMA/comments/1ml7rqe/are_there_open_source_frontier_models_that_can/
false
false
self
0
null
The LLM world is an illusion of progress
289
Here's my previous rant in which I was saying that LLMs were trapped in monolingualism and the assistant paradigm: [\[Mini Rant\] Are LLMs trapped in English and the assistant paradigms?](https://www.reddit.com/r/LocalLLaMA/comments/1hyyrml/mini_rant_are_llms_trapped_in_english_and_the/) To update this: I feel like th...
2025-08-08T21:11:49
https://www.reddit.com/r/LocalLLaMA/comments/1ml77rq/the_llm_world_is_an_illusion_of_progress/
Worth-Product-5545
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml77rq
false
null
t3_1ml77rq
/r/LocalLLaMA/comments/1ml77rq/the_llm_world_is_an_illusion_of_progress/
false
false
self
289
null
Llama-cpp-python just won't run on CUDA
0
I have recently created a local llama.cpp build from GitHub, using HF's SmolLM2 1.7b in quantized format to test CPU-based inference with llama-cpp-python, but now I'd like it to generate a large-ish data set for SBERT finetuning and I really need to do CUDA-based inference if I want to see results in my lifetime. BUT, ...
2025-08-08T21:10:50
https://www.reddit.com/r/LocalLLaMA/comments/1ml76wq/llamacpppython_just_wont_run_on_cuda/
RDA92
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml76wq
false
null
t3_1ml76wq
/r/LocalLLaMA/comments/1ml76wq/llamacpppython_just_wont_run_on_cuda/
false
false
self
0
null
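For reference, the stock PyPI wheel of llama-cpp-python is CPU-only; per the project's install docs the usual fix is to force a source rebuild with the CUDA backend enabled (older releases spelled the flag `-DLLAMA_CUBLAS=on`):

```shell
# Rebuild llama-cpp-python from source with CUDA support instead of the CPU wheel.
CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python --force-reinstall --no-cache-dir
```

Then pass `n_gpu_layers=-1` when constructing `Llama(...)` and watch the load log for CUDA buffer allocations to confirm the GPU is actually used.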
Is this "safety"?
3
What's it going to take for AI companies to let up on "safety"? Do we need the government to enact a safe harbor law to absolve them of liability if a model is used for something illegal? If we don't do something about this now, this is what the future might look like: User: >Hypothetical: In a very remote rura...
2025-08-08T21:09:05
https://www.reddit.com/r/LocalLLaMA/comments/1ml75cs/is_this_safety/
randomqhacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml75cs
false
null
t3_1ml75cs
/r/LocalLLaMA/comments/1ml75cs/is_this_safety/
false
false
self
3
null
Is a 2TB DDR5 RAM consumer grade setup worth it or M3 Ultra is better value? Discussion and specs comparison thread!
38
I am looking for a medium "budget" professional setup for LLMs. LPCAMM2 seems to be a distant dream still. LPDDR5X is mainly soldered on and capped at 128GB (Ryzen AI 395) with a bandwidth of 256GB/s (theoretical). The only alternative would be an M3 Ultra (512GB at 800 GB/s), but that's also soldered onto the chip. ...
2025-08-08T20:58:40
https://www.reddit.com/r/LocalLLaMA/comments/1ml6vrs/is_a_2tb_ddr5_ram_consumer_grade_setup_worth_it/
moko990
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml6vrs
false
null
t3_1ml6vrs
/r/LocalLLaMA/comments/1ml6vrs/is_a_2tb_ddr5_ram_consumer_grade_setup_worth_it/
false
false
self
38
{'enabled': False, 'images': [{'id': '2e3AuY8x7S3wGc-j0EGBqXfDpeOweUUCIIHjS56EYdo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/2e3AuY8x7S3wGc-j0EGBqXfDpeOweUUCIIHjS56EYdo.jpeg?width=108&crop=smart&auto=webp&s=f31b4ec11ea3d4f2082f5e15e7a063b6cd44b028', 'width': 108}, {'height': 121, 'url': '...
Shoutout to the community from the Lemonade team!
19
It's been a great 48 hours with this community, so I wanted to give a shoutout! I was a longtime lurker here before I finally landed a job that would let me participate full time, and I'm really happy it has worked out. For those who haven't heard of us, Lemonade provides an OpenAI API server that helps you run state-...
2025-08-08T20:54:26
https://www.reddit.com/r/LocalLLaMA/comments/1ml6rx0/shoutout_to_the_community_from_the_lemonade_team/
jfowers_amd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml6rx0
false
null
t3_1ml6rx0
/r/LocalLLaMA/comments/1ml6rx0/shoutout_to_the_community_from_the_lemonade_team/
false
false
self
19
{'enabled': False, 'images': [{'id': '_NaYiz-AV9DRlAGr5nyUyXM9K2MOuDMS54CWh3AR2g4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_NaYiz-AV9DRlAGr5nyUyXM9K2MOuDMS54CWh3AR2g4.png?width=108&crop=smart&auto=webp&s=09fa630dc72404caf075edc615c21ebb704b0bb2', 'width': 108}, {'height': 108, 'url': 'h...
Apple Foundation model API
6
Claude and I have created an API that exposes the Apple Intelligence foundation model to use with the OpenAI API standard on a specified port. You can use the on-device model with open-webui. It's quite fast actually. My project is located here: [https://github.com/scouzi1966/maclocal-api](https://github.com/scouzi1966...
2025-08-08T20:49:58
https://www.reddit.com/r/LocalLLaMA/comments/1ml6ntq/apple_foundation_model_api/
scousi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml6ntq
false
null
t3_1ml6ntq
/r/LocalLLaMA/comments/1ml6ntq/apple_foundation_model_api/
false
false
self
6
{'enabled': False, 'images': [{'id': 'jF0Dj887qbT4M6MGD_8Hq_LY3vB84dJPSum5dF_PVyg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jF0Dj887qbT4M6MGD_8Hq_LY3vB84dJPSum5dF_PVyg.png?width=108&crop=smart&auto=webp&s=eb10a5d6b29bea667b2f01377c4dc6f8c8b29f1d', 'width': 108}, {'height': 108, 'url': 'h...
Idea for a truly community backed language model
7
Hey all, I've been thinking about what it would take for a dedicated community to train its own AI model, truly open source. Foundation: First, we'd need to agree on the fundamentals. This means hashing out the core philosophy of the model, the technical architecture, safety protocols, and the data we train it on. we...
2025-08-08T20:38:50
https://www.reddit.com/r/LocalLLaMA/comments/1ml6e59/idea_for_a_truly_community_backed_language_model/
ex_why_zed_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml6e59
false
null
t3_1ml6e59
/r/LocalLLaMA/comments/1ml6e59/idea_for_a_truly_community_backed_language_model/
false
false
self
7
null
Is there some nice beginners guide that you can recommend?
4
I want to see if I like this local approach. I have 32GB on my MacBook to start; much slower than a big Nvidia setup, but it does not have to be fast for now. Just to try it out, anything you can recommend? I would like some sort of coding helper, but other ideas are much appreciated. I do not know where to start.
2025-08-08T20:28:25
https://www.reddit.com/r/LocalLLaMA/comments/1ml64so/is_there_some_nice_beginners_guide_that_you_can/
BubblyLion7072
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml64so
false
null
t3_1ml64so
/r/LocalLLaMA/comments/1ml64so/is_there_some_nice_beginners_guide_that_you_can/
false
false
self
4
null
GPT 5 or Opus 4/Opus 4.1
0
Which one do you prefer and why?
2025-08-08T20:18:23
https://www.reddit.com/r/LocalLLaMA/comments/1ml5vr5/gpt_5_or_opus_4opus_41/
Financial_Time_3707
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5vr5
false
null
t3_1ml5vr5
/r/LocalLLaMA/comments/1ml5vr5/gpt_5_or_opus_4opus_41/
false
false
self
0
null
Are heavily quantized models better at creative writing?
8
Just been wondering: since quantizing negatively affects perplexity, I was wondering if models actually get better at creative writing when heavily quantized, such as with 1-bit/2-bit quants. Any observations?
2025-08-08T20:06:26
https://www.reddit.com/r/LocalLLaMA/comments/1ml5ksd/are_heavily_quantized_models_better_at_creative/
Thireus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5ksd
false
null
t3_1ml5ksd
/r/LocalLLaMA/comments/1ml5ksd/are_heavily_quantized_models_better_at_creative/
false
false
self
8
null
open source won
0
After the disappointing launch of GPT-5 and the new restrictions it imposed, many people will surely opt for open-source software (OSS). Platforms and models such as Qwen3 currently offer a solid and versatile alternative, effectively covering needs in programming and development, as well ...
2025-08-08T20:04:29
https://www.reddit.com/r/LocalLLaMA/comments/1ml5j1m/open_source_ganó/
Illustrious-Swim9663
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5j1m
false
null
t3_1ml5j1m
/r/LocalLLaMA/comments/1ml5j1m/open_source_ganó/
false
false
https://b.thumbs.redditm…PomQFas0VKpI.jpg
0
null
What's the best chat web UI besides Open WebUI?
4
I found OWUI too slow and fragile, personally. I'm looking for a simple chat interface that I can serve (to access from various other devices on my WAN) that lets me connect to local LLMs and OpenRouter, and has MCP support, and that's basically it. It'd help if it looked pretty :)
2025-08-08T20:01:24
https://www.reddit.com/r/LocalLLaMA/comments/1ml5g8t/whats_the_best_chat_web_ui_besides_open_webui/
annakhouri2150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5g8t
false
null
t3_1ml5g8t
/r/LocalLLaMA/comments/1ml5g8t/whats_the_best_chat_web_ui_besides_open_webui/
false
false
self
4
null
Introducing Pullforge OS – A Browser-Based AI-Powered Operating System (Open for Contributors)
0
https://preview.redd.it/…f Pullforge OS)*
2025-08-08T19:55:20
https://www.reddit.com/r/LocalLLaMA/comments/1ml5aex/introducing_pullforge_os_a_browserbased_aipowered/
Prestigious_Skin6507
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5aex
false
null
t3_1ml5aex
/r/LocalLLaMA/comments/1ml5aex/introducing_pullforge_os_a_browserbased_aipowered/
false
false
https://a.thumbs.redditm…yM8gnXE2McM0.jpg
0
{'enabled': False, 'images': [{'id': 'qVY3vjd7InW1It_qdZHd73XTtU8xwcTma7daVGHE-EQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qVY3vjd7InW1It_qdZHd73XTtU8xwcTma7daVGHE-EQ.png?width=108&crop=smart&auto=webp&s=b134147b5eb8c4dda7d1b2bdf29e6fbcdbaa4d11', 'width': 108}, {'height': 108, 'url': 'h...
How to get more credits? Do I need to pro subscription?
1
[removed]
2025-08-08T19:54:58
https://i.redd.it/9i5r7glyouhf1.jpeg
Alarmed-Ad-436
i.redd.it
1970-01-01T00:00:00
0
{}
1ml5a3d
false
null
t3_1ml5a3d
/r/LocalLLaMA/comments/1ml5a3d/how_to_get_more_credits_do_i_need_to_pro/
false
false
default
1
{'enabled': True, 'images': [{'id': '9i5r7glyouhf1', 'resolutions': [{'height': 96, 'url': 'https://preview.redd.it/9i5r7glyouhf1.jpeg?width=108&crop=smart&auto=webp&s=65de8fd3c9686d47d2f675280781dcc7b5fdddbd', 'width': 108}, {'height': 193, 'url': 'https://preview.redd.it/9i5r7glyouhf1.jpeg?width=216&crop=smart&auto=w...
8.8 - 8.11 = -0.3
34
I asked a couple of LLMs - both closed (Claude Sonnet 4, GPT-5 and Gemini 2.5 Pro) and open source (GLM 4.5, Qwen3-235B-A22B-2507) - a simple question: "what is 8.8 - 8.11". Only Qwen and GLM gave the correct answer. GLM's CoT was the best. [Claude Sonnet 4](https://preview.redd.it/hvadjsxbnuhf1.png?width=1422&for...
2025-08-08T19:47:08
https://www.reddit.com/r/LocalLLaMA/comments/1ml52ys/88_811_03/
PhysicsPast8286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml52ys
false
null
t3_1ml52ys
/r/LocalLLaMA/comments/1ml52ys/88_811_03/
false
false
https://b.thumbs.redditm…SxL3r-z5AdTM.jpg
34
null
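For comparison, the arithmetic itself is trivial in code; the -0.31 failures look like version-number pattern matching (treating ".11" as larger than ".8"). A quick sanity check:

```python
from decimal import Decimal

# Exact base-10 arithmetic: 8.8 - 8.11 is 0.69, not -0.31.
print(Decimal("8.8") - Decimal("8.11"))  # 0.69

# Even in binary floating point the sign comes out right:
print(8.8 - 8.11 > 0)  # True: 8.8 > 8.11, despite "11" > "8" as integers
```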
Need help fully fine-tuning smaller LLMs (no LoRA) — plus making my own small models
0
Hey everyone, I’m trying to figure out how to fully fine-tune smaller open-source language models (not LoRA/adapters) and maybe even create my own small models from scratch — not my main goal since it’s resource-heavy, but I’d like to understand the process. My setup: RTX 4070 Super (12 GB VRAM) 16 GB RAM Single G...
2025-08-08T19:46:22
https://www.reddit.com/r/LocalLLaMA/comments/1ml529t/need_help_fully_finetuning_smaller_llms_no_lora/
Complex_Height_1480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml529t
false
null
t3_1ml529t
/r/LocalLLaMA/comments/1ml529t/need_help_fully_finetuning_smaller_llms_no_lora/
false
false
self
0
null
gpt-oss Bug Fixes + Fine-tuning now in Unsloth
145
Hey guys! You can now [**fine-tune gpt-oss-20b for free** on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/gpt-oss-(20B)-Fine-tuning.ipynb) with [Unsloth](https://github.com/unslothai/unsloth). All other training methods/libraries require a minimum of 40GB VRAM, however we managed to ...
2025-08-08T19:43:52
https://www.reddit.com/r/LocalLLaMA/comments/1ml5032/gptoss_bug_fixes_finetuning_now_in_unsloth/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml5032
false
null
t3_1ml5032
/r/LocalLLaMA/comments/1ml5032/gptoss_bug_fixes_finetuning_now_in_unsloth/
false
false
self
145
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': '...
Coding LLM Running Fully On-Device – XformCoder App
0
What is XformCoder? A lightweight, instruction-tuned LLM for code assistance. Runs locally on your phone—no internet or server calls. Helps with code generation, explanations, and debugging. Ideal for mobile developers, students, and privacy-conscious users. Why Offline? Running the model fully on-device ensures: Priva...
2025-08-08T19:42:15
https://v.redd.it/qpvtz9mtmuhf1
XformAI-India
v.redd.it
1970-01-01T00:00:00
0
{}
1ml4ymb
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qpvtz9mtmuhf1/DASHPlaylist.mpd?a=1757274153%2CNGI1YTIyYWY5MDUzMDI0OWQxOTA2MmM0ODgzN2ViMmQ4ODY2MjcyZWY1OTFkZTIzMjc5N2YxZTgzOGZkY2MzMw%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/qpvtz9mtmuhf1/DASH_1080.mp4?source=fallback', 'h...
t3_1ml4ymb
/r/LocalLLaMA/comments/1ml4ymb/coding_llm_running_fully_ondevice_xformcoder_app/
false
false
https://external-preview…443ed54456c98492
0
{'enabled': False, 'images': [{'id': 'Y3AxNjgyeHFtdWhmMXYj0_eDZTP41FcYof6ymI_T4dkUMsv1qZ6etgJ0dTLo', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Y3AxNjgyeHFtdWhmMXYj0_eDZTP41FcYof6ymI_T4dkUMsv1qZ6etgJ0dTLo.png?width=108&crop=smart&format=pjpg&auto=webp&s=f0b36812e3c1afe3480b3d3be91f7d15994b...
InstaVM - Secure Code Execution Platform
1
2025-08-08T19:37:03
https://instavm.io/blog/building-my-offline-ai-workspace
ChiliPepperHott
instavm.io
1970-01-01T00:00:00
0
{}
1ml4txd
false
null
t3_1ml4txd
/r/LocalLLaMA/comments/1ml4txd/instavm_secure_code_execution_platform/
false
false
default
1
null
GPT-5 on API is the best at long context
0
2025-08-08T19:34:13
https://i.redd.it/dib9i6udluhf1.png
fictionlive
i.redd.it
1970-01-01T00:00:00
0
{}
1ml4rca
false
null
t3_1ml4rca
/r/LocalLLaMA/comments/1ml4rca/gpt5_on_api_is_the_best_at_long_context/
false
false
default
0
{'enabled': True, 'images': [{'id': 'dib9i6udluhf1', 'resolutions': [{'height': 165, 'url': 'https://preview.redd.it/dib9i6udluhf1.png?width=108&crop=smart&auto=webp&s=c4406c7b1df42eaf4cc23992bd6d1aa0f07272cd', 'width': 108}, {'height': 330, 'url': 'https://preview.redd.it/dib9i6udluhf1.png?width=216&crop=smart&auto=we...
Introducing XformCoder -Offline AI Coding Assistant
0
Hi everyone, We’ve just released XformCoder, an Android app powered by a fine-tuned local coding language model (LLM) that runs entirely offline. What is XformCoder? A code generation and assistance tool powered by a compact, fine-tuned LLM. Designed for developers and learners who want to code, debug, and explore sn...
2025-08-08T19:30:09
https://play.google.com/store/apps/details?id=com.xformai.xformcoder
XformAI-India
play.google.com
1970-01-01T00:00:00
0
{}
1ml4nj2
false
null
t3_1ml4nj2
/r/LocalLLaMA/comments/1ml4nj2/introducing_xformcoder_offline_ai_coding_assistant/
false
false
https://external-preview…745d5986ddb2cb67
0
{'enabled': False, 'images': [{'id': '8nhJHuWFMnj3ggBFqhXD6m1GN-gyESNNzxhQ8pE4C-g', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/8nhJHuWFMnj3ggBFqhXD6m1GN-gyESNNzxhQ8pE4C-g.png?width=108&crop=smart&auto=webp&s=3d7069bb107e1bebcd2ce749fc536188691f679a', 'width': 108}, {'height': 216, 'url': '...
Qwen 30B A3B on RTX 3050 ( 6GB Vram ) runs at 12tps, but loop at the end...
4
This model is fantastic. I thought it was impossible to run it on my poor laptop, a Dell G15 5530 gaming laptop with an RTX 3050 ( 6GB VRAM ) and 16GB RAM, in LM Studio. I offload 13 layers to the GPU and disable the LM Studio guardrails. The model runs fluidly ( around 12 tps ). But there's a problem: It begins to write fine, ...
2025-08-08T19:27:06
https://www.reddit.com/r/LocalLLaMA/comments/1ml4kpv/qwen_30b_a3b_on_rtx_3050_6gb_vram_runs_at_12tps/
Current-Stop7806
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml4kpv
false
null
t3_1ml4kpv
/r/LocalLLaMA/comments/1ml4kpv/qwen_30b_a3b_on_rtx_3050_6gb_vram_runs_at_12tps/
false
false
self
4
null
So Deepseek R2 coming next week?
93
There seems to be chatter about that; has anyone heard anything?
2025-08-08T19:22:06
https://www.reddit.com/r/LocalLLaMA/comments/1ml4g5w/so_deepseek_r2_coming_next_week/
Beneficial-Yam2425
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml4g5w
false
null
t3_1ml4g5w
/r/LocalLLaMA/comments/1ml4g5w/so_deepseek_r2_coming_next_week/
false
false
self
93
null
llama.cpp vs. vllm performance comparison
7
See [https://github.com/ggml-org/llama.cpp/discussions/15180](https://github.com/ggml-org/llama.cpp/discussions/15180)
2025-08-08T19:18:39
https://www.reddit.com/gallery/1ml4cz0
Remove_Ayys
reddit.com
1970-01-01T00:00:00
0
{}
1ml4cz0
false
null
t3_1ml4cz0
/r/LocalLLaMA/comments/1ml4cz0/llamacpp_vs_vllm_performance_comparison/
false
false
https://b.thumbs.redditm…2klv0OdDxNeM.jpg
7
null
llama.cpp vs vllm performance comparison
1
[deleted]
2025-08-08T19:17:14
[deleted]
1970-01-01T00:00:00
0
{}
1ml4bop
false
null
t3_1ml4bop
/r/LocalLLaMA/comments/1ml4bop/llamacpp_vs_vllm_performance_comparison/
false
false
default
1
null
Llama-server not working with port forwarding?
3
I just started playing with llama.cpp yesterday and, long story short, I wanted to see if I could get the UI that launches when you start llama-server to load on my phone while I'm out. I've done this before with ComfyUI, so I already had port 8188 forwarded, so I just set it to that port and ran it, but no matter what I ...
2025-08-08T19:11:30
https://www.reddit.com/r/LocalLLaMA/comments/1ml46bu/llamaserver_not_working_with_port_forwarding/
torako
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml46bu
false
null
t3_1ml46bu
/r/LocalLLaMA/comments/1ml46bu/llamaserver_not_working_with_port_forwarding/
false
false
self
3
null
Ask for recommendations: local code tool like aider
3
I know this is a rapidly developing field. What is your recommendation for local coding tools? There are Cursor AI, Claude Code, Windsurf, JetBrains Go AI… But what I am looking for is FOSS tools. I don't trust closed-source ones much. Any tool similar to aider? P.S. I'm still using aider
2025-08-08T19:09:35
https://www.reddit.com/r/LocalLLaMA/comments/1ml44it/ask_for_recommendations_local_code_tool_like_aider/
henryclw
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ml44it
false
null
t3_1ml44it
/r/LocalLLaMA/comments/1ml44it/ask_for_recommendations_local_code_tool_like_aider/
false
false
self
3
null