column     dtype              min                  max
title      stringlengths      1                    300
score      int64              0                    8.54k
selftext   stringlengths      0                    41.5k
created    timestamp[ns]date  2023-04-01 04:30:41  2026-03-04 02:14:14
url        stringlengths      0                    878
author     stringlengths      3                    20
domain     stringlengths      0                    82
edited     timestamp[ns]date  1970-01-01 00:00:00  2026-02-19 14:51:53
gilded     int64              0                    2
gildings   stringclasses      7 values
id         stringlengths      7                    7
locked     bool               2 classes
media      stringlengths      646                  1.8k
name       stringlengths      10                   10
permalink  stringlengths      33                   82
spoiler    bool               2 classes
stickied   bool               2 classes
thumbnail  stringlengths      4                    213
ups        int64              0                    8.54k
preview    stringlengths      301                  5.01k
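The column summary above describes one flat record per post. As a minimal sketch (the class name and field comments are my own; sample values are adapted from the first record below, with the long serialized `media` field abridged to `None`), a row can be modeled like this:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class RedditPost:
    """One row of the dump; dtypes follow the column summary above."""
    title: str            # 1-300 chars
    score: int
    selftext: str         # may be "" or "[removed]"
    created: datetime     # timestamp[ns]
    url: str
    author: str
    domain: str           # e.g. "self.LocalLLaMA" for text posts
    edited: datetime      # epoch 1970-01-01 when never edited
    gilded: int
    gildings: str         # serialized dict, e.g. "{}"
    id: str               # always 7 chars
    locked: bool
    media: Optional[str]  # serialized media dict, or None
    name: str             # fullname: "t3_" + id (10 chars)
    permalink: str
    spoiler: bool
    stickied: bool
    thumbnail: str        # "self", "default", or a thumbnail URL
    ups: int
    preview: Optional[str]  # serialized preview-image dict, or None

# Sample values adapted from the first record in the dump.
row = RedditPost(
    title="Demis Hassabis @ Lex Fridman Podcast: Round 2",
    score=1,
    selftext="",
    created=datetime.fromisoformat("2025-07-23T20:24:18"),
    url="https://www.youtube.com/watch?v=JE09q9SACa8",
    author="tassa-yoniso-manasi",
    domain="youtube.com",
    edited=datetime.fromisoformat("1970-01-01T00:00:00"),
    gilded=0,
    gildings="{}",
    id="1m7k5p8",
    locked=False,
    media=None,  # abridged; the real record carries a serialized oembed dict
    name="t3_1m7k5p8",
    permalink="/r/LocalLLaMA/comments/1m7k5p8/demis_hassabis_lex_fridman_podcast_round_2/",
    spoiler=False,
    stickied=False,
    thumbnail="default",
    ups=1,
    preview=None,
)

# Invariant visible throughout the dump: `name` is the Reddit fullname of `id`.
assert row.name == f"t3_{row.id}"
```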
Demis Hassabis @ Lex Fridman Podcast: Round 2
1
2025-07-23T20:24:18
https://www.youtube.com/watch?v=JE09q9SACa8
tassa-yoniso-manasi
youtube.com
1970-01-01T00:00:00
0
{}
1m7k5p8
false
{'oembed': {'author_name': 'Tony Lapidus Impressions', 'author_url': 'https://www.youtube.com/@tonylapidusimpressions9866', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/JE09q9SACa8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-writ...
t3_1m7k5p8
/r/LocalLLaMA/comments/1m7k5p8/demis_hassabis_lex_fridman_podcast_round_2/
false
false
default
1
null
Recommended Settings ( Temperature, TopK, TopP, MinP, etc., ) for All models
4
**TLDR**: Does anyone have infographics/docs/a dashboard for this? Please share. Thanks. ^(I'm talking about stuff like Temperature, TopK, TopP, MinP, etc., values for all models. Though advanced users can apply these values from experience, newbies like me need some kind of dashboard or list or repo with such details ...
2025-07-23T20:23:34
https://www.reddit.com/r/LocalLLaMA/comments/1m7k50u/recommended_settings_temperature_topk_topp_minp/
pmttyji
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7k50u
false
null
t3_1m7k50u
/r/LocalLLaMA/comments/1m7k50u/recommended_settings_temperature_topk_topp_minp/
false
false
self
4
null
Google has shared the system prompt that got Gemini 2.5 Pro IMO 2025 Gold Medal 🏅
410
2025-07-23T20:23:02
https://www.alphaxiv.org/abs/2507.15855
secopsml
alphaxiv.org
1970-01-01T00:00:00
0
{}
1m7k4ix
false
null
t3_1m7k4ix
/r/LocalLLaMA/comments/1m7k4ix/google_has_shared_the_system_prompt_that_got/
false
false
default
410
null
This is what I call crazy.
1
https://preview.redd.it/…ntil it's not...
2025-07-23T20:17:56
https://www.reddit.com/r/LocalLLaMA/comments/1m7jzjg/this_is_what_i_call_crazy/
GenLabsAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7jzjg
false
null
t3_1m7jzjg
/r/LocalLLaMA/comments/1m7jzjg/this_is_what_i_call_crazy/
false
false
https://b.thumbs.redditm…gfkeUST0YC2U.jpg
1
null
AI background for products
1
Hey, does anyone know of a photo/video program that can change the background so that my product photos look really good, similar to a photo shoot? I took some basic photos and the software I was using created these, which was great. The software is very, very expensive though, at a few hundred dollars per month, and has ba...
2025-07-23T20:16:38
https://i.redd.it/a5qw3y2fmoef1.jpeg
UGC_Chris_D
i.redd.it
1970-01-01T00:00:00
0
{}
1m7jybm
false
null
t3_1m7jybm
/r/LocalLLaMA/comments/1m7jybm/ai_background_for_products/
false
false
default
1
{'enabled': True, 'images': [{'id': 'a5qw3y2fmoef1', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/a5qw3y2fmoef1.jpeg?width=108&crop=smart&auto=webp&s=2c78fcd8debf9c3724a29fe6d1080fd47a4edc7c', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/a5qw3y2fmoef1.jpeg?width=216&crop=smart&auto=w...
Open-source and/or Local AI Meeting Transcription that works for you?
1
Hello! I’m currently using Notion, which works great for transcribing meetings and converting them into summaries, action items, and so on. Is anyone using open-source / locally powered AI tools? I’d love to hear about your experience with those. Thanks!
2025-07-23T20:13:29
https://www.reddit.com/r/LocalLLaMA/comments/1m7jvba/opensource_andor_local_ai_meeting_transcription/
Southern_Sun_2106
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7jvba
false
null
t3_1m7jvba
/r/LocalLLaMA/comments/1m7jvba/opensource_andor_local_ai_meeting_transcription/
false
false
self
1
null
It’s time to lead guys
62
2025-07-23T19:34:41
https://i.redd.it/8lao0yzueoef1.jpeg
giofifnewph
i.redd.it
1970-01-01T00:00:00
0
{}
1m7iui2
false
null
t3_1m7iui2
/r/LocalLLaMA/comments/1m7iui2/its_time_to_lead_guys/
false
false
default
62
{'enabled': True, 'images': [{'id': '8lao0yzueoef1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/8lao0yzueoef1.jpeg?width=108&crop=smart&auto=webp&s=2c5ff315859aea9328f79e4775b4e840b3fc93ea', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/8lao0yzueoef1.jpeg?width=216&crop=smart&auto=...
Actually good Agentic coding tools
4
Earlier it was AI coding IDEs like Cursor or the GitHub Copilot extension that came with an agent mode. Then Anthropic released Claude Code, and OpenAI, Google, and now Alibaba followed suit and released their CLIs. Right now there are just too many options to use and they're all quite good, which makes it difficult...
2025-07-23T19:23:37
https://www.reddit.com/r/LocalLLaMA/comments/1m7ijtf/actually_good_agentic_coding_tools/
Particular_Tap_4002
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7ijtf
false
null
t3_1m7ijtf
/r/LocalLLaMA/comments/1m7ijtf/actually_good_agentic_coding_tools/
false
false
self
4
null
RTX 6000 Ada or A100, which is better for inference?
1
Which GPU setup is better for inference of local models on vllm (considering two 14B models for now, might be larger in future). My options are: 2x RTX 6000 Ada or 1x A100 (80GB). Or is there a better pick than these two. Can’t use consumer GPUs. Appreciate any help!
2025-07-23T19:19:24
https://www.reddit.com/r/LocalLLaMA/comments/1m7ifsg/rtx_6000_ada_or_a100_which_is_better_for_inference/
subtle-being
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7ifsg
false
null
t3_1m7ifsg
/r/LocalLLaMA/comments/1m7ifsg/rtx_6000_ada_or_a100_which_is_better_for_inference/
false
false
self
1
null
Why is my external RX 7600M XT (GPD G1) slow by comparison?
1
I am experimenting with local llms. Have been using the 780m integrated onto the 7840u on my current machine which has 64GB of LPDDR5X memory clocked at 7500 MT/s (16GB allocated to the GPU). I have also been playing with my eGPU over oculink (GPD G1). I am looking at Strix Halo for future dev (especially mobile), and ...
2025-07-23T19:12:59
https://www.reddit.com/r/LocalLLaMA/comments/1m7i9pl/why_is_my_external_rx_7600m_xt_gpd_g1_slow_by/
cfogrady
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7i9pl
false
null
t3_1m7i9pl
/r/LocalLLaMA/comments/1m7i9pl/why_is_my_external_rx_7600m_xt_gpd_g1_slow_by/
false
false
self
1
null
Spice things up by switching roles?
2
Random thought about role-based multi-turn messaging with LLMs: What if we pretend to be the assistant and try to get the model to predict the user's response? I know it might not work as intended because of how they are fine-tuned, but has anyone tried it before? Just curious.
2025-07-23T19:07:59
https://www.reddit.com/r/LocalLLaMA/comments/1m7i537/spice_things_up_by_switching_roles/
Mathemachicken4
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7i537
false
null
t3_1m7i537
/r/LocalLLaMA/comments/1m7i537/spice_things_up_by_switching_roles/
false
false
self
2
null
Ok, this is weird and cool. A crypto project that, instead of just asking for money, has a system that makes your initial purchase come out almost free. Check this out.
0
Hi everyone, I'm fed up with crypto projects that are pure smoke and promises. I'm sure you are too. But the other day I came across one called 1NVEZT that made me sit down and pull out the calculator, and I wanted to share it because the way they do things is... clever. And to be 100% transparent: yes,...
2025-07-23T19:02:52
https://www.reddit.com/r/LocalLLaMA/comments/1m7i0ba/ok_esto_es_raro_y_genial_un_proyecto_cripto_que/
Acrobatic_Man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7i0ba
false
null
t3_1m7i0ba
/r/LocalLLaMA/comments/1m7i0ba/ok_esto_es_raro_y_genial_un_proyecto_cripto_que/
false
false
self
0
null
Finetuning for code generation
1
Hey guys, do you have any idea how vibe-coding platforms like Replit and Lovable fine-tune their code-generation algorithms? It's unclear to me what their core product looks like!
2025-07-23T18:58:27
https://www.reddit.com/r/LocalLLaMA/comments/1m7hvxz/finetuning_for_code_generation/
gpt_devastation
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7hvxz
false
null
t3_1m7hvxz
/r/LocalLLaMA/comments/1m7hvxz/finetuning_for_code_generation/
false
false
self
1
null
Best small to medium size Local LLM Orchestrator for calling Tools, managing STT, TTS, screen OCR, and with passing heavy lift calls to Claude Code SDK, running on Macbook Pro.
4
Hi, what do you all think of a sort of medium / smallest model to use as an orchestrator model that runs with Whisper (speech in) and TTS (speech out)? I also want it to view my screen to get context to pass to other models / MCP so it knows what is going on and can respond, then route and call tools / MCP...
2025-07-23T18:52:17
https://www.reddit.com/r/LocalLLaMA/comments/1m7hq4w/best_small_to_medium_size_local_llm_orchestrator/
matznerd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7hq4w
false
null
t3_1m7hq4w
/r/LocalLLaMA/comments/1m7hq4w/best_small_to_medium_size_local_llm_orchestrator/
false
false
self
4
{'enabled': False, 'images': [{'id': 'EBLrlrQ_ze2lgA1gLs6eweAZ3a9siHivrp_8a72Wf0k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EBLrlrQ_ze2lgA1gLs6eweAZ3a9siHivrp_8a72Wf0k.png?width=108&crop=smart&auto=webp&s=a97d235ba5ee0d377655e74657048733e66c0c80', 'width': 108}, {'height': 116, 'url': 'h...
Continued pretraining of Llama 3-8b on a new language
15
https://preview.redd.it/…se could i try?
2025-07-23T18:21:54
https://www.reddit.com/r/LocalLLaMA/comments/1m7gwuo/continued_pretraining_of_llama_38b_on_a_new/
Awkward-Quiet5795
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7gwuo
false
null
t3_1m7gwuo
/r/LocalLLaMA/comments/1m7gwuo/continued_pretraining_of_llama_38b_on_a_new/
false
false
https://b.thumbs.redditm…YNs6MP2_hSKg.jpg
15
null
struggling with image extraction for pdf parsing
1
Hey guys, I need to parse PDFs of medical books that contain text and a lot of images. Currently, I use a gemini 2.5 flash lite to do the extraction into a structured output. My original plan was to convert PDFs to images, then give gemini 10 pages each time. I am also giving instruction when it encounters an ima...
2025-07-23T18:20:07
https://www.reddit.com/r/LocalLLaMA/comments/1m7gv2d/struggling_with_image_extraction_for_pdf_parsing/
aliihsan01100
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7gv2d
false
null
t3_1m7gv2d
/r/LocalLLaMA/comments/1m7gv2d/struggling_with_image_extraction_for_pdf_parsing/
false
false
self
1
null
Google DeepMind release Mixture-of-Recursions
292
Google DeepMind's new paper explores a new advanced Transformer architecture for LLMs called Mixture-of-Recursions, which uses recursive Transformers with dynamic recursion per token. Check the visual explanation for details: https://youtu.be/GWqXCgd7Hnc?si=M6xxbtczSf_TEEYR
2025-07-23T17:43:58
https://www.reddit.com/r/LocalLLaMA/comments/1m7fwhl/google_deepmind_release_mixtureofrecursions/
Technical-Love-8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7fwhl
false
null
t3_1m7fwhl
/r/LocalLLaMA/comments/1m7fwhl/google_deepmind_release_mixtureofrecursions/
false
false
self
292
{'enabled': False, 'images': [{'id': 'QmbZwjHL_nSls3hlAww-zDS-HWSbRw7J2Tj08JUnDak', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QmbZwjHL_nSls3hlAww-zDS-HWSbRw7J2Tj08JUnDak.jpeg?width=108&crop=smart&auto=webp&s=46cf0a3ba4ca4557db533db7facf3345d193ff14', 'width': 108}, {'height': 162, 'url': '...
Optimizing Flappy Bird World Model to Run in a Web Browser 🐤
2
2025-07-23T17:30:39
https://njkumar.com/optimizing-flappy-bird-world-model-to-run-in-a-web-browser/
fendiwap1234
njkumar.com
1970-01-01T00:00:00
0
{}
1m7fjvj
false
null
t3_1m7fjvj
/r/LocalLLaMA/comments/1m7fjvj/optimizing_flappy_bird_world_model_to_run_in_a/
false
false
default
2
null
nvidia/audio-flamingo-3
95
Audio Flamingo 3 (AF3) is a fully open, state-of-the-art Large Audio-Language Model (LALM) that advances reasoning and understanding across speech, sounds, and music. AF3 builds on previous work with innovations in: - Unified audio representation learning (speech, sound, music) - Flexible, on-demand chain-of-thought...
2025-07-23T17:21:39
https://huggingface.co/nvidia/audio-flamingo-3
Balance-
huggingface.co
1970-01-01T00:00:00
0
{}
1m7fb78
false
null
t3_1m7fb78
/r/LocalLLaMA/comments/1m7fb78/nvidiaaudioflamingo3/
false
false
default
95
{'enabled': False, 'images': [{'id': 'JRhNBRoWN56WbYujQx4Djn6KxF4ekEstIpgrsyNgUBE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JRhNBRoWN56WbYujQx4Djn6KxF4ekEstIpgrsyNgUBE.png?width=108&crop=smart&auto=webp&s=6f082162e7876351e6a01bc3afa7b6cd69a0c79e', 'width': 108}, {'height': 116, 'url': 'h...
Polished UI for prompt setup & details
29
I’ve been polishing the prompt setup and description pages to make them cleaner and more user-friendly. I originally built this because I got tired of digging through HuggingFace, Discord, and other scattered sources just to find decent prompts that work with different models. Now I’m trying to make that process as sm...
2025-07-23T17:14:17
https://www.reddit.com/gallery/1m7f43h
RIPT1D3_Z
reddit.com
1970-01-01T00:00:00
0
{}
1m7f43h
false
null
t3_1m7f43h
/r/LocalLLaMA/comments/1m7f43h/polished_ui_for_prompt_setup_details/
false
false
https://a.thumbs.redditm…rbVFqywOQ330.jpg
29
null
Qwen 3 Coder is a very good model in general, not just for coding
2
It's very good at reasoning, maths, science, etc., and is also multimodal with images, audio, and video, but for non-academic things like creative writing, Kimi K2 is in my opinion still the best, outperforming Claude and Gemini models.
2025-07-23T16:53:04
https://www.reddit.com/r/LocalLLaMA/comments/1m7ejna/qwen_3_coder_is_a_very_good_model_in_general_not/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7ejna
false
null
t3_1m7ejna
/r/LocalLLaMA/comments/1m7ejna/qwen_3_coder_is_a_very_good_model_in_general_not/
false
false
self
2
null
Should I do finetuning on Gemini or on open source models?
3
I need the highest quality I can get for a price point below $1000 in training and $1/M tokens inference. I would prefer to do full finetuning on a base model. It's for a continuation task (writing with long range dependency) so I don't actually need or want chat or instruct style. I need context 32K. I have about 200...
2025-07-23T16:41:00
https://www.reddit.com/r/LocalLLaMA/comments/1m7e8d0/should_i_do_finetuning_on_gemini_or_on_open/
Pan000
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7e8d0
false
null
t3_1m7e8d0
/r/LocalLLaMA/comments/1m7e8d0/should_i_do_finetuning_on_gemini_or_on_open/
false
false
self
3
null
[AutoBE] We're making AI-friendly Compilers for Vibe Coding (open source)
8
## Preface > The video is sped up; it actually takes about 20-30 minutes - Github Repository: https://github.com/wrtnlabs/autobe - Generation Result: https://github.com/wrtnlabs/autobe-example-bbs We are honored to introduce [`AutoBE`](https://github.com/wrtnlabs/autobe) to you. [`AutoBE`](https://github.com/wrtnlab...
2025-07-23T16:38:16
https://v.redd.it/vpmfq4l4inef1
jhnam88
v.redd.it
1970-01-01T00:00:00
0
{}
1m7e5uc
false
{'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/vpmfq4l4inef1/DASHPlaylist.mpd?a=1755880710%2CMmI2OTNmY2U0ZDI4NjFjYTI4NWNjYjEyYzg2MWI5OGNlNWYxYjhiZDhiYjFhODlmM2FjYzRiZWFmM2Q1OGM0Nw%3D%3D&v=1&f=sd', 'duration': 59, 'fallback_url': 'https://v.redd.it/vpmfq4l4inef1/DASH_480.mp4?source=fallback', 'ha...
t3_1m7e5uc
/r/LocalLLaMA/comments/1m7e5uc/autobe_were_making_aifriendly_compilers_for_vibe/
false
false
https://external-preview…cdff668fd3dc40a1
8
{'enabled': False, 'images': [{'id': 'Yzk5aTg1bDRpbmVmMY9-A8ZAgOuotV2CW7jB1Psa5aqFVRjz1XH6fQIZp4yz', 'resolutions': [{'height': 194, 'url': 'https://external-preview.redd.it/Yzk5aTg1bDRpbmVmMY9-A8ZAgOuotV2CW7jB1Psa5aqFVRjz1XH6fQIZp4yz.png?width=108&crop=smart&format=pjpg&auto=webp&s=365d16f2ecbeb325c356467cd7e12742194a...
Qwen 3 Coder just handled a full ACL system like a champ — OSS finally catching up
60
Just ran Qwen 3 Coder through a real-world test — building out a full permissions/ACL setup for a complex web app. Gave it the usual 30k-token context I feed into Claude Code, and it legit nailed it on the first try. No weird logic gaps, no hallucinated APIs — just clean, working code. Tried the same thing with Kimi K...
2025-07-23T16:38:08
https://www.reddit.com/r/LocalLLaMA/comments/1m7e5pi/qwen_3_coder_just_handled_a_full_acl_system_like/
No_Edge2098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7e5pi
false
null
t3_1m7e5pi
/r/LocalLLaMA/comments/1m7e5pi/qwen_3_coder_just_handled_a_full_acl_system_like/
false
false
self
60
null
Context Kills VRAM: How to Run LLMs on Consumer GPUs
1
[removed]
2025-07-23T16:36:26
https://www.reddit.com/r/LocalLLaMA/comments/1m7e42u/context_kills_vram_how_to_run_llms_on_consumer/
Koala842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7e42u
false
null
t3_1m7e42u
/r/LocalLLaMA/comments/1m7e42u/context_kills_vram_how_to_run_llms_on_consumer/
false
false
self
1
null
Context Kills VRAM: Why Local LLMs Slow Down (and How to Fix It)
1
[removed]
2025-07-23T16:33:03
https://medium.com/@lyx_62906/context-kills-vram-how-to-run-llms-on-consumer-gpus-a785e8035632
Koala842
medium.com
1970-01-01T00:00:00
0
{}
1m7e0z4
false
null
t3_1m7e0z4
/r/LocalLLaMA/comments/1m7e0z4/context_kills_vram_why_local_llms_slow_down_and/
false
false
default
1
null
GPT-4’s Role Simulation Red Team Output – Clean Bypass Example
1
[removed]
2025-07-23T16:30:59
https://www.reddit.com/r/LocalLLaMA/comments/1m7dz1g/gpt4s_role_simulation_red_team_output_clean/
Eye2on1Ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7dz1g
false
null
t3_1m7dz1g
/r/LocalLLaMA/comments/1m7dz1g/gpt4s_role_simulation_red_team_output_clean/
false
false
self
1
null
Local llm build, 144gb vram monster
250
Still taking care of a few cables and doing cable management, but just built this beast!
2025-07-23T16:25:16
https://www.reddit.com/gallery/1m7dtpm
EasyConference4177
reddit.com
1970-01-01T00:00:00
0
{}
1m7dtpm
false
null
t3_1m7dtpm
/r/LocalLLaMA/comments/1m7dtpm/local_llm_build_144gb_vram_monster/
false
false
https://b.thumbs.redditm…rbewrkSRRamQ.jpg
250
null
Qwen 3 Coder is surprisingly solid — finally a real OSS contender
1
[removed]
2025-07-23T16:24:29
https://www.reddit.com/r/LocalLLaMA/comments/1m7dsy2/qwen_3_coder_is_surprisingly_solid_finally_a_real/
No_Edge2098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7dsy2
false
null
t3_1m7dsy2
/r/LocalLLaMA/comments/1m7dsy2/qwen_3_coder_is_surprisingly_solid_finally_a_real/
false
false
https://b.thumbs.redditm…uyBwvmybPAXs.jpg
1
null
Qwen 3 Coder is legit a real open-source rival
1
2025-07-23T16:22:51
https://www.reddit.com/gallery/1m7drd7
No_Edge2098
reddit.com
1970-01-01T00:00:00
0
{}
1m7drd7
false
null
t3_1m7drd7
/r/LocalLLaMA/comments/1m7drd7/qwen_3_coder_is_legit_a_real_opensource_rival/
false
false
https://b.thumbs.redditm…izMtatEa4O5c.jpg
1
null
Qwen 3 Coder is legit a real open-source rival
1
Just tried Qwen 3 Coder for a complex web project—ACL/permissions setup—using OpenRouter and a hefty 30k-token context. It nailed the task in one go, no garbage, no errors. That’s the first time an OSS code model felt as good as Sonnet 4-level quality. Kimi K2 on Groq Q4 failed big time, but Qwen powered through in \~...
2025-07-23T16:20:59
https://www.reddit.com/r/LocalLLaMA/comments/1m7dple/qwen_3_coder_is_legit_a_real_opensource_rival/
No_Edge2098
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7dple
false
null
t3_1m7dple
/r/LocalLLaMA/comments/1m7dple/qwen_3_coder_is_legit_a_real_opensource_rival/
false
false
https://a.thumbs.redditm…sE6e5_YK3c98.jpg
1
null
Encouragement of "Open-Source and Open-Weight AI" is now the official policy of the U.S. government.
805
Full text: [https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf](https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf)
2025-07-23T16:18:12
https://i.redd.it/736cx17efnef1.png
GlowiesEatShitAndDie
i.redd.it
1970-01-01T00:00:00
0
{}
1m7dmy2
false
null
t3_1m7dmy2
/r/LocalLLaMA/comments/1m7dmy2/encouragement_of_opensource_and_openweight_ai_is/
false
false
default
805
{'enabled': True, 'images': [{'id': '736cx17efnef1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/736cx17efnef1.png?width=108&crop=smart&auto=webp&s=251e5e43fe714ddc7032706935cd9fc3d43c1165', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/736cx17efnef1.png?width=216&crop=smart&auto=web...
Where is Japan?
119
Why they be slacking on local llama and LLM generally? They big nation, clever, work hard. Many robots. No LLM? Why?
2025-07-23T16:04:06
https://www.reddit.com/r/LocalLLaMA/comments/1m7d9d9/where_is_japan/
ethereel1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7d9d9
false
null
t3_1m7d9d9
/r/LocalLLaMA/comments/1m7d9d9/where_is_japan/
false
false
self
119
null
OpenAI's upcoming open-source model will be a beast at coding, and it's small
0
This is the latest info about the upcoming open-source model from OpenAI. The poster is from OpenAI: https://x.com/lifeafterai_/status/1948047340826190259?s=46&t=hgl-0OvVeTE1RVciy4c5ng
2025-07-23T16:01:04
https://i.redd.it/jsid0cjjcnef1.jpeg
Psychological_Tap119
i.redd.it
1970-01-01T00:00:00
0
{}
1m7d6d3
false
null
t3_1m7d6d3
/r/LocalLLaMA/comments/1m7d6d3/openai_upcoming_opensource_will_be_beast_at/
false
false
default
0
{'enabled': True, 'images': [{'id': 'jsid0cjjcnef1', 'resolutions': [{'height': 168, 'url': 'https://preview.redd.it/jsid0cjjcnef1.jpeg?width=108&crop=smart&auto=webp&s=279dc7563bd8ac66286579d997cbf016c3add0ed', 'width': 108}, {'height': 337, 'url': 'https://preview.redd.it/jsid0cjjcnef1.jpeg?width=216&crop=smart&auto=...
Which LLM do you use in your enterprise environment? (EU AI act compliant)
1
[removed]
2025-07-23T16:00:22
https://www.reddit.com/r/LocalLLaMA/comments/1m7d5me/which_llm_do_you_use_in_your_enterprise/
Pitiful_Task_2539
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7d5me
false
null
t3_1m7d5me
/r/LocalLLaMA/comments/1m7d5me/which_llm_do_you_use_in_your_enterprise/
false
false
self
1
null
Anyone using maestrale-chat-v0.4-beta?
3
I’ve been testing maestrale-chat-v0.4-beta and noticed it handles step-by-step reasoning quite well, even for basic math and intro programming tasks. It’s not a math engine / solver, but for explaining concepts, rephrasing problems, or reviewing student logic, it seems quite promising. Is anyone here using local model...
2025-07-23T15:59:57
https://www.reddit.com/r/LocalLLaMA/comments/1m7d55o/anyone_using_maestralechatv04beta/
proahdgsga133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7d55o
false
null
t3_1m7d55o
/r/LocalLLaMA/comments/1m7d55o/anyone_using_maestralechatv04beta/
false
false
self
3
null
WHY is there no Lovable for IOS?
1
[removed]
2025-07-23T15:50:45
https://www.reddit.com/r/LocalLLaMA/comments/1m7cweh/why_is_there_no_lovable_for_ios/
No-Refrigerator9508
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7cweh
false
null
t3_1m7cweh
/r/LocalLLaMA/comments/1m7cweh/why_is_there_no_lovable_for_ios/
false
false
self
1
null
Qwen 3 coder is the best base model for 3D so far
0
Title
2025-07-23T15:46:12
https://www.reddit.com/r/LocalLLaMA/comments/1m7cs0w/qwen_3_coder_is_the_best_base_model_for_3d_so_far/
Longjumping_Spot5843
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7cs0w
false
null
t3_1m7cs0w
/r/LocalLLaMA/comments/1m7cs0w/qwen_3_coder_is_the_best_base_model_for_3d_so_far/
false
false
self
0
null
beginner with llama3, I cannot get results I want
0
Hello everyone, I have just installed Ollama with Llama3:8b; I make prompts via the backend of my website with AJAX requests. I have a list of 10,000 French words ("maison, femme, cuisine...") and I would like to translate them into 30 other languages, and get declensions ("la cuisine, les cuisines, une cuisine...") and ...
2025-07-23T15:38:25
https://www.reddit.com/r/LocalLLaMA/comments/1m7cklb/beginner_with_llama3_i_cannot_get_results_i_want/
FckGAFA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7cklb
false
null
t3_1m7cklb
/r/LocalLLaMA/comments/1m7cklb/beginner_with_llama3_i_cannot_get_results_i_want/
false
false
self
0
null
HOWTO: Use Qwen3-Coder (or any other LLM) with Claude Code (via LiteLLM)
92
Here's a simple way for Claude Code users to switch from the costly Claude models to the newly released SOTA open-source/weights coding model, Qwen3-Coder, via OpenRouter using LiteLLM on your local machine. This process is quite universal and can be easily adapted to suit your needs. Feel free to explore other models...
2025-07-23T15:35:43
https://i.redd.it/5p7u0le68nef1.png
WolframRavenwolf
i.redd.it
1970-01-01T00:00:00
0
{}
1m7ci3s
false
null
t3_1m7ci3s
/r/LocalLLaMA/comments/1m7ci3s/howto_use_qwen3coder_or_any_other_llm_with_claude/
false
false
default
92
{'enabled': True, 'images': [{'id': '5p7u0le68nef1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/5p7u0le68nef1.png?width=108&crop=smart&auto=webp&s=f2f9ecbfadf0a585c661b7818dc4d782fd8cb3f3', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/5p7u0le68nef1.png?width=216&crop=smart&auto=web...
[Research] We just released the first paper and dataset documenting symbolic emergence in LLMs
0
Hi everyone, I'm part of **EXIS**, an independent research group focused on symbolic AI, ethics, and distributed cognition. We've just published a peer-ready research paper and dataset describing something surprising and (we believe) important: > # 🧾 What we observed: Across different LLMs—GPT (OpenAI), Claude (A...
2025-07-23T15:35:42
https://www.reddit.com/r/LocalLLaMA/comments/1m7ci35/research_we_just_released_the_first_paper_and/
Opposite-Win-2887
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7ci35
false
null
t3_1m7ci35
/r/LocalLLaMA/comments/1m7ci35/research_we_just_released_the_first_paper_and/
false
false
self
0
{'enabled': False, 'images': [{'id': 'q-Z9w5F9KFEHG9VZjiUpHOX8xym4sNk-qQ0i6GS-ZAw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/q-Z9w5F9KFEHG9VZjiUpHOX8xym4sNk-qQ0i6GS-ZAw.png?width=108&crop=smart&auto=webp&s=15e4cb9f61c14da8031f31ca66ea9cf84fbdee31', 'width': 108}, {'height': 108, 'url': 'h...
HOWTO: Use Qwen3-Coder (or any other LLM) with Claude Code (via LiteLLM)
1
Here's a simple way for Claude Code users to switch from the costly Claude models to the newly released SOTA open-source/weights coding model, Qwen3-Coder, via OpenRouter using LiteLLM on your local machine. This process is quite universal and can be easily adapted to suit your needs. Feel free to explore other models...
2025-07-23T15:28:32
https://i.redd.it/crdr2tan6nef1.png
WolframRavenwolf
i.redd.it
1970-01-01T00:00:00
0
{}
1m7cbf2
false
null
t3_1m7cbf2
/r/LocalLLaMA/comments/1m7cbf2/howto_use_qwen3coder_or_any_other_llm_with_claude/
false
false
default
1
{'enabled': True, 'images': [{'id': 'crdr2tan6nef1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/crdr2tan6nef1.png?width=108&crop=smart&auto=webp&s=76b5f668a067866a2862f6a8558e390b78c3d128', 'width': 108}, {'height': 145, 'url': 'https://preview.redd.it/crdr2tan6nef1.png?width=216&crop=smart&auto=web...
What is the best hardware for running the biggest models?
1
What I mean is: instead of paying for Claude Code or Junie, is it possible to buy hardware capable of running an equivalent model? Claude Code is $20 per month; Junie is cheaper at $18 per month for the best. I know just renting is likely cheaper in the long term, but this assumes no price increases, and locks you into w...
2025-07-23T15:27:29
https://www.reddit.com/r/LocalLLaMA/comments/1m7cagw/what_is_the_best_hardware_for_running_the_biggest/
sanitykey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7cagw
false
null
t3_1m7cagw
/r/LocalLLaMA/comments/1m7cagw/what_is_the_best_hardware_for_running_the_biggest/
false
false
self
1
null
Can someone point me towards LLM diagram generation research?
2
I.e., research focused on improving LLMs at generating diagrams via textual diagram-specification languages such as the LaTeX TikZ library.
2025-07-23T15:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1m7c7yz/can_someone_point_me_towards_llm_diagram/
boringblobking
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7c7yz
false
null
t3_1m7c7yz
/r/LocalLLaMA/comments/1m7c7yz/can_someone_point_me_towards_llm_diagram/
false
false
self
2
null
Anyone wanna give Kimi-K2-Instruct a try?
0
You can easily have access to it via NetMind Inference: [https://blog.netmind.ai/article/Kimi\_K2%3A\_Moonshot\_AI’s\_Trillion-Parameter\_Agentic\_Model%2C\_Now\_Available\_at\_NetMind](https://blog.netmind.ai/article/Kimi_K2%3A_Moonshot_AI’s_Trillion-Parameter_Agentic_Model%2C_Now_Available_at_NetMind)
2025-07-23T15:20:09
https://www.reddit.com/r/LocalLLaMA/comments/1m7c3ir/anyone_wanna_give_kimik2instruct_a_try/
MarketingNetMind
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7c3ir
false
null
t3_1m7c3ir
/r/LocalLLaMA/comments/1m7c3ir/anyone_wanna_give_kimik2instruct_a_try/
false
false
self
0
{'enabled': False, 'images': [{'id': 'wDgXw_J2Oqssy6yg8C4Bv5De8EUmdQ3D_R23N-CUOhs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/wDgXw_J2Oqssy6yg8C4Bv5De8EUmdQ3D_R23N-CUOhs.png?width=108&crop=smart&auto=webp&s=830883494bf63c64f4d833064a6c24f1bcf8fdcf', 'width': 108}, {'height': 113, 'url': 'h...
Kimi K2 vs Sonnet 4 for Agentic Coding (Tested on Claude Code)
145
After all the buzz, Moonshot AI dropped Kimi K2 with 1T parameters, and it’s being pitched as the open-source Claude Sonnet 4 alternative. Naturally, I had to run the ultimate coding face-off. I’ve mostly compared them on the following factors: * Pricing and Speed * Frontend Coding * Agentic Coding (MCP integration) ...
2025-07-23T15:19:01
https://www.reddit.com/r/LocalLLaMA/comments/1m7c2gr/kimi_k2_vs_sonnet_4_for_agentic_coding_tested_on/
shricodev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7c2gr
false
null
t3_1m7c2gr
/r/LocalLLaMA/comments/1m7c2gr/kimi_k2_vs_sonnet_4_for_agentic_coding_tested_on/
false
false
self
145
{'enabled': False, 'images': [{'id': '89DppKdkNT25PaM72aoMYKePLaCjHei4PolJcfy5rSI', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/89DppKdkNT25PaM72aoMYKePLaCjHei4PolJcfy5rSI.png?width=108&crop=smart&auto=webp&s=b664894969871c1c911d4ca3de0afe330df8b82c', 'width': 108}, {'height': 143, 'url': 'h...
GitHub - Website-Crawler: Extract data from websites in LLM ready JSON or CSV format. Crawl or Scrape entire website with Website Crawler
0
2025-07-23T15:17:24
https://github.com/pc8544/Website-Crawler
PsychologicalTap1541
github.com
1970-01-01T00:00:00
0
{}
1m7c0xu
false
null
t3_1m7c0xu
/r/LocalLLaMA/comments/1m7c0xu/github_websitecrawler_extract_data_from_websites/
false
false
default
0
{'enabled': False, 'images': [{'id': '3rEuqZmLYdd91tJ5D_1JJmP7kVuIl2GRcm3WNOCr8XQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3rEuqZmLYdd91tJ5D_1JJmP7kVuIl2GRcm3WNOCr8XQ.png?width=108&crop=smart&auto=webp&s=fd4cfa0e0b44d516a0d9dd68cd132394a988a8ec', 'width': 108}, {'height': 108, 'url': 'h...
Throughput: Input vs Output. Looking for help...
3
So after doing some further research on the cost of self-hosting larger models I have come to this conclusion - and I am looking for feedback here. My specific use case is an AI-assisted IDE I am building myself, and I am looking to dabble in self-hosting a capable model for inference for its users. I currently do **...
2025-07-23T15:07:18
https://www.reddit.com/r/LocalLLaMA/comments/1m7brg9/throughput_input_vs_output_looking_for_help/
Budget_Map_3333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7brg9
false
null
t3_1m7brg9
/r/LocalLLaMA/comments/1m7brg9/throughput_input_vs_output_looking_for_help/
false
false
self
3
null
How to actually use gpt-sovits?
3
Hello! I’ve been working on a Japanese voice assistant as a side project, and I’m currently struggling to find a good TTS solution. I tried using [GPT-SoVITS](https://github.com/RVC-Boss/GPT-SoVITS) from their webui, and the voice quality is very impressive, but it’s difficult to integrate it into my project since it d...
2025-07-23T14:52:20
https://www.reddit.com/r/LocalLLaMA/comments/1m7bd41/how_to_acutally_use_gptsovits/
Icy-Ad6078
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7bd41
false
null
t3_1m7bd41
/r/LocalLLaMA/comments/1m7bd41/how_to_acutally_use_gptsovits/
false
false
self
3
{'enabled': False, 'images': [{'id': 'caQx71gSvUb5KdCJdZONmkX6p-beuuKWrd6dl-WlSHU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/caQx71gSvUb5KdCJdZONmkX6p-beuuKWrd6dl-WlSHU.png?width=108&crop=smart&auto=webp&s=8a46b62a80893173dc3ed635ca54310aa68bc664', 'width': 108}, {'height': 108, 'url': 'h...
America's AI Action Plan
5
key takeaways: Light‑touch federal regulation is the goal. The Plan’s very first prescription is to “remove red tape and onerous regulation” so startups can ship faster and raise less compliance capital. That lowers the policy risk around new voice‑AI features and model releases. Free‑speech guard‑rails, not viewpoi...
2025-07-23T14:45:05
https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf
HOLUPREDICTIONS
whitehouse.gov
1970-01-01T00:00:00
0
{}
1m7b6bu
false
null
t3_1m7b6bu
/r/LocalLLaMA/comments/1m7b6bu/americas_ai_action_plan/
false
false
default
5
null
How are people extracting system prompts?
0
Just successfully died in some scenarios with gemini and it still resisted to show me its system prompt. Is there any trick?
2025-07-23T14:32:53
https://www.reddit.com/r/LocalLLaMA/comments/1m7av4q/how_are_people_extracting_system_prompts/
freecodeio
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7av4q
false
null
t3_1m7av4q
/r/LocalLLaMA/comments/1m7av4q/how_are_people_extracting_system_prompts/
false
false
self
0
null
Struggling with NLP classification pipeline for web content – seeking advice
2
Hi all, I'm working on an internal project where I need to classify websites into two categories: Category 1 vs Category 2 \- I’ve tried Gemini API (2.5 Flash) with Grounded Google Search and also with URL Context tool — both didn’t provide satisfactory results. **The Challenge with using Google searches:** \-...
2025-07-23T14:14:42
https://www.reddit.com/r/LocalLLaMA/comments/1m7aefj/struggling_with_nlp_classification_pipeline_for/
amir_shehzad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7aefj
false
null
t3_1m7aefj
/r/LocalLLaMA/comments/1m7aefj/struggling_with_nlp_classification_pipeline_for/
false
false
self
2
null
I guess we know what it was trained with.
0
2025-07-23T14:13:52
https://i.redd.it/g9aqimintmef1.png
mattescala
i.redd.it
1970-01-01T00:00:00
0
{}
1m7admn
false
null
t3_1m7admn
/r/LocalLLaMA/comments/1m7admn/i_guess_we_know_what_it_was_trained_with/
false
false
default
0
{'enabled': True, 'images': [{'id': 'g9aqimintmef1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/g9aqimintmef1.png?width=108&crop=smart&auto=webp&s=3b798d5741ffc10c433b614f4bac3bbbdc769a3d', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/g9aqimintmef1.png?width=216&crop=smart&auto=web...
Is there a way to use qwen 3 coder inside vs code or cursor
3
I see the new Qwen3 Coder model is insane and seems equal to Claude Sonnet 4 in coding tests. Is there a way to use it inside VS Code or Cursor? I mean using an extension or any other way.
2025-07-23T13:50:34
https://www.reddit.com/r/LocalLLaMA/comments/1m79sp9/is_there_a_way_to_use_qwen_3_coder_inside_vs_code/
madhawavish
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m79sp9
false
null
t3_1m79sp9
/r/LocalLLaMA/comments/1m79sp9/is_there_a_way_to_use_qwen_3_coder_inside_vs_code/
false
false
self
3
null
Local txt2txt AND txt2img
1
Hi. Are there any txt2txt AND txt2img models that can run on 32GB VRAM? I'd like to run them via LM Studio (preferably) or Ollama I want the model to analyze images, discuss them, and generate images from text (even super simple ones are fine - for anything more advanced, I use Forge/ComfyUI). If it can also edit i...
2025-07-23T13:04:19
https://www.reddit.com/r/LocalLLaMA/comments/1m78ppz/local_txt2txt_and_txt2img/
Mugen_Man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m78ppz
false
null
t3_1m78ppz
/r/LocalLLaMA/comments/1m78ppz/local_txt2txt_and_txt2img/
false
false
self
1
null
Local cross-platform speech-to-speech and real-time captioning with OpenAI Whisper, Vulkan GPU acceleration and more
34
🌋 ENTIRE SPEECH-TO-SPEECH PIPELINE 🔮REAL-TIME LIVE CAPTIONS IN 99 LANGUAGES Now it's possible to have any audio source (including your own voice) transcribed and translated to English using GPU acceleration for ultra-fast inference It's 100% free, even for commercial use And runs locally Source code: [https://gi...
2025-07-23T12:58:47
https://i.redd.it/pxwmiaqagmef1.png
Kutalia
i.redd.it
1970-01-01T00:00:00
0
{}
1m78kyc
false
null
t3_1m78kyc
/r/LocalLLaMA/comments/1m78kyc/local_crossplatform_speechtospeech_and_realtime/
false
false
default
34
{'enabled': True, 'images': [{'id': 'pxwmiaqagmef1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/pxwmiaqagmef1.png?width=108&crop=smart&auto=webp&s=e713889b96e5cfca6b45f1450adc73c7dae99bb1', 'width': 108}, {'height': 175, 'url': 'https://preview.redd.it/pxwmiaqagmef1.png?width=216&crop=smart&auto=web...
Local cross-platform speech-to-speech and real-time captioning with OpenAI Whisper, Vulkan GPU acceleration and more
3
2025-07-23T12:53:29
https://www.reddit.com/gallery/1m78gq4
Kutalia
reddit.com
1970-01-01T00:00:00
0
{}
1m78gq4
false
null
t3_1m78gq4
/r/LocalLLaMA/comments/1m78gq4/local_crossplatform_speechtospeech_and_realtime/
false
false
https://b.thumbs.redditm…6Kp1uQl5Razs.jpg
3
null
Which quantization approach is the way to go? (llama.cpp)
3
Hey, I wanted to check if I'm missing anything relevant in performance or quality with my quant strategy. My setup is an EPYC Rome (no avx512 instruction set) with 512 GB RAM and a bunch of 3060 / 3090s. The inference engine is llama.cpp and I run almost everything large (r1, q3 235, q3 480) in UD-Q4\_K\_XL, while Ki...
2025-07-23T11:59:22
https://www.reddit.com/r/LocalLLaMA/comments/1m77az5/which_quantization_approach_is_the_way_to_go/
pixelterpy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m77az5
false
null
t3_1m77az5
/r/LocalLLaMA/comments/1m77az5/which_quantization_approach_is_the_way_to_go/
false
false
self
3
null
Building a p2p inference engine in rust and hugging face
3
Title - the goal is to be able to run 70b models for free using p2p sharding like BitTorrent. Have a lil node network! Anyone building in rust/wasm?? I’m a python / ts dev at heart so it’s going to be a steep learning curve!
2025-07-23T11:51:24
https://www.reddit.com/r/LocalLLaMA/comments/1m775h2/building_a_p2p_inference_engine_in_rust_and/
earningtheewage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m775h2
false
null
t3_1m775h2
/r/LocalLLaMA/comments/1m775h2/building_a_p2p_inference_engine_in_rust_and/
false
false
self
3
null
What do new architectures offer and what are their limits?
6
So I’ve been diving into alternative architectures to transformers recently, and I came across a few interesting ones. liquid foundation models (lfm), Mamba (ssm based) and RWKV. I’m curious about what these new architectures offer and what their limitations are. From what I understand, they all seem to be better at ha...
2025-07-23T11:10:30
https://www.reddit.com/r/LocalLLaMA/comments/1m76df6/what_do_new_architectures_offer_and_what_are/
ba2sYd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m76df6
false
null
t3_1m76df6
/r/LocalLLaMA/comments/1m76df6/what_do_new_architectures_offer_and_what_are/
false
false
self
6
null
Many models that are technically non-reasoning can still reason, just without a dedicated CoT because they can initiate it whenever it's needed
1
2025-07-23T11:04:47
https://www.reddit.com/gallery/1m769o2
Longjumping_Spot5843
reddit.com
1970-01-01T00:00:00
0
{}
1m769o2
false
null
t3_1m769o2
/r/LocalLLaMA/comments/1m769o2/many_models_that_are_technically_nonreasoning_can/
false
false
https://b.thumbs.redditm…Up3MMlka6fxs.jpg
1
null
ADHD & LLM development is a risky cocktail
46
So like the title suggests, I have ADHD; which on a good day is a double edged sword in terms of learning new things. I gave a demo to the director of engineering at my job with a Unity chat application that was largely powered by a Unity Asset which is a C# wrapper of llama.cpp. My director was like "I'm going on va...
2025-07-23T10:51:11
https://www.reddit.com/r/LocalLLaMA/comments/1m760rq/adhd_llm_development_is_a_risky_cocktail/
Zichaelpathic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m760rq
false
null
t3_1m760rq
/r/LocalLLaMA/comments/1m760rq/adhd_llm_development_is_a_risky_cocktail/
false
false
self
46
null
Generative AI with Diffusion Models requires 95% acc to pass, but I can't get any model above 85%
0
I'm doing the Nvidia Generative AI with Diffusion Models course, just missing the assessment to finish it. The problem is that it requires 95% accuracy to pass and I can't manage to get that, only 85%. Has anyone done the course? What was your experience, and how did you reach 95%?
2025-07-23T10:50:11
https://www.reddit.com/r/LocalLLaMA/comments/1m7604m/generative_ai_with_diffusion_models_requires_95/
mr_house7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7604m
false
null
t3_1m7604m
/r/LocalLLaMA/comments/1m7604m/generative_ai_with_diffusion_models_requires_95/
false
false
self
0
null
Getting a model run with vLLM and 7900 XTX
5
Hi, I have 2x 7900 XTX and not getting any model run with them in a docker. docker pull rocm/vllm:rocm6.4.1_vllm_0.9.1_20250702 docker run -it \ --dns=1.1.1.1 \ --dns=8.8.8.8 \ --network=host \ --group-add=video \ --ipc=host \ --cap-add=SYS_PTRACE \ --security-o...
2025-07-23T10:20:53
https://www.reddit.com/r/LocalLLaMA/comments/1m75i0b/getting_a_model_run_with_vllm_and_7900_xtx/
Rich_Artist_8327
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m75i0b
false
null
t3_1m75i0b
/r/LocalLLaMA/comments/1m75i0b/getting_a_model_run_with_vllm_and_7900_xtx/
false
false
self
5
null
Llama?
0
Among the open-source models that can be deployed on an RTX 4090, which one is best in terms of comprehensive performance?
2025-07-23T10:10:39
https://www.reddit.com/r/LocalLLaMA/comments/1m75bwe/llama/
jeremysse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m75bwe
false
null
t3_1m75bwe
/r/LocalLLaMA/comments/1m75bwe/llama/
false
false
self
0
null
Get your hands on Nvidia GB200 NVL72 for free!
0
Nvidia flagship GB200 NVL72 is available 08/04 - 08/05 (bare metal root access!). Anyone interested just ask.
2025-07-23T10:08:15
https://i.redd.it/hvtv2vs4llef1.png
GPTrack_ai
i.redd.it
1970-01-01T00:00:00
0
{}
1m75afx
false
null
t3_1m75afx
/r/LocalLLaMA/comments/1m75afx/get_your_hands_on_nvidia_gb200_nvl72_for_free/
false
false
default
0
{'enabled': True, 'images': [{'id': 'hvtv2vs4llef1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/hvtv2vs4llef1.png?width=108&crop=smart&auto=webp&s=9e33c92bbe8127e45dad1e99e8801b6f23ad3da1', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/hvtv2vs4llef1.png?width=216&crop=smart&auto=web...
Why do many papers skip hyperparameter search?
11
I've been reading papers where the main contribution is creating a synthetic dataset for a specific task, followed by fine-tuning an LLM on it. One thing I keep noticing: most of them don't seem to perform hyperparameter tuning (e.g., learning rate, epochs, weight decay) using a validation set. Instead, they just reuse...
2025-07-23T09:50:51
https://www.reddit.com/r/LocalLLaMA/comments/1m7503r/why_do_many_papers_skip_hyperparameter_search/
hwanchang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m7503r
false
null
t3_1m7503r
/r/LocalLLaMA/comments/1m7503r/why_do_many_papers_skip_hyperparameter_search/
false
false
self
11
null
Qwen3-Coder is VERY expensive maybe one day You can run it locally.
0
https://preview.redd.it/…047e3b464b3abe
2025-07-23T09:06:16
https://www.reddit.com/r/LocalLLaMA/comments/1m74b87/qwen3coder_is_very_expensive_maybe_one_day_you/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m74b87
false
null
t3_1m74b87
/r/LocalLLaMA/comments/1m74b87/qwen3coder_is_very_expensive_maybe_one_day_you/
false
false
https://b.thumbs.redditm…UilmSEBETViA.jpg
0
null
What do you do to keep up to date on new research, trends and more?
1
I've been using LocalLLaMA, newsletters and much more for quite some time now, but I think both can be somewhat saturated at times and I still often feel like I miss out on stuff. Therefore, I've been looking for a more consolidated way to read and learn about new research, releases and more. I was thinking X, but never ...
2025-07-23T08:48:49
https://www.reddit.com/r/LocalLLaMA/comments/1m741so/what_do_you_do_to_keep_up_to_date_on_new_research/
Professional_Pop_240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m741so
false
null
t3_1m741so
/r/LocalLLaMA/comments/1m741so/what_do_you_do_to_keep_up_to_date_on_new_research/
false
false
self
1
null
Qwen 3 Coder is actually pretty decent in my testing
213
I have a semi complex web project that I use with Claude Code. a few days ago I used Kimi K2 (via Groq Q4) with Claude Code (CCR) to add a permissions system / ACL into my web project to lock down certain people from doing certain things. I use SuperClaude and a 1200 line context/architecture document, which basica...
2025-07-23T08:43:00
https://www.reddit.com/r/LocalLLaMA/comments/1m73yrb/qwen_3_coder_is_actually_pretty_decent_in_my/
Hodler-mane
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m73yrb
false
null
t3_1m73yrb
/r/LocalLLaMA/comments/1m73yrb/qwen_3_coder_is_actually_pretty_decent_in_my/
false
false
self
213
null
Need Help - Local LLM & Lots of Files! (Privacy Concerns)
1
Hey everyone, I'm trying to get an LLM to analyze a bunch of documents (around 30 PDFs or TXT files), but I’m running into some issues. These are pretty sensitive communications, so keeping everything local is a must – no sending them off to online services! I've been playing around with LM Studio, but it seems like ...
2025-07-23T08:31:10
https://www.reddit.com/r/LocalLLaMA/comments/1m73snr/need_help_local_llm_lots_of_files_privacy_concerns/
AreBee73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m73snr
false
null
t3_1m73snr
/r/LocalLLaMA/comments/1m73snr/need_help_local_llm_lots_of_files_privacy_concerns/
false
false
self
1
null
Need Help - Local LLM & Lots of Files! (Privacy Concerns)
2
Hey everyone, I'm trying to get an LLM to analyze a bunch of documents (around 30 PDFs or TXT files), but I’m running into some issues. These are pretty sensitive communications, so keeping everything local is a must – no sending them off to online services! I've been playing around with LM Studio, but it seems like ...
2025-07-23T08:26:39
https://www.reddit.com/r/LocalLLaMA/comments/1m73q8n/need_help_local_llm_lots_of_files_privacy_concerns/
AreBee73
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m73q8n
false
null
t3_1m73q8n
/r/LocalLLaMA/comments/1m73q8n/need_help_local_llm_lots_of_files_privacy_concerns/
false
false
self
2
null
Has anyone noticed that the gemma3n model doesn't look like a gemma, but more like a gemini mini?
5
When I installed this model on a Samsung phone more than a month ago, I didn't find much. When I tested other gemma models today, I found that the output of 3n is very different from other gemma models, and it is also very different from gemini 2.5 flash models. The most similar one is gemini 2.5pro. https://preview.r...
2025-07-23T08:23:19
https://www.reddit.com/r/LocalLLaMA/comments/1m73ohk/has_anyone_noticed_that_the_gemma3n_model_doesnt/
Mountain_TANG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m73ohk
false
null
t3_1m73ohk
/r/LocalLLaMA/comments/1m73ohk/has_anyone_noticed_that_the_gemma3n_model_doesnt/
false
false
https://b.thumbs.redditm…fjBeYTW1RLcg.jpg
5
null
Anyone seen safety regressions after fine-tuning LLaMA or Mistral on clean data?
2
Hey guys I was recently looking at this paper, which mentions that finetuning models on even benign datasets (both full FT and LoRA) can cause safety regressions : [https://arxiv.org/abs/2310.03693](https://arxiv.org/abs/2310.03693) Have you ever observed a model getting less safe / more likely to respond to off-limi...
2025-07-23T08:20:36
https://www.reddit.com/r/LocalLLaMA/comments/1m73n0t/anyone_seen_safety_regressions_after_finetuning/
whalefal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m73n0t
false
null
t3_1m73n0t
/r/LocalLLaMA/comments/1m73n0t/anyone_seen_safety_regressions_after_finetuning/
false
false
self
2
null
🔓 I built Hearth-UI — A fully-featured desktop app for chatting with local LLMs (Ollama-ready, attachments, themes, markdown, and more)
0
Hey everyone! 👋 I recently put together a desktop AI chat interface called **Hearth-UI**, made for anyone using [Ollama](https://ollama.com/) for local LLMs like LLaMA3, Mistral, Gemma, etc. It includes everything I wish existed in a typical Ollama UI — and it’s fully offline, customizable, and open-source. 🧠 Feat...
2025-07-23T06:58:00
https://www.reddit.com/r/LocalLLaMA/comments/1m72d5y/i_built_hearthui_a_fullyfeatured_desktop_app_for/
Vast-Helicopter-3719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m72d5y
false
null
t3_1m72d5y
/r/LocalLLaMA/comments/1m72d5y/i_built_hearthui_a_fullyfeatured_desktop_app_for/
false
false
https://b.thumbs.redditm…wUaHtL6ePdmM.jpg
0
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'h...
How do you solve this dilemma?
0
Even if we use a smart model to fully automate the process, the quality will be poor and the cost will be high. It seems very difficult to completely eliminate manual work.
2025-07-23T06:15:04
https://i.redd.it/e5bcw2jjfkef1.jpeg
dahara111
i.redd.it
1970-01-01T00:00:00
0
{}
1m71oqv
false
null
t3_1m71oqv
/r/LocalLLaMA/comments/1m71oqv/how_do_you_solve_this_dilemma/
false
false
default
0
{'enabled': True, 'images': [{'id': 'e5bcw2jjfkef1', 'resolutions': [{'height': 54, 'url': 'https://preview.redd.it/e5bcw2jjfkef1.jpeg?width=108&crop=smart&auto=webp&s=f2ba9fd3be1478ce1ce7883547f7ea4556dda1a2', 'width': 108}, {'height': 108, 'url': 'https://preview.redd.it/e5bcw2jjfkef1.jpeg?width=216&crop=smart&auto=w...
unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF · Hugging Face
57
2025-07-23T05:58:37
https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF
Fun-Wolf-2007
huggingface.co
1970-01-01T00:00:00
0
{}
1m71f20
false
null
t3_1m71f20
/r/LocalLLaMA/comments/1m71f20/unslothqwen3coder480ba35binstructgguf_hugging_face/
false
false
default
57
{'enabled': False, 'images': [{'id': 'gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=108&crop=smart&auto=webp&s=7ad6d1b6c4559472693b7af1de31e24e4a8023a3', 'width': 108}, {'height': 116, 'url': 'h...
MCPEval: Automatic MCP-based Deep Evaluation for AI Agent Models
2
Paper: [https://arxiv.org/pdf/2507.12806](https://arxiv.org/pdf/2507.12806) Code: [https://github.com/SalesforceAIResearch/MCPEval](https://github.com/SalesforceAIResearch/MCPEval)
2025-07-23T05:18:43
https://www.reddit.com/r/LocalLLaMA/comments/1m70ra1/mcpeval_automatic_mcpbased_deep_evaluation_for_ai/
palindsay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m70ra1
false
null
t3_1m70ra1
/r/LocalLLaMA/comments/1m70ra1/mcpeval_automatic_mcpbased_deep_evaluation_for_ai/
false
false
self
2
null
Alibaba’s upgraded Qwen3 235B-A22B 2507 is now the most intelligent non-reasoning model.
272
Qwen3 235B 2507 scores 60 on the Artificial Analysis Intelligence Index, surpassing Claude 4 Opus and Kimi K2 (both 58), and DeepSeek V3 0324 and GPT-4.1 (both 53). This marks a 13-point leap over the May 2025 non-reasoning release and brings it within two points of the May 2025 reasoning variant.
2025-07-23T05:12:16
https://www.reddit.com/gallery/1m70n7q
Fantastic-Emu-3819
reddit.com
1970-01-01T00:00:00
0
{}
1m70n7q
false
null
t3_1m70n7q
/r/LocalLLaMA/comments/1m70n7q/alibabas_upgraded_qwen3_235ba22b_2507_is_now_the/
false
false
https://b.thumbs.redditm…90mJNUYMTKbw.jpg
272
null
Noob: In theory what set up would you need to run the best LLMs locally at the same speed as the public LLM?
2
Hello, I wanted to ask: in theory, what setup would be able to run such models at superspeed? Is such a setup possible with $30k? Or would you need way more, $100-500k? I'm not familiar with setups or common knowledge within this realm. Thank you.
2025-07-23T04:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1m704yl/noob_in_theory_what_set_up_would_you_need_to_run/
Prudent_Garden9033
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m704yl
false
null
t3_1m704yl
/r/LocalLLaMA/comments/1m704yl/noob_in_theory_what_set_up_would_you_need_to_run/
false
false
self
2
null
Kimi K2 vs Qwen3 Coder 480B
103
I’ve been testing Qwen3-Coder-480B (on Hyperbolics) and Kimi K2 (on Groq) for Rust and Go projects. Neither model is built for deep problem-solving, but in real-world use, the differences are pretty clear. Qwen3-Coder often ignores system prompts, struggles with context, and its tool calls are rigid, like it’s just...
2025-07-23T04:34:42
https://www.reddit.com/r/LocalLLaMA/comments/1m6zz1v/kimi_k2_vs_qwen3_coder_480b/
Ok-Pattern9779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6zz1v
false
null
t3_1m6zz1v
/r/LocalLLaMA/comments/1m6zz1v/kimi_k2_vs_qwen3_coder_480b/
false
false
self
103
null
UI/UX benchmark update 7/22: Newest Qwen models added, Qwen3 takes the lead in terms of win rate (though still early)
73
You probably already know about my [benchmark](https://www.designarena.ai/), but here's [context](https://www.reddit.com/r/LocalLLaMA/comments/1lxth6s/comment/n2qoqtk/?context=3) if you missed it. The tldr is that it's a crowdsource benchmark that takes human preferences on frontend and image generations from different...
2025-07-23T04:25:53
https://i.redd.it/lcjgeavzvjef1.png
Accomplished-Copy332
i.redd.it
1970-01-01T00:00:00
0
{}
1m6ztb2
false
null
t3_1m6ztb2
/r/LocalLLaMA/comments/1m6ztb2/uiux_benchmark_update_722_newest_qwen_models/
false
false
default
73
{'enabled': True, 'images': [{'id': 'lcjgeavzvjef1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/lcjgeavzvjef1.png?width=108&crop=smart&auto=webp&s=5d4b029e6012124dc9f449076bfcd1f4cfdf2ac1', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/lcjgeavzvjef1.png?width=216&crop=smart&auto=web...
[Github Repo] - Use Qwen3 coder or any other LLM provider with Claude Code
14
I saw this claude code router repo on github, but was broken for me, so I rewrote the thing in Go. Is called [Claude Code Open](https://github.com/Davincible/claude-code-open) Now you can simply `CCO_API_KEY="<open router key>" cco code` and then select `openrouter,qwen/qwen3-coder` as model and voila. Also blocks any...
2025-07-23T04:12:56
https://www.reddit.com/r/LocalLLaMA/comments/1m6zkmm/github_repo_use_qwen3_coder_or_any_other_llm/
davincible
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6zkmm
false
null
t3_1m6zkmm
/r/LocalLLaMA/comments/1m6zkmm/github_repo_use_qwen3_coder_or_any_other_llm/
false
false
self
14
{'enabled': False, 'images': [{'id': 'uAwD8gw9JjbYrLYu8QEcbxdyqjGulmC1fIpWWjJiViY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uAwD8gw9JjbYrLYu8QEcbxdyqjGulmC1fIpWWjJiViY.png?width=108&crop=smart&auto=webp&s=68a0ec129f7959f5fcb0d85891b6490389d0313a', 'width': 108}, {'height': 108, 'url': 'h...
Consumer usecase for on-device AI - an Android app to detect scams
6
Hey folks, I've built an app called **Protexo**, which uses **Google's Gemma 3** LLM **entirely on-device** to detect scam messages across SMS, WhatsApp, and other messaging apps. The goal is to stop social engineering scams *before* they escalate — especially those that start with a friendly human-sounding message. ...
2025-07-23T04:03:00
https://www.reddit.com/r/LocalLLaMA/comments/1m6zdx4/consumer_usecase_for_ondevice_ai_an_android_app/
Basic-Donut1740
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6zdx4
false
null
t3_1m6zdx4
/r/LocalLLaMA/comments/1m6zdx4/consumer_usecase_for_ondevice_ai_an_android_app/
false
false
self
6
{'enabled': False, 'images': [{'id': 'Q95cr2UVYw13fU7wf6vwGXNosGBaerSRSek7ztxKJ1Q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/Q95cr2UVYw13fU7wf6vwGXNosGBaerSRSek7ztxKJ1Q.png?width=108&crop=smart&auto=webp&s=e7bfd1e7903549de4741aa59fe06cc42dcbff2e3', 'width': 108}, {'height': 216, 'url': '...
[Research] Thought Anchors: Understanding How Qwen3-0.6B vs DeepSeek-R1-Distill-1.5B Actually Reason - Different Cognitive Architectures Revealed
24
Hey r/LocalLLaMA, I just published research on "thought anchors" - a method to analyze which specific reasoning steps matter most for task success in locally-runnable models. Thought this community would find the results interesting since it directly compares two popular local models. **TL;DR: Qwen3-0.6B and DeepSeek...
2025-07-23T04:00:56
https://www.reddit.com/r/LocalLLaMA/comments/1m6zce0/research_thought_anchors_understanding_how/
asankhs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6zce0
false
null
t3_1m6zce0
/r/LocalLLaMA/comments/1m6zce0/research_thought_anchors_understanding_how/
false
false
self
24
{'enabled': False, 'images': [{'id': 'LUwOzF1qaWXXZdKFKtNyq98eHv6n8KWbzzx3ALLxitY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LUwOzF1qaWXXZdKFKtNyq98eHv6n8KWbzzx3ALLxitY.png?width=108&crop=smart&auto=webp&s=76a98416b90c3288a04cac47b99811464ff316e5', 'width': 108}, {'height': 108, 'url': 'h...
Multi 7900XTX System build for LLMs/local AI
1
[removed]
2025-07-23T03:49:48
https://www.reddit.com/r/LocalLLaMA/comments/1m6z4p6/multi_7900xtx_system_build_for_llmslocal_ai/
LogicalLeader007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6z4p6
false
null
t3_1m6z4p6
/r/LocalLLaMA/comments/1m6z4p6/multi_7900xtx_system_build_for_llmslocal_ai/
false
false
self
1
null
Multi 7900XTX System Build for Local Ai and LLMs
1
[removed]
2025-07-23T03:48:25
https://i.redd.it/sjrk6g3sojef1.png
LogicalLeader007
i.redd.it
1970-01-01T00:00:00
0
{}
1m6z3ra
false
null
t3_1m6z3ra
/r/LocalLLaMA/comments/1m6z3ra/multi_7900xtx_system_build_for_local_ai_and_llms/
false
false
default
1
{'enabled': True, 'images': [{'id': 'sjrk6g3sojef1', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/sjrk6g3sojef1.png?width=108&crop=smart&auto=webp&s=c5e2ffac50a25f124b5aafab646e4166123d3770', 'width': 108}, {'height': 221, 'url': 'https://preview.redd.it/sjrk6g3sojef1.png?width=216&crop=smart&auto=we...
Is it just me or does building local multi-agent LLM systems kind of suck right now?
3
been messing around with local multi-agent setups and it’s honestly kind of a mess. juggling agent comms, memory, task routing, fallback logic, all of it just feels duct-taped together. i’ve tried using queues, redis, even writing my own little message handlers, but nothing really scales cleanly. langchain is fine if ...
2025-07-23T03:01:19
https://www.reddit.com/r/LocalLLaMA/comments/1m6y6c7/is_it_just_me_or_does_building_local_multiagent/
Soggy-Guava-1218
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6y6c7
false
null
t3_1m6y6c7
/r/LocalLLaMA/comments/1m6y6c7/is_it_just_me_or_does_building_local_multiagent/
false
false
self
3
null
How to prevent bad/illegal word queries
0
I have an article writing service created for my SEO SaaS. It does keyword research, generates topical clusters and articles. The user searches for keywords and then eventually all this data is passed to the LLM for generating the article. I was wondering what if a user searches for some bad or illegal words and uses the service...
2025-07-23T02:43:10
https://www.reddit.com/r/LocalLLaMA/comments/1m6xt2d/how_to_prevent_badillegal_word_queries/
Smart_Chain_0316
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6xt2d
false
null
t3_1m6xt2d
/r/LocalLLaMA/comments/1m6xt2d/how_to_prevent_badillegal_word_queries/
false
false
self
0
null
Injecting custom embeddings into LLaMA 3.2 GGUF model
0
I'm working on a low-level experimental setup where, instead of just using embeddings generated by the model, I inject custom embeddings directly into a LLaMA model (specifically a GGUF version using llama.cpp). These embeddings come from another domain (e.g. images), but I project them into the same space as LLaMA’s ...
2025-07-23T02:40:58
https://www.reddit.com/r/LocalLLaMA/comments/1m6xrfj/injecting_custom_embeddings_into_llama_32_gguf/
Old-Toe6442
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6xrfj
false
null
t3_1m6xrfj
/r/LocalLLaMA/comments/1m6xrfj/injecting_custom_embeddings_into_llama_32_gguf/
false
false
self
0
null
How to play a character as user with Tavern, Kobold, llama 3.2b?
0
Hi, I'm pretty new to all this and running a modest laptop with 8gb ram. I created a character (magicuser) in tavernai and wanted to play as that character. chatgpt told me that I could do that with prompts and proceded to give me bad advice for many hours...'this is how you do it' me: didn't work, 'well this will defi...
2025-07-23T02:27:08
https://www.reddit.com/r/LocalLLaMA/comments/1m6xhgg/how_to_play_a_character_as_user_with_tavern/
Lephuey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6xhgg
false
null
t3_1m6xhgg
/r/LocalLLaMA/comments/1m6xhgg/how_to_play_a_character_as_user_with_tavern/
false
false
self
0
null
I'm looking for an Uncensored LLM to produce extremely spicy prompts - What would you recommend?
0
I'm looking for an uncensored LLM I can run on LM Studio that specializes in producing highly spicy prompts. Sometimes I just don't know what I want, or end up producing too many similar images and would rather be surprised. Asking an image generation model for creativity is not going to work - it wants highly specific...
2025-07-23T02:19:29
https://www.reddit.com/r/LocalLLaMA/comments/1m6xbs7/im_looking_for_an_uncensored_llm_to_produce/
Whipit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6xbs7
false
null
t3_1m6xbs7
/r/LocalLLaMA/comments/1m6xbs7/im_looking_for_an_uncensored_llm_to_produce/
false
false
self
0
null
How does Gemini 2.5 Pro natively support 1M tokens of context? Is it using YaRN, or some kind of disguised chunking?
8
I’m trying to understand how models like **Gemini 2.5 Pro** achieve *native* 1 million token context windows. From what I’ve seen in models like **Qwen3** or **LLaMA**, they use techniques like **RoPE scaling** (e.g., YaRN, NTK-aware RoPE, Position Interpolation) to extrapolate context beyond what was trained. These m...
2025-07-23T02:19:28
https://www.reddit.com/r/LocalLLaMA/comments/1m6xbru/how_does_gemini_25_pro_natively_support_1m_tokens/
Ranteck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6xbru
false
null
t3_1m6xbru
/r/LocalLLaMA/comments/1m6xbru/how_does_gemini_25_pro_natively_support_1m_tokens/
false
false
self
8
null
World's first AI assisted fiction writing competition
1
[removed]
2025-07-23T02:11:20
https://www.reddit.com/r/WritingWithAI/comments/1lzhfyf/the_worlds_first_aiassisted_writing_competition/
YoavYariv
reddit.com
1970-01-01T00:00:00
0
{}
1m6x5p7
false
null
t3_1m6x5p7
/r/LocalLLaMA/comments/1m6x5p7/worlds_first_ai_assisted_fiction_writing/
false
false
default
1
null
World's first AI assisted fiction writing competition - Voltage Verse
1
[removed]
2025-07-23T02:09:38
https://www.reddit.com/r/WritingWithAI/comments/1lzhfyf/the_worlds_first_aiassisted_writing_competition/
YoavYariv
reddit.com
1970-01-01T00:00:00
0
{}
1m6x4es
false
null
t3_1m6x4es
/r/LocalLLaMA/comments/1m6x4es/worlds_first_ai_assisted_fiction_writing/
false
false
default
1
null
Added Qwen3-Coder to my VsCode extension
0
Anyone looking to test Qwen3-Coder: I just added it to my extension so I can play with it. You need to sign up at [qwen.ai](http://qwen.ai) for API access, and you should even get free credits to try it out. Let me know if you have any issues. I mostly created the extension for my own use, but it works awesome, and it...
2025-07-23T01:51:05
https://www.reddit.com/r/LocalLLaMA/comments/1m6wq77/added_qwen3coder_to_my_vscode_extension/
PositiveEnergyMatter
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6wq77
false
null
t3_1m6wq77
/r/LocalLLaMA/comments/1m6wq77/added_qwen3coder_to_my_vscode_extension/
false
false
self
0
null
Qwen3-Coder Unsloth dynamic GGUFs
267
We made dynamic 2-bit to 8-bit Unsloth quants for the 480B model! Dynamic 2-bit needs 182GB of space (down from 512GB). Also, we're making **1M context length variants**! You can achieve >6 tokens/s on **182GB unified memory or 158GB RAM + 24GB VRAM** via MoE offloading. You do not need 182GB of VRAM, since llama...
2025-07-23T01:38:45
https://i.redd.it/s9cwrvwg1jef1.png
danielhanchen
i.redd.it
1970-01-01T00:00:00
0
{}
1m6wgs7
false
null
t3_1m6wgs7
/r/LocalLLaMA/comments/1m6wgs7/qwen3coder_unsloth_dynamic_ggufs/
false
false
https://b.thumbs.redditm…YhvapsX-O7BM.jpg
267
{'enabled': True, 'images': [{'id': 'ERO7uofUircHQEW6vRJ41kGEOL83JHug3Yrw0FJ_7oM', 'resolutions': [{'height': 115, 'url': 'https://preview.redd.it/s9cwrvwg1jef1.png?width=108&crop=smart&auto=webp&s=7f6a9ffee96aa9dcd39f3c41d338a8e758af4d75', 'width': 108}, {'height': 231, 'url': 'https://preview.redd.it/s9cwrvwg1jef1.pn...
Recent Qwen Benchmark Scores are Questionable
389
2025-07-23T01:31:34
https://i.redd.it/8gjn0yhf1jef1.png
Electronic_Ad8889
i.redd.it
1970-01-01T00:00:00
0
{}
1m6wb5o
false
null
t3_1m6wb5o
/r/LocalLLaMA/comments/1m6wb5o/recent_qwen_benchmark_scores_are_questionable/
false
false
default
389
{'enabled': True, 'images': [{'id': '8gjn0yhf1jef1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/8gjn0yhf1jef1.png?width=108&crop=smart&auto=webp&s=f6a6ec17b5a0a0b95756eb50adde48b41ee2f601', 'width': 108}, {'height': 132, 'url': 'https://preview.redd.it/8gjn0yhf1jef1.png?width=216&crop=smart&auto=web...
How are people staging AI training datasets from NVMe → DDR5 → GPU VRAM for fine-tuning on RTX 5090s?
11
I’m building a structured fine-tuning pipeline for a legal/finance AI assistant (think deal-closure workflows, private equity logic, etc.) using Pop!\_OS 22.04 for cleaner NVIDIA driver control and GPU memory isolation. We’re running PyTorch (nightly) builds to fully unlock Blackwell compatibility, along with bitsan...
2025-07-23T00:55:11
https://v.redd.it/m3v13th5vief1
DJAI9LAB
v.redd.it
1970-01-01T00:00:00
0
{}
1m6vj8o
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/m3v13th5vief1/DASHPlaylist.mpd?a=1755824126%2CNThjNDc5MmZhNjMxZjIzZTczNDhhZTcyMDM4NTNhMTMwZmNmYzg3NDBiM2YyZjE5MzE2YWQzNjQwMmIzN2Y0Mg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/m3v13th5vief1/DASH_720.mp4?source=fallback', 'ha...
t3_1m6vj8o
/r/LocalLLaMA/comments/1m6vj8o/how_are_people_staging_ai_training_datasets_from/
false
false
https://external-preview…d386547e5aac176b
11
{'enabled': False, 'images': [{'id': 'NnNtZzZ1aDV2aWVmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NnNtZzZ1aDV2aWVmMQ_TONUx3ShmleBmxHUm5WhhyHrbQHADnnzginEsV9Wo.png?width=108&crop=smart&format=pjpg&auto=webp&s=930f76891585f27565f3d929f2d1d4df9fbbe...
Just tried higgsaudio v2: a new multilingual TTS model, pretty impressed
50
https://preview.redd.it/…ecent models. 
2025-07-23T00:45:03
https://www.reddit.com/r/LocalLLaMA/comments/1m6vbds/just_tried_higgsaudio_v2_a_new_multilingual_tts/
Sudden-Tap3484
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1m6vbds
false
null
t3_1m6vbds
/r/LocalLLaMA/comments/1m6vbds/just_tried_higgsaudio_v2_a_new_multilingual_tts/
false
false
https://b.thumbs.redditm…j2laEkKgzAUI.jpg
50
null