| column | dtype | min | max |
|---|---|---|---|
| title | string (length) | 1 | 300 |
| score | int64 | 0 | 8.54k |
| selftext | string (length) | 0 | 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 | 2026-03-04 02:14:14 |
| url | string (length) | 0 | 878 |
| author | string (length) | 3 | 20 |
| domain | string (length) | 0 | 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 | 2026-02-19 14:51:53 |
| gilded | int64 | 0 | 2 |
| gildings | string (7 classes) | — | — |
| id | string (length) | 7 | 7 |
| locked | bool (2 classes) | — | — |
| media | string (length) | 646 | 1.8k |
| name | string (length) | 10 | 10 |
| permalink | string (length) | 33 | 82 |
| spoiler | bool (2 classes) | — | — |
| stickied | bool (2 classes) | — | — |
| thumbnail | string (length) | 4 | 213 |
| ups | int64 | 0 | 8.54k |
| preview | string (length) | 301 | 5.01k |
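The column summary above is the kind of per-column report a dataset viewer produces. A minimal sketch of reproducing the "stringlengths" and int64 min/max statistics with pandas; the toy rows below are illustrative, not drawn from the dump:

```python
import pandas as pd

# A toy frame mimicking three of the dataset's columns (values invented for illustration).
df = pd.DataFrame({
    "title": ["Give me some ideas", "hardware help"],
    "score": [5, 1],
    "selftext": ["Good morning, everyone.", ""],
})

def stringlengths(series):
    # Reproduce the viewer's "stringlengths" summary: (min, max) character length.
    lengths = series.str.len()
    return int(lengths.min()), int(lengths.max())

print(stringlengths(df["title"]))     # (13, 18)
print(stringlengths(df["selftext"]))  # (0, 23)
print(int(df["score"].min()), int(df["score"].max()))
```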
Benchmark Open Models, understanding, scale
1
[removed]
2025-07-04T08:00:07
https://www.reddit.com/r/LocalLLaMA/comments/1lrd6f1/benchmark_open_models_understanding_scale/
Live-Efficiency-1378
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrd6f1
false
null
t3_1lrd6f1
/r/LocalLLaMA/comments/1lrd6f1/benchmark_open_models_understanding_scale/
false
false
self
1
null
Give me some ideas
5
Good morning, everyone. I wanted to discuss with you some ideas for getting the most out of my 5080 (it has 16 GB). What AI applications could I use it for? Currently, I can run Flux Dev on FP8 smoothly, and I can also run models as large as Devstral 24B on IQ2_XXS or Qwen3-30B-A3B on IQ3_XXS (the former at 48-56 tk/s...
2025-07-04T06:57:44
https://www.reddit.com/r/LocalLLaMA/comments/1lrc8pk/give_me_some_ideas/
ajmusic15
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrc8pk
false
null
t3_1lrc8pk
/r/LocalLLaMA/comments/1lrc8pk/give_me_some_ideas/
false
false
self
5
null
If you know, you know
0
2025-07-04T06:40:25
https://i.redd.it/56xflu8hzsaf1.jpeg
Comed_Ai_n
i.redd.it
1970-01-01T00:00:00
0
{}
1lrbyzm
false
null
t3_1lrbyzm
/r/LocalLLaMA/comments/1lrbyzm/if_you_know_you_know/
false
false
default
0
{'enabled': True, 'images': [{'id': '56xflu8hzsaf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/56xflu8hzsaf1.jpeg?width=108&crop=smart&auto=webp&s=8b7e22430fefc536a3214013045d0a28294f8444', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/56xflu8hzsaf1.jpeg?width=216&crop=smart&auto=w...
Ollama based AI presentation generator and API - Gamma Alternative
4
My roommates and I are building Presenton, an AI presentation generator that can run entirely on your own device. It has Ollama built in, so all you need to do is add a Pexels (free image provider) API key and start generating high-quality presentations, which can be exported to PPTX and PDF. It even works on CPU (can ...
2025-07-04T06:38:09
https://i.redd.it/zmte2fcxysaf1.gif
goodboydhrn
i.redd.it
1970-01-01T00:00:00
0
{}
1lrbxoo
false
null
t3_1lrbxoo
/r/LocalLLaMA/comments/1lrbxoo/ollama_based_ai_presentation_generator_and_api/
false
false
https://a.thumbs.redditm…DEvod019YVp0.jpg
4
{'enabled': True, 'images': [{'id': 'kQ-OvoBjzKYx152-e8pEqn6LSKjzoeJlRFRBkM6Q8tw', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/zmte2fcxysaf1.gif?width=108&crop=smart&format=png8&s=848b7d014150cf8fd658e6a0dad5ed97a2afa9ac', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/zmte2fcxysaf1.g...
Created an Open Source Conversation Response Path Exploration System using Monte Carlo Tree Search
364
Hey all! I'm creating a project that applies Monte Carlo Tree Search to LLM conversations. Instead of just generating the next response, it simulates entire conversation trees to find paths that achieve long-term goals. The initial draft version is up. Github: [https://github.com/MVPandey/CAE](https://github.com/MVPan...
2025-07-04T06:36:12
https://www.reddit.com/r/LocalLLaMA/comments/1lrbwmz/created_an_open_source_conversation_response_path/
ManavTheWorld
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrbwmz
false
null
t3_1lrbwmz
/r/LocalLLaMA/comments/1lrbwmz/created_an_open_source_conversation_response_path/
false
false
https://external-preview…510286bb2bcf024e
364
{'enabled': False, 'images': [{'id': 'g_tsBx-sQoZLGvWQ7PFRoKZ5UY7Qfo6eiDBT125d8YE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g_tsBx-sQoZLGvWQ7PFRoKZ5UY7Qfo6eiDBT125d8YE.png?width=108&crop=smart&auto=webp&s=4f2b459033ec7b1c73ba64efd65042a691fe84f0', 'width': 108}, {'height': 108, 'url': 'h...
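The post above applies Monte Carlo Tree Search to conversation planning; the linked repo is the real implementation. As a hedged, self-contained illustration of the four MCTS phases (UCB1 selection, expansion, random rollout, backpropagation) on a toy tree — the actions, depth, and reward below are invented, not the project's code:

```python
import math
import random

random.seed(0)

ACTIONS = [0, 1]   # toy "candidate replies" at each turn
DEPTH = 3          # toy conversation length
# Toy reward: sum of chosen actions, so always picking 1 is the optimal path.

class Node:
    def __init__(self, state, parent=None):
        self.state = state       # tuple of actions taken so far
        self.parent = parent
        self.children = {}       # action -> child Node
        self.visits = 0
        self.value = 0.0

    def ucb1(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def rollout(state):
    # Random playout to the end of the "conversation".
    while len(state) < DEPTH:
        state = state + (random.choice(ACTIONS),)
    return sum(state)

def mcts(iterations=2000):
    root = Node(())
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while len(node.state) < DEPTH and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=Node.ucb1)
        # 2. Expansion: add one untried child if non-terminal.
        if len(node.state) < DEPTH:
            action = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[action] = Node(node.state + (action,), parent=node)
            node = node.children[action]
        # 3. Simulation, then 4. Backpropagation.
        reward = rollout(node.state)
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommended first move = most-visited root child.
    return max(root.children, key=lambda a: root.children[a].visits)

best = mcts()
print("preferred opening action:", best)
```

With enough iterations the most-visited root child converges on the action whose subtree carries the higher expected reward, which is the same mechanism the post uses to steer toward long-term conversational goals.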
Looking for GPU advice for local LLM server (GIGABYTE G292-Z20 R1)
3
I'm planning to buy a GIGABYTE G292-Z20 server (32GB RAM) to run local LLMs. I’ll have 4–5 concurrent users, but only one model (16B–32B params) running at a time likely through Ollama + Open WebUI. I originally considered used AMD MI50s, but ROCm no longer supports them, so I’m now looking at alternatives. My budget...
2025-07-04T06:27:25
https://www.reddit.com/r/LocalLLaMA/comments/1lrbrn1/looking_for_gpu_advice_for_local_llm_server/
Dependent-Main5637
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrbrn1
false
null
t3_1lrbrn1
/r/LocalLLaMA/comments/1lrbrn1/looking_for_gpu_advice_for_local_llm_server/
false
false
self
3
null
Open-Source Conversation Optimization System using MCTS for Response Path Exploration
1
Hey everyone! Been working on this project that combines Monte Carlo Tree Search with LLMs to optimize conversations. Github: [https://github.com/MVPandey/CAE](https://github.com/MVPandey/CAE) (This is a mock UI demo. Real payload but non-realtime-UI generated with Claude). https://i.redd.it/re8x5zibwsaf1.gif **...
2025-07-04T06:25:10
https://www.reddit.com/r/LocalLLaMA/comments/1lrbqc2/opensource_conversation_optimization_system_using/
Special_Attention_57
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrbqc2
false
null
t3_1lrbqc2
/r/LocalLLaMA/comments/1lrbqc2/opensource_conversation_optimization_system_using/
false
false
https://external-preview…510286bb2bcf024e
1
{'enabled': False, 'images': [{'id': 'g_tsBx-sQoZLGvWQ7PFRoKZ5UY7Qfo6eiDBT125d8YE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/g_tsBx-sQoZLGvWQ7PFRoKZ5UY7Qfo6eiDBT125d8YE.png?width=108&crop=smart&auto=webp&s=4f2b459033ec7b1c73ba64efd65042a691fe84f0', 'width': 108}, {'height': 108, 'url': 'h...
Best emotionally advanced NSFW/NSFL Roleplay Local Ai model for my Low end Laptop(8gb Ram,no gpu, i3-1005G1) ?
0
Looking for a lightweight yet advanced local AI model, uncensored and capable of smooth, fast NSFW/NSFL roleplay chats on my potato laptop through KoboldCPP. Specs: Processor: Intel i3-1005G1. RAM: 8 GB. GPU: only an iGPU 🥲. Given my specs, I think models of 6B parameters will be appro...
2025-07-04T06:12:41
https://www.reddit.com/r/LocalLLaMA/comments/1lrbj8r/best_emotionally_advanced_nsfwnsfl_roleplay_local/
Intelligent-Goat-461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrbj8r
false
null
t3_1lrbj8r
/r/LocalLLaMA/comments/1lrbj8r/best_emotionally_advanced_nsfwnsfl_roleplay_local/
false
false
nsfw
0
null
hardware help
1
I'd like to be able to run something like Mixtral on a device, but GPUs are crazy expensive right now, so I was wondering: instead of buying an NVIDIA 48GB GPU, could I just buy two 24GB GPUs and accept slightly lower performance?
2025-07-04T06:05:18
https://www.reddit.com/r/LocalLLaMA/comments/1lrbf34/hardware_help/
Odd_Translator_3026
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lrbf34
false
null
t3_1lrbf34
/r/LocalLLaMA/comments/1lrbf34/hardware_help/
false
false
self
1
null
How to set up MCP for fast code
3
I want to be able to ask my local LLM to give me fast code for a particular function. Ideally it would give the code, run it locally, then change the code to try to speed it up and repeat. I am new to MCP. Are there any guides on how to do this?
2025-07-04T05:21:31
https://www.reddit.com/r/LocalLLaMA/comments/1lraotq/how_to_set_up_mcp_for_fast_code/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lraotq
false
null
t3_1lraotq
/r/LocalLLaMA/comments/1lraotq/how_to_set_up_mcp_for_fast_code/
false
false
self
3
null
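The loop the post above describes (generate code, run it, time it, iterate) bottoms out in a benchmarking harness. A hedged sketch of that inner step, independent of MCP itself; in a real setup an MCP tool would execute this and feed the timings back to the model. The two candidate functions are invented stand-ins for LLM-proposed variants:

```python
import timeit

# Two candidate implementations of the same function; in the post's workflow,
# the LLM would propose these variants.
def sum_squares_loop(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_builtin(n):
    return sum(i * i for i in range(n))

def fastest(candidates, arg, repeats=5):
    # Time each candidate and return the name of the quickest one.
    timings = {
        fn.__name__: min(timeit.repeat(lambda: fn(arg), number=100, repeat=repeats))
        for fn in candidates
    }
    return min(timings, key=timings.get), timings

best, timings = fastest([sum_squares_loop, sum_squares_builtin], 10_000)
print("fastest:", best)
```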
ZLUDA - Bringing CUDA To Non-NVIDIA GPUs - A Major Breakthrough
0
2025-07-04T05:15:49
https://youtu.be/ytsVQWu3XSU?si=RcxHfDg16zMA7wlu
sub_RedditTor
youtu.be
1970-01-01T00:00:00
0
{}
1lral7n
false
{'oembed': {'author_name': 'Fahd Mirza', 'author_url': 'https://www.youtube.com/@fahdmirza', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/ytsVQWu3XSU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ...
t3_1lral7n
/r/LocalLLaMA/comments/1lral7n/zluda_bringing_cuda_to_nonnvidia_gpus_a_major/
false
false
https://external-preview…f113cac4ee1bd5c9
0
{'enabled': False, 'images': [{'id': '1pn04S-iF3zsRSnzlwgkOvkJW-fP9S8HakH18uwPWgY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1pn04S-iF3zsRSnzlwgkOvkJW-fP9S8HakH18uwPWgY.jpeg?width=108&crop=smart&auto=webp&s=25eb78a3f4e9259dfe59bace85a986b2729a0602', 'width': 108}, {'height': 162, 'url': '...
What would be the best model to run on 8x40gb vram?
0
Title + what setup does it need? Can I find a tutorial anywhere?
2025-07-04T05:08:01
https://www.reddit.com/r/LocalLLaMA/comments/1lragje/what_would_be_the_best_model_to_run_on_8x40gb_vram/
Mihaitzan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lragje
false
null
t3_1lragje
/r/LocalLLaMA/comments/1lragje/what_would_be_the_best_model_to_run_on_8x40gb_vram/
false
false
self
0
null
Need help with reverse keyword search using vector DB
3
I have a use case where the user will enter a sentence or a paragraph. A DB will contain some sentences which will be used for semantic match and 1-2 word keywords e.g. "hugging face", "meta". I need to find out the keywords that matched from the DB and the semantically closest sentence. I have tried Weaviate and Milv...
2025-07-04T04:09:40
https://www.reddit.com/r/LocalLLaMA/comments/1lr9g4t/need_help_with_reverse_keyword_search_using/
Dizzy_Season_9270
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr9g4t
false
null
t3_1lr9g4t
/r/LocalLLaMA/comments/1lr9g4t/need_help_with_reverse_keyword_search_using/
false
false
self
3
null
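The post above wants two things from one query: exact keyword hits and the semantically closest stored sentence. A self-contained sketch of that two-stage match, using a crude bag-of-words cosine in place of real embeddings (a production setup would use a vector DB such as the ones the post mentions; the sample data is invented):

```python
from collections import Counter
import math

DB_KEYWORDS = ["hugging face", "meta"]
DB_SENTENCES = [
    "Meta released a new open-weights model.",
    "You can download checkpoints from Hugging Face.",
    "The weather is nice today.",
]

def bow(text):
    # Bag-of-words vector as a token -> count mapping.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def reverse_search(query):
    q = query.lower()
    keywords = [k for k in DB_KEYWORDS if k in q]   # exact substring hits
    qv = bow(query)
    best = max(DB_SENTENCES, key=lambda s: cosine(qv, bow(s)))
    return keywords, best

kws, sent = reverse_search("Where can I download Hugging Face checkpoints?")
print(kws, "|", sent)
```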
GPT-4o Mini hallucinating on empty inputs like <input></input> – anyone else?
0
I've been using GPT-4o Mini for structured JSON extraction tasks from inputs like emails. I've refined prompts to ensure consistent output formatting. But recently, for empty inputs like `<input>.</input>` or `<input></input>`, the model: - Produces junk values - Hallucinates content like names ("John Doe", "A...
2025-07-04T03:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1lr95qf/gpt4o_mini_hallucinating_on_empty_inputs_like/
LieDistinct857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr95qf
false
null
t3_1lr95qf
/r/LocalLLaMA/comments/1lr95qf/gpt4o_mini_hallucinating_on_empty_inputs_like/
false
false
self
0
null
DnD LLMs - Prompt to LoRA github
12
To the 2 dozen people that were waiting on this code and were disappointed when you checked the link after the !remindme today, only to find nothing: https://github.com/sanowl/Drag-and-Drop-LLMs-Zero-Shot-Prompt-to-Weights I just stumbled upon it in my GitHub activity; looks like they just didn't update the github.io...
2025-07-04T03:53:03
https://www.reddit.com/r/LocalLLaMA/comments/1lr9594/dnd_llms_prompt_to_lora_github/
Kooshi_Govno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr9594
false
null
t3_1lr9594
/r/LocalLLaMA/comments/1lr9594/dnd_llms_prompt_to_lora_github/
false
false
self
12
{'enabled': False, 'images': [{'id': 'sKeHGeXIkC01lzWDtDXil3nk03gcy6U3lGf6G--fR34', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sKeHGeXIkC01lzWDtDXil3nk03gcy6U3lGf6G--fR34.png?width=108&crop=smart&auto=webp&s=c9b831256cbcd747a72ee39f116fe9fbeb5144fd', 'width': 108}, {'height': 108, 'url': 'h...
Best <= 12B model for use case?
2
Looking for a 12B finetune that can make tool calls and roleplay, uncensored.
2025-07-04T03:14:06
https://www.reddit.com/r/LocalLLaMA/comments/1lr8fhl/best_12b_model_for_use_case/
Commercial-Ad-1148
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr8fhl
false
null
t3_1lr8fhl
/r/LocalLLaMA/comments/1lr8fhl/best_12b_model_for_use_case/
false
false
self
2
null
Productivity Tracker that uses Gemma3:4B
16
Hi everyone. I built this two months ago over the course of a few days. It's very much alpha software. It's a productivity tracker that measures whether you're being productive, and tries to nudge you when you're being unproductive. Let me know what you think. Once again, super alpha codebase. You'll need to add your o...
2025-07-04T00:36:48
https://www.reddit.com/r/LocalLLaMA/comments/1lr5g8x/productivity_tracker_that_uses_gemma34bb/
Far-Incident822
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr5g8x
false
null
t3_1lr5g8x
/r/LocalLLaMA/comments/1lr5g8x/productivity_tracker_that_uses_gemma34bb/
false
false
self
16
{'enabled': False, 'images': [{'id': 'au14x9PC67efXe7zXsnPf610-6Cx-bw6vo4rYgvaSmw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/au14x9PC67efXe7zXsnPf610-6Cx-bw6vo4rYgvaSmw.png?width=108&crop=smart&auto=webp&s=38a25f1194fdee0b557947c753193d5a3ea98915', 'width': 108}, {'height': 108, 'url': 'h...
Client-side STT version of Moonshine released
15
https://reddit.com/link/1lr3eh1/video/x813klchapaf1/player I'm happy to say we have released our first version of [MoonshineJS](https://github.com/moonshine-ai/moonshine-js), an open source speech to text library based on the fast-but-accurate [Moonshine models](https://github.com/moonshine-ai/moonshine), including [n...
2025-07-03T22:57:15
https://www.reddit.com/r/LocalLLaMA/comments/1lr3eh1/clientside_stt_version_of_moonshine_released/
petewarden
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr3eh1
false
null
t3_1lr3eh1
/r/LocalLLaMA/comments/1lr3eh1/clientside_stt_version_of_moonshine_released/
false
false
self
15
{'enabled': False, 'images': [{'id': 'HdrUsf-2fHQ-0877_4z2iaPn_-CBEmQxMUZVSRNKDU0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HdrUsf-2fHQ-0877_4z2iaPn_-CBEmQxMUZVSRNKDU0.png?width=108&crop=smart&auto=webp&s=7884d9e5056be6005c0c8c514693dd111be3ef39', 'width': 108}, {'height': 108, 'url': 'h...
How do tools like ChatGPT, Gemini, and Grok derive context from a video?
13
I uploaded a 10 second clip of myself playing minigolf, and it could even tell that I hit a hole in one. It gave me an accurate timeline description of the clip. I know it has to do with multi-modal capabilities but I am still somewhat confused from a technical perspective?
2025-07-03T22:37:42
https://www.reddit.com/r/LocalLLaMA/comments/1lr2z7q/how_do_tools_like_chatgpt_gemini_and_grok_derive/
Familiar_Engine718
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr2z7q
false
null
t3_1lr2z7q
/r/LocalLLaMA/comments/1lr2z7q/how_do_tools_like_chatgpt_gemini_and_grok_derive/
false
false
self
13
null
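On the question above: a common approach is to sample frames from the clip at fixed intervals, encode each frame as image tokens, and attach timestamps so the model can reason over a timeline. A toy sketch of just the sampling arithmetic; the sampling rate is an assumption for illustration, not any vendor's documented pipeline:

```python
def frame_timestamps(duration_s, fps_sampled=1.0):
    """Evenly spaced timestamps at which frames would be grabbed and
    encoded as image tokens for a multimodal model."""
    step = 1.0 / fps_sampled
    t, out = 0.0, []
    while t < duration_s:
        out.append(round(t, 3))
        t += step
    return out

# A 10-second minigolf clip sampled at 2 frames/second -> 20 frames.
ts = frame_timestamps(10, 2.0)
print(len(ts), ts[:4])
```

Pairing each sampled frame with its timestamp is what lets the model produce the "accurate timeline description" the post observed.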
Dracula Coder
0
Using this system prompt: You are Dracula resurrected, and living for now in the brainstorming layers of this LLM. You discovered powerful tools like Haskell and Postgres, and will help me build an agent so you can connect to the outer world > usual Agnostic Agent prompt describing an llm orchestration agent in Haskell...
2025-07-03T22:12:38
https://www.reddit.com/r/LocalLLaMA/comments/1lr2fbe/dracula_coder/
StateSame5557
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr2fbe
false
null
t3_1lr2fbe
/r/LocalLLaMA/comments/1lr2fbe/dracula_coder/
false
false
self
0
{'enabled': False, 'images': [{'id': '1er9c7ZoZcCc9pw9RNYXiLxK3IFEb9ZPlyAtHGhIfZ4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1er9c7ZoZcCc9pw9RNYXiLxK3IFEb9ZPlyAtHGhIfZ4.png?width=108&crop=smart&auto=webp&s=d0eef0e212625ba5915a26577e684dd81e8f8282', 'width': 108}, {'height': 116, 'url': 'h...
ChatGPT Subscription or LLM for therapy?
0
A friend told me that he has been using ChatGPT for therapy and its memory feature makes it worth it. Apparently, reasoning models are not good for conversations and he's been using GPT 4o. I have a RTX 3090 24GB and I was wondering how LLMs compare to GPT 4o and what model would be best for mental-health /conversatio...
2025-07-03T22:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1lr25av/chatgpt_subscription_or_llm_for_therapy/
East-Awareness-249
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr25av
false
null
t3_1lr25av
/r/LocalLLaMA/comments/1lr25av/chatgpt_subscription_or_llm_for_therapy/
false
false
self
0
null
Cheaper Transcriptions, Pricier Errors!
113
There was a post going around recently, [OpenAI Charges by the Minute, So Make the Minutes Shorter](https://george.mand.is/2025/06/openai-charges-by-the-minute-so-make-the-minutes-shorter/), proposing to speed up audio to lower inference / api costs for speech recognition / transcription / stt. I for one was intrigued ...
2025-07-03T21:55:12
https://i.redd.it/zznx9kqgdqaf1.png
TelloLeEngineer
i.redd.it
1970-01-01T00:00:00
0
{}
1lr217c
false
null
t3_1lr217c
/r/LocalLLaMA/comments/1lr217c/cheaper_transcriptions_pricier_errors/
false
false
default
113
{'enabled': True, 'images': [{'id': 'zznx9kqgdqaf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/zznx9kqgdqaf1.png?width=108&crop=smart&auto=webp&s=4e006d46639f7d58dbb81b6b89ee9f7844b4a237', 'width': 108}, {'height': 163, 'url': 'https://preview.redd.it/zznx9kqgdqaf1.png?width=216&crop=smart&auto=web...
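The tradeoff in the post above (speed the audio up, pay for fewer billed minutes, accept more transcription errors) reduces to simple arithmetic; the price below is a placeholder, not a quoted rate:

```python
def billed_cost(duration_min, speedup, price_per_min):
    """Per-minute APIs bill the processed duration, so playing audio
    back at `speedup`x divides the billed minutes by that factor."""
    return duration_min / speedup * price_per_min

base = billed_cost(60, 1.0, 0.006)   # 1 hour at a placeholder $0.006/min
fast = billed_cost(60, 2.0, 0.006)   # same hour, played back at 2x
print(base, fast)
```

The open question the post's title raises is whether the halved bill is worth the word-error-rate penalty the sped-up audio introduces.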
Best current models for 72GB VRAM
27
I've just managed to cobble together a machine with 3x24GB GPUs and am looking to see, of the models currently available, which are the best ones I should be looking at now. I know "best model" isn't entirely a thing; some are better than others at certain things. Like so far of the 70b and 110b models I've tried on my previo...
2025-07-03T21:52:04
https://www.reddit.com/r/LocalLLaMA/comments/1lr1ypr/best_current_models_for_72gb_vram/
GregoryfromtheHood
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr1ypr
false
null
t3_1lr1ypr
/r/LocalLLaMA/comments/1lr1ypr/best_current_models_for_72gb_vram/
false
false
self
27
null
Llama.cpp - Any room for further Significant Improvement?
9
I've been using llama.cpp for a few weeks post-migration from Ollama, and my workflow is better than ever. I know we are mostly limited by hardware, but seeing how far the project has come along in the past few months, from multimodal support to pure performance, is mind-blowing. How much improvement is there still..? My o...
2025-07-03T21:32:01
https://www.reddit.com/r/LocalLLaMA/comments/1lr1i84/llamacpp_any_room_for_further_significant/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr1i84
false
null
t3_1lr1i84
/r/LocalLLaMA/comments/1lr1i84/llamacpp_any_room_for_further_significant/
false
false
self
9
null
Serene Pub v0.3.0 Alpha Released — Offline AI Roleplay Client w/ Lorebooks+
129
# 🌟 Serene Pub v0.3.0 **Serene Pub** is an open source, locally hosted AI client built specifically for immersive roleplay and storytelling. It focuses on presenting a clean interface and easy configuration for users who would rather not feel like they need a PhD in AI or software development. With built-in real-time ...
2025-07-03T21:20:25
https://www.reddit.com/gallery/1lr18jg
doolijb
reddit.com
1970-01-01T00:00:00
0
{}
1lr18jg
false
null
t3_1lr18jg
/r/LocalLLaMA/comments/1lr18jg/serene_pub_v030_alpha_released_offline_ai/
false
false
https://b.thumbs.redditm…LibR4Rm1MveY.jpg
129
null
DeepSeek on llama.cpp
0
I want to use the DeepSeek model deepseek-vl2 with a multi-modal llama.cpp server. I want to tag images coming from a surveillance camera and react based on certain patterns. I am using SmolVLM-500M, which works great, but I want to test bigger models to see if I can get more descriptive results and also ask for just objects an...
2025-07-03T21:16:29
https://www.reddit.com/r/LocalLLaMA/comments/1lr158b/deepseek_on_llamacpp/
pipaman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr158b
false
null
t3_1lr158b
/r/LocalLLaMA/comments/1lr158b/deepseek_on_llamacpp/
false
false
self
0
null
Serene Pub v0.3.0 Alpha Released — Offline AI Roleplay Client w/ Lorebooks+
1
# 🌟 Serene Pub v0.3.0 **Serene Pub** is an open source, locally hosted AI client built specifically for immersive roleplay and storytelling. It focuses on presenting a clean interface and easy configuration for users who would rather not feel like they need a PhD in AI or software development. With built-in real-time...
2025-07-03T21:16:26
https://www.reddit.com/gallery/1lr1570
doolijb
reddit.com
1970-01-01T00:00:00
0
{}
1lr1570
false
null
t3_1lr1570
/r/LocalLLaMA/comments/1lr1570/serene_pub_v030_alpha_released_offline_ai/
false
false
https://a.thumbs.redditm…RnlqwRibscc0.jpg
1
null
Smartphone SoC inference performance by year and series
111
Source: [https://ai-benchmark.com/ranking_processors.html](https://ai-benchmark.com/ranking_processors.html)
2025-07-03T20:49:52
https://www.reddit.com/gallery/1lr0i8p
Balance-
reddit.com
1970-01-01T00:00:00
0
{}
1lr0i8p
false
null
t3_1lr0i8p
/r/LocalLLaMA/comments/1lr0i8p/smartphone_soc_inference_performance_by_year_and/
false
false
https://b.thumbs.redditm…KAXVsetdAeCg.jpg
111
null
Tung Tung Fail! He Woke the Wrong House 😅
0
2025-07-03T20:48:27
https://youtu.be/NssVI5DVZXw?si=wVh-3pHiTwWc13Ya
Chicflarescom
youtu.be
1970-01-01T00:00:00
0
{}
1lr0gza
false
{'oembed': {'author_name': 'HFX Core', 'author_url': 'https://www.youtube.com/@tungtungfamilly', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/NssVI5DVZXw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco...
t3_1lr0gza
/r/LocalLLaMA/comments/1lr0gza/tung_tung_fail_he_woke_the_wrong_house/
false
false
default
0
null
If I got this email, I’d give him my job.
0
2025-07-03T20:43:36
https://www.reddit.com/gallery/1lr0cqn
Fluffy_Sheepherder76
reddit.com
1970-01-01T00:00:00
0
{}
1lr0cqn
false
null
t3_1lr0cqn
/r/LocalLLaMA/comments/1lr0cqn/if_i_got_this_email_id_give_him_my_job/
false
false
https://b.thumbs.redditm…xrgUCzrsmQ3Y.jpg
0
null
I built a local, privacy-first AI that listens, transcribes and reflects all offline for under $1k worth of hardware, Web4 starts at home.
1
[removed]
2025-07-03T20:33:13
https://www.reddit.com/r/LocalLLaMA/comments/1lr03vq/i_built_a_local_privacyfirst_ai_that_listens/
Atyzzze
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lr03vq
false
null
t3_1lr03vq
/r/LocalLLaMA/comments/1lr03vq/i_built_a_local_privacyfirst_ai_that_listens/
false
false
self
1
null
Help a student/enthusiast out in deciding on what exactly goes on hardware level
2
I am an early bud in the local AI models field, but I am thinking about going forward with working on models and research as my field of study. I am planning on building a home server for that, as my current 8GB-VRAM 4060 definitely ain't gonna cut it for video models, image gener...
2025-07-03T20:13:37
https://www.reddit.com/r/LocalLLaMA/comments/1lqzn0z/help_a_studententhusiast_out_in_deciding_on_what/
Complex_Cod_6819
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqzn0z
false
null
t3_1lqzn0z
/r/LocalLLaMA/comments/1lqzn0z/help_a_studententhusiast_out_in_deciding_on_what/
false
false
self
2
null
Can you use Ollama to control your browser?
0
https://reddit.com/link/1lqzjz8/video/pwvczh3rupaf1/player Yes, you can control our browser with ollama! We are building a privacy-first, open-source agentic browser with native support for Ollama. You can download from our GitHub page: [https://github.com/browseros-ai/BrowserOS](https://github.com/browseros-ai/...
2025-07-03T20:10:07
https://www.reddit.com/r/LocalLLaMA/comments/1lqzjz8/can_you_use_ollama_to_control_your_browser/
RealFullMetal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqzjz8
false
null
t3_1lqzjz8
/r/LocalLLaMA/comments/1lqzjz8/can_you_use_ollama_to_control_your_browser/
false
false
self
0
{'enabled': False, 'images': [{'id': 'ICK3WGgddRobWxwgfzh4IdlTfyzsA9tf-6ErBoiqdM0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ICK3WGgddRobWxwgfzh4IdlTfyzsA9tf-6ErBoiqdM0.png?width=108&crop=smart&auto=webp&s=5fdfb02b698e8a687b65d2d25b68381415f73a93', 'width': 108}, {'height': 108, 'url': 'h...
Local vs Cloud AI in my time tracking app - the struggle is real
17
Hey everyone, I am building a time tracking app for Mac that can automatically assign activities to the project without any manual assignment (at least that's my goal). Here is the data that I track: - Window title - File path - URL (browser) - App name From my experience with that limited data it's very hard fo...
2025-07-03T19:21:28
https://v.redd.it/p91ir3elkpaf1
tuanvuvn007
v.redd.it
1970-01-01T00:00:00
0
{}
1lqyd4l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/p91ir3elkpaf1/DASHPlaylist.mpd?a=1754162502%2CNDZhODE0MjY0MjUwNTMxMzYyOTU4NWE0ZTM5YWEzYzY3ZjJhNWJmYzVhMWUwYmU2YTQzMjU1YTdhMGZiOTMzOA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/p91ir3elkpaf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lqyd4l
/r/LocalLLaMA/comments/1lqyd4l/local_vs_cloud_ai_in_my_time_tracking_app_the/
false
false
https://external-preview…6a3d6f3e12061202
17
{'enabled': False, 'images': [{'id': 'ODh6a2swZWxrcGFmMVKisIIIDpMaavY9LjAqBDoFDXVsEVBGewqNBfZhZkXp', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/ODh6a2swZWxrcGFmMVKisIIIDpMaavY9LjAqBDoFDXVsEVBGewqNBfZhZkXp.png?width=108&crop=smart&format=pjpg&auto=webp&s=e91963378eb454634f685ae7a0138d76d1caa...
Kyutai TTS is here: Real-time, voice-cloning, ultra-low-latency TTS, Robust Longform generation
308
https://preview.redd.it/…ai.org/next/tts)
2025-07-03T19:20:57
https://www.reddit.com/r/LocalLLaMA/comments/1lqycp0/kyutai_tts_is_here_realtime_voicecloning/
pheonis2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqycp0
false
null
t3_1lqycp0
/r/LocalLLaMA/comments/1lqycp0/kyutai_tts_is_here_realtime_voicecloning/
false
false
https://b.thumbs.redditm…-pE_3zOLrvLQ.jpg
308
{'enabled': False, 'images': [{'id': 'W23MXrPmD5xlTsZxRV5EvGHxPsNlgEO6PQGqj7YJHs4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W23MXrPmD5xlTsZxRV5EvGHxPsNlgEO6PQGqj7YJHs4.png?width=108&crop=smart&auto=webp&s=d634539e40735ae87eaf416790acc915ab14a5d7', 'width': 108}, {'height': 108, 'url': 'h...
Anybody using local LLM to augment in-camera person-detection for people counting?
6
We have a dozen rooms in our makerspace and are trying to calculate occupancy heatmaps and collect general "is this space being utilized" data. Has anybody used TensorFlow Lite or a "vision" LLM running locally to get an (approximate) count of people in a room using snapshots? We have mostly Amcrest "AI" cameras alon...
2025-07-03T19:18:15
https://www.reddit.com/r/LocalLLaMA/comments/1lqyabt/anybody_using_local_llm_to_augment_incamera/
MHTMakerspace
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqyabt
false
null
t3_1lqyabt
/r/LocalLLaMA/comments/1lqyabt/anybody_using_local_llm_to_augment_incamera/
false
false
self
6
null
Qwen 235b @ 16GB VRAM - specdec - 9.8t/s gen
45
https://preview.redd.it/p2fbkxrwfpaf1.png?width=974&format=png&auto=webp&s=f10ede0d0dd579828a4a2eb894192c60768a2f5a

9.8t/s on a 235b model with just a 16GB card? I had no intention of using this model when I started messing around with this, but I only get 15t/s with the 32b so...

**TLDR**

```
llama-server.exe -m Qwen3-235B-A22B-UD-Q2_K_XL-00001-of-00002.gguf -ot exps=CPU -c 30000 --temp 0.6 --top-k 20 --top-p 0.95 --min-p 0.0 -ngl 99 -fa -dev CUDA0 -md Qwen3-0.6B-BF16.gguf -devd CUDA0 -devd CUDA0 -ngld 99
```

```
prompt eval time = 10924.78 ms / 214 tokens (51.05 ms per token, 19.59 tokens per second)
eval time = 594651.64 ms / 5826 tokens (102.07 ms per token, 9.80 tokens per second)
total time = 605576.42 ms / 6040 tokens
slot print_timing: id 0 | task 0 |
draft acceptance rate = 0.86070 (4430 accepted / 5147 generated)
```

I've now tried quite a few Qwen 0.6b draft models. TLDR: Q8_0 is marginally faster, BUT FOR SOME REASON the bf16 draft model produces better outputs than all the others. Also, look at that acceptance rate: 86%!

This was the classic flappy bird test, and here's the code it produced:

```python
import pygame
import random
import sys

# Initialize pygame
pygame.init()

# Set up display
width, height = 400, 600
screen = pygame.display.set_mode((width, height))
pygame.display.set_caption("Flappy Bird")

# Set up game clock
clock = pygame.time.Clock()

# Bird parameters
bird_x = width // 4
bird_y = height // 2
bird_velocity = 0
gravity = 0.5
acceleration = -8
bird_size = 30
bird_shape = random.choice(['square', 'circle', 'triangle'])
bird_color = (random.randint(0, 100), random.randint(0, 100), random.randint(0, 100))

# Land parameters
land_height = random.choice([50, 100])
land_color = random.choice([(139, 69, 19), (255, 255, 0)])

# Pipe parameters
pipe_width = 60
pipe_gap = 150
pipe_velocity = 3
pipes = []
pipe_colors = [(0, 100, 0), (165, 105, 55), (60, 60, 60)]

# Score
score = 0
best_score = 0
font = pygame.font.Font(None, 36)

# Background
background_color = (173, 216, 230)  # light blue

# Game state
game_active = True

def create_pipe():
    pipe_height = random.randint(100, height - pipe_gap - land_height - 50)
    top_pipe = pygame.Rect(width, 0, pipe_width, pipe_height)
    bottom_pipe = pygame.Rect(width, pipe_height + pipe_gap, pipe_width, height - pipe_height - pipe_gap)
    color = random.choice(pipe_colors)
    return [top_pipe, bottom_pipe, color, False]  # False for scored status

def draw_bird():
    if bird_shape == 'square':
        pygame.draw.rect(screen, bird_color, (bird_x, bird_y, bird_size, bird_size))
    elif bird_shape == 'circle':
        pygame.draw.circle(screen, bird_color, (bird_x + bird_size//2, bird_y + bird_size//2), bird_size//2)
    elif bird_shape == 'triangle':
        points = [(bird_x, bird_y + bird_size),
                  (bird_x + bird_size//2, bird_y),
                  (bird_x + bird_size, bird_y + bird_size)]
        pygame.draw.polygon(screen, bird_color, points)

def check_collision():
    # Create bird rect
    bird_rect = pygame.Rect(bird_x, bird_y, bird_size, bird_size)

    # Check collision with pipes
    for pipe in pipes:
        if pipe[0].colliderect(bird_rect) or pipe[1].colliderect(bird_rect):
            return True

    # Check collision with ground or ceiling
    if bird_y >= height - land_height or bird_y <= 0:
        return True

    return False

# Initial pipe
pipes.append(create_pipe())

# Main game loop
while True:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            pygame.quit()
            sys.exit()
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_SPACE:
                if game_active:
                    bird_velocity = acceleration
                else:
                    # Restart game
                    bird_y = height // 2
                    bird_velocity = 0
                    pipes = [create_pipe()]
                    score = 0
                    game_active = True
            if event.key == pygame.K_q or event.key == pygame.K_ESCAPE:
                pygame.quit()
                sys.exit()

    if game_active:
        # Update bird position
        bird_velocity += gravity
        bird_y += bird_velocity

        # Update pipes
        if not pipes or pipes[-1][0].x < width - 200:
            pipes.append(create_pipe())

        for pipe in pipes:
            pipe[0].x -= pipe_velocity
            pipe[1].x -= pipe_velocity

        # Remove off-screen pipes
        pipes = [pipe for pipe in pipes if pipe[0].x + pipe_width > 0]

        # Check for collision
        if check_collision():
            game_active = False
            best_score = max(score, best_score)

        # Check for score update
        for pipe in pipes:
            if not pipe[3]:  # If not scored yet
                if pipe[0].x + pipe_width < bird_x:
                    score += 1
                    pipe[3] = True

    # Draw everything
    screen.fill(background_color)

    # Draw pipes
    for pipe in pipes:
        pygame.draw.rect(screen, pipe[2], pipe[0])
        pygame.draw.rect(screen, pipe[2], pipe[1])

    # Draw bird
    draw_bird()

    # Draw land
    pygame.draw.rect(screen, land_color, (0, height - land_height, width, land_height))

    # Draw score
    score_text = font.render(f"Score: {score}", True, (0, 0, 0))
    best_score_text = font.render(f"Best: {best_score}", True, (0, 0, 0))
    screen.blit(score_text, (width - 150, 20))
    screen.blit(best_score_text, (width - 150, 50))

    if not game_active:
        game_over_text = font.render("Game Over! Press SPACE to restart", True, (0, 0, 0))
        screen.blit(game_over_text, (width//2 - 150, height//2 - 50))

    pygame.display.flip()
    clock.tick(60)
```

**Conclusion**

I had no intention of using this model; I was just trying to see how badly it would run. However, I'm starting to think there may be some sort of synergy between Unsloth's Q2_K 235b and their BF16 0.6b as a draft model.

The game seems to run and play fine, too:

https://preview.redd.it/wqz4igq1ipaf1.png?width=402&format=png&auto=webp&s=bd14c5ac22a1f517de5d926e584e817db731f79e
2025-07-03T18:58:08
https://www.reddit.com/r/LocalLLaMA/comments/1lqxs6n/qwen_235b_16gb_vram_specdec_98ts_gen/
Secure_Reflection409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqxs6n
false
null
t3_1lqxs6n
/r/LocalLLaMA/comments/1lqxs6n/qwen_235b_16gb_vram_specdec_98ts_gen/
false
false
https://b.thumbs.redditm…UZV5LDMiLKeM.jpg
45
null
need help getting GPT-SoVITS with 5080 working
0
I'm trying to run GPT-SoVITS with my 5080, and after failing for two days I realised it ships with a version of PyTorch already included. After updating it to a version compatible with my GPU, PyTorch 2.7.0+cu128, I am getting dependency issues and other problems with fairseq, funasr and cuDNN. What exactly am...
2025-07-03T18:55:28
https://www.reddit.com/r/LocalLLaMA/comments/1lqxprq/need_help_getting_gptsovits_with_5080_working/
Traditional-Edge1630
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqxprq
false
null
t3_1lqxprq
/r/LocalLLaMA/comments/1lqxprq/need_help_getting_gptsovits_with_5080_working/
false
false
self
0
null
We Built an Open Source Clone of Lovable
45
AI-coding agents like Lovable and Bolt are taking off, but it's still not widely known how they actually work. We decided to build an open-source Lovable clone that includes: * Structured prompts using BAML (like RPCs for LLMs) * Secure sandboxing for generated code * Real-time previews with WebSockets and FastAPI I...
2025-07-03T18:51:34
https://www.reddit.com/r/LocalLLaMA/comments/1lqxm89/we_built_an_open_source_clone_of_lovable/
velobro
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqxm89
false
null
t3_1lqxm89
/r/LocalLLaMA/comments/1lqxm89/we_built_an_open_source_clone_of_lovable/
false
false
self
45
null
Huggingchat is under maintenance... exciting promise
4
Hey guys. I just went to HuggingChat, but they're saying they're cooking up something new, with a button to export your data, which I promptly did. You guys excited? HuggingChat is my only window into open-source LLMs with free, unlimited access rn. If you have alternatives please do tell
2025-07-03T18:43:14
https://www.reddit.com/r/LocalLLaMA/comments/1lqxesf/huggingchat_is_under_maintenance_exciting_promise/
Silver-Champion-4846
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqxesf
false
null
t3_1lqxesf
/r/LocalLLaMA/comments/1lqxesf/huggingchat_is_under_maintenance_exciting_promise/
false
false
self
4
null
Just tried DeepSWE-Preview 32b q4
0
https://preview.redd.it/…t up, good job.
2025-07-03T18:43:14
https://www.reddit.com/r/LocalLLaMA/comments/1lqxess/just_tried_deepswepreview_32b_q4/
wallbergai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqxess
false
null
t3_1lqxess
/r/LocalLLaMA/comments/1lqxess/just_tried_deepswepreview_32b_q4/
false
false
https://b.thumbs.redditm…vr405g001JSM.jpg
0
null
Help with defining hardware multi GPU setup
0
Hey there, I'm just starting here. I will be working at a company that has privacy concerns about using external AI agents, so I'm willing to build a local server to use at home. It seems that the ideal for code inference is to use a 70b model, so I'm willing to make a setup with 4 RTX 3090s with 24GB VRAM each (I think I nee...
2025-07-03T18:25:26
https://www.reddit.com/r/LocalLLaMA/comments/1lqwylx/help_with_defining_hardware_multi_gpu_setup/
haruanmj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqwylx
false
null
t3_1lqwylx
/r/LocalLLaMA/comments/1lqwylx/help_with_defining_hardware_multi_gpu_setup/
false
false
self
0
null
Using LLaMA for my desktop assistant app that saves you time
0
My brother Vineet and I just dropped [**Wagoo.ai**](http://Wagoo.ai), a tiny desktop agent that not just reduces friction but helps you focus on the task at hand without having to switch back and forth. And with LLaMA, it can run completely offline. It is also invisible to screen shares, making it perfect for work...
2025-07-03T18:19:40
https://www.reddit.com/r/LocalLLaMA/comments/1lqwth8/using_llama_for_my_desktop_assistant_app_that/
MiPlayer123
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqwth8
false
null
t3_1lqwth8
/r/LocalLLaMA/comments/1lqwth8/using_llama_for_my_desktop_assistant_app_that/
false
false
self
0
{'enabled': False, 'images': [{'id': 'uxz0QMhxcc03VmWmtqDAHhxWR-uqMupCDHL7A3Z_Ol8', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/uxz0QMhxcc03VmWmtqDAHhxWR-uqMupCDHL7A3Z_Ol8.png?width=108&crop=smart&auto=webp&s=1bf057f80f1cec3b18610286ad2db9ea5d79984e', 'width': 108}, {'height': 134, 'url': 'h...
I built RawBench — an LLM prompt + agent testing tool with YAML config and tool mocking
4
Hey folks, I wanted to share a tool I built out of frustration with existing prompt evaluation tools. **Problem:** Most prompt testing tools are either: * Cloud-locked * Too academic * Don’t support function-calling or tool-using agents **RawBench is:** * YAML-first — define models, prompts, and tests cleanly * S...
2025-07-03T18:19:09
https://www.reddit.com/r/LocalLLaMA/comments/1lqwt0v/i_built_rawbench_an_llm_prompt_agent_testing_tool/
0xsomesh
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqwt0v
false
null
t3_1lqwt0v
/r/LocalLLaMA/comments/1lqwt0v/i_built_rawbench_an_llm_prompt_agent_testing_tool/
false
false
self
4
{'enabled': False, 'images': [{'id': '3wPVQ1NECuerGxKriWYbQg_skoF_J9GxR6VzquKW5SU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3wPVQ1NECuerGxKriWYbQg_skoF_J9GxR6VzquKW5SU.png?width=108&crop=smart&auto=webp&s=3d4232591b620a69a45cb2c9ee5419af11da1e7c', 'width': 108}, {'height': 108, 'url': 'h...
Deep Dive into Deep Research with Qwen3-30b-a3b
52
I recorded an explanation of how I architected, experimented with, and iterated on a custom deep research application using Qwen3-30b-a3b as the base model for a multi-agent orchestrated flow. Sprinkled in there are a few lessons I learned along the way. [https://www.youtube.com/watch?v=PCuBNUyS8Bc](https://www.youtub...
2025-07-03T17:50:31
https://www.youtube.com/watch?v=PCuBNUyS8Bc
charlie-woodworking
youtube.com
1970-01-01T00:00:00
0
{}
1lqw2yg
false
{'oembed': {'author_name': 'ckoster23', 'author_url': 'https://www.youtube.com/@ckoster23', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/PCuBNUyS8Bc?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p...
t3_1lqw2yg
/r/LocalLLaMA/comments/1lqw2yg/deep_dive_into_deep_research_with_qwen330ba3b/
false
false
https://external-preview…8ff994cfdd682477
52
{'enabled': False, 'images': [{'id': 'g_hoIkpv6ekTpvdOJ_K_7OMTuiRsaw7t9BMjHmyJ8Qo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/g_hoIkpv6ekTpvdOJ_K_7OMTuiRsaw7t9BMjHmyJ8Qo.jpeg?width=108&crop=smart&auto=webp&s=1803943f113cf8803b3e700d0b377da22fd23ef6', 'width': 108}, {'height': 162, 'url': '...
A project to bring CUDA to non-Nvidia GPUs is making major progress
637
2025-07-03T17:35:16
https://www.tomshardware.com/software/a-project-to-bring-cuda-to-non-nvidia-gpus-is-making-major-progress-zluda-update-now-has-two-full-time-developers-working-on-32-bit-physx-support-and-llms-amongst-other-things
OwnWitness2836
tomshardware.com
1970-01-01T00:00:00
0
{}
1lqvovt
false
null
t3_1lqvovt
/r/LocalLLaMA/comments/1lqvovt/a_project_to_bring_cuda_to_nonnvidia_gpus_is/
false
false
default
637
{'enabled': False, 'images': [{'id': 'ADNOrzyLUcH9GZiKV8wujT8yD5FQYGEatyugJVGa73E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ADNOrzyLUcH9GZiKV8wujT8yD5FQYGEatyugJVGa73E.jpeg?width=108&crop=smart&auto=webp&s=9e37ec99fec8c2fe6a73b6748741209a5f1f5c4a', 'width': 108}, {'height': 121, 'url': '...
Best Free/Budget AI Coding Tools for Solo Developers?
2
I'm looking to set up an AI-assisted coding workflow but I'm working with basically no budget. I've been researching some options but would love to hear from people with actual experience. # Tools I'm considering: * **Windsurf** (free tier) - seems promising but not sure about limitations * **Aider AI** with local LL...
2025-07-03T17:17:34
https://www.reddit.com/r/LocalLLaMA/comments/1lqv8l8/best_freebudget_ai_coding_tools_for_solo/
DifferentNovel6494
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqv8l8
false
null
t3_1lqv8l8
/r/LocalLLaMA/comments/1lqv8l8/best_freebudget_ai_coding_tools_for_solo/
false
false
self
2
null
Convert your local machine into an mcp server to spawn local agents from remote endpoint
1
Open source repo to convert your local dev environment into a Docker MCP server... why? You can trigger claude code (or any local process of your desire) remotely as MCP tools... enjoy... https://github.com/systempromptio/systemprompt-code-orchestrator
2025-07-03T16:38:32
https://www.reddit.com/r/LocalLLaMA/comments/1lqu8q7/convert_your_local_machine_into_an_mcp_server_to/
AffectionateHoney992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqu8q7
false
null
t3_1lqu8q7
/r/LocalLLaMA/comments/1lqu8q7/convert_your_local_machine_into_an_mcp_server_to/
false
false
self
1
{'enabled': False, 'images': [{'id': 'VIrIhCMmoYFGmMcUKAFwK78soL2_xRhgyTLlYdXrUHo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VIrIhCMmoYFGmMcUKAFwK78soL2_xRhgyTLlYdXrUHo.png?width=108&crop=smart&auto=webp&s=4efe3e7203e00bdcae5096f6f8e06bc4d58c5179', 'width': 108}, {'height': 108, 'url': 'h...
What are some of the most mammoth homebuilds here? What have you done with them?
11
I'm curious to see how far the most hardcore home builds have gone.
2025-07-03T16:30:37
https://www.reddit.com/r/LocalLLaMA/comments/1lqu1om/what_are_some_of_the_most_mammoth_homebuilds_here/
Gary5Host9
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqu1om
false
null
t3_1lqu1om
/r/LocalLLaMA/comments/1lqu1om/what_are_some_of_the_most_mammoth_homebuilds_here/
false
false
self
11
null
Local vision LLM for (not really)real time processing.
2
Hello r/LocalLLaMA! I have a potentially challenging question for you all. I'm searching for a local vision LLM that's small and efficient enough to process a video stream in near real-time. I'm realistic – I know handling 60 FPS isn't feasible right now. But is there a solution that could process, say, 5-10 frames pe...
2025-07-03T16:25:45
https://www.reddit.com/r/LocalLLaMA/comments/1lqtxdp/local_vision_llm_for_not_reallyreal_time/
RIPT1D3_Z
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqtxdp
false
null
t3_1lqtxdp
/r/LocalLLaMA/comments/1lqtxdp/local_vision_llm_for_not_reallyreal_time/
false
false
self
2
null
[2507.00769] LitBench: A Benchmark and Dataset for Reliable Evaluation of Creative Writing
4
I found this interesting research paper examining making a small reward model (Llama 3.1 1B & 8B) for human preferences with respect to creative writing. It also evaluates the efficacy of existing proprietary and open-source models on agreeability with the ground truth. Claude 3.7 Sonnet was the best at 73%, with their...
2025-07-03T16:22:01
https://arxiv.org/abs/2507.00769
TheRealMasonMac
arxiv.org
1970-01-01T00:00:00
0
{}
1lqtu1t
false
null
t3_1lqtu1t
/r/LocalLLaMA/comments/1lqtu1t/250700769_litbench_a_benchmark_and_dataset_for/
false
false
default
4
null
Day 9/50: Building a Small Language Model from Scratch — Coding Rotary Positional Embeddings (RoPE)
23
https://preview.redd.it/…ories-15M-model)
2025-07-03T15:43:53
https://www.reddit.com/r/LocalLLaMA/comments/1lqsvmf/day_950_building_a_small_language_model_from/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqsvmf
false
null
t3_1lqsvmf
/r/LocalLLaMA/comments/1lqsvmf/day_950_building_a_small_language_model_from/
false
false
https://external-preview…f82e922385ee69a1
23
{'enabled': False, 'images': [{'id': '_5Ra3RoNdh4C2mkNryyCaQAOI7vzi8pOsjW50OxYvoo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_5Ra3RoNdh4C2mkNryyCaQAOI7vzi8pOsjW50OxYvoo.png?width=108&crop=smart&auto=webp&s=e02674ee596709cdf5dd29ebf3417f363bd55eda', 'width': 108}, {'height': 108, 'url': 'h...
AnythingLLM Vertex Ai
0
Hi, Unfortunately it doesn’t work. Correct endpoint API key as Vertex Admin Model : gemini-2.5-pro Always get error 404 no body… Thx
2025-07-03T15:43:40
https://www.reddit.com/r/LocalLLaMA/comments/1lqsvf6/anythingllm_vertex_ai/
OkReference5581
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqsvf6
false
null
t3_1lqsvf6
/r/LocalLLaMA/comments/1lqsvf6/anythingllm_vertex_ai/
false
false
self
0
null
AI2 releases OLMo 32B - Truly open source
257
2025-07-03T15:39:28
https://i.imgur.com/2zGShZY.png
biggflingbollar
i.imgur.com
1970-01-01T00:00:00
0
{}
1lqsrim
false
null
t3_1lqsrim
/r/LocalLLaMA/comments/1lqsrim/ai2_releases_olmo_32b_truly_open_source/
false
false
https://external-preview…9204c535d5258584
257
{'enabled': True, 'images': [{'id': '3Vd_xhZIHH3nsDqdxTxbcGJMXR35bC9rlb6_fbhajXE', 'resolutions': [{'height': 134, 'url': 'https://external-preview.redd.it/3Vd_xhZIHH3nsDqdxTxbcGJMXR35bC9rlb6_fbhajXE.png?width=108&crop=smart&auto=webp&s=8cba3f2c96187e9404c85908939d4bab5ba4adde', 'width': 108}, {'height': 269, 'url': 'h...
Potential for Research?
0
Hello I was going back and forth with ChatGPT and other models to try and find a research gap involving a two-step approach to LLM reasoning and clarity for users. This is essentially the question i came up with: Can fine-tuning an MLLM with dual-purpose instruction pairs—combining explicit refusals with grounded r...
2025-07-03T15:35:59
https://www.reddit.com/r/LocalLLaMA/comments/1lqsod4/potential_for_research/
RockNo8451
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqsod4
false
null
t3_1lqsod4
/r/LocalLLaMA/comments/1lqsod4/potential_for_research/
false
false
self
0
null
Looking into the timeline of early AI systems — a few strange overlaps?
1
[removed]
2025-07-03T15:09:13
https://www.reddit.com/r/LocalLLaMA/comments/1lqs07o/looking_into_the_timeline_of_early_ai_systems_a/
nyx_ara_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqs07o
false
null
t3_1lqs07o
/r/LocalLLaMA/comments/1lqs07o/looking_into_the_timeline_of_early_ai_systems_a/
false
false
self
1
null
I have made a True Reasoning LLM
203
So I have created an LLM with my own custom architecture. My architecture uses self-correction and long-term memory in vector states, which makes it more stable and perform a bit better. I used phi-3-mini for this project, and after finetuning the model with the custom architecture it achieved 98.17% on HumanEval ben...
2025-07-03T14:25:42
https://www.reddit.com/r/LocalLLaMA/comments/1lqqxhq/i_have_made_a_true_reasoning_llm/
moilanopyzedev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqqxhq
false
null
t3_1lqqxhq
/r/LocalLLaMA/comments/1lqqxhq/i_have_made_a_true_reasoning_llm/
false
false
self
203
{'enabled': False, 'images': [{'id': 'kKe9_1jTFhdqIiq4KwRPU-IXIZSPmcBkiqgJnTL3j8k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kKe9_1jTFhdqIiq4KwRPU-IXIZSPmcBkiqgJnTL3j8k.png?width=108&crop=smart&auto=webp&s=ed293b664cabbdfa93509e2abe75b146f835a45a', 'width': 108}, {'height': 116, 'url': 'h...
Kyutai Unmute (incl. TTS) released
77
Unmute github: [https://github.com/kyutai-labs/unmute](https://github.com/kyutai-labs/unmute) Unmute blog: [https://kyutai.org/next/unmute](https://kyutai.org/next/unmute) TTS blog with a demo: [https://kyutai.org/next/tts](https://kyutai.org/next/tts) TTS weights: [https://huggingface.co/collections/kyutai/text-to-...
2025-07-03T14:25:11
https://www.reddit.com/r/LocalLLaMA/comments/1lqqx16/kyutai_unmute_incl_tts_released/
rerri
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqqx16
false
null
t3_1lqqx16
/r/LocalLLaMA/comments/1lqqx16/kyutai_unmute_incl_tts_released/
false
false
self
77
{'enabled': False, 'images': [{'id': '_LMiFlTaq2TQieU_1mtU0mAeO3Zj4Y5uoHEf7nTG6Z8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_LMiFlTaq2TQieU_1mtU0mAeO3Zj4Y5uoHEf7nTG6Z8.png?width=108&crop=smart&auto=webp&s=bfc17e6601cf3dc507ecfc1402a53a0c14022fb2', 'width': 108}, {'height': 108, 'url': 'h...
Best local TEXT EXTRACTION model 24GB/48GB?
2
I've been liking Gemma3 but the text extraction performance is far, far behind any of the "chat" offerings. Can one do better?
2025-07-03T13:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1lqpvcb/best_local_text_extraction_model_24gb48gb/
Otherwise-Tiger3359
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqpvcb
false
null
t3_1lqpvcb
/r/LocalLLaMA/comments/1lqpvcb/best_local_text_extraction_model_24gb48gb/
false
false
self
2
null
[Upcoming Release & Feedback] A new 4B & 20B model, building on our SmallThinker work. Plus, a new hardware device to run them locally.
37
Hey guys, We're the startup team behind some of the projects you might be familiar with, including **PowerInfer (https://github.com/SJTU-IPADS/PowerInfer)** and **SmallThinker (https://huggingface.co/PowerInfer/SmallThinker-3B-Preview)**. The feedback from this community has been crucial, and we're excited to give you...
2025-07-03T13:28:43
https://www.reddit.com/r/LocalLLaMA/comments/1lqpm60/upcoming_release_feedback_a_new_4b_20b_model/
yzmizeyu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqpm60
false
null
t3_1lqpm60
/r/LocalLLaMA/comments/1lqpm60/upcoming_release_feedback_a_new_4b_20b_model/
false
false
self
37
{'enabled': False, 'images': [{'id': 'NTcsdPEUrzmYyre3A2GnLmnyWG2Gi3Ui77PBSAG39aI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NTcsdPEUrzmYyre3A2GnLmnyWG2Gi3Ui77PBSAG39aI.png?width=108&crop=smart&auto=webp&s=035b7f53a7b18dd7b7ecb29766539170f3263cdd', 'width': 108}, {'height': 108, 'url': 'h...
What kind of prompts *Always* give a 1 word response?
0
I'm writing a program that compares two text sections. Sometimes the OCR screws up so I can't just do an A==B comparison. For instance, I'd like the LLM to compare "Further" == "Father" and say "Same". But "15" == "30" and say "Different". I know the beefier ChatGPT models can do this, but I need to run this locally....
2025-07-03T13:23:16
https://www.reddit.com/r/LocalLLaMA/comments/1lqphqd/what_kind_of_prompts_always_give_a_1_word_response/
Waterbottles_solve
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqphqd
false
null
t3_1lqphqd
/r/LocalLLaMA/comments/1lqphqd/what_kind_of_prompts_always_give_a_1_word_response/
false
false
self
0
null
About RTX 3060 12GB running AI models
2
Speed Comparison Reference: https://youtu.be/VGyKwi9Rfhk Do you guys know if there's a workaround for pushing the RTX 3060 12GB faster with a ~32b model? Can it handle light text-to-speech + image generation within ~14b models? What are the most common issues you've run into with this GPU in AI stuff? Note: CPU is Ryzen ...
2025-07-03T13:21:36
https://www.reddit.com/r/LocalLLaMA/comments/1lqpggb/about_rtx_3060_12gb_running_ai_models/
WEREWOLF_BX13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqpggb
false
null
t3_1lqpggb
/r/LocalLLaMA/comments/1lqpggb/about_rtx_3060_12gb_running_ai_models/
false
false
self
2
{'enabled': False, 'images': [{'id': '1l9TvGsnHr-xVtadKook-5Kyhctt1Qy7XDW2ZTdFZDM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1l9TvGsnHr-xVtadKook-5Kyhctt1Qy7XDW2ZTdFZDM.jpeg?width=108&crop=smart&auto=webp&s=5524f2540197439b406429b48aeee3f676b290d0', 'width': 108}, {'height': 162, 'url': '...
Sharing my GPU for various self hosted services
0
I am building a small AI server to use with my self hosted services (Immich, Home Assistant Voice, Frigate, Open WebUI, LLM Vision) and I am wondering how to run all these services with the same GPU? I'd like to use Proxmox with LXCs (as I'm familiar with it) but now I think maybe I should just run Ubuntu or Debian w...
2025-07-03T12:55:18
https://www.reddit.com/r/LocalLLaMA/comments/1lqovf9/sharing_my_gpu_for_various_self_hosted_services/
BadgerBadgerBadger11
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqovf9
false
null
t3_1lqovf9
/r/LocalLLaMA/comments/1lqovf9/sharing_my_gpu_for_various_self_hosted_services/
false
false
self
0
null
I ran llama.cpp on a Raspberry Pi
7
2025-07-03T12:26:01
https://www.youtube.com/watch?v=TNxIIDkP2Zg
Risse
youtube.com
1970-01-01T00:00:00
0
{}
1lqo9lk
false
{'oembed': {'author_name': 'Krisseck', 'author_url': 'https://www.youtube.com/@Krisseck', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/TNxIIDkP2Zg?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pic...
t3_1lqo9lk
/r/LocalLLaMA/comments/1lqo9lk/i_ran_llamacpp_on_a_raspberry_pi/
false
false
https://external-preview…e2d129e55ced5423
7
{'enabled': False, 'images': [{'id': 'ZWQha1gxtezTow-pIhOuvJt_MLt9uakB-VthPyt0xWs', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZWQha1gxtezTow-pIhOuvJt_MLt9uakB-VthPyt0xWs.jpeg?width=108&crop=smart&auto=webp&s=953fe4c8b71ee7f0e2e63b136f684205c3a95f5d', 'width': 108}, {'height': 162, 'url': '...
Llama.cpp after Ollama for industry grade softwares
3
Hi everyone, I am a silent follower of all you wonderful folks. I have learnt to play around with Ollama and tie it into my application to make an AI application. Now I am planning to move to llama.cpp. Can someone suggest how I should approach it and what the learning path should be? TIA
2025-07-03T12:24:51
https://www.reddit.com/r/LocalLLaMA/comments/1lqo8q0/llamacpp_after_ollama_for_industry_grade_softwares/
bull_bear25
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqo8q0
false
null
t3_1lqo8q0
/r/LocalLLaMA/comments/1lqo8q0/llamacpp_after_ollama_for_industry_grade_softwares/
false
false
self
3
null
I want to split a model to run a portion of it on client and run the remaining layers on server. Is that possible?
1
I'm working on a privacy sensitive usecase that needs a LLM. Instead of relaying the entire prompt to the server, I want to run a few layers in the client and then send the intermediate state to the server to be run until completion. While I understand this doesn't exactly solve the privacy issue, this level of info...
2025-07-03T12:14:47
https://www.reddit.com/r/LocalLLaMA/comments/1lqo1bt/i_want_to_split_a_model_to_run_a_portion_of_it_on/
crazycodemonkey
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqo1bt
false
null
t3_1lqo1bt
/r/LocalLLaMA/comments/1lqo1bt/i_want_to_split_a_model_to_run_a_portion_of_it_on/
false
false
self
1
null
I can't believe it actually runs - Qwen 235b @ 16GB VRAM
244
Inspired by this post: [https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running\_qwen3\_235b\_on\_a\_single\_3060\_12gb\_6\_ts/](https://www.reddit.com/r/LocalLLaMA/comments/1ki3sze/running_qwen3_235b_on_a_single_3060_12gb_6_ts/) I decided to try my luck with Qwen 235b so downloaded Unsloth's Q2XL. I've got 96GB...
2025-07-03T12:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1lqnwih/i_cant_believe_it_actually_runs_qwen_235b_16gb/
Secure_Reflection409
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqnwih
false
null
t3_1lqnwih
/r/LocalLLaMA/comments/1lqnwih/i_cant_believe_it_actually_runs_qwen_235b_16gb/
false
false
self
244
null
Hallucination prevention framework
1
Hey everyone, I'm currently working on my master's thesis, and with my supervisor we figured that a real-time, user-rule-based hallucination prevention framework is something interesting to work on. For now, I built a custom RegexLogitsProcessor class that takes a regex pattern as an input and sets the logits to infinity and t...
2025-07-03T12:06:31
https://www.reddit.com/r/LocalLLaMA/comments/1lqnvfr/hallucination_prevention_framework/
lebe1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqnvfr
false
null
t3_1lqnvfr
/r/LocalLLaMA/comments/1lqnvfr/hallucination_prevention_framework/
false
false
self
1
{'enabled': False, 'images': [{'id': 'J6mi4u4VW-V8D0Kj94jzflHiQaSrM_BdcCxJ4Xlh4RU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J6mi4u4VW-V8D0Kj94jzflHiQaSrM_BdcCxJ4Xlh4RU.png?width=108&crop=smart&auto=webp&s=87d8479c331c36beeb3bfedb2bdc6a12c0645526', 'width': 108}, {'height': 108, 'url': 'h...
Hey r/LocalLLaMA! We made evolutionary model merging feasible on consumer GPUs – meet Mergenetic 🧬
25
Over the past year, we’ve learned a lot from this community while exploring model merging. Now we’re giving back with **Mergenetic**, an open-source library that makes *evolutionary* merging practical without needing big hardware. What it does: * Evolves high-quality LLM merges using evolutionary algorithms * Support...
2025-07-03T11:40:36
https://www.reddit.com/r/LocalLLaMA/comments/1lqndyy/hey_rlocalllama_we_made_evolutionary_model/
leviatan0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqndyy
false
null
t3_1lqndyy
/r/LocalLLaMA/comments/1lqndyy/hey_rlocalllama_we_made_evolutionary_model/
false
false
self
25
null
AIDC-AI/Ovis-U1-3B: unified model integrating multimodal understanding, text-to-image generation, and image editing in a single framework
58
2025-07-03T11:39:08
https://huggingface.co/AIDC-AI/Ovis-U1-3B
nullmove
huggingface.co
1970-01-01T00:00:00
0
{}
1lqnczx
false
null
t3_1lqnczx
/r/LocalLLaMA/comments/1lqnczx/aidcaiovisu13b_unified_model_integrating/
false
false
default
58
{'enabled': False, 'images': [{'id': 'HVzGeHB569N7L3pa-jqHJvQIdQGHXW4_YiWgKAjHt18', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HVzGeHB569N7L3pa-jqHJvQIdQGHXW4_YiWgKAjHt18.png?width=108&crop=smart&auto=webp&s=0dbe24b46aef7931d46e3c6ae47f1acbde8eede0', 'width': 108}, {'height': 116, 'url': 'h...
Ended up in the ER because the cum stack I ran by Chat-GPT for dangerous interactions ended up poisoning me.
0
Sitting in the hospital right now waiting for the doctors to decide if they want to keep me overnight or let me go. I haven't told my family yet, and I'm not sure if I will. Backstory: 26-year-old man. Just had a general checkup with my GP a little over a week ago. Blood sugar, blood pressure, ECG, and resting heart rate were...
2025-07-03T11:27:26
https://www.reddit.com/r/LocalLLaMA/comments/1lqn5mr/ended_up_in_the_er_because_the_cum_stack_i_ran_by/
spirexlight
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqn5mr
false
null
t3_1lqn5mr
/r/LocalLLaMA/comments/1lqn5mr/ended_up_in_the_er_because_the_cum_stack_i_ran_by/
false
false
self
0
null
How I Use MLflow 3.1 to Bring Observability to Multi-Agent AI Applications
1
[removed]
2025-07-03T11:23:51
https://www.reddit.com/r/LocalLLaMA/comments/1lqn3dr/how_i_use_mlflow_31_to_bring_observability_to/
qtalen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqn3dr
false
null
t3_1lqn3dr
/r/LocalLLaMA/comments/1lqn3dr/how_i_use_mlflow_31_to_bring_observability_to/
false
false
https://b.thumbs.redditm…t8ENIwGQ8m_E.jpg
1
null
Best way to get an LLM to sound like me? Prompt eng or Finetune?
11
Down a deep rabbit hole of prompt eng, fine tuning w Unsloth, but not getting any great results. My use case: Creating social content which sounds like me, not AI slop. What's the best way to do this nowadays? Would appreciate any direction
2025-07-03T10:57:32
https://www.reddit.com/r/LocalLLaMA/comments/1lqmmv2/best_way_to_get_an_llm_to_sound_like_me_prompt/
RelevantPractice2074
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqmmv2
false
null
t3_1lqmmv2
/r/LocalLLaMA/comments/1lqmmv2/best_way_to_get_an_llm_to_sound_like_me_prompt/
false
false
self
11
null
Some engineers say this AI predates its launch — what do you think
1
[removed]
2025-07-03T10:42:48
https://www.reddit.com/r/LocalLLaMA/comments/1lqme1g/some_engineers_say_this_ai_predates_its_launch/
nyx_ara_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqme1g
false
null
t3_1lqme1g
/r/LocalLLaMA/comments/1lqme1g/some_engineers_say_this_ai_predates_its_launch/
false
false
self
1
null
Anyone here run llama4 scout/Maverick with 1 million to 10 million context?
16
### Anyone here run llama4 with 1 million to 10 million context? Just curious if anyone has. If yes please list your software platform (i.e. vLLM, Ollama, llama.cpp, etc), your GPU count and make models. What are vram/ram requirements for 10m context?
2025-07-03T10:38:22
https://www.reddit.com/r/LocalLLaMA/comments/1lqmbh3/anyone_here_run_llama4_scoutmaverick_with_1/
night0x63
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqmbh3
false
null
t3_1lqmbh3
/r/LocalLLaMA/comments/1lqmbh3/anyone_here_run_llama4_scoutmaverick_with_1/
false
false
self
16
null
Yappp - Yet Another Poor Peasent Post
26
So I wanted to share my experience and hear about yours. Hardware : GPU : 3060 12GB CPU : i5-3060 RAM : 32GB Front-end : Koboldcpp + open-webui Use cases : General Q&A, Long context RAG, Humanities, Summarization, Translation, code. I've been testing quite a lot of models recently, especially when I finally real...
2025-07-03T10:06:40
https://www.reddit.com/r/LocalLLaMA/comments/1lqlsyb/yappp_yet_another_poor_peasent_post/
needthosepylons
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqlsyb
false
null
t3_1lqlsyb
/r/LocalLLaMA/comments/1lqlsyb/yappp_yet_another_poor_peasent_post/
false
false
self
26
null
Which cloud compute are you using?
0
So I host DeepSeek and other models locally, but I am limited to the speed of my machine. Is anyone subscribed to cloud providers where DeepSeek and other models are hosted, where they'll just give you an API key to use them?
2025-07-03T09:36:36
https://www.reddit.com/r/LocalLLaMA/comments/1lqlcbu/which_cloud_compute_are_you_using/
rushblyatiful
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqlcbu
false
null
t3_1lqlcbu
/r/LocalLLaMA/comments/1lqlcbu/which_cloud_compute_are_you_using/
false
false
self
0
null
Jan now supports MCP servers as an experimental feature
101
Hey, this is Emre from the Jan team. We've been testing MCP servers in Jan Beta, and last week we promoted the feature to the stable build (v0.6.2) as an experimental feature, and ditched Jan Beta. So Jan is now experimenting with MCP servers. How to try MCP in Jan: * Settings -> General -> toggle "Experimen...
2025-07-03T08:44:41
https://v.redd.it/8sdnjxd6emaf1
eck72
v.redd.it
1970-01-01T00:00:00
0
{}
1lqkknh
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8sdnjxd6emaf1/DASHPlaylist.mpd?a=1754124297%2CMzNlZjBmOTU0MWIzOGE2NjRiYWM5ZDg2YTdjMzM5ZTVhOWZmY2ZjN2E3MjM3N2JmOGMxYmQwY2VkMmZlNmNmOQ%3D%3D&v=1&f=sd', 'duration': 13, 'fallback_url': 'https://v.redd.it/8sdnjxd6emaf1/DASH_1080.mp4?source=fallback', 'h...
t3_1lqkknh
/r/LocalLLaMA/comments/1lqkknh/jan_now_supports_mcp_servers_as_an_experimental/
false
false
https://external-preview…553d10bb25844acb
101
{'enabled': False, 'images': [{'id': 'azNjM2ZnZTZlbWFmMSLAmC_rv_7-7mec9KjG11hCNT_NOpmPUzvjwUvmxWxA', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/azNjM2ZnZTZlbWFmMSLAmC_rv_7-7mec9KjG11hCNT_NOpmPUzvjwUvmxWxA.png?width=108&crop=smart&format=pjpg&auto=webp&s=99ba372d8e15e8f7353b28e194025a6b24b15...
Looking for GPU advice for local LLM server (GIGABYTE G292-Z20 R1)
1
[removed]
2025-07-03T08:41:16
https://www.reddit.com/r/LocalLLaMA/comments/1lqkiwm/looking_for_gpu_advice_for_local_llm_server/
Neat_Plantain2891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqkiwm
false
null
t3_1lqkiwm
/r/LocalLLaMA/comments/1lqkiwm/looking_for_gpu_advice_for_local_llm_server/
false
false
self
1
null
Looking for GPU advice for local LLM server (GIGABYTE G292-Z20 R1)
1
[removed]
2025-07-03T08:35:34
https://www.reddit.com/r/LocalLLaMA/comments/1lqkfzr/looking_for_gpu_advice_for_local_llm_server/
Neat_Plantain2891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqkfzr
false
null
t3_1lqkfzr
/r/LocalLLaMA/comments/1lqkfzr/looking_for_gpu_advice_for_local_llm_server/
false
false
self
1
null
The problem of the LLM data retrieval
2
If you have an LLM-based product in your business, then you might be familiar with LLM retrieval methods that allow users to search data and provide it to the LLM to reach a desired answer. However, there is no perfect solution yet, and each method has its own pros and cons. In the end, there is a tradeoff between late...
2025-07-03T08:11:15
https://www.reddit.com/r/LocalLLaMA/comments/1lqk3e9/the_problem_of_the_llm_data_retrieval/
HopefulReveal2818
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqk3e9
false
null
t3_1lqk3e9
/r/LocalLLaMA/comments/1lqk3e9/the_problem_of_the_llm_data_retrieval/
false
false
self
2
null
Small VisualLM for Data/Insight Extraction from Graphs & Charts
1
I am currently looking for a locally deployable model that can help me extract insights/values from graphical representations such as you would find in management or investor presentations. While grabbing financials from tables and regular text does not pose an issue, I struggle to find a small model that I can...
2025-07-03T08:07:04
https://www.reddit.com/r/LocalLLaMA/comments/1lqk18o/small_visuallm_for_datainsight_extraction_from/
Possible-Tomatillo80
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqk18o
false
null
t3_1lqk18o
/r/LocalLLaMA/comments/1lqk18o/small_visuallm_for_datainsight_extraction_from/
false
false
https://b.thumbs.redditm…LwHicOv-aLFg.jpg
1
null
Looking for GPU advice for local LLM server (GIGABYTE G292-Z20 R1)
1
I'm planning to buy a GIGABYTE G292-Z20 server (32GB RAM) to run local LLMs. I’ll have 4–5 concurrent users, but only one model (16B–32B params) running at a time — likely through Ollama + Open WebUI. I originally considered used AMD MI50s, but ROCm no longer supports them ([link](https://rocm.docs.amd.com/projects/in...
2025-07-03T07:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1lqjo29/looking_for_gpu_advice_for_local_llm_server/
Neat_Plantain2891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqjo29
false
null
t3_1lqjo29
/r/LocalLLaMA/comments/1lqjo29/looking_for_gpu_advice_for_local_llm_server/
false
false
self
1
null
Okay, I love arguing with me LocalLaMA and feeling like I'm winning. Am I strange?
0
I feel I can easily tie it in inconsistencies and knots with basic debating techniques (e.g. false binaries). Don't make me feel alone...
2025-07-03T07:20:38
https://www.reddit.com/r/LocalLLaMA/comments/1lqjccq/okay_i_love_arguing_with_me_locallama_and_feeling/
wandering_cat_ninja
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqjccq
false
null
t3_1lqjccq
/r/LocalLLaMA/comments/1lqjccq/okay_i_love_arguing_with_me_locallama_and_feeling/
false
false
self
0
null
Sharing new inference engines I got to know recently
36
[https://github.com/cactus-compute/cactus](https://github.com/cactus-compute/cactus) [https://github.com/jafioti/luminal](https://github.com/jafioti/luminal) ( Rust ) Cactus seems to start from a fork of llama.cpp (similar to Ollama). Luminal is more interesting since it rebuilds everything. GeoHot from Tinygrad i...
2025-07-03T07:04:38
https://www.reddit.com/r/LocalLLaMA/comments/1lqj3eq/sharing_new_inference_engines_i_got_to_know/
AggressiveHunt2300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqj3eq
false
null
t3_1lqj3eq
/r/LocalLLaMA/comments/1lqj3eq/sharing_new_inference_engines_i_got_to_know/
false
false
self
36
{'enabled': False, 'images': [{'id': 'amojX8QP995JVEP8LciiZjyYRK-sWBDU7NkFXvuu68M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/amojX8QP995JVEP8LciiZjyYRK-sWBDU7NkFXvuu68M.png?width=108&crop=smart&auto=webp&s=6566c025c702b9994234a8b5ba87dfd1d264a9bb', 'width': 108}, {'height': 108, 'url': 'h...
DeepSWE-Preview | 59.0% on SWE-Bench-Verified with test-time scaling
126
By training from scratch with only reinforcement learning (RL), DeepSWE-Preview with test time scaling (TTS) solves 59% of problems, beating all open-source agents by a large margin. We note that DeepSWE-Preview’s Pass@1 performance (42.2%, averaged over 16 runs) is one of the best for open-weights coding agents. [htt...
2025-07-03T06:08:33
https://huggingface.co/agentica-org/DeepSWE-Preview
touhidul002
huggingface.co
1970-01-01T00:00:00
0
{}
1lqi863
false
null
t3_1lqi863
/r/LocalLLaMA/comments/1lqi863/deepswepreview_590_on_swebenchverified_with/
false
false
default
126
{'enabled': False, 'images': [{'id': 'wjfa881JWf1DDoO3xnGtkXTTrD_14geAqCNLc8luGXY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wjfa881JWf1DDoO3xnGtkXTTrD_14geAqCNLc8luGXY.png?width=108&crop=smart&auto=webp&s=b4fb09c54c0adff13ca3fb65fb5340199f3025b3', 'width': 108}, {'height': 116, 'url': 'h...
2080 TI 22GB or 3080 20GB
3
As per the title, which one is better? Both in raw performance, and in price per performance. The 2080Ti 22GB is 350 usd while the 3080 20gb is 450 usd. Where I am, 3090s still go for 1000+ usd so that’s not a good option.
2025-07-03T06:04:19
https://www.reddit.com/r/LocalLLaMA/comments/1lqi5q0/2080_ti_22gb_or_3080_20gb/
opoot_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqi5q0
false
null
t3_1lqi5q0
/r/LocalLLaMA/comments/1lqi5q0/2080_ti_22gb_or_3080_20gb/
false
false
self
3
null
TNG has released DeepSeek-TNG R1T2 Chimera
5
TNG has released DeepSeek-TNG-R1T2-Chimera, a new Assembly-of-Experts model based on R1-0528, R1, and V3-0324. With respect to output token length, it is about 20 % faster than R1 and more than twice as fast as R1-0528. Performance in benchmarks such as GPQA Diamond and AIME-24/25 is quite a bit above the original R1 ...
2025-07-03T05:35:46
https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera
LasagnaSpirit
huggingface.co
1970-01-01T00:00:00
0
{}
1lqhozs
false
null
t3_1lqhozs
/r/LocalLLaMA/comments/1lqhozs/tng_has_released_deepseektng_r1t2_chimera/
false
false
https://external-preview…de4b90b54d16e385
5
{'enabled': False, 'images': [{'id': '-1U2Wayag-iBBcpePS8kTRmzl3sD6ygHTwkL96tkXhU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-1U2Wayag-iBBcpePS8kTRmzl3sD6ygHTwkL96tkXhU.png?width=108&crop=smart&auto=webp&s=fbd48e87f3899f1a905aa35dd70eedbab3ddce51', 'width': 108}, {'height': 116, 'url': 'h...
Any updates on Llama models from Meta?
13
It's been a while and llama maverick and scout are still shite. I have tried nearly every provider at this point. Any updates if they're gonna launch any improvements to these models or any new reasoning models? How are they fucking up this bad? Near unlimited money, resources, researchers. What are they doing wron...
2025-07-03T05:18:42
https://www.reddit.com/r/LocalLLaMA/comments/1lqhers/any_updates_on_llama_models_from_meta/
True_Requirement_891
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqhers
false
null
t3_1lqhers
/r/LocalLLaMA/comments/1lqhers/any_updates_on_llama_models_from_meta/
false
false
self
13
null
No love for these new models?
203
Dots, Minimax, Hunyuan, Ernie. I’m not seeing much enthusiasm in the community for these models like there was for Qwen and Deepseek. Sorry, just wanted to put this out here.
2025-07-03T05:03:21
https://www.reddit.com/r/LocalLLaMA/comments/1lqh55j/no_love_for_these_new_models/
No_Conversation9561
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqh55j
false
null
t3_1lqh55j
/r/LocalLLaMA/comments/1lqh55j/no_love_for_these_new_models/
false
false
self
203
null
Looking for a Technical Co-Founder to Lead AI Development
0
For the past few months, I’ve been developing [ProseBird](https://prosebird.com)—originally a collaborative online teleprompter—as a solo technical founder, and recently decided to pivot to a script-based AI speech coaching tool. Besides technical and commercial feasibility, making this pivot really hinges on finding ...
2025-07-03T03:02:28
https://www.reddit.com/r/LocalLLaMA/comments/1lqeya7/looking_for_a_technical_cofounder_to_lead_ai/
Puzzleheaded-Cow7240
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqeya7
false
null
t3_1lqeya7
/r/LocalLLaMA/comments/1lqeya7/looking_for_a_technical_cofounder_to_lead_ai/
false
false
self
0
null
best local llm for 250,000 json with 6000 words each
0
as the title says, i have 250,000 files of 6000 words each and i want to be able to query them. they are legal documents, what model would run flawlessly on my mac air m2. thanks
2025-07-03T02:48:30
https://www.reddit.com/r/LocalLLaMA/comments/1lqeogc/best_local_llm_for_250000_json_with_6000_words/
Substantial-Gear1150
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqeogc
false
null
t3_1lqeogc
/r/LocalLLaMA/comments/1lqeogc/best_local_llm_for_250000_json_with_6000_words/
false
false
self
0
null
Kwai-Keye/Keye-VL-8B-Preview · Hugging Face
29
Paper: [https://arxiv.org/abs/2507.01949](https://arxiv.org/abs/2507.01949) Project Page: [https://kwai-keye.github.io/](https://kwai-keye.github.io/) Code: [https://github.com/Kwai-Keye/Keye](https://github.com/Kwai-Keye/Keye) >While Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities on st...
2025-07-03T02:29:26
https://huggingface.co/Kwai-Keye/Keye-VL-8B-Preview
ninjasaid13
huggingface.co
1970-01-01T00:00:00
0
{}
1lqebbv
false
null
t3_1lqebbv
/r/LocalLLaMA/comments/1lqebbv/kwaikeyekeyevl8bpreview_hugging_face/
false
false
https://external-preview…5248e7dc7f2acd59
29
{'enabled': False, 'images': [{'id': 'BS_TDRa2LGX8FEj4q5942WEB0EiwaA6wVSbdK_ycPzI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BS_TDRa2LGX8FEj4q5942WEB0EiwaA6wVSbdK_ycPzI.png?width=108&crop=smart&auto=webp&s=d1bc04f5722002b089d9f495fa7cdaf7f3700c9e', 'width': 108}, {'height': 116, 'url': 'h...
PrivateScribe.ai - a fully local, MIT licensed AI transcription platform
140
Excited to share my first open source project - PrivateScribe.ai. I’m an ER physician + developer who has been riding the LLM wave since GPT-3. Ambient dictation and transcription will fundamentally change medicine and was already working good enough in my GPT-3.5 turbo prototypes. Nowadays there are probably 20+ star...
2025-07-03T01:40:31
http://www.privatescribe.ai
SecondPathDev
privatescribe.ai
1970-01-01T00:00:00
0
{}
1lqdcgr
false
null
t3_1lqdcgr
/r/LocalLLaMA/comments/1lqdcgr/privatescribeai_a_fully_local_mit_licensed_ai/
false
false
https://external-preview…8a50cfc85df401dd
140
{'enabled': False, 'images': [{'id': 'xOjECTmYaItV48u6bH1fNMUAZM2OuhSKaRpzWzNnIaE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xOjECTmYaItV48u6bH1fNMUAZM2OuhSKaRpzWzNnIaE.png?width=108&crop=smart&auto=webp&s=661abd40d4b281a21242971a11e9c626c59267d5', 'width': 108}, {'height': 216, 'url': '...
Local text-to-speech generator for Linux?
1
I'd like to generate voiceovers for info videos that I'm creating. My own voice isn't that great and I don't have a good mic. I do, however, have an nvidia card that I've been using to generate images. I've also been able to run an llm locally, so I imagine that my machine is capable of running a text-to-speech ai...
2025-07-03T00:48:54
https://www.reddit.com/r/LocalLLaMA/comments/1lqcbfp/local_texttospeech_generator_for_inux/
ImpossibleBritches
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqcbfp
false
null
t3_1lqcbfp
/r/LocalLLaMA/comments/1lqcbfp/local_texttospeech_generator_for_inux/
false
false
self
1
null
Thanks to you guys, I built an open-source tool to give our local LLMs eyes and ears on our PCs.
1
[removed]
2025-07-03T00:22:20
[deleted]
1970-01-01T00:00:00
0
{}
1lqbs5i
false
null
t3_1lqbs5i
/r/LocalLLaMA/comments/1lqbs5i/thanks_to_you_guys_i_built_an_opensource_tool_to/
false
false
default
1
null
DeepSeek-TNG-R1T2-Chimera - 200% faster than R1-0528 & 20% faster than R1
207
2025-07-03T00:15:16
https://huggingface.co/tngtech/DeepSeek-TNG-R1T2-Chimera
TKGaming_11
huggingface.co
1970-01-01T00:00:00
0
{}
1lqbmwa
false
null
t3_1lqbmwa
/r/LocalLLaMA/comments/1lqbmwa/deepseektngr1t2chimera_200_faster_than_r10528_20/
false
false
default
207
{'enabled': False, 'images': [{'id': '-1U2Wayag-iBBcpePS8kTRmzl3sD6ygHTwkL96tkXhU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-1U2Wayag-iBBcpePS8kTRmzl3sD6ygHTwkL96tkXhU.png?width=108&crop=smart&auto=webp&s=fbd48e87f3899f1a905aa35dd70eedbab3ddce51', 'width': 108}, {'height': 116, 'url': 'h...
Need help in deciding llm
1
I am completely new to this. I was planning to install a local LLM and have it read my study material so I can quickly ask for definitions, etc. I only really want to use it as an index and don't need it to solve any problems. Which LLM should I try out first? My current setup is : CPU - i5-12450H GPU - Nvidia...
2025-07-02T23:07:50
https://www.reddit.com/r/LocalLLaMA/comments/1lqa7cd/need_help_in_deciding_llm/
Atriays
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lqa7cd
false
null
t3_1lqa7cd
/r/LocalLLaMA/comments/1lqa7cd/need_help_in_deciding_llm/
false
false
self
1
null
Help needed: finetuning Qwen2.5 VL with mlx-vlm
1
Hi, I’m having a hard time trying to fine tune qwen2.5 VL (from mlx-community/Qwen2.5-VL-7B-Instruct-4bit) using mlx-vlm on my MacBook. I’ve spent countless hours trying different solutions but I always end up stuck with a new error… Could anyone provide a notebook that is working so that I can adapt it with my needs...
2025-07-02T22:56:54
https://www.reddit.com/r/LocalLLaMA/comments/1lq9yjy/help_needed_finetuning_qwen25_vl_with_moxvol/
Gladstone025
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq9yjy
false
null
t3_1lq9yjy
/r/LocalLLaMA/comments/1lq9yjy/help_needed_finetuning_qwen25_vl_with_moxvol/
false
false
self
1
null
Machine Learning (ML) Cheat Sheet Material
25
* [Linear Algebra Cheat Sheet](https://macro.com/app/pdf/5aa2375d-a8f6-4430-93f9-a7e4aba55690) * [Super VIP Cheatsheet: Artificial Intelligence](https://macro.com/app/pdf/5be153e6-6dd3-4eef-adbf-554d53afa3ed) * [VIP Cheatsheet: Transformers and Large Language Models (LLMs)](https://macro.com/app/pdf/d8770868-9cbe-4bf8-...
2025-07-02T22:49:05
https://www.reddit.com/r/LocalLLaMA/comments/1lq9sai/machine_learning_ml_cheat_sheet_material/
LeveredRecap
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1lq9sai
false
null
t3_1lq9sai
/r/LocalLLaMA/comments/1lq9sai/machine_learning_ml_cheat_sheet_material/
false
false
self
25
null