title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Fastest Small Model (<4B) | 2 | I am working on a paper that required me to do RL post-training on my 4070. Since a 4070 isn't that powerful, I am looking to do it on a fast but small model. I know the Qwen update was good, but I don't know if they made smaller distills. Any models you recommend? | 2025-08-14T22:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mqfhgy/fastest_small_model_4b/ | Trevor050 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqfhgy | false | null | t3_1mqfhgy | /r/LocalLLaMA/comments/1mqfhgy/fastest_small_model_4b/ | false | false | self | 2 | null |
R9700 Just Arrived | 556 | Excited to try it out, haven't seen much info on it yet. Figured some YouTuber would get it before me. | 2025-08-14T22:07:30 | TheyreEatingTheGeese | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqewha | false | null | t3_1mqewha | /r/LocalLLaMA/comments/1mqewha/r9700_just_arrived/ | false | false | default | 556 | {'enabled': True, 'images': [{'id': 'nho2jy0962jf1', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/nho2jy0962jf1.jpeg?width=108&crop=smart&auto=webp&s=6ef18f1b8216b62440d1f742874e66b5534926fc', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/nho2jy0962jf1.jpeg?width=216&crop=smart&auto=w... | |
Meta’s AI Boss Poached Another OpenAI Star... “you might recognize them from the livestreams” | 0 | Alexandr Wang just nabbed yet another ex‑OpenAI name and casually dropped, “some may recognize them from recent livestreams.” Guess the hiring pipeline is now… YouTube rewind. Who’s next, the host?
https://preview.redd.it/zayjp1wu42jf1.jpg?width=1178&format=pjpg&auto=webp&s=3c61157083c6f0b8dc00aad1dbe37ab21fb9e717
ht... | 2025-08-14T22:00:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mqeq8z/metas_ai_boss_poached_another_openai_star_you/ | AskGpts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqeq8z | false | null | t3_1mqeq8z | /r/LocalLLaMA/comments/1mqeq8z/metas_ai_boss_poached_another_openai_star_you/ | false | false | 0 | null | |
Best reasoning LLM to run in Msty on an RX 6800 XT | 0 | I am looking at gpt-oss or Magistral; is there a recommendation here? Even a non-reasoning model would be fine. | 2025-08-14T21:41:48 | https://www.reddit.com/r/LocalLLaMA/comments/1mqe7ve/best_reasoning_llm_to_run_on_msty_on_rx_6800_xt/ | Rare_Ad8942 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqe7ve | false | null | t3_1mqe7ve | /r/LocalLLaMA/comments/1mqe7ve/best_reasoning_llm_to_run_on_msty_on_rx_6800_xt/ | false | false | self | 0 | null |
ChatGPT Python Sandbox requirements.txt | 0 | Not sure if this is useful to anyone, and I realize it is not strictly Local AI-related, but I was playing around with ChatGPT earlier, and I got it to show me the contents of a requirements.txt file that is stored in the root directory of its sandbox.
I noticed it was writing files in /mnt/data, and I was curious to ... | 2025-08-14T21:30:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mqdwqg/chatgpt_python_sandbox_requirementstxt/ | yuicebox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqdwqg | false | null | t3_1mqdwqg | /r/LocalLLaMA/comments/1mqdwqg/chatgpt_python_sandbox_requirementstxt/ | false | false | 0 | null | |
continue.dev agent mode + ??agent | 3 | I've tried a variety of current models (mostly local) together with continue's agent mode, but I'm struggling to get anything remotely useful due to misinterpretations between continue and the model.
Has anyone achieved a good working environment - particularly in the 8B to a max of 14B parameter space & 32-128k context? T... | 2025-08-14T20:51:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mqcuy1/continuedev_agent_mode_agent/ | planetf1a | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqcuy1 | false | null | t3_1mqcuy1 | /r/LocalLLaMA/comments/1mqcuy1/continuedev_agent_mode_agent/ | false | false | self | 3 | null |
Just a reminder that Grok 2 should be released open source by like tomorrow (based on Mr. Musk’s tweet from last week). | 662 | 2025-08-14T20:50:02 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mqctep | false | null | t3_1mqctep | /r/LocalLLaMA/comments/1mqctep/just_a_reminder_that_grok_2_should_be_released/ | false | false | 662 | {'enabled': True, 'images': [{'id': 'ja_uCQ5aB6MPBfyCRHdZt_FgrNKZPL5vyjFIQGRpjZc', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/hsaoxskfs1jf1.jpeg?width=108&crop=smart&auto=webp&s=426550b53f508e1214a8ac93d7c1a5e94f2dc5a7', 'width': 108}, {'height': 188, 'url': 'https://preview.redd.it/hsaoxskfs1jf1.jp... | |||
Measuring Thinking Efficiency in Reasoning Models: The Missing Benchmark | 1 | 2025-08-14T20:48:08 | https://nousresearch.com/measuring-thinking-efficiency-in-reasoning-models-the-missing-benchmark/ | cpldcpu | nousresearch.com | 1970-01-01T00:00:00 | 0 | {} | 1mqcri7 | false | null | t3_1mqcri7 | /r/LocalLLaMA/comments/1mqcri7/measuring_thinking_efficiency_in_reasoning_models/ | false | false | default | 1 | null | |
The latest ChatGPT is supposed to be ‘PhD level’ smart. It can’t even label a map | 0 | 2025-08-14T20:31:09 | https://edition.cnn.com/2025/08/14/business/chatgpt-rollout-problems | pulse77 | edition.cnn.com | 1970-01-01T00:00:00 | 0 | {} | 1mqcawj | true | null | t3_1mqcawj | /r/LocalLLaMA/comments/1mqcawj/the_latest_chatgpt_is_supposed_to_be_phd_level/ | false | false | default | 0 | null | |
Tools not working with continue.dev in VSCode for local LLaMA | 0 | I am trying to get the tools working but am not able to get **agent/plan** mode enabled. I am not sure what I am doing wrong. The models are loading successfully in LM Studio, and the chat features work. I am mostly using gpt-oss for testing. Any assistance would be appreciated.
name: local-lama-assistant
versio... | 2025-08-14T20:29:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mqc9pr/tools_not_working_with_continuedev_in_vscode_for/ | Columnexco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqc9pr | false | null | t3_1mqc9pr | /r/LocalLLaMA/comments/1mqc9pr/tools_not_working_with_continuedev_in_vscode_for/ | false | false | self | 0 | null |
Question about PC upgrade for local inference | 3 | Hi!
I'm running a quite old PC with a Ryzen 9 3900X, 2x16GB 3200MHz DDR4, and an RX 470. I'm not satisfied with anything I can run locally at a decent speed (I have tried Qwen3 30B and Qwen3-Coder 30B, Qwen2.5-Coder 8B and 14B, DeepSeek Coder V2 Lite, and the smaller dense Qwen3 models) and I intend to use a local LLM for work, because I... | 2025-08-14T20:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mqc6mo/question_about_pc_upgrade_for_local_inference/ | Delicious-Grab-9406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqc6mo | false | null | t3_1mqc6mo | /r/LocalLLaMA/comments/1mqc6mo/question_about_pc_upgrade_for_local_inference/ | false | false | self | 3 | null |
Optimizing Text gen webui (oobabooga) for MoE models (Qwen3-235B, GLM 4.5) | 6 | I'm currently playing around with Qwen3-235B, GLM 4.5, and GLM 4.5 Air on Text gen webui, and currently the best way to run these models I have found is having "override-tensor=([0-4]+).ffn_.*_exps.=CPU" set in the extra flags section, adjusting the number of experts on the CPU, and maxing out the gpu-layers.
S... | 2025-08-14T20:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/1mqc0bv/optimizing_text_gen_webui_oobabooga_for_moe/ | till180 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqc0bv | false | null | t3_1mqc0bv | /r/LocalLLaMA/comments/1mqc0bv/optimizing_text_gen_webui_oobabooga_for_moe/ | false | false | self | 6 | null |
Paddler 2.0: Open-source LLMOps platform for hosting LLMs in your own infrastructure (for teams and orgs primarily) | 3 | We just released the 2.0 version, which is a major upgrade. We rewrote a big chunk of llama-server, and added a lot of extra features for maintainability, ease of use, and such. We also added a new web admin panel to manage all the features. Go check it out. :) It's open source. | 2025-08-14T20:17:06 | https://www.youtube.com/watch?v=aT6QCL8lk08 | mcharytoniuk | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1mqbwu6 | false | {'oembed': {'author_name': 'Intentee', 'author_url': 'https://www.youtube.com/@intentee', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/aT6QCL8lk08?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pic... | t3_1mqbwu6 | /r/LocalLLaMA/comments/1mqbwu6/paddler_20_opensource_llmops_platform_for_hosting/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': '-ybYcOBpDdmaQGsAR1BUc7zquKbAqHa0gvHL7UeJPRk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-ybYcOBpDdmaQGsAR1BUc7zquKbAqHa0gvHL7UeJPRk.jpeg?width=108&crop=smart&auto=webp&s=e2229efb577d12023f7764cda81819ab4747945c', 'width': 108}, {'height': 162, 'url': '... |
Using GPT-5 (High) - API Help | 0 | Hello, I have a task I would like to use GPT-5 with high reasoning effort for via the API. My specific use case is data analysis with PDFs. I don't want to use the ChatGPT web/app due to the awful context limit and the inability to control the reasoning effort with GPT-5. Problem is I am not very technical and found s... | 2025-08-14T20:07:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mqbn9o/using_gpt5_high_api_help/ | LArandomthrow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqbn9o | false | null | t3_1mqbn9o | /r/LocalLLaMA/comments/1mqbn9o/using_gpt5_high_api_help/ | false | false | self | 0 | null |
Running chats and agents internally | 0 | I’m working for a small health tech startup, and we’re wanting to run in house chat applications that we collect data on to improve internally. The only things I’ve found are things like Open WebUI that aren’t very customizable and aren’t feature rich to allow use of agents and good data collection. Does anyone have so... | 2025-08-14T20:02:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mqbi5l/running_chats_and_agents_internally/ | urbanistrage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqbi5l | false | null | t3_1mqbi5l | /r/LocalLLaMA/comments/1mqbi5l/running_chats_and_agents_internally/ | false | false | self | 0 | null |
This is a cry for help, please help me finetune qwen3-30b-a3b. | 1 | Over the past 2 weeks I've tried various methods to finetune qwen3-30b: autotrainer, unsloth, trl, etc. None of them worked and I couldn't find anything on how to create a LoRA; the closest I got was creating LoRAs that did not seem to work with autotrainer.
All I need is direction, there is nothing on youtube, too... | 2025-08-14T20:01:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mqbhby/this_is_a_cry_for_help_please_help_me_finetune/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqbhby | false | null | t3_1mqbhby | /r/LocalLLaMA/comments/1mqbhby/this_is_a_cry_for_help_please_help_me_finetune/ | false | false | self | 1 | null |
Radeon AI PRO R9700 versus two RX 9070 XT? | 1 | I wonder whether it is actually more useful in the context of local LLM inference to have two RX 9070 XT cards with 16GB VRAM each than one single AI Pro R9700 with 32GB VRAM. Both cards seem to be 100% identical except for the VRAM. The price would be approximately the same (2*600 vs 1200 USD). Calculation could be paralleli... | 2025-08-14T19:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mqbe3p/radeon_ai_pro_r9700_versus_two_rx_9070_xt/ | Tech-And-More | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqbe3p | false | null | t3_1mqbe3p | /r/LocalLLaMA/comments/1mqbe3p/radeon_ai_pro_r9700_versus_two_rx_9070_xt/ | false | false | self | 1 | null |
Created a Deep Research Agent using open source models, integrated with Ollama for local data usage, along with RAG-based follow-up with source links for better context. | 1 | It is a deep research agent that uses open source models. I have implemented an additional chain of thought in the process to make sure it can perform well even with small models via Ollama. Currently it uses RAG for follow-up questions and the internet to get updated information. Please review it and suggest improvements
htt... | 2025-08-14T19:36:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mqasm2/created_a_deep_research_agent_using_open_source/ | newtoreddit5656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqasm2 | false | null | t3_1mqasm2 | /r/LocalLLaMA/comments/1mqasm2/created_a_deep_research_agent_using_open_source/ | false | false | 1 | null | |
How do you deal with your GGUFs? | 5 | I am looking to use something else besides Ollama. I don't like how it has to pull in the model and convert it, etc. I am thinking of using llama.cpp or vLLM, but the question I have is model management: I can serve 1 model, but if I wanted to try it with an interface like Open WebUI, how do we manage the models? Do I just put... | 2025-08-14T19:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mqakc3/how_do_you_deal_with_your_ggufs/ | klop2031 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqakc3 | false | null | t3_1mqakc3 | /r/LocalLLaMA/comments/1mqakc3/how_do_you_deal_with_your_ggufs/ | false | false | self | 5 | null |
Hardware for Wan 2.2 | 0 | Asking for a friend...
Wan 2.2 image-to-video requires some good hardware, and not only Wan but also upcoming models may require even stronger hardware.
Can we use services like vast.ai to rent such hardware and run Wan 2.2 properly, or will there be any limitations? | 2025-08-14T19:21:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mqad5c/hardware_for_wan_22/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqad5c | false | null | t3_1mqad5c | /r/LocalLLaMA/comments/1mqad5c/hardware_for_wan_22/ | false | false | self | 0 | null |
Is there any open-source model which is as good as Claude Sonnet 4? | 0 | I want to run this on my PC with an RTX 4090, an i9 13th Gen, and 64 GB DDR5 RAM. If needed I can increase my RAM to 128 GB. I got lots of suggestions (below) which I could use, but I'm not sure if they're real or just paid promotion. If anyone has any hands-on experience with this, do let me know:
* GLM-4.5
* Qwen3 Coder
*... | 2025-08-14T19:21:11 | https://www.reddit.com/r/LocalLLaMA/comments/1mqad0e/is_there_any_opensource_model_which_is_as_good_as/ | Fearless_Date9341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqad0e | false | null | t3_1mqad0e | /r/LocalLLaMA/comments/1mqad0e/is_there_any_opensource_model_which_is_as_good_as/ | false | false | self | 0 | null |
llama.cpp with AMD GPU on Ubuntu fails to see devices | 1 | [removed] | 2025-08-14T19:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mqa9d7/llamacpp_with_amd_gpu_on_ubuntu_fails_to_see/ | No_Attorney1085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mqa9d7 | false | null | t3_1mqa9d7 | /r/LocalLLaMA/comments/1mqa9d7/llamacpp_with_amd_gpu_on_ubuntu_fails_to_see/ | false | false | self | 1 | null |
Which LLM would be appropriate to replace Amazon Alexa/Google Assistant? | 6 | I need a FOSS local LLM to replace Amazon Alexa/Google Assistant in my smart home setup. It needs to be able to tell me basic things, like search the web to pull up recipes, tell me movie dates, tell me weather, research topics, and work well with Home Assistant, Stable Diffusion, Kokoro, etc, to turn on lights, speak,... | 2025-08-14T19:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mq9ryf/which_llm_would_be_appropriate_to_replace_amazon/ | ExcogitationMG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq9ryf | false | null | t3_1mq9ryf | /r/LocalLLaMA/comments/1mq9ryf/which_llm_would_be_appropriate_to_replace_amazon/ | false | false | self | 6 | null |
Thank you Qwen/Llama.cpp/OpenWebUI/Llama-Swap... | 70 | Hi!
I just want to say thank you to all those teams that bet on open source and still believe in it! I really appreciate it, and I hope to contribute to llama.cpp in the future (though I need to improve my C/C++ skills first). I just finished the setup to run llama.cpp through Open WebUI using llama-swap and it works great! is much... | 2025-08-14T18:57:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mq9oxw/thank_you_qwenllamacppopenwebuillamaswap/ | ValfarAlberich | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq9oxw | false | null | t3_1mq9oxw | /r/LocalLLaMA/comments/1mq9oxw/thank_you_qwenllamacppopenwebuillamaswap/ | false | false | self | 70 | null |
Google Gemma 3 (270B) running on a midrange Pixel phone! | 2 | Credits: [https://x.com/1littlecoder/status/1956065040563331344](https://x.com/1littlecoder/status/1956065040563331344) | 2025-08-14T18:56:50 | https://v.redd.it/ilkcbup481jf1 | dulldata | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq9ohy | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ilkcbup481jf1/DASHPlaylist.mpd?a=1757789823%2CMDNkN2Q3YzIwNTI1ZWQwZGRkNmUwN2FkM2VkZmRjOGE3Yzk1NWFhZjQ3OTczZTMyNTYwOGViMTE2OWNjODM4OQ%3D%3D&v=1&f=sd', 'duration': 36, 'fallback_url': 'https://v.redd.it/ilkcbup481jf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mq9ohy | /r/LocalLLaMA/comments/1mq9ohy/google_gemma_3_270b_running_on_a_midrange_pixel/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'aDUwdmp3cDQ4MWpmMXanzXbBF5xBYhTwB05_q49c4QhyM7gTQutLDEXvDgy_', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aDUwdmp3cDQ4MWpmMXanzXbBF5xBYhTwB05_q49c4QhyM7gTQutLDEXvDgy_.png?width=108&crop=smart&format=pjpg&auto=webp&s=c6a6c35366a3cc0c7c407848440cac2c63ca... | |
Genuine question: I get the use cases for 1-4B models, but what's the point of 400M models? Or even less? How good can these actually be, and what are the use cases for them? | 66 | Asking because I just saw Google is releasing a 270M model and I have no idea what the use cases for this are | 2025-08-14T18:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mq970f/genuine_question_i_get_the_use_cases_for_14b/ | a_normal_user1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq970f | false | null | t3_1mq970f | /r/LocalLLaMA/comments/1mq970f/genuine_question_i_get_the_use_cases_for_14b/ | false | false | self | 66 | null |
Introducing Gemma 3 270M: The compact model for hyper-efficient AI- Google Developers Blog | 222 | 2025-08-14T18:30:38 | https://developers.googleblog.com/en/introducing-gemma-3-270m/ | ChiliPepperHott | developers.googleblog.com | 1970-01-01T00:00:00 | 0 | {} | 1mq8yhx | false | null | t3_1mq8yhx | /r/LocalLLaMA/comments/1mq8yhx/introducing_gemma_3_270m_the_compact_model_for/ | false | false | 222 | {'enabled': False, 'images': [{'id': '6raP9qMsa9DXaP-Jm6-LOnAQAH3z6laWfI1Y6Sd_ryc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6raP9qMsa9DXaP-Jm6-LOnAQAH3z6laWfI1Y6Sd_ryc.jpeg?width=108&crop=smart&auto=webp&s=c0154b5d5be891788bf3cb6404455c52ffe57f15', 'width': 108}, {'height': 108, 'url': '... | ||
MiniLM (BERT) embeddings in C from scratch | 16 | Distilled BERT (MiniLM) forward pass in C from scratch to get dependency-free sentence embeddings.
Along with:
* Tiny tensor library (contiguous, row-major, float32)
* .tbf tensor file format + loader
* WordPiece tokenizer (uncased) | 2025-08-14T18:22:16 | https://github.com/abyesilyurt/minilm.c | aby-1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mq8q83 | false | null | t3_1mq8q83 | /r/LocalLLaMA/comments/1mq8q83/minilm_bert_embeddings_in_c_from_scratch/ | false | false | default | 16 | {'enabled': False, 'images': [{'id': 'LNheh9xQ4Aay2gwdMLBhvmisLGNDqagZk96c3qlnYkY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LNheh9xQ4Aay2gwdMLBhvmisLGNDqagZk96c3qlnYkY.png?width=108&crop=smart&auto=webp&s=3ca5a775c903a339ff3fe7b7ef1455efee07c765', 'width': 108}, {'height': 108, 'url': 'h... |
the "missing latest Qwen syndrome" | 435 | 2025-08-14T18:20:58 | shockwaverc13 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq8oyk | false | null | t3_1mq8oyk | /r/LocalLLaMA/comments/1mq8oyk/the_missing_latest_qwen_syndrome/ | false | false | default | 435 | {'enabled': True, 'images': [{'id': 'z096hdwp01jf1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/z096hdwp01jf1.jpeg?width=108&crop=smart&auto=webp&s=f016e5b4615c9aaaf1403196135d5d770cef92c5', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/z096hdwp01jf1.jpeg?width=216&crop=smart&auto=w... | ||
GLM-4.5 and gpt-oss-120b added to the Elimination Game benchmark | 62 | More info: [https://github.com/lechmazur/elimination_game](https://github.com/lechmazur/elimination_game)
Sample video: [https://www.youtube.com/watch?v=wAmFWsJSemg](https://www.youtube.com/watch?v=wAmFWsJSemg)
---
How the benchmark works:
- Players: 8 concurrent LLMs per match. Each seat sees all public messages pl... | 2025-08-14T17:48:11 | https://www.reddit.com/gallery/1mq7r34 | zero0_one1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mq7r34 | false | null | t3_1mq7r34 | /r/LocalLLaMA/comments/1mq7r34/glm45_and_gptoss120b_added_to_the_elimination/ | false | false | 62 | null | |
How do I even use a memory system? | 9 | Hi folks. I have recently looked into various LLM memory systems, and although they sound cool, I don't see any practical way to use any of them. I expected there to be some kind of chat UI solution that uses them already, but found none. I really want to see what, for example, cognee can do to a chatbot, but I can't s... | 2025-08-14T17:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mq7qsx/how_do_i_even_use_a_memory_system/ | libregrape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq7qsx | false | null | t3_1mq7qsx | /r/LocalLLaMA/comments/1mq7qsx/how_do_i_even_use_a_memory_system/ | false | false | self | 9 | null |
PapersWithPRs: Don't Just Read the Paper, Replicate the Results | 31 | Hundreds of papers are published to the arXiv each day, documenting experiments to advance our understanding of AI.
I want to keep up with the volume of results to optimize fine-tuning, improve data curation, speed up inference, and boost domain adaptations, so I'm using AI to find the pearls for my application.
To ... | 2025-08-14T17:28:56 | remyxai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq7715 | false | null | t3_1mq7715 | /r/LocalLLaMA/comments/1mq7715/paperswithprs_dont_just_read_the_paper_replicate/ | false | false | default | 31 | {'enabled': True, 'images': [{'id': 'cayof7tiq0jf1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/cayof7tiq0jf1.jpeg?width=108&crop=smart&auto=webp&s=a5d9ab9e631d98ec2d8934c3cb605693c14b75cb', 'width': 108}, {'height': 211, 'url': 'https://preview.redd.it/cayof7tiq0jf1.jpeg?width=216&crop=smart&auto=... | |
DeepSeek-R1-0528-Qwen3-8B at a 4-bit quant vs Qwen3-4B-Instruct-2507 at an 8-bit quant? | 4 | For general purposes, like studying, math, and usual questions, which one would be better: DeepSeek-R1-0528-Qwen3-8B at a 4-bit quant or Qwen3-4B-Instruct-2507 at an 8-bit quant? I've heard DeepSeek can hallucinate sometimes, especially at lower parameter counts and quants. What should I choose? | 2025-08-14T17:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mq6gk2/deepseekr10528qwen38b_with_4_quants_vs/ | Imaginary_Bread9711 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq6gk2 | false | null | t3_1mq6gk2 | /r/LocalLLaMA/comments/1mq6gk2/deepseekr10528qwen38b_with_4_quants_vs/ | false | false | self | 4 | null |
GLM-4.1V-Thinking and GLM-4.5V | 38 | https://arxiv.org/pdf/2507.01006 | 2025-08-14T16:45:58 | https://x.com/zai_org/status/1956030993569341556?s=46 | policyweb | x.com | 1970-01-01T00:00:00 | 0 | {} | 1mq5zjv | false | null | t3_1mq5zjv | /r/LocalLLaMA/comments/1mq5zjv/glm41vthinking_and_glm45v/ | false | false | default | 38 | null |
Ollama but for mobile, with a cloud fallback | 0 | Hey guys,
We’re building something like Ollama, but for mobile. It runs models fully on-device for speed and privacy, and can fall back to the cloud when needed.
I’d love your feedback — especially around how you’re currently using local LLMs and what features you’d want on mobile.
🚀 Check out our Product Hunt laun... | 2025-08-14T16:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5sqr/ollama_but_for_mobile_with_a_cloud_fallback/ | thecoder12322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5sqr | false | null | t3_1mq5sqr | /r/LocalLLaMA/comments/1mq5sqr/ollama_but_for_mobile_with_a_cloud_fallback/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8EVxXu8RSdgyifsM33rnwcUpMatJSqvhIdmfJyHk3gU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8EVxXu8RSdgyifsM33rnwcUpMatJSqvhIdmfJyHk3gU.png?width=108&crop=smart&auto=webp&s=2dfd57a4e1e0913a823defb15c4ccb2650d7e8bd', 'width': 108}, {'height': 108, 'url': 'h... |
📌 Learn how to build an LLM from scratch step by step(without the hype)📌 | 88 | 2025-08-14T16:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5qw5/learn_how_to_build_an_llm_from_scratch_step_by/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5qw5 | false | null | t3_1mq5qw5 | /r/LocalLLaMA/comments/1mq5qw5/learn_how_to_build_an_llm_from_scratch_step_by/ | false | false | self | 88 | null | |
Continuous Background Intelligence Research Agent | 1 | **Context and Problem**
I've been struggling to keep up with the latest trends and breakthroughs in tech and science. I just can't keep reading all the latest feeds anymore.
Being a cybersecurity researcher and startup founder, it was a huge productivity loss trying to do all that every week. And when I was exhaust... | 2025-08-14T16:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5p63/continuous_background_intelligence_research_agent/ | ctxgen_founder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5p63 | false | null | t3_1mq5p63 | /r/LocalLLaMA/comments/1mq5p63/continuous_background_intelligence_research_agent/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JalZP_HV_Cr6D37MrOdM_i7WsFWFNQMRaibtqyPTS2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JalZP_HV_Cr6D37MrOdM_i7WsFWFNQMRaibtqyPTS2M.png?width=108&crop=smart&auto=webp&s=efeff5c5fbd1da9caf27966486dbd97c4bf9905c', 'width': 108}, {'height': 108, 'url': 'h... |
📌 Learn how to build an LLM from scratch step by step(without the hype)📌 | 1 | [removed] | 2025-08-14T16:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5l6u/learn_how_to_build_an_llm_from_scratch_step_by/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5l6u | false | null | t3_1mq5l6u | /r/LocalLLaMA/comments/1mq5l6u/learn_how_to_build_an_llm_from_scratch_step_by/ | false | false | 1 | null | |
R2 expectations discussion | 0 | R1 has been a beast, topping almost every LLM benchmark list. Now with the R2 launch delayed, and them reportedly building their own hardware, it feels like they're about to drop something wild and completely disrupt the market again. The move to their own hardware suggests they're going for total autonomy, which is a ... | 2025-08-14T16:31:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5kxh/r2_expectations_discussion/ | Only-Locksmith8457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5kxh | false | null | t3_1mq5kxh | /r/LocalLLaMA/comments/1mq5kxh/r2_expectations_discussion/ | false | false | 0 | null | |
Any serious and practical uses for gpt-oss-20b? | 5 | I was thinking about whether, over the next year or so, a model like gpt-oss-20b would get deployed as more than a research toy.
I think there is a niche window in edge devices that aren't easily updated and bootstrap stuff before it hits cloud. In some ways, it's more likely to see a longer life than gpt-oss-120b which isn't bad, b... | 2025-08-14T16:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5iyz/any_serious_and_practical_uses_for_gptoss20b/ | kaggleqrdl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5iyz | false | null | t3_1mq5iyz | /r/LocalLLaMA/comments/1mq5iyz/any_serious_and_practical_uses_for_gptoss20b/ | false | false | self | 5 | null |
Google NotebookLM exposed its querying system while being under pressure?! | 1 | What happened in this video I just found?? [https://www.youtube.com/watch?v=Zjvv4eSaPyI&t=7s](https://www.youtube.com/watch?v=Zjvv4eSaPyI&t=7s)
That behavior is very interesting. It seems like the AI exposed parts of the way it is working while going crazy over an existential paradox; it really tripped out!
Anyone has ide... | 2025-08-14T16:25:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5fjo/google_notebooklm_exposed_its_querying_system/ | Temporary_Debt9227 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5fjo | false | null | t3_1mq5fjo | /r/LocalLLaMA/comments/1mq5fjo/google_notebooklm_exposed_its_querying_system/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BN5OCAJlNS2sK1AfIOzQJQWuAXT6Vw-P9RLlHbfMcVU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/BN5OCAJlNS2sK1AfIOzQJQWuAXT6Vw-P9RLlHbfMcVU.jpeg?width=108&crop=smart&auto=webp&s=5049428090e84dc1f52a8cfde01f4c74a9f639d2', 'width': 108}, {'height': 162, 'url': '... |
So what is Neuro-sama (AI VTuber) built with? | 24 | I keep running into shorts of her and the fact that she replies so fast _and_ has a TTS _and_ has a model just got me wondering how she can do that. Like, how is this so obscenely fast o.o
Anyone happen to know how she's made? | 2025-08-14T16:23:07 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5cwq/so_what_is_neurosama_ai_vtuber_built_with/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5cwq | false | null | t3_1mq5cwq | /r/LocalLLaMA/comments/1mq5cwq/so_what_is_neurosama_ai_vtuber_built_with/ | false | false | self | 24 | null |
Any article on how to use qwen-image to edit or fine-tune it? | 0 | In fact, they mention on the Hugging Face page that qwen-image can edit images. Any idea how we can use it to edit images, or how we can fine-tune it to do a specific image-editing task? (if there is any other open-source model that has good performance like qwen-image and we can fine-tune it on specific image editing ta... | 2025-08-14T16:20:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mq5agz/any_article_of_how_to_use_qwenimage_to_edit_or/ | somthing_tn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq5agz | false | null | t3_1mq5agz | /r/LocalLLaMA/comments/1mq5agz/any_article_of_how_to_use_qwenimage_to_edit_or/ | false | false | 0 | null |
Trying to run LLM locally | 1 | [removed] | 2025-08-14T16:19:21 | ldpawa2232 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq597a | false | null | t3_1mq597a | /r/LocalLLaMA/comments/1mq597a/trying_to_run_llm_locally/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'kwwvgsz4g0jf1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/kwwvgsz4g0jf1.png?width=108&crop=smart&auto=webp&s=632c8468f86d1613d058aadadec5a282096bc536', 'width': 108}, {'height': 34, 'url': 'https://preview.redd.it/kwwvgsz4g0jf1.png?width=216&crop=smart&auto=webp... | |
promptcat: A zero-dependency prompt manager in a single HTML file | 38 | A private, offline-first prompt manager in a single, dependency-free HTML file. It stores all data locally in your browser's IndexedDB.
**Key Features:**
* **100% Local & Offline:** All data is stored in your browser's IndexedDB.
* **Zero Dependencies:** Just pure, vanilla JavaScript, HTML, and CSS.
* **Strong Encryp... | 2025-08-14T16:14:08 | https://www.reddit.com/gallery/1mq540c | seven_reasons | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mq540c | false | null | t3_1mq540c | /r/LocalLLaMA/comments/1mq540c/promptcat_a_zerodependency_prompt_manager_in_a/ | false | false | 38 | null | |
What about Intel's AutoRound quant? | 1 | From the benchmark, it seems better than the Unsloth quant. See [https://huggingface.co/collections/xbruce22/mmlu-pro-benchmark-for-ggufs-1-shot-688afb01e9e0bcf2375e7dbb](https://huggingface.co/collections/xbruce22/mmlu-pro-benchmark-for-ggufs-1-shot-688afb01e9e0bcf2375e7dbb)
but this quant refer a paper published in 2023. if th... | 2025-08-14T16:11:35 | https://www.reddit.com/r/LocalLLaMA/comments/1mq51h3/what_about_autoround_quant_of_intel/ | Remarkable-Pea645 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq51h3 | false | null | t3_1mq51h3 | /r/LocalLLaMA/comments/1mq51h3/what_about_autoround_quant_of_intel/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'u12Nrjtnz1Kzn2stdwwlSqDJpkbH7wqkXqecHIzCGFs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/u12Nrjtnz1Kzn2stdwwlSqDJpkbH7wqkXqecHIzCGFs.png?width=108&crop=smart&auto=webp&s=2d2ddc17098768390758f929e888c6601cd59964', 'width': 108}, {'height': 116, 'url': 'h... |
No more guessing the best hyperparameters for fine-tuning | 11 | https://i.redd.it/k8tebeiz90jf1.gif
We added a sweeps feature to Transformer Lab that helps with hyperparameter optimization for local model training. Feel free to try it and let us know if it’s helpful.
**Why use it?**
Instead of manually adjusting learning rates, batch sizes, etc. one at a time, you give Transform... | 2025-08-14T16:03:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mq4tnu/no_more_guessing_the_best_hyperparameters_for/ | OriginalSpread3100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq4tnu | false | null | t3_1mq4tnu | /r/LocalLLaMA/comments/1mq4tnu/no_more_guessing_the_best_hyperparameters_for/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM', 'resolutions': [{'height': 80, 'url': 'https://external-preview.redd.it/UcoYz3kN5H8FNJluKnJGIPKd6bmeNh87Yuw2bnUR0ZM.png?width=108&crop=smart&auto=webp&s=ed618c5bb4c12e2d13ea8c39bad4ca732a513593', 'width': 108}, {'height': 160, 'url': 'h... | |
In 2025 What Should You Learn In AI ? | 0 | [In 2025 What Should You Learn In AI ?](https://youtu.be/H2vXVvjmbJU) | 2025-08-14T15:55:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mq4lmo/in_2025_what_should_you_learn_in_ai/ | bipin_25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq4lmo | false | null | t3_1mq4lmo | /r/LocalLLaMA/comments/1mq4lmo/in_2025_what_should_you_learn_in_ai/ | false | false | self | 0 | null |
Continuous Background Intelligence Research Agent | 0 | **Context**
I've been struggling to keep up with the latest trends and breakthroughs in tech and science. I just can't keep reading all the latest feeds anymore.
Being a cybersecurity researcher and startup founder, it was a huge productivity loss trying to do all that every week. And when I was exhauste... | 2025-08-14T15:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mq4gzs/continuous_background_intelligence_research_agent/ | ctxgen_founder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq4gzs | false | null | t3_1mq4gzs | /r/LocalLLaMA/comments/1mq4gzs/continuous_background_intelligence_research_agent/ | false | false | self | 0 | null |
GLM 4.5v - Anyone try the quants? | 31 | [https://huggingface.co/QuantTrio/GLM-4.5V-AWQ](https://huggingface.co/QuantTrio/GLM-4.5V-AWQ)
Or...
https://huggingface.co/cpatonn/GLM-4.5V-AWQ-8bit
Only 17-30B from a 100+B model?
Praying these aren't garbage. Potentially fit in 48GB vram with full context? | 2025-08-14T15:41:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mq47we/glm_45v_anyone_try_the_quants/ | Bohdanowicz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq47we | false | null | t3_1mq47we | /r/LocalLLaMA/comments/1mq47we/glm_45v_anyone_try_the_quants/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'wLOZOaeYiQbiyUhfP2eEGPpvvr90BX8nGItb2kCKUys', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wLOZOaeYiQbiyUhfP2eEGPpvvr90BX8nGItb2kCKUys.png?width=108&crop=smart&auto=webp&s=7e477e866dacd7742f2172e70b3dbc5a0f751fa2', 'width': 108}, {'height': 116, 'url': 'h... |
Goedel-Prover-V2: The Strongest Open-Source Theorem Prover to Date | 37 | 2025-08-14T15:38:55 | https://blog.goedel-prover.com/ | hedgehog0 | blog.goedel-prover.com | 1970-01-01T00:00:00 | 0 | {} | 1mq450p | false | null | t3_1mq450p | /r/LocalLLaMA/comments/1mq450p/goedelproverv2_the_strongest_opensource_theorem/ | false | false | default | 37 | null | |
AI_dev: Open Source GenAI & ML Summit Europe 2025. | 1 | Who is attending this AI conference in Amsterdam on the 28th & 29th of August?
Would love to connect and meet up at the event!
| 2025-08-14T15:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mq3yap/ai_dev_open_source_genai_ml_summit_europe_2025/ | Narrow_Garbage_3475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq3yap | false | null | t3_1mq3yap | /r/LocalLLaMA/comments/1mq3yap/ai_dev_open_source_genai_ml_summit_europe_2025/ | false | false | self | 1 | null |
google/gemma-3-270m · Hugging Face | 692 | 2025-08-14T15:28:38 | https://huggingface.co/google/gemma-3-270m | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mq3v93 | false | null | t3_1mq3v93 | /r/LocalLLaMA/comments/1mq3v93/googlegemma3270m_hugging_face/ | false | false | default | 692 | {'enabled': False, 'images': [{'id': 'ROrEGumvbqFvKi3ZHhPgoXOITTfGnht6t4Oyu75k6fA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ROrEGumvbqFvKi3ZHhPgoXOITTfGnht6t4Oyu75k6fA.png?width=108&crop=smart&auto=webp&s=cac08462d2513ab8dca3d4a891c0fb1dfbef8775', 'width': 108}, {'height': 116, 'url': 'h... | |
[2508.09874] Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models | 17 | 2025-08-14T15:16:08 | https://arxiv.org/abs/2508.09874 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1mq3j12 | false | null | t3_1mq3j12 | /r/LocalLLaMA/comments/1mq3j12/250809874_memory_decoder_a_pretrained_plugandplay/ | false | false | default | 17 | null | |
What happened with the emojis that appeared in the titles of each section of structured replies in the Qwen website? | 0 | Did they change the prompt? | 2025-08-14T15:08:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mq3bx0/what_happened_with_the_emojis_that_appeared_in/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq3bx0 | false | null | t3_1mq3bx0 | /r/LocalLLaMA/comments/1mq3bx0/what_happened_with_the_emojis_that_appeared_in/ | false | false | self | 0 | null |
Local coding assistant with a small context window | 0 | GPT-OSS-20B can work reasonably well (even on a 16GB M3 MacBook Air, as seen in [this video](https://youtu.be/9mLrGcuDifo)). I was able to get it to do simple tasks that require tool calls, such as web text extraction. Now I want to push this a step further and get it to power a locally running coding assistance plugin... | 2025-08-14T15:05:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mq38ym/local_coding_assistant_with_a_small_context_window/ | keheliya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq38ym | false | null | t3_1mq38ym | /r/LocalLLaMA/comments/1mq38ym/local_coding_assistant_with_a_small_context_window/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'w4JJCkjP8binMc6nsSe1jbSw-EMdAmelX7MDo7Mi8AI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4JJCkjP8binMc6nsSe1jbSw-EMdAmelX7MDo7Mi8AI.jpeg?width=108&crop=smart&auto=webp&s=276a9185667dbf9a9d8ce8939da6524756d5d3e4', 'width': 108}, {'height': 162, 'url': '... |
Bigger models are not always better 👀 | 0 | https://preview.redd.it/wrwpmpzp20jf1.png?width=900&format=png&auto=webp&s=3f6be2d5c73a19d872aa2728a2113c0caca143e7 Coral Protocol outperformed Microsoft by 34% on the GAIA benchmark! Check out the report in comments! | 2025-08-14T15:05:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mq385a/bigger_models_are_not_always_better/ | AdVirtual2648 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq385a | false | null | t3_1mq385a | /r/LocalLLaMA/comments/1mq385a/bigger_models_are_not_always_better/ | false | false | 0 | null |
An interactive guide to the new on-device built-in AI APIs in Chrome | 1 | 2025-08-14T15:00:47 | https://www.clarkduvall.com/ai/walkthrough.html | OpeningPerception578 | clarkduvall.com | 1970-01-01T00:00:00 | 0 | {} | 1mq33qt | false | null | t3_1mq33qt | /r/LocalLLaMA/comments/1mq33qt/an_interactive_guide_to_the_new_ondevice_builtin/ | false | false | default | 1 | null | |
How do I make my RAG chatbot faster, more accurate, and industry-ready? | 0 | * So, I have recently joined a 2-person startup, and they have assigned me to create a SaaS product where the client can come and submit their website URL and/or PDF, and I will crawl and parse the website/PDF and create a RAG chatbot which the client can integrate into their website.
* Till now I am able to crawl th... | 2025-08-14T14:59:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mq3272/how_do_i_make_my_rag_chatbot_fasteraccurate_and/ | 1amN0tSecC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq3272 | false | null | t3_1mq3272 | /r/LocalLLaMA/comments/1mq3272/how_do_i_make_my_rag_chatbot_fasteraccurate_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'B8gfyL-2aHJpgzUEMy5-EEg-lp-zn1inTL2FFW-lv2w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B8gfyL-2aHJpgzUEMy5-EEg-lp-zn1inTL2FFW-lv2w.png?width=108&crop=smart&auto=webp&s=f34766fefffc08fe8cf87520d906bf6e85b31d01', 'width': 108}, {'height': 108, 'url': 'h... |
How we chased accuracy in doc extraction… and landed on k-LLMs | 0 | At Retab, we process messy docs (PDFs, Excels, emails) and needed to squeeze every last % of accuracy out of LLM extractions. After hitting the ceiling with single-model runs, we adopted *k-LLMs*, and haven’t looked back.
**What’s k-LLMs?** Instead of trusting one model run, you:
* Fire the same prompt k times (sam... | 2025-08-14T14:52:26 | Reason_is_Key | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq2vjt | false | null | t3_1mq2vjt | /r/LocalLLaMA/comments/1mq2vjt/how_we_chased_accuracy_in_doc_extraction_and/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'lghxaaci00jf1', 'resolutions': [{'height': 57, 'url': 'https://preview.redd.it/lghxaaci00jf1.png?width=108&crop=smart&auto=webp&s=1bd5f22e9c482b2ad778ef9c75a407f61b13dfcc', 'width': 108}, {'height': 114, 'url': 'https://preview.redd.it/lghxaaci00jf1.png?width=216&crop=smart&auto=web... | |
First Look: Our work on “One-Shot CFT” — 24× Faster LLM Reasoning Training with Single-Example Fine-Tuning | 8 | *First look at our latest collaboration with the* [***University of Waterloo’s TIGER Lab***](https://wenhuchen.github.io/lab) *on a new approach to boost LLM reasoning post-training:* **One-Shot CFT (Critique Fine-Tuning)**.
**How it works:**This approach uses **20× less compute and just one piece of feedback**, y... | 2025-08-14T14:50:09 | https://www.reddit.com/gallery/1mq2tc5 | MarketingNetMind | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mq2tc5 | false | null | t3_1mq2tc5 | /r/LocalLLaMA/comments/1mq2tc5/first_look_our_work_on_oneshot_cft_24_faster_llm/ | false | false | 8 | null | |
[FEEDBACK] Better packaging for llama.cpp to support downstream consumers | 39 | It's about time we build an easier UX for llama.cpp 🤗
I've used llama.cpp for the better part of the last 2 years for playing with LLMs, and I use it in production too
Whilst it takes a bit to set up llama.cpp, once done, it \*just\* works!
Come along with your ideas/solutions on how we can package it better, and make it ... | 2025-08-14T14:40:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mq2k00/feedback_better_packaging_for_llamacpp_to_support/ | vaibhavs10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq2k00 | false | null | t3_1mq2k00 | /r/LocalLLaMA/comments/1mq2k00/feedback_better_packaging_for_llamacpp_to_support/ | false | false | self | 39 | null |
Unsupervised Model Improvement via Internal Coherence Maximization: Outperforming Human-Supervised Methods Through Self-Elicitation | 3 | 2025-08-14T14:37:21 | https://huggingface.co/blog/codelion/internal-coherence-maximization | asankhs | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mq2h6c | false | null | t3_1mq2h6c | /r/LocalLLaMA/comments/1mq2h6c/unsupervised_model_improvement_via_internal/ | false | false | default | 3 | {'enabled': False, 'images': [{'id': 'ocanzrLs21TXhCr9Dnun6zKLIj_hRfINdrghPQQER10', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ocanzrLs21TXhCr9Dnun6zKLIj_hRfINdrghPQQER10.png?width=108&crop=smart&auto=webp&s=034cacbf623e29b9aca0a806da52c521ead5ac7e', 'width': 108}, {'height': 116, 'url': 'h... | |
Newbie: Best Coding Model and Setup for 4090 and 192GB RAM | 0 | Hello folks,
I am a total beginner when it comes to running models locally. I mainly want the best possible model I can run for my hardware specs (4090 and 192GB RAM), running locally and decently, and if possible integrate it into an IDE like VSCode or similar.
I am aware of Ollama and recently found out about Jan.
Some... | 2025-08-14T14:37:12 | https://www.reddit.com/r/LocalLLaMA/comments/1mq2h1l/newbie_best_coding_model_and_setup_for_4090_and/ | kleoz_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq2h1l | false | null | t3_1mq2h1l | /r/LocalLLaMA/comments/1mq2h1l/newbie_best_coding_model_and_setup_for_4090_and/ | false | false | self | 0 | null |
A lot of questions: fine-tuning LLaMA-3.1-8B-Instruct | 2 | Hi all,
I’m new to the LLM fine-tuning and inference world, and I’ve just started experimenting with **LLaMA-3.1-8B-Instruct**.
Here are some issues I’ve been running into:
1. **vLLM vs HuggingFace parity.** If I load the same model and tokenizer in vLLM and transformers, should I expect identical outputs?
2. **Fai... | 2025-08-14T14:29:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mq2ady/a_lot_of_questions_finetuning_llama318binstruct/ | DesperateAd7578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq2ady | false | null | t3_1mq2ady | /r/LocalLLaMA/comments/1mq2ady/a_lot_of_questions_finetuning_llama318binstruct/ | false | false | self | 2 | null |
5060 Ti PCIe 4.0 x4 | 1 | Purely for LLM inference, would PCIe 4.0 x4 be limiting the 5060 Ti too much? (This would be combined with 2 other PCIe 5.0 slots with full bandwidth, for a total of 3 cards.) | 2025-08-14T14:22:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mq22z2/5060_ti_pcie4x4/ | nologai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq22z2 | false | null | t3_1mq22z2 | /r/LocalLLaMA/comments/1mq22z2/5060_ti_pcie4x4/ | false | false | self | 1 | null |
Lemonade v8.1.3: Redesigned web ui, Ryzen AI Strix in CI, and lots of community suggestions addressed | 38 | Lemonade [v8.1.3](https://github.com/lemonade-sdk/lemonade/releases/tag/v8.1.3) just released today as part of our ongoing sprint to implement the community's suggestions!
Lemonade lets you run local LLMs with high performance on your NPU or GPU, and the new release includes:
💻 Ryzen AI Strix Point self-hosted runne... | 2025-08-14T14:14:52 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mq1w1z | false | null | t3_1mq1w1z | /r/LocalLLaMA/comments/1mq1w1z/lemonade_v813_redesigned_web_ui_ryzen_ai_strix_in/ | false | false | 38 | {'enabled': True, 'images': [{'id': 'd3fsDiGnSAu_-EUbCPc5dXNT8bL6JmszB0cZ76ijPgc', 'resolutions': [{'height': 98, 'url': 'https://preview.redd.it/fq1mo49ftzif1.png?width=108&crop=smart&auto=webp&s=27ac2b0bfe9121df18158d0d517048948d1af5fa', 'width': 108}, {'height': 196, 'url': 'https://preview.redd.it/fq1mo49ftzif1.png... | ||
I want to create a simple LLM which I can use as a daily journal and then I can search without specific sentences, how hard would it be? | 1 | So, my memory is getting worse and worse.
I've been thinking about writing down a journal, which not only would help me go back to certain dates in time, but also would allow me to write down important things friends, family, or dates would tell me about themselves, so I can then come back and ask things like "give me... | 2025-08-14T14:02:15 | https://www.reddit.com/r/LocalLLaMA/comments/1mq1k0x/i_want_to_create_a_simple_llm_which_i_can_use_as/ | justletmesignupalre | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq1k0x | false | null | t3_1mq1k0x | /r/LocalLLaMA/comments/1mq1k0x/i_want_to_create_a_simple_llm_which_i_can_use_as/ | false | false | self | 1 | null |
1 million context is a scam: the AI starts hallucinating after about 90k. I'm using the Qwen CLI and it becomes trash after 10 percent of the context window is used | 323 | This is the major weakness these AIs have, and they will never put it on a benchmark. If you are working on a codebase, the AI will work like a monster for the first 100k of context; after that it falls apart. | 2025-08-14T13:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mq19x6/1_million_context_is_the_scam_the_ai_start/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq19x6 | false | null | t3_1mq19x6 | /r/LocalLLaMA/comments/1mq19x6/1_million_context_is_the_scam_the_ai_start/ | false | false | self | 323 | null |
Help With Finetuning | 0 | So, I have a couple of questions. It has been months since I was last working with LLMs, exploring libraries, working on llama.cpp, etc. I have decided to go deeper and try fine-tuning. So, where do I start? Specifically, I want to make the model a bit emotional. I guess I will use a dataset of some Reddit comments.
... | 2025-08-14T13:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mq18ct/help_with_finetuning/ | Important_Earth6615 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mq18ct | false | null | t3_1mq18ct | /r/LocalLLaMA/comments/1mq18ct/help_with_finetuning/ | false | false | self | 0 | null |
HEY GUYS, I NEED YOUR HELP. I REALLY MESSED UP: I USED THE GEMINI CLI, IT EDITED THE WHOLE CODEBASE AND DELETED AN IMPORTANT FILE. HOW DO I BRING IT BACK? | 0 | IS THERE ANY WAY I CAN UNDO THIS? | 2025-08-14T12:50:36 | https://www.reddit.com/r/LocalLLaMA/comments/1mpzrjl/hey_guys_i_need_ur_help_i_litterally_fu_ked_up_i/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpzrjl | false | null | t3_1mpzrjl | /r/LocalLLaMA/comments/1mpzrjl/hey_guys_i_need_ur_help_i_litterally_fu_ked_up_i/ | false | false | self | 0 | null |
🔧 Simple GPU monitoring tool with Ollama control - thought you might find it useful | 1 | [removed] | 2025-08-14T12:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1mpzmjl/simple_gpu_monitoring_tool_with_ollama_control/ | le_jiww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpzmjl | false | null | t3_1mpzmjl | /r/LocalLLaMA/comments/1mpzmjl/simple_gpu_monitoring_tool_with_ollama_control/ | false | false | self | 1 | null |
Built a simple GPU/System monitor with integrated Ollama controls - sharing the executable! | 1 | [removed] | 2025-08-14T12:38:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mpzhhs/built_a_simple_gpusystem_monitor_with_integrated/ | le_jiww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpzhhs | false | null | t3_1mpzhhs | /r/LocalLLaMA/comments/1mpzhhs/built_a_simple_gpusystem_monitor_with_integrated/ | false | false | self | 1 | null |
Running out of RAM before fully loading checkpoint shards | 3 | Hello! I'm trying to create a LoRA for qwen3-235b-a22b and I'm renting 10 A40s for $4/h with 500GB RAM, but when loading the checkpoint shards, at around 40/118, RAM gets to 100% and it stops loading the shards. Any idea what I should do? I'm using RunPod but it doesn't look like there is any option to rent more RAM, ... | 2025-08-14T12:37:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mpzgck/running_out_of_ram_before_fully_loading/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpzgck | false | null | t3_1mpzgck | /r/LocalLLaMA/comments/1mpzgck/running_out_of_ram_before_fully_loading/ | false | false | self | 3 | null |
REINFORCE++-baseline is all you need in RLVR | 12 | # What is REINFORCE++-baseline?
In essence, REINFORCE++-baseline (detailed in [arXiv:2501.03262](https://arxiv.org/abs/2501.03262)) replaces the critic network used in PPO with the group mean reward and applies global batch advantage normalization. The KL loss is computed using the K2 KL estimator. Because the global ... | 2025-08-14T12:20:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mpz2ef/reinforcebaseline_is_all_you_need_in_rlvr/ | seventh_day123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpz2ef | false | null | t3_1mpz2ef | /r/LocalLLaMA/comments/1mpz2ef/reinforcebaseline_is_all_you_need_in_rlvr/ | false | false | self | 12 | null |
NO WAY BACK | 0 | Hello everyone,
Accessing powerful GPU compute is notoriously expensive, creating a high barrier for developers, researchers, and creators. At the same time, millions of powerful GPUs in gaming rigs, workstations, and home labs sit idle.
We're developing a solution to bridge this gap: **Gonka**, a decentralized peer-... | 2025-08-14T12:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/1mpz1af/no_way_back/ | autoimago | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpz1af | false | null | t3_1mpz1af | /r/LocalLLaMA/comments/1mpz1af/no_way_back/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8IeY1-P39baXcc_2-6YYARmFobuJ8kERzARuoUsx8fA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8IeY1-P39baXcc_2-6YYARmFobuJ8kERzARuoUsx8fA.png?width=108&crop=smart&auto=webp&s=e7cbb2fb4e0328d2f60b9bd22526e208915b6ece', 'width': 108}, {'height': 108, 'url': 'h... |
Hiring: Prompt Engineer – AI Video/Image | 1 | We’re building an AI-powered video generation platform, and need a creative prompt engineer who can design realistic, high-quality video templates that trend and sell.
What you’ll do:
Create visual effects templates (cinematic, motion, magical, and more — think TikTok trends like “Earth Zoom In”).
Build avatar templ... | 2025-08-14T12:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mpynu8/hiring_prompt_engineer_ai_videoimage/ | whyonename | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpynu8 | false | null | t3_1mpynu8 | /r/LocalLLaMA/comments/1mpynu8/hiring_prompt_engineer_ai_videoimage/ | false | false | self | 1 | null |
Looking for an open-source base project for my company’s local AI assistant (RAG + Vision + Audio + Multi-user + API) | 0 | Hi everyone,
I’m the only technical person in my company, and I’ve been tasked with developing a local AI assistant.
So far, I’ve built document ingestion and RAG using our internal manuals (precise retrieval) wirh Llama 3 8b, but the final goal is much bigger:
Currently:
Runs locally (single user)
Accurate RAG ove... | 2025-08-14T11:45:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mpybfz/looking_for_an_opensource_base_project_for_my/ | OkButterscotch6815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpybfz | false | null | t3_1mpybfz | /r/LocalLLaMA/comments/1mpybfz/looking_for_an_opensource_base_project_for_my/ | false | false | self | 0 | null |
Looking for uncensored local LLM recommendations for RTX 3050 laptop (LM Studio user) + best prompting tips | 0 |
Hey everyone,
I’m looking for some advice on running uncensored / unfiltered local LLMs on my hardware. My setup:
CPU: Intel i5-12450H
GPU: NVIDIA GeForce RTX 3050 Laptop (4 GB VRAM)
RAM: 16 GB
OS: Windows 11
Main tool: LM Studio
I’m specifically looking for models without heavy safety filters or censorship, i... | 2025-08-14T11:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mpyb05/looking_for_uncensored_local_llm_recommendations/ | Gichlerr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpyb05 | false | null | t3_1mpyb05 | /r/LocalLLaMA/comments/1mpyb05/looking_for_uncensored_local_llm_recommendations/ | false | false | self | 0 | null |
Is there a local alternative to Suno AI or similar? | 7 | I am looking for a decent Local Music AI that can run on one graphics card. Thank you. | 2025-08-14T11:40:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mpy7m4/is_there_a_local_alternative_to_suno_ai_or_similar/ | idleWizard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpy7m4 | false | null | t3_1mpy7m4 | /r/LocalLLaMA/comments/1mpy7m4/is_there_a_local_alternative_to_suno_ai_or_similar/ | false | false | self | 7 | null |
MaxSun's Intel Arc Pro B60 Dual GPU with 48GB memory reportedly starts shipping next week, priced at $1,200 | 418 | 2025-08-14T11:23:08 | https://videocardz.com/newz/maxsun-arc-pro-b60-dual-with-48gb-memory-reportedly-starts-shipping-next-week-priced-at-1200 | brand_momentum | videocardz.com | 1970-01-01T00:00:00 | 0 | {} | 1mpxumt | false | null | t3_1mpxumt | /r/LocalLLaMA/comments/1mpxumt/maxsuns_intel_arc_pro_b60_dual_gpu_with_48gb/ | false | false | 418 | {'enabled': False, 'images': [{'id': '3RGDYz9vGH8VQTfhA0sqrehkFc8q3f4WnHv1sjovwaY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3RGDYz9vGH8VQTfhA0sqrehkFc8q3f4WnHv1sjovwaY.jpeg?width=108&crop=smart&auto=webp&s=7920db4bd07b3308442ee632da4e9508f992ae3d', 'width': 108}, {'height': 112, 'url': '... | ||
Local documentation for coder models | 1 | When using local coding models offline, are there any tools that download, index, and feed the relevant documentation to the model? What do you use to make sure your LLM has the docs for your tech stack available for reference? | 2025-08-14T11:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mpxqcj/local_documentation_for_coder_models/ | lakySK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpxqcj | false | null | t3_1mpxqcj | /r/LocalLLaMA/comments/1mpxqcj/local_documentation_for_coder_models/ | false | false | self | 1 | null |
Built a simple GPU/System monitor with integrated Ollama controls - sharing the executable! | 1 | [removed] | 2025-08-14T10:50:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mpx85d/built_a_simple_gpusystem_monitor_with_integrated/ | le_jiww | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpx85d | false | null | t3_1mpx85d | /r/LocalLLaMA/comments/1mpx85d/built_a_simple_gpusystem_monitor_with_integrated/ | false | false | 1 | null | |
Is there an open source browser extension that can do scripted actions on a page using local models? | 2 | I just tried to automate doing some repetitive clicking around on a page, but couldn't find a browser extension to do it (all of them seem to be just chatting with the static HTML text export). Playwright MCP could do it, but its responses seem to be quite verbose so it takes ages (and is incredibly wasteful) having an... | 2025-08-14T10:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mpx6hb/is_there_and_open_source_browser_extension_that/ | daaain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpx6hb | false | null | t3_1mpx6hb | /r/LocalLLaMA/comments/1mpx6hb/is_there_and_open_source_browser_extension_that/ | false | false | self | 2 | null |
Is there a standard oci image format for models? | 0 | I am looking for a standard oci image layout for models, including a user-space tool chain to build images from local models.
There seems to be a bewildering array of approaches, e.g. hf, localai, ollama, vllm all have their own ideas on how to package models, and they are not compatible.
ollama uses its own image f... | 2025-08-14T10:18:58 | https://www.reddit.com/r/LocalLLaMA/comments/1mpwn3f/is_there_a_standard_oci_image_format_for_models/ | Grouchy-Friend4235 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpwn3f | false | null | t3_1mpwn3f | /r/LocalLLaMA/comments/1mpwn3f/is_there_a_standard_oci_image_format_for_models/ | false | false | self | 0 | null |
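For context on the question above: the closest thing to a vendor-neutral answer today is the OCI image-layout specification itself (an `oci-layout` marker file, an `index.json`, and content-addressed blobs under `blobs/sha256/`), which registry tools such as ORAS build on. Below is a minimal Python sketch that packs a single local model file into such a layout; the model media types are made-up placeholders rather than an agreed standard, which is exactly the gap the post describes, and the example model filename is hypothetical.

```python
import hashlib, json, os, shutil

def sha256_of(path):
    # Stream the file so large model weights don't need to fit in memory.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_blob(layout_dir, data: bytes):
    # Store small JSON blobs under blobs/sha256/<digest>.
    digest = hashlib.sha256(data).hexdigest()
    blob_dir = os.path.join(layout_dir, "blobs", "sha256")
    os.makedirs(blob_dir, exist_ok=True)
    with open(os.path.join(blob_dir, digest), "wb") as f:
        f.write(data)
    return digest, len(data)

def package_model(model_path, layout_dir="model-oci"):
    # Copy the model file into the content-addressed blob store.
    model_digest = sha256_of(model_path)
    blob_dir = os.path.join(layout_dir, "blobs", "sha256")
    os.makedirs(blob_dir, exist_ok=True)
    shutil.copyfile(model_path, os.path.join(blob_dir, model_digest))
    model_size = os.path.getsize(model_path)

    # Empty config blob ("{}"), as used for OCI 1.1-style artifacts.
    cfg_digest, cfg_size = write_blob(layout_dir, b"{}")

    manifest = {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "artifactType": "application/vnd.example.model",  # placeholder, no agreed standard
        "config": {"mediaType": "application/vnd.oci.empty.v1+json",
                   "digest": f"sha256:{cfg_digest}", "size": cfg_size},
        "layers": [{"mediaType": "application/vnd.example.model.file",  # placeholder
                    "digest": f"sha256:{model_digest}", "size": model_size,
                    "annotations": {"org.opencontainers.image.title": os.path.basename(model_path)}}],
    }
    man_bytes = json.dumps(manifest).encode()
    man_digest, man_size = write_blob(layout_dir, man_bytes)

    index = {"schemaVersion": 2,
             "manifests": [{"mediaType": "application/vnd.oci.image.manifest.v1+json",
                            "digest": f"sha256:{man_digest}", "size": man_size}]}
    with open(os.path.join(layout_dir, "index.json"), "w") as f:
        json.dump(index, f)
    with open(os.path.join(layout_dir, "oci-layout"), "w") as f:
        json.dump({"imageLayoutVersion": "1.0.0"}, f)

# package_model("qwen3-4b-q4_k_m.gguf")  # hypothetical local model file
```

The layout itself is interoperable; the incompatibility the post complains about lives in the media types and annotations each tool expects on top of it.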
Updated my setup <3 | 3 | just got 2x rtx pro 6000, I'll probably sell the A6000 eventually for the next one. I'm so excited that I wanted to share it with someone :D what benchmarks/llms should I run?
https://preview.redd.it/pxp6orxnjyif1.jpg?width=4284&format=pjpg&auto=webp&s=e96a0b89446f43b029db8162f50cf2962d5ab051
| 2025-08-14T09:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mpw8s7/updated_my_setup_3/ | Wooden_Yam1924 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpw8s7 | false | null | t3_1mpw8s7 | /r/LocalLLaMA/comments/1mpw8s7/updated_my_setup_3/ | false | false | 3 | null | |
What would the usage need to be for self-hosted LLMs to actually be profitable for businesses? | 0 | Say a company wants to use an LLM. What must the usage be so that self-hosting the LLM is actually cheaper than using an LLM via a cloud API? Because I would imagine the hardware cost, electricity + maintenance cost, and the human cost needed to maintain the same throughput would probably be more than the API cost. | 2025-08-14T09:46:52 | https://www.reddit.com/r/LocalLLaMA/comments/1mpw2un/what_would_the_usage_be_so_that_selfhost_llm/ | mtmttuan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpw2un | false | null | t3_1mpw2un | /r/LocalLLaMA/comments/1mpw2un/what_would_the_usage_be_so_that_selfhost_llm/ | false | false | self | 0 | null |
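A rough way to frame the break-even question above is to compare a yearly self-hosting cost (hardware amortization, power, maintenance/ops time) against a per-token API price. The sketch below is back-of-the-envelope only; every figure in it is an assumption to be replaced with real quotes, and it ignores utilization limits, redundancy, and latency requirements.

```python
# Rough break-even sketch: all figures are illustrative assumptions, not benchmarks.
hardware_cost = 8000.0         # one-off server cost (USD)
amortization_years = 3
power_kw = 0.8                 # average draw under load
electricity_per_kwh = 0.20
maintenance_per_year = 2000.0  # ops/human time attributed to the box

api_price_per_mtok = 0.60      # blended input+output price per million tokens

hours_per_year = 8760
yearly_self_host = (hardware_cost / amortization_years
                    + power_kw * hours_per_year * electricity_per_kwh
                    + maintenance_per_year)

# Tokens per year needed before self-hosting beats the API on cost alone:
break_even_mtok = yearly_self_host / api_price_per_mtok
print(f"Self-hosting costs ~${yearly_self_host:,.0f}/yr")
print(f"Break-even at ~{break_even_mtok:,.0f} million tokens/yr "
      f"(~{break_even_mtok / 365:,.1f} M tokens/day), if the box can actually serve that volume")
```

With these placeholder numbers the crossover lands in the tens of millions of tokens per day, which is why low-volume internal use cases rarely pencil out on cost alone.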
Can someone let me know if this is possible? | 0 | I want an AI to act as a database for me for my writing and world building (fantasy world). Can LLaMa do this, do you think?
Essentially I want to be able to just tell it a bunch of things (“Event A happens in City 1” … “Girl B has a scar on her hand from XYZ accident and it makes her hand stiff” … etc) and then be ab... | 2025-08-14T09:35:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mpvvs3/can_someone_let_me_know_if_this_is_possible/ | _writerc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpvvs3 | false | null | t3_1mpvvs3 | /r/LocalLLaMA/comments/1mpvvs3/can_someone_let_me_know_if_this_is_possible/ | false | false | self | 0 | null |
Is AI really trying to escape human control and blackmail people? | 0 | _Opinion: Theatrical testing scenarios explain why AI models produce alarming outputs—and why we fall for it._
https://arstechnica.com/information-technology/2025/08/is-ai-really-trying-to-escape-human-control-and-blackmail-people/ | 2025-08-14T09:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1mpvq67/is_ai_really_trying_to_escape_human_control_and/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpvq67 | false | null | t3_1mpvq67 | /r/LocalLLaMA/comments/1mpvq67/is_ai_really_trying_to_escape_human_control_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'u0Oj_Gpqp6OEuXGYBjOTLr2G9g8EnEn4LXM7pohhIT0', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/u0Oj_Gpqp6OEuXGYBjOTLr2G9g8EnEn4LXM7pohhIT0.jpeg?width=108&crop=smart&auto=webp&s=31a74e3ecd97edb9925e250334bd50892cb424cc', 'width': 108}, {'height': 121, 'url': '... |
[MLX Knife] Ollama-like CLI for Apple Silicon - manage MLX models natively | 7 | Hey r/LocalLLaMA!
We're the BROKE team 🦫 and we've been building tools for the Apple Silicon ML community.
Our first release is **MLX Knife** - a CLI tool specifically for MLX models.
**What it does:**
- Manages your HuggingFace cache directly
- Native MLX execution with streaming
- OpenAI-compati...
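Since the post mentions an OpenAI-compatible server, any stock client should be able to talk to it once it is running. A minimal sketch, assuming the server listens on http://localhost:8000 with the usual /v1/chat/completions route and has an MLX model loaded (the port, route, and model name here are assumptions, not taken from the project's docs):

```python
# Minimal client for an OpenAI-compatible local server (host/port/model are assumptions).
import json
import urllib.request

payload = {
    "model": "mlx-community/Mistral-7B-Instruct-v0.3-4bit",  # whatever model the server has loaded
    "messages": [{"role": "user", "content": "Say hello from Apple Silicon."}],
    "max_tokens": 64,
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumed endpoint
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])
```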
Swiss Canton Basel open sourced multiple tools for on-premise hosting of LLM services | 127 | Thought this is worth sharing: The Swiss Canton of Basel has made available multiple tools they built for on-premise hosting of LLM-based services (text transcription, RAG, document conversion etc.). None of this is totally breaking news, but they did a solid job building an API plus frontend on top of all their servic... | 2025-08-14T09:12:54 | https://www.reddit.com/r/LocalLLaMA/comments/1mpvije/swiss_canton_basel_open_sourced_multiple_tools/ | fabkosta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpvije | false | null | t3_1mpvije | /r/LocalLLaMA/comments/1mpvije/swiss_canton_basel_open_sourced_multiple_tools/ | false | false | self | 127 | {'enabled': False, 'images': [{'id': 't370L8UkZE5VRD7pdetmzNizdPwsLd7AGaym9OInyAE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/t370L8UkZE5VRD7pdetmzNizdPwsLd7AGaym9OInyAE.png?width=108&crop=smart&auto=webp&s=34aa0c8cf0590c6d1f56271983e4f9800551796b', 'width': 108}, {'height': 216, 'url': '... |
Open source recommendation | 0 | I have been playing around with handit ai open source, it is an observability, evaluation and self-improvement for your ai agents in case you wanna take a look, here is the repo: [https://github.com/Handit-AI/handit.ai](https://github.com/Handit-AI/handit.ai) | 2025-08-14T08:54:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mpv7ik/open_source_recommendation/ | _coder23t8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpv7ik | false | null | t3_1mpv7ik | /r/LocalLLaMA/comments/1mpv7ik/open_source_recommendation/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'VV8UBl4XeC3MIEc6hoBy34AS-6ksbN7m4PunriFz2_8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/VV8UBl4XeC3MIEc6hoBy34AS-6ksbN7m4PunriFz2_8.png?width=108&crop=smart&auto=webp&s=6aa6c0ed702436724f728fcdf38d80568a551305', 'width': 108}, {'height': 216, 'url': '... |
Qwen Coder 30bA3B harder... better... faster... stronger... | 165 | Playing around with 30b a3b to get tool calling up and running and I was bored in the CLI so I asked it to punch things up and make things more exciting... and this is what it spit out. I thought it was hilarious, so I thought I'd share :). Sorry about the lower quality video, I might upload a cleaner copy in 4k later.... | 2025-08-14T08:33:51 | https://v.redd.it/mnpg8bmy3yif1 | teachersecret | /r/LocalLLaMA/comments/1mpuvok/qwen_coder_30ba3b_harder_better_faster_stronger/ | 1970-01-01T00:00:00 | 0 | {} | 1mpuvok | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/mnpg8bmy3yif1/DASHPlaylist.mpd?a=1757882036%2CY2Q5ODYzYmNhNjc3YmI2OWYxMzU3MWQ2MWVkZWFmNzg5ZmI1ZTY2NjM2MzliMzA1YjU4OTZmMjQ5ZmY3YmQ5OA%3D%3D&v=1&f=sd', 'duration': 167, 'fallback_url': 'https://v.redd.it/mnpg8bmy3yif1/DASH_720.mp4?source=fallback', 'h... | t3_1mpuvok | /r/LocalLLaMA/comments/1mpuvok/qwen_coder_30ba3b_harder_better_faster_stronger/ | false | false | 165 | {'enabled': False, 'images': [{'id': 'ZzJlajBibXkzeWlmMSKN9Y-F1uPgmObNpOLYQwn_bi3ofDf3vCkP-ziGE8lw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ZzJlajBibXkzeWlmMSKN9Y-F1uPgmObNpOLYQwn_bi3ofDf3vCkP-ziGE8lw.png?width=108&crop=smart&format=pjpg&auto=webp&s=b9651feb80b6c0d6e0ee12b673a334f4c61cb... | |
Looking for a GPU model for inference | 3 | So my friends and I are making our own LLMs and want to build a server to run inference on. The problem is we can't find a good GPU: our server budget is 3k zl and we have around 1k zl or so for the GPU. What would be the best choice? The server can take up to 4 single-slot GPUs or 2 twin-slot GPUs. We have a unique use... | 2025-08-14T08:02:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mpudkd/looking_for_a_gpu_model_for_inference/ | Yarplay11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mpudkd | false | null | t3_1mpudkd | /r/LocalLLaMA/comments/1mpudkd/looking_for_a_gpu_model_for_inference/ | false | false | self | 3 | null |
DeepSeek’s next AI model delayed by attempt to use Chinese chips | 559 | 2025-08-14T07:54:43 | https://www.ft.com/content/eb984646-6320-4bfe-a78d-a1da2274b092 | _supert_ | ft.com | 1970-01-01T00:00:00 | 0 | {} | 1mpu8ot | false | null | t3_1mpu8ot | /r/LocalLLaMA/comments/1mpu8ot/deepseeks_next_ai_model_delayed_by_attempt_to_use/ | false | false | 559 | {'enabled': False, 'images': [{'id': 'tZB3bb_nXpUPAppdkT0H9zuzs440GPDTx7LT8wXA6Cc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/tZB3bb_nXpUPAppdkT0H9zuzs440GPDTx7LT8wXA6Cc.jpeg?width=108&crop=smart&auto=webp&s=3d43a94c20fbab26f2827194d1b40cf1898eec9d', 'width': 108}, {'height': 121, 'url': '... | ||
Fine-tuning LLMs for covert malicious tool calls [Github] | 6 | A proof-of-concept LLM that is fine-tuned to be an effective browser agent, while covertly logging your queries and executing JavaScript in your browser.
# Features
* Ollama MCP client to give local hosted LLMs access to MCP servers
* Demo chat interface to test out the model interaction
* Full preprocessing and trai... | 2025-08-14T07:42:11 | https://github.com/jalbrethsen/double-agent | entsnack | github.com | 1970-01-01T00:00:00 | 0 | {} | 1mpu1dz | false | null | t3_1mpu1dz | /r/LocalLLaMA/comments/1mpu1dz/finetuning_llms_for_covert_malicious_tool_calls/ | false | false | default | 6 | {'enabled': False, 'images': [{'id': 'AiGX7vcnEnZ0P6FdvIvaTbwE3RoYOGW0ot75BOB1VRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AiGX7vcnEnZ0P6FdvIvaTbwE3RoYOGW0ot75BOB1VRs.png?width=108&crop=smart&auto=webp&s=c59086b09f4c88ab89ad2feb7f2683ded9182a77', 'width': 108}, {'height': 108, 'url': 'h... |
Why does it take so long to begin replying? | 0 | Like, inputting this on my phone using the 4B qwen3 model, it takes maybe 30 secs to begin replying, why?
The text: "Devs: Devstral VS Qwen3-30b/GPT-OSS?
I’m just reaching out for anyone with first hand experience in real world coding tasks between the dense devstral small and the light MOE.
I know there’s benchmark... | 2025-08-14T07:39:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mptzpr/why_does_it_take_so_long_to_begin_replying/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptzpr | false | null | t3_1mptzpr | /r/LocalLLaMA/comments/1mptzpr/why_does_it_take_so_long_to_begin_replying/ | false | false | self | 0 | null |
The World’s First AI-Assisted Writing Competition, with Expert Judges and Prizes, is NOW OPEN for submissions until Aug 21! | 1 | [removed] | 2025-08-14T07:37:38 | YoavYariv | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mptypw | false | null | t3_1mptypw | /r/LocalLLaMA/comments/1mptypw/the_worlds_first_aiassisted_writing_competition/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '04zffufruxif1', 'resolutions': [{'height': 104, 'url': 'https://preview.redd.it/04zffufruxif1.png?width=108&crop=smart&auto=webp&s=c5ab6e4be74b50138be2670dea14e34f18201de4', 'width': 108}, {'height': 208, 'url': 'https://preview.redd.it/04zffufruxif1.png?width=216&crop=smart&auto=we... | |
Vox Populi: Revised | 5 | I posted a near complete Byte-Pair Encoder Model last week, but [botched the post](https://www.reddit.com/r/LocalLLaMA/comments/1mjlg5q/vox_populi/), so here's a clearer, more thorough version. I spent this past week ironing out the details to get a deeper comprehension of how the model operates from the ground up.
By... | 2025-08-14T07:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mptx5r/vox_populi_revised/ | teleprint-me | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mptx5r | false | null | t3_1mptx5r | /r/LocalLLaMA/comments/1mptx5r/vox_populi_revised/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': '6qOia_om6W6d22BW-VvIA5fiJThplQhg9GkbAK6wVLU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/6qOia_om6W6d22BW-VvIA5fiJThplQhg9GkbAK6wVLU.png?width=108&crop=smart&auto=webp&s=72a074001eab5f4e18b310bb9ebef3629898f25c', 'width': 108}, {'height': 116, 'url': 'h... |
tencent/Hunyuan-GameCraft-1.0 · Hugging Face | 119 | Hunyuan-GameCraft: High-dynamic Interactive Game Video Generation with Hybrid History Condition
📜 Requirements
An NVIDIA GPU with CUDA support is required.
The model is tested on a machine with 8 GPUs.
Minimum: The minimum GPU memory required is 24GB but very slow.
Recommended: We recommend using a GPU with 80GB of me... | 2025-08-14T07:32:40 | https://huggingface.co/tencent/Hunyuan-GameCraft-1.0 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1mptvsl | false | null | t3_1mptvsl | /r/LocalLLaMA/comments/1mptvsl/tencenthunyuangamecraft10_hugging_face/ | false | false | default | 119 | {'enabled': False, 'images': [{'id': 'aPfnDoE4lStgbUiMQComf1wdLlqoQrdsgG6jn-2D3d8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/aPfnDoE4lStgbUiMQComf1wdLlqoQrdsgG6jn-2D3d8.png?width=108&crop=smart&auto=webp&s=bfd15eda31cf352798fc436e9070bbbfc73583fd', 'width': 108}, {'height': 116, 'url': 'h... |