| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best Local LLM for Code assist similar to Sonnet 4? | 6 | Trying to sort through and host my own LLM, but I really don't know how these might compare. I'm using Void as the IDE and trying to replicate pretty closely what Cursor offers.
Is it even comparable?
https://preview.redd.it/tv72kg9xipbf1.png?width=628&format=png&auto=webp&s=ee67695b0a353a8b0805fc8b111ba81fe0d36fd6
... | 2025-07-08T20:06:35 | https://www.reddit.com/r/LocalLLaMA/comments/1luytx2/best_local_llm_for_code_assist_similar_to_sonnet_4/ | Kainzo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luytx2 | false | null | t3_1luytx2 | /r/LocalLLaMA/comments/1luytx2/best_local_llm_for_code_assist_similar_to_sonnet_4/ | false | false | 6 | null | |
LLM to answer someone's contacts as themselves ? | 1 | Hi,
Is there a self-hostable LLM specialized in answering someone's texts as if it were the actual person responding, e.g. to people who make you lose time when asking something from you (see [nohello.net](https://nohello.net/)) and people who are just too much (see [Silicon Valley S06E01](https://bw.artemislena.eu/si... | 2025-07-08T19:53:10 | https://www.reddit.com/r/LocalLLaMA/comments/1luyhi9/llm_to_answer_someones_contacts_as_themselves/ | KaKi_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luyhi9 | false | null | t3_1luyhi9 | /r/LocalLLaMA/comments/1luyhi9/llm_to_answer_someones_contacts_as_themselves/ | false | false | self | 1 | null |
Has there been research into prompting strategies for models? | 2 | I am curious to understand if models tend to perform better if you treat it like a tin-can, structuring the prompt as imperative commands. I suspect it would perform better with STEM-type tasks since those are how many of the problems in open datasets are written. But what about non-STEM tasks like creative writing or ... | 2025-07-08T19:48:15 | https://www.reddit.com/r/LocalLLaMA/comments/1luycyq/has_there_been_research_into_prompting_strategies/ | TheRealMasonMac | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luycyq | false | null | t3_1luycyq | /r/LocalLLaMA/comments/1luycyq/has_there_been_research_into_prompting_strategies/ | false | false | self | 2 | null |
LLM Hallucination Detection Leaderboard for both RAG and Chat | 13 | does this track with your experiences? | 2025-07-08T19:46:43 | https://huggingface.co/spaces/kluster-ai/LLM-Hallucination-Detection-Leaderboard | cakesir | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1luybka | false | null | t3_1luybka | /r/LocalLLaMA/comments/1luybka/llm_hallucination_detection_leaderboard_for_both/ | false | false | default | 13 | {'enabled': False, 'images': [{'id': 'ivBG2cnyJkFTv2OERYiecz9C9knlSS7GfSBDDNC5kNs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ivBG2cnyJkFTv2OERYiecz9C9knlSS7GfSBDDNC5kNs.png?width=108&crop=smart&auto=webp&s=4e77c6a3e5ceaf4ca04c01c56574a179359a460b', 'width': 108}, {'height': 116, 'url': 'h... |
All LLMs Hallucinate: LLM Hallucination Detection Leaderboard | 1 | [deleted] | 2025-07-08T19:45:24 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1luyadl | false | null | t3_1luyadl | /r/LocalLLaMA/comments/1luyadl/all_llms_hallucinate_llm_hallucination_detection/ | false | false | default | 1 | null | ||
Comparing LLM Providers Throughput | 7 | We were not able to find an LLM provider benchmark that focuses on throughput, so we've built our own. [ArtificialAnalysis](https://artificialanalysis.ai/leaderboards/providers) only tests up to 10 RPS, which is too low for most applications. Additionally, it's not open-source, so it doesn’t help much if you’re self-ho... | 2025-07-08T19:41:43 | https://www.reddit.com/r/LocalLLaMA/comments/1luy711/comparing_llm_providers_throughput/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luy711 | false | null | t3_1luy711 | /r/LocalLLaMA/comments/1luy711/comparing_llm_providers_throughput/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... |
SmolLM3 has day-0 support in MistralRS! | 67 | It's a **SoTA 3B model** with hybrid **reasoning** and **128k context**.
Hits ⚡105 T/s with AFQ4 @ M3 Max.
Link: [https://github.com/EricLBuehler/mistral.rs](https://github.com/EricLBuehler/mistral.rs)
Using MistralRS means that you get
* Builtin MCP client
* OpenAI HTTP server
* Python & Rust APIs
* Full multim... | 2025-07-08T19:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/1luy32e/smollm3_has_day0_support_in_mistralrs/ | EricBuehler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luy32e | false | null | t3_1luy32e | /r/LocalLLaMA/comments/1luy32e/smollm3_has_day0_support_in_mistralrs/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'xVhA5yH40HillcQUZwf_X5-W7LKE47r-_8EECTemH4o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xVhA5yH40HillcQUZwf_X5-W7LKE47r-_8EECTemH4o.png?width=108&crop=smart&auto=webp&s=7d1c0172fdd6ce289863642a0c9276ecd54ad822', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone tried ERNIE-4.5-21B-A3B? | 40 | Anyone tried ERNIE-4.5-21B-A3B? How does it compare to Qwen3-30B-A3B? | 2025-07-08T19:27:45 | https://www.reddit.com/r/LocalLLaMA/comments/1luxu6s/any_one_tried_ernie4521ba3b/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luxu6s | false | null | t3_1luxu6s | /r/LocalLLaMA/comments/1luxu6s/any_one_tried_ernie4521ba3b/ | false | false | self | 40 | null |
9950x3d+nvidia cuda+integrated gpu on vulkan, is it possible? | 1 | Hi,
I have an 9950x3d+rtx3090.
I am doing experiments on llama.cpp
Is it possible to use CUDA on the RTX 3090 and Vulkan on the iGPU together with llama.cpp?
Thanks,
Mario | 2025-07-08T19:27:31 | https://www.reddit.com/r/LocalLLaMA/comments/1luxtzk/9950x3dnvidia_cudaintegrated_gpu_on_vulkan_is_it/ | mgiammarco | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luxtzk | false | null | t3_1luxtzk | /r/LocalLLaMA/comments/1luxtzk/9950x3dnvidia_cudaintegrated_gpu_on_vulkan_is_it/ | false | false | self | 1 | null |
My Series of Cartoons | 0 | I produce a series of cartoons on YouTube. I create consistent pictures with GPT and video with VIDU AI, and broadcast on YouTube. But an episode of almost 2 minutes takes us a day; how can I produce consistently in less time? Which tools do you recommend? | 2025-07-08T19:17:37 | https://www.reddit.com/r/LocalLLaMA/comments/1luxkms/my_serires_of_cartoon/ | capitalistbear37 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luxkms | false | null | t3_1luxkms | /r/LocalLLaMA/comments/1luxkms/my_serires_of_cartoon/ | false | false | self | 0 | null |
How do I make my LLM talk dirty? | 0 | I was working on a product that includes an NSFW chatbot, but couldn't find any satisfactory method for producing a model that goes out of bounds; for fine-tuning I couldn't find much data to train on. Does anybody have experience in doing so? | 2025-07-08T19:05:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lux8ya/how_do_i_make_my_llm_talk_dirty/ | ILoveMy2Balls | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lux8ya | false | null | t3_1lux8ya | /r/LocalLLaMA/comments/1lux8ya/how_do_i_make_my_llm_talk_dirty/ | false | false | nsfw | 0 | null |
What are the differences between those Qwen3-30B-A3B versions? | 6 | Hello, I'm using a MacBook as a local server. When I searched for Qwen3-30B-A3B on Hugging Face, there were a bunch of MLX (4-bit) versions. Some of them are even from the same provider.
Which one should I use? Thank you!
[https://huggingface.co/Qwen/Qwen3-30B-A3B-MLX-4bit](https://huggingface.co/Qwen/Qwen3-30B-A3B-MLX-... | 2025-07-08T19:01:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lux5d5/whats_the_differences_between_thoes_qwen330ba3b/ | BreakfastFriendly728 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lux5d5 | false | null | t3_1lux5d5 | /r/LocalLLaMA/comments/1lux5d5/whats_the_differences_between_thoes_qwen330ba3b/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'GUbEWDf_I_rzpYURvoE5ztWrqL-aDjboifKoErqSKJQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/GUbEWDf_I_rzpYURvoE5ztWrqL-aDjboifKoErqSKJQ.png?width=108&crop=smart&auto=webp&s=4b4fbd2626910c43b13d270ceb9d19413dd4cf5f', 'width': 108}, {'height': 116, 'url': 'h... |
LM Studio is now free for use at work | 427 | It is great news for all of us, but at the same time, it will put a lot of pressure on other similar paid projects, like Msty, as in my opinion, LM Studio is one of the best AI front ends at the moment.
[LM Studio is free for use at work | LM Studio Blog](https://lmstudio.ai/blog/free-for-work) | 2025-07-08T18:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lux0q2/lm_studio_is_now_free_for_use_at_work/ | mtomas7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lux0q2 | false | null | t3_1lux0q2 | /r/LocalLLaMA/comments/1lux0q2/lm_studio_is_now_free_for_use_at_work/ | false | false | self | 427 | {'enabled': False, 'images': [{'id': 'SuDY5sfZG1VpbWVeog-gTtG3kPGfhsfAsatkTWl8Lvs', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/SuDY5sfZG1VpbWVeog-gTtG3kPGfhsfAsatkTWl8Lvs.png?width=108&crop=smart&auto=webp&s=59744800bd5d1e6874dc4dccab87d0a697f6ded9', 'width': 108}, {'height': 113, 'url': 'h... |
MCP alternative tailored to local models | 2 | Hey folks,
I’m trying to wire up totally private, on-device models with MCP and keep running into the same headaches:
first, auth feels pointless—why am I minting API keys when nothing ever leaves my laptop?
second, streaming and error handling are way too fussy; all I really want is tokens over stdout and a clea... | 2025-07-08T18:54:15 | https://www.reddit.com/r/LocalLLaMA/comments/1luwyou/mcp_alternative_tailored_to_local_models/ | juanviera23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luwyou | false | null | t3_1luwyou | /r/LocalLLaMA/comments/1luwyou/mcp_alternative_tailored_to_local_models/ | false | false | self | 2 | null |
NSFW Model image analysis | 81 | Hey guys, like the title says I'm looking for a model or models I can use to send images to and discuss them. I want it to have support for NSFW content. I'd prefer a UI like oobabooga, but I've heard it has issues with this kind of stuff. Image generation is a plus but not needed. | 2025-07-08T18:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1luwtdr/nsfw_model_image_analysis/ | Technical_Whole_947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luwtdr | false | null | t3_1luwtdr | /r/LocalLLaMA/comments/1luwtdr/nsfw_model_image_analysis/ | false | false | nsfw | 81 | null |
E-commerce PDP : Quick way to extract variants using LLM? | 1 | [removed] | 2025-07-08T18:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1luws59/ecommerce_pdp_quick_way_to_extract_variants_using/ | Alchemistry-101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luws59 | false | null | t3_1luws59 | /r/LocalLLaMA/comments/1luws59/ecommerce_pdp_quick_way_to_extract_variants_using/ | false | false | self | 1 | null |
Most effective way to host LLM of over 20B params | 3 | Hello guys, we are trying to host an LLM like Gemma3 27B just to run some tests. What is the most effective way to do that, for any of you who have tried this? I've seen some solutions like Vast.ai and some AWS hacks to reduce the cost (but still not that effective for our use case) | 2025-07-08T18:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1luwgkn/most_effective_way_to_host_llm_of_over_20b_params/ | drafat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luwgkn | false | null | t3_1luwgkn | /r/LocalLLaMA/comments/1luwgkn/most_effective_way_to_host_llm_of_over_20b_params/ | false | false | self | 3 | null |
Help Needed: Building a PC. | 3 | Hi everyone,
I'm new to the world of AI tools and local models, but I am planning to build a PC for this purpose. I’ll mainly be using it for running tools like Stable Diffusion, ComfyUI, Wen, Flux, and may be some local LLMs.
Due to budget constraints, the **only fixed part of my build is the RTX 3090 (24GB)**. I ch... | 2025-07-08T18:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1luwa98/help_needed_building_a_pc/ | MeRedditSurfer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luwa98 | false | null | t3_1luwa98 | /r/LocalLLaMA/comments/1luwa98/help_needed_building_a_pc/ | false | false | self | 3 | null |
Is hyperthreads the right language to use? | 0 | So we made an app with an local llm and it’s coming to steam soon, but one of my testers pointed something:
One of our options is "use GPU", which is active if the user has an NVIDIA card. We titled the amount as "hyperthreads", but my buddy pointed out that he looked at his GPU details and didn't see hyperthr... | 2025-07-08T18:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1luw8s3/is_hyperthreads_the_right_language_to_use/ | ChrisZavadil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luw8s3 | false | null | t3_1luw8s3 | /r/LocalLLaMA/comments/1luw8s3/is_hyperthreads_the_right_language_to_use/ | false | false | self | 0 | null |
In-browser Local Document Understanding Using SmolDocling 256M with Transformers.js | 25 | Hello everyone! A couple of days ago, I came across SmolDocling-256M and liked how well it performed for its size with document understanding and feature extraction. As such, I wanted to try my hand at creating a demo for it using Transformers.js since there weren't any that I saw.
Anyway, how it works is that th... | 2025-07-08T18:20:48 | https://v.redd.it/zm461kmdzobf1 | ajunior7 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1luw2yu | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/zm461kmdzobf1/DASHPlaylist.mpd?a=1754590879%2CZDczZDllZTJhNDM4M2M0ZGRkMzMwY2EzY2E3NDAzMmFjYTY4NWE0NzhhOWQxZDQ3Yjc2MzA1YjgxZmEwNjg4Yw%3D%3D&v=1&f=sd', 'duration': 104, 'fallback_url': 'https://v.redd.it/zm461kmdzobf1/DASH_720.mp4?source=fallback', 'h... | t3_1luw2yu | /r/LocalLLaMA/comments/1luw2yu/inbrowser_local_document_understanding_using/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'ZzZ1M3I5bWR6b2JmMZYVJOaYs5vdxL_1uR-mCmQPKun2b6oZ6FYMJ70b2gSH', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/ZzZ1M3I5bWR6b2JmMZYVJOaYs5vdxL_1uR-mCmQPKun2b6oZ6FYMJ70b2gSH.png?width=108&crop=smart&format=pjpg&auto=webp&s=2b350a5c851cafbe6c277819ead1abb03037c... | |
DataHack Summit 2025, Bengaluru | 1 | [removed] | 2025-07-08T18:20:04 | https://www.reddit.com/r/LocalLLaMA/comments/1luw2a3/datahack_summit_2025_bengaluru/ | adroit2021 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luw2a3 | false | null | t3_1luw2a3 | /r/LocalLLaMA/comments/1luw2a3/datahack_summit_2025_bengaluru/ | false | false | self | 1 | null |
Need help with on prem | 1 | Hey guys I’ve always been using the closed sourced llms like openai, gemini etc… but I realized I don’t really understand a lot of things especially with on prem related projects (I’m just a junior).
Let's say I want to use a specific LLM with X parameters. My questions are as follows:
1) How do I know what GPUs are r... | 2025-07-08T18:18:41 | https://www.reddit.com/r/LocalLLaMA/comments/1luw10n/need_help_with_on_prem/ | yazanrisheh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luw10n | false | null | t3_1luw10n | /r/LocalLLaMA/comments/1luw10n/need_help_with_on_prem/ | false | false | self | 1 | null |
Which project/program or IDE to use attached with LLM online/local for free use for entire code-repos? | 1 | I have been using LangGraph and a Chroma vector DB for my RAG pipeline. In the last month I have fallen out of the loop. I still use DeepSeek R1 on chat.deepseek.com on a daily basis for development purposes. It increases my daily development productivity manifold, as it understands any library and code requirements real... | 2025-07-08T18:09:58 | https://www.reddit.com/r/LocalLLaMA/comments/1luvt31/which_projectprogram_or_ide_to_use_attached_with/ | tapu_buoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luvt31 | false | null | t3_1luvt31 | /r/LocalLLaMA/comments/1luvt31/which_projectprogram_or_ide_to_use_attached_with/ | false | false | self | 1 | null |
How Syncora.ai Solves Data Scarcity and Privacy Concerns in AI | 1 | [removed] | 2025-07-08T17:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1luv9pm/how_syncoraai_solves_data_scarcity_and_privacy/ | Syncoraai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luv9pm | false | null | t3_1luv9pm | /r/LocalLLaMA/comments/1luv9pm/how_syncoraai_solves_data_scarcity_and_privacy/ | false | false | self | 1 | null |
I used ChatGPT to formulate 50+ questions to test the latest Cogito Qwen 8b model, in "thinking" mode, here are the results | 4 | I wanted to see how smart this thing was for day-to-day use as I intend to use this to make notes of books, articles etc, as well as assisting writing documents.
Cogito Qwen 8B — Extended Reasoning Evaluation (Thinking Mode)
Evaluator: Freshmancult
Facilitator: ChatGPT
System: MAINGEAR MG-1 (Intel Core Ultra 7 265K,... | 2025-07-08T17:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1luu94f/i_used_chatgpt_to_formulate_50_questions_to_test/ | FreshmanCult | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luu94f | false | null | t3_1luu94f | /r/LocalLLaMA/comments/1luu94f/i_used_chatgpt_to_formulate_50_questions_to_test/ | false | false | self | 4 | null |
LLM Writing Style / GPT Slop | 2 | So I'm using openrouter with a bunch of different models and I can not get it to follow a human writing style to matter what I do.
I get one of the following (or more than one) depending on which model and settings I use:
* Constant interjection of narration \*user is blah blah blah\*. I have tried telling it specifi... | 2025-07-08T17:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/1luu7x2/llm_writing_style_gpt_slop/ | anon294884 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luu7x2 | false | null | t3_1luu7x2 | /r/LocalLLaMA/comments/1luu7x2/llm_writing_style_gpt_slop/ | false | false | self | 2 | null |
Best model for spanish? | 3 | Hi, i need an LLM to analyze a very long text in spanish and draw conclusions from it. What would be the best model for this? | 2025-07-08T17:08:52 | https://www.reddit.com/r/LocalLLaMA/comments/1luu65g/best_model_for_spanish/ | Bionic_Push | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luu65g | false | null | t3_1luu65g | /r/LocalLLaMA/comments/1luu65g/best_model_for_spanish/ | false | false | self | 3 | null |
I am having trouble with llama fine tuning using LoRA unsloth | 2 | Hi I am developing a LoRA fine tuned (with unsloth) unsloth/Llama-3.2-3B-Instruct-unsloth-bnb-4bit model for a specific use case I trained the model with 99.1k rows in the below format
<s>\[INST\] <<SYS>>
\[Your system prompt here\]
<</SYS>>
\[User message\] \[/INST\] \[Assistant reply\] </s>
but when I run , M... | 2025-07-08T17:01:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lutzav/i_am_having_trouble_with_llama_fine_tuning_using/ | the__nitch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lutzav | false | null | t3_1lutzav | /r/LocalLLaMA/comments/1lutzav/i_am_having_trouble_with_llama_fine_tuning_using/ | false | false | self | 2 | null |
AI Coding Showdown: I tested Gemini CLI, Claude Code and ForgeCode in the Terminal | 2 | I've been using some terminal-based AI tools recently, Claude Code, Forge Code and Gemini CLI, for real development tasks like debugging apps with multiple files, building user interfaces, and quick prototyping. Here's how each one performed:
**Claude Code:**
I tested multi-file debugging with Claude, and also gave i... | 2025-07-08T16:51:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lutqg5/ai_coding_showdown_i_tested_gemini_cli_claude/ | codes_astro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lutqg5 | false | null | t3_1lutqg5 | /r/LocalLLaMA/comments/1lutqg5/ai_coding_showdown_i_tested_gemini_cli_claude/ | false | false | self | 2 | null |
Local PDF Database searchable with ollama - best setup? | 2 | Hi all,
I recently digitized most of my documents as PDFs and set up Paperless-NGX to provide a searchable web-based database. Since the documents are analyzed with OCR, the search in Paperless-NGX for certain keywords works really well to identify documents related to e.g. car repairs.
Now I'm trying to use a loc... | 2025-07-08T16:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lutlfx/local_pdf_database_searchable_with_ollama_best/ | 540Flair | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lutlfx | false | null | t3_1lutlfx | /r/LocalLLaMA/comments/1lutlfx/local_pdf_database_searchable_with_ollama_best/ | false | false | self | 2 | null |
SmolLM3: reasoning, long context and multilinguality for 3B parameter only | 358 | Hi there, I'm Elie from the smollm team at huggingface, sharing this new model we built for local/on device use!
blog: [https://huggingface.co/blog/smollm3](https://huggingface.co/blog/smollm3)
GGUF/ONNX ckpts are being uploaded here: [https://huggingface.co/collections/HuggingFaceTB/smollm3-686d33c1fdffe8e63531... | 2025-07-08T16:14:16 | eliebakk | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lusr7l | false | null | t3_1lusr7l | /r/LocalLLaMA/comments/1lusr7l/smollm3_reasoning_long_context_and/ | false | false | default | 358 | {'enabled': True, 'images': [{'id': 'njam3shfcobf1', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/njam3shfcobf1.png?width=108&crop=smart&auto=webp&s=02512340691c024aa56fdfadf2cf00ed3eaa8f6c', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/njam3shfcobf1.png?width=216&crop=smart&auto=web... |
RX 6800 vs RTX 3060 for LLM? Does the Extra 4GB VRAM Matter, or Is CUDA Still King? | 4 | Hey all,
I’m looking to add a GPU to my homelab for learning about self-hosted LLMs. My current plan is to experiment with replacing commercial voice assistants like Alexa and Google Assistant with something I control myself. I want to build a system that can do more than just basic smart device commands, so I’m inter... | 2025-07-08T16:13:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lusqmt/rx_6800_vs_rtx_3060_for_llm_does_the_extra_4gb/ | SKX007J1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lusqmt | false | null | t3_1lusqmt | /r/LocalLLaMA/comments/1lusqmt/rx_6800_vs_rtx_3060_for_llm_does_the_extra_4gb/ | false | false | self | 4 | null |
Run Fine-Tuned LLMs on iPhone Neural Engine | 15 |
Run Fine-Tuned LLMs Right on Your iPhone – No Code Needed
Vector Space now lets you run powerful, fine-tuned large language models directly on your iPhone. No servers, no code — just tap and chat.
🚀 Why Vector Space:
1. Fine-Tuned Models Ready to Go
Run custom Qwen3 and Llama 3.2 models — including jailbreak, rolep... | 2025-07-08T16:02:04 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lusfyg | false | null | t3_1lusfyg | /r/LocalLLaMA/comments/1lusfyg/run_finetuned_llms_on_iphone_neural_engine/ | false | false | default | 15 | {'enabled': True, 'images': [{'id': 'v1kzyhlbbobf1', 'resolutions': [{'height': 160, 'url': 'https://preview.redd.it/v1kzyhlbbobf1.jpeg?width=108&crop=smart&auto=webp&s=6e11e1f6890e5b8d7706a409bd4fb7e85e523a93', 'width': 108}, {'height': 320, 'url': 'https://preview.redd.it/v1kzyhlbbobf1.jpeg?width=216&crop=smart&auto=... | |
new models from NVIDIA: OpenCodeReasoning-Nemotron-1.1 7B/14B/32B | 179 | OpenCodeReasoning-Nemotron-1.1-7B is a large language model (LLM) which is a derivative of Qwen2.5-7B-Instruct (AKA the reference model). It is a reasoning model that is post-trained for reasoning for code generation. The model supports a context length of 64k tokens.
This model is ready for commercial/non-commerci... | 2025-07-08T15:48:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lus2yw/new_models_from_nvidia/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lus2yw | false | null | t3_1lus2yw | /r/LocalLLaMA/comments/1lus2yw/new_models_from_nvidia/ | false | false | self | 179 | {'enabled': False, 'images': [{'id': 'xydcboaWr0AtFYvdA_VYKzaGbb6J3DC7YWd6PyBFtp0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/xydcboaWr0AtFYvdA_VYKzaGbb6J3DC7YWd6PyBFtp0.png?width=108&crop=smart&auto=webp&s=64ec5a05cd96cb87a4112fc8bfe7de098998dfb3', 'width': 108}, {'height': 116, 'url': 'h... |
NextCoder - a Microsoft Collection | 128 | 2025-07-08T15:45:04 | https://huggingface.co/collections/microsoft/nextcoder-6815ee6bfcf4e42f20d45028 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lurzqf | false | null | t3_1lurzqf | /r/LocalLLaMA/comments/1lurzqf/nextcoder_a_microsoft_collection/ | false | false | default | 128 | {'enabled': False, 'images': [{'id': '7_jhFTazab6GMtEoANxssbRBy-NQcSp84SYt3Tyoa40', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7_jhFTazab6GMtEoANxssbRBy-NQcSp84SYt3Tyoa40.png?width=108&crop=smart&auto=webp&s=b1f0986f5db53e6006b251423459f34eeb980baa', 'width': 108}, {'height': 116, 'url': 'h... | |
Why can't I see more than 25 posts on this subreddit? | 1 | [removed] | 2025-07-08T15:44:15 | https://www.reddit.com/r/LocalLLaMA/comments/1luryyv/why_cant_see_posts_more_than_25_on_this_subreddit/ | donald-bro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luryyv | false | null | t3_1luryyv | /r/LocalLLaMA/comments/1luryyv/why_cant_see_posts_more_than_25_on_this_subreddit/ | false | false | self | 1 | null |
Major Hugging Face announcement on July 24th | 20 | 2025-07-08T15:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lurv79/major_hugging_face_announcement_on_july_24th/ | LightEt3rnaL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lurv79 | false | null | t3_1lurv79 | /r/LocalLLaMA/comments/1lurv79/major_hugging_face_announcement_on_july_24th/ | false | false | 20 | null | ||
NVIDIA’s Highly Anticipated “Mini-Supercomputer,” the DGX Spark, Launches This Month — Bringing Immense AI Power to Your Hands — up to 4000$ | 276 | 2025-07-08T15:33:16 | https://wccftech.com/nvidia-mini-supercomputer-the-dgx-spark-launches-this-month/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1luroqh | false | null | t3_1luroqh | /r/LocalLLaMA/comments/1luroqh/nvidias_highly_anticipated_minisupercomputer_the/ | false | false | 276 | {'enabled': False, 'images': [{'id': '0pU0OZQ3jKyRpVTXegSNFV4uVFdUj2o4hXpi85CuSUA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/0pU0OZQ3jKyRpVTXegSNFV4uVFdUj2o4hXpi85CuSUA.png?width=108&crop=smart&auto=webp&s=fd25657d3fce734d4025693e620867a7cf866fd1', 'width': 108}, {'height': 121, 'url': 'h... | ||
Practical Attacks on AI Text Classifiers with RL (Qwen/Llama, datasets and models available for download) | 170 | 2025-07-08T15:26:41 | https://trentmkelly.substack.com/p/practical-attacks-on-ai-text-classifiers | WithoutReason1729 | trentmkelly.substack.com | 1970-01-01T00:00:00 | 0 | {} | 1lurili | false | null | t3_1lurili | /r/LocalLLaMA/comments/1lurili/practical_attacks_on_ai_text_classifiers_with_rl/ | false | false | 170 | {'enabled': False, 'images': [{'id': 'TnzOhNCefQgo-evQobNC87LggZOvGaM0K7HMeNzBrog', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TnzOhNCefQgo-evQobNC87LggZOvGaM0K7HMeNzBrog.jpeg?width=108&crop=smart&auto=webp&s=96d86512027cc494679c015829c2e32e072fa35f', 'width': 108}, {'height': 108, 'url': '... | ||
What kind of throughput can I expect with Llama 3.1 on an H200? | 3 | I'm feeding millions of documents consisting of 2-10 sentences each into Llama-3.1-8B-Instruct using PyTorch and asking for a <25 token summary. On a single H200, I'm averaging around 30-40 toks/sec.
My batch size is 128, which leads to VRAM utilization topping out at around 101GB (out of 141 GB). If I increase it muc... | 2025-07-08T15:21:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lure0g/what_kind_of_throughput_can_i_expect_with_llama/ | big_like_a_pickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lure0g | false | null | t3_1lure0g | /r/LocalLLaMA/comments/1lure0g/what_kind_of_throughput_can_i_expect_with_llama/ | false | false | self | 3 | null |
Skywork/Skywork-R1V3-38B · Hugging Face | 80 | **Skywork-R1V3-38B** is the **latest and most powerful open-source multimodal reasoning model** in the Skywork series, pushing the boundaries of multimodal and cross-disciplinary intelligence. With an elaborate RL algorithm in the post-training stage, R1V3 significantly enhances multimodal reasoning ability and achieve... | 2025-07-08T14:37:34 | https://huggingface.co/Skywork/Skywork-R1V3-38B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1luq8hp | false | null | t3_1luq8hp | /r/LocalLLaMA/comments/1luq8hp/skyworkskyworkr1v338b_hugging_face/ | false | false | default | 80 | {'enabled': False, 'images': [{'id': '3e7sHYHNZhfN6oJnsHpPG0zaYjLPHYlt79D7YjZqJp0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3e7sHYHNZhfN6oJnsHpPG0zaYjLPHYlt79D7YjZqJp0.png?width=108&crop=smart&auto=webp&s=fa8785f2c86e0a266b664cbff742d81402b4d47e', 'width': 108}, {'height': 116, 'url': 'h... |
apple/DiffuCoder-7B-cpGRPO | 1 | 2025-07-08T14:15:20 | https://huggingface.co/apple/DiffuCoder-7B-cpGRPO | pahadi_keeda | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lupopn | false | null | t3_1lupopn | /r/LocalLLaMA/comments/1lupopn/applediffucoder7bcpgrpo/ | false | false | default | 1 | null | |
Efficient Multimodal Data Pipeline | 8 | Using knapsack algorithm to efficiently batch the data helps train faster. In the blog post we cover a stage wise approach to making the data pipeline better.
https://preview.redd.it/wdccczfarnbf1.png?width=1000&format=png&auto=webp&s=2338717ec3d962002b326245686078de2ee7b479
https://preview.redd.it/nwc3prfarnbf1.png?... | 2025-07-08T14:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lupk47/efficient_multimodal_data_pipeline/ | Disastrous-Work-1632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lupk47 | false | null | t3_1lupk47 | /r/LocalLLaMA/comments/1lupk47/efficient_multimodal_data_pipeline/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'htw5f24vzDhoh1VMu1c0r94PxjBdMJnubeaeDosh-w8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/htw5f24vzDhoh1VMu1c0r94PxjBdMJnubeaeDosh-w8.png?width=108&crop=smart&auto=webp&s=9f9f9c89e881153ea2dca9c42d70ea4867b3cc83', 'width': 108}, {'height': 108, 'url': 'h... | |
Q&A Content Pattern Assessment — Survey from Stack Overflow | 2 | My name is Caro and I’m a Strategic Product Designer at Stack Overflow working alongside our Data Scientists. We’re looking to better understand this community's take on which types of content patterns (shown using Stack Overflow Q&A examples) are most ideal to support AI model training and what factors lead to better ... | 2025-07-08T14:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lupg1f/qa_content_pattern_assessment_survey_from_stack/ | Tiny_Dot0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lupg1f | false | null | t3_1lupg1f | /r/LocalLLaMA/comments/1lupg1f/qa_content_pattern_assessment_survey_from_stack/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '-i4_uRRkBAWt5Dp8mEpIeZFSh9ZtU3-0chGRRO39eZM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-i4_uRRkBAWt5Dp8mEpIeZFSh9ZtU3-0chGRRO39eZM.png?width=108&crop=smart&auto=webp&s=79db7cfb87caaf1c480a675c03c065b8dc0efe62', 'width': 108}, {'height': 113, 'url': 'h... |
Question on llm and image generation + text output, is there any way to get results like the images, real information with images generated based on actual information. | 2 | 2025-07-08T14:02:54 | https://www.reddit.com/gallery/1lupdrq | Ziov1 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lupdrq | false | null | t3_1lupdrq | /r/LocalLLaMA/comments/1lupdrq/question_on_llm_and_image_generation_text_output/ | false | false | 2 | null | ||
Automated illustration of a Conan story using gemma3 + flux and other local models | 19 | [https://brianheming.substack.com/p/making-illustrated-conan-adventures-039](https://brianheming.substack.com/p/making-illustrated-conan-adventures-039) | 2025-07-08T13:58:34 | RobertTetris | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lup9qp | false | null | t3_1lup9qp | /r/LocalLLaMA/comments/1lup9qp/automated_illustration_of_a_conan_story_using/ | false | false | default | 19 | {'enabled': True, 'images': [{'id': '53ufkibvonbf1', 'resolutions': [{'height': 71, 'url': 'https://preview.redd.it/53ufkibvonbf1.png?width=108&crop=smart&auto=webp&s=1df219bb756e8a2c2fdd5b696830b3a0f337456c', 'width': 108}, {'height': 142, 'url': 'https://preview.redd.it/53ufkibvonbf1.png?width=216&crop=smart&auto=web... | |
We built pinpointed citations for AI answers — works with PDFs, Excel, CSV, Docx & more | 7 | We have added a feature to our RAG pipeline that shows **exact citations** — not just the source file, but the **exact paragraph or row** the AI used to answer.
Click a citation and it scrolls you straight to that spot in the document — works with **PDFs, Excel, CSV, Word, PPTX, Markdown**, and others.
It’s super use... | 2025-07-08T13:57:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lup8yd/we_built_pinpointed_citations_for_ai_answers/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lup8yd | false | null | t3_1lup8yd | /r/LocalLLaMA/comments/1lup8yd/we_built_pinpointed_citations_for_ai_answers/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'V_tfXwBwMp_Y88r_ufJO_lIg21ROik8nAtQZXQAfOSs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/V_tfXwBwMp_Y88r_ufJO_lIg21ROik8nAtQZXQAfOSs.png?width=108&crop=smart&auto=webp&s=eadf34b506a23b84310c25cebe4274660315f5f7', 'width': 108}, {'height': 108, 'url': 'h... |
best local llm from coding and reasoning with support of all file format ? | 0 | plz give me the best option i have unlimited res. | 2025-07-08T13:31:00 | https://www.reddit.com/r/LocalLLaMA/comments/1luomsw/best_local_llm_from_coding_and_reasoning_with/ | Bulky-Kiwi9705 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luomsw | false | null | t3_1luomsw | /r/LocalLLaMA/comments/1luomsw/best_local_llm_from_coding_and_reasoning_with/ | false | false | self | 0 | null |
Mac Studio 512GB online! | 177 | I just had a $10k Mac Studio arrive. The first thing I installed was LM Studio. I downloaded qwen3-235b-a22b and fired it up. Fantastic performance with a small system prompt. I fired up devstral and tried to use it with Cline (a large system prompt agent) and very quickly discovered limitations. I managed to instruct ... | 2025-07-08T12:04:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lumsd2/mac_studio_512gb_online/ | chisleu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lumsd2 | false | null | t3_1lumsd2 | /r/LocalLLaMA/comments/1lumsd2/mac_studio_512gb_online/ | false | false | self | 177 | null |
New model GLM-Experimental is quite good (not local so far) | 47 | 2025-07-08T11:47:13 | https://chat.z.ai/ | AppearanceHeavy6724 | chat.z.ai | 1970-01-01T00:00:00 | 0 | {} | 1lumgjj | false | null | t3_1lumgjj | /r/LocalLLaMA/comments/1lumgjj/new_model_glmexperimental_is_quite_good_not_local/ | false | false | default | 47 | null | |
Google Colab’s new Gemini Integration is legit the best here-let-me-fix-that-for-you Python coding tool I’ve found so far. | 12 | I’m currently a graduate student pursuing a Masters in AI. A lot of our AI & ML class projects for fine-tuning models and such involve creating Jupyter notebooks to run Python for training and evaluating models.
I had been using Anaconda and Jupyter for Python projects, but then I heard that you could get access to f... | 2025-07-08T11:44:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lumebd/google_colabs_new_gemini_integration_is_legit_the/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lumebd | false | null | t3_1lumebd | /r/LocalLLaMA/comments/1lumebd/google_colabs_new_gemini_integration_is_legit_the/ | false | false | self | 12 | null |
Newbie with questions :D | 2 | Hey there, so i am new to this whole LLama local AI, of course i have used chatgpt, claude or even lovabl ai, but as this is just the Surface of Ai i just had a few questions.
So what is my plan?
I have a Cyberdeck (Little Raspberry Pi 4 build as a "Laptop") and i want to run a local AI model on it (would be best wi... | 2025-07-08T11:10:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lulsx0/newbie_with_questions_d/ | Astro2302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lulsx0 | false | null | t3_1lulsx0 | /r/LocalLLaMA/comments/1lulsx0/newbie_with_questions_d/ | false | false | self | 2 | null |
SK Telecom released Korean-focused continual pretraining of Qwen2.5 | 41 | Been testing these for Korean projects. Two models:
72B version: [https://huggingface.co/skt/A.X-4.0](https://huggingface.co/skt/A.X-4.0)
7B version: [https://huggingface.co/skt/A.X-4.0-Light](https://huggingface.co/skt/A.X-4.0-Light)
Benchmarks:
* KMMLU: 78.3 (GPT-4o: 72.5) - Korean version of MMLU with 35k quest... | 2025-07-08T11:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lulpev/sk_telecom_released_koreanfocused_continual/ | Then-Reveal-2162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lulpev | false | null | t3_1lulpev | /r/LocalLLaMA/comments/1lulpev/sk_telecom_released_koreanfocused_continual/ | false | false | self | 41 | {'enabled': False, 'images': [{'id': 'gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gJs2krayQD6wockaWJtUMW1OIOZRXr9NYYQnhITYVao.png?width=108&crop=smart&auto=webp&s=e8b5d73acbd94467d52fea808b993aa0ded5fc64', 'width': 108}, {'height': 116, 'url': 'h... |
Open source Korean LLM beats GPT-4o - SK Telecom's A.X 4.0 | 2 | SK Telecom just open sourced their A.X 4.0 model. Tested it for a Korean project and it's surprisingly good:
* KMMLU: 78.3 (vs GPT-4o: 72.5)
* CLIcK: 83.5 (vs GPT-4o: 80.2)
* Uses 33% fewer tokens for Korean text
Based on Qwen2.5 with additional Korean training. Finally a proper Korean LLM (K-LLM) that understands Ko... | 2025-07-08T10:42:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lulbil/open_source_korean_llm_beats_gpt4o_sk_telecoms_ax/ | Then-Reveal-2162 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lulbil | false | null | t3_1lulbil | /r/LocalLLaMA/comments/1lulbil/open_source_korean_llm_beats_gpt4o_sk_telecoms_ax/ | false | false | self | 2 | null |
Which training framework is the best for fine-tuning the Qwen3 30B MoE model? | 7 | I have tried Llama Factory, MS Swift, and Unsloth for fine-tuning the Qwen3-30B-MoE model. But the training speed is much slower than the Qwen3-14B model. I heard training MoE models is faster than dense models. Would you guide me on how to train the Qwen3-30B-MoE model? | 2025-07-08T10:42:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lulbd7/which_training_framework_is_the_best_for/ | Alone_Ad_6011 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lulbd7 | false | null | t3_1lulbd7 | /r/LocalLLaMA/comments/1lulbd7/which_training_framework_is_the_best_for/ | false | false | self | 7 | null |
Looking for commercial LLM with direct video input | 0 | Hi,
I want to compare video understanding ability between open-source models and commercial models, but after searching for about an hour, it seems only Gemini family models have direct video input ability. I saw a lot of tutorials that break video into image frames and then feed them to Claude or GPT models, but no direct video input.... | 2025-07-08T10:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lul26v/looking_for_commercial_llm_with_direct_video_input/ | CrazyShipTed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lul26v | false | null | t3_1lul26v | /r/LocalLLaMA/comments/1lul26v/looking_for_commercial_llm_with_direct_video_input/ | false | false | self | 0 | null
"Efficient GPT-4V-level multimodal LLM for edge deployment" is published in Nature Communications ! | 1 | [removed] | 2025-07-08T10:15:25 | OpenBMB_Official | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lukvhc | false | null | t3_1lukvhc | /r/LocalLLaMA/comments/1lukvhc/efficient_gpt4vlevel_multimodal_llm_for_edge/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'enlkk9c9lmbf1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/enlkk9c9lmbf1.png?width=108&crop=smart&auto=webp&s=b179665debf57f158c6f6af1bbc405fa04cb1b95', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/enlkk9c9lmbf1.png?width=216&crop=smart&auto=web... | |
Anyone compared Qwen3 embeddings results with/without quantization ? | 13 | I am referring to those models :
[https://huggingface.co/Qwen/Qwen3-Embedding-8B-GGUF](https://huggingface.co/Qwen/Qwen3-Embedding-8B-GGUF)
The model card provides result for the non-quantized models but not for the quantized version | 2025-07-08T09:37:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lukahl/anyone_compared_qwen3_embeddings_results/ | LelouchZer12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lukahl | false | null | t3_1lukahl | /r/LocalLLaMA/comments/1lukahl/anyone_compared_qwen3_embeddings_results/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'nhgLDr3CrdT5WN1jcNRSYohtWVTucJWbBnABdj1Uk68', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nhgLDr3CrdT5WN1jcNRSYohtWVTucJWbBnABdj1Uk68.png?width=108&crop=smart&auto=webp&s=3b1a94d1d02b61ae96a14b59752df02d82fa765d', 'width': 108}, {'height': 116, 'url': 'h... |
Is there any local LLM that comes close to GPT-4 in reasoning and capabilities? Hardware suggestion? | 0 | Hi everyone,
I'm looking for a **local LLM solution** that gets as close as possible to **GPT-4** in terms of:
* Deep reasoning
* Research assistance (Deep research)
* Document drafting
* Coding (apps, websites, debugging, architecture)
* Image generation and analysis (Can create image but can also understand images... | 2025-07-08T09:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1luk04f/is_there_any_local_llm_that_comes_close_to_gpt4/ | ExtensionAd182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luk04f | false | null | t3_1luk04f | /r/LocalLLaMA/comments/1luk04f/is_there_any_local_llm_that_comes_close_to_gpt4/ | false | false | self | 0 | null |
Question about "./llama-server" prompt caching | 5 | Does `./llama-server` support prompt caching (like `--prompt-cache` in the CLI), and if not, what’s the correct way to persist or reuse context between chat turns to avoid recomputing the full prompt each time in API-based usage (e.g., with Open WebUI)? | 2025-07-08T09:16:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lujz2h/question_about_llamaserver_prompt_caching/ | d00m_sayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lujz2h | false | null | t3_1lujz2h | /r/LocalLLaMA/comments/1lujz2h/question_about_llamaserver_prompt_caching/ | false | false | self | 5 | null |
Do you think LeCun will be around for much longer? | 0 | With all the reshuffling of AI leadership and the underwhelming performance of Llama in benchmarks, I'm wondering whether Meta will still see LeCun as the guy to have there, or at best a nice theoretical voice to have around for fun and for looking more palatable to purists? | 2025-07-08T09:02:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lujs0h/do_you_think_lecun_will_be_around_for_much_longer/ | shannister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lujs0h | false | null | t3_1lujs0h | /r/LocalLLaMA/comments/1lujs0h/do_you_think_lecun_will_be_around_for_much_longer/ | false | false | self | 0 | null
Seeking Advice on LLM and Hardware for Real-Time Speech-to-Speech System | 1 | [removed] | 2025-07-08T08:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lujl0o/seeking_advice_on_llm_and_hardware_for_realtime/ | Live_Name8353 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lujl0o | false | null | t3_1lujl0o | /r/LocalLLaMA/comments/1lujl0o/seeking_advice_on_llm_and_hardware_for_realtime/ | false | false | self | 1 | null |
Hunyuan-A13B model support has been merged into llama.cpp | 275 | 2025-07-08T08:36:49 | https://github.com/ggml-org/llama.cpp/pull/14425 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lujedm | false | null | t3_1lujedm | /r/LocalLLaMA/comments/1lujedm/hunyuana13b_model_support_has_been_merged_into/ | false | false | default | 275 | {'enabled': False, 'images': [{'id': '9jUZNMJtHKaljkWO0STnEWPE0o_A8ZlYFbsk9KFfTaQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9jUZNMJtHKaljkWO0STnEWPE0o_A8ZlYFbsk9KFfTaQ.png?width=108&crop=smart&auto=webp&s=8f8321301cd9e7e2450ca67d1db3bab2b2eac99c', 'width': 108}, {'height': 108, 'url': 'h... | |
Agents or Workflows? 150 practitioners voted on the difference | 0 | Since "Building Effective Agents" by [Barry Zhang](https://www.linkedin.com/in/ACoAACP4kpQBq4IDzWseB6tuTC-RTW-jCb-M07s) and [Erik Schluntz](https://www.linkedin.com/in/ACoAAAtfQ_QBjPrLyG8XiNOkkVulx0mtDjPRBFs), the industry has been rife with discussion about what a workflow is vs what an agent is.
I was genuinely... | 2025-07-08T08:11:53 | https://www.reddit.com/r/LocalLLaMA/comments/1luj1cb/agents_or_workflows_150_practitioners_voted_on/ | htahir1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luj1cb | false | null | t3_1luj1cb | /r/LocalLLaMA/comments/1luj1cb/agents_or_workflows_150_practitioners_voted_on/ | false | false | 0 | null | |
[Tool Release] Finetune & Quantize 1–3B LLMs on 8GB RAM using LoFT CLI (TinyLlama + QLoRA + llama.cpp) | 24 | Hey folks — I’ve been working on a CLI tool called **LoFT (Low-RAM Finetuning Toolkit)**, and I finally have a working release.
# 🔧 What it does:
* Finetunes open-source LLMs (1–3B) like **TinyLlama** using **QLoRA**
* Runs entirely on **CPU (MacBook Air 8GB RAM tested)**
* Quantizes to **GGUF** format
* Runs local ... | 2025-07-08T07:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1luiigi/tool_release_finetune_quantize_13b_llms_on_8gb/ | diptanshu1991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luiigi | false | null | t3_1luiigi | /r/LocalLLaMA/comments/1luiigi/tool_release_finetune_quantize_13b_llms_on_8gb/ | false | false | self | 24 | null |
Whisper-large-v3 on VertexAI not supporting return_timestamps: True ? | 6 | Spent hours banging my head on the wall (and ChatGPT and Gemini not helping)
I deployed Whisper-large-v3 in Vertex AI GCP:
[https://console.cloud.google.com/vertex-ai/publishers/openai/model-garden/whisper-large](https://console.cloud.google.com/vertex-ai/publishers/openai/model-garden/whisper-large)
Vibe coded a sc... | 2025-07-08T07:31:53 | https://www.reddit.com/r/LocalLLaMA/comments/1luig63/whisperlargev3_on_vertexai_not_supporting_return/ | Worldly-Side9489 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luig63 | false | null | t3_1luig63 | /r/LocalLLaMA/comments/1luig63/whisperlargev3_on_vertexai_not_supporting_return/ | false | false | self | 6 | null |
Law training | 0 | I’m dreaming of a tool that can help the individual citizen to defend himself. Of course for big complex legal cases the human cannot be replaced.
To the technique:
Any good experience with a local LLM that can be retrained with local law?
| 2025-07-08T07:20:34 | https://www.reddit.com/r/LocalLLaMA/comments/1luia44/law_training/ | IvAx358 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luia44 | false | null | t3_1luia44 | /r/LocalLLaMA/comments/1luia44/law_training/ | false | false | self | 0 | null |
Prompt House: A prompt manager that connects directly to your AI tools | 1 | Hey everyone,
I'd like to share an AI tool I built called Prompt House. It's a prompt manager designed to make your AI workflow faster and more seamless.
The main goal is to eliminate the endless cycle of copy-pasting prompts. It uses the MCP to allow your AI clients to programmatically find and use the perfect promp... | 2025-07-08T07:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lui7nc/prompt_house_a_prompt_manager_that_connects/ | Dull-Interview2947 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lui7nc | false | null | t3_1lui7nc | /r/LocalLLaMA/comments/1lui7nc/prompt_house_a_prompt_manager_that_connects/ | false | false | self | 1 | null |
Which model has good user interface in results part? | 0 | I am using Perplexity but I do not like its user interface in the results section. There are no bullets or headings in the explanation. Which models can I use for a nice look that separates results into sections with headings and bullet points?
| 2025-07-08T07:05:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lui2hs/which_model_has_good_user_interface_in_results/ | 01101110111motiv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lui2hs | false | null | t3_1lui2hs | /r/LocalLLaMA/comments/1lui2hs/which_model_has_good_user_interface_in_results/ | false | false | self | 0 | null |
Grok open source | 0 | a lot of people asking why xai still not open sourced grok 2 and whether they will open source grok 3 next this week.
the answer is grok 4 and grok 3 are actually the same model just trained longer.. so they will most likely release grok 2 soon. but def not grok 3.
Bytedance releases new agentic coding assistant: Trae-Agent | 63 | 2025-07-08T06:36:23 | https://github.com/bytedance/trae-agent | umarmnaq | github.com | 1970-01-01T00:00:00 | 0 | {} | 1luhmmi | false | null | t3_1luhmmi | /r/LocalLLaMA/comments/1luhmmi/bytedance_releases_new_agentic_coding_assistant/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': '2kbD9hIKBj55ykS2AmlC98FIs3m9CAJZ5myO4lqm-lw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2kbD9hIKBj55ykS2AmlC98FIs3m9CAJZ5myO4lqm-lw.png?width=108&crop=smart&auto=webp&s=13d3b96cac3ba7dee333a0689252f0384bf8abaf', 'width': 108}, {'height': 108, 'url': 'h... | |
Which mode l has good user interface in results part? | 0 | I am using perplexity but I do not like its ui in the result section. There are no bullets or headings in the explanation. Which models can I use for a nice look? | 2025-07-08T06:14:52 | https://www.reddit.com/r/LocalLLaMA/comments/1luhah1/which_mode_l_has_good_user_interface_in_results/ | 01101110111motiv | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luhah1 | false | null | t3_1luhah1 | /r/LocalLLaMA/comments/1luhah1/which_mode_l_has_good_user_interface_in_results/ | false | false | self | 0 | null
Seeking advice on unifying local LLaMA and cloud LLMs under one API | 2 | Hi everyone,
I’m working on a project where I need to switch seamlessly between a locally-hosted LLaMA (via llama.cpp or vLLM) and various cloud LLMs (OpenAI, Gemini, Mistral, etc.). Managing separate SDKs and handling retries/failovers has been a real pain.
**Questions:**
1. How are you handling multi-provider rout... | 2025-07-08T06:14:19 | https://www.reddit.com/r/LocalLLaMA/comments/1luha71/seeking_advice_on_unifying_local_llama_and_cloud/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luha71 | false | null | t3_1luha71 | /r/LocalLLaMA/comments/1luha71/seeking_advice_on_unifying_local_llama_and_cloud/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'KtqF55T5R5O_Sm4YbjwusEgQYb6qaXbNg0Zl4TFGdZw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KtqF55T5R5O_Sm4YbjwusEgQYb6qaXbNg0Zl4TFGdZw.png?width=108&crop=smart&auto=webp&s=3942302dc94c4b8597f12b5928b3d53e4f36a0d6', 'width': 108}, {'height': 108, 'url': 'h... | |
I need help understanding what model I can run on my laptop | 0 | Got a Dell 16 off their website with Ryzen 7 AI, 32 GB ram, AMD graphics and a 1 tb SSD. I'm a total vibe coder trying to mess with some ideas, so I'm in the dark. ChatGPT is telling me to go with a 7b model, Claude is saying 70. The project I'm working on involves multiple prompts/returns before output (poor man's ... | 2025-07-08T06:11:09 | https://www.reddit.com/r/LocalLLaMA/comments/1luh8e2/i_need_help_understanding_what_model_i_can_run_on/ | doctordaedalus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luh8e2 | false | null | t3_1luh8e2 | /r/LocalLLaMA/comments/1luh8e2/i_need_help_understanding_what_model_i_can_run_on/ | false | false | self | 0 | null |
Forge: One Unified API for Multi-Provider AI Models (Open Source) | 1 | [removed] | 2025-07-08T06:07:14 | https://www.reddit.com/r/LocalLLaMA/comments/1luh65k/forge_one_unified_api_for_multiprovider_ai_models/ | Status-Hearing-4084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luh65k | false | null | t3_1luh65k | /r/LocalLLaMA/comments/1luh65k/forge_one_unified_api_for_multiprovider_ai_models/ | false | false | 1 | null | |
Gemma 3n on phone with 6GB of ram | 142 | Tokens per second is quite slow on my Pixel 6a (0.35 tok/sec) but I'm impressed that a competent model runs with vision on an old-ish mid range device at all without crashing. I'm using the 2b parameter version instead of the 4b. | 2025-07-08T05:59:38 | Thedudely1 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1luh1w3 | false | null | t3_1luh1w3 | /r/LocalLLaMA/comments/1luh1w3/gemma_3n_on_phone_with_6gb_of_ram/ | false | false | 142 | {'enabled': True, 'images': [{'id': 'fPyup5WyZM70v0dRQUf3fjkEANojxnEMaEGU3OZhwaM', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/3yac87hublbf1.png?width=108&crop=smart&auto=webp&s=c85d6fe8ed6f2139034def46653d2dd438617316', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/3yac87hublbf1.pn... | ||
How to train lora on 5090 | 4 | Hi. Still new to llm world. Would like to train and use my own lora. What's the best way to go about it? | 2025-07-08T05:17:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lugdls/how_to_train_lora_on_5090/ | Adventurous_Rise_683 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lugdls | false | null | t3_1lugdls | /r/LocalLLaMA/comments/1lugdls/how_to_train_lora_on_5090/ | false | false | self | 4 | null |
OpenAI Board Member on AI Job Displacement | 0 | Zico Kolter is the director of CMU's ML Department (ml.cmu.edu), and is on the board for OpenAI. He's also the co-founder and Chief Technical Advisor of Gray Swan AI, and is a Chief Expert at Robert Bosch. He mainly focuses on improving the safety and robustness of ML models, including applications like LLM security an... | 2025-07-08T05:14:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lugblm/openai_board_member_on_ai_job_displacement/ | Electrical_Ad_9568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lugblm | false | null | t3_1lugblm | /r/LocalLLaMA/comments/1lugblm/openai_board_member_on_ai_job_displacement/ | false | false | self | 0 | null |
JOIN MANUS! | 1 | [removed] | 2025-07-08T04:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lufxgd/join_manus/ | Positive-Low4629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lufxgd | false | null | t3_1lufxgd | /r/LocalLLaMA/comments/1lufxgd/join_manus/ | false | false | self | 1 | null |
Create your account on Manus and get 1,500 credits right now! | 3 | Here’s the invitation link for you to create your account and claim your credits:
[https://manus.im/invitation/JJATOCNVI3ZI](https://manus.im/invitation/JJATOCNVI3ZI) | 2025-07-08T04:41:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lufrrm/create_your_account_on_manus_and_get_1500_credits/ | Positive-Low4629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lufrrm | false | null | t3_1lufrrm | /r/LocalLLaMA/comments/1lufrrm/create_your_account_on_manus_and_get_1500_credits/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=108&crop=smart&auto=webp&s=260714b5951bb46fdf2bf0a74425b2ca66c9306b', 'width': 108}, {'height': 114, 'url': 'h... |
WebSailor: a new web agent from alibaba | 1 | [removed] | 2025-07-08T04:31:37 | https://www.reddit.com/r/LocalLLaMA/comments/1luflby/websailor_a_new_web_agent_from_alibaba/ | Ambitious_Tough7265 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luflby | false | null | t3_1luflby | /r/LocalLLaMA/comments/1luflby/websailor_a_new_web_agent_from_alibaba/ | false | false | self | 1 | null |
Let's train a local open-source coding agent model and kick BigAI's ass! | 12 | Who's down? [https://www.reddit.com/r/RooCode/comments/1lufep2/lets\_train\_a\_local\_opensource\_model\_to\_use\_roo/](https://www.reddit.com/r/RooCode/comments/1lufep2/lets_train_a_local_opensource_model_to_use_roo/)
FYI Roo Code is an open source VS Code extension, forked from Cline, which is comparable to Github C... | 2025-07-08T04:26:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lufhso/lets_train_a_local_opensource_coding_agent_model/ | InstrumentalAsylum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lufhso | false | null | t3_1lufhso | /r/LocalLLaMA/comments/1lufhso/lets_train_a_local_opensource_coding_agent_model/ | false | false | self | 12 | null |
Using content filters in AI apps: ShieldGemma, LlamaGuard, or something else? | 1 | **Curious to hear from others building AI apps:**
Are you implementing content filters like **ShieldGemma or LlamaGuard** to block specific outputs from your AI models? Have you experimented with them before?
Some use cases - especially those involving compliance or regulation - require stricter safeguards. If you'r... | 2025-07-08T04:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/1luf8d1/using_content_filters_in_ai_apps_shieldgemma/ | Happy_Percentage_384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luf8d1 | false | null | t3_1luf8d1 | /r/LocalLLaMA/comments/1luf8d1/using_content_filters_in_ai_apps_shieldgemma/ | false | false | self | 1 | null |
Mercury: Ultra-Fast Language Models Based on Diffusion | 10 | Interesting finding. SOTA throughputs for Coder LLMs, 10x speed up over frontier models.
Playground: [https://chat.inceptionlabs.ai/](https://chat.inceptionlabs.ai/)
API: [https://platform.inceptionlabs.ai/](https://platform.inceptionlabs.ai/)
Paper says:
We present Mercury, a new generation of commercial-scale lar... | 2025-07-08T03:58:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lueziv/mercury_ultrafast_language_models_based_on/ | Happy_Percentage_384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lueziv | false | null | t3_1lueziv | /r/LocalLLaMA/comments/1lueziv/mercury_ultrafast_language_models_based_on/ | false | false | self | 10 | null |
test | 0 | test if this post is published | 2025-07-08T03:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1luevql/test/ | Happy_Percentage_384 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1luevql | false | null | t3_1luevql | /r/LocalLLaMA/comments/1luevql/test/ | false | false | self | 0 | null |
Sanity Check | 0 | Looking to get a PoC system in at work to be able to start testing some AI workloads and see how we can use it to augment our staff and improve their day to day workflows.
Got some quotes, but I'm very unfamiliar with ML centric GPUs. We're not looking for new, and don't want to spend a lot for a PoC system. The optio... | 2025-07-08T03:48:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lueszq/sanity_check/ | techdaddy1980 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lueszq | false | null | t3_1lueszq | /r/LocalLLaMA/comments/1lueszq/sanity_check/ | false | false | self | 0 | null |
Day 11/50: Building a small language from scratch: Introduction to the Attention Mechanism in Large Language Models (LLMs) | 47 | #
https://preview.redd.it/ya6uoxmoikbf1.png?width=1024&format=png&auto=webp&s=69253abb996cd2754a0835f4ada4f543826578ac
# Hello everyone!
Welcome back to our journey through the “Build Large Language Models from Scratch” series. So far, we’ve spent a considerable amount of time in the first stage of this journey, l... | 2025-07-08T03:16:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lue75q/day_1150_building_a_small_language_from_scratch/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lue75q | false | null | t3_1lue75q | /r/LocalLLaMA/comments/1lue75q/day_1150_building_a_small_language_from_scratch/ | false | false | 47 | null | |
Qwen3-235B-Q2 running locally on my 64GB (DDR4) and 32GB VRAM machine | 75 | Sharing some experiences here. Mostly vibes, but maybe someone will find this helpful:
CPU: 3950x (16c/32t)
GPU(s): two RX 6800s (16GB at ~520GB/s)
RAM: 64GB 2700mhz DDR4 in dual channel
OS: Ubuntu 24.04
**Inference Software:** Llama-CPP (llama-server specifically) built to use ROCm
**Weights:** Qwen3-235b-a22b... | 2025-07-08T03:15:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lue5xt/qwen3235bq2_running_locally_on_my_64gb_ddr4_and/ | EmPips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lue5xt | false | null | t3_1lue5xt | /r/LocalLLaMA/comments/1lue5xt/qwen3235bq2_running_locally_on_my_64gb_ddr4_and/ | false | false | self | 75 | null |
Insulting LLMs instead of encouraging LLMs in their system prompts works as well. | 167 | So, I was thinking how AIs are very confident about incorrect answers, and how that compares to the Dunning-Kruger effect. Most system prompts have something like, "You are a very intelligent programmer/AI/person/whatever. Help this person". So I ran a test on a local 13B param model, 1 without any prompt, and 1 with th... | 2025-07-08T01:23:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lubwky/insulting_llms_instead_of_encouraging_llms_in/ | Calebhk98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lubwky | false | null | t3_1lubwky | /r/LocalLLaMA/comments/1lubwky/insulting_llms_instead_of_encouraging_llms_in/ | false | false | self | 167 | null
Chrome now includes a built-in local LLM, I built a wrapper to make the API easier to use | 42 | Chrome now includes a native on-device LLM (Gemini Nano) starting in version 138 for extensions. I've been building with it for a while and excited that its finally made it into the latest version of Chrome. It’s powerful, but the official Prompt API can be a bit awkward to use:
- Enforces sessions even for basic usag... | 2025-07-08T01:20:50 | https://github.com/kstonekuan/simple-chromium-ai | kuaythrone | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lubunz | false | null | t3_1lubunz | /r/LocalLLaMA/comments/1lubunz/chrome_now_includes_a_builtin_local_llm_i_built_a/ | false | false | default | 42 | {'enabled': False, 'images': [{'id': 'sp3umckXVxqL0xC9QHfq1Qvl1z_m3teqOXRXzjGhY2E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sp3umckXVxqL0xC9QHfq1Qvl1z_m3teqOXRXzjGhY2E.png?width=108&crop=smart&auto=webp&s=0058905d0ca8d0ae65987f447622e181045f356e', 'width': 108}, {'height': 108, 'url': 'h... |
So, does anyone have a good workflow to replace google search yet? | 21 | As everyone knows, google search has been getting worse the past few years. ChatGPT with web search enabled has become a big tool that is replacing Google for me.
Here are some example queries:
["List the median, 25th/75th percentile MCAT scores for medical schools in California in a table. Sort by rank."](https://... | 2025-07-08T00:58:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lubdcg/so_does_anyone_have_a_good_workflow_to_replace/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lubdcg | false | null | t3_1lubdcg | /r/LocalLLaMA/comments/1lubdcg/so_does_anyone_have_a_good_workflow_to_replace/ | false | false | self | 21 | null |
Dual GPU with 2nd PSU and add2psu confusion | 1 | Hello, apologies if this is a well-flogged topic, but I've read everything I could possibly find and have found myself more confused the more I read.
I want to use 2x 3090s with my workstation. My plan was to use a second PSU with an add2psu adapter and two risers; the second PSU would power the two GPUs, and the workstation PSU woul... | 2025-07-08T00:51:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lub87l/dual_gpu_with_2nd_psu_and_add2psu_confusion/ | throwmesomewhere123 | self.LocalLLaMA
No "attach" button in Jan | 4 | Hello, I feel super stupid asking this, but in the latest versions of Jan the attachment (📎) button has disappeared. Am I the only one experiencing this? | 2025-07-08T00:19:57 | https://www.reddit.com/r/LocalLLaMA/comments/1luaket/no_attach_button_in_jan/ | SensitiveDisk0 | self.LocalLLaMA
Looking to connect devs who want to build something real this summer | 0 | Hey r/LocalLLaMA 👋
I'm not a developer myself, but I'm working with a community that's helping form small teams of people who want to build real, technical projects over the summer, especially in areas like open-source AI, DevOps, infrastructure, or tooling.
It's a multi-month, team-based initiative with mentorship ... | 2025-07-08T00:16:26 | https://www.reddit.com/r/LocalLLaMA/comments/1luahr3/looking_to_connect_devs_who_want_to_build/ | Top_Comfort_5666 | self.LocalLLaMA
Looking for "local" models to run on a supercomputer | 2 | Hey y'all, I currently have access to a lot of compute through my school's research computer (tons of A100s), and I can use up to a reasonable amount of it. I have never run any LLM locally because I am on a MacBook Air. So I want to know good ways to use this compute without going overkill. I can easily procure and use a... | 2025-07-08T00:09:56 | https://www.reddit.com/r/LocalLLaMA/comments/1luacs4/looking_for_local_models_to_run_on_super_computer/ | golden34567 | self.LocalLLaMA
Stream of consciousness from Gemma3-1B | 0 | I've been toying around with building a speech-to-speech interface powered by Gemma3-1B. I asked it to generate stream-of-consciousness responses so I could get a sense of what the text-to-speech sounded like, but found the responses got more and more interesting over time, so I wanted to share! Hope you find it a f... | 2025-07-07T23:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lu9zh2/stream_of_consciousness_from_gemma31b/ | Careful_Breath_1108 | self.LocalLLaMA
Need help with basic functionality | 4 | Over the past 2 months, I've been testing various combinations of models and front ends for a local LLM. I have a Windows computer with a 3090 (24GB VRAM), 32GB of motherboard RAM, and a 2TB SSD. I'm running Ollama on the back end and Open WebUI and AnythingLLM as front ends. I'm successful with direct connections to Ollam... | 2025-07-07T23:04:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lu8x2i/need_help_with_basic_functionality/ | evilbarron2 | self.LocalLLaMA
Help Us Improve Automation Tools – Share Your Experience in a 5-Minute Survey! | 0 | We want your insights!
If you've used automation tools like Zapier, Make, or n8n, we'd love your feedback. We're running a quick 5-minute survey to better understand how people use automation + AI — what works, what doesn't, and what you'd improve. Your input will help shape more intuitive, flexible automation platfor... | 2025-07-07T22:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lu8bw3/help_us_improve_automation_tools_share_your/ | Tinypossum14 | self.LocalLLaMA
Locally run TTS Models | 8 | Hi all,
I'm not familiar with coding in general, and I've been banging my head against ChatGPT and online tutorials trying to make tools such as Tortoise-TTS work, but it's so out of date that ChatGPT can't help me install it because of the amount of deprecation, and I just don't know what I'm doing.
Does anyone have ... | 2025-07-07T22:26:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lu818k/locally_run_tts_models/ | Tankerspam | self.LocalLLaMA
Started r/aiinfra — a subreddit focused on the systems side of running LLMs | 1 | [removed] | 2025-07-07T22:14:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lu7qd3/started_raiinfra_a_subreddit_focused_on_the/ | cookiesupers22 | self.LocalLLaMA