| title (string, 1-300 chars) | score (int64, 0-8.54k) | selftext (string, 0-41.5k chars) | created (timestamp[ns], 2023-04-01 to 2026-03-04, nullable) | url (string, 0-878 chars) | author (string, 3-20 chars) | domain (string, 0-82 chars) | edited (timestamp[ns], 1970-01-01 to 2026-02-19) | gilded (int64, 0-2) | gildings (7 classes) | id (string, 7 chars) | locked (bool) | media (string, 646-1.8k chars, nullable) | name (string, 10 chars) | permalink (string, 33-82 chars) | spoiler (bool) | stickied (bool) | thumbnail (string, 4-213 chars, nullable) | ups (int64, 0-8.54k) | preview (string, 301-5.01k chars, nullable) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Guys wake up.... We can clone ourselves. | 1 | [removed] | 2025-06-23T18:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lip1pz/guys_wake_up_we_can_clone_ourselves/ | its_akphyo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lip1pz | false | null | t3_1lip1pz | /r/LocalLLaMA/comments/1lip1pz/guys_wake_up_we_can_clone_ourselves/ | false | false | self | 1 | null |
Has anybody else found DeepSeek R1 0528 Qwen3 8B to be wildly unreliable? | 1 | [removed] | 2025-06-23T18:49:16 | https://www.reddit.com/r/LocalLLaMA/comments/1liowi7/has_anybody_else_found_deepseek_r1_0528_qwen3_8b/ | Quagmirable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liowi7 | false | null | t3_1liowi7 | /r/LocalLLaMA/comments/1liowi7/has_anybody_else_found_deepseek_r1_0528_qwen3_8b/ | false | false | self | 1 | null |
Code with your voice | 1 | [removed] | 2025-06-23T18:24:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lio8xa/code_with_your_voice/ | D3c1m470r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lio8xa | false | null | t3_1lio8xa | /r/LocalLLaMA/comments/1lio8xa/code_with_your_voice/ | false | false | self | 1 | null |
What local hosted chat/story front ends to open ai compatable api's may I have not heard of? | 1 | [removed] | 2025-06-23T18:02:57 | https://www.reddit.com/r/LocalLLaMA/comments/1linot4/what_local_hosted_chatstory_front_ends_to_open_ai/ | mrgreaper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1linot4 | false | null | t3_1linot4 | /r/LocalLLaMA/comments/1linot4/what_local_hosted_chatstory_front_ends_to_open_ai/ | false | false | self | 1 | null |
Power required to run something like veo 3 or kling 2.1 locally | 1 | [removed] | 2025-06-23T17:40:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lin2sj/power_required_to_run_something_like_veo_3_or/ | Inevitable_Drive4729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lin2sj | false | null | t3_1lin2sj | /r/LocalLLaMA/comments/1lin2sj/power_required_to_run_something_like_veo_3_or/ | false | false | self | 1 | null |
Are we there with Local Code dev? | 1 | [removed] | 2025-06-23T17:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1limzib/are_we_there_with_local_code_dev/ | sandwich_stevens | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limzib | false | null | t3_1limzib | /r/LocalLLaMA/comments/1limzib/are_we_there_with_local_code_dev/ | false | false | self | 1 | null |
having trouble using LMStudio | 1 | [removed] | 2025-06-23T17:25:56 | https://www.reddit.com/r/LocalLLaMA/comments/1limp00/having_trouble_using_lmstudio/ | LazyChampionship5819 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limp00 | false | null | t3_1limp00 | /r/LocalLLaMA/comments/1limp00/having_trouble_using_lmstudio/ | false | false | 1 | null | |
Sharing My 2-Week Solo Build: Local LLM Chat App with Characters, Inline Suggestions, and Prompt Tools | 1 | [removed] | 2025-06-23T17:24:27 | https://www.reddit.com/r/LocalLLaMA/comments/1limnk1/sharing_my_2week_solo_build_local_llm_chat_app/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limnk1 | false | null | t3_1limnk1 | /r/LocalLLaMA/comments/1limnk1/sharing_my_2week_solo_build_local_llm_chat_app/ | false | false | 1 | null | |
What gemma-3 (12b and 27b) version are you using/do you prefer? | 1 | [removed] | 2025-06-23T17:22:28 | https://www.reddit.com/r/LocalLLaMA/comments/1limlml/what_gemma3_12b_and_27b_version_are_you_usingdo/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1limlml | false | null | t3_1limlml | /r/LocalLLaMA/comments/1limlml/what_gemma3_12b_and_27b_version_are_you_usingdo/ | false | false | self | 1 | null |
LM Studio seems to be much slower than Ollama, but Ollama's CLI is pretty limited. Is there a middle ground here? | 1 | [removed] | 2025-06-23T16:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1liltdi/lm_studio_seems_to_be_much_slower_than_ollama_but/ | nat2r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liltdi | false | null | t3_1liltdi | /r/LocalLLaMA/comments/1liltdi/lm_studio_seems_to_be_much_slower_than_ollama_but/ | false | false | self | 1 | null |
Paradigm shift: Polaris takes local models to the next level. | 1 | [removed] | 2025-06-23T16:50:25 | Ordinary_Mud7430 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lilqrp | false | null | t3_1lilqrp | /r/LocalLLaMA/comments/1lilqrp/paradigm_shift_polaris_takes_local_models_to_the/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qvd3fu1aip8f1', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=108&crop=smart&auto=webp&s=edd5d010370cd7d0514093b0caae6ca87889615d', 'width': 108}, {'height': 141, 'url': 'https://preview.redd.it/qvd3fu1aip8f1.jpeg?width=216&crop=smart&auto=w... | |
Has anybody else found DeepSeek-R1-0528-Qwen3-8B to be wildly unreliable? | 1 | [removed] | 2025-06-23T16:42:37 | https://www.reddit.com/r/LocalLLaMA/comments/1liljg6/has_anybody_else_found_deepseekr10528qwen38b_to/ | Quagmirable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liljg6 | false | null | t3_1liljg6 | /r/LocalLLaMA/comments/1liljg6/has_anybody_else_found_deepseekr10528qwen38b_to/ | false | false | self | 1 | null |
Computing power needed to run something equal to Veo 3 or kling 2.1 locally | 1 | [removed] | 2025-06-23T16:39:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lilg7h/computing_power_needed_to_run_something_equal_to/ | Inevitable_Drive4729 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilg7h | false | null | t3_1lilg7h | /r/LocalLLaMA/comments/1lilg7h/computing_power_needed_to_run_something_equal_to/ | false | false | self | 1 | null |
Linus tech tips 48gb 4090 | 1 | [removed] | 2025-06-23T16:36:00 | https://www.reddit.com/r/LocalLLaMA/comments/1lilda6/linus_tech_tips_48gb_4090/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilda6 | false | null | t3_1lilda6 | /r/LocalLLaMA/comments/1lilda6/linus_tech_tips_48gb_4090/ | false | false | self | 1 | null |
Teach LLM to play Tetris | 1 | [removed] | 2025-06-23T16:34:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lilc9y/teach_llm_to_play_tetris/ | hadoopfromscratch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lilc9y | false | null | t3_1lilc9y | /r/LocalLLaMA/comments/1lilc9y/teach_llm_to_play_tetris/ | false | false | self | 1 | null |
Day 1 of 50 Days of Building a Small Language Model from Scratch. Topic: What is a Small Language Model (SLM)? | 3 | [removed] | 2025-06-23T16:15:38 | https://www.reddit.com/r/LocalLLaMA/comments/1liktwh/day_1_of_50_days_of_building_a_small_language/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liktwh | false | null | t3_1liktwh | /r/LocalLLaMA/comments/1liktwh/day_1_of_50_days_of_building_a_small_language/ | false | false | 3 | null |
I create Ghibli-AI-Art-Generator and Open source it | 1 | [removed] | 2025-06-23T16:05:48 | https://v.redd.it/luzutm9m8p8f1 | gaodalie | /r/LocalLLaMA/comments/1likkcm/i_create_ghibliaiartgenerator_and_open_source_it/ | 1970-01-01T00:00:00 | 0 | {} | 1likkcm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/luzutm9m8p8f1/DASHPlaylist.mpd?a=1753416352%2CMzI4ZmM3ZDhmZDliZjlmMGIyOGFmN2YwYTU4MmIwZDBkNGEwZWMzZDI2YjhhNTI4OWYwNDhkOWRhNjEwY2M3OA%3D%3D&v=1&f=sd', 'duration': 50, 'fallback_url': 'https://v.redd.it/luzutm9m8p8f1/DASH_1080.mp4?source=fallback', 'h... | t3_1likkcm | /r/LocalLLaMA/comments/1likkcm/i_create_ghibliaiartgenerator_and_open_source_it/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejh3c3BvOW04cDhmMWjk8BUNYNp88e3U9YNh6_5B3JlSlDRepcSm8_uSSAOn.png?width=108&crop=smart&format=pjpg&auto=webp&s=7fce48988ba2d8da169cd52077f337cd498bf... | |
App that highlights text in pdf llm based its answer on? | 1 | [removed] | 2025-06-23T16:05:22 | https://www.reddit.com/r/LocalLLaMA/comments/1likjy0/app_that_highlights_text_in_pdf_llm_based_its/ | Sea-Replacement7541 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1likjy0 | false | null | t3_1likjy0 | /r/LocalLLaMA/comments/1likjy0/app_that_highlights_text_in_pdf_llm_based_its/ | false | false | self | 1 | null |
50 Days of Building a Small Language Model from Scratch — Day 1: What Are Small Language Models? | 1 | [removed] | 2025-06-23T16:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/1likez1/50_days_of_building_a_small_language_model_from/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1likez1 | false | null | t3_1likez1 | /r/LocalLLaMA/comments/1likez1/50_days_of_building_a_small_language_model_from/ | false | false | 1 | null | |
Script Orchestration | 1 | [removed] | 2025-06-23T15:53:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lik8pk/script_orchestration/ | Loud-Bake-2740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik8pk | false | null | t3_1lik8pk | /r/LocalLLaMA/comments/1lik8pk/script_orchestration/ | false | false | self | 1 | null |
How to integrate dynamic citations in a RAG system with an LLM? | 1 | [removed] | 2025-06-23T15:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lik1vd/how_to_integrate_dynamic_citations_in_a_rag/ | Mobile_Estate_9160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik1vd | false | null | t3_1lik1vd | /r/LocalLLaMA/comments/1lik1vd/how_to_integrate_dynamic_citations_in_a_rag/ | false | false | self | 1 | null |
Open Source LLM Firewall (Self-Hosted, Policy-Driven) | 1 | [removed] | 2025-06-23T15:46:02 | https://github.com/trylonai/gateway | Consistent_Equal5327 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lik18g | false | null | t3_1lik18g | /r/LocalLLaMA/comments/1lik18g/open_source_llm_firewall_selfhosted_policydriven/ | false | false | default | 1 | null |
installing external GPU card | 1 | [removed] | 2025-06-23T15:45:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lik0tm/installing_external_gpu_card/ | tr3g | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lik0tm | false | null | t3_1lik0tm | /r/LocalLLaMA/comments/1lik0tm/installing_external_gpu_card/ | false | false | self | 1 | null |
Is there any modded GPU with 96GB of Vram? | 1 | [removed] | 2025-06-23T15:30:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lijmv5/is_there_any_modded_gpu_with_96gb_of_vram/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijmv5 | false | null | t3_1lijmv5 | /r/LocalLLaMA/comments/1lijmv5/is_there_any_modded_gpu_with_96gb_of_vram/ | false | false | self | 1 | null |
Have access to GPUs - wish to train something that's beneficial to the community | 1 | [removed] | 2025-06-23T15:30:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lijmts/have_access_to_gpus_wish_to_train_something_thats/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijmts | false | null | t3_1lijmts | /r/LocalLLaMA/comments/1lijmts/have_access_to_gpus_wish_to_train_something_thats/ | false | false | self | 1 | null |
How was LLaMA 3.2 1B made? | 1 | [removed] | 2025-06-23T15:24:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lijh20/how_was_llama_32_1b_made/ | AntiquePercentage536 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijh20 | false | null | t3_1lijh20 | /r/LocalLLaMA/comments/1lijh20/how_was_llama_32_1b_made/ | false | false | self | 1 | null |
Advice needed: What is the most eficient way to use a local llm applied to web browsing. | 1 | [removed] | 2025-06-23T15:23:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lijggb/advice_needed_what_is_the_most_eficient_way_to/ | Interesting_Egg9997 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lijggb | false | null | t3_1lijggb | /r/LocalLLaMA/comments/1lijggb/advice_needed_what_is_the_most_eficient_way_to/ | false | false | self | 1 | null |
Rtx 4090 48g or rtx pro 6000 96g | 1 | [removed] | 2025-06-23T15:15:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lij8t7/rtx_4090_48g_or_rtx_pro_6000_96g/ | Fit_Camel_2459 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lij8t7 | false | null | t3_1lij8t7 | /r/LocalLLaMA/comments/1lij8t7/rtx_4090_48g_or_rtx_pro_6000_96g/ | false | false | self | 1 | null |
Anyone Using Local Models for Meeting Summarization? | 1 | [removed] | 2025-06-23T15:10:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lij43u/anyone_using_local_models_for_meeting/ | jaythesong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lij43u | false | null | t3_1lij43u | /r/LocalLLaMA/comments/1lij43u/anyone_using_local_models_for_meeting/ | false | false | self | 1 | null |
[OpenSource] A C library for embedding Apple Intelligence on-device Foundation models in any programming language or application with full support for native tool calling and MCP. | 1 | [removed] | 2025-06-23T15:06:26 | AndrewMD5 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lij0cp | false | null | t3_1lij0cp | /r/LocalLLaMA/comments/1lij0cp/opensource_a_c_library_for_embedding_apple/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '1pwr3sityo8f1', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=108&crop=smart&format=png8&s=02a2dec4d6807528629fd690251a571a048559de', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/1pwr3sityo8f1.gif?width=216&crop=smart&format... | |
What just happened? | 1 | [removed] | 2025-06-23T14:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/1liints/what_just_happened/ | Anti-Hippy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liints | false | null | t3_1liints | /r/LocalLLaMA/comments/1liints/what_just_happened/ | false | false | self | 1 | null |
LTT tests 4090 48gb cards from ebay. | 1 | 2025-06-23T14:47:40 | https://www.youtube.com/watch?v=HZgQp-WDebU | RedditUsr2 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1liiitu | false | {'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/HZgQp-WDebU?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy... | t3_1liiitu | /r/LocalLLaMA/comments/1liiitu/ltt_tests_4090_48gb_cards_from_ebay/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/ZSkXOQ0Ftmzf9m07Ydba1-71lECRPh1WZMhCFovef6Y.jpeg?width=108&crop=smart&auto=webp&s=34b6e95c9e78450a03bc17669db1039556875ab2', 'width': 108}, {'height': 162, 'url': '... | ||
Gemini weird behavior | 1 | [removed] | 2025-06-23T14:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lii1j2/gemini_weird_behavior/ | shahood123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lii1j2 | false | null | t3_1lii1j2 | /r/LocalLLaMA/comments/1lii1j2/gemini_weird_behavior/ | false | false | self | 1 | null |
Nanovllm a lightweight python implementation from the deepseek guys | 1 | [removed] | 2025-06-23T14:01:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lihd6v/nanovllm_a_lightweight_python_implementation_from/ | No_Afternoon_4260 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lihd6v | false | null | t3_1lihd6v | /r/LocalLLaMA/comments/1lihd6v/nanovllm_a_lightweight_python_implementation_from/ | false | false | self | 1 | null |
No new posts & Missing comments on existing posts | 1 | [removed] | 2025-06-23T13:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ligu2j/no_new_posts_missing_comments_on_existing_posts/ | Mushoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligu2j | false | null | t3_1ligu2j | /r/LocalLLaMA/comments/1ligu2j/no_new_posts_missing_comments_on_existing_posts/ | false | false | self | 1 | null |
A team claimed that they fine-tuned a mistral-small to surpass most LLMs across different benchmarks | 1 | [removed] | 2025-06-23T13:37:48 | BreakfastFriendly728 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ligt5l | false | null | t3_1ligt5l | /r/LocalLLaMA/comments/1ligt5l/a_team_claimed_that_they_finetuned_a_mistralsmall/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'upjb09mwjo8f1', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=108&crop=smart&auto=webp&s=8f5d6376276cea367ac070698a99e40f49223f4f', 'width': 108}, {'height': 139, 'url': 'https://preview.redd.it/upjb09mwjo8f1.png?width=216&crop=smart&auto=web... | |
Vulkan + termux llama.cpp not working | 1 | [removed] | 2025-06-23T13:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ligiit/vulkan_termux_llamacpp_not_working/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligiit | false | null | t3_1ligiit | /r/LocalLLaMA/comments/1ligiit/vulkan_termux_llamacpp_not_working/ | false | false | self | 1 | null |
Are there any LLMs that are actually able to run on an "affordable" setup? Like, a server <$500/mo? | 1 | [removed] | 2025-06-23T13:15:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ligb2z/are_there_any_llms_that_are_actually_able_to_run/ | g15mouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ligb2z | false | null | t3_1ligb2z | /r/LocalLLaMA/comments/1ligb2z/are_there_any_llms_that_are_actually_able_to_run/ | false | false | self | 1 | null |
AMD Formally Launches Radeon AI PRO 9000 Series | 1 | 2025-06-23T13:10:51 | https://www.techpowerup.com/338086/amd-formally-launches-ryzen-threadripper-pro-9000-and-radeon-ai-pro-9000-series | Risse | techpowerup.com | 1970-01-01T00:00:00 | 0 | {} | 1lig76b | false | null | t3_1lig76b | /r/LocalLLaMA/comments/1lig76b/amd_formally_launches_radeon_ai_pro_9000_series/ | false | false | default | 1 | null | |
Just Picked up a 16" M3 Pro 36GB MacBook Pro for $1,250. What should I run? | 1 | [removed] | 2025-06-23T13:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lifz7x/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/ | mentalasf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lifz7x | false | null | t3_1lifz7x | /r/LocalLLaMA/comments/1lifz7x/just_picked_up_a_16_m3_pro_36gb_macbook_pro_for/ | false | false | self | 1 | null |
Llama.cpp vulkan on termux giving "assertion errno = ETIME failed" | 1 | [removed] | 2025-06-23T12:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lifr7f/llamacpp_vulkan_on_termux_giving_assertion_errno/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lifr7f | false | null | t3_1lifr7f | /r/LocalLLaMA/comments/1lifr7f/llamacpp_vulkan_on_termux_giving_assertion_errno/ | false | false | self | 1 | null |
Kevin Durant - NBA star, is an early investor in Hugging Face (2017) | 1 | [removed] | 2025-06-23T12:38:39 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lifi4l | false | null | t3_1lifi4l | /r/LocalLLaMA/comments/1lifi4l/kevin_durant_nba_star_is_an_early_investor_in/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'afCCDtWnKEpurwPueUempZvBmyC4VOfpSx56OE9DHxk', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jpeg?width=108&crop=smart&auto=webp&s=7b7e56059161a0c3becbe6a003378572bc9910c7', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/tb8r81gv8o8f1.jp... | ||
Llama on iPhone's Neural Engine - 0.05s to first token | 1 | [removed] | 2025-06-23T12:10:13 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1liexm6 | false | null | t3_1liexm6 | /r/LocalLLaMA/comments/1liexm6/llama_on_iphones_neural_engine_005s_to_first_token/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'YVjyxgsIGxIo9mg_gJ1gblaxyKUr6ysq2kyS3LW_cxw', 'resolutions': [{'height': 133, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.jpeg?width=108&crop=smart&auto=webp&s=b6a12a2a8a26071421cfe1b75bf9334321f2ec90', 'width': 108}, {'height': 266, 'url': 'https://preview.redd.it/kphjfwaa4o8f1.j... | ||
Run Llama on iPhone’s Neural Engine - 0.05s to first token | 1 | [removed] | 2025-06-23T12:03:36 | Glad-Speaker3006 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1liessp | false | null | t3_1liessp | /r/LocalLLaMA/comments/1liessp/run_llama_on_iphones_neural_engine_005s_to_first/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'quzzmbr33o8f1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=108&crop=smart&auto=webp&s=c4ed8eb3f189310fd45b2506612c2cccd28e7da6', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/quzzmbr33o8f1.jpeg?width=216&crop=smart&auto=... | |
Where's activity? | 1 | [removed] | 2025-06-23T11:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lidysr/wheres_activity/ | Guilty-Race-9633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidysr | false | null | t3_1lidysr | /r/LocalLLaMA/comments/1lidysr/wheres_activity/ | false | false | self | 1 | null |
Just found out local LLaMA 3 days ago, started with LM Studio. Then, I tried to see what is the biggest model I could use. Don't mind the slow generation. Qwen3-32b Q8 gguf on LM Studio is better than Oobabooga? (PC: R5 3600, RTX3060 12GB, 32GB RAM). What is the best local LLaMA + internet setup? | 1 | [removed] | 2025-06-23T10:48:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lidg19/just_found_out_local_llama_3_days_ago_started/ | Mystvearn2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidg19 | false | null | t3_1lidg19 | /r/LocalLLaMA/comments/1lidg19/just_found_out_local_llama_3_days_ago_started/ | false | false | self | 1 | null |
What's missing in local / open AI? | 1 | [removed] | 2025-06-23T10:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lidc1u/whats_missing_in_local_open_ai/ | Amgadoz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lidc1u | false | null | t3_1lidc1u | /r/LocalLLaMA/comments/1lidc1u/whats_missing_in_local_open_ai/ | false | false | self | 1 | null |
Looking for an upgrade from Meta-Llama-3.1-8B-Instruct-Q4_K_L.gguf, especially for letter parsing. Last time I looked into this was a very long time ago (7 months!) What are the best models nowadays? | 1 | [removed] | 2025-06-23T10:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1licvv8/looking_for_an_upgrade_from/ | AuspiciousApple | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1licvv8 | false | null | t3_1licvv8 | /r/LocalLLaMA/comments/1licvv8/looking_for_an_upgrade_from/ | false | false | self | 1 | null |
We're ReadyTensor! | 1 | [removed] | 2025-06-23T10:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1licryi/were_readytensor/ | Ready_Tensor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1licryi | false | null | t3_1licryi | /r/LocalLLaMA/comments/1licryi/were_readytensor/ | false | false | self | 1 | null |
Gryphe/Codex-24B-Small-3.2 · Hugging Face | 1 | [removed] | 2025-06-23T09:53:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licjp9 | false | null | t3_1licjp9 | /r/LocalLLaMA/comments/1licjp9/gryphecodex24bsmall32_hugging_face/ | false | false | default | 1 | null | ||
quantize: Handle user-defined pruning of whole layers | 1 | [removed] | 2025-06-23T09:52:43 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licj7z | false | null | t3_1licj7z | /r/LocalLLaMA/comments/1licj7z/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null | ||
quantize: Handle user-defined pruning of whole layers (blocks | 1 | [removed] | 2025-06-23T09:51:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licin2 | false | null | t3_1licin2 | /r/LocalLLaMA/comments/1licin2/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null | ||
pruning of whole layers | 1 | [removed] | 2025-06-23T09:51:03 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1licia8 | false | null | t3_1licia8 | /r/LocalLLaMA/comments/1licia8/pruning_of_whole_layers/ | false | false | default | 1 | null | ||
quantize: Handle user-defined pruning of whole layers | 1 | 2025-06-23T09:50:17 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lichur | false | null | t3_1lichur | /r/LocalLLaMA/comments/1lichur/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null | |
quantize: Handle user-defined pruning of whole layers (blocks) by EAddario | 1 | 2025-06-23T09:49:28 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lichev | false | null | t3_1lichev | /r/LocalLLaMA/comments/1lichev/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null | |
quantize: Handle user-defined pruning of whole layers (blocks) by EAddario · Pull Request #13037 · ggml-org/llama.cpp | 1 | [removed] | 2025-06-23T09:48:29 | https://github.com/ggml-org/llama.cpp/pull/13037 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1licgw0 | false | null | t3_1licgw0 | /r/LocalLLaMA/comments/1licgw0/quantize_handle_userdefined_pruning_of_whole/ | false | false | default | 1 | null |
How to use gguf format model for image description? | 1 | [removed] | 2025-06-23T09:45:00 | https://www.reddit.com/r/LocalLLaMA/comments/1liceys/how_to_use_gguf_format_model_for_image_description/ | Best_Character_9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liceys | false | null | t3_1liceys | /r/LocalLLaMA/comments/1liceys/how_to_use_gguf_format_model_for_image_description/ | false | false | self | 1 | null |
Why are there so many invisible posts and comments in Sub LocalLLaMA? | 1 | [removed] | 2025-06-23T09:41:53 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1licdbd | false | null | t3_1licdbd | /r/LocalLLaMA/comments/1licdbd/why_are_there_so_many_invisible_posts_and/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'j68shdhkdn8f1', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=108&crop=smart&auto=webp&s=39910e1086085ac0b6f2e4c95e899e1d063830b8', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/j68shdhkdn8f1.png?width=216&crop=smart&auto=web... | |
Tower+ 72B is build on top of Qwen 2.5 72B | 1 | [removed] | 2025-06-23T09:40:57 | touhidul002 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1licctq | false | null | t3_1licctq | /r/LocalLLaMA/comments/1licctq/tower_72b_is_build_on_top_of_qwen_25_72b/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'jvb6cx2fdn8f1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=108&crop=smart&auto=webp&s=1b44cd01dd4e92f82a3efe01e0e55e294eb2455a', 'width': 108}, {'height': 75, 'url': 'https://preview.redd.it/jvb6cx2fdn8f1.png?width=216&crop=smart&auto=webp... | |
Why are there so many invisible posts and comments in Sub LocalLLaMA? | 1 | [removed] | 2025-06-23T09:28:58 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lic687 | false | null | t3_1lic687 | /r/LocalLLaMA/comments/1lic687/why_are_there_so_many_invisible_posts_and/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'kd9IUPkqQfgcPNQts3CA22i7tAd-qSGmlw_1tgpWAgA', 'resolutions': [{'height': 95, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png?width=108&crop=smart&auto=webp&s=8ded45aa191e76c8609c8a247f936956ba7cda8e', 'width': 108}, {'height': 191, 'url': 'https://preview.redd.it/1kgkmw91an8f1.png... | ||
I want to use local llm for a waste management tool | 1 | [removed] | 2025-06-23T09:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1libt5g/i_want_to_use_local_llm_for_a_waste_management/ | Sonder-Otis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libt5g | false | null | t3_1libt5g | /r/LocalLLaMA/comments/1libt5g/i_want_to_use_local_llm_for_a_waste_management/ | false | false | self | 1 | null |
How can I make my own GPT-4o-Realtime-audio level AI voice (e.g., Mickey Mouse)? | 1 | [removed] | 2025-06-23T09:05:16 | https://www.reddit.com/r/LocalLLaMA/comments/1libt2k/how_can_i_make_my_own_gpt4orealtimeaudio_level_ai/ | thibaudbrg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libt2k | false | null | t3_1libt2k | /r/LocalLLaMA/comments/1libt2k/how_can_i_make_my_own_gpt4orealtimeaudio_level_ai/ | false | false | self | 1 | null |
Its all marketing... | 1 | [removed] | 2025-06-23T08:46:23 | freehuntx | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1libic7 | false | null | t3_1libic7 | /r/LocalLLaMA/comments/1libic7/its_all_marketing/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'mne7a0pd3n8f1', 'resolutions': [{'height': 109, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?width=108&crop=smart&auto=webp&s=25fb492502a60b918fdac98e030184abdea44353', 'width': 108}, {'height': 218, 'url': 'https://preview.redd.it/mne7a0pd3n8f1.png?width=216&crop=smart&auto=we... | |
Can Jetson Xavier NX (16GB) run LLaMA 3.1 8B locally? | 1 | [removed] | 2025-06-23T08:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/1libfq0/can_jetson_xavier_nx_16gb_run_llama_31_8b_locally/ | spacegeekOps | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1libfq0 | false | null | t3_1libfq0 | /r/LocalLLaMA/comments/1libfq0/can_jetson_xavier_nx_16gb_run_llama_31_8b_locally/ | false | false | self | 1 | null |
Searching for an Updated LLM Leaderboard Dataset | 1 | [removed] | 2025-06-23T08:29:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lib9j8/searching_for_an_updated_llm_leaderboard_dataset/ | razziath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lib9j8 | false | null | t3_1lib9j8 | /r/LocalLLaMA/comments/1lib9j8/searching_for_an_updated_llm_leaderboard_dataset/ | false | false | self | 1 | null |
what happened to the sub why are there no posts and all comments are hidden | 1 | [removed] | 2025-06-23T07:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1liakkx/what_happened_to_the_sub_why_are_there_no_posts/ | visionsmemories | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liakkx | false | null | t3_1liakkx | /r/LocalLLaMA/comments/1liakkx/what_happened_to_the_sub_why_are_there_no_posts/ | false | false | self | 1 | null |
Idea to speed up coding models | 1 | [removed] | 2025-06-23T07:33:45 | https://www.reddit.com/r/LocalLLaMA/comments/1liagd7/idea_to_speed_up_coding_models/ | Timotheeee1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liagd7 | false | null | t3_1liagd7 | /r/LocalLLaMA/comments/1liagd7/idea_to_speed_up_coding_models/ | false | false | self | 1 | null |
Notebook LM AI podcast alternative | 1 | [removed] | 2025-06-23T07:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/1liad3b/notebook_lm_ai_podcast_alternative/ | blackkksparx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1liad3b | false | null | t3_1liad3b | /r/LocalLLaMA/comments/1liad3b/notebook_lm_ai_podcast_alternative/ | false | false | self | 1 | null |
Run Llama3 and Mistral Models on your GPU in pure Java: We hit >100 toks/s with GPULlama3.java and Docker images are available | 1 | [removed] | 2025-06-23T07:26:55 | https://github.com/beehive-lab/GPULlama3.java | mikebmx1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1liacv6 | false | null | t3_1liacv6 | /r/LocalLLaMA/comments/1liacv6/run_llama3_and_mistral_models_on_your_gpu_in_pure/ | false | false | default | 1 | null |
Could i fine tune a gemma 3 12b on a limited GPU ? | 1 | [removed] | 2025-06-23T06:50:25 | https://www.reddit.com/r/LocalLLaMA/comments/1li9t78/could_i_fine_tune_a_gemma_3_12b_on_a_limited_gpu/ | Head_Mushroom_3748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9t78 | false | null | t3_1li9t78 | /r/LocalLLaMA/comments/1li9t78/could_i_fine_tune_a_gemma_3_12b_on_a_limited_gpu/ | false | false | self | 1 | null |
Tools to improve sequential order of execution by LLM | 1 | [removed] | 2025-06-23T06:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1li9m05/tools_to_improve_sequential_order_of_execution_by/ | Puzzleheaded-Ad-1343 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9m05 | false | null | t3_1li9m05 | /r/LocalLLaMA/comments/1li9m05/tools_to_improve_sequential_order_of_execution_by/ | false | false | self | 1 | null |
Extract learning needs from an excel sheet | 1 | [removed] | 2025-06-23T06:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1li9jv3/extract_learning_needs_from_an_excel_sheet/ | Opening_Pollution_28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li9jv3 | false | null | t3_1li9jv3 | /r/LocalLLaMA/comments/1li9jv3/extract_learning_needs_from_an_excel_sheet/ | false | false | self | 1 | null |
Will I be happy with a RTX 3090? | 1 | [removed] | 2025-06-23T06:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/1li94zg/will_i_be_happy_with_a_rtx_3090/ | eribob | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li94zg | false | null | t3_1li94zg | /r/LocalLLaMA/comments/1li94zg/will_i_be_happy_with_a_rtx_3090/ | false | false | self | 1 | null |
test post. | 1 | [removed] | 2025-06-23T06:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1li93sw/test_post/ | No-Statement-0001 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li93sw | false | null | t3_1li93sw | /r/LocalLLaMA/comments/1li93sw/test_post/ | false | false | self | 1 | null |
Fenix, a multi-agent trading bot I built to run entirely on a local Mac Mini using Ollama and quanti | 1 | [removed] | 2025-06-23T05:29:19 | https://www.reddit.com/r/LocalLLaMA/comments/1li8jiz/fenix_a_multiagent_trading_bot_i_built_to_run/ | MoveDecent3455 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li8jiz | false | null | t3_1li8jiz | /r/LocalLLaMA/comments/1li8jiz/fenix_a_multiagent_trading_bot_i_built_to_run/ | false | false | self | 1 | null |
Does llama cpp python support the multi-modal changes to llama.cpp? | 1 | [removed] | 2025-06-23T05:06:14 | https://www.reddit.com/r/LocalLLaMA/comments/1li85wg/does_llama_cpp_python_support_the_multimodal/ | KDCreerStudios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li85wg | false | null | t3_1li85wg | /r/LocalLLaMA/comments/1li85wg/does_llama_cpp_python_support_the_multimodal/ | false | false | self | 1 | null |
Qwen3 or gemma3 or phi4 | 1 | [removed] | 2025-06-23T03:52:25 | https://www.reddit.com/r/LocalLLaMA/comments/1li6w48/qwen3_or_gemma3_or_phi4/ | Divkix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6w48 | false | null | t3_1li6w48 | /r/LocalLLaMA/comments/1li6w48/qwen3_or_gemma3_or_phi4/ | false | false | self | 1 | null |
Qwen3 vs phi4 vs gemma3 | 1 | [removed] | 2025-06-23T03:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1li6obr/qwen3_vs_phi4_vs_gemma3/ | Divkix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6obr | false | null | t3_1li6obr | /r/LocalLLaMA/comments/1li6obr/qwen3_vs_phi4_vs_gemma3/ | false | false | self | 1 | null |
🚀 IdeaWeaver Weekly Update: June 23–27, 2024 | 1 | [removed] | 2025-06-23T03:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/1li6nx5/ideaweaver_weekly_update_june_2327_2024/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6nx5 | false | null | t3_1li6nx5 | /r/LocalLLaMA/comments/1li6nx5/ideaweaver_weekly_update_june_2327_2024/ | false | false | self | 1 | null |
🚀 IdeaWeaver Weekly Update: June 23–27, 2024 | 1 | [removed] | 2025-06-23T03:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/1li6jaw/ideaweaver_weekly_update_june_2327_2024/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li6jaw | false | null | t3_1li6jaw | /r/LocalLLaMA/comments/1li6jaw/ideaweaver_weekly_update_june_2327_2024/ | false | false | 1 | null | |
Agents hack the agent orchestration system | 1 | [removed] | 2025-06-23T02:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/1li5egt/agents_hack_the_agent_orchestration_system/ | durapensa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li5egt | false | null | t3_1li5egt | /r/LocalLLaMA/comments/1li5egt/agents_hack_the_agent_orchestration_system/ | false | false | self | 1 | null |
Replacement thermal pads for EVGA 3090 | 1 | [removed] | 2025-06-23T02:05:50 | https://www.reddit.com/r/LocalLLaMA/comments/1li4wul/replacement_thermal_pads_for_evga_3090/ | crapaud_dindon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4wul | false | null | t3_1li4wul | /r/LocalLLaMA/comments/1li4wul/replacement_thermal_pads_for_evga_3090/ | false | false | self | 1 | null |
Polaris: A Post-training recipe for scaling RL on Advanced ReasonIng models | 1 | [removed] | 2025-06-23T01:36:11 | https://www.reddit.com/r/LocalLLaMA/comments/1li4ctn/polaris_a_posttraining_recipe_for_scaling_rl_on/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4ctn | false | null | t3_1li4ctn | /r/LocalLLaMA/comments/1li4ctn/polaris_a_posttraining_recipe_for_scaling_rl_on/ | false | false | self | 1 | null |
Promising Architecture, who should we contact | 1 | [removed] | 2025-06-23T01:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1li4c2h/promising_architecture_who_should_we_contact/ | Commercial-Ad-1148 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li4c2h | false | null | t3_1li4c2h | /r/LocalLLaMA/comments/1li4c2h/promising_architecture_who_should_we_contact/ | false | false | self | 1 | null |
Any local models that has less restraints? | 1 | [removed] | 2025-06-23T01:26:34 | https://www.reddit.com/r/LocalLLaMA/comments/1li467g/any_local_models_that_has_less_restraints/ | rushblyatiful | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li467g | false | null | t3_1li467g | /r/LocalLLaMA/comments/1li467g/any_local_models_that_has_less_restraints/ | false | false | 1 | null | |
TIL that Kevin Durant (yes that one) was an early investor in Huggingface | 1 | [removed] | 2025-06-23T01:25:49 | obvithrowaway34434 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1li45nb | false | null | t3_1li45nb | /r/LocalLLaMA/comments/1li45nb/til_that_kevin_durant_yes_that_one_was_an_early/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'q1_MnU5HXvaazeTN5w2YJeuExTa7DRt8n07DvsWk-yU', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png?width=108&crop=smart&auto=webp&s=08aa76bf66ff3de24796b90fc38a3dcf5014ef4d', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/ofsiyiq3xk8f1.png... | ||
What's your /r/LocalLLaMA "hot take" ? | 1 | [removed] | 2025-06-23T00:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/1li3m1o/whats_your_rlocalllama_hot_take/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li3m1o | false | null | t3_1li3m1o | /r/LocalLLaMA/comments/1li3m1o/whats_your_rlocalllama_hot_take/ | false | false | self | 1 | null |
Best human-like model that doesn't know it's an AI? | 1 | [removed] | 2025-06-22T23:59:46 | https://www.reddit.com/r/LocalLLaMA/comments/1li2g96/best_humanlike_model_that_doesnt_know_its_an_ai/ | RandumbRedditor1000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li2g96 | false | null | t3_1li2g96 | /r/LocalLLaMA/comments/1li2g96/best_humanlike_model_that_doesnt_know_its_an_ai/ | false | false | self | 1 | null |
Really bad performance on EPYC 7C13 with 1TB of RAM, BIOS settings to blame? | 1 | [removed] | 2025-06-22T23:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/1li265m/really_bad_performance_on_epyc_7c13_with_1tb_of/ | BasicCoconut9187 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li265m | false | null | t3_1li265m | /r/LocalLLaMA/comments/1li265m/really_bad_performance_on_epyc_7c13_with_1tb_of/ | false | false | 1 | null | |
Any Local LLM Suggestion ? | 1 | [removed] | 2025-06-22T23:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/1li1bmz/any_local_llm_suggestion/ | thesayk0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li1bmz | false | null | t3_1li1bmz | /r/LocalLLaMA/comments/1li1bmz/any_local_llm_suggestion/ | false | false | self | 1 | null |
What are the best 70b tier models/finetunes these days? | 1 | [removed] | 2025-06-22T23:03:33 | https://www.reddit.com/r/LocalLLaMA/comments/1li1a6i/what_are_the_best_70b_tier_modelsfinetunes_these/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li1a6i | false | null | t3_1li1a6i | /r/LocalLLaMA/comments/1li1a6i/what_are_the_best_70b_tier_modelsfinetunes_these/ | false | false | self | 1 | null |
Best local LLM for JSON Objects? | 1 | [removed] | 2025-06-22T22:56:55 | https://www.reddit.com/r/LocalLLaMA/comments/1li14yb/best_local_llm_for_json_objects/ | ganderofvenice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li14yb | false | null | t3_1li14yb | /r/LocalLLaMA/comments/1li14yb/best_local_llm_for_json_objects/ | false | false | self | 1 | null |
Best local LLM for JSON Objects? | 1 | [removed] | 2025-06-22T22:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/1li11wl/best_local_llm_for_json_objects/ | ganderofvenice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li11wl | false | null | t3_1li11wl | /r/LocalLLaMA/comments/1li11wl/best_local_llm_for_json_objects/ | false | false | self | 1 | null |
What is the best way to give my LLM a single document worth of information to use? | 1 | [removed] | 2025-06-22T22:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/1li0qvm/what_is_the_best_way_to_give_my_llm_a_single/ | lololy87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li0qvm | false | null | t3_1li0qvm | /r/LocalLLaMA/comments/1li0qvm/what_is_the_best_way_to_give_my_llm_a_single/ | false | false | self | 1 | null |
great video explaining how Language Models suddenly got really good, just by throwing more Compute at the problem | 1 | [removed] | 2025-06-22T22:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1li0mi9/great_video_explaining_how_language_models/ | maniaq | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1li0mi9 | false | null | t3_1li0mi9 | /r/LocalLLaMA/comments/1li0mi9/great_video_explaining_how_language_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9sTzOynCNLUEsVy20ac4RuO2848rWQcR3dxZ7wgKjEo.jpeg?width=108&crop=smart&auto=webp&s=699b0146473040c96259633be869ee047c7e2ba2', 'width': 108}, {'height': 162, 'url': '... |
Mid-Range uncensored model for writing, esp. fiction? | 1 | [removed] | 2025-06-22T22:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzzuz/midrange_uncensored_model_for_writing_esp_fiction/ | Late-Assignment8482 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzzuz | false | null | t3_1lhzzuz | /r/LocalLLaMA/comments/1lhzzuz/midrange_uncensored_model_for_writing_esp_fiction/ | false | false | self | 1 | null |
mod deleted his account? | 1 | [removed] | 2025-06-22T21:59:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzvj2/mod_deleted_his_account/ | futurefootballplayer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzvj2 | false | null | t3_1lhzvj2 | /r/LocalLLaMA/comments/1lhzvj2/mod_deleted_his_account/ | false | false | self | 1 | null |
2x NVIDIA RTX 6000 Blackwell GPUs in My AI Workstation – What Should I Test Next? (192GB VRAM + 512 GB ECC DDR5 RAM) | 1 | [removed] | 2025-06-22T21:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lhzfp3/2x_nvidia_rtx_6000_blackwell_gpus_in_my_ai/ | texasdude11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhzfp3 | false | null | t3_1lhzfp3 | /r/LocalLLaMA/comments/1lhzfp3/2x_nvidia_rtx_6000_blackwell_gpus_in_my_ai/ | false | false | self | 1 | null |
Browser extension to desensationalise headlines with a local LLM | 1 | [removed] | 2025-06-22T20:18:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lhxju9/browser_extension_to_desensationalise_headlines/ | Everlier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhxju9 | false | null | t3_1lhxju9 | /r/LocalLLaMA/comments/1lhxju9/browser_extension_to_desensationalise_headlines/ | false | false | self | 1 | null |
What if i made groq for Voice Models?? | 1 | [removed] | 2025-06-22T19:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lhwsr2/what_if_i_made_groq_for_voice_models/ | Expert-Address-2918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lhwsr2 | false | null | t3_1lhwsr2 | /r/LocalLLaMA/comments/1lhwsr2/what_if_i_made_groq_for_voice_models/ | false | false | self | 1 | null |