Schema (from the dataset viewer): title string (1–300 chars) · score int64 (0–8.54k) · selftext string (0–41.5k chars) · created timestamp[ns] (2023-04-01 04:30:41 – 2026-03-04 02:14:14, nullable) · url string (0–878 chars) · author string (3–20 chars) · domain string (0–82 chars) · edited timestamp[ns] (1970-01-01 00:00:00 – 2026-02-19 14:51:53) · gilded int64 (0–2) · gildings string (7 classes) · id string (7 chars) · locked bool (2 classes) · media string (646–1.8k chars, nullable) · name string (10 chars) · permalink string (33–82 chars) · spoiler bool (2 classes) · stickied bool (2 classes) · thumbnail string (4–213 chars, nullable) · ups int64 (0–8.54k) · preview string (301–5.01k chars, nullable)

| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Looking for Uncensored LLMs - Anyone Have Recommendations? | 0 | Hey everyone,<br>I'm really interested in exploring the capabilities of Large Language Models (LLMs), but I’m finding that many of the publicly available ones are heavily censored and have restrictions on the types of responses they can generate.<br>I’m looking for recommendations for more “raw” or uncensored LLMs – ones t... | 2025-07-11T19:28:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lxg042/looking_for_uncensored_llms_anyone_have/ | gagarinten | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxg042 | false | null | t3_1lxg042 | /r/LocalLLaMA/comments/1lxg042/looking_for_uncensored_llms_anyone_have/ | false | false | self | 0 | null |
Trying to fine-tune LLaMA locally… and my GPU is crying | 10 | Decided to fine-tune LLaMA on my poor RTX 3060 for a niche task (legal docs, don’t ask why). It's been... an adventure. Fans screaming, temps soaring, and I swear the PC growled at me once.<br>Anyone else trying to make LLaMA behave on local hardware? What’s your setup — LoRA? QLoRA? Brute force and prayers?<br>Would love ... | 2025-07-11T19:19:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lxfs4d/trying_to_finetune_llama_locally_and_my_gpu_is/ | No_Edge2098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxfs4d | false | null | t3_1lxfs4d | /r/LocalLLaMA/comments/1lxfs4d/trying_to_finetune_llama_locally_and_my_gpu_is/ | false | false | self | 10 | null |
Qwen 3 235b on Zen 2 Threadripper + 2x RTX 3090 | 3 | I’ve just gotten started with llama.cpp, fell in love with it, and decided to run some experiments with big models on my workstation (Threadripper 3970X, 128 GB RAM, 2× RTX 3090s). I’d like to share some interesting results I got.<br>Long story short, I got unsloth/Qwen3-235B-A22B-GGUF:UD-Q2_K_XL (88 GB) running at 15 to... | 2025-07-11T19:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lxfrdh/qwen_3_235b_on_zen_2_threadripper_2x_rtx_3090/ | Septerium | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxfrdh | false | null | t3_1lxfrdh | /r/LocalLLaMA/comments/1lxfrdh/qwen_3_235b_on_zen_2_threadripper_2x_rtx_3090/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': '... |
What is the most wide use case of Llama ? | 0 | Hi guys, just wondering that as Claude is mainly used for coding, what is the main use case of Llama? Do people use it for chat applications? Thanks!. | 2025-07-11T19:04:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lxfep2/what_is_the_most_wide_use_case_of_llama/ | ThisIsCodeXpert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxfep2 | false | null | t3_1lxfep2 | /r/LocalLLaMA/comments/1lxfep2/what_is_the_most_wide_use_case_of_llama/ | false | false | self | 0 | null |
Qwen 3 235b on Zen 2 Threadripper + 2 RTX 3090 | 1 | 2025-07-11T19:01:38 | https://www.reddit.com/gallery/1lxfbjt | Septerium | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lxfbjt | false | null | t3_1lxfbjt | /r/LocalLLaMA/comments/1lxfbjt/qwen_3_235b_on_zen_2_threadripper_2_rtx_3090/ | false | false | 1 | null | ||
Is there any Llama based chat application? | 0 | Hi guys,<br>Just wondering whether there is any ChatGPT like application for Llama models which people are finding useful? Thanks! | 2025-07-11T18:40:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lxes5c/is_there_any_llama_based_chat_application/ | ThisIsCodeXpert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxes5c | false | null | t3_1lxes5c | /r/LocalLLaMA/comments/1lxes5c/is_there_any_llama_based_chat_application/ | false | false | self | 0 | null |
Deepseek's Simple, yet Genius Data Generation Pipeline | 45 | Deepseek Prover V2 - formal reasoning math model | 2025-07-11T18:37:04 | https://youtu.be/wzpGWboeRBo | Which_Pound_6751 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1lxep4s | false | {'oembed': {'author_name': 'Future Lab', 'author_url': 'https://www.youtube.com/@FromFutureLab', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/wzpGWboeRBo?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1lxep4s | /r/LocalLLaMA/comments/1lxep4s/deepseeks_simple_yet_genius_data_generation/ | false | false | default | 45 | null |
FlexOlmo: Open Language Models for Flexible Data Use \| Implications for federated training in the open source community | 11 | "FlexOlmo: Open Language Models for Flexible Data Use" -- https://arxiv.org/abs/2507.07024<br>AllenAI has published a mostly open source model (published weights, code, and theory, but not yet training data) called FlexOlmo which demonstrates how an MoE may be trained in a federated manner, without the incompatibility pr... | 2025-07-11T18:29:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lxehv3/flexolmo_open_language_models_for_flexible_data/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxehv3 | false | null | t3_1lxehv3 | /r/LocalLLaMA/comments/1lxehv3/flexolmo_open_language_models_for_flexible_data/ | false | false | self | 11 | null |
H-Net: a hierarchical network that replaces tokenization with a dynamic chunking process directly inside the model, automatically discovering and operating over meaningful units of data | 48 | 2025-07-11T17:38:39 | https://arxiv.org/pdf/2507.07955 | HOLUPREDICTIONS | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1lxd7nh | false | null | t3_1lxd7nh | /r/LocalLLaMA/comments/1lxd7nh/hnet_a_hierarchical_network_that_replaces/ | false | false | default | 48 | null | |
Introducing Local AI Monster: Run Powerful LLMs Right in Your Browser 🚀 | 6 | Hey r/LocalLLaMA! As a web dev tinkering with local AI, I created Local AI Monster: A React app using MLC's WebLLM and WebGPU to run quantized Instruct models (e.g., Llama-3-8B, Phi-3-mini-4k, Gemma-2-9B) entirely client-side. No installs, no servers—just open in Chrome/Edge and chat. Key Features:<br>* Auto-Detect VRAM & ... | 2025-07-11T17:38:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lxd7ki/introducing_local_ai_monster_run_powerful_llms/ | ShadovvBeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxd7ki | false | null | t3_1lxd7ki | /r/LocalLLaMA/comments/1lxd7ki/introducing_local_ai_monster_run_powerful_llms/ | false | false | self | 6 | null |
Opinion: MoonshotAI’s new Kimi K2 | 0 | Just checked out the new **Kimi K2** from Moonshot AI and it's honestly wild. It’s a **1 trillion parameter Mixture-of-Experts** model, but only activates around **32B parameters at inference**, which makes it super efficient.<br>It’s trained on **15.5 trillion tokens**, uses something called **MuonClip** for optimizatio... | 2025-07-11T17:36:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lxd5p5/opinion_moonshotais_new_kimi_k2/ | ken-senseii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxd5p5 | false | null | t3_1lxd5p5 | /r/LocalLLaMA/comments/1lxd5p5/opinion_moonshotais_new_kimi_k2/ | false | false | self | 0 | null |
We decided to opensource our app | 0 | By request of the many we decided to do what's best for the community, open source the app.<br>Here it is: [https://github.com/Mainframework](https://github.com/Mainframework)<br>We looking to expand our collaborations worldwide, so contact and feedback is welcome.<br>Free knowledge for everyone. Hope this post will not b... | 2025-07-11T17:10:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lxchju/we_decided_to_opensource_our_app/ | Opensoucen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxchju | false | null | t3_1lxchju | /r/LocalLLaMA/comments/1lxchju/we_decided_to_opensource_our_app/ | false | false | self | 0 | null |
How much do you use your local model on average on a day? | 18 | In terms of minutes/hours or number of query/response?<br>I'm averaging around 90 minutes on good days and 30 minutes on bad days. | 2025-07-11T16:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lxbynb/how_much_do_you_use_your_local_model_on_average/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxbynb | false | null | t3_1lxbynb | /r/LocalLLaMA/comments/1lxbynb/how_much_do_you_use_your_local_model_on_average/ | false | false | self | 18 | null |
Drummer's Snowpiercer 15B v2 | 38 | A finetune of ServiceNow's Alice 15B Thinker, but this prioritizes steerability and character adherence. Thinking will work most of the time but may need to wrangle it a bit. | 2025-07-11T16:44:02 | https://huggingface.co/TheDrummer/Snowpiercer-15B-v2 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lxbsw0 | false | null | t3_1lxbsw0 | /r/LocalLLaMA/comments/1lxbsw0/drummers_snowpiercer_15b_v2/ | false | false | default | 38 | {'enabled': False, 'images': [{'id': 'HXBx4eoymx-1BYUGWzf89bQNDef-5prUGW1GTtR1dHU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HXBx4eoymx-1BYUGWzf89bQNDef-5prUGW1GTtR1dHU.png?width=108&crop=smart&auto=webp&s=742b120ed0b7500c807528f76abf431a8c976940', 'width': 108}, {'height': 116, 'url': 'h... |
The 1T Kimi K2 model is using DeepSeek V3 architecture | 154 | 2025-07-11T16:13:31 | AaronFeng47 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lxb0eo | false | null | t3_1lxb0eo | /r/LocalLLaMA/comments/1lxb0eo/the_1t_kimi_k2_model_is_using_deepseek_v3/ | false | false | 154 | {'enabled': True, 'images': [{'id': 'Q200XGQK2QDqTgopEJURrQJKELVtC6jLu8BfpN-NSrQ', 'resolutions': [{'height': 190, 'url': 'https://preview.redd.it/l3gpvb5or9cf1.png?width=108&crop=smart&auto=webp&s=ff23740ffcaa59c97aa66e63a434e727e8b2ad4a', 'width': 108}, {'height': 381, 'url': 'https://preview.redd.it/l3gpvb5or9cf1.pn... | |||
The BastionRank Showdown: Crowning the Best On-Device AI Models of 2025 | 3 | Choosing the right on-device LLM is a major challenge 🤔. How do you balance speed, size, and true intelligence? To find a definitive answer, we created the BastionRank Benchmark.We put 10 of the most promising models through a rigorous gauntlet of tests designed to simulate real-world developer and user needs 🥊. Our ... | 2025-07-11T16:11:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lxaz08/the_bastionrank_showdown_crowning_the_best/ | frayala87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxaz08 | false | null | t3_1lxaz08 | /r/LocalLLaMA/comments/1lxaz08/the_bastionrank_showdown_crowning_the_best/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'fI5k02DR1sOEKQLbuJoh-Ag2mqh9Q28OHsSdk9zyjQE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/fI5k02DR1sOEKQLbuJoh-Ag2mqh9Q28OHsSdk9zyjQE.png?width=108&crop=smart&auto=webp&s=bd8c820178ab5cc2c4f33f2be0e9c5ff6994dd37', 'width': 108}, {'height': 216, 'url': '... |
The BastionRank Showdown: Crowning the Best On-Device AI Models of 2025 | 1 | [removed] | 2025-07-11T16:10:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lxaxta/the_bastionrank_showdown_crowning_the_best/ | frayala87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxaxta | false | null | t3_1lxaxta | /r/LocalLLaMA/comments/1lxaxta/the_bastionrank_showdown_crowning_the_best/ | false | false | self | 1 | null |
The BastionRank Showdown: Crowning the Best On-Device AI Models of 2025 | 1 | [removed] | 2025-07-11T16:09:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lxawgc/the_bastionrank_showdown_crowning_the_best/ | frayala87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxawgc | false | null | t3_1lxawgc | /r/LocalLLaMA/comments/1lxawgc/the_bastionrank_showdown_crowning_the_best/ | false | false | self | 1 | null |
The BastionRank Showdown: Crowning the Best On-Device AI Models of 2025 | 1 | [removed] | 2025-07-11T16:07:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lxaurk/the_bastionrank_showdown_crowning_the_best/ | frayala87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxaurk | false | null | t3_1lxaurk | /r/LocalLLaMA/comments/1lxaurk/the_bastionrank_showdown_crowning_the_best/ | false | false | 1 | null | |
made full comic panels using llama, domoai and canva no sub needed | 1 | [removed] | 2025-07-11T16:05:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lxat73/made_full_comic_panels_using_llama_domoai_and/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxat73 | false | null | t3_1lxat73 | /r/LocalLLaMA/comments/1lxat73/made_full_comic_panels_using_llama_domoai_and/ | false | false | self | 1 | null |
made full comic panels using llama, domoai and canva no sub needed | 1 | [removed] | 2025-07-11T16:03:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lxarih/made_full_comic_panels_using_llama_domoai_and/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxarih | false | null | t3_1lxarih | /r/LocalLLaMA/comments/1lxarih/made_full_comic_panels_using_llama_domoai_and/ | false | false | self | 1 | null |
DeepSeek-TNG-R1T2-Chimera vs DeepSeek R1-0528 quick test | 14 | I find the R1T2 Chimera shorter thinking very suitable for day-to-day code questions. I will give you an example where for the same simple question about modifications to be made to my java spring security cors configuration that allowed all origins, to allow only a specified domain and its subdomains:<br>https://preview... | 2025-07-11T15:39:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lxa4hy/deepseektngr1t2chimera_vs_deepseek_r10528_quick/ | ciprianveg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lxa4hy | false | null | t3_1lxa4hy | /r/LocalLLaMA/comments/1lxa4hy/deepseektngr1t2chimera_vs_deepseek_r10528_quick/ | false | false | 14 | null |
Damn this is deepseek moment one of the 3bst coding model and it's open source and by far it's so good !! | 539 | https://x.com/Kimi_Moonshot/status/1943687594560332025?t=imY6uyPkkt-nqaao67g04Q&s=19 | 2025-07-11T15:23:24 | Independent-Wind4462 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lx9pny | false | null | t3_1lx9pny | /r/LocalLLaMA/comments/1lx9pny/damn_this_is_deepseek_moment_one_of_the_3bst/ | false | false | 539 | {'enabled': True, 'images': [{'id': 'UZVI56woInLaMKwWHApifja7RwNz2PKUDQXcK1QKXgk', 'resolutions': [{'height': 184, 'url': 'https://preview.redd.it/a1tzaif5j9cf1.jpeg?width=108&crop=smart&auto=webp&s=d9647bc9a930559ead2c29c4398ea6f566b948c9', 'width': 108}, {'height': 368, 'url': 'https://preview.redd.it/a1tzaif5j9cf1.j... | ||
Kimi K2 - 1T MoE, 32B active params | 306 | [https://huggingface.co/moonshotai/Kimi-K2-Base](https://huggingface.co/moonshotai/Kimi-K2-Base) | 2025-07-11T15:00:42 | https://www.reddit.com/gallery/1lx94ht | Nunki08 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lx94ht | false | null | t3_1lx94ht | /r/LocalLLaMA/comments/1lx94ht/kimi_k2_1t_moe_32b_active_params/ | false | false | 306 | null | |
moonshotai/Kimi-K2-Instruct (and Kimi-K2-Base) | 319 | Kimi K2 is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, Kimi K2 achieves exceptional performance across frontier knowledge, reasoning, and coding tasks while being meticulously optimized for agentic capa... | 2025-07-11T14:52:41 | https://huggingface.co/moonshotai/Kimi-K2-Instruct | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lx8xdm | false | null | t3_1lx8xdm | /r/LocalLLaMA/comments/1lx8xdm/moonshotaikimik2instruct_and_kimik2base/ | false | false | 319 | {'enabled': False, 'images': [{'id': 'ZjybqN_iZigLaZxMtl0N3yFDPtiDQRo-8LU-o9LYLXQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZjybqN_iZigLaZxMtl0N3yFDPtiDQRo-8LU-o9LYLXQ.png?width=108&crop=smart&auto=webp&s=014f5215759a7ee46cc335661cfd741228ef1b1e', 'width': 108}, {'height': 116, 'url': 'h... | |
ETH Zurich and EPFL will release a fully open-source LLM developed on public infrastructure. Trained on the “Alps” supercomputer at the Swiss National Supercomputing Centre (CSCS). Trained on 60% english/40% non-english, it will be released in 8B and 70B sizes. | 154 | 2025-07-11T14:45:09 | https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html | nat2r | ethz.ch | 1970-01-01T00:00:00 | 0 | {} | 1lx8qrz | false | null | t3_1lx8qrz | /r/LocalLLaMA/comments/1lx8qrz/eth_zurich_and_epfl_will_release_a_fully/ | false | false | 154 | {'enabled': False, 'images': [{'id': 'TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A.jpeg?width=108&crop=smart&auto=webp&s=00f03455ad8e9fc0d7ab20142af7d9f6c62b3273', 'width': 108}, {'height': 107, 'url': '... | ||
Devstral-Vision-Small-2507 | 86 | Mistral released Devstral-Small-2507 - which is AWESOME! But, they released without vision capability. I didn't like that.<br>[**Devstral-Vision-Small-2507**](https://huggingface.co/cognitivecomputations/Devstral-Vision-Small-2507)<br>I did some model surgery. I started with Mistral-Small-3.2-24B-Instruct-2506, and rep... | 2025-07-11T14:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lx85jo/devstralvisionsmall2507/ | faldore | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx85jo | false | null | t3_1lx85jo | /r/LocalLLaMA/comments/1lx85jo/devstralvisionsmall2507/ | false | false | 86 | {'enabled': False, 'images': [{'id': 'gvyfd3vEbQ6Mp2mv-0QDRMEHLRbvSfXNoTHqOWwrwc0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gvyfd3vEbQ6Mp2mv-0QDRMEHLRbvSfXNoTHqOWwrwc0.png?width=108&crop=smart&auto=webp&s=2f0419c9f025891f3303819e45297a253ea78e7c', 'width': 108}, {'height': 116, 'url': 'h... |
Can't push dataset to Hugging Face Hub – CAS service error | 1 | [removed] | 2025-07-11T13:58:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lx7ltr/cant_push_dataset_to_hugging_face_hub_cas_service/ | mahwiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx7ltr | false | null | t3_1lx7ltr | /r/LocalLLaMA/comments/1lx7ltr/cant_push_dataset_to_hugging_face_hub_cas_service/ | false | false | self | 1 | null |
Simple barebones MCP tutorial? | 7 | I am trying to learn about MCPs using a lightweight local LLM (windows laptop with 16GB RAM).<br>I want a simple project, to integrate the LLM with existing MCP servers, to see what it's all about, experiment, and have fun.<br>Can someone suggest a starting point, or a video tutorial?<br>(most I've seen so far either don't ... | 2025-07-11T13:58:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lx7loe/simple_barebones_mcp_tutorial/ | cangaroo_hamam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx7loe | false | null | t3_1lx7loe | /r/LocalLLaMA/comments/1lx7loe/simple_barebones_mcp_tutorial/ | false | false | self | 7 | null |
This week, Google released in Open Source: MedGemma 27B Multimodal, MedSigLIP, T5Gemma | 215 | MedGemma 27B Multimodal for complex multimodal & longitudinal EHR interpretation: [https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4](https://huggingface.co/collections/google/medgemma-release-680aade845f90bec6a3f60c4)<br>MedSigLIP: a lightweight image/text encoder for medical image retr... | 2025-07-11T13:57:36 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lx7l3k | false | null | t3_1lx7l3k | /r/LocalLLaMA/comments/1lx7l3k/this_week_google_released_in_open_source_medgemma/ | false | false | 215 | {'enabled': True, 'images': [{'id': 'WkBW3ZxSnDC2gTb6x_KnIpQDXfKRLjZymR9zHnbq-Zk', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/r2bp20do39cf1.jpeg?width=108&crop=smart&auto=webp&s=c06f842789e7c04781657d6ba7a8362b24d6127f', 'width': 108}, {'height': 195, 'url': 'https://preview.redd.it/r2bp20do39cf1.jp... |
Potentially Noob Question Regarding Live Weight Adjustments | 2 | I have only been utilizing LLMs for a little more than a year and only started down the local LLM path a few months ago, so bare with me if there are already papers about my following question:<br>Are there any models/agents that can cache previous context windows, encode the cached context into weight scalers, and then ... | 2025-07-11T13:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lx7kfh/potentially_noob_question_regarding_live_weight/ | Pyromancer777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx7kfh | false | null | t3_1lx7kfh | /r/LocalLLaMA/comments/1lx7kfh/potentially_noob_question_regarding_live_weight/ | false | false | self | 2 | null |
Getting CAS service error when pushing dataset to Hugging Face Hub | 1 | [removed] | 2025-07-11T13:55:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lx7j6c/getting_cas_service_error_when_pushing_dataset_to/ | mahwiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx7j6c | false | null | t3_1lx7j6c | /r/LocalLLaMA/comments/1lx7j6c/getting_cas_service_error_when_pushing_dataset_to/ | false | false | self | 1 | null |
When asked about Israel v Palestine, Grok 4 searches through twitter and other sources for Elon Musk's views so it can align with them. "Considering Elon Musk's views" is the summary of the CoT task shown at 0:50. Grok 4 does NOT do this for any other question, controversial or political. | 103 | First seen by ramez on twitter - [https://x.com/ramez/status/1943431212766294413](https://x.com/ramez/status/1943431212766294413) and the video is from Jeremy Howard confirming this behaviour - [https://x.com/jeremyphoward/status/1943444549696917714](https://x.com/jeremyphoward/status/1943444549696917714)<br>For othe... | 2025-07-11T13:42:01 | https://v.redd.it/2cdl5xgn09cf1 | lostlifon | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lx78bk | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2cdl5xgn09cf1/DASHPlaylist.mpd?a=1754833338%2CODVkY2Y2ZTBmM2RmYzg5YjVkMjRkNGZiMjBhNGQ0MDhjYmNjMTEzZDQwY2I2NTdmMDNmNjU0YzQ2MTE3YmYyNg%3D%3D&v=1&f=sd', 'duration': 95, 'fallback_url': 'https://v.redd.it/2cdl5xgn09cf1/DASH_720.mp4?source=fallback', 'ha... | t3_1lx78bk | /r/LocalLLaMA/comments/1lx78bk/when_asked_about_israel_v_palestine_grok_4/ | false | false | 103 | {'enabled': False, 'images': [{'id': 'Z3N0OTZ4Z24wOWNmMWs9zsnwyBbwxMH0ZvInVPpBqNaLm12SmXXjJes3XWMY', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/Z3N0OTZ4Z24wOWNmMWs9zsnwyBbwxMH0ZvInVPpBqNaLm12SmXXjJes3XWMY.png?width=108&crop=smart&format=pjpg&auto=webp&s=b590fc5b15ee5cf87b47d64a1a55d22de2c9b... |
Skywork/Skywork-R1V3-38B · Hugging Face | 33 | Skywork-R1V 3.0: an open source model that beats close source models on multi-modal reasoning. | 2025-07-11T13:30:10 | https://huggingface.co/Skywork/Skywork-R1V3-38B | tabspaces | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lx6yer | false | null | t3_1lx6yer | /r/LocalLLaMA/comments/1lx6yer/skyworkskyworkr1v338b_hugging_face/ | false | false | default | 33 | {'enabled': False, 'images': [{'id': '3e7sHYHNZhfN6oJnsHpPG0zaYjLPHYlt79D7YjZqJp0', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3e7sHYHNZhfN6oJnsHpPG0zaYjLPHYlt79D7YjZqJp0.png?width=108&crop=smart&auto=webp&s=fa8785f2c86e0a266b664cbff742d81402b4d47e', 'width': 108}, {'height': 116, 'url': 'h... |
Has Local Llama Development Slowed Down, or Am I Missing Something? 🤔 | 0 | Anyone else feel like things have gone quieter in the open-source Llama scene lately?<br>Earlier this year, there were constant updates, fine-tunes, and people sharing their custom Llama workflows. But these past weeks, I’ve seen less buzz—even though projects like DeepSeek and Gemma keep getting mentioned in broader AI... | 2025-07-11T13:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lx6g3p/has_local_llama_development_slowed_down_or_am_i/ | shaker-ameen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx6g3p | false | null | t3_1lx6g3p | /r/LocalLLaMA/comments/1lx6g3p/has_local_llama_development_slowed_down_or_am_i/ | false | false | self | 0 | null |
llama2.c running on the original 2007 iPhone | 590 | 2025-07-11T13:04:10 | https://v.redd.it/3u6728ask8cf1 | kyousukegum | /r/LocalLLaMA/comments/1lx6dcm/llama2c_running_on_the_original_2007_iphone/ | 1970-01-01T00:00:00 | 0 | {} | 1lx6dcm | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/3u6728ask8cf1/DASHPlaylist.mpd?a=1754960660%2CZGYzYmFjZmEzNDg2MThkZjVhODcyYWJmNGFmZmQxYmMwN2UwMzYzMGU3YjM4NWYxMzFhYTc4MDFhNjI3ZmIxOQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/3u6728ask8cf1/DASH_1080.mp4?source=fallback', 'h... | t3_1lx6dcm | /r/LocalLLaMA/comments/1lx6dcm/llama2c_running_on_the_original_2007_iphone/ | false | false | 590 | {'enabled': False, 'images': [{'id': 'a3NtYmE5YXNrOGNmMX0KWZ1PPnq70dBw4mT1dYRnKuITb3d3yA97K-6QwELL', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/a3NtYmE5YXNrOGNmMX0KWZ1PPnq70dBw4mT1dYRnKuITb3d3yA97K-6QwELL.png?width=108&crop=smart&format=pjpg&auto=webp&s=ea40f484a395e1158453d61b60702f7424cc... | ||
Issues with Qwen 3 Embedding models (4B and 0.6B) | 15 | Hi,<br>I'm currently facing a weird issue.<br>I was testing different embedding models, with the goal being to integrate the best local one in a django application.<br>Architecture is as follows :<br>\- One Mac Book air running LMStudio, acting as a local server for llm and embedding operations<br>\- My PC for the django ap... | 2025-07-11T12:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lx66on/issues_with_qwen_3_embedding_models_4b_and_06b/ | IndependentApart5556 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx66on | false | null | t3_1lx66on | /r/LocalLLaMA/comments/1lx66on/issues_with_qwen_3_embedding_models_4b_and_06b/ | false | false | 15 | null |
Nvidia being Nvidia: FP8 is 150 Tflops faster when kernel name contain "cutlass" | 451 | 2025-07-11T12:50:38 | https://github.com/triton-lang/triton/pull/7298/commits/a5e23d8e7e64b8a11af3edc1705407d91084b01d | bora_ach | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lx62hd | false | null | t3_1lx62hd | /r/LocalLLaMA/comments/1lx62hd/nvidia_being_nvidia_fp8_is_150_tflops_faster_when/ | false | false | default | 451 | null | |
What's the best way to work with granulized AI tasks or "agents." Any front-end UI/program? | 1 | 2025-07-11T12:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lx5wvp/whats_the_best_way_to_work_with_granulized_ai/ | Jattoe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx5wvp | false | null | t3_1lx5wvp | /r/LocalLLaMA/comments/1lx5wvp/whats_the_best_way_to_work_with_granulized_ai/ | false | false | 1 | null | ||
is this websitie safe | 0 | https://skipschool.lol | 2025-07-11T12:36:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lx5rnt/is_this_websitie_safe/ | Remarkable_Emu_6113 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx5rnt | false | null | t3_1lx5rnt | /r/LocalLLaMA/comments/1lx5rnt/is_this_websitie_safe/ | false | false | self | 0 | null |
I have made a github repository for streamlining AI coding flow. Please suggest improvements as additions and substraction to the codebase. | 0 | [https://github.com/kadavilrahul/coding\_task\_manager](https://github.com/kadavilrahul/coding_task_manager) | 2025-07-11T12:33:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lx5p9b/i_have_made_a_github_repository_for_streamlining/ | Maleficent_Mess6445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx5p9b | false | null | t3_1lx5p9b | /r/LocalLLaMA/comments/1lx5p9b/i_have_made_a_github_repository_for_streamlining/ | false | false | self | 0 | null |
FYI Qwen3 235B A22B IQ4_XS works with 128 GB DDR5 + 8GB VRAM in Windows | 28 | (Disclaimers: Nothing new here especially given the recent posts, but was supposed to report back at u/Evening_Ad6637 et al. Furthermore, i am a total noob and do local LLM via LM Studio on Windows 11, so no fancy ik\_llama.cpp etc., as it is just so convenient.)<br>I finally received 2x64 GB DDR5 5600 MHz Sticks (Kingst... | 2025-07-11T12:30:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lx5n8c/fyi_qwen3_235b_a22b_iq4_xs_works_with_128_gb_ddr5/ | Karim_acing_it | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx5n8c | false | null | t3_1lx5n8c | /r/LocalLLaMA/comments/1lx5n8c/fyi_qwen3_235b_a22b_iq4_xs_works_with_128_gb_ddr5/ | false | false | self | 28 | null |
Quick Question: Best Open-Source Model for Local Q&A RAG App? 🤔 | 1 | Hey Reddit!<br>Building a RAG app focused on **Q&A**, and I need a good **open-source model that runs well locally**.<br>What's your go-to for performance vs. hardware (GPU/RAM) on a local setup for answering questions?<br>Thanks for the help!<br>\#RAG #LocalLLM #OpenSource #AI #QandA | 2025-07-11T12:25:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lx5jc1/quick_question_best_opensource_model_for_local_qa/ | Due-Wind6781 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx5jc1 | false | null | t3_1lx5jc1 | /r/LocalLLaMA/comments/1lx5jc1/quick_question_best_opensource_model_for_local_qa/ | false | false | self | 1 | null |
Friendly reminder that Grok 3 should be now open-sourced | 1,255 | 2025-07-11T12:13:48 | https://www.reddit.com/gallery/1lx5awq | Wrong_User_Logged | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lx5awq | false | null | t3_1lx5awq | /r/LocalLLaMA/comments/1lx5awq/friendly_reminder_that_grok_3_should_be_now/ | false | false | 1,255 | null | ||
Blackwell FP8 W8A8 NVFP4 support discussion | 10 | Context here: WSLv2, Win11, Blackwell Pro 6000 workstation.<br>I've beaten my head against the wall with W8A8 FP8 support and kind of loosely eyed NVFP4 from a distance, fully expecting it to be a nightmare. Like may of you I've seen on here, I went through the gauntlet and very specific hell of trying to build vllm + fl... | 2025-07-11T11:58:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lx4zpr/blackwell_fp8_w8a8_nvfp4_support_discussion/ | Kitchen-Year-8434 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx4zpr | false | null | t3_1lx4zpr | /r/LocalLLaMA/comments/1lx4zpr/blackwell_fp8_w8a8_nvfp4_support_discussion/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'w9EtvIrUzqo5Lu-tyiPoB9wYov3Zce-6YSW9Kr_hD50', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w9EtvIrUzqo5Lu-tyiPoB9wYov3Zce-6YSW9Kr_hD50.png?width=108&crop=smart&auto=webp&s=f4ce923d5f84359f99cae7bf1ae54488a8122402', 'width': 108}, {'height': 108, 'url': 'h... |
New OSS project: llamac-lab or a pure C runtime for LLaMA models, made for the edge | 14 | Just sharing my new madness, really, not much to say about it, as its very early.<br>So the idea is very simple lets have an LLM engine that can run relatively large size models on constrained hardware.<br>**So what it is (or going to be if I don't disappear into the abyss):**<br>Started hacking on this today.<br>A **pure C ... | 2025-07-11T11:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lx4ya7/new_oss_project_llamaclab_or_a_pure_c_runtime_for/ | rvnllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx4ya7 | false | null | t3_1lx4ya7 | /r/LocalLLaMA/comments/1lx4ya7/new_oss_project_llamaclab_or_a_pure_c_runtime_for/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'lWjJdcJe5yyBPoySwGyf4ZMSAVXJpSrM43u2nmieDUY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lWjJdcJe5yyBPoySwGyf4ZMSAVXJpSrM43u2nmieDUY.png?width=108&crop=smart&auto=webp&s=843e60c1255f40c94aa5aef6086d2d53756bb490', 'width': 108}, {'height': 108, 'url': 'h... |
Moonshot AI about to release their 1T parameters model? | 103 | This is from their website. | 2025-07-11T11:45:02 | No_Conversation9561 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lx4qhp | false | null | t3_1lx4qhp | /r/LocalLLaMA/comments/1lx4qhp/moonshot_ai_about_to_release_their_1t_parameters/ | false | false | 103 | {'enabled': True, 'images': [{'id': '7JJbXGdkfholjapvB1HrPeqCDKJEt1mFdhwkHKl9F2s', 'resolutions': [{'height': 118, 'url': 'https://preview.redd.it/kts1w8a7g8cf1.jpeg?width=108&crop=smart&auto=webp&s=e9dd3d8cd71b434beaff9edc84a3406f39002d67', 'width': 108}, {'height': 237, 'url': 'https://preview.redd.it/kts1w8a7g8cf1.j... | ||
Need help | 1 | I have been experimenting building my own UI and having it load and run some Llama models. I have an RTX 4080 (16GB VRAM) and I run the Llama 3.1 13B at 50 tokens/s. I was unable to get Llama 4 17B to run any faster than 0.2 Tokens/s.
Llama 3.1 13B is not up to my tasks other than being a standard chatbot. Llama 4 17B... | 2025-07-11T11:38:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lx4mad/need_help/ | Aelexi93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx4mad | false | null | t3_1lx4mad | /r/LocalLLaMA/comments/1lx4mad/need_help/ | false | false | self | 1 | null |
SmolTalk2: The dataset behind SmolLM3's dual reasoning | 35 | 2025-07-11T11:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lx4hxt/smoltalk2_the_dataset_behind_smollm3s_dual/ | loubnabnl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx4hxt | false | null | t3_1lx4hxt | /r/LocalLLaMA/comments/1lx4hxt/smoltalk2_the_dataset_behind_smollm3s_dual/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'guh_Ilc0qa1iubfc8zva4ecZtUzKUNZdEcOEXQmd36A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/guh_Ilc0qa1iubfc8zva4ecZtUzKUNZdEcOEXQmd36A.png?width=108&crop=smart&auto=webp&s=04717d5c88beb59639a322015eed7f0bd8ba211d', 'width': 108}, {'height': 116, 'url': 'h... | ||
Moonshot.ai about to release their new 1T model? | 1 | [deleted] | 2025-07-11T11:25:30 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lx4d8p | false | null | t3_1lx4d8p | /r/LocalLLaMA/comments/1lx4d8p/moonshotai_about_to_release_their_new_1t_model/ | false | false | default | 1 | null | ||
Is Kimi about to release their new 1T param OS model? | 1 | [deleted] | 2025-07-11T11:23:32 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lx4bwr | false | null | t3_1lx4bwr | /r/LocalLLaMA/comments/1lx4bwr/is_kimi_about_to_release_their_new_1t_param_os/ | false | false | default | 1 | null | ||
Prime Intellect on X: Releasing SYNTHETIC-2: our open dataset of 4m verified reasoning traces | 22 | 2025-07-11T11:20:42 | https://x.com/PrimeIntellect/status/1943424561116045389 | Marha01 | x.com | 1970-01-01T00:00:00 | 0 | {} | 1lx4a3t | false | null | t3_1lx4a3t | /r/LocalLLaMA/comments/1lx4a3t/prime_intellect_on_x_releasing_synthetic2_our/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'GJtgM6KSRHIuX6i_TeTKEbRK7BxUUh_vFYMVwQtcUqs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/OI2iID36iwvBkHarjg-7dH6Sebf2j49Fh1hND_ikvDI.jpg?width=108&crop=smart&auto=webp&s=bafb8719e3b1cc5345fd3b0ddd04b3379b419c55', 'width': 108}], 'source': {'height': 20... | ||
EuroEval: The robust European language model benchmark. | 12 | I encountered this really cool project, EuroEval, which has LLM benchmarks of many open-weights models in different European languages (🇩🇰 Danish, 🇳🇱 Dutch, 🇬🇧 English, 🇫🇴 Faroese, 🇫🇮 Finnish, 🇫🇷 French, 🇩🇪 German, 🇮🇸 Icelandic, 🇮🇹 Italian, 🇳🇴 Norwegian, 🇪🇸 Spanish, 🇸🇪 Swedish).
>EuroEval is ... | 2025-07-11T10:56:06 | https://euroeval.com/leaderboards/ | Balance- | euroeval.com | 1970-01-01T00:00:00 | 0 | {} | 1lx3u8s | false | null | t3_1lx3u8s | /r/LocalLLaMA/comments/1lx3u8s/euroeval_the_robust_european_language_model/ | false | false | default | 12 | null |
How do I force the LLM to respond shortly? | 3 | It understands it in the beginning, but as the conversation goes on, it turns into a paragraph-spewing machine.
The only way I can think of is to re-run responses through a second AI conversation and ask it to rewrite them briefly, then feed the result back into the conversation. | 2025-07-11T10:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lx3p4i/how_do_i_force_the_llm_to_respond_shortly/ | freecodeio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx3p4i | false | null | t3_1lx3p4i | /r/LocalLLaMA/comments/1lx3p4i/how_do_i_force_the_llm_to_respond_shortly/ | false | false | self | 3 | null |
Sensitivity Aware Mixed Precision Quantization | 7 | During my first months at Hugging Face, I worked on Hybrid Quantization, also known as Sensitivity-Aware Mixed Precision Quantization. Each layer is quantized based on its sensitivity score: robust layers receive more aggressive quantization, and sensitive layers are preserved at higher precision.
The key question is ... | 2025-07-11T10:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lx3ngj/sensitivity_aware_mixed_precision_quantization/ | Swimming-Heart-8667 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx3ngj | false | null | t3_1lx3ngj | /r/LocalLLaMA/comments/1lx3ngj/sensitivity_aware_mixed_precision_quantization/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'ZSg1l9Yd03KVonWynPruA_LkKrURCTR2Tg9rjTv5dJc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZSg1l9Yd03KVonWynPruA_LkKrURCTR2Tg9rjTv5dJc.png?width=108&crop=smart&auto=webp&s=6558e013b15e77a0b562e69edc7c0dcca57b878c', 'width': 108}, {'height': 116, 'url': 'h... |
A language model built for the public good | 18 | what do you think? | 2025-07-11T10:38:36 | https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html | Better-Armadillo1371 | ethz.ch | 1970-01-01T00:00:00 | 0 | {} | 1lx3jtc | false | null | t3_1lx3jtc | /r/LocalLLaMA/comments/1lx3jtc/a_language_model_built_for_the_public_good/ | false | false | default | 18 | {'enabled': False, 'images': [{'id': 'TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A.jpeg?width=108&crop=smart&auto=webp&s=00f03455ad8e9fc0d7ab20142af7d9f6c62b3273', 'width': 108}, {'height': 107, 'url': '... |
What do you think future AI agents will look like? | 0 | I think people are not yet able to conceive of the AI agents of the future. Many are just trying to connect an LLM to applications from a past era and make some small tasks work, but I don't think that is an agent in any sense. The LLM and the applications are still mostly separate.
I think the real agent will look something like claude code AI... | 2025-07-11T10:08:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lx32mx/what_do_you_think_future_ai_agents_will_look_like/ | Maleficent_Mess6445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx32mx | false | null | t3_1lx32mx | /r/LocalLLaMA/comments/1lx32mx/what_do_you_think_future_ai_agents_will_look_like/ | false | false | self | 0 | null |
Made a Mock Interview Agent That Talks, Listens, Searches - and Logs Everything | 2 | I recently built a voice-based AI interviewer that runs in real time, asks job-specific follow-up questions, and can even look things up mid-conversation. It uses LiveKit for audio, Gemini for speech and reasoning, and Maxim to log and evaluate everything the agent does.
What sets this apart from other voice agents is... | 2025-07-11T09:55:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2uwr/made_a_mock_interview_agent_that_talks_listens/ | dinkinflika0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2uwr | false | null | t3_1lx2uwr | /r/LocalLLaMA/comments/1lx2uwr/made_a_mock_interview_agent_that_talks_listens/ | false | false | self | 2 | null |
I want a gpu and my budget is $500 | 1 | [removed] | 2025-07-11T09:46:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2q8z/i_want_a_gpu_and_my_budget_is_500/ | Ok_Internet1963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2q8z | false | null | t3_1lx2q8z | /r/LocalLLaMA/comments/1lx2q8z/i_want_a_gpu_and_my_budget_is_500/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4SuI0m9nrGlx9AORr34Z0fQCD9KZRk9WuXojI3Vjbik', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/4SuI0m9nrGlx9AORr34Z0fQCD9KZRk9WuXojI3Vjbik.png?width=108&crop=smart&auto=webp&s=4cc0e9aefcbfc76ebe81c977bb44369461029130', 'width': 108}, {'height': 121, 'url': 'h... |
I built a tool to run Humanity's Last Exam on your favorite local models! | 25 | Hey there,
in the last few weeks, I've spent a lot of time learning about Local LLMs but always noticed one glaringly obvious, missing thing: *good* tools to run LLM benchmarks on (in terms of output quality, not talking about speed here!) in order to decide which LLM is best suited for a given task.
Thus, I've built... | 2025-07-11T09:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2j1l/i_built_a_tool_to_run_humanitys_last_exam_on_your/ | mags0ft | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2j1l | false | null | t3_1lx2j1l | /r/LocalLLaMA/comments/1lx2j1l/i_built_a_tool_to_run_humanitys_last_exam_on_your/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'o-H6rWtrxMkGu6ppy9Z76PzirPIQLGU4dUkkkXovQww', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/o-H6rWtrxMkGu6ppy9Z76PzirPIQLGU4dUkkkXovQww.png?width=108&crop=smart&auto=webp&s=84aa3c4f7f239451e4fb6ade8dbdfcfe99152d99', 'width': 108}, {'height': 108, 'url': 'h... | |
Uncensored LLM ranking for roleplay? | 130 | Every day, a bunch of models appear, making it difficult to choose which ones to use for uncensored role-playing. Previously, the Ayumi LLM Role Play & ERP Ranking data was somewhat of a guide, but now I can't find a list that is even close to being up to date. It's difficult to choose from among the many models with f... | 2025-07-11T09:31:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2hn2/uncensored_llm_ranking_for_roleplay/ | mikemend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2hn2 | false | null | t3_1lx2hn2 | /r/LocalLLaMA/comments/1lx2hn2/uncensored_llm_ranking_for_roleplay/ | false | false | nsfw | 130 | null |
I want a gpu and have a budget of $500. | 1 | [removed] | 2025-07-11T09:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2gj8/i_want_a_gpu_and_have_a_budget_of_500/ | Ok_Internet1963 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2gj8 | false | null | t3_1lx2gj8 | /r/LocalLLaMA/comments/1lx2gj8/i_want_a_gpu_and_have_a_budget_of_500/ | false | false | self | 1 | null |
Is a heavily quantised Q235b any better than Q32b? | 50 | I've come to the conclusion that Qwen's 235b at Q2K~, perhaps unsurprisingly, is not better than Qwen3 32b Q4KL but I still wonder about the Q3? Gemma2 27b Q3KS used to be awesome, for example. Perhaps Qwen's 235b at Q3 will be amazing? Amazing enough to warrant 10 t/s?
I'm in the process of getting a mish mash of RAM... | 2025-07-11T09:24:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lx2dw4/is_a_heavily_quantised_q235b_any_better_than_q32b/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx2dw4 | false | null | t3_1lx2dw4 | /r/LocalLLaMA/comments/1lx2dw4/is_a_heavily_quantised_q235b_any_better_than_q32b/ | false | false | self | 50 | null |
hey guys im working in comapany they gave me a task to download open souce ai image generation model and run in the local system but the problem im facing | 0 | So the problem is: I generated one image fine, but when I asked it to edit that image while keeping character consistency, that is where we are lagging. Can anybody please help me with this? | 2025-07-11T08:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lx20h2/hey_guys_im_working_in_comapany_they_gave_me_a/ | Select_Dream634 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx20h2 | false | null | t3_1lx20h2 | /r/LocalLLaMA/comments/1lx20h2/hey_guys_im_working_in_comapany_they_gave_me_a/ | false | false | self | 0 | null |
With a 1M context Gemini, does it still make sense to do embedding or use RAG for long texts? | 46 | I’m trying to build an AI application that transcribes long audio recordings (around hundreds of thousands of tokens) and allows interaction with an LLM. However, every answer I get from searches and inquiries tells me that I need to chunk and vectorize the long text.
But with LLMs like Gemini that support 1M-token co... | 2025-07-11T07:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lx10ja/with_a_1m_context_gemini_does_it_still_make_sense/ | GyozaHoop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx10ja | false | null | t3_1lx10ja | /r/LocalLLaMA/comments/1lx10ja/with_a_1m_context_gemini_does_it_still_make_sense/ | false | false | self | 46 | null |
An idea for how to avoid hallucinations and get good answers in ambiguous but technical/important topics (like law, medicine) - your thoughts? | 1 | I have been involved a little bit with AI at work. I work in an environment where it's very important that the output is correct, so a hallucination is a critical error. It's much preferable to have no answer than to hallucinate.
Still, there are problems with hallucinations.
What seems to work well in testing is ... | 2025-07-11T07:45:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lx0xd2/an_idea_for_how_to_avoid_hallucinations_and_get/ | Intrepid_Bobcat_2931 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx0xd2 | false | null | t3_1lx0xd2 | /r/LocalLLaMA/comments/1lx0xd2/an_idea_for_how_to_avoid_hallucinations_and_get/ | false | false | self | 1 | null |
1B/3B uncensored gifs models | 1 | [removed] | 2025-07-11T07:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lx0sfs/1b3b_uncensored_gifs_models/ | JensenAyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx0sfs | false | null | t3_1lx0sfs | /r/LocalLLaMA/comments/1lx0sfs/1b3b_uncensored_gifs_models/ | false | false | self | 1 | null |
Why so much ram? | 0 | Why do people put so much system RAM into a server with multiple GPUs? Even putting one layer into system RAM slows it down to a crawl. I see people with 4 and 8 GPUs loading up 256-512G+ of RAM, and I never understood why. | 2025-07-11T07:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lx0ms3/why_so_much_ram/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx0ms3 | false | null | t3_1lx0ms3 | /r/LocalLLaMA/comments/1lx0ms3/why_so_much_ram/ | false | false | self | 0 | null |
Comet (AI first) browser from Perplexity needs better 403 page | 0 | Tried to checkout the website for Ai-first Comet browser from Perplexity.
Was shown this page.
I understand it’s only rolled out to their $200 paying Pro customers.
But a better 403 page would be nice.
Just a heads up to the Perplexity team.
Also waiting for preview access to this browser. 😉 | 2025-07-11T07:09:57 | NoobMLDude | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lx0e8i | false | null | t3_1lx0e8i | /r/LocalLLaMA/comments/1lx0e8i/comet_ai_first_browser_from_perplexity_needs/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'p-AUWyAojHyWJqEHf2ZDNMt2MlN7Au9G2_ParjIhGEE', 'resolutions': [{'height': 93, 'url': 'https://preview.redd.it/76b20ni437cf1.jpeg?width=108&crop=smart&auto=webp&s=7b0f54d5e4850a778c1d877746eba43bc6a714a7', 'width': 108}, {'height': 187, 'url': 'https://preview.redd.it/76b20ni437cf1.jp... | ||
Manage multiple MCP servers for Ollama + OpenWebUI as Docker service | 1 | I'm running Ollama & OpenWebUI on a headless Linux server, as Docker (with Compose) containers, with an NVIDIA GPU. This setup works great, but I want to add MCP servers to my environment, to improve the results from Ollama invocations.
The [documentation for OpenWebUI](https://docs.openwebui.com/openapi-servers/mcp/)... | 2025-07-11T07:04:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lx0b5w/manage_multiple_mcp_servers_for_ollama_openwebui/ | trevorstr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lx0b5w | false | null | t3_1lx0b5w | /r/LocalLLaMA/comments/1lx0b5w/manage_multiple_mcp_servers_for_ollama_openwebui/ | false | false | self | 1 | null |
Granite-speech-3.3-8b | 85 | > Granite-speech-3.3-8b is a compact and efficient speech-language model, specifically designed for automatic speech recognition (ASR) and automatic speech translation (AST). Granite-speech-3.3-8b uses a two-pass design, unlike integrated models that combine speech and language into a single pass. Initial calls to gran... | 2025-07-11T06:33:44 | https://huggingface.co/ibm-granite/granite-speech-3.3-8b | Balance- | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lwztnp | false | null | t3_1lwztnp | /r/LocalLLaMA/comments/1lwztnp/granitespeech338b/ | false | false | 85 | {'enabled': False, 'images': [{'id': 'qCjJtYOA1xCC4NeAQLlvmQH4l0rYxhSxDnkaBD28QmM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/qCjJtYOA1xCC4NeAQLlvmQH4l0rYxhSxDnkaBD28QmM.png?width=108&crop=smart&auto=webp&s=b597cda2512d75e467a9d18009ec6b56f088c226', 'width': 108}, {'height': 116, 'url': 'h... | |
Longform text has become iconic — almost like an emoji. | 0 | I've noticed a fundamental shift in how I engage with longform text — both in how I use it and how I perceive its purpose.
Longform content used to be something you navigated linearly, even when skimming. It was rich with meaning and nuance — each piece a territory to be explored and inhabited. Reading was a slow burn... | 2025-07-11T05:03:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lwycam/longform_text_has_become_iconic_almost_like_an/ | monarchwadia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwycam | false | null | t3_1lwycam | /r/LocalLLaMA/comments/1lwycam/longform_text_has_become_iconic_almost_like_an/ | false | false | self | 0 | null |
LLMs are telepathy. We just don't know it yet. | 0 | I've noticed a fundamental shift in how I engage with longform text — both in how I use it and how I perceive its purpose.
Longform content used to be something you navigated linearly, even when skimming. It was rich with meaning and nuance — each piece a territory to be explored and inhabited. Reading was a slow burn... | 2025-07-11T04:47:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lwy1uc/llms_are_telepathy_we_just_dont_know_it_yet/ | monarchwadia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwy1uc | false | null | t3_1lwy1uc | /r/LocalLLaMA/comments/1lwy1uc/llms_are_telepathy_we_just_dont_know_it_yet/ | false | false | self | 0 | null |
Ollama calling tools | 0 | I'm using a local AI model qwen2.5:14b-instruct running via Ollama, but it doesn't automatically call tools following my prompt instructions like OpenAI models do.
How should I design my prompt to ensure the model understands when and how to trigger tool use properly?
Also im currently using N8N and workflow look like... | 2025-07-11T04:30:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lwxrai/ollama_calling_tools/ | Practical-Corgi-9906 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwxrai | false | null | t3_1lwxrai | /r/LocalLLaMA/comments/1lwxrai/ollama_calling_tools/ | false | false | self | 0 | null |
What other models would you like to see on Design Arena? | 25 | We just hit [15K users](https://www.designarena.ai/)! For context of course, see [this post](https://www.reddit.com/r/LocalLLaMA/comments/1lu7lsi/uiux_benchmark_update_and_response_more_models/). Since then, we have added several models: Grok 4, Devstral Small, Devstral Medium, Gemini 2.5 Flash, and Qwen-235B-A22B.
We now th... | 2025-07-11T04:30:26 | https://www.reddit.com/gallery/1lwxr2l | adviceguru25 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lwxr2l | false | null | t3_1lwxr2l | /r/LocalLLaMA/comments/1lwxr2l/what_other_models_would_you_like_to_see_on_design/ | false | false | 25 | null | |
LM Studio model recommendation for writing, emails, and general summarizations | 2 | Hey folks, I am quite new to the local model space and having a hard time deciding which models to invest further in (by giving them more cores/GPU focus and adding docs for RAG).
Main goals:
\- Completely offline models for privacy / security
\- High token count and focused on best English writing / summarizatio... | 2025-07-11T04:25:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lwxnf0/lm_studio_model_recommendation_for_writing_emails/ | vdog313 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwxnf0 | false | null | t3_1lwxnf0 | /r/LocalLLaMA/comments/1lwxnf0/lm_studio_model_recommendation_for_writing_emails/ | false | false | self | 2 | null |
What is up AI Bros I am stupid and maybe your AI can help me be less of these stupid | 0 | Here's a thing that I am copying pasting from another post I will add a too long didn't read to the end of this post if it was too long and you didn't read didn't want to read this f****** post.
I NEED I need to use my text to text model that is very specific for my application purpose. I would like to add a microphon... | 2025-07-11T04:14:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lwxglp/what_is_up_ai_bros_i_am_stupid_and_maybe_your_ai/ | Parking_Razzmatazz89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwxglp | false | null | t3_1lwxglp | /r/LocalLLaMA/comments/1lwxglp/what_is_up_ai_bros_i_am_stupid_and_maybe_your_ai/ | false | false | self | 0 | null |
Are there any local llms that can answer this type of peculiarly British question? | 1 | If you get a super off peak day return from Cardiff to London, what is the first evening train you can get back? This is for travel today.
Free chatgpt gets the answer wrong. I think the answer is 7pm. | 2025-07-11T04:10:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lwxdqj/are_there_any_local_llms_that_can_answer_this/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwxdqj | false | null | t3_1lwxdqj | /r/LocalLLaMA/comments/1lwxdqj/are_there_any_local_llms_that_can_answer_this/ | false | false | self | 1 | null |
Is LLM first RAG better than traditional RAG? | 0 | I see that an LLM is a far superior technology to a vector database, and an LLM is trained on natural language processing. So is it not always better to send the query to the LLM first, which can understand the user's intent better than anything? | 2025-07-11T04:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lwx77q/is_llm_first_rag_better_than_traditional_rag/ | Maleficent_Mess6445 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwx77q | false | null | t3_1lwx77q | /r/LocalLLaMA/comments/1lwx77q/is_llm_first_rag_better_than_traditional_rag/ | false | false | self | 0 | null |
2-bit Quant: CCQ, Convolutional Code for Extreme Low-bit Quantization in LLMs | 86 | The creators of Ernie just published a new quantization algorithm that compresses Ernie-300B to 85 GB and DeepSeek-V3 to 184 GB, with minimal (<2%) performance degradation on benchmarks. Paper here: [https://arxiv.org/pdf/2507.07145](https://arxiv.org/pdf/2507.07145)
| 2025-07-11T03:57:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lwx50s/2bit_quant_ccq_convolutional_code_for_extreme/ | ortegaalfredo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwx50s | false | null | t3_1lwx50s | /r/LocalLLaMA/comments/1lwx50s/2bit_quant_ccq_convolutional_code_for_extreme/ | false | false | self | 86 | null |
Help me find the best Android app for running LLMs locally | 3 | I'm looking for both a good app and an availability of a good and capable LLM.
Thanks! | 2025-07-11T03:42:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lwwuwq/help_me_find_the_best_android_app_for_running/ | DanielD2724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwwuwq | false | null | t3_1lwwuwq | /r/LocalLLaMA/comments/1lwwuwq/help_me_find_the_best_android_app_for_running/ | false | false | self | 3 | null |
Tired of writing /no_think every time you prompt? | 2 | Just add `/no_think` in the system prompt and the model will mostly stop reasoning
You can also add your own conditions like `when i write /nt it means /no_think` or `always /no_think except if i write /think`. If the model is smart enough, it will mostly follow your orders
Tested on qwen3 | 2025-07-11T03:22:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lwwh8s/tired_of_writing_no_think_every_time_you_prompt/ | Iq1pl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwwh8s | false | null | t3_1lwwh8s | /r/LocalLLaMA/comments/1lwwh8s/tired_of_writing_no_think_every_time_you_prompt/ | false | false | self | 2 | null |
Cactus – Ollama for Smartphones | 1 | [removed] | 2025-07-11T03:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lwwaam/cactus_ollama_for_smartphones/ | ies7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwwaam | false | null | t3_1lwwaam | /r/LocalLLaMA/comments/1lwwaam/cactus_ollama_for_smartphones/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yOxqSB5IZjtpIWt1BhwsyaYOwNoDVfxEXt_C20iyTLw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yOxqSB5IZjtpIWt1BhwsyaYOwNoDVfxEXt_C20iyTLw.png?width=108&crop=smart&auto=webp&s=0db8e2b19d55e3c5f1352b27cc7df59eaba4c445', 'width': 108}, {'height': 108, 'url': 'h... |
Liquid AI open-sources a new generation of edge LLMs! | 1 | [removed] | 2025-07-11T03:07:41 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lww6qz | false | null | t3_1lww6qz | /r/LocalLLaMA/comments/1lww6qz/liquid_ai_opensources_a_new_generation_of_edge/ | false | false | default | 1 | null | ||
Open Source Claude Coder alternative? | 13 | Hi all, I've been using Claude Coder at work for a while now and LOVE it. Are there any high-quality alternatives where you can use local models / OpenAI endpoint(s)? | 2025-07-11T03:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lww2w9/open_source_claude_coder_alternative/ | maxwell321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lww2w9 | false | null | t3_1lww2w9 | /r/LocalLLaMA/comments/1lww2w9/open_source_claude_coder_alternative/ | false | false | self | 13 | null |
Anyone using Block's goose? | 0 | I have heard about it, but don't really see folks talking about using it nor does it have the same excitement as the other agentic toolkits. As a matter of fact, anytime I hear about it, it's from someone working for Block who I'm following on social media.
For anyone using it, if you can compare it to other tools, ... | 2025-07-11T03:01:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lww2ld/anyone_using_blocks_goose/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lww2ld | false | null | t3_1lww2ld | /r/LocalLLaMA/comments/1lww2ld/anyone_using_blocks_goose/ | false | false | self | 0 | null |
Hunyuan responding with <answer> </answer> tag on LMstudio | 1 | Is anyone facing this issue? Do you know how to fix it? I am using unsloth q5 UD gguf
Thanks! | 2025-07-11T02:51:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lwvuuv/hunyuan_responding_with_answer_answer_tag_on/ | Kuane | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwvuuv | false | null | t3_1lwvuuv | /r/LocalLLaMA/comments/1lwvuuv/hunyuan_responding_with_answer_answer_tag_on/ | false | false | self | 1 | null |
Local AI server with Ollama and Tailscale integration looking for feedback | 0 | Hey Everyone, I’ve been working on a project called Fissure. It is a personal AI server that runs on your own machine. The goal is to make it pretty simple to run a local LLM, access it from your phone or laptop, and keep everything private.
Right now I’m focused on the desktop app, which runs your chosen model t... | 2025-07-11T02:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lwvrev/local_ai_server_with_ollama_and_tailscale/ | Remarkable-Stay-2193 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwvrev | false | null | t3_1lwvrev | /r/LocalLLaMA/comments/1lwvrev/local_ai_server_with_ollama_and_tailscale/ | false | false | self | 0 | null |
Grok 4 seems to consult Elon Musk to answer controversial questions | TechCrunch | 0 | 2025-07-11T02:26:25 | https://techcrunch.com/2025/07/10/grok-4-seems-to-consult-elon-musk-to-answer-controversial-questions/ | srwaxalot | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1lwvci3 | false | null | t3_1lwvci3 | /r/LocalLLaMA/comments/1lwvci3/grok_4_seems_to_consult_elon_musk_to_answer/ | false | false | default | 0 | null | |
Terraformed Binary Classifier | 1 | This is IAC for a binary classifier that could help folks get started with AI engineering. It’s a variation of the classic AWS example binary classifier but with an API endpoint to use for inferring and a scheduler for turning the endpoint off when not in use because that’s expensive.
https://github.com/jenastar/comp... | 2025-07-11T02:23:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lwva7f/terraformed_binary_classifier/ | jenastar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwva7f | false | null | t3_1lwva7f | /r/LocalLLaMA/comments/1lwva7f/terraformed_binary_classifier/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'oOX91S0GtJ5C3DDt-OLrOB9Trr9AiY8Gs-ers4ZsuWs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/oOX91S0GtJ5C3DDt-OLrOB9Trr9AiY8Gs-ers4ZsuWs.png?width=108&crop=smart&auto=webp&s=77f88e42466d472f95a151834ef200b29d7df819', 'width': 108}, {'height': 108, 'url': 'h... |
[OC] Comprehensive AI Data Quality Metrics Documentation - 50+ Evaluation Metrics with Academic Sources | 7 | We've just released what might be the most comprehensive documentation of AI data quality evaluation metrics available. This covers everything from pre-training data assessment to multimodal evaluation.
**What's included:**
- 50+ evaluation metrics across text, image, and multimodal data
- Academic... | 2025-07-11T02:08:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lwuzjo/oc_comprehensive_ai_data_quality_metrics/ | chupei0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwuzjo | false | null | t3_1lwuzjo | /r/LocalLLaMA/comments/1lwuzjo/oc_comprehensive_ai_data_quality_metrics/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'c340hQeOye9TxtgDQ0X1CY8WX4WAKvr0pumN26Zmq9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c340hQeOye9TxtgDQ0X1CY8WX4WAKvr0pumN26Zmq9o.png?width=108&crop=smart&auto=webp&s=761a7fb258827c359161c668421aeb498ed39406', 'width': 108}, {'height': 108, 'url': 'h... |
H | 0 | 2025-07-11T01:31:59 | https://www.reddit.com/gallery/1lwu80y | IntelligentNail7545 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lwu80y | false | null | t3_1lwu80y | /r/LocalLLaMA/comments/1lwu80y/h/ | false | false | 0 | null | ||
What is your wishlist for OpenAI's upcoming open source model? | 0 | The most recent news points to next [Thursday](https://www.reddit.com/r/LocalLLaMA/comments/1lvr3ym/openais_open_source_llm_is_a_reasoning_model/) as the launch date.
I am hoping for something under 100B parameters, preferably 70B so it can run on a 48gb vram system. Over the last year the bulk of advancement in the ope... | 2025-07-11T00:46:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lwtaor/what_is_your_wishlist_for_openais_upcoming_open/ | triynizzles1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwtaor | false | null | t3_1lwtaor | /r/LocalLLaMA/comments/1lwtaor/what_is_your_wishlist_for_openais_upcoming_open/ | false | false | self | 0 | null |
AMD's Pull Request for llama.cpp: Enhancing GPU Support | 361 | Hey everyone, good news for AMD GPU users! It seems AMD is getting serious about boosting support for their graphics cards in llama.cpp
Word is, someone from AMD dropped a pull request to tweak the code, aimed at adapting the project for use with AMD graphics cards.
Discussions with the project leaders are planned ... | 2025-07-11T00:45:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lwta86/amds_pull_request_for_llamacpp_enhancing_gpu/ | Rrraptr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwta86 | false | null | t3_1lwta86 | /r/LocalLLaMA/comments/1lwta86/amds_pull_request_for_llamacpp_enhancing_gpu/ | false | false | self | 361 | {'enabled': False, 'images': [{'id': 'Nv6JEmpzvB3dldFAieW1ex5iixHjB2uRtht6aYJ1wHE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Nv6JEmpzvB3dldFAieW1ex5iixHjB2uRtht6aYJ1wHE.png?width=108&crop=smart&auto=webp&s=4049b21c0ac9b7c3089ec2e3df2e59d8659989da', 'width': 108}, {'height': 108, 'url': 'h... |
Support for the upcoming IBM Granite 4.0 has been merged into llama.cpp | 153 | Whereas prior generations of Granite LLMs utilized a conventional transformer architecture, all models in the Granite 4.0 family utilize a new **hybrid Mamba-2/Transformer architecture,** marrying the speed and efficiency of Mamba with the precision of transformer-based self-attention.
Granite 4.0 Tiny-Preview, speci... | 2025-07-11T00:20:50 | https://github.com/ggml-org/llama.cpp/pull/13550 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lwsrx7 | false | null | t3_1lwsrx7 | /r/LocalLLaMA/comments/1lwsrx7/support_for_the_upcoming_ibm_granite_40_has_been/ | false | false | default | 153 | {'enabled': False, 'images': [{'id': 'FNrRGnLNvs7SoS-PWTZeuDoBIeMJrIjippY_Sjx3gVs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FNrRGnLNvs7SoS-PWTZeuDoBIeMJrIjippY_Sjx3gVs.png?width=108&crop=smart&auto=webp&s=4d9dfc6caf6473cddc930c8672b2473ef6a39f9d', 'width': 108}, {'height': 108, 'url': 'h... |
I get roughly 10x faster speeds with Mozilla llamafile than ollama or llama.cpp using the same model on the same hardware. Help me understand why. | 7 | **Context**:
I'm trying to set up a local LLM on my home server. I'd prefer to use ollama+open-webui or llamacpp+open-webui to run Qwen3-30B-A3B (or whatever the best model my hardware can support at usable speeds). But I can achieve at best 2tk/s and really really bad prompt processing using either llamacpp or ollama ... | 2025-07-10T23:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lwrkma/i_get_roughly_10x_faster_speeds_with_mozilla/ | redoubt515 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwrkma | false | null | t3_1lwrkma | /r/LocalLLaMA/comments/1lwrkma/i_get_roughly_10x_faster_speeds_with_mozilla/ | false | false | self | 7 | null |
Best large open-source LLM for health/medical data analytics (RTX 6000 Pro, $10k budget) | 14 | Hey all,
We’re a hospital building an on-prem system for health and medical data analytics using LLMs. Our setup includes an RTX 6000 Pro and a 5090, and we’re working with a ~$19k budget.
We’re looking to:
• Run a large open-source LLM locally
• Do fine-tuning (LoRA or full) on structured clinical data and unstruct... | 2025-07-10T23:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lwrd38/best_large_opensource_llm_for_healthmedical_data/ | LeastExperience1579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwrd38 | false | null | t3_1lwrd38 | /r/LocalLLaMA/comments/1lwrd38/best_large_opensource_llm_for_healthmedical_data/ | false | false | self | 14 | null |
How are reasoning models built? Are they fine tuned from base non reasoning models? | 0 | Does it involve first building a model that generates a dataset with <think> tokens.
Then generate a reward model.
Finally fine tune model with RL and reward model? | 2025-07-10T23:13:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lwrad1/how_are_reasonable_models_built_are_they_fine/ | rockybaby2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwrad1 | false | null | t3_1lwrad1 | /r/LocalLLaMA/comments/1lwrad1/how_are_reasonable_models_built_are_they_fine/ | false | false | self | 0 | null |
How to stream only final LLM response while tool calling | 0 | Hi everyone, i am implementing tool calling with an LLM in langgraph and i want to stream only the final llm response (not the intermediate tool call messages). I am curious if anyone got any idea with the lowest latency possible? | 2025-07-10T23:10:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lwr8eh/how_to_stream_only_final_llm_response_while_tool/ | Dry_Yam_322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lwr8eh | false | null | t3_1lwr8eh | /r/LocalLLaMA/comments/1lwr8eh/how_to_stream_only_final_llm_response_while_tool/ | false | false | self | 0 | null |
People Are Using AI Chatbots to Guide Their Psychedelic Trips | 0 | Not your everyday agent article topic 🙃
Can anyone relate to this article? | 2025-07-10T22:52:19 | https://www.wired.com/story/people-are-using-ai-chatbots-to-guide-their-psychedelic-trips/ | nate4t | wired.com | 1970-01-01T00:00:00 | 0 | {} | 1lwqsso | false | null | t3_1lwqsso | /r/LocalLLaMA/comments/1lwqsso/people_are_using_ai_chatbots_to_guide_their/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'xRtKaJSCq_7Addv1o3JLYgIEZWdbA8k9CyZVMOkjZnU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xRtKaJSCq_7Addv1o3JLYgIEZWdbA8k9CyZVMOkjZnU.jpeg?width=108&crop=smart&auto=webp&s=192803b0e7925fc9d1c3e639dec5d5c90bba5b5a', 'width': 108}, {'height': 113, 'url': '... |