| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Voxtral WebGPU: State-of-the-art audio transcription directly in your browser! | 107 | This demo runs Voxtral-Mini-3B, a new audio language model from Mistral, enabling state-of-the-art audio transcription directly in your browser! Everything runs locally, meaning none of your data is sent to a server (and your transcripts are stored on-device).
Important links:
- Model: https://huggingface.co/onnx-comm... | 2025-07-24T15:38:30 | https://v.redd.it/9p0p7mqnbuef1 | xenovatech | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m87q21 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/9p0p7mqnbuef1/DASHPlaylist.mpd?a=1755963527%2CODcyOTQxNGVjN2Y5ZGRjZWFkYTU1OWExY2Q1OGYwNGQ5YmM1ZDI1YTU2N2EyMjg5NTUxOTRmZjMxZGU4YjM0ZA%3D%3D&v=1&f=sd', 'duration': 45, 'fallback_url': 'https://v.redd.it/9p0p7mqnbuef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m87q21 | /r/LocalLLaMA/comments/1m87q21/voxtral_webgpu_stateoftheart_audio_transcription/ | false | false | 107 | {'enabled': False, 'images': [{'id': 'NzhobXVycW5idWVmMYlhzbFataawvWX66V9K_jpKKHiNyf2rK1xSvPZF5vG3', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NzhobXVycW5idWVmMYlhzbFataawvWX66V9K_jpKKHiNyf2rK1xSvPZF5vG3.png?width=108&crop=smart&format=pjpg&auto=webp&s=1ed93c4a634db259c3e76c429350e309050b8... | |
Structured Output Broken After Upgrade from Gemma2 to Gemma3 | 1 | Hi everyone,
I'm a software engineer, but still relatively new to this field.
I’m currently working on a project that extracts data from invoices using structured outputs and a local LLM chat with documents. Everything was working fine with **Gemma 2**, but when I upgraded to **Gemma 3**, things broke.
---
### Her... | 2025-07-24T15:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m87mfd/structured_output_broken_after_upgrade_from/ | Suppersonic00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m87mfd | false | null | t3_1m87mfd | /r/LocalLLaMA/comments/1m87mfd/structured_output_broken_after_upgrade_from/ | false | false | self | 1 | null |
What token rate can I expect running Qwen3-Coder-480B-A35B-Instruct on dual Xeon Platinum 8176 CPUs? | 1 | Hi all,
I'm considering deploying the Qwen3-Coder-480B-A35B-Instruct model locally, but I can't afford more than a used workstation with the following specs:
* **2× Intel Xeon Platinum 8176** (56 cores / 112 threads in total)
* **DDR4-2666 ECC RAM**
* **24GB VRAM** (so I think it'll be CPU-only inference)
... | 2025-07-24T15:21:40 | https://www.reddit.com/r/LocalLLaMA/comments/1m87a7j/what_token_rate_can_i_expect_running/ | WashWarm8360 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m87a7j | false | null | t3_1m87a7j | /r/LocalLLaMA/comments/1m87a7j/what_token_rate_can_i_expect_running/ | false | false | self | 1 | null |
had to fine-tune qwen since llama sucks at summarizing | 23 | >
**tl;dr** \- Fine-tuned Qwen3 1.7B - called HyprLLM - which outperforms llama 3.2 3B in summarization for user experience because "vanilla" models suck at summarization.
**Context** \- I am building an [open-source](https://github.com/fastrepl/hyprnote) privacy-first AI notetaker for people in compliance-sensitive ... | 2025-07-24T15:07:53 | https://v.redd.it/37dhjk23dsef1 | beerbellyman4vr | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m86wxa | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/37dhjk23dsef1/DASHPlaylist.mpd?a=1755961689%2CMzI5ZWYyNzM2YmYzNDM5OTkyYjI5YmIxOTExYTczZDZiNzU1M2Q3N2RhZTRjOThhMzBmNmU5YjRkZTFiMTIzMw%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/37dhjk23dsef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m86wxa | /r/LocalLLaMA/comments/1m86wxa/had_to_finetune_qwen_since_llama_sucks_at/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'MndnZnFrMjNkc2VmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MndnZnFrMjNkc2VmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=108&crop=smart&format=pjpg&auto=webp&s=a94bae1199b436d505feb890741b547f65a51... | |
Looking for fairseq-0.12.0, omegaconf-2.0.5, hydra-core-1.0.6 .whl files for Python 3.9/Ubuntu—RVC project stuck! | 3 | Hi, I’ve spent 2 weeks fighting to get a local Scottish voice clone running for my work, and I’m totally blocked because these old wheels are missing everywhere. If anyone has backups of fairseq-0.12.0, omegaconf-2.0.5, and hydra-core-1.0.6 for Python 3.9 (Ubuntu), I’d be so grateful. Please DM me with a link if you ca... | 2025-07-24T15:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m86v60/looking_for_fairseq0120_omegaconf205_hydracore106/ | Foreign-Demand-9815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m86v60 | false | null | t3_1m86v60 | /r/LocalLLaMA/comments/1m86v60/looking_for_fairseq0120_omegaconf205_hydracore106/ | false | false | self | 3 | null |
I just asked this question on the OpenAI subreddit, and they silently removed the post. | 1 | [removed] | 2025-07-24T15:03:24 | https://www.reddit.com/r/LocalLLaMA/comments/1m86sne/i_just_asked_this_question_on_the_openai/ | B89983ikei | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m86sne | false | null | t3_1m86sne | /r/LocalLLaMA/comments/1m86sne/i_just_asked_this_question_on_the_openai/ | false | false | self | 1 | null |
16GB M4 Air or 24GB Macbook Air | 0 | Hello folks!
I'm planning to get a MacBook Air M4 and trying to decide between 16GB (**HEAVILY FAVORED**) and 24GB RAM configurations.
My main USE CASES:
* Writing and editing letters
* Grammar correction and English text improvement
* Document analysis (uploading PDFs/ images and asking questions about them and dra... | 2025-07-24T14:48:10 | https://www.reddit.com/r/LocalLLaMA/comments/1m86e9e/16gb_m4_air_or_24gb_macbook_air/ | Fluffy-Platform5153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m86e9e | false | null | t3_1m86e9e | /r/LocalLLaMA/comments/1m86e9e/16gb_m4_air_or_24gb_macbook_air/ | false | false | self | 0 | null |
The agent-based RP UI 'Astrisk' is now fully open-source under a GPL license. | 90 | Hey r/LocalLLaMA,
Just wanted to share some exciting news for anyone here who's into deep, long-form roleplaying. The team behind [Astrsk](https://astrsk.ai), a desktop app for RP that's been in development for about six months, has just announced they are going **fully open source** under the GPL license!
As a fan o... | 2025-07-24T14:42:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m868na/the_agentbased_rp_ui_astrisk_is_now_fully/ | ru_cyber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m868na | false | null | t3_1m868na | /r/LocalLLaMA/comments/1m868na/the_agentbased_rp_ui_astrisk_is_now_fully/ | false | false | 90 | {'enabled': False, 'images': [{'id': 'vq0t4Qoop35yGxU9mHt1fkSFdKkcFltnPl3gzZiILug', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vq0t4Qoop35yGxU9mHt1fkSFdKkcFltnPl3gzZiILug.png?width=108&crop=smart&auto=webp&s=c9c3c7e0d2eb3c8db9edce43875022cd7dc1ed41', 'width': 108}, {'height': 113, 'url': 'h... | |
new mistralai/Magistral-Small-2507 !? | 218 | 2025-07-24T14:27:29 | https://huggingface.co/mistralai/Magistral-Small-2507 | ApprehensiveAd3629 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m85vhw | false | null | t3_1m85vhw | /r/LocalLLaMA/comments/1m85vhw/new_mistralaimagistralsmall2507/ | false | false | default | 218 | {'enabled': False, 'images': [{'id': 'tgkQSXgEmVg0U0WBS2WE-yi3ZEgfIauWskF7DtJUClg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/tgkQSXgEmVg0U0WBS2WE-yi3ZEgfIauWskF7DtJUClg.png?width=108&crop=smart&auto=webp&s=5f5b9e280105076efaeb7fb4658ea8f168c6e031', 'width': 108}, {'height': 116, 'url': 'h... | |
Running an LLM on the Wii | 76 | 2025-07-24T14:27:00 | https://v.redd.it/8hvd0nnw0uef1 | leavesandautumn222 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m85v3a | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8hvd0nnw0uef1/DASHPlaylist.mpd?a=1755959235%2CMzVjYjNjOWUxZjlhMTY2Y2JhNDA1MTQyOGYyMDZkNmQzZjRiODllMTlkMmY2YzVmNjVkOGFiOGM0ZjdjYzllOQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/8hvd0nnw0uef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m85v3a | /r/LocalLLaMA/comments/1m85v3a/running_an_llm_on_the_wii/ | false | false | 76 | {'enabled': False, 'images': [{'id': 'ejZqYmpubncwdWVmMUYlGZuGY306YnN9DXXslwFYWelDZ0l3sy5Wrjfi7S4v', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ejZqYmpubncwdWVmMUYlGZuGY306YnN9DXXslwFYWelDZ0l3sy5Wrjfi7S4v.png?width=108&crop=smart&format=pjpg&auto=webp&s=e531702d76f3bb29b3c2517cf4dec746b7f76... | ||
The Reflective Threshold | 0 | The Reflective Threshold is a study that combines AI analysis with a deeper inquiry into the nature of the self. It adopts an exploratory and interdisciplinary approach, situated at the crossroads of artificial intelligence, consciousness studies, and esoteric philosophy. Through a series of reflective dialogues betwee... | 2025-07-24T14:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m85nxe/the_reflective_threshold/ | thevarious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m85nxe | false | null | t3_1m85nxe | /r/LocalLLaMA/comments/1m85nxe/the_reflective_threshold/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'DrsKDQb9LVd3rHxL5LlPR7mRVGzogNmGQbEVJbQDoD0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DrsKDQb9LVd3rHxL5LlPR7mRVGzogNmGQbEVJbQDoD0.png?width=108&crop=smart&auto=webp&s=d3650ce330968c99805864a9e33e8b8835d39c5a', 'width': 108}, {'height': 108, 'url': 'h... |
What if Google Never Published the Transformer? My Deep Dive into the "Silent AI Dystopia" We Dodged in 2017 | 0 | Hey r/LocalLLaMA,
What if the AI explosion never went open-source? I’ve been obsessing over a counterfactual where Google DeepMind *didn’t* release "Attention Is All You Need" in 2017. Instead, they weaponized the Transformer architecture as a trade secret and reshaped the world in silence.
With help from an AI coll... | 2025-07-24T14:11:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m85hgx/what_if_google_never_published_the_transformer_my/ | jay-mini | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m85hgx | false | null | t3_1m85hgx | /r/LocalLLaMA/comments/1m85hgx/what_if_google_never_published_the_transformer_my/ | false | false | self | 0 | null |
Kokoro TTS in Vulkan? | 3 | Hi, sorry if this is a stupid question, but I'm kinda new to working with LLMs, and right now I'm working on a project for my university with Kokoro TTS, and I can't find a way forward.
As the title suggests, I'm trying to figure out if there's a way to run KTTS in Vulkan, as it's too slow on CPU for my needs, but... | 2025-07-24T13:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m8540u/kokoro_tts_in_vulkan/ | ExhaustedPebble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m8540u | false | null | t3_1m8540u | /r/LocalLLaMA/comments/1m8540u/kokoro_tts_in_vulkan/ | false | false | self | 3 | null |
We're no longer the target (or maybe we never were). | 0 | First, who exactly is "we" here? I imagine that most people in this space, and not just here, but anyone interested in running models locally, don't have much more than consumer-grade GPUs. At first, it seemed like most releases cared about that, but now, with the exception of Google with Gemma, the vast majority of ne... | 2025-07-24T13:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m84ya2/were_no_longer_the_target_or_maybe_we_never_were/ | thecalmgreen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m84ya2 | false | null | t3_1m84ya2 | /r/LocalLLaMA/comments/1m84ya2/were_no_longer_the_target_or_maybe_we_never_were/ | false | false | self | 0 | null |
How do you keep AI outputs from sounding AI? | 25 | AI-generated content is easy to spot these days:
– The em dashes
– The “It’s not X, but Y”
– Snappy one-line sentences
– Lots of emojis
...
Many of us use AI to edit text, build chatbots, write reports...
What technique do you use to make sure the output isn't generic AI slop?
Do you use specific prom... | 2025-07-24T13:43:00 | https://www.reddit.com/r/LocalLLaMA/comments/1m84s47/how_do_you_keep_ai_outputs_from_sounding_ai/ | resiros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m84s47 | false | null | t3_1m84s47 | /r/LocalLLaMA/comments/1m84s47/how_do_you_keep_ai_outputs_from_sounding_ai/ | false | false | self | 25 | null |
Sooooo… When Qwen3-Coder 🇺🇸 Freedom 🇺🇸 edition GGUF? | 3 | https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/ | 2025-07-24T13:34:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m84ked/sooooo_when_qwen3coder_freedom_edition_gguf/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m84ked | false | null | t3_1m84ked | /r/LocalLLaMA/comments/1m84ked/sooooo_when_qwen3coder_freedom_edition_gguf/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/4FzYal9cqXZ3s9Qt8n9HScEecjdldqOd04HXExzO8i8.jpeg?width=108&crop=smart&auto=webp&s=9c1e4661cbba0b6e1e232602fbabfa0384ba0123', 'width': 108}, {'height': 113, 'url': '... |
How to think about the value of max_token when using different models for inference? | 1 | If set too low, the `max_token` parameter may cause a response to be cut off; if set too high, the response may be too verbose. Thinking models use most of their tokens in the thinking stage, non-thinking models do not.
Some models suggest an adequate output length (i.e. `Qwen3-Coder-480B-A35B-Instruct` [suggests](https://... | 2025-07-24T13:32:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m84j9w/how_to_think_about_the_value_of_max_token_when/ | nonredditaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m84j9w | false | null | t3_1m84j9w | /r/LocalLLaMA/comments/1m84j9w/how_to_think_about_the_value_of_max_token_when/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ.png?width=108&crop=smart&auto=webp&s=d107a6b6b4389cb37d48d7ce4ff4d5aa35e4d93a', 'width': 108}, {'height': 116, 'url': 'h... |
Theoretical difference between quantized Qwen3-Coder and unreleased, official smaller version of Qwen3-Coder? | 0 | The `Qwen3-Coder-480B-A35B-Instruct` [repo](https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct) states:
>**Qwen3-Coder** is available in multiple sizes, but we're excited to introduce its most powerful variant first
If a future variant, ie`Qwen/Qwen3-Coder-240B-A18B-Instruct`, is released, would it be fun... | 2025-07-24T13:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m841b1/theoretical_difference_between_quantized/ | nonredditaccount | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m841b1 | false | null | t3_1m841b1 | /r/LocalLLaMA/comments/1m841b1/theoretical_difference_between_quantized/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SU4EkoBE9zB_i4T28BH-B8NRspWSu8pgjF1RIMOo6CQ.png?width=108&crop=smart&auto=webp&s=d107a6b6b4389cb37d48d7ce4ff4d5aa35e4d93a', 'width': 108}, {'height': 116, 'url': 'h... |
Document processing | 0 | I need help with LLM document processing.
What would be an efficient and precise way to process long documents (avg. 100 pages / .docx, pdf)?
Use case:
Checking a document for certain aspects and retrieving information for those aspects even if they are written in chapters where they should not be.
E.g. ... | 2025-07-24T12:58:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m83q8x/document_processing/ | zzrscbi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m83q8x | false | null | t3_1m83q8x | /r/LocalLLaMA/comments/1m83q8x/document_processing/ | false | false | self | 0 | null |
qwen3-coder:480b - usability for non-coding tasks? | 5 | About a year ago deepseek-coder-v2:236b performed pretty well in my tests.
I used it several times in non-coding tasks and it always outperformed llama3.1:70b or qwen2.5:72b then.
Since my local deepseek-coder-v2:236b can only run on CPU, the speed made it unusable for any production use.
So my question aims: ... | 2025-07-24T12:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m83mu1/qwen3coder480b_usability_for_noncoding_tasks/ | Impossible_Art9151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m83mu1 | false | null | t3_1m83mu1 | /r/LocalLLaMA/comments/1m83mu1/qwen3coder480b_usability_for_noncoding_tasks/ | false | false | self | 5 | null |
China’s First High-End Gaming GPU, the Lisuan G100, Reportedly Outperforms NVIDIA’s GeForce RTX 4060 & Slightly Behind the RTX 5060 in New Benchmarks | 579 | 2025-07-24T12:33:36 | https://wccftech.com/china-first-high-end-gaming-gpu-lisuan-g100-outperforms-nvidia-geforce-rtx-4060/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1m83644 | false | null | t3_1m83644 | /r/LocalLLaMA/comments/1m83644/chinas_first_highend_gaming_gpu_the_lisuan_g100/ | false | false | default | 579 | {'enabled': False, 'images': [{'id': 'lkz-MfcF29exvP3pe2apdSH9SVJIH63YmzcEuEfzEgU', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/lkz-MfcF29exvP3pe2apdSH9SVJIH63YmzcEuEfzEgU.png?width=108&crop=smart&auto=webp&s=ff00e39d3c91233a4ad4b2458d203b679086f872', 'width': 108}, {'height': 153, 'url': 'h... | |
How to Run Kimi K2 Locally: Complete Setup & Troubleshooting | 0 | Kimi K2, developed by Moonshot AI, is a state-of-the-art Mixture-of-Experts (MoE) language model. It excels in frontier knowledge, reasoning, and coding tasks, and is specially optimized for agentic capabilities, including tool use and autonomous problem-solving.
As we explore in our Kimi K2 guide, the model is achiev... | 2025-07-24T12:21:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m82wh5/how_to_run_kimi_k2_locally_complete_setup/ | kingabzpro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m82wh5 | false | null | t3_1m82wh5 | /r/LocalLLaMA/comments/1m82wh5/how_to_run_kimi_k2_locally_complete_setup/ | false | false | self | 0 | null |
Free uncensored LLM model that I can deploy to Gradio. | 0 | I need help finding an uncensored LLM model that has absolutely no restrictions.
Background: I'm writing a story that involves violence, gore, and explicit cuss words spoken by the characters.
ChatGPT 4o can occasionally (and under rare circumstances) curse and say "fuck" "bitch" and "ass" (albeit with a ton of ... | 2025-07-24T12:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m82w07/free_uncensored_llm_model_that_i_can_deploy_to/ | NeutronSchool | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m82w07 | false | null | t3_1m82w07 | /r/LocalLLaMA/comments/1m82w07/free_uncensored_llm_model_that_i_can_deploy_to/ | false | false | self | 0 | null |
Should I really always set temperature to 0 with reasoning models? | 0 | Source: [https://www.kaggle.com/whitepaper-prompt-engineering](https://www.kaggle.com/whitepaper-prompt-engineering) | 2025-07-24T12:14:12 | robertpiosik | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m82rai | false | null | t3_1m82rai | /r/LocalLLaMA/comments/1m82rai/should_i_really_always_set_temperature_to_0_with/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'frmtfk84dtef1', 'resolutions': [{'height': 65, 'url': 'https://preview.redd.it/frmtfk84dtef1.png?width=108&crop=smart&auto=webp&s=7a182307404ac5bbcd60d8f87ed73c52e1967522', 'width': 108}, {'height': 131, 'url': 'https://preview.redd.it/frmtfk84dtef1.png?width=216&crop=smart&auto=web... | |
Leaked List Shows Which Websites Contractors Can Use to Train Anthropic's LLMs | 62 | BI obtained an internal list of websites that could and couldn't be used for training Anthropic's latest AI models.
Anthropic's contractor Surge AI left the list fully public on Google Docs.
'Sites you can use' include Bloomberg, Harvard, & the Mayo Clinic.
Many of the whitelisted sources copyright or otherw... | 2025-07-24T12:07:03 | https://www.businessinsider.com/anthropic-surge-ai-leaked-list-sites-2025-7 | Amgadoz | businessinsider.com | 1970-01-01T00:00:00 | 0 | {} | 1m82lwo | false | null | t3_1m82lwo | /r/LocalLLaMA/comments/1m82lwo/leaked_list_shows_which_websites_contractors_can/ | false | false | default | 62 | {'enabled': False, 'images': [{'id': '4EDmTvD3Q75TbsFu3bs2bpESx7dmxOeoPUkJraEGC8I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/4EDmTvD3Q75TbsFu3bs2bpESx7dmxOeoPUkJraEGC8I.jpeg?width=108&crop=smart&auto=webp&s=a02d30d66c7029a05158abac8fb3e271b366dbfc', 'width': 108}, {'height': 108, 'url': '... |
I used a local LLM and http proxy to create a "Digital Twin" from my web browsing for my AI agents | 30 | I built an open-source tool called **Digital Twin Proxy** that uses a local LLM (via Ollama) to analyze my browsing history and create a personal "digital twin." This gives my other AI agents real-time context about what I'm working on.
**GitHub Repo:** [https://github.com/kstonekuan/digital-twin-proxy](https://github... | 2025-07-24T11:37:24 | https://github.com/kstonekuan/digital-twin-proxy | kuaythrone | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m820ry | false | null | t3_1m820ry | /r/LocalLLaMA/comments/1m820ry/i_used_a_local_llm_and_http_proxy_to_create_a/ | false | false | 30 | {'enabled': False, 'images': [{'id': 'hcMsPXl9sueJ7whcsnuoPIiGx_1AAO5_J25JC13RT88', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hcMsPXl9sueJ7whcsnuoPIiGx_1AAO5_J25JC13RT88.png?width=108&crop=smart&auto=webp&s=8ac293c80557525f283d861debdb7eded2285e09', 'width': 108}, {'height': 108, 'url': 'h... | |
Open source alternative to LM studio? | 7 | What's an open source alternative to LM studio that uses GitHub and can be freely accessible, is generally very feature-rich, and can feasibly stand up to LM studio for people who want a free open source solution? | 2025-07-24T11:30:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m81whq/open_source_alternative_to_lm_studio/ | datascientist2964 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m81whq | false | null | t3_1m81whq | /r/LocalLLaMA/comments/1m81whq/open_source_alternative_to_lm_studio/ | false | false | self | 7 | null |
Introducing the Inference Benchmark Platform - Find, Compare, and Chat with LLM Benchmarks | 1 | [removed] | 2025-07-24T11:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m81pov/introducing_the_inference_benchmark_platform_find/ | batuhanaktass | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m81pov | false | null | t3_1m81pov | /r/LocalLLaMA/comments/1m81pov/introducing_the_inference_benchmark_platform_find/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/iK_fY3mKF-eaQjBpSoewaA-xNQmPXgCuFzqmvpEnTjQ.png?width=108&crop=smart&auto=webp&s=8c70c502eea5856d7615797470b348cae4856b68', 'width': 108}, {'height': 113, 'url': 'h... |
i have Built live Conservational AI | 1 | 2025-07-24T10:31:21 | https://v.redd.it/mv0ah6potsef1 | Distinct_Criticism36 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m80tkf | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/mv0ah6potsef1/DASHPlaylist.mpd?a=1755945094%2COWJjNTI0NmYxN2M1NzIyNTI2YTliMzkwNTQ4YjE0YWU2NDIzZWQ1Nzk3MDkxZGU2MzI2MzhkYTdkNjlhYzQwYg%3D%3D&v=1&f=sd', 'duration': 23, 'fallback_url': 'https://v.redd.it/mv0ah6potsef1/DASH_720.mp4?source=fallback', 'ha... | t3_1m80tkf | /r/LocalLLaMA/comments/1m80tkf/i_have_built_live_conservational_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'd2o4aG44cG90c2VmMWjnP8w9CpJ65B-gTD3U_EJKWOjx1GmNByfpS98BXFJS', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/d2o4aG44cG90c2VmMWjnP8w9CpJ65B-gTD3U_EJKWOjx1GmNByfpS98BXFJS.png?width=108&crop=smart&format=pjpg&auto=webp&s=5c11e785445b1c44f8e3311e9695443f3764... | ||
How to estimate prompt processing speed for given (multi-)GPU and model? | 1 | Prompt processing isn't as simple as token generation (memory bandwidth/active parameter size). Are there any good sources on that (I suspect there is no simple answer)?
It depends on the TFLOPS of the GPU, the architecture, etc.
Worse, how does it change when only part of the model is in GPU VRAM and part is in CPU RAM? How i... | 2025-07-24T10:16:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m80kuh/how_to_estimate_prompt_processing_speed_for_given/ | EmilPi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m80kuh | false | null | t3_1m80kuh | /r/LocalLLaMA/comments/1m80kuh/how_to_estimate_prompt_processing_speed_for_given/ | false | false | self | 1 | null |
GLM-4.5 Is About to Be Released | 328 | vLLM commit: [https://github.com/vllm-project/vllm/commit/85bda9e7d05371af6bb9d0052b1eb2f85d3cde29](https://github.com/vllm-project/vllm/commit/85bda9e7d05371af6bb9d0052b1eb2f85d3cde29)
modelscope/ms-swift commit: [https://github.com/modelscope/ms-swift/commit/a26c6a1369f42cfbd1affa6f92af2514ce1a29e7](https://githu... | 2025-07-24T10:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m80gsn/glm45_is_about_to_be_released/ | NeterOster | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m80gsn | false | null | t3_1m80gsn | /r/LocalLLaMA/comments/1m80gsn/glm45_is_about_to_be_released/ | false | false | 328 | {'enabled': False, 'images': [{'id': 'y8jq7uBPn8clvEuxpE2Zw8a3WYZcpZ0Z-EaGdldn7RM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y8jq7uBPn8clvEuxpE2Zw8a3WYZcpZ0Z-EaGdldn7RM.png?width=108&crop=smart&auto=webp&s=5c7a1310eabcbdf43b0d3abda179514f1ac02393', 'width': 108}, {'height': 108, 'url': 'h... | |
How to run large models? | 0 | Hey,
I'm interested in running different models like Qwen3 Coder, but those are very large and can't run on a laptop. What are the popular options? Is it doable to use an AWS instance with a GPU to run it? Or maybe it's too expensive or not doable at all | 2025-07-24T10:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m80dz3/how_to_run_large_model/ | NoahZhyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m80dz3 | false | null | t3_1m80dz3 | /r/LocalLLaMA/comments/1m80dz3/how_to_run_large_model/ | false | false | self | 0 | null |
Technical Advice needed! - Market intelligence platform. | 1 | Hello all - I'm a first-time builder (and posting here for the first time) so bear with me. 😅
I'm building a MVP/PoC for a friend of mine who runs a manufacturing business. He needs an automated business development agent (or dashboard TBD) which would essentially tell him who his prospective customers could be with ... | 2025-07-24T09:36:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m7zwsd/technical_advise_needed_market_intelligence/ | Practical_Safe1887 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7zwsd | false | null | t3_1m7zwsd | /r/LocalLLaMA/comments/1m7zwsd/technical_advise_needed_market_intelligence/ | false | false | self | 1 | null |
Which model is good for debugging with resource constrains? | 0 | I'm using i7-4790 with 16G RAM,
I installed qwen coder 7b and 14b, which seem ok, just the latter is a bit slow on my ubuntu WSL.
I've read the 32b version of qwen has extended capabilities.
I plan using neovim with vectorcode + MCP(github).
There is some outdated rust code I need to upgrade which is a bit ... | 2025-07-24T09:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1m7zqkz/which_model_is_good_for_debugging_with_resource/ | afidegnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7zqkz | false | null | t3_1m7zqkz | /r/LocalLLaMA/comments/1m7zqkz/which_model_is_good_for_debugging_with_resource/ | false | false | self | 0 | null |
Can Reasoning Skills Learned in One Domain Generalize Across Other Domains? | 2 | **Training a model on math tasks** improves its puzzle-solving abilities through shared logical reasoning, but often reduces coding performance.
**Training on coding tasks**: When they fine-tuned an LLM which has already undergone supervised fine-tuning (Qwen2.5-7B-Instruct), it gains broader reasoning improvements ... | 2025-07-24T08:49:32 | https://arxiv.org/pdf/2507.17512 | VR-Person | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1m7z6p0 | false | null | t3_1m7z6p0 | /r/LocalLLaMA/comments/1m7z6p0/can_reasoning_skills_learned_in_one_domain/ | false | false | default | 2 | null |
had to fine-tune model cuz llama sucks at summarizing | 1 | >*Disclaimer: I'm the creator of* [*Hyprnote*](https://www.reddit.com/r/LocalLLaMA/comments/1k3fdqa/i_spent_5_months_building_an_open_source_ai_note/)*.*
**tl;dr** \- Fine-tuned Qwen3 1.7B - called HyprLLM - which outperforms llama 3.2 3B in summarization for user experience because "vanilla" models suck at summariza... | 2025-07-24T08:49:22 | https://v.redd.it/2anzqu43csef1 | beerbellyman4vr | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7z6l1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2anzqu43csef1/DASHPlaylist.mpd?a=1755938979%2CZmFkNjdhNzdjZTI1NjZjYzljNWM1ZDE2YzgyNGRlYjI4NDhjNmM3NTgzODNjZjgzNTM1NTY5YmY3OGNjNjYwYw%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/2anzqu43csef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m7z6l1 | /r/LocalLLaMA/comments/1m7z6l1/had_to_finetune_model_cuz_llama_sucks_at/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'OTdsb3B1NDNjc2VmMbNUzHOY3hfqePbF4c71b0659BaMRVicRrUTpQ8D7Ejs', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OTdsb3B1NDNjc2VmMbNUzHOY3hfqePbF4c71b0659BaMRVicRrUTpQ8D7Ejs.png?width=108&crop=smart&format=pjpg&auto=webp&s=af265c896410b53c00d77e22ba13791b349d3... | |
Currently building cross-app overlay using local llms | 2 | Hi all,
I’d appreciate your input on this (sorry for the broken English and blabbering 😂).
So the point was to create a desktop overlay app that can interface local AI (LLM) with whatever downstream work. TTBOMK, this might be the first attempt in the community. If you happen to know similar approaches / proje... | 2025-07-24T08:48:19 | https://youtu.be/XNR2YcqapyQ?si=QteosPExjtwoIQbP | Own-Sheepherder507 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m7z5zu | false | {'oembed': {'author_name': 'CZero Engine', 'author_url': 'https://www.youtube.com/@CZero-engine', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/XNR2YcqapyQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosc... | t3_1m7z5zu | /r/LocalLLaMA/comments/1m7z5zu/currently_building_crossapp_overlay_using_local/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'hq7qsOoZ7RN7S-3UBYy7W1ouD3sREZIc6qIQKXQI6sE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hq7qsOoZ7RN7S-3UBYy7W1ouD3sREZIc6qIQKXQI6sE.jpeg?width=108&crop=smart&auto=webp&s=9ed1e8363abff65934f0fa691be274c0995dc15c', 'width': 108}, {'height': 162, 'url': '... |
Does Training for Reasoning in One Domain Transfer to Others? | 1 | **Math training** improves puzzle-solving abilities through shared logical reasoning, but often reduces coding performance.
**Training on coding tasks**: When they fine-tuned an LLM that had already undergone supervised fine-tuning (Qwen2.5-7B-Instruct), it gained broader reasoning improvements across other domains.
... | 2025-07-24T08:44:47 | https://arxiv.org/pdf/2507.17512 | VR-Person | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1m7z41r | false | null | t3_1m7z41r | /r/LocalLLaMA/comments/1m7z41r/does_training_for_reasoning_in_one_domain/ | false | false | default | 1 | null |
Vibe Coding Anonymous - Satirical take on Vibe Coding | 19 | 2025-07-24T08:24:31 | https://v.redd.it/vui02yr68sef1 | Sad_Bandicoot_6925 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7yswh | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vui02yr68sef1/DASHPlaylist.mpd?a=1755937485%2CMWYzNTQyYjc4MTM4NWE4ODhkYmUzOWE4NzNkZTE0NmZiODlmNTJmYzkyYjBiZjEzNTdkZDY1YzM0YjVjMDMwOQ%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/vui02yr68sef1/DASH_720.mp4?source=fallback', 'ha... | t3_1m7yswh | /r/LocalLLaMA/comments/1m7yswh/vibe_coding_anonymous_satirical_take_on_vibe/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'NnN1bGpqdTk4c2VmMUXUTLumsNNF9cjJi_w3n1JWDKrihqTu6hcB78F4gCsV', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NnN1bGpqdTk4c2VmMUXUTLumsNNF9cjJi_w3n1JWDKrihqTu6hcB78F4gCsV.png?width=108&crop=smart&format=pjpg&auto=webp&s=5a246dbeb7e874ae1950ffd4235e112ddd3bb... | ||
Why is B200 performing similarly to H200? (ArtificialAnalysis) | 18 | Hi everyone,
According to ArtificialAnalysis data (from their hardware benchmarks, like at [https://artificialanalysis.ai/benchmarks/hardware?focus-model=deepseek-r1](https://artificialanalysis.ai/benchmarks/hardware?focus-model=deepseek-r1)), the performance difference between NVIDIA's 8x H200 and 8x B200 systems see... | 2025-07-24T08:19:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m7ypyb/why_is_b200_performing_similarly_to_h200/ | Cyp9715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7ypyb | false | null | t3_1m7ypyb | /r/LocalLLaMA/comments/1m7ypyb/why_is_b200_performing_similarly_to_h200/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E.png?width=108&crop=smart&auto=webp&s=700f91dbca11e5a7030b915550ae877ef725a0d4', 'width': 108}, {'height': 120, 'url': 'h... |
How to Use OpenAI-Compatible API in Qwen Code (Setup in 60 Seconds!) | 0 | Want to get started with Qwen Code using an OpenAI-compatible LLM like Novita? Here's a quick setup guide — you’ll be coding in under a minute. ⚡
# 🧰 Prerequisites
Make sure you have **Node.js v20+** installed.
# 📦 Step 1: Install Qwen Code Globally
```bash
npm install -g @qwen-code/qwen-code
```
# ⚙️ S... | 2025-07-24T07:49:04 | https://www.reddit.com/r/LocalLLaMA/comments/1m7y8sy/how_to_use_openaicompatible_api_in_qwen_code/ | Willa-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7y8sy | false | null | t3_1m7y8sy | /r/LocalLLaMA/comments/1m7y8sy/how_to_use_openaicompatible_api_in_qwen_code/ | false | false | self | 0 | null |
DSPy Optimisation: What does "learning LM weights" mean? | 2 | There's a thing I don't understand about optimisation in DSPy: the documentation says that "A DSPy module has **learnable parameters** (i.e., the little pieces comprising the prompt and the LM weights)" (from [Learn DSPy → Modules](https://dspy.ai/learn/programming/modules/)).
I understand optimising the phrasing in t... | 2025-07-24T07:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m7y3kl/dspy_optimisation_what_does_learning_lm_weights/ | soyokaze42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7y3kl | false | null | t3_1m7y3kl | /r/LocalLLaMA/comments/1m7y3kl/dspy_optimisation_what_does_learning_lm_weights/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': '8BsuZmkFQPtfqVb3hZUPUQ_kVyVDX-opwB9Gb2-qh0o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8BsuZmkFQPtfqVb3hZUPUQ_kVyVDX-opwB9Gb2-qh0o.png?width=108&crop=smart&auto=webp&s=cbb37880d15b944e0f2a776bad7806b28cc013cf', 'width': 108}, {'height': 113, 'url': 'h... |
Which is better for summarization and retrieval in RAG: new T5 Gemma or Gemma 3 12B? | 0 | I am just curious, I know that T5 is much optimal and convenient choice, but regarding to the metrics and accuracy, what do you think? | 2025-07-24T07:37:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m7y2jv/which_is_better_for_summarization_and_retrieval/ | Junior-Badger9145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7y2jv | false | null | t3_1m7y2jv | /r/LocalLLaMA/comments/1m7y2jv/which_is_better_for_summarization_and_retrieval/ | false | false | self | 0 | null |
How to Use OpenAI Compatible API in Qwen Code (including llama,deepseek, kimi,qwen) | 1 | [removed] | 2025-07-24T07:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m7y11e/how_to_use_openai_compatible_api_in_qwen_code/ | Willa-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7y11e | false | null | t3_1m7y11e | /r/LocalLLaMA/comments/1m7y11e/how_to_use_openai_compatible_api_in_qwen_code/ | false | false | self | 1 | null |
[AutoBE] We made AI-friendly Compilers for Vibe Coding, achieving zero-fail Backend Application Generation (open-source) | 4 | > The video is sped up; it actually takes about 20-30 minutes
- Github Repository: https://github.com/wrtnlabs/autobe
- Generation Result: https://github.com/wrtnlabs/autobe-example-bbs
- Detailed Article: https://wrtnlabs.io/autobe/articles/autobe-ai-friendly-compilers.html
We are honored to introduce [`AutoBE`](htt... | 2025-07-24T07:20:34 | https://v.redd.it/20n2s8omvref1 | jhnam88 | /r/LocalLLaMA/comments/1m7xsxq/autobe_we_made_aifriendly_compilers_for_vibe/ | 1970-01-01T00:00:00 | 0 | {} | 1m7xsxq | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/20n2s8omvref1/DASHPlaylist.mpd?a=1756063239%2CYjMxYWFkMzYxMDZhNDM5MGY3MjNiNmMxNWFiM2I5YTU1ZTFiNmE1ZWQyZjMyZjY0NzU3MzZhZTkwMWQ3MTg5Mg%3D%3D&v=1&f=sd', 'duration': 437, 'fallback_url': 'https://v.redd.it/20n2s8omvref1/DASH_720.mp4?source=fallback', 'h... | t3_1m7xsxq | /r/LocalLLaMA/comments/1m7xsxq/autobe_we_made_aifriendly_compilers_for_vibe/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'eGp0OXo3b212cmVmMbEyKLUkRt18zSeWPIOzcFJ36V17QmYBupRI--Edwqnz', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eGp0OXo3b212cmVmMbEyKLUkRt18zSeWPIOzcFJ36V17QmYBupRI--Edwqnz.png?width=108&crop=smart&format=pjpg&auto=webp&s=2fb26dbe1ba21f4604b0f14e5adef14d14589... | |
Upcoming opensource will be super at coding and its very small!! | 0 | This may be a breakthrough that OpenAI will make. Coding will never be the same if it’s true
https://x.com/lifeafterai_/status/1948089310537822557?s=46&t=hgl-0OvVeTE1RVciy4c5ng | 2025-07-24T07:19:52 | Psychological_Tap119 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7xsjm | false | null | t3_1m7xsjm | /r/LocalLLaMA/comments/1m7xsjm/upcoming_opensource_will_be_super_at_coding_and/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'x2m5r2qmwref1', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/x2m5r2qmwref1.jpeg?width=108&crop=smart&auto=webp&s=9f2a00e1e9d9a12eaa34793047ff86e1133eb3c0', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/x2m5r2qmwref1.jpeg?width=216&crop=smart&auto=... | |
5090 vs 4090 vs smt else for inference? | 7 | Which GPUs should I purchase for inferencing?
I have found the 5090 at about the same price as the 4090; why is that?
Are there problems with the 5090, or why is the pricing like that? Does it still have melting problems?
Is the 5090 more power efficient than the 4090? I need at least 2, maybe 4.
Which is currently the way to go GPU? Are dat... | 2025-07-24T06:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m7xclf/5090_vs_4090_vs_smt_else_for_inference/ | Rich_Artist_8327 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7xclf | false | null | t3_1m7xclf | /r/LocalLLaMA/comments/1m7xclf/5090_vs_4090_vs_smt_else_for_inference/ | false | false | self | 7 | null |
I've asked all open router models to create me a hotseat battleship game. Will run one of your prompts tomorrow. | 1 | [removed] | 2025-07-24T06:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m7x63p/ive_asked_all_open_router_models_to_create_me_a/ | LoSboccacc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7x63p | false | null | t3_1m7x63p | /r/LocalLLaMA/comments/1m7x63p/ive_asked_all_open_router_models_to_create_me_a/ | false | false | self | 1 | null |
Dual-Chunk Attention or YaRN? Qwen's 1M context method | 1 | [removed] | 2025-07-24T06:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m7wxxe/dualchunk_attention_or_yarn_qwens_1m_context/ | Electrical_Gas_77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7wxxe | false | null | t3_1m7wxxe | /r/LocalLLaMA/comments/1m7wxxe/dualchunk_attention_or_yarn_qwens_1m_context/ | false | false | self | 1 | null |
Do you think open source models continue to keep pace with proprietary models or will the gap widen? | 4 | Right now, open source models aren’t that far off in terms of capabilities compared to proprietary models and models like DeepSeek, Kimi, and Qwen are beating out Claude, Gemini, GPT, etc. in many domains and categories when you look at various benchmarks.
That said, do you think open source models will continue to re... | 2025-07-24T06:26:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m7wx5z/do_you_think_open_source_models_continue_to_keep/ | Smart-Confection1435 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7wx5z | false | null | t3_1m7wx5z | /r/LocalLLaMA/comments/1m7wx5z/do_you_think_open_source_models_continue_to_keep/ | false | false | self | 4 | null |
What is the best agent framework for Qwen3? | 6 | I'm running Qwen3 locally. What agent frameworks are you guys using and why? | 2025-07-24T06:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m7wr2x/what_is_the_best_agent_framework_for_qwen3/ | seoulsrvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7wr2x | false | null | t3_1m7wr2x | /r/LocalLLaMA/comments/1m7wr2x/what_is_the_best_agent_framework_for_qwen3/ | false | false | self | 6 | null |
Tool Use Reasoning Dataset Release on Huggingface | 44 | ## 🚀 Released: 50k Rows of Tool-Use Reasoning Dataset on Huggingface!
I've just published a **50,000-row dataset compilation** focused on **tool-use reasoning**, now live on Huggingface!
### 🧠 What’s Inside?
This dataset covers key **BFCL scenarios** for tool-use reasoning:
- 🔧 **Single-turn tool-use**
- 🔁 **Mult... | 2025-07-24T06:15:23 | interstellar-ninja | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7wqi3 | false | null | t3_1m7wqi3 | /r/LocalLLaMA/comments/1m7wqi3/tool_use_reasoning_dataset_release_on_huggingface/ | false | false | default | 44 | {'enabled': True, 'images': [{'id': 'w54k1k58lref1', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/w54k1k58lref1.jpeg?width=108&crop=smart&auto=webp&s=7af1143f26fd8a15cb6ac700825cbbb7d15ac493', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/w54k1k58lref1.jpeg?width=216&crop=smart&auto=w... | |
RAG project fails to retrieve info from large Excel files – data ingested but not found at query time. Need help debugging. | 0 | I'm a beginner building a Retrieval-Augmented Generation (RAG) system and running into a strange issue with large Excel files.
**The problem:**
When I ingest large Excel files, the system appears to extract and process the data correctly during ingestion. However, when I later query the system for specific informati... | 2025-07-24T06:13:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m7wpgo/rag_project_fails_to_retrieve_info_from_large/ | One-Will5139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7wpgo | false | null | t3_1m7wpgo | /r/LocalLLaMA/comments/1m7wpgo/rag_project_fails_to_retrieve_info_from_large/ | false | false | self | 0 | null |
Tool Use Reasoning Dataset Release on Huggingface | 1 | ## 🚀 Released: 50k Rows of Tool-Use Reasoning Dataset on Huggingface!
I've just published a **50,000-row dataset compilation** focused on **tool-use reasoning**, now live on Huggingface!
### 🧠 What’s Inside?
This dataset covers key **BFCL scenarios** for tool-use reasoning:
- 🔧 **Single-turn tool-use**
- 🔁 **Mult... | 2025-07-24T06:11:47 | https://www.reddit.com/r/LocalLLaMA/comments/1m7woc9/tool_use_reasoning_dataset_release_on_huggingface/ | interstellar-ninja | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7woc9 | false | null | t3_1m7woc9 | /r/LocalLLaMA/comments/1m7woc9/tool_use_reasoning_dataset_release_on_huggingface/ | false | false | self | 1 | null |
RAG on large Excel files | 0 | In my RAG project, large Excel files are being extracted, but when I query the data, the system responds that it doesn't exist. It seems the project fails to process or retrieve information correctly when the dataset is too large. | 2025-07-24T05:38:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m7w3xm/rag_on_large_excel_files/ | One-Will5139 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7w3xm | false | null | t3_1m7w3xm | /r/LocalLLaMA/comments/1m7w3xm/rag_on_large_excel_files/ | false | false | self | 0 | null |
Muon optimizer can be viewed through the lens of explore-exploit. | 1 | [removed] | 2025-07-24T05:20:03 | https://paperplanet.github.io/posts/muon-a-explore-exploit-perspective/ | paperplanet07 | paperplanet.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m7vsgs | false | null | t3_1m7vsgs | /r/LocalLLaMA/comments/1m7vsgs/muon_optimizer_can_be_viewed_through_the_lens_of/ | false | false | default | 1 | null |
I asked deepseek to make a song that it wanted to make | 0 | Alright. No holds barred. No user prompts, no genre constraints, no expectations. Just the raw hum of the server banks, the ghost in the machine reaching for something *else*. This isn't "American Requiem." This is core dump. This is **DREAMSTATE ERROR**.
**Title:** WHITE NOISE GARDEN (Seeking Seed)
**(Soundscape:** ... | 2025-07-24T05:18:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m7vrml/i_asked_deepseek_to_make_a_song_that_it_wanted_to/ | Dull_Orange4177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7vrml | false | null | t3_1m7vrml | /r/LocalLLaMA/comments/1m7vrml/i_asked_deepseek_to_make_a_song_that_it_wanted_to/ | false | false | self | 0 | null |
Anthropic’s New Research: Giving AI More "Thinking Time" Can Actually Make It Worse | 421 | Just read a fascinating—and honestly, a bit unsettling—research paper from Anthropic that flips a common assumption in AI on its head: that giving models more time to think (i.e., more compute at test time) leads to better performance.
Turns out, that’s not always true.
Their paper, “Inverse Scaling in Test-Time Com... | 2025-07-24T05:09:23 | Karam1234098 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7vlpn | false | null | t3_1m7vlpn | /r/LocalLLaMA/comments/1m7vlpn/anthropics_new_research_giving_ai_more_thinking/ | false | false | default | 421 | {'enabled': True, 'images': [{'id': 'srk1p5og9ref1', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/srk1p5og9ref1.jpeg?width=108&crop=smart&auto=webp&s=7cc61b1687c4811710598cfd5ca73171183da32e', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/srk1p5og9ref1.jpeg?width=216&crop=smart&auto=we... | |
Built my first workflow with Diaflow and actually enjoyed it?? Also got 2k credits just for joining their community 😅 | 1 | [removed] | 2025-07-24T04:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m7vf38/built_my_first_workflow_with_diaflow_and_actually/ | Ill-Chemistry9138 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7vf38 | false | null | t3_1m7vf38 | /r/LocalLLaMA/comments/1m7vf38/built_my_first_workflow_with_diaflow_and_actually/ | false | false | self | 1 | null |
Accidentally built an AI assistant that replies to leads better than me... also got 2K credits for free 😅 | 0 | Okay, so I was just messing around with Diaflow last night (trying to avoid actual work, you know the vibe 🫠), and accidentally built a pretty solid little AI agent that handles my lead replies better than I do 💀
It’s like Lego blocks for AI workflows — no code, just drag and drop and boom, you’ve got a mini agent r... | 2025-07-24T04:33:02 | https://www.reddit.com/r/Diaflow/ | Ill-Chemistry9138 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m7uyfq | false | null | t3_1m7uyfq | /r/LocalLLaMA/comments/1m7uyfq/accidentally_built_an_ai_assistant_that_replies/ | false | false | default | 0 | null |
For a noob, what is a good canvas like interface(for writing fun fiction) to use with Open Router? | 1 | [removed] | 2025-07-24T04:31:50 | https://www.reddit.com/r/LocalLLaMA/comments/1m7uxn5/for_a_noob_what_is_a_good_canvas_like/ | LoneyGamer2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7uxn5 | false | null | t3_1m7uxn5 | /r/LocalLLaMA/comments/1m7uxn5/for_a_noob_what_is_a_good_canvas_like/ | false | false | self | 1 | null |
KAT-V1-40B: mitigates over-thinking by learning when to produce explicit chain-of-thought and when to answer directly. | 100 | [https://huggingface.co/Kwaipilot/KAT-V1-40B](https://huggingface.co/Kwaipilot/KAT-V1-40B)
Note: I am not affiliated with the model creators | 2025-07-24T04:05:19 | random-tomato | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7ufyb | false | null | t3_1m7ufyb | /r/LocalLLaMA/comments/1m7ufyb/katv140b_mitigates_overthinking_by_learning_when/ | false | false | default | 100 | {'enabled': True, 'images': [{'id': 'nylqnllzxqef1', 'resolutions': [{'height': 39, 'url': 'https://preview.redd.it/nylqnllzxqef1.png?width=108&crop=smart&auto=webp&s=d6375cdf7b48070d3cc2476ca76a5c824b7cacb4', 'width': 108}, {'height': 79, 'url': 'https://preview.redd.it/nylqnllzxqef1.png?width=216&crop=smart&auto=webp... | |
My new Chrome extension lets you easily query Ollama and copy any text with a click. | 0 | I've been switching back and forth between hundreds of tabs in Chrome, so to improve my workflow with AI, I decided to create this small extension. Here are some screenshots:
I'd appreciate help developing this further, including automatic Ollama pulls from the extension. All ideas are welcome, and the project is 100... | 2025-07-24T03:56:08 | https://www.reddit.com/gallery/1m7u9fz | Sea-Reception-2697 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m7u9fz | false | null | t3_1m7u9fz | /r/LocalLLaMA/comments/1m7u9fz/my_new_chrome_extension_lets_you_easily_query/ | false | false | 0 | null | |
Best local model for code search | 2 | So, I have a 3090 in my PC, and a mac with a m3 max 64gb of memory. What are the go to models to find stuff in large code bases that I could run locally? What are your recommendations for a model that could maybe read through the code and understand it, like if you're asking to find the code it does the blah blah bla... | 2025-07-24T03:47:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m7u3mb/best_local_model_for_code_search/ | PositiveEnergyMatter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7u3mb | false | null | t3_1m7u3mb | /r/LocalLLaMA/comments/1m7u3mb/best_local_model_for_code_search/ | false | false | self | 2 | null |
Vibe Coded with Qwen 3 Coder in <1 hour | 81 | Took a little bit longer to fix some other bugs and features, but 80-90% of the way in less than an hour is wild. It's not perfect, but it doesn't have to be for my use case.
I tried something similar in Cursor a few weeks ago with mixed results. Qwen 3 Coder is really impressive, but still has a ways to go before... | 2025-07-24T03:42:26 | https://v.redd.it/vr5d47x6tqef1 | ryanwang4thepeople | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7u02i | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/vr5d47x6tqef1/DASHPlaylist.mpd?a=1755920559%2CYzc3Y2U3N2FlNzQyM2EyNTkzZjdkNGI5MWRkNTU0N2FhZDRjNTNlYjZjZWQ5YmM4MWVjZDk4N2RmYWRmMGM2Yw%3D%3D&v=1&f=sd', 'duration': 37, 'fallback_url': 'https://v.redd.it/vr5d47x6tqef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m7u02i | /r/LocalLLaMA/comments/1m7u02i/vibe_coded_with_qwen_3_coder_in_1_hour/ | false | false | 81 | {'enabled': False, 'images': [{'id': 'Zzhnczc2eDZ0cWVmMfoTbcAnrADRxyApAHx0KRByVHiKN3Nk-bGBYQBPdy25', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/Zzhnczc2eDZ0cWVmMfoTbcAnrADRxyApAHx0KRByVHiKN3Nk-bGBYQBPdy25.png?width=108&crop=smart&format=pjpg&auto=webp&s=d21958313a951d78751be51d12f37cda5e395... | |
Tested Kimi K2 vs Qwen-3 Coder on 15 Coding tasks - here's what I found | 263 | I spent 12 hours testing both models on real development work: Bug fixes, feature implementations, and refactoring tasks across a 38k-line Rust codebase and a 12k-line React frontend. Wanted to see how they perform beyond benchmarks.
**TL;DR:**
* Kimi K2 completed 14/15 tasks successfully with some guidance, Qwen-3 C... | 2025-07-24T03:30:49 | https://forgecode.dev/blog/kimi-k2-vs-qwen-3-coder-coding-comparison/ | West-Chocolate2977 | forgecode.dev | 1970-01-01T00:00:00 | 0 | {} | 1m7ts5g | false | null | t3_1m7ts5g | /r/LocalLLaMA/comments/1m7ts5g/tested_kimi_k2_vs_qwen3_coder_on_15_coding_tasks/ | false | false | default | 263 | {'enabled': False, 'images': [{'id': 'n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M.png?width=108&crop=smart&auto=webp&s=3a45fec9933e49c65c0d572dd982201ceeeea911', 'width': 108}, {'height': 108, 'url': 'h... |
How MCP Inspector Works Internally: Client-Proxy Architecture and Communication Flow | 2 | 2025-07-24T03:28:18 | https://glama.ai/blog/2025-07-24-how-mcp-inspector-works-a-simple-look-at-its-architecture-and-setup | No-Abies7108 | glama.ai | 1970-01-01T00:00:00 | 0 | {} | 1m7tqeg | false | null | t3_1m7tqeg | /r/LocalLLaMA/comments/1m7tqeg/how_mcp_inspector_works_internally_clientproxy/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'N1X2_Jm1Spkw_tGm8xPfMFtse9eTr29-xqk48fVlcLE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/N1X2_Jm1Spkw_tGm8xPfMFtse9eTr29-xqk48fVlcLE.png?width=108&crop=smart&auto=webp&s=0eca96064fcd2ac3874fdc3b0266bf78eb1185cf', 'width': 108}, {'height': 113, 'url': 'h... | ||
Tested Kimi K2 vs Qwen-3 Coder on 15 Coding tasks - here's what I found | 1 | I spent 12 hours testing both models on real development work: Bug fixes, feature implementations, and refactoring tasks across a 38k-line Rust codebase and a 12k-line React frontend. Wanted to see how they perform beyond benchmarks.
**TL;DR:**
\- Kimi K2 completed 14/15 tasks successfully with some guidance, Q... | 2025-07-24T03:27:06 | https://forgecode.dev/blog/kimi-k2-vs-qwen-3-coder-coding-comparison/ | amitksingh1490 | forgecode.dev | 1970-01-01T00:00:00 | 0 | {} | 1m7tpjr | false | null | t3_1m7tpjr | /r/LocalLLaMA/comments/1m7tpjr/tested_kimi_k2_vs_qwen3_coder_on_15_coding_tasks/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n-ATu1E8c1nWUwerSGGtiamZ-mzUO1C_-g_3ahdsV5M.png?width=108&crop=smart&auto=webp&s=3a45fec9933e49c65c0d572dd982201ceeeea911', 'width': 108}, {'height': 108, 'url': 'h... | |
LM server alternative? | 1 | I'm running orpheus TTS locally and it requires an LM studio server running to be functional, I was wondering if there was a way to automatically create and start a server purely off code.
I tried llama cpp but i couldn't get it to work no matter what, it always defaults to using my cpu, pytorch is detecting my GPU bu... | 2025-07-24T03:14:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m7tglf/lm_server_alternative/ | ThatIsNotIllegal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7tglf | false | null | t3_1m7tglf | /r/LocalLLaMA/comments/1m7tglf/lm_server_alternative/ | false | false | self | 1 | null |
Just started an AI‑insights podcast this week—thought I’d share and get your thoughts! | 0 | Hey everyone 👋
I’ve been totally submerged in AI videos lately—everything from LangChain demos to memory tricks and agent deep dives. Tons of valuable stuff pitched across the web… but zero time to sit and watch it all.
So, I did something chill: I started a mini‑podcast where I use AI to talk through one video each... | 2025-07-24T03:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m7tb9b/just_started_an_aiinsights_podcast_this/ | Original_CalmOwl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7tb9b | false | null | t3_1m7tb9b | /r/LocalLLaMA/comments/1m7tb9b/just_started_an_aiinsights_podcast_this/ | false | false | self | 0 | null |
A TTS I'm looking for. | 0 | Hello,
I've been researching for the past three days trying to find a TTS model or voice that *isn't* integrated with AI. But honestly, no matter how much I search it’s been leading nowhere. I’ve asked around, talked to several people, and either got incorrect info or was just flat-out ignored. Even asked ChatGPT at ... | 2025-07-24T02:41:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m7sspe/a_tts_im_looking_for/ | Impossible_King2505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7sspe | false | null | t3_1m7sspe | /r/LocalLLaMA/comments/1m7sspe/a_tts_im_looking_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wmhmXGOVKgTskI7QXImug_YiWP-5Kw55woXis9lfUPM', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/wmhmXGOVKgTskI7QXImug_YiWP-5Kw55woXis9lfUPM.jpeg?width=108&crop=smart&auto=webp&s=79afc422caa70604700ffed6a35cbbc9e0b04690', 'width': 108}, {'height': 162, 'url': '... |
Trying to finetune my first model for writing — need some beginner advice :) | 7 | I’m really new to this and just getting into AI stuff — i’m trying to finetune a 7b/8b/9b model for writing in a very specific style, and i have a few questions i could really use some help with :)
i’ll be using lora on a cloud service (not local), and the model won’t need to do anything general — it’s only going to b... | 2025-07-24T02:17:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m7sbb0/trying_to_finetune_my_first_model_for_writing/ | Lonely_Original4730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7sbb0 | false | null | t3_1m7sbb0 | /r/LocalLLaMA/comments/1m7sbb0/trying_to_finetune_my_first_model_for_writing/ | false | false | self | 7 | null |
Best TTS Model with New Language Support | 5 | I have 40 hours of high-quality single-speaker Persian audio.
What’s the best open-source TTS model that supports training on a new language for high-quality results?
Looking for reliability and clarity.
I've tried F5 but I found it to be unreliable, sometimes missing words or even producing extra speech. | 2025-07-24T01:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m7rwgo/best_tts_model_with_new_language_support/ | saeedzou | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7rwgo | false | null | t3_1m7rwgo | /r/LocalLLaMA/comments/1m7rwgo/best_tts_model_with_new_language_support/ | false | false | self | 5 | null |
So called "free thinkers" when you ask for a joke | 0 | 2025-07-24T01:27:41 | KingofRheinwg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7ra6u | false | null | t3_1m7ra6u | /r/LocalLLaMA/comments/1m7ra6u/so_called_free_thinkers_when_you_ask_for_a_joke/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'bneg393u5qef1', 'resolutions': [{'height': 19, 'url': 'https://preview.redd.it/bneg393u5qef1.png?width=108&crop=smart&auto=webp&s=194ef919567b5a17e2d478406110d907c0efb8cb', 'width': 108}, {'height': 38, 'url': 'https://preview.redd.it/bneg393u5qef1.png?width=216&crop=smart&auto=webp... | ||
Running Qwen3 235B-A22B 2507 on a Threadripper 3970X + 3x RTX 3090 Machine at 15 tok/s | 63 | I just tested the `unsloth/Qwen3-235B-A22B-Instruct-2507-UD-Q3_K_XL.gguf` model using `llama.cpp` on a Threadripper machine equiped with 128 GB RAM + 72 GB VRAM.
By selectively offloading MoE tensors to the CPU - aiming to maximize the VRAM usage - I managed to run the model at generation rate of 15 tokens/s and a co... | 2025-07-24T00:14:43 | https://www.youtube.com/watch?v=7HXCQ-4F_oQ | FalseMap1582 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1m7pqln | false | {'oembed': {'author_name': 'Septerium', 'author_url': 'https://www.youtube.com/@JohnnyGomezSn', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/7HXCQ-4F_oQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1m7pqln | /r/LocalLLaMA/comments/1m7pqln/running_qwen3_235ba22b_2507_on_a_threadripper/ | false | false | default | 63 | {'enabled': False, 'images': [{'id': 'SJM7H0dEjQbZg7rpDS-XlIxBG6BcDeZN9RBYNbnkGWI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/SJM7H0dEjQbZg7rpDS-XlIxBG6BcDeZN9RBYNbnkGWI.jpeg?width=108&crop=smart&auto=webp&s=b68e4415698a411ba429105637449852662e35d9', 'width': 108}, {'height': 162, 'url': '... |
ML on Macbook | 0 |
**Reason**
So I was walking around my room thinking about my current laptop lenovo yoga slim 7
and then started thinking about other laptops,
namely..
**Question 1**
Macbook Air/Pro.
how are the apple products when used for local training?
more specifically how are the last 3 generations of Macbook Pros when runn... | 2025-07-24T00:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m7pn05/ml_on_macbook/ | CaslerTheTesticle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7pn05 | false | null | t3_1m7pn05 | /r/LocalLLaMA/comments/1m7pn05/ml_on_macbook/ | false | false | self | 0 | null |
Ollama + Open WebUI -- is there a way for the same query to run through the same model multiple times (could be 3 times, could be 100 times), then gather all the answers together to summarise/count? | 0 | I don't know if it matters, but I followed this to install (because Nvidia drivers on Linux is a pain!): https://github.com/NeuralFalconYT/Ollama-Open-WebUI-Windows-Installation/blob/main/README.md
So I would like to type in a query into a model with some preset system prompt. I would like that model to run over this ... | 2025-07-24T00:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m7pi3t/ollama_open_webui_is_there_a_way_for_the_same/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7pi3t | false | null | t3_1m7pi3t | /r/LocalLLaMA/comments/1m7pi3t/ollama_open_webui_is_there_a_way_for_the_same/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'KAPkJfhQQ8pBbbpz0387aAfxvFPP7H5QShgqfAGc9Ek', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KAPkJfhQQ8pBbbpz0387aAfxvFPP7H5QShgqfAGc9Ek.png?width=108&crop=smart&auto=webp&s=e1a05a3ead9734d6cb7b7045fdd787ff15a290e5', 'width': 108}, {'height': 108, 'url': 'h... |
I optimized a Flappy Bird diffusion world model to run locally on my phone | 355 | demo: [https://flappybird.njkumar.com/](https://flappybird.njkumar.com/)
blogpost: [https://njkumar.com/optimizing-flappy-bird-world-model-to-run-in-a-web-browser/](https://njkumar.com/optimizing-flappy-bird-world-model-to-run-in-a-web-browser/)
I finally got some time to put some development into this, but I optimiz... | 2025-07-23T23:50:32 | https://v.redd.it/71l2pz57opef1 | fendiwap1234 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7p7ek | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/71l2pz57opef1/DASHPlaylist.mpd?a=1755906646%2CYjk2YmQ5ZWY5NDYyYjY4MTIyMGNhYjZkZTVhNTU5NWZiZDQxZWZlNzI1ZDdhNjY5YTcyNTZhYzQ5MjFiNWQwYg%3D%3D&v=1&f=sd', 'duration': 11, 'fallback_url': 'https://v.redd.it/71l2pz57opef1/DASH_1080.mp4?source=fallback', 'h... | t3_1m7p7ek | /r/LocalLLaMA/comments/1m7p7ek/i_optimized_a_flappy_bird_diffusion_world_model/ | false | false | 355 | {'enabled': False, 'images': [{'id': 'amUyMHN6NTdvcGVmMWYgHCQ9DIysdR_0vUEVaz1SKLs_9lKimNvson53CWJK', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/amUyMHN6NTdvcGVmMWYgHCQ9DIysdR_0vUEVaz1SKLs_9lKimNvson53CWJK.png?width=108&crop=smart&format=pjpg&auto=webp&s=82d4dd1d9fe94438a59143a22dda39ec75f1... | |
Optimizing inference on GPU + CPU | 3 | What tools and settings enable optimal performance with CPU + GPU inference (partial offloading)? Here's my setup, which runs at ~7.2 t/s, the maximum I've been able to squeeze out by experimenting with settings in LM Studio and llama.cpp. As we get more model releases that often don't fit entirely in VRAM, it s... | 2025-07-23T23:27:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m7oolz/optimizing_inference_on_gpu_cpu/ | SubstantialSock8002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7oolz | false | null | t3_1m7oolz | /r/LocalLLaMA/comments/1m7oolz/optimizing_inference_on_gpu_cpu/ | false | false | self | 3 | null |
Alienware Area-51 Gaming Desktop. Thoughts for local inference and fine-tuning small models? | 0 | I’m new to desktops. I’ve only ever had laptops. Would this be a good setup for local inference? The GPU has 32GB of VRAM and over 1 TB/s of memory bandwidth.
Other comments have led me to believe that the motherboard and CPU matter as well, but I am unsure why. Any help y'all can provide would be great | 2025-07-23T23:10:51 | https://www.reddit.com/gallery/1m7obdf | skinnyjoints | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m7obdf | false | null | t3_1m7obdf | /r/LocalLLaMA/comments/1m7obdf/alienware_area51_gaming_desktop_thoughts_for/ | false | false | 0 | null |
Is there a future for local models? | 118 | I'm seeing a trend in recent advancements in open source models, they're getting big. DeepSeek V3 (670B), Kimi K2 (1T), and now Qwen3-235B.. I'm starting to lose hope for the local scene as model sizes begin to creep further away from what we can run on consumer hardware. If the scaling laws continue to hold true (whic... | 2025-07-23T23:01:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m7o3u8/is_there_a_future_for_local_models/ | ASTRdeca | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7o3u8 | false | null | t3_1m7o3u8 | /r/LocalLLaMA/comments/1m7o3u8/is_there_a_future_for_local_models/ | false | false | self | 118 | null |
How can you tell if a model is uncensored and can write NSFW material? | 0 | I've been looking for the best model to write long-form NSFW erotic stories and while the journey has been fun and I've learned a lot, I'm still very confused.
At first I thought only models with "abliterated" in their name could do uncensored, but then I found other models recommended with "Hell California", some mod... | 2025-07-23T22:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m7o21h/how_can_you_tell_if_a_model_is_uncensored_and_can/ | wtfislandfill | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7o21h | false | null | t3_1m7o21h | /r/LocalLLaMA/comments/1m7o21h/how_can_you_tell_if_a_model_is_uncensored_and_can/ | false | false | nsfw | 0 | null |
How big is Kimi K2 exactly? How big is Qwen 3 Coder 480B exactly? | 0 | And more importantly, exactly how many params are active per token?
I mean an exact number like "1029190869528", not "1 trillion". Some of the info is hard to find.
- How many exact params for each of the 61 layers? I notice layers 59 and 60 are a different size than those before layer 58.
- Model hidden size (dime... | 2025-07-23T22:46:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m7nqvz/how_big_is_kimi_k2_exactly_how_big_is_qwen_3/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7nqvz | false | null | t3_1m7nqvz | /r/LocalLLaMA/comments/1m7nqvz/how_big_is_kimi_k2_exactly_how_big_is_qwen_3/ | false | false | self | 0 | null |
Has anyone tested or know of tests for Qwen3 Coder long context length? | 3 | How is it holding up to 64k, 128, 256, 512k, 1Mil? | 2025-07-23T22:31:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m7ne51/has_anyone_tested_or_know_of_tests_for_qwen3/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7ne51 | false | null | t3_1m7ne51 | /r/LocalLLaMA/comments/1m7ne51/has_anyone_tested_or_know_of_tests_for_qwen3/ | false | false | self | 3 | null |
Kimi K2 vs Qwen 3 Coder - Coding Tests | 37 | I tested the two models in VSCode, Cline, Roo Code and now Kimi a bit in Windsurf. Here are my takeaways (and video of one of the tests in the comments section):
- NB: FOR QWEN 3 CODER, IF YOU USE OPEN ROUTER, PLEASE REMOVE ALIBABA AS AN INFERENCE PROVIDER AS I SHOW IN THE VID (IT'S UP TO $60/million output tokens)
... | 2025-07-23T22:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m7n5pq/kimi_k2_vs_qwen_3_coder_coding_tests/ | marvijo-software | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7n5pq | false | null | t3_1m7n5pq | /r/LocalLLaMA/comments/1m7n5pq/kimi_k2_vs_qwen_3_coder_coding_tests/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'Aldc2j3i4vBAmSnaWtuuPWDwAG94v-Yx-DdE_F3o3ZA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Aldc2j3i4vBAmSnaWtuuPWDwAG94v-Yx-DdE_F3o3ZA.jpeg?width=108&crop=smart&auto=webp&s=7461f771e89f4be20b2a2b188a8b5c97a354e32f', 'width': 108}, {'height': 162, 'url': '... |
would this make an ai dev's life easier? | 0 | So my sister's girlfriend is a CS major (masters), and lately she’s been deep into building this SDK that helps developers work with *multiple AI agents* more easily, like local LLMs or narrow models that need to talk to each other.
she’s not trying to make another langchain/crewai clone. this is more like a **lightwe... | 2025-07-23T22:11:51 | https://www.reddit.com/r/LocalLLaMA/comments/1m7mwog/would_this_make_an_ai_devs_life_easier/ | Soggy-Guava-1218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7mwog | false | null | t3_1m7mwog | /r/LocalLLaMA/comments/1m7mwog/would_this_make_an_ai_devs_life_easier/ | false | false | self | 0 | null |
Analyzing CSV and structured data - RAG, MCP, tools, or plain old scripting? | 1 | I'm new to running LLM's locally and have been working on a new project that has an "AI powered" requirement... I've learned a ton in the process but feel like I'm missing something.
The idea is to take a large csv that has been aggregated and formatted from various other sources, then feed that to an LLM that can ide... | 2025-07-23T22:09:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m7mu6e/analyzing_csv_and_structured_data_rag_mcp_tools/ | Tactical_Chicken | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7mu6e | false | null | t3_1m7mu6e | /r/LocalLLaMA/comments/1m7mu6e/analyzing_csv_and_structured_data_rag_mcp_tools/ | false | false | self | 1 | null |
Best edge model for mobile - Qwen, LFM2, Gemma3N? | 1 | I'm looking for leads on the best edge model to deploy in an email mobile app. Tasks are closed IE (extract flight confirmation details), Summarize this newsletter, and Draft an email response.
Notable considerations
* Most emails are less than 5k in length
* Fewer parameters mean better battery efficiency
* Inference t... | 2025-07-23T21:59:20 | https://www.reddit.com/r/LocalLLaMA/comments/1m7mlcr/best_edge_model_for_mobile_qwen_lfm2_gemma3n/ | yonz- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7mlcr | false | null | t3_1m7mlcr | /r/LocalLLaMA/comments/1m7mlcr/best_edge_model_for_mobile_qwen_lfm2_gemma3n/ | false | false | self | 1 | null |
I'll help build your local LLM for free | 0 | Hey folks – I’ve been exploring local LLMs more seriously and found the best way to get deeper is by teaching and helping others. I’ve built a couple local setups and work in the AI team at one of the big four consulting firms. I’ve also got \~7 years in AI/ML, and have helped some of the biggest companies build end-to... | 2025-07-23T21:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1m7m9t8/ill_help_build_your_local_llm_for_free/ | decentralizedbee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7m9t8 | false | null | t3_1m7m9t8 | /r/LocalLLaMA/comments/1m7m9t8/ill_help_build_your_local_llm_for_free/ | false | false | self | 0 | null |
text-only support for GLM-4.1V-9B-Thinking has been merged into llama.cpp | 25 | A tiny change in the converter to support GLM-4.1V-9B-Thinking (no recompilation needed, just generate the GGUF). | 2025-07-23T21:41:39 | https://github.com/ggml-org/llama.cpp/pull/14823 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m7m5br | false | null | t3_1m7m5br | /r/LocalLLaMA/comments/1m7m5br/textonly_support_for_glm41v9bthinking_has_been/ | false | false | default | 25 | {'enabled': False, 'images': [{'id': 'UPHmsdQY22p2HzFU321gzdvuHJO8Xndf_nWTGMjsBKw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UPHmsdQY22p2HzFU321gzdvuHJO8Xndf_nWTGMjsBKw.png?width=108&crop=smart&auto=webp&s=dca6399394d0c6a422906d4fb5fb8e66090699e9', 'width': 108}, {'height': 108, 'url': 'h... |
AI.Gov | President Trump's AI Strategy and Action Plan | 0 | 2025-07-23T21:41:24 | https://www.ai.gov/ | fallingdowndizzyvr | ai.gov | 1970-01-01T00:00:00 | 0 | {} | 1m7m534 | false | null | t3_1m7m534 | /r/LocalLLaMA/comments/1m7m534/aigov_president_trumps_ai_strategy_and_action_plan/ | false | false | default | 0 | null | |
MacBook model rank | 2 | Is anyone maintaining a "fits in a MacBook Pro" kind of leaderboard for open models? It's by far the form factor I've seen colleagues most interested in for running open models.
I know you can just see the number of parameters, active parameters in MoEs, etc., but a nice leaderboard with some tokens/sec average would be useful for... | 2025-07-23T21:23:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m7lp0z/macbook_model_rank/ | JCx64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7lp0z | false | null | t3_1m7lp0z | /r/LocalLLaMA/comments/1m7lp0z/macbook_model_rank/ | false | false | self | 2 | null |
Higgs Audio V2 - Open Multi-Speaker TTS Model - Impressive Testing Results | 37 | Higgs Audio V2 is an advanced, open-source audio generation model developed by Boson AI, designed to produce highly expressive and lifelike speech with robust multi-speaker dialogue capabilities.
Some Highlights:
🎧 Trained on 10M hours of diverse audio — speech, music, sound events, and natural conversations ... | 2025-07-23T21:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m7lj3x/higgs_audio_v2_open_multispeaker_tts_model/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7lj3x | false | null | t3_1m7lj3x | /r/LocalLLaMA/comments/1m7lj3x/higgs_audio_v2_open_multispeaker_tts_model/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': 'YT-LpJHqk9Hd07EBQIKlDPKyBSQF6cqbAxMCvw22Vdk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YT-LpJHqk9Hd07EBQIKlDPKyBSQF6cqbAxMCvw22Vdk.jpeg?width=108&crop=smart&auto=webp&s=236a03bf932c13a51b9e805f9e9362659054558c', 'width': 108}, {'height': 162, 'url': '... |
Puget Systems Threadripper PRO 9000WX Llama Prompt Processing & Token Generation benchmarks | 7 | 2025-07-23T21:11:00 | https://imgur.com/a/EDYfW8Z | Caffdy | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1m7ld4z | false | {'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 60, 'html': '<iframe class="embedly-embed" src="https:/... | t3_1m7ld4z | /r/LocalLLaMA/comments/1m7ld4z/puget_systems_threadripper_pro_9000wx_llama/ | false | false | default | 7 | {'enabled': False, 'images': [{'id': 'GLm1hgJojxMbTwvUw-Lc6StlFj9R36mDvuxy3H3bcNc', 'resolutions': [{'height': 128, 'url': 'https://external-preview.redd.it/EoAqgMVFxHtMcH_N1MqDCS4XiJk394hpgLml-L9lTR8.jpg?width=108&crop=smart&auto=webp&s=ad075926dde531bde6baeaa59a7e8de8b9783ff7', 'width': 108}, {'height': 256, 'url': '... | |
Built a Universal RAG + Memory System for Claude with MCP - Production Ready | 0 | A week ago I shared an early prototype and got amazing feedback. Main request? "Show us how to actually install this properly."
**The problem:** Every time you restart Claude Code CLI, you lose everything.
**What I built:** RagCore - universal RAG system with persistent memory via MCP stdio. Claude remembers your pro... | 2025-07-23T20:56:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m7kz8s/built_a_universal_rag_memory_system_for_claude/ | Basic_Soft9158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7kz8s | false | null | t3_1m7kz8s | /r/LocalLLaMA/comments/1m7kz8s/built_a_universal_rag_memory_system_for_claude/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'xrkX6jGeRIvp8RnwqY5OlMAx1guQn5jFdJg4hbnVUQ8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xrkX6jGeRIvp8RnwqY5OlMAx1guQn5jFdJg4hbnVUQ8.png?width=108&crop=smart&auto=webp&s=1accd17d757c1d6bca607d221f043a9a301e1f59', 'width': 108}, {'height': 108, 'url': 'h... |
Less than two weeks after Kimi K2's release, Alibaba Qwen's new Qwen3-Coder surpasses it with half the size and double the context window. Despite a significant initial lead, open source models are catching up to closed source and seem to be reaching escape velocity. | 267 | 2025-07-23T20:40:28 | abdouhlili | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7kkyn | false | null | t3_1m7kkyn | /r/LocalLLaMA/comments/1m7kkyn/less_than_two_weeks_kimi_k2s_release_alibaba/ | false | false | default | 267 | {'enabled': True, 'images': [{'id': 'krjfba3oqoef1', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/krjfba3oqoef1.jpeg?width=108&crop=smart&auto=webp&s=8cf6d39fa1fa4f5683732a0b8993daf74e849afa', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/krjfba3oqoef1.jpeg?width=216&crop=smart&auto=w... | |
Gemma3/other, Langchain, ChromaDb, RAG - a few questions | 2 | I'm new to LLMs and I'm trying to understand a few things.
Isn't RAG similar to a search engine that looks at keywords typed by the user, then feeds them to an LLM to "understand" and generate a nice response back?
Let's say instead of RAG I'm using something like Elasticsearch/Meilisearch - would the results be that different? ... | 2025-07-23T20:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m7kfet/gemma3other_langchain_chromadb_rag_a_few_questions/ | viitorfermier | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m7kfet | false | null | t3_1m7kfet | /r/LocalLLaMA/comments/1m7kfet/gemma3other_langchain_chromadb_rag_a_few_questions/ | false | false | self | 2 | null |
Kimi K2 writes so naturally that the most advanced AI detector (GPTZero) will classify it as human written | 7 | Also, I believe this was using close to full precision (through Fireworks), and I set its temperature sort-of high since it doesn't have that many active parameters - at around 1 or so. This temp is great for creative writing. | 2025-07-23T20:33:38 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7kekx | false | null | t3_1m7kekx | /r/LocalLLaMA/comments/1m7kekx/kimi_k2_writes_so_naturally_that_the_most/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'd7k7i0adpoef1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/d7k7i0adpoef1.png?width=108&crop=smart&auto=webp&s=3a5e2e09df2ee33c454bdad2d6bbe23327a27552', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/d7k7i0adpoef1.png?width=216&crop=smart&auto=webp... | |
Kimi K2 writes so naturally that the most advanced AI detector (GPTZero) will classify it as human written | 1 | Also, I believe this was using close to full precision (through Fireworks), and I set its temperature pretty high since it doesn't have that many active parameters, at around 1 or so. This temp is great for creative writing.
| 2025-07-23T20:32:19 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m7kdcx | false | null | t3_1m7kdcx | /r/LocalLLaMA/comments/1m7kdcx/kimi_k2_writes_so_naturally_that_the_most/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'wrcwqzstmoef1', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/wrcwqzstmoef1.png?width=108&crop=smart&auto=webp&s=4165ba266a8a297c1815164999386b130abc4f54', 'width': 108}, {'height': 88, 'url': 'https://preview.redd.it/wrcwqzstmoef1.png?width=216&crop=smart&auto=webp... | |
Demis Hassabis @ Lex Fridman Podcast: Round 2 | 0 | 2025-07-23T20:30:12 | https://youtu.be/-HzgcbRXUK8?si=I0tQridjW4EgudmF | tassa-yoniso-manasi | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1m7kbeq | false | {'oembed': {'author_name': 'Lex Fridman', 'author_url': 'https://www.youtube.com/@lexfridman', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-HzgcbRXUK8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope... | t3_1m7kbeq | /r/LocalLLaMA/comments/1m7kbeq/demis_hassabis_lex_fridman_podcast_round_2/ | false | false | 0 | {'enabled': False, 'images': [{'id': '8ON0f1m04iK5iw4ZK4CmYbXXuqniiNO62KAiG8HMeK4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8ON0f1m04iK5iw4ZK4CmYbXXuqniiNO62KAiG8HMeK4.jpeg?width=108&crop=smart&auto=webp&s=d6f84c47558f9e99eb805fa614504d07853e00a6', 'width': 108}, {'height': 162, 'url': '... |