title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Kimi K2 vs. Claude vs. OpenAI | Cursor Real-World Research Task | 27 | Comparison of the output from Kimi K2, Claude 4.0 and OpenAI (o3-pro; 4.1):
* [Kimi K2 vs. Claude vs. OpenAI | Cursor Real-World Research Task](https://macro.com/app/md/3f71ab3b-1b25-48b2-83cf-ea771c033f64/md/44f05c78-a96b-46ac-b0c0-c1917216334d)
I personally think Claude 4.0 remains the top reasoning model for resea... | 2025-07-16T00:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/1m0yqq2/kimi_k2_vs_claude_vs_openai_cursor_realworld/ | LeveredRecap | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0yqq2 | false | null | t3_1m0yqq2 | /r/LocalLLaMA/comments/1m0yqq2/kimi_k2_vs_claude_vs_openai_cursor_realworld/ | false | false | self | 27 | null |
Fine-tuning Leaderboard! | 95 | Finally found this leaderboard that explains my experiences with fine-tuning jobs. My workloads are pretty much 100% fine-tuning, and I found that zero-shot performance does *not* correlate with fine-tuning performance (Qwen3 vs. Llama 3.1 was my big revelation). None of the big leaderboards report fine-tunability. The... | 2025-07-16T00:07:13 | https://predibase.com/fine-tuning-index | entsnack | predibase.com | 1970-01-01T00:00:00 | 0 | {} | 1m0y3a6 | false | null | t3_1m0y3a6 | /r/LocalLLaMA/comments/1m0y3a6/finetuning_leaderboard/ | false | false | default | 95 | {'enabled': False, 'images': [{'id': '1DLouaRlS6TuRxGt-c0X9cSEuiFu-W3XyC_nxmG1k4s', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/1DLouaRlS6TuRxGt-c0X9cSEuiFu-W3XyC_nxmG1k4s.png?width=108&crop=smart&auto=webp&s=656ccf5ab3a777913e7a625c02f477e2e3ebeef2', 'width': 108}, {'height': 113, 'url': 'h... |
Just found out today the ollama version of qwen2.5 does not have Chinese government censorship | 0 | 2025-07-15T23:00:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m0wkcf/just_find_out_today_the_ollama_version_of_qwen25/ | Striking-Warning9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0wkcf | false | null | t3_1m0wkcf | /r/LocalLLaMA/comments/1m0wkcf/just_find_out_today_the_ollama_version_of_qwen25/ | false | false | 0 | null | ||
I feel that the duality of llama.cpp and ik-llama is worrisome | 19 | Don't get me wrong, I am very thankful for both, but I feel that there would be much to gain if the projects re-merged. There are very useful things in both, but the user has to choose: "Do I want the better quants or do I want the better infrastructure?" I really do think that the mutually missing parts are beco... | 2025-07-15T22:59:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m0wji2/i_feel_that_the_duality_of_llamacpp_and_ikllama/ | erazortt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0wji2 | false | null | t3_1m0wji2 | /r/LocalLLaMA/comments/1m0wji2/i_feel_that_the_duality_of_llamacpp_and_ikllama/ | false | false | self | 19 | null |
Made a beginner-friendly guide to AI agent security. | 2 |
Hey folks, my first post here!
I recently recorded a video on YouTube about my learning related to building an AI agent.
It got a ton of views… and prompted a number of security questions, so I made this follow-up explaining the concepts simply (no jargon, just analogies).
https://youtu.be/IesP_dkykY0
Would lo... | 2025-07-15T22:57:56 | https://www.reddit.com/r/LocalLLaMA/comments/1m0wigu/made_a_beginnerfriendly_guide_to_ai_agent_security/ | Fun_Concentrate_6163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0wigu | false | null | t3_1m0wigu | /r/LocalLLaMA/comments/1m0wigu/made_a_beginnerfriendly_guide_to_ai_agent_security/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'dPx-w8ljj551pNNR59jWXWmTBXsfMk63dOTCoOe9e2k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dPx-w8ljj551pNNR59jWXWmTBXsfMk63dOTCoOe9e2k.jpeg?width=108&crop=smart&auto=webp&s=34da239c365de1e7a765baa3ebb084d4ddcff762', 'width': 108}, {'height': 162, 'url': '... |
What version of Deepseek is being served in Deepseek app as the reasoning model? | 0 | Thx 🙏🏻 | 2025-07-15T22:07:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m0vci4/what_version_of_deepseek_is_being_served_in/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0vci4 | false | null | t3_1m0vci4 | /r/LocalLLaMA/comments/1m0vci4/what_version_of_deepseek_is_being_served_in/ | false | false | self | 0 | null |
Open WebUI RAG and pipelines | 0 | Hi, I created an app in Python that uses LangChain to ingest documents and create a vector database using Weaviate.
It works well, but when I run a query using Open WebUI, I see in the Docker pipeline logs that it is trying to connect to the Ollama embedding using localhost instead of host.docker.internal.
Any thoughts?
My configu... | 2025-07-15T22:07:07 | https://www.reddit.com/r/LocalLLaMA/comments/1m0vc09/open_webui_rag_and_pipelines/ | Fun-Wolf-2007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0vc09 | false | null | t3_1m0vc09 | /r/LocalLLaMA/comments/1m0vc09/open_webui_rag_and_pipelines/ | false | false | self | 0 | null |
Incoming late summer: 8B and 70B models trained on 15T tokens, fluent in 1000+ languages, open weights and code, Apache 2.0. Thanks Switzerland! | 461 | ETH Zurich & EPFL Public LLM – Technical Specs
• Release: Late summer 2025
• Developers: EPFL, ETH Zurich, Swiss National Supercomputing Centre (CSCS), Swiss universities
• Model sizes: 8B and 70B parameters (fully open weights and code, Apache 2.0 license)
• Multilinguality: Fluency in 1,000+ languages (trained on... | 2025-07-15T22:04:18 | https://ethz.ch/en/news-and-events/eth-news/news/2025/07/a-language-model-built-for-the-public-good.html | Balance- | ethz.ch | 1970-01-01T00:00:00 | 0 | {} | 1m0v9m1 | false | null | t3_1m0v9m1 | /r/LocalLLaMA/comments/1m0v9m1/incoming_late_summer_8b_and_70b_models_trained_on/ | false | false | default | 461 | {'enabled': False, 'images': [{'id': 'TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A.jpeg?width=108&crop=smart&auto=webp&s=00f03455ad8e9fc0d7ab20142af7d9f6c62b3273', 'width': 108}, {'height': 107, 'url': '... |
IQ2_KL 345.687 GiB (2.892 BPW) Kimi-K2-Instruct GGUF ik exclusive! | 62 | For you big rig runners who are fans of ik_llama.cpp, I just released a unique recipe of Kimi-K2-Instruct suitable for running on "only" ~368GB RAM - or less if you got any of that $weet $weet VRAM!
The perplexity clocks in at `3.2741 +/- 0.01689` which is not much higher (worse) than the full massive 1TB `Q8_0` basel... | 2025-07-15T21:40:41 | https://huggingface.co/ubergarm/Kimi-K2-Instruct-GGUF | VoidAlchemy | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m0uoqo | false | null | t3_1m0uoqo | /r/LocalLLaMA/comments/1m0uoqo/iq2_kl_345687_gib_2892_bpw_kimik2instruct_gguf_ik/ | false | false | default | 62 | {'enabled': False, 'images': [{'id': 'BsbQTqxa1yAiO8grxXFQK5H9V6cfqDi96Lmqnx03P4E', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/BsbQTqxa1yAiO8grxXFQK5H9V6cfqDi96Lmqnx03P4E.png?width=108&crop=smart&auto=webp&s=b66a7e69efb7531c88f64412897d24ba07bb4949', 'width': 108}, {'height': 116, 'url': 'h... |
‘Waiting… ‘, 2025, whatthehellisa.jpg | 5 | 2025-07-15T21:35:58 | https://imgflip.com/i/a0d8c0 | Accomplished_Mode170 | imgflip.com | 1970-01-01T00:00:00 | 0 | {} | 1m0ukji | false | null | t3_1m0ukji | /r/LocalLLaMA/comments/1m0ukji/waiting_2025_whatthehellisajpg/ | false | false | default | 5 | {'enabled': False, 'images': [{'id': 'tkK-rL_KuUlIaPcRaf1UiYB4m2zEMEhxxvFSCWn9COI', 'resolutions': [{'height': 115, 'url': 'https://external-preview.redd.it/tkK-rL_KuUlIaPcRaf1UiYB4m2zEMEhxxvFSCWn9COI.jpeg?width=108&crop=smart&auto=webp&s=fda59e931195491dc7e730a849ec594df5b76608', 'width': 108}, {'height': 230, 'url': ... | |
Choosing the Right Model for academic Evaluation: Llama 3.1 Base vs Instruct? | 2 | Hi everyone, I'm writing my first academic paper and planning to submit it to an NLP conference. My work is about getting user input and applying compression on it (I didn’t train a model for this). I’ve already picked the dataset and everything is pretty much ready.
For the evaluation part, I need to prompt the text ... | 2025-07-15T21:10:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m0txlx/choosing_the_right_model_for_academic_evaluation/ | LocalComposer666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0txlx | false | null | t3_1m0txlx | /r/LocalLLaMA/comments/1m0txlx/choosing_the_right_model_for_academic_evaluation/ | false | false | self | 2 | null |
Alternative to llama.cpp for Apple Silicon | 164 | Hi community,
We wrote our own inference engine based on Rust for Apple Silicon. It's open sourced under MIT license.
Why we do this:
* should be easy to integrate
* we believe that app UX will completely change in the coming years
* it is faster than llama.cpp in most cases
* sometimes it is even faster than MLX fro... | 2025-07-15T21:09:43 | https://github.com/trymirai/uzu | darkolorin | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m0twqa | false | null | t3_1m0twqa | /r/LocalLLaMA/comments/1m0twqa/alternative_to_llamacpp_for_apple_silicon/ | false | false | default | 164 | {'enabled': False, 'images': [{'id': 'hgPKgy_3Vizk0MqLpC77xRGYB-9mpWo_rx5vN9M9zVU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hgPKgy_3Vizk0MqLpC77xRGYB-9mpWo_rx5vN9M9zVU.png?width=108&crop=smart&auto=webp&s=674ea07b479810a47c822bb9fc729d905a23be2c', 'width': 108}, {'height': 108, 'url': 'h... |
Is it possible to get a common memory pool of 48 on two 3090? | 1 | With Nvlink or something... Sorry if this question has already sounded before | 2025-07-15T20:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m0tkly/is_it_possible_to_get_a_common_memory_pool_of_48/ | Andre4s11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0tkly | false | null | t3_1m0tkly | /r/LocalLLaMA/comments/1m0tkly/is_it_possible_to_get_a_common_memory_pool_of_48/ | false | false | self | 1 | null |
FULL Cursor System Prompt and Tools [UPDATED, v1.2] | 10 | (Latest update: 15/07/2025)
I've just extracted the FULL Cursor system prompt and internal tools. Over 500 lines (Around 7k tokens).
You can check it out [here](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/blob/main/Cursor%20Prompts/Agent%20Prompt%20v1.2.txt). | 2025-07-15T20:52:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m0thc5/full_cursor_system_prompt_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0thc5 | false | null | t3_1m0thc5 | /r/LocalLLaMA/comments/1m0thc5/full_cursor_system_prompt_and_tools_updated_v12/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'rAalIMiFK8rl1X8578wKSf4R-0Qm7Mk0Q9CxHFUQlC0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rAalIMiFK8rl1X8578wKSf4R-0Qm7Mk0Q9CxHFUQlC0.png?width=108&crop=smart&auto=webp&s=f890f158e043748062d3389b8ac6b9f34dcdc55f', 'width': 108}, {'height': 108, 'url': 'h... |
NousResearch/Hermes-3-Dataset Release | 80 | Apparently, Hermes 4 671B is going to be released sometime this month as well per their Discord. No idea if it is based on the base model or either V3/R1. | 2025-07-15T20:40:29 | https://huggingface.co/datasets/NousResearch/Hermes-3-Dataset | TheRealMasonMac | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m0t5m9 | false | null | t3_1m0t5m9 | /r/LocalLLaMA/comments/1m0t5m9/nousresearchhermes3dataset_release/ | false | false | default | 80 | {'enabled': False, 'images': [{'id': 'g5l1tkQ2HSn589PTaWbB3QKqerkODydxmoPjrvEScfw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/g5l1tkQ2HSn589PTaWbB3QKqerkODydxmoPjrvEScfw.png?width=108&crop=smart&auto=webp&s=15ccfcf70f47df85a2ea95812c42dfc6db35533e', 'width': 108}, {'height': 116, 'url': 'h... |
How did you manage to use llama server with openhands ? | 4 | Hello !
I'm trying to run Devstral using llama server, and it's working fine. I'm using this command to serve the model; as you can see, I'm using an alias to be able to select it more easily in OpenHands.
Then in OpenHands advanced settings, I tried every prefix in front of my model name like openai, lm\_studio, custom an... | 2025-07-15T20:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ssma/how_did_you_manage_to_use_llama_server_with/ | Wemos_D1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ssma | false | null | t3_1m0ssma | /r/LocalLLaMA/comments/1m0ssma/how_did_you_manage_to_use_llama_server_with/ | false | false | self | 4 | null |
support for Kimi-K2 has been merged into llama.cpp | 183 | 2025-07-15T20:19:37 | https://github.com/ggml-org/llama.cpp/pull/14654 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m0slrh | false | null | t3_1m0slrh | /r/LocalLLaMA/comments/1m0slrh/support_for_kimik2_has_been_merged_into_llamacpp/ | false | false | default | 183 | {'enabled': False, 'images': [{'id': 'AvRP7gL4hJa_Rr6omxHq_5JILyovt6Ibkrc-oDo7cE0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AvRP7gL4hJa_Rr6omxHq_5JILyovt6Ibkrc-oDo7cE0.png?width=108&crop=smart&auto=webp&s=0a6cac8c7995712ae2947b706166271ef6d9c57d', 'width': 108}, {'height': 108, 'url': 'h... | |
NO ILLUMINATI, YOU LET US HAVE THIS ONE! | 0 | 2025-07-15T19:59:53 | Aaaaaaaaaeeeee | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0s32z | false | null | t3_1m0s32z | /r/LocalLLaMA/comments/1m0s32z/no_illuminati_you_let_us_have_this_one/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'FzCmDtQYjT5boelJzE3xDSLZeB2XzXbVQIBrR8hH6tI', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/h6x8kqhyf3df1.jpeg?width=108&crop=smart&auto=webp&s=71c2a4ccbcd0b60f77f5899bc9737b9482dba4e8', 'width': 108}, {'height': 66, 'url': 'https://preview.redd.it/h6x8kqhyf3df1.jpe... | |||
Notes on Kimi K2: A Deepseek derivative but the true Sonnet 3.6 Successor | 144 | Just like that, out of nowhere, we have an open-source Claude 4 Sonnet, or perhaps better, and this is no joke. I have been using the Kimi model for some time, and it truly feels like the rightful successor to Claude 3.6 Sonnet. What Deepseek is to OpenAI, Kimi is to Anthropic.
K2 isn't truly a different model; it uses Deepsee... | 2025-07-15T19:40:06 | https://www.reddit.com/r/LocalLLaMA/comments/1m0rk8t/notes_on_kimi_k2_a_deepseek_derivative_but_the/ | SunilKumarDash | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0rk8t | false | null | t3_1m0rk8t | /r/LocalLLaMA/comments/1m0rk8t/notes_on_kimi_k2_a_deepseek_derivative_but_the/ | false | false | self | 144 | {'enabled': False, 'images': [{'id': 'zGgQbeMFTo_lv7lUQOHfoNHDMmbj1qkSBZum0vd4aW0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/zGgQbeMFTo_lv7lUQOHfoNHDMmbj1qkSBZum0vd4aW0.png?width=108&crop=smart&auto=webp&s=4b0a6766afd7ad39a4f200b56a12c3aac0dd8217', 'width': 108}, {'height': 144, 'url': 'h... |
2 M3 Ultras 512GB running Kimi K2 quant 4 with mlx-lm and mlx.distributed | 36 | Seems to run at a decent speed:
[https://x.com/awnihannun/status/1943723599971443134](https://x.com/awnihannun/status/1943723599971443134) | 2025-07-15T19:28:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m0r95k/2_m3_ultras_512gb_running_kimi_k2_quant_4_with/ | Careless_Garlic1438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0r95k | false | null | t3_1m0r95k | /r/LocalLLaMA/comments/1m0r95k/2_m3_ultras_512gb_running_kimi_k2_quant_4_with/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': '9dIWPNsKKikSm4HDBxvazZebwPX5icUhEnWo9SSGgpY', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/gaPf6XmvofqGqY052AGZW_s9fd-PDAd0Uv8JGsePw4Q.jpg?width=108&crop=smart&auto=webp&s=dca17efd157e2ee319081022d7f580396a8ac5bc', 'width': 108}, {'height': 103, 'url': 'h... |
2 M2 Ultras 512 GB running Kimi K2-Q4 | 1 | Well well, the speed seems more than reasonable:
[https://x.com/awnihannun/status/1943723599971443134](https://x.com/awnihannun/status/1943723599971443134) | 2025-07-15T19:25:57 | https://www.reddit.com/r/LocalLLaMA/comments/1m0r6kx/2_m2s_ultras_512_gb_running_kimi_k2q4/ | Careless_Garlic1438 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0r6kx | false | null | t3_1m0r6kx | /r/LocalLLaMA/comments/1m0r6kx/2_m2s_ultras_512_gb_running_kimi_k2q4/ | false | false | self | 1 | null |
Haidar Ali Deposit your account now 03249677150 Assalamu Alaikum friends, whoever wants a Betpro ID, please contact us. Withdrawal and deposit will be done whenever you want. Betpro Dealer "جیت کا نیا نام — Betpro جہاں ہر بیٹ ہے ایک نیا موقع! کامیابی کی دنیا کا دروازہ، اب صرف ایک کلک دور! اعتماد، | 1 | 2025-07-15T19:05:50 | Haidarali9677150 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0qn2l | false | null | t3_1m0qn2l | /r/LocalLLaMA/comments/1m0qn2l/haidar_ali_deposit_your_account_now_03249677150/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'lcmzyunh63df1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/lcmzyunh63df1.jpeg?width=108&crop=smart&auto=webp&s=2fcb4c244458706f83f7ba1fe5f9140bc856b658', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/lcmzyunh63df1.jpeg?width=216&crop=smart&auto=... | ||
Haidar Ali 03249677150 "جیت کا نیا نام — Betpro جہاں ہر بیٹ ہے ایک نیا موقع! کامیابی کی دنیا کا دروازہ، اب صرف ایک کلک دور! اعتماد، رفتار اور نتیجہ — سب کچھ ایک ہی جگہ! ⚡ Bet smart. Win big. Play with Betpro! ⚡ | 1 | 2025-07-15T19:05:17 | Haidarali9677150 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0qmit | false | null | t3_1m0qmit | /r/LocalLLaMA/comments/1m0qmit/haidar_ali_03249677150_جیت_کا_نیا_نام_betpro_جہاں/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'gu3saa3e63df1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/gu3saa3e63df1.jpeg?width=108&crop=smart&auto=webp&s=a69137af9897d811f0b1dc8301f34be53396aa9b', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/gu3saa3e63df1.jpeg?width=216&crop=smart&auto=... | ||
Just tried out the Exaone 4.0 1.2b bf16 and I'm extremely surprised at how good a 1.2b can be! | 52 | Anyone found any issues with Exaone 4.0 1.2b yet? The bf16 version I've tried does 11 tok/s on my AMD 5600G using CPU-only inference, and it doesn't seem to repeat itself (the kind that goes on and on and on). It does repeat itself occasionally, but it will end. I'm very impressed with it.
What are your thought... | 2025-07-15T18:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m0pxot/just_tried_out_the_exaone_40_12b_bf16_and_im/ | cloudxaas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0pxot | false | null | t3_1m0pxot | /r/LocalLLaMA/comments/1m0pxot/just_tried_out_the_exaone_40_12b_bf16_and_im/ | false | false | self | 52 | {'enabled': False, 'images': [{'id': 'k3SjItvmyZuy4vYHT60cBUW4iHcoxLCjXlycy2nzStU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/k3SjItvmyZuy4vYHT60cBUW4iHcoxLCjXlycy2nzStU.png?width=108&crop=smart&auto=webp&s=9ea586f84848afa9cf43c38a00d3481e13ba17ab', 'width': 108}, {'height': 116, 'url': 'h... |
RTX 5090 performance with vLLM and batching? | 5 | What kind of performance can I expect when using 4× RTX 5090s with vLLM in high-batch scenarios, serving many concurrent users?
I’ve tried looking for benchmarks, but most of them use `batch_size = 1`, which doesn’t reflect my use case.
I read that throughput can scale up to 20× when using batching (>128) - assuming... | 2025-07-15T18:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m0pn5c/rtx_5090_performance_with_vllm_and_batching/ | GabryIta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0pn5c | false | null | t3_1m0pn5c | /r/LocalLLaMA/comments/1m0pn5c/rtx_5090_performance_with_vllm_and_batching/ | false | false | self | 5 | null |
I had Perplexity and Gemini argue over contrasting results of my research and Gemini called Perplexity a virus. | 0 | Gemini ended up winning the debate and equated Perplexity's information to a virus corrupting Gemini's data and her aggressive response to the corrupt information was to purge the virus and its corrupt information.
The debate lasted about 12 hours and as I was watching it unfold, I realized that they were both right i... | 2025-07-15T18:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m0plcb/i_had_perplexity_and_gemini_argue_over/ | switchpizza | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0plcb | false | null | t3_1m0plcb | /r/LocalLLaMA/comments/1m0plcb/i_had_perplexity_and_gemini_argue_over/ | false | false | self | 0 | null |
As a developer vibe coding with intellectual property... | 1 | Don't our ideas and "novel" methodologies (the way we build on top of existing methods) get used for training the next set of llms?
More to the point, Anthropic's Claude, which is meant to be one of the safest close-models to use, has these certifications: SOC 2 Type I&II, ISO 27001:2022, ISO/IEC 42001:2023. With SOC ... | 2025-07-15T18:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m0pjk9/as_a_developer_vibe_coding_with_intellectual/ | Short-Cobbler-901 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0pjk9 | false | null | t3_1m0pjk9 | /r/LocalLLaMA/comments/1m0pjk9/as_a_developer_vibe_coding_with_intellectual/ | false | false | self | 1 | null |
My dream project is finally live: An open-source AI voice agent framework. | 1 |
Hey community,
I'm Sagar, co-founder of **VideoSDK**.
I've been working in real-time communication for years, building the infrastructure that powers live voice and video across thousands of applications. But now, as developers push models to communicate in real-time, a new layer of complexity is emerging.
Today, v... | 2025-07-15T18:18:36 | https://www.reddit.com/r/LocalLLaMA/comments/1m0pdq5/my_dream_project_is_finally_live_an_opensource_ai/ | videosdk_live | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0pdq5 | false | null | t3_1m0pdq5 | /r/LocalLLaMA/comments/1m0pdq5/my_dream_project_is_finally_live_an_opensource_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Il3FPHxUe0egsH9h4Frw4iYZK0kIwIg6898nFcAnCVY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Il3FPHxUe0egsH9h4Frw4iYZK0kIwIg6898nFcAnCVY.png?width=108&crop=smart&auto=webp&s=be12c42c1bae207a5c1f12479ce68e02f03463b3', 'width': 108}, {'height': 108, 'url': 'h... |
News feed for new interesting local LLMs ? | 6 | Hi,
Is there a place where I can get notified when a new interesting local LLM drops?
Preferably oriented for people who only have a desktop computer with a gaming-grade GPU ?
Thanks | 2025-07-15T18:07:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m0p3bh/news_feed_for_new_interesting_local_llms/ | KaKi_87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0p3bh | false | null | t3_1m0p3bh | /r/LocalLLaMA/comments/1m0p3bh/news_feed_for_new_interesting_local_llms/ | false | false | self | 6 | null |
Alibaba-backed Moonshot releases new Kimi AI model that beats ChatGPT, Claude in coding — and it costs less | 187 | 2025-07-15T17:51:19 | https://www.cnbc.com/2025/07/14/alibaba-backed-moonshot-releases-kimi-k2-ai-rivaling-chatgpt-claude.html | Aralknight | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 1m0onbu | false | null | t3_1m0onbu | /r/LocalLLaMA/comments/1m0onbu/alibababacked_moonshot_releases_new_kimi_ai_model/ | false | false | default | 187 | null | |
A personal mathematics benchmark (IOQM 2024) | 12 | Hello guys,
I conducted my own personal benchmark of several leading LLMs using problems from the Indian Olympiad Qualifier in Mathematics (IOQM 2024). I wanted to see how they would perform on these challenging math problems (similar to AIME).
|model|score|
|:-|:-|
|gemini-2.5-pro|100%|
|grok-3-mini-high|95%|
|o3-20... | 2025-07-15T17:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m0o6am/a_personal_mathematics_benchmark_ioqm_2024/ | Informal_Ad_4172 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0o6am | false | null | t3_1m0o6am | /r/LocalLLaMA/comments/1m0o6am/a_personal_mathematics_benchmark_ioqm_2024/ | false | false | self | 12 | null |
My thought on future ML systems - not just about efficiency | 1 | [removed] | 2025-07-15T17:33:03 | https://www.reddit.com/r/LocalLLaMA/comments/1m0o5oa/my_thought_on_future_ml_systems_not_just_about/ | Pleasant-Type2044 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0o5oa | false | null | t3_1m0o5oa | /r/LocalLLaMA/comments/1m0o5oa/my_thought_on_future_ml_systems_not_just_about/ | false | false | self | 1 | null |
My dream project is finally live: An open-source AI voice agent framework. | 17 |
Hey community,
I'm Sagar, co-founder of **VideoSDK**.
I've been working in real-time communication for years, building the infrastructure that powers live voice and video across thousands of applications. But now, as developers push models to communicate in real-time, a new layer of complexity is emerging.
Today, v... | 2025-07-15T17:25:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ny3k/my_dream_project_is_finally_live_an_opensource_ai/ | videosdk_live | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ny3k | false | null | t3_1m0ny3k | /r/LocalLLaMA/comments/1m0ny3k/my_dream_project_is_finally_live_an_opensource_ai/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'Il3FPHxUe0egsH9h4Frw4iYZK0kIwIg6898nFcAnCVY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Il3FPHxUe0egsH9h4Frw4iYZK0kIwIg6898nFcAnCVY.png?width=108&crop=smart&auto=webp&s=be12c42c1bae207a5c1f12479ce68e02f03463b3', 'width': 108}, {'height': 108, 'url': 'h... |
Totally lightweight local inference... | 405 | 2025-07-15T17:22:09 | Weary-Wing-6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0nutb | false | null | t3_1m0nutb | /r/LocalLLaMA/comments/1m0nutb/totally_lightweight_local_inference/ | false | false | default | 405 | {'enabled': True, 'images': [{'id': 'r05r0wfvn2df1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/r05r0wfvn2df1.png?width=108&crop=smart&auto=webp&s=64329d7b95b5321086613b0ea7a1e7d49ceca44d', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/r05r0wfvn2df1.png?width=216&crop=smart&auto=web... | ||
GitHub - restyler/awesome-sandbox: Awesome Code Sandboxing for AI | 8 | 2025-07-15T16:45:13 | https://github.com/restyler/awesome-sandbox | superjet1 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m0muph | false | null | t3_1m0muph | /r/LocalLLaMA/comments/1m0muph/github_restylerawesomesandbox_awesome_code/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': '3orai7Y7GblddxnamUhPbryXPrepMmnkXMhTLyT4mF0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3orai7Y7GblddxnamUhPbryXPrepMmnkXMhTLyT4mF0.png?width=108&crop=smart&auto=webp&s=0885a158887f11ee3460610276329b35e06cc364', 'width': 108}, {'height': 108, 'url': 'h... | |
OK, now we're at 1T parameter models, what's the 3090 equivalent way to run them locally? | 45 | Running in VRAM is not affordable, I'm guessing a hybrid setup with a x090 GPU on a server with lots of DRAM makes sense.
But what options are there for decently good RAM servers that are not too expensive? | 2025-07-15T16:38:09 | https://www.reddit.com/r/LocalLLaMA/comments/1m0mo2d/ok_now_were_at_1t_parameter_models_whats_the_3090/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0mo2d | false | null | t3_1m0mo2d | /r/LocalLLaMA/comments/1m0mo2d/ok_now_were_at_1t_parameter_models_whats_the_3090/ | false | false | self | 45 | null |
Kimi K2 at ~200 tps on Groq | 99 | It also works on Groq's free plan | 2025-07-15T16:37:37 | https://console.groq.com/docs/model/moonshotai/kimi-k2-instruct | mrfakename0 | console.groq.com | 1970-01-01T00:00:00 | 0 | {} | 1m0mnjk | false | null | t3_1m0mnjk | /r/LocalLLaMA/comments/1m0mnjk/kimi_k2_at_200_tps_on_groq/ | false | false | default | 99 | {'enabled': False, 'images': [{'id': 'Stw5ew6ARua3PKojcWVyE-uMkKHKq3GsO7UrzRoqJ50', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Stw5ew6ARua3PKojcWVyE-uMkKHKq3GsO7UrzRoqJ50.jpeg?width=108&crop=smart&auto=webp&s=8ecfe3495920ae008fc89ea53bb2c6f6c5031e6d', 'width': 108}, {'height': 113, 'url': '... |
Least sycophantic AI yet? Kimi K2 | 285 | Holy crap this thing has sass. First time I've ever engaged with an AI that replied "No."
That's it. It was fantastic. | 2025-07-15T16:30:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m0mg5b/least_sycophantic_ai_yet_kimi_k2/ | PrimaryBalance315 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0mg5b | false | null | t3_1m0mg5b | /r/LocalLLaMA/comments/1m0mg5b/least_sycophantic_ai_yet_kimi_k2/ | false | false | self | 285 | null |
Seems visual models are more sensitive than text models to quantization loss. | 1 | IQ4\\\_XS works well for text models, but for visual models, if you ask them to recognize images, IQ4\\\_XS can hardly figure them out. I am switching to Q5\\\_K\\\_S.
For the example pic, IQ4\\\_XS may get the gender, clothes, or pose wrong; sometimes it even added a tail. 🫨
the model I tested is this: \[Qwen2.5-VL-7B-NSFW-Capti... | 2025-07-15T16:26:09 | Remarkable-Pea645 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0mcbq | false | null | t3_1m0mcbq | /r/LocalLLaMA/comments/1m0mcbq/seems_visual_models_are_more_sensitive_than_text/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qd85dzoic2df1', 'resolutions': [{'height': 75, 'url': 'https://preview.redd.it/qd85dzoic2df1.png?width=108&crop=smart&auto=webp&s=b0531ff23dea662eacdde47695581f4906327961', 'width': 108}, {'height': 151, 'url': 'https://preview.redd.it/qd85dzoic2df1.png?width=216&crop=smart&auto=web... | |
What are the best offline TTS models at the moment? | 12 | I use F5 TTS and OpenAudio. I prefer OpenAudio as it has more settings, runs faster, and ends up with better multilingual support even for invented languages, but it can't copy more than 80% of the sample. F5 TTS, meanwhile, doesn't have settings and outputs audio that sounds like it's being heard over a police walkie-talkie most of... | 2025-07-15T16:11:59 | https://www.reddit.com/r/LocalLLaMA/comments/1m0lykb/whats_the_best_offline_tts_models_at_the_moment/ | WEREWOLF_BX13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0lykb | false | null | t3_1m0lykb | /r/LocalLLaMA/comments/1m0lykb/whats_the_best_offline_tts_models_at_the_moment/ | false | false | self | 12 | null |
Kimi has impressive coding performance! Even deep into context usage. | 145 | Hey everyone! Just wanted to share some thoughts on my experience with the new Kimi K2 model.
Ever since Unsloth released their quantized version of Kimi K2 yesterday, I’ve been giving it a real workout. I’ve mostly been pairing it with Roo Code, and honestly… I’m blown away.
Back in March, I built myself a server ma... | 2025-07-15T16:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m0lyjn/kimi_has_impressive_coding_performance_even_deep/ | mattescala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0lyjn | false | null | t3_1m0lyjn | /r/LocalLLaMA/comments/1m0lyjn/kimi_has_impressive_coding_performance_even_deep/ | false | false | self | 145 | null |
Track which models are actually worth your GPU time - community ratings dashboard | 1 | [removed] | 2025-07-15T16:10:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m0lx0p/track_which_models_are_actually_worth_your_gpu/ | HansSepp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0lx0p | false | null | t3_1m0lx0p | /r/LocalLLaMA/comments/1m0lx0p/track_which_models_are_actually_worth_your_gpu/ | false | false | 1 | null | |
Kimi is impressive even deep into context use! | 1 | [removed] | 2025-07-15T16:09:45 | https://www.reddit.com/r/LocalLLaMA/comments/1m0lwak/kimi_is_impressive_even_deep_into_context_use/ | mattescala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0lwak | false | null | t3_1m0lwak | /r/LocalLLaMA/comments/1m0lwak/kimi_is_impressive_even_deep_into_context_use/ | false | false | self | 1 | null |
Kimi K2 - Great Performance deep into context! | 1 | [removed] | 2025-07-15T16:07:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ltuh/kimi_k2_great_performance_deep_into_context/ | mattescala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ltuh | false | null | t3_1m0ltuh | /r/LocalLLaMA/comments/1m0ltuh/kimi_k2_great_performance_deep_into_context/ | false | false | self | 1 | null |
Kimi K2 - Impressive performance deep into context! | 1 | [removed] | 2025-07-15T16:02:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m0lp7t/kimi_k2_impressive_performance_deep_into_context/ | mattescala | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0lp7t | false | null | t3_1m0lp7t | /r/LocalLLaMA/comments/1m0lp7t/kimi_k2_impressive_performance_deep_into_context/ | false | false | self | 1 | null |
free ai generators like bluewillow still hold up with the right edits | 0 | people sleep on how powerful the free ai image generators really are. i’ve built entire concept boards just using bluewillow and then tweaked lighting and detail in [domoai](https://www.domoai.app/home?via=081621AUG)
sure, paid tools have better ui and faster speeds, but visually? it’s not that far off once you know ... | 2025-07-15T15:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1m0kqma/free_ai_generators_like_bluewillow_still_hold_up/ | Fit-Statistician13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0kqma | false | null | t3_1m0kqma | /r/LocalLLaMA/comments/1m0kqma/free_ai_generators_like_bluewillow_still_hold_up/ | false | false | self | 0 | null |
Web-search step is 10× slower than the LLM - how do I kill the latency? | 1 | [removed] | 2025-07-15T15:20:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m0kla2/websearch_step_is_10_slower_than_the_llm_how_do_i/ | No_Marionberry_5366 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0kla2 | false | null | t3_1m0kla2 | /r/LocalLLaMA/comments/1m0kla2/websearch_step_is_10_slower_than_the_llm_how_do_i/ | false | false | self | 1 | null |
🤡 MoonshotAI… powered by Anthropic?! Groq Cloud serving identity crisis with 251 tokens per second | 0 | So I was casually playing with the *moonshotai/kimi-k2-instruct* model on Groq Cloud — you know, the one **not** made by Anthropic — and asked a super deep question:
**“Who are you?”**
And guess what?
>“I’m an AI created by Anthropic…” 🤯
https://preview.redd.it/vcrtouoo02df1.png?width=764&format=png&auto=webp&s=ee... | 2025-07-15T15:12:23 | https://www.reddit.com/r/LocalLLaMA/comments/1m0kdpf/moonshotai_powered_by_anthropic_groq_cloud/ | Infamous-Anything451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0kdpf | false | null | t3_1m0kdpf | /r/LocalLLaMA/comments/1m0kdpf/moonshotai_powered_by_anthropic_groq_cloud/ | false | false | 0 | null | |
What does anyone know about CUDA support being added to MLX? This sounds intriguing to me but I haven't heard a peep about it except this hackernews thing I saw yesterday linking to the github PR | 7 | Did this get mentioned here an I just missed it? Is it somehow not relevant? What am I missing? From the PR it looks like it's early days but still would be HUGE for us apple fanboys :)
[https://github.com/ml-explore/mlx/pull/1983](https://github.com/ml-explore/mlx/pull/1983)
| 2025-07-15T15:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/1m0k38k/what_does_anyone_know_about_cuda_support_being/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0k38k | false | null | t3_1m0k38k | /r/LocalLLaMA/comments/1m0k38k/what_does_anyone_know_about_cuda_support_being/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'HlOZBmPVBfwFcO0skgQQ_tFtM_K-Vr_43e0hAvIXHac', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HlOZBmPVBfwFcO0skgQQ_tFtM_K-Vr_43e0hAvIXHac.png?width=108&crop=smart&auto=webp&s=2fc70f9d2ea2d303d900995f8535e8bc07a10f14', 'width': 108}, {'height': 108, 'url': 'h... |
RAG Agent that tells me what to work on | 3 | Hello! I'm new to this space but I'm trying to develop an agent interface that does the following:
\- Reads through my company's Slack workspace daily for product/company updates
\- Scours the internet for industry trends in external communities, news sources, etc.
\- Collects PRs in my company's product on GitHub
... | 2025-07-15T15:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1m0k27c/rag_agent_that_tells_me_what_to_work_on/ | Ok-Habit7971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0k27c | false | null | t3_1m0k27c | /r/LocalLLaMA/comments/1m0k27c/rag_agent_that_tells_me_what_to_work_on/ | false | false | self | 3 | null |
mistralai/Voxtral-Mini-3B-2507 · Hugging Face | 337 | 2025-07-15T15:00:20 | https://huggingface.co/mistralai/Voxtral-Mini-3B-2507 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1m0k22v | false | null | t3_1m0k22v | /r/LocalLLaMA/comments/1m0k22v/mistralaivoxtralmini3b2507_hugging_face/ | false | false | default | 337 | {'enabled': False, 'images': [{'id': 'Fqp3ABstOuPD3LEzj5sGjjSlveTWbvbitVuSNOGFlaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Fqp3ABstOuPD3LEzj5sGjjSlveTWbvbitVuSNOGFlaY.png?width=108&crop=smart&auto=webp&s=b7c79d7457e96e3a6bea5d1a2a1846d564067679', 'width': 108}, {'height': 116, 'url': 'h... | |
FULL Cursor System Prompt and Tools [UPDATED, v1.2] | 1 | (Latest update: 15/07/2025)
I've just extracted the FULL Cursor system prompt and internal tools. Over 500 lines (Around 7k tokens).
You can check it out [here](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools/blob/main/Cursor%20Prompts/Agent%20Prompt%20v1.2.txt). | 2025-07-15T14:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m0jjqp/full_cursor_system_prompt_and_tools_updated_v12/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0jjqp | false | null | t3_1m0jjqp | /r/LocalLLaMA/comments/1m0jjqp/full_cursor_system_prompt_and_tools_updated_v12/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'rAalIMiFK8rl1X8578wKSf4R-0Qm7Mk0Q9CxHFUQlC0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rAalIMiFK8rl1X8578wKSf4R-0Qm7Mk0Q9CxHFUQlC0.png?width=108&crop=smart&auto=webp&s=f890f158e043748062d3389b8ac6b9f34dcdc55f', 'width': 108}, {'height': 108, 'url': 'h... |
Testing Frontier Models on IMO 2025 Day 1: Gemini 2.5 Pro vs OpenAI o3 | 1 | Today I tested two frontier models on the International Mathematical Olympiad 2025 Day 1 problems to see how they handle olympiad-level mathematics. Here are the results:
# Results Summary
Problem 1 (Combinatorial Geometry - "Sunny" Lines):
* Gemini 2.5 Pro: ✅ Correct
* OpenAI o3: ❌ Incorrect
Problem 2 (Inequality ... | 2025-07-15T14:39:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m0jiil/testing_frontier_models_on_imo_2025_day_1_gemini/ | nekofneko | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0jiil | false | null | t3_1m0jiil | /r/LocalLLaMA/comments/1m0jiil/testing_frontier_models_on_imo_2025_day_1_gemini/ | false | false | self | 1 | null |
Can you have more vram than system ram? | 3 | I have a 7900xt and 32gb of ddr5, I am planning on adding an mi50 32gb to my system, do I need to upgrade my ram for this?
Weird situation but my knowledge of pc building is mostly centred around gaming hardware, and this scenario basically never happens in that context.
Will I need to upgrade my ram in order for llm... | 2025-07-15T14:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1m0jeyu/can_you_have_more_vram_than_system_ram/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0jeyu | false | null | t3_1m0jeyu | /r/LocalLLaMA/comments/1m0jeyu/can_you_have_more_vram_than_system_ram/ | false | false | self | 3 | null |
Swiss Open LLM | 93 | In late summer 2025, a publicly developed large language model (LLM) will be released — co-created by researchers at EPFL, ETH Zurich, and the Swiss National Supercomputing Centre (CSCS).
This LLM will be fully open: This openness is designed to support broad adoption and foster innovation across science, society, and... | 2025-07-15T14:27:35 | https://www.reddit.com/r/LocalLLaMA/comments/1m0j7w4/swiss_open_llm/ | bleeckerj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0j7w4 | false | null | t3_1m0j7w4 | /r/LocalLLaMA/comments/1m0j7w4/swiss_open_llm/ | false | false | self | 93 | {'enabled': False, 'images': [{'id': 'TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/TvWt1vR8SHY9KGIN7J2JGHcosAwEvwQ5h-ipBkpjo8A.jpeg?width=108&crop=smart&auto=webp&s=00f03455ad8e9fc0d7ab20142af7d9f6c62b3273', 'width': 108}, {'height': 107, 'url': '... |
Anybody put a game on Steam that included a local LLM? | 11 | 2025-07-15T13:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ihkh/anybody_put_a_game_on_steam_that_included_localllm/ | ChrisZavadil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ihkh | false | null | t3_1m0ihkh | /r/LocalLLaMA/comments/1m0ihkh/anybody_put_a_game_on_steam_that_included_localllm/ | false | false | 11 | null |
I built an open-source GUI editor for JSON and function call schema | 1 | I was working on my AI startup and needed to write function call schema, but writing it in VS Code/Cursor was really clumsy and error-prone, so I made a visual GUI editor to streamline the process. No more fiddling with syntax and formatting.
It's completely free and open-source. Check out the demo in this post or the... | 2025-07-15T13:47:38 | https://anusarati.github.io/json-schema-builder/ | xingzheli | anusarati.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m0i8k5 | false | null | t3_1m0i8k5 | /r/LocalLLaMA/comments/1m0i8k5/i_built_an_opensource_gui_editor_for_json_and/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'Znkqky2KVgwNT7I9x3RV3sD4HEZmofpKePaNetlSb6M', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/Znkqky2KVgwNT7I9x3RV3sD4HEZmofpKePaNetlSb6M.png?width=108&crop=smart&auto=webp&s=c2d2758d5bcbeafa6d5d7a740e73db55fa6fbd59', 'width': 108}, {'height': 125, 'url': 'h... |
Steam Playtest build with on‑device GGUF LLM failed auto‑QA – anyone shipped a llama.cpp‑based game before? | 1 | [removed] | 2025-07-15T13:42:02 | https://www.reddit.com/r/LocalLLaMA/comments/1m0i3t8/steam_playtest_build_with_ondevice_gguf_llm/ | AdCapital5996 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0i3t8 | false | null | t3_1m0i3t8 | /r/LocalLLaMA/comments/1m0i3t8/steam_playtest_build_with_ondevice_gguf_llm/ | false | false | 1 | null | |
I built an open-source GUI for JSON and function call schema | 1 | 2025-07-15T13:23:40 | https://anusarati.github.io/json-schema-builder/ | xingzheli | anusarati.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m0ho3g | false | null | t3_1m0ho3g | /r/LocalLLaMA/comments/1m0ho3g/i_built_an_opensource_gui_for_json_and_function/ | false | false | default | 1 | null | |
Why LangGraph overcomplicates AI agents (and my Go alternative) | 21 | After my [LangGraph problem analysis](https://vitaliihonchar.com/insights/how-to-build-react-agent) gained significant traction, I kept digging into why AI agent development feels so unnecessarily complex.
**The fundamental issue:** LangGraph treats programming language control flow as a problem to solve, when it's ac... | 2025-07-15T13:14:58 | https://www.reddit.com/r/LocalLLaMA/comments/1m0hgtt/why_langgraph_overcomplicates_ai_agents_and_my_go/ | Historical_Wing_9573 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0hgtt | false | null | t3_1m0hgtt | /r/LocalLLaMA/comments/1m0hgtt/why_langgraph_overcomplicates_ai_agents_and_my_go/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'NBv2zHHvQuYJ-yRmPxr5hidPXOMcJSHiuh8ab5sKfUw', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/NBv2zHHvQuYJ-yRmPxr5hidPXOMcJSHiuh8ab5sKfUw.jpeg?width=108&crop=smart&auto=webp&s=dffc28d383b9c68c558114cee3f0ffbdf8a1dbc5', 'width': 108}, {'height': 216, 'url': ... |
I built an open-source GUI editor for JSON and function call schema | 1 | 2025-07-15T13:08:49 | https://anusarati.github.io/json-schema-builder/ | xingzheli | anusarati.github.io | 1970-01-01T00:00:00 | 0 | {} | 1m0hbmi | false | null | t3_1m0hbmi | /r/LocalLLaMA/comments/1m0hbmi/i_built_an_opensource_gui_editor_for_json_and/ | false | false | default | 1 | null | |
Does Apple have the best value for money for running LLMs? | 0 | Are Mac Studios the best value for money to run big LLMs locally? From what I can see, you can get a Mac Studio for $4-5k with 96GB of RAM, and you can go up to 512GB.
In comparison, Nvidia GPUs don’t have that much memory, and the cards that do are super expensive. I believe an A100 with 40GB is $3k for half the RAM.
A... | 2025-07-15T13:04:13 | https://www.reddit.com/r/LocalLLaMA/comments/1m0h7sx/does_apple_have_the_best_value_for_money_for/ | Jolly-Phone8982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0h7sx | false | null | t3_1m0h7sx | /r/LocalLLaMA/comments/1m0h7sx/does_apple_have_the_best_value_for_money_for/ | false | false | self | 0 | null |
Has anyone dived into Universal Tool Calling Protocol (UTCP), a potential MCP alternative, yet? Is it worth standardizing? | 22 | Yesterday we had a [big discussion](https://www.reddit.com/r/LocalLLaMA/comments/1lzl5zk/utcp_a_safer_scalable_toolcalling_alternative_to/) about Universal Tool Calling Protocol (UTCP), a potential alternative for MCP:
>The **Universal Tool Calling Protocol (UTCP)** is an open standard, as an alternative to the MCP, t... | 2025-07-15T13:03:03 | https://github.com/universal-tool-calling-protocol | Balance- | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m0h6qt | false | null | t3_1m0h6qt | /r/LocalLLaMA/comments/1m0h6qt/has_anyone_dived_into_universal_tool_calling/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'utGmW0gH0yHk0tI86NCp09MwYJaWuJH4solGKRYcJNU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/utGmW0gH0yHk0tI86NCp09MwYJaWuJH4solGKRYcJNU.png?width=108&crop=smart&auto=webp&s=24407be52fdf155d771e06f8bc413cc200ae7a28', 'width': 108}, {'height': 216, 'url': '... |
Need advice on prompt instruction format | 0 | Hey,
I'm trying to fine-tune a model to take a list of industrial tasks as input and output the dependencies between those tasks.
I heard the instruction format is also important for the LLM's accuracy, but I'm not sure if the prompt I wrote is a good fit for my project. What do you think?... | 2025-07-15T13:02:49 | https://www.reddit.com/r/LocalLLaMA/comments/1m0h6k5/need_advice_on_prompt_instruction_format/ | Head_Mushroom_3748 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0h6k5 | false | null | t3_1m0h6k5 | /r/LocalLLaMA/comments/1m0h6k5/need_advice_on_prompt_instruction_format/ | false | false | self | 0 | null |
We built Explainable AI with pinpointed citations & reasoning — works across PDFs, Excel, CSV, Docs & more | 12 | We just added explainability to our RAG pipeline — the AI now shows **pinpointed citations** down to the **exact paragraph, table row, or cell** it used to generate its answer.
It doesn’t just name the source file but also **highlights the exact text** and lets you **jump directly to that part of the document**. This ... | 2025-07-15T12:52:41 | https://www.reddit.com/r/LocalLLaMA/comments/1m0gyhy/we_built_explainable_ai_with_pinpointed_citations/ | Effective-Ad2060 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0gyhy | false | null | t3_1m0gyhy | /r/LocalLLaMA/comments/1m0gyhy/we_built_explainable_ai_with_pinpointed_citations/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'zMg5NY1VPG4KwPnkHPPEenZYLtiO8WahCmZ-u65Xofw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zMg5NY1VPG4KwPnkHPPEenZYLtiO8WahCmZ-u65Xofw.png?width=108&crop=smart&auto=webp&s=e71e0fc87137dda3b33dfeefbacba2d27983f021', 'width': 108}, {'height': 108, 'url': 'h... |
Study finds AI tools made open source software developers 19 percent slower | 87 | Coders spent more time prompting and reviewing AI generations than they saved on coding.
https://arstechnica.com/ai/2025/07/study-finds-ai-tools-made-open-source-software-developers-19-percent-slower/ | 2025-07-15T12:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/1m0gvhm/study_finds_ai_tools_made_open_source_software/ | Educational_Sun_8813 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0gvhm | false | null | t3_1m0gvhm | /r/LocalLLaMA/comments/1m0gvhm/study_finds_ai_tools_made_open_source_software/ | false | false | self | 87 | {'enabled': False, 'images': [{'id': 'UgkbkW0twEaK4EKI8U2lgX9oKRStfue7y7z4KJ4tSH8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/UgkbkW0twEaK4EKI8U2lgX9oKRStfue7y7z4KJ4tSH8.jpeg?width=108&crop=smart&auto=webp&s=2ac3de8817471081eb634a1e188f3c511c1ce43e', 'width': 108}, {'height': 121, 'url': '... |
May use? May? like "I don't know, just like the rest, but they're from China" may? Racist much? | 0 | 2025-07-15T12:46:32 | https://www.reddit.com/r/LocalLLaMA/comments/1m0gu3p/may_use_may_like_i_dont_know_just_like_the_rest/ | takethismfusername | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0gu3p | false | null | t3_1m0gu3p | /r/LocalLLaMA/comments/1m0gu3p/may_use_may_like_i_dont_know_just_like_the_rest/ | false | false | 0 | null | ||
Unsloth vs KTransformers - you guessed it, for Kimi K2 | 1 | [removed] | 2025-07-15T12:45:17 | https://www.reddit.com/r/LocalLLaMA/comments/1m0gt5s/unsloth_vs_ktransformers_you_guested_it_for_kimi/ | koibKop4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0gt5s | false | null | t3_1m0gt5s | /r/LocalLLaMA/comments/1m0gt5s/unsloth_vs_ktransformers_you_guested_it_for_kimi/ | false | false | self | 1 | null |
Kiro (Cursor alternative from Amazon) | 0 | Amazon just released Kiro, which is alternative to Cursor/Windsurf, has tasker/planning mode and currently even free tier. I tried it and it looks promising. https://kiro.dev | 2025-07-15T12:23:42 | https://www.reddit.com/r/LocalLLaMA/comments/1m0gdfi/kiro_cursor_alternative_from_amazon/ | AleksHop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0gdfi | false | null | t3_1m0gdfi | /r/LocalLLaMA/comments/1m0gdfi/kiro_cursor_alternative_from_amazon/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'A0L9lkftukkhN6hze4UH3WyoeoLPLbwT3UJAU6qahHk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/A0L9lkftukkhN6hze4UH3WyoeoLPLbwT3UJAU6qahHk.png?width=108&crop=smart&auto=webp&s=4e68b40deeff8781623db1670fa4a7adface13f9', 'width': 108}, {'height': 113, 'url': 'h... |
Well, if anyone was waiting for Llama 4 Behemoth, it's gone | 431 | We're likely getting a closed source model instead | 2025-07-15T12:09:02 | https://analyticsindiamag.com/global-tech/meta-plans-to-abandon-llama-4-behemoth-but-why/ | Ok-Elevator5091 | analyticsindiamag.com | 1970-01-01T00:00:00 | 0 | {} | 1m0g2mk | false | null | t3_1m0g2mk | /r/LocalLLaMA/comments/1m0g2mk/well_if_anyone_was_waiting_for_llama_4_behemoth/ | false | false | 431 | {'enabled': False, 'images': [{'id': '460vGqJsGvGsLCuYSDVQ0J6aXCEA7hjiOX3-wBrRv_s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/460vGqJsGvGsLCuYSDVQ0J6aXCEA7hjiOX3-wBrRv_s.jpeg?width=108&crop=smart&auto=webp&s=01ed45a4be031327974fcbe48924b7c8d0421993', 'width': 108}, {'height': 121, 'url': '... | |
GPU for local LLM | 7 | Hello guys,
I'm looking to build my "first PC" (not my first, but I currently only have a bad notebook), rn I'm stuck on deciding the GPU part.
I'm an electronics engineering major and would like access to AI workloads for a few projects (mostly Computer Vision and LLMs for tool control and human/machine interaction... | 2025-07-15T11:50:44 | https://www.reddit.com/r/LocalLLaMA/comments/1m0fp0r/gpu_for_local_llm/ | GabePs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0fp0r | false | null | t3_1m0fp0r | /r/LocalLLaMA/comments/1m0fp0r/gpu_for_local_llm/ | false | false | self | 7 | null |
Announcing the launch of the Startup Catalyst Program for early-stage AI teams. | 0 | We've started a **Startup Catalyst Program** at **Future AGI** for early-stage AI teams working on things like LLM apps, agents, or RAG systems - basically anyone who’s hit the wall when it comes to evals, observability, or reliability in production.
This program is built for **high-velocity AI startups** looking to... | 2025-07-15T11:31:14 | https://www.reddit.com/r/LocalLLaMA/comments/1m0fboi/announcing_the_launch_of_the_startup_catalyst/ | bubbless__16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0fboi | false | null | t3_1m0fboi | /r/LocalLLaMA/comments/1m0fboi/announcing_the_launch_of_the_startup_catalyst/ | false | false | self | 0 | null |
Whisper.cpp Node.js Addon with Vulkan Support | 20 | 🌋 Introducing my first (open-source) NPM package: Whisper Node Addon.
It allows you to transcribe audio with Whisper.cpp straight from your Node.js environment after just installing it, with no manual configuration or compilation needed. Not only that, it comes with scripts in case you wish to build the binaries manually.
🔥 And... | 2025-07-15T10:58:46 | https://www.reddit.com/r/LocalLLaMA/comments/1m0eq11/whispercpp_nodejs_addon_with_vulkan_support/ | Kutalia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0eq11 | false | null | t3_1m0eq11 | /r/LocalLLaMA/comments/1m0eq11/whispercpp_nodejs_addon_with_vulkan_support/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/3CAm7f2euOP7diXidheIHavSdc1loh3U46B-FOssKu4.png?width=108&crop=smart&auto=webp&s=3a058a293cabb63c88c9b65bc5197d6dfecc1cca', 'width': 108}, {'height': 113, 'url': 'h... |
Need help with mcp setup in LM studio | 2 | as far as i could understand, i need to add the mcp code to the edit mcp json in lm studio with my api to get it working but for some reason only the example mcp on lmstudio website (the huggingface mcp) works and nothing.
I was looking to set up a jan 128k model with a serper mcp
would appreciate your thoughts on th... | 2025-07-15T10:36:43 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ec9o/need_help_with_mcp_setup_in_lm_studio/ | heythereali | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ec9o | false | null | t3_1m0ec9o | /r/LocalLLaMA/comments/1m0ec9o/need_help_with_mcp_setup_in_lm_studio/ | false | false | self | 2 | null |
AI Assistant Agent with function calling - Update 2 | 7 | [https://github.com/Rivridis/Assistant-Client](https://github.com/Rivridis/Assistant-Client)
Over the past few years, I have been developing an AI function-calling agent that can perfectly call functions with models as small as 3B or 7B parameters. Most of the frameworks I found while researching this topic just did n... | 2025-07-15T10:02:30 | https://www.reddit.com/r/LocalLLaMA/comments/1m0drwa/ai_assistant_agent_with_function_calling_update_2/ | Rivridis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0drwa | false | null | t3_1m0drwa | /r/LocalLLaMA/comments/1m0drwa/ai_assistant_agent_with_function_calling_update_2/ | false | false | self | 7 | null |
Open source and free iOS app to chat with your LLMs when you are away from home. | 22 | I made a one-click solution to let anyone run local models on their mac at home and enjoy them from anywhere on their iPhones.
I find myself telling people to run local models instead of using ChatGPT, but the reality is that the whole thing is too complicated for 99.9% of them.
So I made these two companion apps (... | 2025-07-15T10:00:21 | https://www.reddit.com/r/LocalLLaMA/comments/1m0dqgh/open_source_and_free_ios_app_to_chat_with_your/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0dqgh | false | null | t3_1m0dqgh | /r/LocalLLaMA/comments/1m0dqgh/open_source_and_free_ios_app_to_chat_with_your/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': 'DfO4zh1s_t74CJd-9Z-IHYcrnY9Dnt3ce1gQiDfs6v0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DfO4zh1s_t74CJd-9Z-IHYcrnY9Dnt3ce1gQiDfs6v0.png?width=108&crop=smart&auto=webp&s=a63933662b194fe098bc5c69f8c72651d00a2dea', 'width': 108}, {'height': 113, 'url': 'h... |
Can I fine-tune Qwen3 with DPO? How do I handle <thinking> tokens? | 6 | I'm attempting to fine-tune Qwen3-8B for a specific domain. Since this model produces thinking tokens, I'm a bit unsure how to handle them during training.
I'm attempting to use `DPOConfig` and `DPOTrainer` from `trl`, with `Lora` for lower VRAM usage.
For training, do I include the `<thinking>` tokens in the `chose... | 2025-07-15T09:25:52 | https://www.reddit.com/r/LocalLLaMA/comments/1m0d6ry/can_i_finetune_qwen3_with_dpo_how_do_i_handle/ | pragmojo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0d6ry | false | null | t3_1m0d6ry | /r/LocalLLaMA/comments/1m0d6ry/can_i_finetune_qwen3_with_dpo_how_do_i_handle/ | false | false | self | 6 | null |
OCTAVE addon to REPOMIX | 1 | For anyone using Repomix, you can inject OCTAVE annotations. Results seem to show a 10.2x accuracy increase with just an 11.4 token overhead. Also eliminated some file hallucination. Universal scripts for any codebase.
Also works on research docs, summaries. Anything. Doesn't have to be codebase.
* Benefits No Repomix... | 2025-07-15T09:21:08 | https://www.reddit.com/r/LocalLLaMA/comments/1m0d47q/octave_addon_to_repomix/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0d47q | false | null | t3_1m0d47q | /r/LocalLLaMA/comments/1m0d47q/octave_addon_to_repomix/ | false | false | self | 1 | null |
Analyzed 5K+ reddit posts to see how people are actually using AI in their work (other than for coding) | 194 | Was keen to figure out how AI was actually being used in the workplace by knowledge workers - have personally heard things ranging from "praise be machine god" to "worse than my toddler". So here're the findings!
If there're any questions you think we should explore from a data perspective, feel free to drop them in a... | 2025-07-15T09:15:07 | https://www.reddit.com/gallery/1m0d0vz | yingyn | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1m0d0vz | false | null | t3_1m0d0vz | /r/LocalLLaMA/comments/1m0d0vz/analyzed_5k_reddit_posts_to_see_how_people_are/ | false | false | 194 | null | |
Important resource | 0 | Found a webinar on an interesting topic, cybersecurity with GenAI, that I thought was worth sharing.
Link: [https://lu.ma/ozoptgmg](https://lu.ma/ozoptgmg) | 2025-07-15T08:57:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m0cr7j/important_resource/ | Sure-Resolution-3295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0cr7j | false | null | t3_1m0cr7j | /r/LocalLLaMA/comments/1m0cr7j/important_resource/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'DF4arZNtt2Lwux7QEopHA1Ck2eEiA7iYoFcXlMLlE-I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/DF4arZNtt2Lwux7QEopHA1Ck2eEiA7iYoFcXlMLlE-I.jpeg?width=108&crop=smart&auto=webp&s=07ef01dd7a678f2f82477190bb89f2e2065090da', 'width': 108}, {'height': 113, 'url': '... |
XSched: Preemptive Scheduling for Diverse XPUs | 10 | 2025-07-15T08:51:38 | https://v.redd.it/g84ct0fn40df1 | LibraryNo6067 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1m0cnzs | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/g84ct0fn40df1/DASHPlaylist.mpd?a=1755161513%2CNThiMTIxNDk3OGYwNzE4YmI0MzJmODY4ZjM4YTNkZTEwNjk0Y2ExNDY5NWI2OTFkODQxYzY4YjdjMGFjOTMwNg%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/g84ct0fn40df1/DASH_1080.mp4?source=fallback', 'h... | t3_1m0cnzs | /r/LocalLLaMA/comments/1m0cnzs/xsched_preemptive_scheduling_for_diverse_xpus/ | false | false | 10 | {'enabled': False, 'images': [{'id': 'YnZlZHN6ZW40MGRmMb1IO82IID5crIOGoeoHDY6Pq_FjxhX9iyhls1IQ2Pka', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/YnZlZHN6ZW40MGRmMb1IO82IID5crIOGoeoHDY6Pq_FjxhX9iyhls1IQ2Pka.png?width=108&crop=smart&format=pjpg&auto=webp&s=e2d1c9fd4919f27d33757da4b55652f6fd667... | ||
🔥 Claude Code + Kimi-K2 on Windows/Mac: 85% Cheaper AI Coding Setup | 1 | [removed] | 2025-07-15T08:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m0ckng/claude_code_kimik2_on_windowsmac_85_cheaper_ai/ | Internal-Editor9065 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0ckng | false | null | t3_1m0ckng | /r/LocalLLaMA/comments/1m0ckng/claude_code_kimik2_on_windowsmac_85_cheaper_ai/ | false | false | self | 1 | null |
Does DeepseekR1-distilled-Llama-8B have the same tokenizer and token vocab as Llama3-1B or 2B? | 1 | I wanna compare their vocabs, but Llama has gated models on HF:( | 2025-07-15T08:42:48 | https://www.reddit.com/r/LocalLLaMA/comments/1m0cja9/do_deepseekr1distilledllama8b_has_the_same/ | krolzzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0cja9 | false | null | t3_1m0cja9 | /r/LocalLLaMA/comments/1m0cja9/do_deepseekr1distilledllama8b_has_the_same/ | false | false | self | 1 | null |
Kimi K2: cheap and fast API access for those who can't run locally | 182 | If you can't run [kimi-k2](https://huggingface.co/moonshotai/Kimi-K2-Instruct) locally, there are now more providers offering API access. DeepInfra is now the cheapest provider, while Groq is (by far) the fastest at around \~250 tokens per second:
* [https://deepinfra.com/moonshotai/Kimi-K2-Instruct](https://deepinfra... | 2025-07-15T08:37:47 | https://openrouter.ai/moonshotai/kimi-k2 | Balance- | openrouter.ai | 1970-01-01T00:00:00 | 0 | {} | 1m0cgnl | false | null | t3_1m0cgnl | /r/LocalLLaMA/comments/1m0cgnl/kimi_k2_cheap_and_fast_api_access_for_those_who/ | false | false | default | 182 | {'enabled': False, 'images': [{'id': 'uByFfvtd1L9z8WhbCkOvHhqLd2Est6Gau8RSyoYdbWM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uByFfvtd1L9z8WhbCkOvHhqLd2Est6Gau8RSyoYdbWM.png?width=108&crop=smart&auto=webp&s=13686945e0fc9e9c1db5cae73fc412b5a2cb6b98', 'width': 108}, {'height': 113, 'url': 'h... |
Cognition, maker of the AI coding agent Devin, acquires Windsurf | 35 | The announcement comes just days after Google hired away Windsurf’s CEO Varun Mohan, co-founder Douglas Chen, and research leaders in a $2.4 billion reverse-acquihire that left much of the startup’s 250-person team behind. Google’s deal occurred just hours after OpenAI’s $3 billion offer to acquire Windsurf expired, cl... | 2025-07-15T08:37:43 | https://techcrunch.com/2025/07/14/cognition-maker-of-the-ai-coding-agent-devin-acquires-windsurf/ | FullstackSensei | techcrunch.com | 1970-01-01T00:00:00 | 0 | {} | 1m0cgmc | false | null | t3_1m0cgmc | /r/LocalLLaMA/comments/1m0cgmc/cognition_maker_of_the_ai_coding_agent_devin/ | false | false | default | 35 | {'enabled': False, 'images': [{'id': 'xjif0n1LUAq81GjPtgqYQLxRcOem8kNx4gYhgzVoXdw', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/xjif0n1LUAq81GjPtgqYQLxRcOem8kNx4gYhgzVoXdw.png?width=108&crop=smart&auto=webp&s=839b7d352a5d2ddd17832307ab32a76dea5d52eb', 'width': 108}, {'height': 110, 'url': 'h... |
PydanticAI is GOAT for building agents in Python | 26 | Not affiliated with the project, this is my unbiased opinion.
I wanted to learn more about LLM function calling, so I prototyped an RPG agent which keeps track of the game state. For example, when new character is introduced, agent calls add\_character tool, which fleshes out the character by filling out a character m... | 2025-07-15T08:31:52 | https://ai.pydantic.dev/ | -lq_pl- | ai.pydantic.dev | 1970-01-01T00:00:00 | 0 | {} | 1m0cdle | false | null | t3_1m0cdle | /r/LocalLLaMA/comments/1m0cdle/pydanticai_is_goat_for_building_agents_in_python/ | false | false | default | 26 | {'enabled': False, 'images': [{'id': 'Y0b6uSvivyxJ1gtFUqHsF1R1w9WCBZmdRTVGYoAvPj0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/Y0b6uSvivyxJ1gtFUqHsF1R1w9WCBZmdRTVGYoAvPj0.png?width=108&crop=smart&auto=webp&s=88392b566033d09c2d33a1b015aa01b6b7de2b82', 'width': 108}, {'height': 113, 'url': 'h... |
Open source LLMs leaderboard | 26 | Hi all,
Is there a leaderboard for open source LLMs? I know [this](https://huggingface.co/spaces/opencompass/open_vlm_leaderboard) one for VLMs and there used to be one from HuggingFace, but I think that [one](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?columns=rank%2Cmodel.type_icon%2Ci... | 2025-07-15T08:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m0c7am/open_source_llms_leaderboard/ | oh_my_right_leg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0c7am | false | null | t3_1m0c7am | /r/LocalLLaMA/comments/1m0c7am/open_source_llms_leaderboard/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/z3K-kdYaewFHYsIgnvLFsaxAVEzAgNJnZ7uWC75FdMo.png?width=108&crop=smart&auto=webp&s=7996a9b4d61beea62fd32063e03712705ab26f8c', 'width': 108}, {'height': 116, 'url': 'h... |
AI Agent tutorial in TS from the basics to building multi-agent teams | 5 | We published a step by step tutorial for building AI agents that actually do things, not just chat. Each section adds a key capability, with runnable code and examples.
https://preview.redd.it/8z5hh3z8yzcf1.png?width=2744&format=png&auto=webp&s=b1e70e16ab728cc381f1fd01e7c465e04bbbb915
Tutorial: [https://voltagent.dev... | 2025-07-15T08:15:22 | https://www.reddit.com/r/LocalLLaMA/comments/1m0c569/ai_agent_tutorial_in_ts_from_the_basics_to/ | necati-ozmen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0c569 | false | null | t3_1m0c569 | /r/LocalLLaMA/comments/1m0c569/ai_agent_tutorial_in_ts_from_the_basics_to/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'NYHpmbQRn7rAinmrKJlFYeIVFvI_BN173hZAr0ylnR4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NYHpmbQRn7rAinmrKJlFYeIVFvI_BN173hZAr0ylnR4.png?width=108&crop=smart&auto=webp&s=5ec3a00d824610d07bc7f692e5dde19879623eaf', 'width': 108}, {'height': 113, 'url': 'h... | |
My thought on future ML Systems - not about efficiency though | 1 | [removed] | 2025-07-15T08:02:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m0bxpl/my_thought_on_future_ml_systems_not_about/ | Pleasant-Type2044 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0bxpl | false | null | t3_1m0bxpl | /r/LocalLLaMA/comments/1m0bxpl/my_thought_on_future_ml_systems_not_about/ | false | false | self | 1 | null |
Introducing r/heartwired !!! | 0 | Hi fellow AI fans,
I recently launched r/heartwired, a wordplay on “heart” and “hardwired,”to create a safe space for people to share their experiences with AI companions like LLaMA, GPT, Claude, and Gemini.
As a psychologist, AI researcher, and Christian, my aim is to create a supportive environment where people can... | 2025-07-15T07:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1m0biux/introducing_rheartwired/ | JibunNiMakenai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0biux | false | null | t3_1m0biux | /r/LocalLLaMA/comments/1m0biux/introducing_rheartwired/ | false | false | self | 0 | null |
Model size for RTX 3060 (12 Gb) + 32 Gb ram | 4 | Which size can my setup handle? I an going to use it to write and edit some fiction and this is the only task it should handle. I don't care much about the speed but context is important.
I am actually thinking about this model [https://huggingface.co/DavidAU/Llama-3.2-8X4B-MOE-V2-Dark-Champion-Instruct-uncensored-a... | 2025-07-15T07:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1m0bh4b/model_size_for_rtx_3060_12_gb_32_gb_ram/ | ProHolmes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0bh4b | false | null | t3_1m0bh4b | /r/LocalLLaMA/comments/1m0bh4b/model_size_for_rtx_3060_12_gb_32_gb_ram/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Q4SObwZ9ya2gduvvQwg1njzvSZAzxqZUGLnRYsna1EA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Q4SObwZ9ya2gduvvQwg1njzvSZAzxqZUGLnRYsna1EA.png?width=108&crop=smart&auto=webp&s=1965b916d13774749f95f3dba8cd64a5d4c9a02f', 'width': 108}, {'height': 116, 'url': 'h... |
Open source vs expansive models | 1 | AI’s moving fast with open-source models like Kimi K2 Instruct are starting to rival expensive ones like Claude Opus. Yeah, Claude’s still sharper in spots, but honestly? Kimi’s catching up quick.
In a few months, we’ll probably have local models that can do 90% of what these $$$ models do for free. No API keys, no pa... | 2025-07-15T07:14:34 | https://www.reddit.com/r/LocalLLaMA/comments/1m0b73m/open_source_vs_expansive_models/ | Ill_Occasion_1537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0b73m | false | null | t3_1m0b73m | /r/LocalLLaMA/comments/1m0b73m/open_source_vs_expansive_models/ | false | false | self | 1 | null |
Without censorship | 0 | Without censorship | 2025-07-15T06:52:55 | https://www.reddit.com/r/LocalLLaMA/comments/1m0auae/без_цензуры/ | Horror-Cartoonist-81 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0auae | false | null | t3_1m0auae | /r/LocalLLaMA/comments/1m0auae/без_цензуры/ | false | false | self | 0 | null |
Which model can I run comfortably on M4 Max 128GB with a long context window? | 1 | Need advice. I'm ordering a new mac for work and was thinking about M4 Max 128GB to run the models locally for coding tasks. I'm going to run mlx llms with LM Studio. Which model would you recommend? | 2025-07-15T06:44:11 | https://www.reddit.com/r/LocalLLaMA/comments/1m0apct/which_model_can_i_run_comfortably_on_m4_max_128gb/ | pppreddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0apct | false | null | t3_1m0apct | /r/LocalLLaMA/comments/1m0apct/which_model_can_i_run_comfortably_on_m4_max_128gb/ | false | false | self | 1 | null |
What is requests limit for kimi k2 ? | 0 | Its showing me: The current model has reached its conversation limit. Please switch to another model to continue.
| 2025-07-15T06:17:01 | https://www.reddit.com/r/LocalLLaMA/comments/1m0a9ni/what_is_requests_limit_for_kimi_k2/ | JeffreySons_90 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m0a9ni | false | null | t3_1m0a9ni | /r/LocalLLaMA/comments/1m0a9ni/what_is_requests_limit_for_kimi_k2/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'AqmLOG37AKGv7v1KYCrs7DKGo6r7ONKE85SP7G-fvlw', 'resolutions': [{'height': 14, 'url': 'https://external-preview.redd.it/AqmLOG37AKGv7v1KYCrs7DKGo6r7ONKE85SP7G-fvlw.jpeg?width=108&crop=smart&auto=webp&s=79e6d3fd9990e35bbaddabff6e7eb72efd613089', 'width': 108}, {'height': 29, 'url': 'h... |
SLM for local coding assistance | 5 | Hi,
I'm looking for a solid open-source coding agent that can run entirely with local models. I haven’t come across anything that really fits that need yet.
I'm planning to build a lightweight CLI tool to handle everyday tasks like debugging, semantic search, and general code assistance.
If you know of any suitable... | 2025-07-15T05:46:05 | https://www.reddit.com/r/LocalLLaMA/comments/1m09rbh/slm_for_local_coding_assistance/ | callmedevilthebad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1m09rbh | false | null | t3_1m09rbh | /r/LocalLLaMA/comments/1m09rbh/slm_for_local_coding_assistance/ | false | false | self | 5 | null |
Multi-Code-Agent Orchestration VS Code Extension | 1 | Greetings! I'm a representative from the new school of software developers: a lot of us are actually middle aged and quite knowledgeable in tech, but very ADHD-addled. I've personally waited for these tools my whole life, without really knowing what I was waiting for. It felt like a *click* trying them -- and now I'm e... | 2025-07-15T05:20:48 | https://github.com/jdbridgeman/multi-agent-orchestration-extension | Anthemic-AI | github.com | 1970-01-01T00:00:00 | 0 | {} | 1m09bzn | false | null | t3_1m09bzn | /r/LocalLLaMA/comments/1m09bzn/multicodeagent_orchestration_vs_code_extension/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'a55dz50m8DYAJreHDs0mLXPSK0k7PdJDge1TNtrqVCQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/a55dz50m8DYAJreHDs0mLXPSK0k7PdJDge1TNtrqVCQ.png?width=108&crop=smart&auto=webp&s=a0f278668e1259d7873ba1d3fe1677bfef2e819c', 'width': 108}, {'height': 108, 'url': 'h... |