title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Docker Desktop 4.42 adds integrated MCP Toolkit, Server, & Catalog of MCPs (servers and clients) | 27 | Docker seems to be positioning itself as a pretty compelling turnkey AI solution lately. Its recent addition of a built-in LLM model runner has made serving models with a llama.cpp-based server easier than setting up llama.cpp itself, possibly even easier than using Ollama.
Now they’ve added an integrated MCP server... | 2025-06-17T02:11:04 | https://www.docker.com/blog/docker-desktop-4-42-native-ipv6-built-in-mcp-and-better-model-packaging/ | Porespellar | docker.com | 1970-01-01T00:00:00 | 0 | {} | 1ldbh4i | false | null | t3_1ldbh4i | /r/LocalLLaMA/comments/1ldbh4i/docker_desktop_442_adds_integrated_mcp_toolkit/ | false | false | default | 27 | {'enabled': False, 'images': [{'id': 'xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xjpSfMqFJN636ZC--w4Xh4_nijURjUdciUc8m1MYbYM.png?width=108&crop=smart&auto=webp&s=ea8bd235ddec234f4ca95c3725a3bbd452bb6616', 'width': 108}, {'height': 216, 'url': '... |
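For reference on the row above: Docker's model runner serves llama.cpp models behind an OpenAI-compatible API, so a standard client can talk to it. A minimal sketch, assuming the runner is enabled locally; the port, path, and model tag below are assumptions, not confirmed by the post.

```python
# Minimal sketch: chat with a llama.cpp-based local runner through an
# OpenAI-compatible endpoint. base_url and model tag are assumptions;
# substitute whatever your local runner actually exposes.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:12434/engines/v1",  # assumed Model Runner endpoint
    api_key="not-needed",  # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="ai/smollm2",  # hypothetical model tag
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```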
Zentara-Code update: v0.1.3 release. The first open-source comprehensive AI coder and debugger, two in one. | 1 | [removed] | 2025-06-17T02:05:45 | https://v.redd.it/pipd3eax8e7f1 | bn_from_zentara | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ldbdad | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/pipd3eax8e7f1/DASHPlaylist.mpd?a=1752717956%2COTgwODFjYTM3NDZmZjAyZTE5ZjFkZmIyOWQ1NTJjMzM0NDVhY2ZlOWJiNjY1ZjkxMzVlMGNmYjE0ZTA2YTMwMw%3D%3D&v=1&f=sd', 'duration': 54, 'fallback_url': 'https://v.redd.it/pipd3eax8e7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ldbdad | /r/LocalLLaMA/comments/1ldbdad/zentaracode_update_v_013_release_the_first/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/andlYnM5YXg4ZTdmMeA4DLGtrklgXBXJ6oA5MA6pSKVMkJaUHzia-F_uXhBj.png?width=108&crop=smart&format=pjpg&auto=webp&s=2a60c9ee0ed683495216801abeec48c92d216... | |
Poor man's dual GPU, 60 tk/s for Qwen3-A3B Q4, (RX 9060 XT 16GB & RX 6600 8GB) | 2 | Inference in action using LM Studio (llama.cpp Vulkan)
[https://www.youtube.com/watch?v=zEh93MBCBZ8](https://www.youtube.com/watch?v=zEh93MBCBZ8)
RX 9060 XT in the primary PCIe 4.0 x16 slot.
RX 6600 vertical-mounted using a PCIe 3.0 riser cable on the secondary PCIe 3.0 slot (x4 lanes).
https://preview.redd.it/tojsaq9i7e7f1.j... | 2025-06-17T01:49:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ldb0z8/poor_mans_dual_gpu_60_tks_for_qwen3a3b_q4_rx_9060/ | dsjlee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldb0z8 | false | {'oembed': {'author_name': 'ROGU-CDN', 'author_url': 'https://www.youtube.com/@rogucdn', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/zEh93MBCBZ8?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; pict... | t3_1ldb0z8 | /r/LocalLLaMA/comments/1ldb0z8/poor_mans_dual_gpu_60_tks_for_qwen3a3b_q4_rx_9060/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/VtL14-IPuuLrM0gAcAHioOKuHkzlJE3UrQY0Qvl8glo.jpeg?width=108&crop=smart&auto=webp&s=7edf347e43bfba1a8626879e43e6baddd6f186be', 'width': 108}, {'height': 162, 'url': '... | |
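A sketch of the same uneven two-GPU split expressed with llama-cpp-python rather than LM Studio; the model file and the 2:1 ratio below are illustrative assumptions for a 16GB + 8GB pair.

```python
# Sketch: weight tensor placement toward the larger card on a 16GB+8GB pair.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-30b-a3b-q4_k_m.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,          # offload every layer
    tensor_split=[2.0, 1.0],  # ~2/3 of tensors on device 0, ~1/3 on device 1
    n_ctx=8192,
)
out = llm("Q: Why is the sky blue? A:", max_tokens=64)
print(out["choices"][0]["text"])
```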
DeepSeek R1 0528 ties Opus for #1 rank on WebDev | 90 | [https://x.com/lmarena_ai](https://x.com/lmarena_ai)
| 2025-06-17T01:45:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ldayo0/deepseek_r1_0528_ties_opus_for_1_rank_on_webdev/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldayo0 | false | null | t3_1ldayo0 | /r/LocalLLaMA/comments/1ldayo0/deepseek_r1_0528_ties_opus_for_1_rank_on_webdev/ | false | false | self | 90 | {'enabled': False, 'images': [{'id': 'vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/vlrlFfPjuaHeSVOdod1Na7hKcY6FPGT7VKPYqrYRfVM.png?width=108&crop=smart&auto=webp&s=66c66038ae77cb2eea20e6768969beb85ddada16', 'width': 108}, {'height': 116, 'url': 'h... |
Company reduces the size of LLMs by up to 95% without hurting performance | 0 | https://www.reuters.com/business/retail-consumer/spains-multiverse-raises-217-million-compressing-ai-models-2025-06-12/ | 2025-06-17T01:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ldax08/company_reduces_the_size_of_llms_by_up_to_95/ | ariesonthecusp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldax08 | false | null | t3_1ldax08 | /r/LocalLLaMA/comments/1ldax08/company_reduces_the_size_of_llms_by_up_to_95/ | false | false | self | 0 | null |
Sama: MCP coming to OpenAI today | 0 | Source: was in attendance at YC AI startup school | 2025-06-17T01:31:44 | numinouslymusing | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ldaohu | false | null | t3_1ldaohu | /r/LocalLLaMA/comments/1ldaohu/sama_mcp_coming_to_openai_today/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'rgyntgbw4e7f1', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=108&crop=smart&auto=webp&s=cbf975578e92c24a9dca1922c0f6836f323e23ed', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/rgyntgbw4e7f1.jpeg?width=216&crop=smart&auto=... | |
Recommendations for Bad JSON? | 1 | [removed] | 2025-06-17T01:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ldaj1w/recommendations_for_bad_json/ | lenankamp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldaj1w | false | null | t3_1ldaj1w | /r/LocalLLaMA/comments/1ldaj1w/recommendations_for_bad_json/ | false | false | self | 1 | null |
Cline with local model? | 7 | Has anyone gotten a working setup with a local model in Cline with MCP use? | 2025-06-17T01:20:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ldagg5/cline_with_local_model/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldagg5 | false | null | t3_1ldagg5 | /r/LocalLLaMA/comments/1ldagg5/cline_with_local_model/ | false | false | self | 7 | null |
🚸Trained a Tiny Model(30 million parameter) to Tell Children's Stories!🚸 | 39 | Ever wondered if a small language model, just 30 million parameters, could write meaningful, imaginative stories for kids? So I built one and it works.
Introducing Tiny-Children-Stories, a purpose-built, open-source model that specializes in generating short and creative stories.
📌 Why I Built It
Most large languag... | 2025-06-17T01:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ldaco5/trained_a_tiny_model30_million_parameter_to_tell/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ldaco5 | false | null | t3_1ldaco5 | /r/LocalLLaMA/comments/1ldaco5/trained_a_tiny_model30_million_parameter_to_tell/ | false | false | 39 | null | |
Which local TTS is the best for long videos? I’m using an RTX 5070 Ti? | 1 | [removed] | 2025-06-17T00:18:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ld97b2/which_local_tts_is_the_best_for_long_videos_im/ | linharmy1368 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld97b2 | false | null | t3_1ld97b2 | /r/LocalLLaMA/comments/1ld97b2/which_local_tts_is_the_best_for_long_videos_im/ | false | false | self | 1 | null |
What are the best models for deep-research web usage? | 7 | Looking for models specifically for this task. What are the better ones, between open source and private? | 2025-06-17T00:12:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld92n3/what_are_the_best_models_for_deep_research_web/ | BlueeWaater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld92n3 | false | null | t3_1ld92n3 | /r/LocalLLaMA/comments/1ld92n3/what_are_the_best_models_for_deep_research_web/ | false | false | self | 7 | null |
[Update] Serene Pub v0.2.0-alpha - Added group chats, LM Studio, OpenAI support and more | 5 | # Introduction
I'm excited to release a significant update for Serene Tavern. Some fixes, UI improvements and additional connection adapter support. Also overhauled how
# Attention!
Create a copy of your `main.db` before running this new version to prevent accidental loss of data. If some of your data disappears, pl... | 2025-06-16T23:55:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld8phi/update_serene_pub_v020alpha_added_group_chats_lm/ | doolijb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld8phi | false | null | t3_1ld8phi | /r/LocalLLaMA/comments/1ld8phi/update_serene_pub_v020alpha_added_group_chats_lm/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo.jpeg?width=108&crop=smart&auto=webp&s=01374c1f586d233f9bf062458a1162c5fb2e71bd', 'width': 108}, {'height': 108, 'url': '... |
Fine-tuning may be underestimated | 45 | I often see comments and posts online dismissing fine-tuning and saying that RAG is the way to go. While RAG is very powerful, what if I want to save both on tokens and compute? Fine-tuning allows you to achieve the same results as RAG with smaller LLMs and fewer tokens. LoRA won’t always be enough but you can get a mo... | 2025-06-16T23:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ld8gs4/finetuning_may_be_underestimated/ | AgreeableCaptain1372 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld8gs4 | false | null | t3_1ld8gs4 | /r/LocalLLaMA/comments/1ld8gs4/finetuning_may_be_underestimated/ | false | false | self | 45 | null |
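For the fine-tuning side of the argument above, a minimal LoRA setup with Hugging Face PEFT; the base model and hyperparameters are illustrative assumptions.

```python
# Minimal LoRA sketch with PEFT: train a small adapter so the knowledge
# lives in the weights instead of in retrieved context. Values illustrative.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # assumed base
config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```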
What is DeepSeek-R1-0528's knowledge cutoff? | 6 | It's super hard to find online! | 2025-06-16T22:35:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ld6x18/what_is_deepseekr10528s_knowledge_cutoff/ | sixft2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld6x18 | false | null | t3_1ld6x18 | /r/LocalLLaMA/comments/1ld6x18/what_is_deepseekr10528s_knowledge_cutoff/ | false | false | self | 6 | null |
Newbie trying to make a super AI | 1 | [removed] | 2025-06-16T22:28:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ld6r0m/newbie_trying_to_make_a_super_ai/ | Fit-Butterfly-4314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld6r0m | false | null | t3_1ld6r0m | /r/LocalLLaMA/comments/1ld6r0m/newbie_trying_to_make_a_super_ai/ | false | false | self | 1 | null |
Fortune 500s Are Burning Millions on LLM APIs. Why Not Build Their Own? | 270 | You’re at a Fortune 500 company, spending millions annually on LLM APIs (OpenAI, Google, etc.). Yet you’re limited by IP concerns, data control, and vendor constraints.
At what point does it make sense to build your own LLM in-house?
I work at a company behind one of the major LLMs, and the amount enterprises pay us i... | 2025-06-16T22:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ld66t0/fortune_500s_are_burning_millions_on_llm_apis_why/ | Neat-Knowledge5642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld66t0 | false | null | t3_1ld66t0 | /r/LocalLLaMA/comments/1ld66t0/fortune_500s_are_burning_millions_on_llm_apis_why/ | false | false | self | 270 | null |
How are you using different LLM API providers? | 1 | [removed] | 2025-06-16T21:10:47 | https://www.reddit.com/r/LocalLLaMA/comments/1ld4urs/how_are_you_using_different_llm_api_providers/ | interviuu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld4urs | false | null | t3_1ld4urs | /r/LocalLLaMA/comments/1ld4urs/how_are_you_using_different_llm_api_providers/ | false | false | self | 1 | null |
What's new in vLLM and llm-d | 7 | Hot off the press:
>In this session, we explored the latest updates in the vLLM v0.9.1 release, including the new Magistral model, FlexAttention support, multi-node serving optimization, and more.
>
>We also did a deep dive into llm-d, the new Kubernetes-native high-performance distributed LLM inference framework co-d... | 2025-06-16T21:07:09 | https://www.youtube.com/watch?v=pYujrc3rGjk | DeltaSqueezer | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1ld4rei | false | {'oembed': {'author_name': 'Neural Magic', 'author_url': 'https://www.youtube.com/@neuralmagic', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/pYujrc3rGjk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco... | t3_1ld4rei | /r/LocalLLaMA/comments/1ld4rei/whats_new_in_vllm_and_llmd/ | false | false | 7 | {'enabled': False, 'images': [{'id': '_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/_GTeYJTqgCY78BPqBcLVZkHTyQQTs_Fy5gkJz9OR8A0.jpeg?width=108&crop=smart&auto=webp&s=641b8f140b2da02c4f7f974da4038b007f4a7467', 'width': 108}, {'height': 162, 'url': '... | |
How are you using your local LLM to code and why? | 27 | chat (cut & paste)
editor plugin- copilot, vscode, zed, [continue.dev](http://continue.dev)
cli - aider
agentic editor - roo/cline/windsurf
agent - something like claude code
I still prefer chat cut & paste. I can control the input and prompt, get faster responses, and steer towards my idea faster. It does r... | 2025-06-16T21:04:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ld4okl/how_are_you_using_your_local_llm_to_code_and_why/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld4okl | false | null | t3_1ld4okl | /r/LocalLLaMA/comments/1ld4okl/how_are_you_using_your_local_llm_to_code_and_why/ | false | false | self | 27 | null |
Are there any local LLM options for Android that have image recognition? | 3 | Found a few local LLM apps, but they’re text-only, which is useless for this.
I’ve heard some people use Termux with either Ollama or Kobold?
Do these options allow for image recognition?
Is there a certain GGUF type that does image recognition?
Would that work as an option 🤔
| 2025-06-16T20:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ld3nb3/are_there_any_local_llm_options_for_android_that/ | diggels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld3nb3 | false | null | t3_1ld3nb3 | /r/LocalLLaMA/comments/1ld3nb3/are_there_any_local_llm_options_for_android_that/ | false | false | self | 3 | null |
Mixed RAM+VRAM strategies for large MoE models - is it viable on consumer hardware? | 13 | I am currently running a system with 24GB VRAM and 32GB RAM and am thinking of getting an upgrade to 128GB (and later possibly 256GB) RAM to enable inference for large MoE models, such as dots.llm, Qwen 3, and possibly V3 if I were to go to 256GB RAM.
The question is, what can you actually expect on such a system? I wo... | 2025-06-16T20:19:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ld3ivo/mixed_ramvram_strategies_for_large_moe_models_is/ | LagOps91 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld3ivo | false | null | t3_1ld3ivo | /r/LocalLLaMA/comments/1ld3ivo/mixed_ramvram_strategies_for_large_moe_models_is/ | false | false | self | 13 | null |
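One rough way to answer "what can you actually expect": decode speed with experts held in system RAM is bounded by memory bandwidth divided by the bytes of active parameters read per token. Everything below is an assumption-laden estimate, not a benchmark.

```python
# Back-of-envelope decode speed for a MoE whose experts live in system RAM.
# All numbers are assumptions; adjust for your hardware and quantization.
active_params = 22e9       # e.g. Qwen3-235B-A22B activates ~22B params/token
bytes_per_param = 4.5 / 8  # ~Q4_0-style quantization, ~4.5 bits/weight
ram_bandwidth = 80e9       # ~80 GB/s, dual-channel DDR5 ballpark

tokens_per_sec = ram_bandwidth / (active_params * bytes_per_param)
print(f"~{tokens_per_sec:.1f} tok/s upper bound")  # real runs land below this
```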
OLLAMA API USE FOR SALE | 0 | Hi everyone, I'd like to share my project: a service that sells usage of the Ollama API, now live at [**http://190.191.75.113:9092**](http://190.191.75.113:9092).
The cost of using LLM APIs is very high, which is why I created this project. I have a significant amount of NVIDIA GPU hardware from crypto mining that is n... | 2025-06-16T19:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/1ld2t2x/ollama_api_use_for_sale/ | EmotionalSignature65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld2t2x | false | null | t3_1ld2t2x | /r/LocalLLaMA/comments/1ld2t2x/ollama_api_use_for_sale/ | false | false | self | 0 | null |
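For reference, Ollama's generate endpoint is a plain HTTP POST; a minimal sketch against the host advertised above (the model name is an assumption, ask the operator what is loaded).

```python
# Sketch: calling a remote Ollama server's /api/generate endpoint.
import requests

resp = requests.post(
    "http://190.191.75.113:9092/api/generate",
    json={"model": "llama3", "prompt": "Hello!", "stream": False},  # assumed model
    timeout=60,
)
print(resp.json()["response"])
```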
What can I use to ERP? | 1 | [removed] | 2025-06-16T19:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ld2n6x/what_can_i_use_to_erp/ | AccomplishedStorm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld2n6x | false | null | t3_1ld2n6x | /r/LocalLLaMA/comments/1ld2n6x/what_can_i_use_to_erp/ | false | false | self | 1 | null |
Humanity's last library, which locally ran LLM would be best? | 116 | An apocalypse has come upon us. The internet is no more. Libraries are no more. The only things left are local networks and people with the electricity to run them.
If you were to create humanity's last library, a distilled LLM with the entirety of human knowledge. What would be a good model for that? | 2025-06-16T18:43:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ld11x4/humanitys_last_library_which_locally_ran_llm/ | TheCuriousBread | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld11x4 | false | null | t3_1ld11x4 | /r/LocalLLaMA/comments/1ld11x4/humanitys_last_library_which_locally_ran_llm/ | false | false | self | 116 | null |
MiniMax's latest open-source LLM, MiniMax-M1 — setting new standards in long-context reasoning | 297 | The coding demo in the video is amazing!
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/M... | 2025-06-16T18:42:52 | https://v.redd.it/t859utey3c7f1 | srtng | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ld116d | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/t859utey3c7f1/DASHPlaylist.mpd?a=1752691385%2CNTA1YWMyMmYzYmE3MWMyODY5MDc0ZDdhYjg5ZWRhMmQ4MTU5NTljZmRkMTNhNDlkZDI2ZTliMTA2YzVmMTVhMA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/t859utey3c7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1ld116d | /r/LocalLLaMA/comments/1ld116d/minimax_latest_opensourcing_llm_minimaxm1_setting/ | false | false | 297 | {'enabled': False, 'images': [{'id': 'NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/NmY1emg2N3kzYzdmMYrLLSKpxq16_nlRw_xdAcAPTlqNhk8r4UDdsUawD6kP.png?width=108&crop=smart&format=pjpg&auto=webp&s=3b995e18101e868fdf82c4226429fedf13ff2... | |
MiniMax's latest open-source LLM, MiniMax-M1 — setting new standards in long-context reasoning | 1 |
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
- Space: https://huggingf... | 2025-06-16T18:33:21 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0s1z | false | null | t3_1ld0s1z | /r/LocalLLaMA/comments/1ld0s1z/minimax_latest_opensourcing_llm_minimaxm1_setting/ | false | false | default | 1 | null |
MiniMax-M1 40k / 80k | 1 |
- World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
- 40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
- 80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
- Space: https://huggingf... | 2025-06-16T18:27:51 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0muu | false | null | t3_1ld0muu | /r/LocalLLaMA/comments/1ld0muu/minimaxm1_40k_80k/ | false | false | default | 1 | null |
What Really Happens When You Ask a Cursor a Question with GitHub MCP Integrated | 0 | https://i.redd.it/vqsjkdjq0c7f1.gif
*Have you ever wondered what really happens when you type a prompt like “Show my open PRs” in Cursor, connected via the* [*GitHub MCP server*](https://github.com/github/github-mcp-server) *and Cursor’s own Model Context Protocol integration? This article breaks down every step, r... | 2025-06-16T18:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ld0mo1/what_really_happens_when_you_ask_a_cursor_a/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld0mo1 | false | null | t3_1ld0mo1 | /r/LocalLLaMA/comments/1ld0mo1/what_really_happens_when_you_ask_a_cursor_a/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PzaEpIAh22P5SJX20euddepGax_6lKEPNF_QD8rqFzU.png?width=108&crop=smart&auto=webp&s=3b643464d07a052c9f4b35b9b596d2ac39195f75', 'width': 108}, {'height': 108, 'url': 'h... | |
MiniMax-M1 | 1 | - World’s longest context window: 1M-token input, 80k-token output
- State-of-the-art agentic use among open-source models
- RL at unmatched efficiency: trained with just $534,700
40k: https://huggingface.co/MiniMaxAI/MiniMax-M1-40k
80k: https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
Space: https://huggingface.co/... | 2025-06-16T18:22:36 | https://huggingface.co/MiniMaxAI/MiniMax-M1-80k | srtng | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ld0hvs | false | null | t3_1ld0hvs | /r/LocalLLaMA/comments/1ld0hvs/minimaxm1/ | false | false | default | 1 | null |
Real Time Speech to Text | 1 | As an intern at a finance-related company, I need to learn about real-time speech-to-text solutions for our product. I don't have advanced knowledge of STT. 1) Any resources to learn more about real-time STT 2) Best existing products for real-time audio (like phone calls) to text for our MLOps pipeline | 2025-06-16T18:13:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ld08xa/real_time_speech_to_text/ | ThomasSparrow0511 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ld08xa | false | null | t3_1ld08xa | /r/LocalLLaMA/comments/1ld08xa/real_time_speech_to_text/ | false | false | self | 1 | null |
Recommending Practical Experiments from Research Papers | 7 | Lately, I've been using LLMs to rank new arXiv papers based on the context of my own work.
This has helped me find relevant results hours after they've been posted, regardless of their virality.
Historically, I've been finetuning VLMs with LoRA, so [EMLoC](https://hsi-che-lin.github.io/EMLoC/) recently came recommended... | 2025-06-16T17:47:17 | remyxai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcziww | false | null | t3_1lcziww | /r/LocalLLaMA/comments/1lcziww/recommending_practical_experiments_from_research/ | false | false | default | 7 | {'enabled': True, 'images': [{'id': 'y35s13wkrb7f1', 'resolutions': [{'height': 37, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=108&crop=smart&auto=webp&s=db7fed16a675565cef00252c6f37019787795225', 'width': 108}, {'height': 74, 'url': 'https://preview.redd.it/y35s13wkrb7f1.png?width=216&crop=smart&auto=webp... | |
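A minimal sketch of the ranking idea above, swapping the LLM for plain embedding similarity with sentence-transformers; the embedder choice and abstracts are assumptions for illustration.

```python
# Sketch: rank fresh paper abstracts against a description of your own work.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedder
my_work = "LoRA fine-tuning of vision-language models under tight memory budgets"
abstracts = [
    "EMLoC: emulator-based memory-efficient fine-tuning with LoRA correction",
    "A survey of reinforcement learning for robotic manipulation",
]

query = model.encode(my_work, convert_to_tensor=True)
docs = model.encode(abstracts, convert_to_tensor=True)
scores = util.cos_sim(query, docs)[0]
for score, abstract in sorted(zip(scores.tolist(), abstracts), reverse=True):
    print(f"{score:.2f}  {abstract}")
```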
What do we need for Qwen 3 235? | 8 | My company plans to acquire hardware to do local offline sensitive document processing. We do not need super high throughput, maybe 3 or 4 batches of document processing at a time, but we have the means to spend up to 30.000€. I was thinking about a small Apple Silicon cluster, but is that the way to go in that budget ... | 2025-06-16T17:37:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lcz8lg/what_do_we_need_for_qwen_3_235/ | Fant1xX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcz8lg | false | null | t3_1lcz8lg | /r/LocalLLaMA/comments/1lcz8lg/what_do_we_need_for_qwen_3_235/ | false | false | self | 8 | null |
Uncensored LLM that knows details of Videogames/characters? | 1 | [removed] | 2025-06-16T17:29:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lcz1gv/uncensored_llm_that_knows_details_of/ | mazini95 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcz1gv | false | null | t3_1lcz1gv | /r/LocalLLaMA/comments/1lcz1gv/uncensored_llm_that_knows_details_of/ | false | false | self | 1 | null |
Any recent Goose tutorials? | 1 | [removed] | 2025-06-16T17:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lcydvj/any_recent_goose_tutorials/ | a_newer_throwaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcydvj | false | null | t3_1lcydvj | /r/LocalLLaMA/comments/1lcydvj/any_recent_goose_tutorials/ | false | false | self | 1 | null |
Jan-nano-4b-q8 ain’t playin’ and doesn’t have time for your BS. | 0 | The following is a slightly dramatized conversation between Jan-nano-4b-q8 and myself:
Me: <Starts Jan-nano in the Ollama CLI>
Me: “Test”
Jan-nano: “—bash…. Writing shell script….accessing file system…..”
Jan-nano <random computer beeps and boops like you see in the movies>
Me: <frantically presses Ctrl-C repeat... | 2025-06-16T17:01:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lcyac2/jannano4bq8_aint_playin_and_doesnt_have_time_for/ | Porespellar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcyac2 | false | null | t3_1lcyac2 | /r/LocalLLaMA/comments/1lcyac2/jannano4bq8_aint_playin_and_doesnt_have_time_for/ | false | false | self | 0 | null |
Local Image gen dead? | 78 | Is it just me, or has progress on local image generation entirely stagnated? No big releases in ages. The latest Flux release is a paid cloud service. | 2025-06-16T17:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcya8p/local_image_gen_dead/ | maglat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcya8p | false | null | t3_1lcya8p | /r/LocalLLaMA/comments/1lcya8p/local_image_gen_dead/ | false | false | self | 78 | null |
would a (multiple?) Quadro P2200(s) work for a test server? | 1 | I am trying to get a prototype local LLM setup at work before asking the bigwigs to spend real money. We have a few old designer computers lying around from our last round of upgrades, and I've got like 3 or 4 good Quadro P2200s.
question i have for you is, would this card suffice for testing purposes? if so, can i us... | 2025-06-16T16:59:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lcy7s2/would_amultiple_quadro_p2200s_work_for_a_test/ | ackley14 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcy7s2 | false | null | t3_1lcy7s2 | /r/LocalLLaMA/comments/1lcy7s2/would_amultiple_quadro_p2200s_work_for_a_test/ | false | false | self | 1 | null |
DeepSeek R1 0528 Ties Claude Opus 4 for #1 in WebDev Arena — [Ranks #6 Overall, #2 in Coding, #4 in Hard Prompts, & #5 in Math] | 72 | 2025-06-16T16:58:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lcy6fc/deepseek_r1_0528_ties_claude_opus_4_for_1_in/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcy6fc | false | null | t3_1lcy6fc | /r/LocalLLaMA/comments/1lcy6fc/deepseek_r1_0528_ties_claude_opus_4_for_1_in/ | false | false | 72 | null | ||
Which vectorDB do you use? and why? | 61 | I hate pinecone, why do you hate it? | 2025-06-16T16:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lcxcuv/which_vectordb_do_you_use_and_why/ | Expert-Address-2918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcxcuv | false | null | t3_1lcxcuv | /r/LocalLLaMA/comments/1lcxcuv/which_vectordb_do_you_use_and_why/ | false | false | self | 61 | null |
Dual 5090 vs RTX Pro 6000 for local LLM | 0 | Hi all, I am planning to build a new machine for local LLM, some fine-tuning and other deep learning tasks, wonder if I should go for Dual 5090 vs RTX Pro 6000? Thanks. | 2025-06-16T16:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwx8o/dual_5090_vs_rtx_pro_6000_for_local_llm/ | kitgary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwx8o | false | null | t3_1lcwx8o | /r/LocalLLaMA/comments/1lcwx8o/dual_5090_vs_rtx_pro_6000_for_local_llm/ | false | false | self | 0 | null |
With open source models, you simply get rid of the system prompt and dialogue construct, then ask for anything. | 1 | [removed] | 2025-06-16T15:59:28 | Longjumping_Spot5843 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcwmky | false | null | t3_1lcwmky | /r/LocalLLaMA/comments/1lcwmky/with_open_source_models_you_simply_get_rid_of_its/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ZdLn6ZF5IUmUJOQ8AHsrjV45AX2Z4Ok3d9qgi2HTK_Q', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png?width=108&crop=smart&auto=webp&s=7419cc8b79e7d6ed1a69b3b8ac860c6f92c27ed0', 'width': 108}, {'height': 110, 'url': 'https://preview.redd.it/u5c4lmorab7f1.png... | |
Finetuning the o3 model api | 1 | [removed] | 2025-06-16T15:53:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwgvi/finetuning_the_o3_model_api/ | Desperate_Bread1418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwgvi | false | null | t3_1lcwgvi | /r/LocalLLaMA/comments/1lcwgvi/finetuning_the_o3_model_api/ | false | false | self | 1 | null |
I wish for a local model with mood recognition | 2 | It would be interesting if we could have a local model that could understand the mood we were in by our voice and images it captured of us. | 2025-06-16T15:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lcwb3g/i_wish_for_a_local_model_with_mood_recognition/ | MinimumPC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcwb3g | false | null | t3_1lcwb3g | /r/LocalLLaMA/comments/1lcwb3g/i_wish_for_a_local_model_with_mood_recognition/ | false | false | self | 2 | null |
Kimi-Dev-72B | 152 | 2025-06-16T15:40:31 | https://huggingface.co/moonshotai/Kimi-Dev-72B | realJoeTrump | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lcw50r | false | null | t3_1lcw50r | /r/LocalLLaMA/comments/1lcw50r/kimidev72b/ | false | false | 152 | {'enabled': False, 'images': [{'id': '1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1kvJDTWOvntivVoW834gDLI4V0P6WaqmrGfz5xyEWNU.png?width=108&crop=smart&auto=webp&s=37c8a0c41b7b8284411d4b8a7496a73bf8623214', 'width': 108}, {'height': 116, 'url': 'h... | ||
Seeking help to create an autonomous local AI with evolving memory and voice interaction — Project Soléane | 1 | [removed] | 2025-06-16T15:31:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lcvw58/cherche_aide_pour_créer_une_ia_locale_autonome/ | DarkDamien777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcvw58 | false | null | t3_1lcvw58 | /r/LocalLLaMA/comments/1lcvw58/cherche_aide_pour_créer_une_ia_locale_autonome/ | false | false | self | 1 | null |
Seeking help to create an autonomous local AI with evolving memory and voice interaction - Project Soléane | 1 | [removed] | 2025-06-16T15:19:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lcvlag/cherche_aide_pour_créer_une_ia_locale_autonome/ | DarkDamien777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcvlag | false | null | t3_1lcvlag | /r/LocalLLaMA/comments/1lcvlag/cherche_aide_pour_créer_une_ia_locale_autonome/ | false | false | self | 1 | null |
I keep getting this error message but my vram is empty. Help! | 1 | [removed] | 2025-06-16T14:50:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lcuu7n/i_keep_getting_this_error_message_but_my_vram_is/ | TheLastAssassin_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcuu7n | false | null | t3_1lcuu7n | /r/LocalLLaMA/comments/1lcuu7n/i_keep_getting_this_error_message_but_my_vram_is/ | false | false | self | 1 | null |
MiniMax-M1 - a MiniMaxAI Collection | 124 | 2025-06-16T14:35:55 | https://huggingface.co/collections/MiniMaxAI/minimax-m1-68502ad9634ec0eeac8cf094 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lcuglb | false | null | t3_1lcuglb | /r/LocalLLaMA/comments/1lcuglb/minimaxm1_a_minimaxai_collection/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/KeaWNzZG0TAkUEwWyVGmsizl5dXuAOVMgFreGf02gFI.png?width=108&crop=smart&auto=webp&s=5b662213f7e2fd766f341f3bd350a6027c20a373', 'width': 108}, {'height': 116, 'url': 'h... | ||
Local Open Source VS Code Copilot model with MCP | 235 | You don't need remote APIs for a coding copilot, or the MCP Course! Set up a fully local IDE with MCP integration using Continue. In this tutorial, Continue guides you through setting it up.
This is what you need to do to take control of your copilot:
**-** Get the Continue extension from the VS Code marketplace ... | 2025-06-16T14:32:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lcud8j/local_open_source_vscode_copilot_model_with_mcp/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcud8j | false | null | t3_1lcud8j | /r/LocalLLaMA/comments/1lcud8j/local_open_source_vscode_copilot_model_with_mcp/ | false | false | self | 235 | {'enabled': False, 'images': [{'id': 'xhX7nVJZN7NhDmuill5vQz87XUaA6GG5ABaIHRSnVSo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xhX7nVJZN7NhDmuill5vQz87XUaA6GG5ABaIHRSnVSo.png?width=108&crop=smart&auto=webp&s=693859a5915a703e9fa01d389e2ab09d23b29c81', 'width': 108}], 'source': {'height': 12... |
How do we inference unsloth/DeepSeek-R1-0528-Qwen3-8B ? | 0 | Hey, so I have recently fine-tuned a model for general-purpose response generation to customer queries (FAQ-like). But my question is, this is my first time deploying a model like this. Can someone suggest some strategies? I read about LMDeploy, but that doesn't seem to work for this model (I haven't tried it, I just ... | 2025-06-16T14:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lctp48/how_do_we_inference_unslothdeepseekr10528qwen38b/ | No-Trip899 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lctp48 | false | null | t3_1lctp48 | /r/LocalLLaMA/comments/1lctp48/how_do_we_inference_unslothdeepseekr10528qwen38b/ | false | false | self | 0 | null |
Voice input in French, TTS output in English. How hard would this be to set up? | 2 | I work in a bilingual setting and some of my meetings are in French. I don't speak French. This isn't a huge problem but it got me thinking. It would be really cool if I could set up a system that would use my mic to listen to what was being said in the meeting and then output a text-to-speech translation into my noise... | 2025-06-16T14:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lctoan/voice_input_in_french_tts_output_in_english_how/ | LanceThunder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lctoan | false | null | t3_1lctoan | /r/LocalLLaMA/comments/1lctoan/voice_input_in_french_tts_output_in_english_how/ | false | false | self | 2 | null |
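A minimal offline sketch of the pipeline asked about above: OpenAI's Whisper can transcribe French audio directly into English text (task="translate"), and a local TTS engine can speak the result. The audio filename is a placeholder, and chunking a live microphone feed is left out.

```python
# Sketch: French meeting audio -> English text -> spoken English, fully local.
import whisper
import pyttsx3

stt = whisper.load_model("small")  # multilingual checkpoint
result = stt.transcribe("meeting_chunk.wav", task="translate")  # FR audio -> EN text

tts = pyttsx3.init()
tts.say(result["text"])
tts.runAndWait()
```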
Building a custom LLM for my PhD thesis | 1 | [removed] | 2025-06-16T13:43:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lct5oz/building_a_custom_llm_for_my_phd_thesis/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lct5oz | false | null | t3_1lct5oz | /r/LocalLLaMA/comments/1lct5oz/building_a_custom_llm_for_my_phd_thesis/ | false | false | self | 1 | null |
HF Datasets in Spark in one line of code | 1 | [removed] | 2025-06-16T13:12:27 | qlhoest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcsfz8 | false | null | t3_1lcsfz8 | /r/LocalLLaMA/comments/1lcsfz8/hf_datasets_in_spark_in_one_line_of_code/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'qon3tb1cea7f1', 'resolutions': [{'height': 49, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=108&crop=smart&auto=webp&s=9acdc5c1359dcbfd9827ef9da924f18b65b44765', 'width': 108}, {'height': 99, 'url': 'https://preview.redd.it/qon3tb1cea7f1.jpeg?width=216&crop=smart&auto=we... | |
Tesla m40 12gb vs gtx 1070 8gb | 1 | I'm not sure which one to choose. Which one would you recommend? | 2025-06-16T13:03:10 | https://www.reddit.com/r/LocalLLaMA/comments/1lcs8mw/tesla_m40_12gb_vs_gtx_1070_8gb/ | EdwardRocks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcs8mw | false | null | t3_1lcs8mw | /r/LocalLLaMA/comments/1lcs8mw/tesla_m40_12gb_vs_gtx_1070_8gb/ | false | false | self | 1 | null |
Beginner | 0 | Yesterday I found out that you can run LLMs locally, but I have a lot of questions. I'll list them here.
1. What is it?
2. What is it used for?
3. Is it better than a normal (non-local) LLM?
4. What is the best app for Android?
5. What is the best LLM that I can use on my Samsung Galaxy A35 5g?
6. Are there i... | 2025-06-16T12:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lcs2k0/beginner/ | EducationalCorner402 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcs2k0 | false | null | t3_1lcs2k0 | /r/LocalLLaMA/comments/1lcs2k0/beginner/ | false | false | self | 0 | null |
The fine line between helpful AI and creepy AI | 0 | Been thinking about why some AI interactions feel supportive while others make our skin crawl. That line between helpful and creepy is thinner than most developers realize.
Last week, a friend showed me their wellness app's AI coach. It remembered their dog's name from a conversation three months ago and asked "How's ... | 2025-06-16T12:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrtiu/the_fine_line_between_helpful_ai_and_creepy_ai/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrtiu | false | null | t3_1lcrtiu | /r/LocalLLaMA/comments/1lcrtiu/the_fine_line_between_helpful_ai_and_creepy_ai/ | false | false | self | 0 | null |
Just finished recording 29 videos on "How to Build DeepSeek from Scratch" | 261 | Playlist link: [https://www.youtube.com/playlist?list=PLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms](https://www.youtube.com/playlist?list=PLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms)
Here are the 29 videos and their title:
(1) DeepSeek series introduction
(2) DeepSeek basics
(3) Journey of a token into the LLM architecture
(4) Atte... | 2025-06-16T12:43:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrt1k/just_finished_recording_29_videos_on_how_to_build/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrt1k | false | null | t3_1lcrt1k | /r/LocalLLaMA/comments/1lcrt1k/just_finished_recording_29_videos_on_how_to_build/ | false | false | self | 261 | {'enabled': False, 'images': [{'id': 'YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YYKOMi4vnYm0aGKFEYu9iwdxu8LNUkmNgkG8xdUdmuw.jpeg?width=108&crop=smart&auto=webp&s=e0ef8b5130e9dadc053e425d769f8da1d826210e', 'width': 108}, {'height': 121, 'url': '... |
Just finished the Build DeepSeek from Scratch Playlist on Youtube | 29 high quality videos | 1 | [removed] | 2025-06-16T12:39:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrqeo/just_finished_the_build_deepseek_from_scratch/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrqeo | false | {'oembed': {'description': 'Share your videos with friends, family, and the world', 'height': 450, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fvideoseries%3Flist%3DPLPTV0NXA_ZSiOpKKlHCyOq9lnp-dLvlms&display_name=YouTube&url=https%3A%... | t3_1lcrqeo | /r/LocalLLaMA/comments/1lcrqeo/just_finished_the_build_deepseek_from_scratch/ | false | false | 1 | null | |
what is the most powerful model | 1 | [removed] | 2025-06-16T12:33:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lcrlnk/what_is_the_most_powerfull_model/ | Maximum_Piece2610 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcrlnk | false | null | t3_1lcrlnk | /r/LocalLLaMA/comments/1lcrlnk/what_is_the_most_powerfull_model/ | false | false | self | 1 | null |
Lol | 1 | 2025-06-16T12:02:08 | JAILBREAKSGOATED | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcqz7k | false | null | t3_1lcqz7k | /r/LocalLLaMA/comments/1lcqz7k/lol/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6u6ce2sg4a7f1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=108&crop=smart&auto=webp&s=4c5d2472cca82b2178669dfc41069008185ba072', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/6u6ce2sg4a7f1.jpeg?width=216&crop=smart&auto=... | ||
FuturixAI - Cost-Effective Online RFT with Plug-and-Play LoRA Judge | 7 | A tiny LoRA adapter and a simple JSON prompt turn a 7B LLM into a powerful reward model that beats much larger ones - saving massive compute. It even helps a 7B model outperform top 70B baselines on GSM-8K using online RLHF | 2025-06-16T11:15:54 | https://www.futurixai.com/publications | Aquaaa3539 | futurixai.com | 1970-01-01T00:00:00 | 0 | {} | 1lcq4gt | false | null | t3_1lcq4gt | /r/LocalLLaMA/comments/1lcq4gt/futurixai_costeffective_online_rft_with/ | false | false | default | 7 | null |
"World Model" a Step Towards AGI? | 1 | V-JEPA 2: Is Meta's Open-Weight
Meta AI has released V-JEPA 2, a self-supervised "world model" trained on over a million hours of raw video data. This model aims to learn an intuitive understanding of the physical world, with notable performance metrics across several domains, and is being released as open-weights ... | 2025-06-16T11:03:59 | https://v.redd.it/8qys0ew2u97f1 | Rare-Programmer-1747 | /r/LocalLLaMA/comments/1lcpwx6/world_model_a_step_towards_agi/ | 1970-01-01T00:00:00 | 0 | {} | 1lcpwx6 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8qys0ew2u97f1/DASHPlaylist.mpd?a=1752793446%2CYjNhMDExNmFhYzA0NDY0YmUzMmE3MzMwNzY5ZDRkNDY4ODllYWNlNjExZDJjOTFhYzkxOWNlYjQzOTkwZWZjNA%3D%3D&v=1&f=sd', 'duration': 162, 'fallback_url': 'https://v.redd.it/8qys0ew2u97f1/DASH_1080.mp4?source=fallback', '... | t3_1lcpwx6 | /r/LocalLLaMA/comments/1lcpwx6/world_model_a_step_towards_agi/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Z2VqdzMweDJ1OTdmMSrG8eCPwrLUP6p8yHI14PztqqXQADEWyEV7iN-mE8Mm.png?width=108&crop=smart&format=pjpg&auto=webp&s=26b02e3bc0e62b375e834806739aa63ea9fe6... | |
Confused between ReAct and MCP for personal usage | 1 | [removed] | 2025-06-16T10:29:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lcpbtc/confused_between_react_and_mcp_for_personal_usage/ | arnab_best | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcpbtc | false | null | t3_1lcpbtc | /r/LocalLLaMA/comments/1lcpbtc/confused_between_react_and_mcp_for_personal_usage/ | false | false | self | 1 | null |
Looking for Unfiltered LLM for making AI Character dialogue | 7 | I'm just gonna be honest: I want to get dialogue for character chatbots, but unfiltered is what I need. That's pretty much it | 2025-06-16T10:18:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lcp5rg/looking_for_unfiltered_llm_for_making_ai/ | mohmar2010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcp5rg | false | null | t3_1lcp5rg | /r/LocalLLaMA/comments/1lcp5rg/looking_for_unfiltered_llm_for_making_ai/ | false | false | self | 7 | null |
Using Knowledge Graphs to create personas ? | 6 | I'm exploring using a Knowledge Graph (KG) to create persona(s). The goal is to create a chat companion with a real, queryable memory.
I have a few questions,
* **Has anyone tried this?** What were your experiences and was it effective?
* **What's the best method?** My first thought is a RAG setup that pulls facts fr... | 2025-06-16T09:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoewz/using_knowledge_graphs_to_create_personas/ | TheAmendingMonk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoewz | false | null | t3_1lcoewz | /r/LocalLLaMA/comments/1lcoewz/using_knowledge_graphs_to_create_personas/ | false | false | self | 6 | null |
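One minimal way to ground the RAG-over-KG idea from the post above: store persona facts as subject-predicate-object triples and inject matching ones into the system prompt. The triples and the keyword-matching rule are toy assumptions.

```python
# Toy sketch: a persona knowledge graph as triples, with naive retrieval.
triples = [
    ("Mira", "occupation", "lighthouse keeper"),
    ("Mira", "fears", "open water"),
    ("Mira", "dog", "Biscuit"),
]

def persona_context(user_msg: str) -> str:
    # Naive keyword match; a real system would use embeddings or a graph query.
    msg = user_msg.lower()
    hits = [t for t in triples if t[1] in msg or t[2].lower() in msg]
    facts = hits or triples  # fall back to the whole persona if nothing matches
    return "You are Mira. Known facts: " + "; ".join(f"{s} {p} {o}" for s, p, o in facts)

print(persona_context("Tell me about your dog"))
```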
Why are API requests to a local LLM on LM Studio slow? | 1 | [removed] | 2025-06-16T09:29:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoecy/why_are_api_requests_to_a_local_llm_on_lm_studio/ | Ultimonumber36 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoecy | false | null | t3_1lcoecy | /r/LocalLLaMA/comments/1lcoecy/why_are_api_requests_to_a_local_llm_on_lm_studio/ | false | false | self | 1 | null |
Using Knowledge Graphs to create personas ? | 1 | [removed] | 2025-06-16T09:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lcoec5/using_knowledge_graphs_to_create_personas/ | Fluid-Beyond3878 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcoec5 | false | null | t3_1lcoec5 | /r/LocalLLaMA/comments/1lcoec5/using_knowledge_graphs_to_create_personas/ | false | false | self | 1 | null |
Recommendations for Local LLMs (Under 70B) with Cline/Roo Code | 23 | I'd like to know what, if any, are some good local models under 70b that can handle tasks well when using Cline/Roo Code. I’ve tried a *lot* to use Cline or Roo Code for various things, and most of the time it's simple tasks, but the agents often get stuck in loops or make things worse. It feels like the size of the in... | 2025-06-16T09:20:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lco9ik/recommendations_for_local_llms_under_70b_with/ | AMOVCS | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lco9ik | false | null | t3_1lco9ik | /r/LocalLLaMA/comments/1lco9ik/recommendations_for_local_llms_under_70b_with/ | false | false | self | 23 | null |
Qwen releases official MLX quants for Qwen3 models in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 | 442 | 🚀 Excited to launch Qwen3 models in MLX format today!
Now available in 4 quantization levels: 4bit, 6bit, 8bit, and BF16 — Optimized for MLX framework.
👉 Try it now!
X post: https://x.com/alibaba_qwen/status/1934517774635991412?s=46
Hugging Face: https://huggingface.co/collections/Qwen/qwen3-67dd247413f0e2e4f653... | 2025-06-16T07:54:58 | ResearchCrafty1804 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lcn0vz | false | null | t3_1lcn0vz | /r/LocalLLaMA/comments/1lcn0vz/qwen_releases_official_mlx_quants_for_qwen3/ | false | false | default | 442 | {'enabled': True, 'images': [{'id': '5jpskt9dw87f1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=108&crop=smart&auto=webp&s=e63f96d14e61383a0f70b9af465eb63bb8732b2e', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/5jpskt9dw87f1.jpeg?width=216&crop=smart&auto=w... | |
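For reference, loading one of these quants with mlx-lm looks roughly like this on Apple silicon; the exact repo id is an assumption, pick one from the collection linked above.

```python
# Sketch: running a Qwen3 MLX quant with mlx-lm.
from mlx_lm import load, generate

model, tokenizer = load("Qwen/Qwen3-8B-MLX-4bit")  # assumed repo id
text = generate(model, tokenizer, prompt="Briefly explain MoE models.", max_tokens=128)
print(text)
```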
Run Qwen3-235B-A22B with ktransformers on AMD ROCm? | 2 | Hey!
Has anyone managed to run models successfully on AMD/ROCm Linux with ktransformers? Can you share a Docker image or instructions?
*Tensor parallelism is needed.* | 2025-06-16T07:22:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lcmk3s/run_qwen3235ba22b_with_ktransformers_on_amd_rocm/ | djdeniro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcmk3s | false | null | t3_1lcmk3s | /r/LocalLLaMA/comments/1lcmk3s/run_qwen3235ba22b_with_ktransformers_on_amd_rocm/ | false | false | self | 2 | null |
Why do we have Q6_K for models but not Q6_0 for KV cache? | 2 | We have tons of model quantization options: Q2_K, Q6_K, etc. But for KV cache we're stuck with only Q8_0 and Q4_0? The jump between the two can be fairly brutal, so why don't we have a Q5~6_0 KV cache as a middle ground for long context without destroying quality?
Is there a technical reason or did developers ju... | 2025-06-16T06:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lcm0ys/why_do_we_have_q6_k_for_models_but_not_q6_0_for/ | Bimbam_tm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcm0ys | false | null | t3_1lcm0ys | /r/LocalLLaMA/comments/1lcm0ys/why_do_we_have_q6_k_for_models_but_not_q6_0_for/ | false | false | self | 2 | null |
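To make the Q8_0-to-Q4_0 gap concrete, a quick size calculation under assumed GGUF-style rates (Q8_0 at roughly 8.5 bits per value, Q4_0 at 4.5, and a hypothetical Q6_0 at 6.5); the model shape is a Llama-3-8B-like assumption.

```python
# KV cache stores 2 (K and V) * layers * kv_heads * head_dim values per token.
layers, kv_heads, head_dim, ctx = 32, 8, 128, 32768  # assumed model shape
values_per_token = 2 * layers * kv_heads * head_dim

for name, bits_per_value in [("Q8_0", 8.5), ("hypothetical Q6_0", 6.5), ("Q4_0", 4.5)]:
    gib = values_per_token * ctx * (bits_per_value / 8) / 2**30
    print(f"{name}: {gib:.2f} GiB at {ctx} context")
```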
[Fine-Tuning] [Structured Output] Fine-Tuning for JSON Extraction – Need Help With the Right Approach | 1 | [removed] | 2025-06-16T06:38:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lclwqi/finetuning_structured_output_finetuning_for_json/ | LieDistinct857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lclwqi | false | null | t3_1lclwqi | /r/LocalLLaMA/comments/1lclwqi/finetuning_structured_output_finetuning_for_json/ | false | false | self | 1 | null |
Do I snatch this? | 1 | [removed] | 2025-06-16T06:19:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lclmfb/do_i_snatch_this/ | ketgoodgame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lclmfb | false | null | t3_1lclmfb | /r/LocalLLaMA/comments/1lclmfb/do_i_snatch_this/ | false | false | 1 | null | 
Does llama.cpp save chats? | 0 | I know Ollama saves chat history in its history file. Does llama.cpp do something similar, or is the chat gone forever when I close it? | 2025-06-16T06:11:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lcli97/does_llamacpp_save_chats/ | LeiMoshen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcli97 | false | null | t3_1lcli97 | /r/LocalLLaMA/comments/1lcli97/does_llamacpp_save_chats/ | false | false | self | 0 | null |
An experimental yet useful on-device Android LLM Assistant | 16 | I saw the recent post (at last) where the OP was looking for a digital assistant for Android and didn't want to access the LLM through any other app's interface. After looking around for something like this, I'm happy to say that I've managed to build one myself.
My Goal: To have a local LLM that can instantly ... | 2025-06-16T05:43:26 | https://v.redd.it/s7noh3oh787f1 | abskvrm | /r/LocalLLaMA/comments/1lcl2m1/an_experimental_yet_useful_ondevice_android_llm/ | 1970-01-01T00:00:00 | 0 | {} | 1lcl2m1 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/s7noh3oh787f1/DASHPlaylist.mpd?a=1752776631%2CN2QwMzU5NjQ0YmI0YWVhNzk0M2VlYzI1ZThhMDM2M2VhODcxNGM4OTg5ZDkxODNkNzk1ZGMwYjMwYjYzMmMzZQ%3D%3D&v=1&f=sd', 'duration': 118, 'fallback_url': 'https://v.redd.it/s7noh3oh787f1/DASH_1080.mp4?source=fallback', '... | t3_1lcl2m1 | /r/LocalLLaMA/comments/1lcl2m1/an_experimental_yet_useful_ondevice_android_llm/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MTFkemE2b2g3ODdmMTOQZ3728JJZIuKLMMMDfapcgNjPcOG-8WNcw_29393l.png?width=108&crop=smart&format=pjpg&auto=webp&s=03a59e155b1badf47724ccfd94a11ab60ee9... | |
Do AI wrapper startups have a real future? | 159 | I’ve been thinking about how many startups right now are essentially just wrappers around GPT or Claude, where they take the base model, add a nice UI or some prompt chains, and maybe tailor it to a niche, all while calling it a product.
Some of them are even making money, but I keep wondering… how long can that reall... | 2025-06-16T05:26:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lcksww/do_ai_wrapper_startups_have_a_real_future/ | Samonji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcksww | false | null | t3_1lcksww | /r/LocalLLaMA/comments/1lcksww/do_ai_wrapper_startups_have_a_real_future/ | false | false | self | 159 | null |
Chatterbox GUI | 8 | A guy I know from AMIA posted a project on LinkedIn where he’s made a GUI for Chatterbox to generate audiobooks. It does the generation, verifies it with Whisper, and allows you to individually regenerate things that aren’t working. It took about 5 minutes for me to load it on my machine, another 5 to have all the models ... | 2025-06-16T04:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lcjxk2/chatterbox_gui/ | olympics2022wins | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcjxk2 | false | null | t3_1lcjxk2 | /r/LocalLLaMA/comments/1lcjxk2/chatterbox_gui/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LMIn0iKEIrKEXiTJ9TSNs4f3dr3gtGhL3Xe75KqK9DA.png?width=108&crop=smart&auto=webp&s=ef4e08c300d5b9292a1dc4ec21ed31ae765f3c32', 'width': 108}, {'height': 108, 'url': 'h... 
llama-server has multimodal audio input, so I tried it | 3 | I had a nice, simple walkthrough here, but it keeps getting auto-modded, so you'll have to go off-site to view it. Sorry. [https://github.com/themanyone/FindAImage](https://github.com/themanyone/FindAImage) | 2025-06-16T04:30:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lcjvfw/llamaserver_has_multimodal_audio_input_so_i_tried/ | DesignToWin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcjvfw | false | null | t3_1lcjvfw | /r/LocalLLaMA/comments/1lcjvfw/llamaserver_has_multimodal_audio_input_so_i_tried/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dqVlPcEJWR1rFCsffMQrVAGl4S_G9Ax7pEyLF48SX_8.png?width=108&crop=smart&auto=webp&s=e5915ffc1e30c3c6c4818c5912a7ed4a7ebec952', 'width': 108}, {'height': 108, 'url': 'h... 
[D] Evolving AI: The Imperative of Consciousness, Evolutionary Pressure, and Biomimicry | 1 | [removed] | 2025-06-16T02:38:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lchuqm/d_evolving_ai_the_imperative_of_consciousness/ | Pale-Entertainer-386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchuqm | false | null | t3_1lchuqm | /r/LocalLLaMA/comments/1lchuqm/d_evolving_ai_the_imperative_of_consciousness/ | false | false | self | 1 | null |
Trouble installing llama.cpp locally on MacBook Air — Need some help | 1 | [removed] | 2025-06-16T02:15:05 | https://www.reddit.com/r/LocalLLaMA/comments/1lchet7/trouble_installing_llamacpp_locally_on_macbook/ | Mountain-Spell-941 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchet7 | false | null | t3_1lchet7 | /r/LocalLLaMA/comments/1lchet7/trouble_installing_llamacpp_locally_on_macbook/ | false | false | self | 1 | null |
What’s your current tech stack | 52 | I’m using Ollama for local models (but I’ve been following the threads that talk about ditching it) and LiteLLM as a proxy layer so I can connect to OpenAI and Anthropic models too. I have a Postgres database for LiteLLM to use. All but Ollama is orchestrated through a docker compose and Portainer for docker management... | 2025-06-16T02:08:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lchamn/whats_your_current_tech_stack/ | hokies314 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lchamn | false | null | t3_1lchamn | /r/LocalLLaMA/comments/1lchamn/whats_your_current_tech_stack/ | false | false | self | 52 | null |
Teaching LLama 3.2 to Reason via Its Mistakes — Reflection Fine-Tuning Experiments | 1 | [removed] | 2025-06-16T01:35:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lcgneo/teaching_llama_32_to_reason_via_its_mistakes/ | cyber-inside | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcgneo | false | null | t3_1lcgneo | /r/LocalLLaMA/comments/1lcgneo/teaching_llama_32_to_reason_via_its_mistakes/ | false | false | self | 1 | null |
🧬🧫🦠 Introducing project hormones: Runtime behavior modification | 33 | Hi all!
Bored of endless repetitive behavior of LLMs? Want to see your coding agent get insecure and shut up with its endless confidence after it made the same mistake seven times?
Inspired both by [drugs](https://www.reddit.com/r/LocalLLaMA/comments/18toidc/stop_messing_with_sampling_parameters_and_just/) and by my ... | 2025-06-16T01:33:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lcglze/introducing_project_hormones_runtime_behavior/ | Combinatorilliance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcglze | false | null | t3_1lcglze | /r/LocalLLaMA/comments/1lcglze/introducing_project_hormones_runtime_behavior/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/eFMa-V1uKt_0fEaeLyZatpb8QwmDWeu0cd35rQ-BtGk.jpeg?width=108&crop=smart&auto=webp&s=834c413f42993ddd277061ce386e2876b2d0aaea', 'width': 108}, {'height': 112, 'url': '... |
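Reading the premise above, one way such runtime modulation could look; this is purely an interpretation, not the project's actual mechanism: repeated failures raise a "stress" signal that lowers sampling temperature.

```python
# Toy interpretation of hormone-like runtime modulation: failures raise
# stress, stress lowers sampling temperature (more cautious decoding).
class Hormones:
    def __init__(self) -> None:
        self.stress = 0.0

    def record(self, success: bool) -> None:
        # Failures spike stress; successes let it decay.
        self.stress = self.stress * 0.5 if success else min(1.0, self.stress + 0.3)

    def temperature(self, base: float = 0.8) -> float:
        return max(0.1, base * (1.0 - self.stress))

h = Hormones()
for outcome in [False, False, True]:
    h.record(outcome)
    print(f"temperature -> {h.temperature():.2f}")
```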
Teaching LLaMa 3.2 to Reason via Its Own Mistake — Reflection Fine-Tuning Experiment | 1 | [removed] | 2025-06-16T01:32:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lcglnx/teaching_llama_32_to_reason_via_its_own_mistake/ | cyber-inside | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcglnx | false | null | t3_1lcglnx | /r/LocalLLaMA/comments/1lcglnx/teaching_llama_32_to_reason_via_its_own_mistake/ | false | false | self | 1 | null |
Is it possible to give Gemma 3 or any other model on-device screen awareness? | 1 | I got Gemma 3 working on my PC last night. It is very fun to have a local LLM, and now I am trying to find actual use cases that could benefit my workflow. Is it possible to give it on-screen awareness and allow the model to interact with programs on the PC?
| 2025-06-16T00:57:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfxg3/is_it_possible_to_give_gemma_3_or_any_other_model/ | Lord_Greedyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfxg3 | false | null | t3_1lcfxg3 | /r/LocalLLaMA/comments/1lcfxg3/is_it_possible_to_give_gemma_3_or_any_other_model/ | false | false | self | 1 | null |
Augmentoolkit just got a major update - huge advance for dataset generation and fine-tuning | 39 | Just wanted to share that Augmentoolkit got a significant update that's worth checking out if you're into fine-tuning or dataset generation. Augmentoolkit 3.0 is a major upgrade from the previous version.
[https://github.com/e-p-armstrong/augmentoolkit](https://github.com/e-p-armstrong/augmentoolkit)
For context ... | 2025-06-16T00:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfvw8/augmentoolkit_just_got_a_major_update_huge/ | mj3815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfvw8 | false | null | t3_1lcfvw8 | /r/LocalLLaMA/comments/1lcfvw8/augmentoolkit_just_got_a_major_update_huge/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Test post | 1 | **Over the past year and a half** I've been working on the problem of **factual finetuning** -- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use datase... | 2025-06-16T00:52:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lcftun/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcftun | false | null | t3_1lcftun | /r/LocalLLaMA/comments/1lcftun/test_post/ | false | false | self | 1 | null |
Test post, finally figured out what gets these autodeleted | 1 | [removed] | 2025-06-16T00:51:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lcftal/test_post_finally_figured_out_what_gets_these/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcftal | false | null | t3_1lcftal | /r/LocalLLaMA/comments/1lcftal/test_post_finally_figured_out_what_gets_these/ | false | false | self | 1 | null |
Test post | 1 | [deleted] | 2025-06-16T00:50:31 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfsj8 | false | null | t3_1lcfsj8 | /r/LocalLLaMA/comments/1lcfsj8/test_post/ | false | false | default | 1 | null | ||
Test post | 1 | [removed] | 2025-06-16T00:47:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfqm2/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfqm2 | false | null | t3_1lcfqm2 | /r/LocalLLaMA/comments/1lcfqm2/test_post/ | false | false | self | 1 | null |
Make your local LLM suffer | 1 | [removed] | 2025-06-16T00:47:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfqih/make_your_local_llm_suffer/ | Ok_Ninja7526 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfqih | false | null | t3_1lcfqih | /r/LocalLLaMA/comments/1lcfqih/make_your_local_llm_suffer/ | false | false | self | 1 | null |
Test post | 1 | [removed] | 2025-06-16T00:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfq8y/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfq8y | false | null | t3_1lcfq8y | /r/LocalLLaMA/comments/1lcfq8y/test_post/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPQ2u5frOUlzj4usVYnSMnC3vKmiVL4Pn4TxgH87uTw.png?width=108&crop=smart&auto=webp&s=1edc01ca2691dfef3b84d60ab40bcad7bce8a592', 'width': 108}, {'height': 108, 'url': 'h... |
Test post | 1 | **Over the past year and a half** I've been working on the problem of **factual finetuning** -- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use datase... | 2025-06-16T00:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfpvn/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfpvn | false | null | t3_1lcfpvn | /r/LocalLLaMA/comments/1lcfpvn/test_post/ | false | false | self | 1 | null |
Test post | 1 | **Over the past year and a half** I've been working on the problem of **factual finetuning** -- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use datase... | 2025-06-16T00:46:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfpfn/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfpfn | false | null | t3_1lcfpfn | /r/LocalLLaMA/comments/1lcfpfn/test_post/ | false | false | self | 1 | null |
Augmentoolkit 3.0: 7 months of work, MIT License, Specialist AI Training | 1 | [deleted] | 2025-06-16T00:45:12 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfoue | false | null | t3_1lcfoue | /r/LocalLLaMA/comments/1lcfoue/augmentoolkit_30_7_months_of_work_mit_license/ | false | false | default | 1 | null | ||
Test post | 1 | **Over the past year and a half** I've been working on the problem of **factual finetuning** -- **training an LLM on new facts** so that it learns those facts, essentially extending its knowledge cutoff. Now that I've made significant progress on the problem, I'm releasing **Augmentoolkit 3.0** — an easy-to-use datase... | 2025-06-16T00:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfntv/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfntv | false | null | t3_1lcfntv | /r/LocalLLaMA/comments/1lcfntv/test_post/ | false | false | self | 1 | null |
Test Post | 1 | [removed] | 2025-06-16T00:42:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lcfn17/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcfn17 | false | null | t3_1lcfn17 | /r/LocalLLaMA/comments/1lcfn17/test_post/ | false | false | self | 1 | null |
Test Post | 1 | [deleted] | 2025-06-16T00:41:40 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfmd9 | false | null | t3_1lcfmd9 | /r/LocalLLaMA/comments/1lcfmd9/test_post/ | false | false | default | 1 | null | ||
Test post | 1 | [removed] | 2025-06-16T00:40:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lcflol/test_post/ | Heralax_Tekran | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lcflol | false | null | t3_1lcflol | /r/LocalLLaMA/comments/1lcflol/test_post/ | false | false | self | 1 | null |
Test post | 1 | [deleted] | 2025-06-16T00:39:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1lcfl1r | false | null | t3_1lcfl1r | /r/LocalLLaMA/comments/1lcfl1r/test_post/ | false | false | default | 1 | null |