title string | score int64 | selftext string | created timestamp[ns] ⌀ | url string | author string | domain string | edited timestamp[ns] | gilded int64 | gildings string | id string | locked bool | media string ⌀ | name string | permalink string | spoiler bool | stickied bool | thumbnail string ⌀ | ups int64 | preview string ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Has anyone attempted to use K40 12GB GPUs? They are quite cheap | 2 | I see old K40 GPUs going for around $34. I know they consume a lot of power, but are they compatible with anything LLM-related without requiring a lot of tinkering to get them to work at all? It's Kepler, so very old, but $34 is cheap enough to make me want to try and experiment with it. | 2025-06-11T23:28:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l97jdb/has_anyone_attempted_to_use_k40_12gb_gpus_they/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97jdb | false | null | t3_1l97jdb | /r/LocalLLaMA/comments/1l97jdb/has_anyone_attempted_to_use_k40_12gb_gpus_they/ | false | false | self | 2 | null |
OpenAI performs KYC to use the latest o3-pro via API | 97 | This afternoon I cobbled together a test script to mess around with o3-pro. Looked nice, so nice that I came back this evening to give it another go. The OpenAI SDK throws an error in the terminal, prompting me with "Your organization must be verified to stream this model."
Alright, I go to the OpenAI platform and lo and b... | 2025-06-11T23:24:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l97fst/openai_performs_kyc_to_use_the_latest_o3pro_via/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l97fst | false | null | t3_1l97fst | /r/LocalLLaMA/comments/1l97fst/openai_performs_kyc_to_use_the_latest_o3pro_via/ | false | false | self | 97 | null |
Why doesn't Apple invest in Mistral? | 0 | We saw the Microsoft/OpenAI and Amazon/Anthropic partnerships. Why doesn't Apple do the same with Mistral? What is preventing it?
| 2025-06-11T22:33:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l96aqf/why_doesnt_apple_invest_in_mistral/ | Objective_Lab_3182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96aqf | false | null | t3_1l96aqf | /r/LocalLLaMA/comments/1l96aqf/why_doesnt_apple_invest_in_mistral/ | false | false | self | 0 | null |
Open WebUI MCP? | 6 | Has anyone had success using “MCP” with Open WebUI? I’m currently serving Llama 3.1 8B Instruct via vLLM, and the tool calling and subsequent utilization have been abysmal. Most of the blogs I see utilizing MCP seem to be using these frontier models, and I have to believe it’s possible locally. There’s always the chanc... | 2025-06-11T22:32:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l96aku/open_webui_mcp/ | memorial_mike | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96aku | false | null | t3_1l96aku | /r/LocalLLaMA/comments/1l96aku/open_webui_mcp/ | false | false | self | 6 | null |
Chatterbox - open-source SOTA TTS by resemble.ai | 56 | https://github.com/resemble-ai/chatterbox | 2025-06-11T22:32:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l96ag1/chatterbox_opensource_sota_tts_by_resembleai/ | Otis43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l96ag1 | false | null | t3_1l96ag1 | /r/LocalLLaMA/comments/1l96ag1/chatterbox_opensource_sota_tts_by_resembleai/ | false | false | self | 56 | {'enabled': False, 'images': [{'id': 'wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wKZzEXOJYkBr9dzybJhEftWoZrKKTptoZJVUfKjd_XY.png?width=108&crop=smart&auto=webp&s=b836baa00f41475485167c4a531cd19ca6b36e52', 'width': 108}, {'height': 108, 'url': 'h... |
Magistral: Transparent, Multilingual Reasoning for Global Applications | LLM Radar | 1 | [removed] | 2025-06-11T22:16:10 | https://open-llm-radar.com/news/magistral-transparent-multilingual-reasoning-for-global-applications | Humble_String6885 | open-llm-radar.com | 1970-01-01T00:00:00 | 0 | {} | 1l95wkr | false | null | t3_1l95wkr | /r/LocalLLaMA/comments/1l95wkr/magistral_transparent_multilingual_reasoning_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'N2WElaYWnGyyW42lixQwMmRokSDdFtxCk-VcVh40SfI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LGK0M7OllwxmpTJEYFc9b8qWNnmAW7liTxiA1mJ3PBE.jpg?width=108&crop=smart&auto=webp&s=ebd50bebb463190be020448e995b3b173dd4a35a', 'width': 108}, {'height': 108, 'url': 'h... | |
Advice on running 70-200B LLM Local | 1 | [removed] | 2025-06-11T22:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l95oi1/advice_on_running_70200b_llm_local/ | Web3Vortex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l95oi1 | false | null | t3_1l95oi1 | /r/LocalLLaMA/comments/1l95oi1/advice_on_running_70200b_llm_local/ | false | false | self | 1 | null |
Local LLM Bible | 1 | [removed] | 2025-06-11T22:01:04 | https://www.reddit.com/r/LocalLLaMA/comments/1l95jc0/local_llm_bible/ | fromqjvoel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l95jc0 | false | null | t3_1l95jc0 | /r/LocalLLaMA/comments/1l95jc0/local_llm_bible/ | false | false | self | 1 | null |
Best site for inferencing medgemma 27B? | 10 | I know it's locallama: I tried the 4B model on LM Studio and got scared that a 5GB file is a better doctor than I will ever be, so now I want to try the 27B model to feel even worse. My poor 3060 with 6 GB VRAM will never handle it, and I did not find it on AI Studio or on OpenRouter. I tried with Vertex AI but it's a pa... | 2025-06-11T21:47:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l957nz/best_site_for_inferencing_medgemma_27b/ | sebastianmicu24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l957nz | false | null | t3_1l957nz | /r/LocalLLaMA/comments/1l957nz/best_site_for_inferencing_medgemma_27b/ | false | false | self | 10 | null |
Accessing ios26 local LLM via React Native | 0 | I'm downloading iOS 26 tonight! I’m not an Xcode or Swift guy. What do you guys think about soon having a native React module I can install to allow React Native to access and play with the LLM in my Expo React Native apps?
I’m super stoked! Particularly to test it out to detect objects in photos.
| 2025-06-11T21:23:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l94mca/accessing_ios26_local_llm_via_react_native/ | Puzzleheaded-Fly4322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l94mca | false | null | t3_1l94mca | /r/LocalLLaMA/comments/1l94mca/accessing_ios26_local_llm_via_react_native/ | false | false | self | 0 | null |
DayWise – A Minimal Daily Task Tracker You Can Fully Customize | 1 | [removed] | 2025-06-11T21:20:10 | https://github.com/AhmedOsamaMath/daywise | AhmedOsamaMath | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l94jfx | false | null | t3_1l94jfx | /r/LocalLLaMA/comments/1l94jfx/daywise_a_minimal_daily_task_tracker_you_can/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4-nV4zbLwfnh4eHHDjNUWXmCXam-kV8prcVul_k5peA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/k1P5n9ySvPv_KNkHk7uGXRC7g_irzkla6zDSJozR0u8.jpg?width=108&crop=smart&auto=webp&s=5f5653b4f0bcd06b74f551c7d4f84d076c00b838', 'width': 108}, {'height': 108, 'url': 'h... | |
LiteRT-LM - (An early version of) A C++ library to efficiently run Gemma-3N across various platform | 34 | 2025-06-11T20:49:26 | https://github.com/google-ai-edge/LiteRT-LM | cpldcpu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l93ry3 | false | null | t3_1l93ry3 | /r/LocalLLaMA/comments/1l93ry3/litertlm_an_early_version_of_a_c_library_to/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'whY1sTyeUUN2XlBpEOhXnhmIZo32bTeJQwAwjoq_qkU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5ALlCNjnyb01JF5ItOkTd8Y8O1vVBS6s20XdxDI3_Nc.jpg?width=108&crop=smart&auto=webp&s=b391038ab72c5c9198c134d039c079f8b90e94a7', 'width': 108}, {'height': 108, 'url': 'h... | ||
Open Source LangSmith alternative with LangGraph visualization. | 1 | [removed] | 2025-06-11T20:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l93d4h/open_source_langsmith_alternative_with_langgraph/ | Upstairs-Spell7521 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l93d4h | false | null | t3_1l93d4h | /r/LocalLLaMA/comments/1l93d4h/open_source_langsmith_alternative_with_langgraph/ | false | false | 1 | null | |
As some people asked me to share some details, here is how I got to llama.cpp, llama-swap and Open Webui to fully replace Ollama. | 47 | Sorry to make another post about this, but as some people asked me for more details and the reply was getting lengthy, I decided to write another post. (I
TL;DR: This is for local models only. As I wrote in the other post: I use llama.cpp (and/or ik_llama.cpp), llama-swap, Open Webui (in my... | 2025-06-11T20:12:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l92vr0/as_some_people_asked_me_to_share_some_details/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92vr0 | false | null | t3_1l92vr0 | /r/LocalLLaMA/comments/1l92vr0/as_some_people_asked_me_to_share_some_details/ | false | false | self | 47 | null |
GPU optimization for llama 3.1 8b | 2 | Hi, I am new to this AI/ML field. I am trying to use Llama 3.1 8B for entity recognition from bank transactions. The model has to process at least 2000 transactions, so what is the best way to get full utilization of the GPU? We have a powerful GPU for production. So currently I am sending multiple requests to the model using the Ollama server opt... | 2025-06-11T20:06:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l92py6/gpu_optimization_for_llama_31_8b/ | nimmalachaitanya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92py6 | false | null | t3_1l92py6 | /r/LocalLLaMA/comments/1l92py6/gpu_optimization_for_llama_31_8b/ | false | false | self | 2 | null |
NSFW Chatbot Translation Nightmares: Need Error-Free Non-English Models! | 1 | [removed] | 2025-06-11T20:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l92lgb/nsfw_chatbot_translation_nightmares_need/ | No_Kale_9828 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92lgb | false | null | t3_1l92lgb | /r/LocalLLaMA/comments/1l92lgb/nsfw_chatbot_translation_nightmares_need/ | false | false | nsfw | 1 | null |
How to decide on a model? | 1 | I’m really new to this! I’m making my first local model now and am trying to pick a model that works for me. I’ve seen a few posts here trying to decode all the various things in model names, but it seems like the general consensus is that there isn’t much rhyme or reason to it. Is there a repository somewhere of all t... | 2025-06-11T19:59:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l92jl9/how_to_decide_on_a_model/ | Loud-Bake-2740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l92jl9 | false | null | t3_1l92jl9 | /r/LocalLLaMA/comments/1l92jl9/how_to_decide_on_a_model/ | false | false | self | 1 | null |
Are we hobbyists lagging behind? | 41 | It almost feels like every local project is a variation of another project or an implementation of a project from the big orgs, e.g., NotebookLM, deepsearch, coding agents, etc.
Felt like a year or two ago, hobbyists were also helping to seriously push the envelope. How do we get back to relevancy and being impactful... | 2025-06-11T19:45:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l926uy/are_we_hobbyists_lagging_behind/ | segmond | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l926uy | false | null | t3_1l926uy | /r/LocalLLaMA/comments/1l926uy/are_we_hobbyists_lagging_behind/ | false | false | self | 41 | null |
What AI industry events are you attending? | 0 | Hi everyone!
We're curious to know what types of AI-focused events you all enjoy attending or would love to see more of in the future. Are there any you're more interested in such as:
* Tech conferences
* Hackathons
* Meetups
* Workshops
* Online webinars
* Something else?
If you have any tips on how to get the most... | 2025-06-11T19:42:38 | https://www.reddit.com/r/LocalLLaMA/comments/1l924e6/what_ai_industry_events_are_you_attending/ | MetaforDevelopers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l924e6 | false | null | t3_1l924e6 | /r/LocalLLaMA/comments/1l924e6/what_ai_industry_events_are_you_attending/ | false | false | self | 0 | null |
3x3090 Build | 1 | [removed] | 2025-06-11T19:09:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l91afk/3x3090_build/ | bitrecs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l91afk | false | null | t3_1l91afk | /r/LocalLLaMA/comments/1l91afk/3x3090_build/ | false | false | self | 1 | null |
Can we RL/GRPO a language model to hack its own brain by rewarding for specific measurements inside the transformer architecture during inference? | 4 | Hey folks, very simple concept. Basically you are doing reinforcement learning, so that means you might have a batch of 16 rollouts per step. On each of these rollouts, you are tracking measurements over the states of computation inside the LLM, for example the variance of its hidden states or activations during inferenc... | 2025-06-11T18:43:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l90m7g/can_we_rlgrpo_a_language_model_to_hack_its_own/ | ryunuck | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l90m7g | false | null | t3_1l90m7g | /r/LocalLLaMA/comments/1l90m7g/can_we_rlgrpo_a_language_model_to_hack_its_own/ | false | false | self | 4 | null |
More Intel Arc Pro B60 Photos | 1 | [removed] | 2025-06-11T18:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l900iy/more_intel_arc_pro_b60_photos/ | Dr_Karminski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l900iy | false | null | t3_1l900iy | /r/LocalLLaMA/comments/1l900iy/more_intel_arc_pro_b60_photos/ | false | false | 1 | null | |
Disney and Universal sue AI image company Midjourney for unlicensed use of Star Wars, The Simpsons and more | 401 | This is big! When Disney gets involved, shit is about to hit the fan.
If they come after Midjourney, then expect other AI labs trained on similar training data to be hit soon.
What do you think? | 2025-06-11T18:11:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8zssy/disney_and_universal_sue_ai_image_company/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8zssy | false | null | t3_1l8zssy | /r/LocalLLaMA/comments/1l8zssy/disney_and_universal_sue_ai_image_company/ | false | false | self | 401 | null |
PSA: GPU Host Interface board power cables can melt, too. | 0 | My 3090s fell off the bus and stopped working after 3 days of sustained load this morning.
Investigating revealed they burned right through this (cheap) SATA power splitter powering my SFF-8654 interface boards.
I thought those lines were used just for the 3.3v PCIe bucks (which is true for my SFF-8611 boards!) ... | 2025-06-11T17:49:30 | https://www.reddit.com/gallery/1l8z89y | kryptkpr | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l8z89y | false | null | t3_1l8z89y | /r/LocalLLaMA/comments/1l8z89y/psa_gpu_host_interface_board_power_cables_can/ | false | false |  | 0 | {'enabled': True, 'images': [{'id': 'RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/RwsfeDYD6-Gznk9QyPSRJiN-8yplblUEUDvxaWDPg-o.jpeg?width=108&crop=smart&auto=webp&s=52557638e0ec626d081bd88928d4cc6866af6369', 'width': 108}, {'height': 286, 'url': '...
deepseek v3 0324 vs deepseek r1 0528 for agentic coding? (also agent-zero troubleshoot) | 1 | [removed] | 2025-06-11T17:48:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l8z6xc/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | sans_the_comicc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8z6xc | false | null | t3_1l8z6xc | /r/LocalLLaMA/comments/1l8z6xc/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | false | false | self | 1 | null |
deepseek v3 0324 vs deepseek r1 0528 for agentic coding? (also agent-zero troubleshoot) | 1 | [removed] | 2025-06-11T17:46:40 | https://www.reddit.com/r/LocalLLaMA/comments/1l8z5ma/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | sans_the_comicc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8z5ma | false | null | t3_1l8z5ma | /r/LocalLLaMA/comments/1l8z5ma/deepseek_v3_0324_vs_deepseek_r1_0528_for_agentic/ | false | false | self | 1 | null |
I built a local-first AI video editor | 1 | [removed] | 2025-06-11T17:29:22 | https://youtu.be/0YBcYGmYV4c | ExtremeKangaroo5437 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1l8ypjw | false | {'oembed': {'author_name': 'Gowrav Vishwakarma', 'author_url': 'https://www.youtube.com/@gowravvishwakarma', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/0YBcYGmYV4c?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-me... | t3_1l8ypjw | /r/LocalLLaMA/comments/1l8ypjw/i_built_a_localfirst_ai_video_editor/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eLLGv4TPae01UCRCPaHe0jr4Vo8Sn_L68frbEmgceVA.jpeg?width=108&crop=smart&auto=webp&s=145122f49289487526d65dc5f4bb55e5ba9c3ef0', 'width': 108}, {'height': 162, 'url': '... | |
n8n vs claudecode | 1 | [removed] | 2025-06-11T17:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l8yfvz/n8n_vs_claudecode/ | pr0scient | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8yfvz | false | null | t3_1l8yfvz | /r/LocalLLaMA/comments/1l8yfvz/n8n_vs_claudecode/ | false | false | self | 1 | null |
Made a medical document anonymizer using ollama! | 1 | [removed] | 2025-06-11T17:08:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l8y68t/made_a_medical_document_anonymizer_using_ollama/ | ramu9703 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8y68t | false | null | t3_1l8y68t | /r/LocalLLaMA/comments/1l8y68t/made_a_medical_document_anonymizer_using_ollama/ | false | false | self | 1 | null |
Anyone built a reasoning-tuned Gemma 3 variant (GRPO or MoT)? | 1 | [removed] | 2025-06-11T17:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l8y3u9/anyone_built_a_reasoningtuned_gemma_3_variant/ | Thatisverytrue54321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8y3u9 | false | null | t3_1l8y3u9 | /r/LocalLLaMA/comments/1l8y3u9/anyone_built_a_reasoningtuned_gemma_3_variant/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nl-1QG5zRL6EJ6d1x2O0sStynFkKL24DsALXGU8DT-I.png?width=108&crop=smart&auto=webp&s=c60b70012e6e71b439a033b77c4844e3340c9a13', 'width': 108}, {'height': 116, 'url': 'h... |
Looking for a lightweight front-end like llama-server | 1 | I really like llama-server, but it lacks some features like continuing generation, editing the model's messages, etc. And it would be better if it stored conversations in JSON files, but I don't want something like Open WebUI; it's overkill and bloated for me. | 2025-06-11T16:57:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l8xv7i/looking_for_a_lightweight_frontend_like/ | flatminded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8xv7i | false | null | t3_1l8xv7i | /r/LocalLLaMA/comments/1l8xv7i/looking_for_a_lightweight_frontend_like/ | false | false | self | 1 | null |
MCP overview (15min) | 1 | generated from the docs on [modelcontextprotocol.io](http://modelcontextprotocol.io) | 2025-06-11T16:41:34 | https://v.redd.it/tnogxadjtb6f1 | josh-r-meyer | /r/LocalLLaMA/comments/1l8xgxi/mcp_overview_15min/ | 1970-01-01T00:00:00 | 0 | {} | 1l8xgxi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tnogxadjtb6f1/DASHPlaylist.mpd?a=1752381702%2CMTA3NTZjMGFhYTNkMDJhMjI0M2Y2NzFiNzk3YzA5ZjFiZmQzMTUwOTBhMDg3MzMwMmZiMTBjZDMwMzU0ZTQxYQ%3D%3D&v=1&f=sd', 'duration': 835, 'fallback_url': 'https://v.redd.it/tnogxadjtb6f1/DASH_1080.mp4?source=fallback', '... | t3_1l8xgxi | /r/LocalLLaMA/comments/1l8xgxi/mcp_overview_15min/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/M3M4dHg4ZGp0YjZmMfQskq0GSv-UBGx4HIRStWuy9smxAWDk8euINcmpqFz8.png?width=108&crop=smart&format=pjpg&auto=webp&s=620092ddc95d5eae3e10a3fbb17d59fadd581... | |
Qwen 2.5 3B VL performance dropped post fine tuning. | 12 | Beginner here - please help me out.
I was asked to fine tune a Qwen 2.5 3B VL for the following task:
Given an image taken during an online test, check if the candidate is cheating or not. A candidate is considered to be cheating if there’s a mobile phone, headphones, a crowd around, etc.
I was able to fine tune Qwen... | 2025-06-11T16:22:49 | https://www.reddit.com/r/LocalLLaMA/comments/1l8x080/qwen_25_3b_vl_performance_dropped_post_fine_tuning/ | chitrabhat4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8x080 | false | null | t3_1l8x080 | /r/LocalLLaMA/comments/1l8x080/qwen_25_3b_vl_performance_dropped_post_fine_tuning/ | false | false | self | 12 | null |
Any easy local configuration that can find typos and grammatical/punctuation errors in a pdf? | 2 | Hi,
Basically I would like to setup an AI that can look for things like "better better", "making make", "evoution" ... etc in a PDF. and annotate them, so that I can fix them!
I thought about setting up a RAG with Llama 3.2, but I'm not sure if that's the best idea
(I could also supply the AI with .tex files that ge... | 2025-06-11T16:17:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l8wvvr/any_easy_local_configuration_that_can_find_typos/ | Super-Government6796 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8wvvr | false | null | t3_1l8wvvr | /r/LocalLLaMA/comments/1l8wvvr/any_easy_local_configuration_that_can_find_typos/ | false | false | self | 2 | null |
[Tool] rvn-convert: OSS Rust-based SafeTensors to GGUF v3 converter (single-shard, fast, no Python) | 33 | Afternoon,
I built a tool out of frustration after losing hours to failed model conversions. (Seriously, launching a Python tool just to see a failure after 159 tensors and 3 hours.)
`rvn-convert` is a small Rust utility that memory-maps a HuggingFace `safetensors` file and writes a clean, llama.cpp-compatible `.gguf` fi... | 2025-06-11T16:15:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l8wty9/tool_rvnconvert_oss_rustbased_safetensors_to_gguf/ | rvnllm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8wty9 | false | null | t3_1l8wty9 | /r/LocalLLaMA/comments/1l8wty9/tool_rvnconvert_oss_rustbased_safetensors_to_gguf/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DMQDI8vFlL9Gsgmf_pq6iToq5QkaGErQaZ02V2Zdi0M.png?width=108&crop=smart&auto=webp&s=f08de7056ab47428d83fd53f9b65bd01e6a091ad', 'width': 108}, {'height': 108, 'url': 'h... |
Would you use an open source AI Voice Assistant Keychain, configurable to use local or frontier models? | 0 | Would you use an AI assistant keychain with press-to-talk to an LLM (with Wi-Fi/cellular integration)?
You can control what tools the AI has available, select your LLM, and use a companion app to manage transcripts.
Siri, Alexa, and Google are closed and difficult to customize. They own your data and you have no direc... | 2025-06-11T16:15:49 | zuluana | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8wtxf | false | null | t3_1l8wtxf | /r/LocalLLaMA/comments/1l8wtxf/would_you_use_an_open_source_ai_voice_assistant/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'X27XrlpIdix0M0HpUJjwnVYuNzQK7By9FXZM6aPp63w', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/7q0w3276pb6f1.jpeg?width=108&crop=smart&auto=webp&s=2949cce14b0e3f2a2ca2e1ef1fcdd33db932b7c8', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/7q0w3276pb6f1.j... | ||
Perception Language Models (PLM): 1B, 3B, and 8B VLMs with code and data | 29 | Very cool resource if you're working in the VLM space!
* Models: [https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498](https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498)
* Code: [https://github.com/facebookresearch/perception\_models](https://github.com/f... | 2025-06-11T16:07:25 | https://huggingface.co/collections/facebook/perception-lm-67f9783f171948c383ee7498 | entsnack | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l8wm8v | false | null | t3_1l8wm8v | /r/LocalLLaMA/comments/1l8wm8v/perception_language_models_plm_1b_3b_and_8b_vlms/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'n4hFtoyE8VyE50topwUGDQMo2PXOFF4-oMSKgCANc4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kRxHl4QHoi7JxOqoDPbjwpU5kW4q0t0a7pYH8RdzKEQ.jpg?width=108&crop=smart&auto=webp&s=9271d8bf5bcdc029f80d41a794b775e1c953f42c', 'width': 108}, {'height': 116, 'url': 'h... | |
Which model should I use on my macbook m4? | 0 | I recently got a MacBook Air M4 and upgraded the RAM to 32 GB
I am not an expert, nor do I have a technical background in web development, but I am quite a curious mind and was wondering which model you think I can run best for code generation for web app development? Thanks! | 2025-06-11T15:43:23 | https://www.reddit.com/r/LocalLLaMA/comments/1l8w04v/which_model_should_i_use_on_my_macbook_m4/ | Sergioramos0447 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8w04v | false | null | t3_1l8w04v | /r/LocalLLaMA/comments/1l8w04v/which_model_should_i_use_on_my_macbook_m4/ | false | false | self | 0 | null |
Looking to hire someone to teach me LLM finetuning / LoRa training | 1 | [removed] | 2025-06-11T15:43:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l8w00g/looking_to_hire_someone_to_teach_me_llm/ | contentedpoverty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8w00g | false | null | t3_1l8w00g | /r/LocalLLaMA/comments/1l8w00g/looking_to_hire_someone_to_teach_me_llm/ | false | false | self | 1 | null |
What is the current state of llama.cpp rpc-server? | 13 | For context, I serendipitously got an extra x99 motherboard, and I have a couple spare GPUs available to use with it.
I'm curious, given the current state of llama.cpp rpc, if it's worth buying the CPU, cooler, etc. in order to run this board as an RPC node in llama.cpp?
I tried looking for information online, but co... | 2025-06-11T15:42:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l8vziy/what_is_the_current_state_of_llamacpp_rpcserver/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8vziy | false | null | t3_1l8vziy | /r/LocalLLaMA/comments/1l8vziy/what_is_the_current_state_of_llamacpp_rpcserver/ | false | false | self | 13 | null |
M4 Max 128GB v NVIDIA DGX Spark? | 1 | [removed] | 2025-06-11T15:22:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l8vhdl/m4_max_128gb_v_nvidia_dgx_spark/ | OptimisticSwitcheroo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8vhdl | false | null | t3_1l8vhdl | /r/LocalLLaMA/comments/1l8vhdl/m4_max_128gb_v_nvidia_dgx_spark/ | false | false | self | 1 | null |
From "LangGraph is trash" to "pip install langgraph": A Stockholm Syndrome Story | 1 | [removed] | 2025-06-11T15:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l8uzrk/from_langgraph_is_trash_to_pip_install_langgraph/ | FailingUpAllDay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8uzrk | false | null | t3_1l8uzrk | /r/LocalLLaMA/comments/1l8uzrk/from_langgraph_is_trash_to_pip_install_langgraph/ | false | false | self | 1 | null |
Meta releases V-JEPA 2, the first world model trained on video | 280 | 2025-06-11T14:48:35 | https://huggingface.co/collections/facebook/v-jepa-2-6841bad8413014e185b497a6 | juanviera23 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l8umf2 | false | null | t3_1l8umf2 | /r/LocalLLaMA/comments/1l8umf2/meta_releases_vjepa_2_the_first_world_model/ | false | false | 280 | {'enabled': False, 'images': [{'id': '7pBvBFe8bd_NQplAFgrEpaIQ63MNMr2sBmAuWlM0Xes', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XYCW87FCFGR0wI2hYDorldwWOlBC0pjIIfGLZhngZC4.jpg?width=108&crop=smart&auto=webp&s=2a03f4b14f6d80535555c4c68506482525daf741', 'width': 108}, {'height': 116, 'url': 'h... | ||
AI Deep Research Explained | 39 | Probably a lot of you are using deep research on ChatGPT, Perplexity, or Grok to get better and more comprehensive answers to your questions, or data you want to investigate.
But did you ever stop to think how it actually works behind the scenes?
In my latest blog post, I break down the system-level mechanics behind ... | 2025-06-11T14:32:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l8u7wy/ai_deep_research_explained/ | Nir777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8u7wy | false | null | t3_1l8u7wy | /r/LocalLLaMA/comments/1l8u7wy/ai_deep_research_explained/ | false | false | self | 39 | null |
NeuralCodecs Adds Speech: Dia TTS in C# .NET | 16 | Includes full Dia support with voice cloning and custom dynamic speed correction to solve Dia's speed-up issues on longer prompts.
Performance-wise, we miss out on the benefits of python's torch.compile, but still achieve slightly better tokens/s than the non-compiled Python in my setup (Windows/RTX 3090). Would love ... | 2025-06-11T13:57:03 | https://github.com/DillionLowry/NeuralCodecs | Knehm | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l8tcle | false | null | t3_1l8tcle | /r/LocalLLaMA/comments/1l8tcle/neuralcodecs_adds_speech_dia_tts_in_c_net/ | false | false | 16 | {'enabled': False, 'images': [{'id': 'D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D5xFS_6431ks4R6vzrvuvQxupuS0sM74zH0PT3IPLEE.png?width=108&crop=smart&auto=webp&s=ccc6f2cc5d0e7d83263ead97b2d5f5e6021ca7e0', 'width': 108}, {'height': 108, 'url': 'h... | |
Anybody know what agent UI framework this is (it has a terminal emulator for letting the LLM execute commands) | 0 | More or less title.
Anybody know an existing agent UI framework where one could let an LLM (local or via API) execute commands in a shell guided by the prompt on the right? | 2025-06-11T13:54:48 | Birder | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8tap1 | false | null | t3_1l8tap1 | /r/LocalLLaMA/comments/1l8tap1/anybody_know_what_agent_ui_framwork_this_is_it/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'ygcCCuiqmUmeSWfdJzqqvGRmandTvGm1mS61b3VowKQ', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png?width=108&crop=smart&auto=webp&s=197b313d9261fee1f989da9f05e3a85879f97384', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/dpqc2lnuza6f1.png... | ||
Huge VRAM usage with VLLM | 1 | Hi, I'm trying to make vLLM run on my local machine (Windows 11 laptop with a 4070 and 8 GB of VRAM).
My goal is to use vision models. People said that GGUF versions of the models were bad for vision, and I can't run non-GGUF models with Ollama, so I tried vLLM.
After a few days of trying with an old Docker repo, and a ... | 2025-06-11T13:52:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8t8n8/huge_vram_usage_with_vllm/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8t8n8 | false | null | t3_1l8t8n8 | /r/LocalLLaMA/comments/1l8t8n8/huge_vram_usage_with_vllm/ | false | false | self | 1 | null |
Recommendations for Models for Tool Usage | 5 | I’ve built a small app to experiment with mcp. I integrated about 2 dozen tools that my team uses for data processing pipelines. It works really well. The tool call success rate is probably over 95%. I built it using the OpenAI API. Ideally I’d like to host everything locally without changing my code, just the Op... | 2025-06-11T13:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sv82/recommendations_for_models_for_tool_usage/ | Simusid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sv82 | false | null | t3_1l8sv82 | /r/LocalLLaMA/comments/1l8sv82/recommendations_for_models_for_tool_usage/ | false | false | self | 5 | null |
THE HELPING RELATIONSHIP IN SOCIAL WORK | 1 | [removed] | 2025-06-11T13:35:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sunm/la_relación_de_ayuda_en_trabajo_social/ | PersonalCounty3622 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sunm | false | null | t3_1l8sunm | /r/LocalLLaMA/comments/1l8sunm/la_relación_de_ayuda_en_trabajo_social/ | false | false | self | 1 | null
llama-server vs llama python binding | 2 | I am trying to build some applications which include RAG
The llama.cpp Python binding installs and runs the CPU build instead of using the build I made.
(couldn't configure this to use my build)
Using llama-server makes sense, but I couldn't figure out how to use my own chat template or how to load the embedding model.
Any tip... | 2025-06-11T13:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l8sh4m/llamaserver_vs_llama_python_binding/ | daxxy_1125 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8sh4m | false | null | t3_1l8sh4m | /r/LocalLLaMA/comments/1l8sh4m/llamaserver_vs_llama_python_binding/ | false | false | self | 2 | null |
Sarvam AI (indian startup) is likely pulling of massive "download farming" in HF | 1 | [removed] | 2025-06-11T13:00:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l8s1p7/sarvam_ai_indian_startup_is_likely_pulling_of/ | Ortho-BenzoPhenone | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8s1p7 | false | null | t3_1l8s1p7 | /r/LocalLLaMA/comments/1l8s1p7/sarvam_ai_indian_startup_is_likely_pulling_of/ | false | false | 1 | null | |
An app to match specs to LLM | 2 | I get a lot of questions from people irl about which models to run locally on a persons spec. Frankly, I'd love to point them to an app that makes the recommendation based on an inputted spec. Does that app exist yet or do I have to build one? (Don't want to re-invent the wheel...) | 2025-06-11T12:58:33 | https://www.reddit.com/r/LocalLLaMA/comments/1l8s0ka/an_app_to_match_specs_to_llm/ | jrf_1973 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8s0ka | false | null | t3_1l8s0ka | /r/LocalLLaMA/comments/1l8s0ka/an_app_to_match_specs_to_llm/ | false | false | self | 2 | null |
I found and analyzed 200k Construction Jobs using LLaMA | 1 | [removed] | 2025-06-11T12:44:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l8rq3k/i_find_and_analize_200k_construction_jobs_using/ | Separate-Breath2267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8rq3k | false | null | t3_1l8rq3k | /r/LocalLLaMA/comments/1l8rq3k/i_find_and_analize_200k_construction_jobs_using/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'h...
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T12:06:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qxur/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qxur | false | null | t3_1l8qxur | /r/LocalLLaMA/comments/1l8qxur/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | self | 1 | null |
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T12:01:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qu82/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qu82 | false | null | t3_1l8qu82 | /r/LocalLLaMA/comments/1l8qu82/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | self | 1 | null |
Can an AI agent analyse spreadsheets locally without exposing your data? | 1 | [removed] | 2025-06-11T11:56:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l8qq8u/can_an_ai_agent_analyse_spreadsheets_locally/ | StrictAd876 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8qq8u | false | null | t3_1l8qq8u | /r/LocalLLaMA/comments/1l8qq8u/can_an_ai_agent_analyse_spreadsheets_locally/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'USf2kP3eKTBj6g-52cbURnnHi7gB4YX2-nB48uBwCnA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/TnSEM6z35Mj98X6zjgDOFMEVNvl5C3_sYeeZW-BreaM.jpg?width=108&crop=smart&auto=webp&s=76cc045b0789ecd0db0a59879378f60a85e96c20', 'width': 108}, {'height': 216, 'url': '... | |
MNN TaoAvatar: run 3d avatar offline, Android app by alibaba mnn team | 124 | https://github.com/alibaba/MNN/blob/master/apps/Android/Mnn3dAvatar/README.md#version-001 | 2025-06-11T11:43:01 | https://v.redd.it/65vyq2fhca6f1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8qh2a | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/65vyq2fhca6f1/DASHPlaylist.mpd?a=1752234194%2CYjViZDY4MDRkNzU0MGMyYzc2YTdkOGZmMjZmYWFmODFmMTMzMjkzOTQ2MWE2MGIzZDE3NTgzY2I2MmUxOGExZA%3D%3D&v=1&f=sd', 'duration': 33, 'fallback_url': 'https://v.redd.it/65vyq2fhca6f1/DASH_720.mp4?source=fallback', 'ha... | t3_1l8qh2a | /r/LocalLLaMA/comments/1l8qh2a/mnn_taoavatar_run_3d_avatar_offline_android_app/ | false | false | 124 | {'enabled': False, 'images': [{'id': 'MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MXhndzBsZmhjYTZmMcCLIwnOvaDL_InTTSsx50yogh0oEgldtT-tbB2eca5E.png?width=108&crop=smart&format=pjpg&auto=webp&s=ebc62b33d86e75ae58896af589275914b232... | |
Testing Jamba 1.6 near the 256K context limit? | 1 | [removed] | 2025-06-11T11:24:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l8q4x3/testing_jamba_16_near_the_256k_context_limit/ | zennaxxarion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8q4x3 | false | null | t3_1l8q4x3 | /r/LocalLLaMA/comments/1l8q4x3/testing_jamba_16_near_the_256k_context_limit/ | false | false | self | 1 | null |
Which model & prompts should I use for this OCR work? | 2 | I want to run OCR on an old Japanese book and have run into the following problems:
1. The book is stained and some of the words are blurred.
2. The text is written vertically, and I would like the final output in normal horizontal order.
3. There are annotations above some characters and I would like to capture those... | 2025-06-11T11:08:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l8pu7y/which_model_prompts_i_should_use_for_this_ocr_work/ | lemuever17 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8pu7y | false | null | t3_1l8pu7y | /r/LocalLLaMA/comments/1l8pu7y/which_model_prompts_i_should_use_for_this_ocr_work/ | false | false | self | 2 | null |
I finally got rid of Ollama! | 549 | About a month ago, I decided to move away from Ollama (while still using Open WebUI as frontend), and I actually did it faster and easier than I thought!
Since then, my setup has been (on both Linux and Windows):
llama.cpp or ik_llama.cpp for inference
llama-swap to load/unload/auto-unload models (have a big config... | 2025-06-11T10:42:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8pem0/i_finally_got_rid_of_ollama/ | relmny | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8pem0 | false | null | t3_1l8pem0 | /r/LocalLLaMA/comments/1l8pem0/i_finally_got_rid_of_ollama/ | false | false | self | 549 | null |
Have access to 2 x A100s - wish to train something that's beneficial to the community | 1 | [removed] | 2025-06-11T10:28:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p6bs/have_access_to_2_x_a100s_wish_to_train_something/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p6bs | false | null | t3_1l8p6bs | /r/LocalLLaMA/comments/1l8p6bs/have_access_to_2_x_a100s_wish_to_train_something/ | false | false | self | 1 | null |
Have access to 2 x A100s - what multimodal LLM should I train that's beneficial to the community here? | 1 | [removed] | 2025-06-11T10:24:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p4cs/have_access_to_2_x_a100s_what_multimodal_llm/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p4cs | false | null | t3_1l8p4cs | /r/LocalLLaMA/comments/1l8p4cs/have_access_to_2_x_a100s_what_multimodal_llm/ | false | false | self | 1 | null |
Have access to 2 x A100s - what multimodal LLM should I train that's beneficial to the community here? | 1 | [removed] | 2025-06-11T10:22:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8p364/have_access_to_2_x_a100s_what_multimodal_llm/ | fullgoopy_alchemist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8p364 | false | null | t3_1l8p364 | /r/LocalLLaMA/comments/1l8p364/have_access_to_2_x_a100s_what_multimodal_llm/ | false | false | self | 1 | null |
Where is the tutorial/guide to run locally? I look at the sidebar, and it's just filter by 'tutorial | guide' flair. | 1 | [removed] | 2025-06-11T10:15:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oynz/where_is_the_tutorialguide_to_run_locally_i_look/ | jinnyjuice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oynz | false | null | t3_1l8oynz | /r/LocalLLaMA/comments/1l8oynz/where_is_the_tutorialguide_to_run_locally_i_look/ | false | false | self | 1 | null |
Image vector embedding | 1 | [removed] | 2025-06-11T09:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ojfc/image_vector_embedding/ | thorf_44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ojfc | false | null | t3_1l8ojfc | /r/LocalLLaMA/comments/1l8ojfc/image_vector_embedding/ | false | false | self | 1 | null |
It should be possible to create a "dummy GPU" hardware device that does nothing but host VRAM for a real card via NVLink | 1 | [removed] | 2025-06-11T09:48:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ojcr/it_should_be_possible_to_create_a_dummy_gpu/ | Antique_Savings7249 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ojcr | false | null | t3_1l8ojcr | /r/LocalLLaMA/comments/1l8ojcr/it_should_be_possible_to_create_a_dummy_gpu/ | false | false | self | 1 | null |
Looking for a great rp model for independent characters | 1 | [removed] | 2025-06-11T09:43:07 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ogi7/looking_for_a_great_rp_model_for_independent/ | Past-Deal5045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ogi7 | false | null | t3_1l8ogi7 | /r/LocalLLaMA/comments/1l8ogi7/looking_for_a_great_rp_model_for_independent/ | false | false | self | 1 | null |
Looking for a great rp model for independent characters | 1 | [removed] | 2025-06-11T09:41:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ofi3/looking_for_a_great_rp_model_for_independent/ | Past-Deal5045 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ofi3 | false | null | t3_1l8ofi3 | /r/LocalLLaMA/comments/1l8ofi3/looking_for_a_great_rp_model_for_independent/ | false | false | self | 1 | null |
Altman on open weight 🤔🤔 | 200 | 2025-06-11T09:38:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oe8g/altman_on_open_weight/ | Mean-Neighborhood-42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oe8g | false | null | t3_1l8oe8g | /r/LocalLLaMA/comments/1l8oe8g/altman_on_open_weight/ | false | false | 200 | {'enabled': False, 'images': [{'id': 'MKBlag_qUu9yYrp0owyTwlx7WLmGYYg_LjKIGYqQ8xg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/GVhfh_dphC644v-zed5GXAiqXgODxyxxMhlNyvcKd5w.jpg?width=108&crop=smart&auto=webp&s=6da23f937afc76fa32caeb0f34e23d73d37134ed', 'width': 108}], 'source': {'height': 20... | ||
Automated RAG systems which know the best way to index an arbitrary document? | 1 | [removed] | 2025-06-11T09:31:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l8oaqh/automated_rag_systems_which_know_the_best_way_to/ | anythingjust__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8oaqh | false | null | t3_1l8oaqh | /r/LocalLLaMA/comments/1l8oaqh/automated_rag_systems_which_know_the_best_way_to/ | false | false | self | 1 | null |
is there a local singing tts? we can use today? | 1 | [removed] | 2025-06-11T09:06:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nx6m/is_there_a_local_singing_tts_we_can_use_today/ | Exact-Yesterday-992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nx6m | false | null | t3_1l8nx6m | /r/LocalLLaMA/comments/1l8nx6m/is_there_a_local_singing_tts_we_can_use_today/ | false | false | self | 1 | null |
Why AI augmentation beats AI automation | 5 | The real power isn't in AI replacing humans - it's in the combination. Think about it like this: a drummer doesn't lose their creativity when they use a drum machine. They just get more tools to express their vision. Same thing's happening with content creation right now.
Recent data backs this up - LinkedIn reported ... | 2025-06-11T08:55:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nrf5/why_ai_augmentation_beats_ai_automation/ | Necessary-Tap5971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nrf5 | false | null | t3_1l8nrf5 | /r/LocalLLaMA/comments/1l8nrf5/why_ai_augmentation_beats_ai_automation/ | false | false | self | 5 | null |
Quick Ollama bench 9070XT vs 4060Ti | 0 | Ran aidatatools/ollama-benchmark/ with custom model set. On gaming PCs in our house. Thought I'd share.
9070XT - NixOS-unstable, running via docker ollama-rocm
4060Ti - Windows 10
9070XT:
* **deepseek-r1:14b**: 42.58
* **gemma2:9b**: 56.64
* **llava:13b**: 57.89
* **llama3.1:8b**: 7... | 2025-06-11T08:40:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nje0/quick_ollama_bench_9070xt_vs_4060ti/ | Mysterious_Prune415 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nje0 | false | null | t3_1l8nje0 | /r/LocalLLaMA/comments/1l8nje0/quick_ollama_bench_9070xt_vs_4060ti/ | false | false | self | 0 | null
Supercharge Your API Integrations for LLMs: My Take on Dynamic Setup with YAML & MCP | 1 | [removed] | 2025-06-11T08:35:48 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ngxy/supercharge_your_api_integrations_for_llms_my/ | Raga_123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ngxy | false | null | t3_1l8ngxy | /r/LocalLLaMA/comments/1l8ngxy/supercharge_your_api_integrations_for_llms_my/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DRNc5j6NRGO-QAoDG4FL_idk8cHxJfV15ie_XR9vNuI', 'resolutions': [{'height': 133, 'url': 'https://external-preview.redd.it/PsTCl9xcdJmqlv10thhIDliPmE_ZERKkug9D3ucIs2U.jpg?width=108&crop=smart&auto=webp&s=3776380b6e474bb1f8ae861e266f510003ae9686', 'width': 108}, {'height': 267, 'url': '... |
Image captioning | 3 | Hi everyone! I am working on a project that requires detailed analysis of certain figures using an llm to describe them. I am getting okay performance with qwen vl 2.5 30b, but only if I use very specific prompting. Since I am dealing with a variety of different kinds figures I would like to use different prompts depen... | 2025-06-11T08:33:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nfop/image_captioning/ | 3oclockam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nfop | false | null | t3_1l8nfop | /r/LocalLLaMA/comments/1l8nfop/image_captioning/ | false | false | self | 3 | null |
Ollama vs mlx-lm | 1 | [removed] | 2025-06-11T08:28:39 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nd48/ollama_vs_mlxlm/ | Wooden_Living_4553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nd48 | false | null | t3_1l8nd48 | /r/LocalLLaMA/comments/1l8nd48/ollama_vs_mlxlm/ | false | false | self | 1 | null |
mlx-lm issue with GPU | 1 | [removed] | 2025-06-11T08:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l8nc75/mlxlm_issue_with_gpu/ | Wooden_Living_4553 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8nc75 | false | null | t3_1l8nc75 | /r/LocalLLaMA/comments/1l8nc75/mlxlm_issue_with_gpu/ | false | false | self | 1 | null |
At least Meta is better than this | 1 | 2025-06-11T08:23:36 | chimichanga_3 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8nahq | false | null | t3_1l8nahq | /r/LocalLLaMA/comments/1l8nahq/at_least_meta_is_better_than_this/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'TlZQqUhkDVNNGws_b4gTg5k7VdCzWBvYE45PLSpRYcQ', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.jpeg?width=108&crop=smart&auto=webp&s=e682a7db45c80e9c89c3fcbc6ecd69648a190764', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ozp0sr2xc96f1.j... | |||
[ Removed by Reddit ] | 1 | [removed] | 2025-06-11T08:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n8uz/removed_by_reddit/ | IkusaNakamura | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n8uz | false | null | t3_1l8n8uz | /r/LocalLLaMA/comments/1l8n8uz/removed_by_reddit/ | false | false | nsfw | 1 | null |
Llama.cpp has been ported on ShelfMC | 1 | [removed] | 2025-06-11T08:14:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n5zs/llamacpp_has_been_ported_on_shelfmc/ | ulianownw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n5zs | false | null | t3_1l8n5zs | /r/LocalLLaMA/comments/1l8n5zs/llamacpp_has_been_ported_on_shelfmc/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'sbiprA2BShK4BbvOgZ9xlD3vhgrZpIzevYsl69L3KOc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/oASb0K-qA08LWUe0F1yOF6qJfKVONDyiUwPV6UJ-Wjs.jpg?width=108&crop=smart&auto=webp&s=669249b367a0a138eb48352068db7a1e63cc24d3', 'width': 108}, {'height': 162, 'url': 'h... |
How to speed up Kokoro-TTS? | 1 | [removed] | 2025-06-11T08:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l8n5iy/how_to_speed_up_kokorotts/ | fungigamer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8n5iy | false | null | t3_1l8n5iy | /r/LocalLLaMA/comments/1l8n5iy/how_to_speed_up_kokorotts/ | false | false | self | 1 | null |
OpenSloth: Multi-GPU Unsloth Training with Sequence Packing | 1 | [removed] | 2025-06-11T07:49:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l8msd0/opensloth_multigpu_unsloth_training_with_sequence/ | Spirited_Vacation785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8msd0 | false | null | t3_1l8msd0 | /r/LocalLLaMA/comments/1l8msd0/opensloth_multigpu_unsloth_training_with_sequence/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'gEjkgJMaFQy2TO8DwlRHCAFsaqTHFrL5qN-kmfLOqgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Mqt1qsgzopwL_Av3gBadLC04QUjL_Ssl_DFRa72cVyU.jpg?width=108&crop=smart&auto=webp&s=d05ee090a5105f5c0bc9e4d190b62716bb435b41', 'width': 108}, {'height': 108, 'url': 'h... |
**OpenSloth: Multi-GPU Unsloth Training with Sequence Packing** | 1 | [removed] | 2025-06-11T07:47:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l8mrhf/opensloth_multigpu_unsloth_training_with_sequence/ | Spirited_Vacation785 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8mrhf | false | null | t3_1l8mrhf | /r/LocalLLaMA/comments/1l8mrhf/opensloth_multigpu_unsloth_training_with_sequence/ | false | false | self | 1 | null |
Tired of juggling llama.cpp instances? I built FlexLlama to run multiple models with one API | 1 | [removed] | 2025-06-11T07:22:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l8mdwe/tired_of_juggling_llamacpp_instances_i_built/ | yazoniak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8mdwe | false | null | t3_1l8mdwe | /r/LocalLLaMA/comments/1l8mdwe/tired_of_juggling_llamacpp_instances_i_built/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'vWE4QuTyF6L-LcDKzHydTwzJFbuW3jF4nE7CkhLBSxA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RtyioJQY43gkptAUptsrMVnTDYnCeODr0a7araa9UUs.jpg?width=108&crop=smart&auto=webp&s=fe64e448119be62e1e646d066ab20fc8af80a8a0', 'width': 108}, {'height': 108, 'url': 'h... |
What is the best set up for LLM and ai agent like crewai? | 1 | [removed] | 2025-06-11T06:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1l8lrjr/what_is_the_best_set_up_for_llm_and_ai_agent_like/ | Ill_Occasion_1537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8lrjr | false | null | t3_1l8lrjr | /r/LocalLLaMA/comments/1l8lrjr/what_is_the_best_set_up_for_llm_and_ai_agent_like/ | false | false | self | 1 | null |
NSFW image to text | 24 | Hi everyone,
I’m doing some research using disturbing images, and some of the images are being flagged as NSFW by OpenAI models and other models (e.g. Grok, Gemini, Claude).
Anyone have any pointers to local (or server) models (preferably with an API) with fewer filters that are more or less plug and play?
Thanks in a... | 2025-06-11T05:45:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l8kx53/nsfw_image_to_text/ | CarRepresentative843 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8kx53 | false | null | t3_1l8kx53 | /r/LocalLLaMA/comments/1l8kx53/nsfw_image_to_text/ | false | false | nsfw | 24 | null |
Eye catching topic | 1 | [removed] | 2025-06-11T05:10:17 | https://www.reddit.com/r/LocalLLaMA/comments/1l8kczo/eye_catching_topic/ | Zucchini_Klutzy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8kczo | false | null | t3_1l8kczo | /r/LocalLLaMA/comments/1l8kczo/eye_catching_topic/ | false | false | self | 1 | null |
Best OS for a local AI server | 1 | [removed] | 2025-06-11T05:05:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l8kaa6/best_os_for_a_local_ai_server/ | Impossible-Web-2782 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8kaa6 | false | null | t3_1l8kaa6 | /r/LocalLLaMA/comments/1l8kaa6/best_os_for_a_local_ai_server/ | false | false | self | 1 | null |
[Major Update] llmbasedos: Now Docker-first + bootable USB keys dropping soon | 1 | [removed] | 2025-06-11T03:39:38 | https://github.com/iluxu/llmbasedos | iluxu | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l8is7a | false | null | t3_1l8is7a | /r/LocalLLaMA/comments/1l8is7a/major_update_llmbasedos_now_dockerfirst_bootable/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'kB_Vi1_3pPRsyqzS1Yxtf7SHPxp3A7697x_Ykpj9E9o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/6zi9JfPwuwB-jmIB2j-2gdp6SuFZzhNzke_4CKW7VA8.jpg?width=108&crop=smart&auto=webp&s=49fe8a351350d91caf93790646c811c0ba4b9224', 'width': 108}, {'height': 108, 'url': 'h... | |
How do I make an LLM act more human. With imperfections, hesitation, natural pauses, shorter replies, etc.? | 49 | Hey all,
I've been trying to build a more human-like LLM. Not just smart, but emotionally and behaviorally human. I want it to **hesitate**, **think before responding**, sometimes reply in **shorter, more casual ways**, maybe **swear**, **joke**, or even get things a bit wrong like people do. Basically, feel like you... | 2025-06-11T03:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l8ieff/how_do_i_make_an_llm_act_more_human_with/ | PhraseProfessional54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8ieff | false | null | t3_1l8ieff | /r/LocalLLaMA/comments/1l8ieff/how_do_i_make_an_llm_act_more_human_with/ | false | false | self | 49 | null |
Why are there drastic differences between deepseek r1 models on pocketpal? | 0 | 2025-06-11T03:13:07 | johncenaraper | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8iahr | false | null | t3_1l8iahr | /r/LocalLLaMA/comments/1l8iahr/why_are_there_drastic_differences_between/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'iokZiZC4sDGW8xS1VfV2qJJlm_Rol3l87iEXHASen8Q', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/imv6t0wit76f1.jpeg?width=108&crop=smart&auto=webp&s=512cc06d96c3153034fc60223ea8741efaa01e8c', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/imv6t0wit76f1.j... | |||
Best local coding model | 1 | [removed] | 2025-06-11T03:03:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l8i3w1/best_local_coding_model/ | FlowgrammerCrew | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8i3w1 | false | null | t3_1l8i3w1 | /r/LocalLLaMA/comments/1l8i3w1/best_local_coding_model/ | false | false | self | 1 | null |
Recommended cloud machines for DeepSeek R1? | 3 | I know, I know, we're in LocalLlama, but hear me out.
Given that it's a bit tricky to run a small datacenter with enough latest-gen VRAM at home, I'm looking for the next best option. Are there any good and trusted options you use to run it in cloud?
(Note: I understand there are ways to run DeepSeek at home on chea... | 2025-06-11T02:42:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l8hp5t/recommended_cloud_machines_for_deepseek_r1/ | lakySK | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8hp5t | false | null | t3_1l8hp5t | /r/LocalLLaMA/comments/1l8hp5t/recommended_cloud_machines_for_deepseek_r1/ | false | false | self | 3 | null |
With an AI code execution agent, how should it approach sandboxing? | 2 | I'm working on an AI agent that can run and execute code. Currently the code (Python) is executed in a docker container with resource limits, and no direct filesystem access. The problem with this is that if I want to include specific tools or functions, (for instance, a module containing functions to send emails or ot... | 2025-06-11T02:20:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l8h9wa/with_an_ai_code_execution_agent_how_should_it/ | Pretend_Guava7322 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8h9wa | false | null | t3_1l8h9wa | /r/LocalLLaMA/comments/1l8h9wa/with_an_ai_code_execution_agent_how_should_it/ | false | false | self | 2 | null |
How does one get the new Qwen3 reranking models to work in llama.cpp? (GGUF) | 16 | The documentation isn’t great, and I haven’t been able to get it working with llama-server either. Anyone had any luck? | 2025-06-11T02:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l8h95q/how_does_one_get_the_new_qwen3_reranking_models/ | 42GOLDSTANDARD42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8h95q | false | null | t3_1l8h95q | /r/LocalLLaMA/comments/1l8h95q/how_does_one_get_the_new_qwen3_reranking_models/ | false | false | self | 16 | null |
🎙️ Looking for Beta Testers – Get 24 Hours of Free TTS Audio | 0 | I'm launching a new TTS (text-to-speech) service and I'm looking for a few early users to help test it out. If you're into AI voices, audio content, or just want to convert a lot of text to audio, this is a great chance to try it for free.
✅ Beta testers get **24 hours of audio generation** (no strings attached)
✅ S... | 2025-06-11T01:48:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l8gn0a/looking_for_beta_testers_get_24_hours_of_free_tts/ | mythicinfinity | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8gn0a | false | null | t3_1l8gn0a | /r/LocalLLaMA/comments/1l8gn0a/looking_for_beta_testers_get_24_hours_of_free_tts/ | false | false | self | 0 | null |
Meta to pay nearly $15 billion for Scale AI stake, The Information reports | 96 | Meta’s investment in Scale AI—reportedly valued between $14 billion and $15 billion for a 49% stake—signals a pivotal shift in the tech giant’s artificial intelligence strategy and has broad implications for the AI industry, Meta’s competitive position, and the broader landscape of AI infrastructure[3](https://www.wash... | 2025-06-11T01:38:51 | https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/ | Vatnik_Annihilator | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1l8gg51 | false | null | t3_1l8gg51 | /r/LocalLLaMA/comments/1l8gg51/meta_to_pay_nearly_15_billion_for_scale_ai_stake/ | false | false | 96 | {'enabled': False, 'images': [{'id': 'U-NXckJd-ahQHc3V5x4ph9oZyPpjG9eIkfbSC_aHu8I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=108&crop=smart&auto=webp&s=2066895ce3eb7802d4c587c8310ad283ca79c924', 'width': 108}, {'height': 113, 'url': 'h... | |
You can chat with iOS’ local LLM on iOS 26 | 1 | [removed] | 2025-06-11T01:35:27 | Weak_Tie1467 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l8gdqt | false | null | t3_1l8gdqt | /r/LocalLLaMA/comments/1l8gdqt/you_can_chat_with_ios_local_llm_on_ios_26/ | false | false | 1 | {'enabled': True, 'images': [{'id': '9VcLUY1pPKjavQr_rA-GBE5Ub4QcJY5cUizyN2ISTvs', 'resolutions': [{'height': 92, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jpeg?width=108&crop=smart&auto=webp&s=394f81be06440ec61a5b30dcde7abfb537a75df1', 'width': 108}, {'height': 184, 'url': 'https://preview.redd.it/4qitb3n3c76f1.jp... | ||
Meta to pay nearly $15 billion for Scale AI stake, The Information reports | 1 | June 10 (Reuters) - Meta Platforms [(META.O)](https://www.reuters.com/markets/companies/META.O)[, opens new tab](https://www.reuters.com/markets/companies/META.O) has agreed to take a 49% stake in artificial intelligence startup Scale AI for $14.8 billion, The Information reported on Tuesday, citing two people familiar... | 2025-06-11T01:30:21 | https://www.reuters.com/business/meta-pay-nearly-15-billion-scale-ai-stake-information-reports-2025-06-10/ | Vatnik_Annihilator | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1l8ga2w | false | null | t3_1l8ga2w | /r/LocalLLaMA/comments/1l8ga2w/meta_to_pay_nearly_15_billion_for_scale_ai_stake/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'U-NXckJd-ahQHc3V5x4ph9oZyPpjG9eIkfbSC_aHu8I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RWnBj-m9OL2YChixPfK99gwnogICJmHOW0j8MrU__kM.jpg?width=108&crop=smart&auto=webp&s=2066895ce3eb7802d4c587c8310ad283ca79c924', 'width': 108}, {'height': 113, 'url': 'h... | |
venice.ai vs ollama on server | 0 | I have ollama installed on a vps. I'm also looking at [venice.ai](http://venice.ai). I just want to know which one you would choose. | 2025-06-11T01:12:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l8fxtl/veniceai_vs_ollama_on_server/ | wbiggs205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l8fxtl | false | null | t3_1l8fxtl | /r/LocalLLaMA/comments/1l8fxtl/veniceai_vs_ollama_on_server/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'rRDGOTZd1prv-7QHj5_Degzi3zpUKn55iTiFjQ3pvaY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/zx8qq2YSz3br54Y5q5Hyjtu-NCc5vbsE026WWlddR7o.jpg?width=108&crop=smart&auto=webp&s=ad7e4a70207be163e0c21e7ff4ec56eec0eb3920', 'width': 108}, {'height': 113, 'url': 'h...
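Every row above follows the same column order (title, score, selftext, created, url, author, domain, edited, gilded, gildings, id, locked, media, name, permalink, spoiler, stickied, thumbnail, ups, preview). A minimal sketch of filtering such rows once they are loaded as records — the sample records are abbreviated copies of rows from this dump, and the `keep` helper and its `min_score` threshold are illustrative assumptions, not part of the dataset:

```python
# Hypothetical sketch: filter dataset rows shaped like the dump above.
# Field names mirror the table's columns; values are abbreviated copies.
rows = [
    {"title": "I finally got rid of Ollama!", "score": 549, "selftext": "About a month ago..."},
    {"title": "LA RELACIÓN DE AYUDA EN TRABAJO SOCIAL", "score": 1, "selftext": "[removed]"},
    {"title": "NSFW image to text", "score": 24, "selftext": "Hi everyone, ..."},
]

def keep(row, min_score=2):
    """Drop posts whose body was removed, plus low-score noise."""
    return row["selftext"] != "[removed]" and row["score"] >= min_score

kept = [r["title"] for r in rows if keep(r)]
print(kept)  # ['I finally got rid of Ollama!', 'NSFW image to text']
```

The same predicate works unchanged on a full export of the dataset, e.g. applied per-record after loading the dump with the `datasets` library or pandas.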