title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns] 2023-04-01 04:30:41 – 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns] 1970-01-01 00:00:00 – 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3090 or 5060 Ti | 5 | I am interested in building a new desktop computer, and would like to make sure I'm able to run a local function-calling LLM (for toying around, and maybe for use in a coding-assistance tool) as well as NLP tasks.
I've seen those two devices. One is relatively old but can be bought used at about 700€, while a 5060 t... | 2025-05-19T11:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kq99yg/3090_or_5060_ti/ | marius851000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq99yg | false | null | t3_1kq99yg | /r/LocalLLaMA/comments/1kq99yg/3090_or_5060_ti/ | false | false | self | 5 | null |
Intel launches $299 Arc Pro B50 with 16GB of memory, 'Project Battlematrix' workstations with 24GB Arc Pro B60 GPUs | 776 | "While the B60 is designed for powerful 'Project Battlematrix' AI workstations... will carry a roughly $500 per-unit price tag | 2025-05-19T11:14:29 | https://www.tomshardware.com/pc-components/gpus/intel-launches-usd299-arc-pro-b50-with-16gb-of-memory-project-battlematrix-workstations-with-24gb-arc-pro-b60-gpus | FullstackSensei | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1kq9294 | false | null | t3_1kq9294 | /r/LocalLLaMA/comments/1kq9294/intel_launches_299_arc_pro_b50_with_16gb_of/ | false | false | 776 | {'enabled': False, 'images': [{'id': '2WRQJFuDy0yvdo8Tiv2FKWqHIhmhdcrt4EosSmebgBg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lJpkUaWR7aRg9qhyrcIgwW2kvtG6PxI9-Hw_9dnqBZU.jpg?width=108&crop=smart&auto=webp&s=6b3edf0d2b0683e25c02fa3aaba823f08261c32f', 'width': 108}, {'height': 121, 'url': 'h... | |
Computex: Intel Unveils New GPUs for AI and Workstations | 186 | 24GB for $500 | 2025-05-19T11:05:10 | https://newsroom.intel.com/client-computing/computex-intel-unveils-new-gpus-ai-workstations | MR_-_501 | newsroom.intel.com | 1970-01-01T00:00:00 | 0 | {} | 1kq8wo4 | false | null | t3_1kq8wo4 | /r/LocalLLaMA/comments/1kq8wo4/computex_intel_unveils_new_gpus_for_ai_and/ | false | false | 186 | {'enabled': False, 'images': [{'id': '007o_fpFSpvZlrkPAPfXPwKClNNhBgQoF7pYoT0U_Fc', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/a_EOuFMT3wImCaaTP_AxZtoh2M_kQm2Ho4iekIvJrVk.jpg?width=108&crop=smart&auto=webp&s=618bc4b8d0174c09145f919bfdf2728c47d7e7f4', 'width': 108}, {'height': 121, 'url': 'h... | |
Qwen hallucinating chinese || Better models for german RAG use cases? | 3 | No matter which qwen model i use, it keeps sometimes randomly hallucinating chinese characters, which makes it unusable for my usecase in a german business environment. I am specifically looking for a model proficient in german and specialized for RAG use cases. For efficiency I would like to use an AWQ quantization. I... | 2025-05-19T11:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kq8uqn/qwen_hallucinating_chinese_better_models_for/ | okonemi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq8uqn | false | null | t3_1kq8uqn | /r/LocalLLaMA/comments/1kq8uqn/qwen_hallucinating_chinese_better_models_for/ | false | false | self | 3 | null |
How to make your MCP clients (Cursor, Windsurf...) share context with each other | 1 | [removed] | 2025-05-19T11:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kq8uct/how_to_make_your_mcp_clients_cursor_windsurf/ | anmolbaranwal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq8uct | false | null | t3_1kq8uct | /r/LocalLLaMA/comments/1kq8uct/how_to_make_your_mcp_clients_cursor_windsurf/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '8yw1aMEiwnrwsodbJvTKRxq08BfbHuBN6x3eS_kY70k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XUP9Ymk8zh-fwyrDcatHMCpuMSC_Hgf4XdqGH8l-n2I.jpg?width=108&crop=smart&auto=webp&s=8ffbfc22020c54c31b3695c4c8e7f62e98864899', 'width': 108}, {'height': 108, 'url': 'h... |
Any lightweight AI model for ollama that can be trained to do queries and read software manuals? | 1 | [removed] | 2025-05-19T10:53:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kq8pp2/any_lightweight_ai_model_for_ollama_that_can_be/ | Palova98 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq8pp2 | false | null | t3_1kq8pp2 | /r/LocalLLaMA/comments/1kq8pp2/any_lightweight_ai_model_for_ollama_that_can_be/ | false | false | self | 1 | null |
Water Cooling My RTX 4090 48GB: A DIY Mod with a 240mm AIO | 1 | [removed] | 2025-05-19T10:25:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kq895r/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | Academic-Passenger99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq895r | false | null | t3_1kq895r | /r/LocalLLaMA/comments/1kq895r/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | false | false | 1 | null | |
Water Cooling My RTX 4090 48GB: DIY Mod with a 240mm AIO | 1 | [removed] | 2025-05-19T10:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kq86kc/water_cooling_my_rtx_4090_48gb_diy_mod_with_a/ | Weekly-Program-2004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq86kc | false | null | t3_1kq86kc | /r/LocalLLaMA/comments/1kq86kc/water_cooling_my_rtx_4090_48gb_diy_mod_with_a/ | false | false | self | 1 | null |
Water Cooling My RTX 4090 48GB: A DIY Mod with a 240mm AIO | 1 | [removed] | 2025-05-19T10:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kq8568/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | Weekly-Program-2004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq8568 | false | null | t3_1kq8568 | /r/LocalLLaMA/comments/1kq8568/water_cooling_my_rtx_4090_48gb_a_diy_mod_with_a/ | false | false | 1 | null | |
Anything below 7b is useless | 1 | [removed] | 2025-05-19T10:15:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kq847j/anything_below_7b_is_useless/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq847j | false | null | t3_1kq847j | /r/LocalLLaMA/comments/1kq847j/anything_below_7b_is_useless/ | false | false | self | 1 | null |
Anything below 7b is a waste of time | 1 | [removed] | 2025-05-19T10:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kq7vrv/anything_below_7b_is_a_waste_of_time/ | GreenTreeAndBlueSky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq7vrv | false | null | t3_1kq7vrv | /r/LocalLLaMA/comments/1kq7vrv/anything_below_7b_is_a_waste_of_time/ | false | false | self | 1 | null |
Is a Rx 9070Xt Good enough? | 1 | [removed] | 2025-05-19T09:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kq7tnh/is_a_rx_9070xt_good_enough/ | uc-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq7tnh | false | null | t3_1kq7tnh | /r/LocalLLaMA/comments/1kq7tnh/is_a_rx_9070xt_good_enough/ | false | false | self | 1 | null |
Any tts that support Hebrew? | 1 | I just need one that sound natural. | 2025-05-19T09:43:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kq7mol/any_tts_that_support_hebrew/ | ResponsibleTruck4717 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq7mol | false | null | t3_1kq7mol | /r/LocalLLaMA/comments/1kq7mol/any_tts_that_support_hebrew/ | false | false | self | 1 | null |
OuteTTS 1.0 (0.6B) — Apache 2.0, Batch Inference (~0.1–0.02 RTF) | 144 | Hey everyone! I just released OuteTTS-1.0-0.6B, a lighter variant built on Qwen-3 0.6B.
OuteTTS-1.0-0.6B
- Model Architecture: Based on Qwen-3 0.6B.
- License: Apache 2.0 (free for commercial and personal use)
- Multilingual: 14 supported languages: English, Chinese, Dutch, French, Georgian, German, Hungarian, Italia... | 2025-05-19T08:56:52 | https://huggingface.co/OuteAI/OuteTTS-1.0-0.6B | OuteAI | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kq6ysz | false | null | t3_1kq6ysz | /r/LocalLLaMA/comments/1kq6ysz/outetts_10_06b_apache_20_batch_inference_01002_rtf/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'bRXF6OCSYqhV__cmYbl3mo1a9EDkvDWIr5S2Odd-RwA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mkj0c5KE7uG2t5lRcNFyEg2Rx_CYpgOSNHlXCK0pNG4.jpg?width=108&crop=smart&auto=webp&s=ed3a16858e6830b08e5756690dac427b68e1e3f3', 'width': 108}, {'height': 116, 'url': 'h... | |
Real time voice to voice AI | 1 | Hello everyone,
I’m building a website that allows users to practice interviews with a virtual examiner. This means I need a real-time, voice-to-voice solution with low latency and reasonable cost.
The business model is as follows: for example, a customer pays $10 for a 20-minute mock interview. The interview script ... | 2025-05-19T08:34:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kq6nkv/real_time_voice_to_voice_ai/ | Prestigious-Ant-4348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq6nkv | false | null | t3_1kq6nkv | /r/LocalLLaMA/comments/1kq6nkv/real_time_voice_to_voice_ai/ | false | false | self | 1 | null |
Creating your own avatar | 1 | I saw in the news today that UBS were creating AI avatars of their analysts to make presentations, see: https://www.ft.com/content/0916d635-755b-4cdc-b722-e32d94ae334d (paywalled).
I was curious about doing the same thing for myself but run locally so I have full control over my avatar.
Has anyone done something like... | 2025-05-19T08:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kq6e5v/creating_your_own_avatar/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq6e5v | false | null | t3_1kq6e5v | /r/LocalLLaMA/comments/1kq6e5v/creating_your_own_avatar/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Ljdpzyn2d8htEgCmohLfQbgKl1NNRFeXoNkEhxnWqSw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OeDSdLvnckzY6teyy8Fj0jYItPdNsjrrnd3B9Ye4OsQ.jpg?width=108&crop=smart&auto=webp&s=8237593dc599d5728240f7fa80130c7f969ca1a7', 'width': 108}, {'height': 121, 'url': 'h... |
NVIDIA Launches GB10-Powered DGX Spark & GB300-Powered DGX Station AI Systems, Blackwell Ultra With 20 PFLOPs Compute | 14 | 2025-05-19T08:12:32 | https://wccftech.com/nvidia-gb10-powered-dgx-spark-gb300-powered-dgx-station-ai-systems/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1kq6d4u | false | null | t3_1kq6d4u | /r/LocalLLaMA/comments/1kq6d4u/nvidia_launches_gb10powered_dgx_spark/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'PD8rnvifNMtTH2QZfbt1ABecTzvsQu7n786xD74W-RU', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Kf9iwaDR1s4NlZiLQgFyqXBmfU0c5KzKmPwqHwOjBKE.jpg?width=108&crop=smart&auto=webp&s=3583a7b92227eedbd32ca5e7e391e998ee1a6b40', 'width': 108}, {'height': 121, 'url': 'h... | ||
Best Way to Serve LLaMA 4 Scout or DeepSeek V3 with 10 Concurrent Users @ 30 t/s? | 1 | [removed] | 2025-05-19T08:11:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kq6cok/best_way_to_serve_llama_4_scout_or_deepseek_v3/ | HereForAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq6cok | false | null | t3_1kq6cok | /r/LocalLLaMA/comments/1kq6cok/best_way_to_serve_llama_4_scout_or_deepseek_v3/ | false | false | self | 1 | null |
NVIDIA Intros RTX PRO Servers For Enterprise, Equipped With RTX PRO 6000 "Blackwell" Server GPUs | 4 | 2025-05-19T08:11:31 | https://wccftech.com/nvidia-rtx-pro-servers-enterprise-equipped-rtx-pro-6000-blackwell-server-gpus/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1kq6cn1 | false | null | t3_1kq6cn1 | /r/LocalLLaMA/comments/1kq6cn1/nvidia_intros_rtx_pro_servers_for_enterprise/ | false | false | 4 | {'enabled': False, 'images': [{'id': 'HCa-4VpwQmGL_cOq69TsjRRM6M7wSgNqzxyYXesaFBY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/SPV-nOvGz6aOxZ24Ebe-WSTOsw-hV6ptHmK457FG9lE.jpg?width=108&crop=smart&auto=webp&s=3b3db277afd3e8f102d2521822e406ac49635363', 'width': 108}, {'height': 121, 'url': 'h... | ||
Very mixed results with llama3.2 - the 3b version | 1 | Hello,
I'm working on a "simple" sentiment check.
The strings / text are usually a few words long and should be checked by a system (n8n, sentiment analysis node) and afterwards categorized (positive, neutral, negative).
If I'm testing this on an OpenAI account - or maybe even a local qwen3:4b this seems to work qu... | 2025-05-19T07:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kq63vz/very_mixed_results_with_llama32_the_3b_version/ | Chris8080 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq63vz | false | null | t3_1kq63vz | /r/LocalLLaMA/comments/1kq63vz/very_mixed_results_with_llama32_the_3b_version/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'AoIRwsKr-WBO6KSvFraVVBUgYYrHU7YCZVpWdBqjCnY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/NaiUuri2WGbCs_1xSZf0qZi56RKNYEV6mFk6gXOYSHM.jpg?width=108&crop=smart&auto=webp&s=90a202c41e658f5623626f92d99d6b86507ba3c0', 'width': 108}, {'height': 113, 'url': 'h... |
LM Studio: Setting `trust_remote_code=True` | 1 | [removed] | 2025-05-19T07:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kq6105/lm_studio_setting_trust_remote_codetrue/ | NiceLinden97 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq6105 | false | null | t3_1kq6105 | /r/LocalLLaMA/comments/1kq6105/lm_studio_setting_trust_remote_codetrue/ | false | false | self | 1 | null |
How do you know which tool to run your model with? | 1 | I was watching a few videos from Bijan Bowen and he often says he has to launch the model from vllm or specifically from LM Studio, etc.
Is there a reason why models need to be run using specific tools and how do you know where to run the LLM? | 2025-05-19T07:40:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kq5x9w/how_do_you_know_which_tool_to_run_your_model_with/ | crispyfrybits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq5x9w | false | null | t3_1kq5x9w | /r/LocalLLaMA/comments/1kq5x9w/how_do_you_know_which_tool_to_run_your_model_with/ | false | false | self | 1 | null |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T07:21:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kq5num/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq5num | false | null | t3_1kq5num | /r/LocalLLaMA/comments/1kq5num/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
Looking for GPU recommendations for local LLM (on Linux) | 1 | [removed] | 2025-05-19T07:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kq5hxv/looking_for_gpu_recommendations_for_local_llm_on/ | Southern-Shift-736 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq5hxv | false | null | t3_1kq5hxv | /r/LocalLLaMA/comments/1kq5hxv/looking_for_gpu_recommendations_for_local_llm_on/ | false | false | self | 1 | null |
Clara — A fully offline, Modular AI workspace (LLMs + Agents + Automation + Image Gen) | 590 | So I’ve been working on this for the past few months and finally feel good enough to share it.
It’s called **Clara** — and the idea is simple:
🧩 **Imagine building your own workspace for AI** — with local tools, agents, automations, and image generation.
Note: Created this becoz i hated the ChatUI for everything, I... | 2025-05-19T06:53:01 | BadBoy17Ge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kq590b | false | null | t3_1kq590b | /r/LocalLLaMA/comments/1kq590b/clara_a_fully_offline_modular_ai_workspace_llms/ | false | false | 590 | {'enabled': True, 'images': [{'id': 'mpzTqJBqnQqcN7JUUt3vhpTSsrB2Q1Yt81APefO73sg', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png?width=108&crop=smart&auto=webp&s=eccc34dfe2ed11aeac1431f9d2435d5623b3c5c0', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/u6niruxjqo1f1.png... | ||
#Lab Leak vs.Natural Origin
In 2020, a pandemic changed the world, but the origin of the virus remains a mystery. Recently, Nicolas Hulscher, an epidemiologist and administrator at the McCullough Foundation in the US, published a paper, "Investigation Finds Multiple SARS-CoV-2 (Coronavirus) Laboratory Leaks at the University of North Carolina at Chapel Hill's BSL-3 Facility", arguing that "substantial evidence indicates the SARS-CoV-2 virus was man-made". According to the paper, between June 2020 and January 2021, the University of North Carolina sequenced seven SARS-CoV-2 "laboratory-acquired infections", all suspected to originate from synthetic-virus research conducted at the university's top coronavirus laboratories, including the Baric lab. More critically, the viruses synthesized in the UNC labs carry a distinctive "genetic watermark": T15102 | 1 | 2025-05-19T06:31:25 | NorthAgency4433 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kq4xot | false | null | t3_1kq4xot | /r/LocalLLaMA/comments/1kq4xot/lab_leak_vsnatural_origin/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'or3t8khrno1f1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=108&crop=smart&auto=webp&s=a41152c754b30dc24e7dedfff98da7c01748c04f', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/or3t8khrno1f1.jpeg?width=216&crop=smart&auto=...
OuteTTS v1.0 now supported by chatllm.cpp | 28 | After Orpheus-TTS is implemented in [ChatLLM.cpp](https://github.com/foldl/chatllm.cpp), now here comes
[OuteTTS v1.0](https://huggingface.co/OuteAI/Llama-OuteTTS-1.0-1B). | 2025-05-19T06:27:47 | https://v.redd.it/cpcocy4jmo1f1 | foldl-li | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kq4vrv | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cpcocy4jmo1f1/DASHPlaylist.mpd?a=1750228079%2CYTU1MzYxMjJiM2M1NzI1ODI5M2U0YjAwYTcwNTJiY2Y4YzU3NTcxODdkNTc5OWE4ZjdiZDVmYWY3ODE4ZWYwYQ%3D%3D&v=1&f=sd', 'duration': 9, 'fallback_url': 'https://v.redd.it/cpcocy4jmo1f1/DASH_1080.mp4?source=fallback', 'ha... | t3_1kq4vrv | /r/LocalLLaMA/comments/1kq4vrv/outetts_v10_now_supported_by_chatllmcpp/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bGFzbDV5NGptbzFmMYcMr_Cq2gLg7E5zrnm6bi-e1D6-e2IdLt9Ao5b5ur7s.png?width=108&crop=smart&format=pjpg&auto=webp&s=7c65f8f44d36dfc70d800a53650d8478f97bf... | |
NVIDIA says DGX Spark releasing in July | 61 | DGX Spark should be available in July.
The 128 GB unified memory amount is nice, but there's been discussions about whether the bandwidth will be too slow to be practical. Will be interesting to see what independent benchmarks will show, I don't think it's had any outsider reviews yet. I couldn't find a price yet, tha... | 2025-05-19T05:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kq4ey4/nvidia_says_dgx_spark_releasing_in_july/ | Aplakka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq4ey4 | false | null | t3_1kq4ey4 | /r/LocalLLaMA/comments/1kq4ey4/nvidia_says_dgx_spark_releasing_in_july/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': 'Kkp9zaJa0nIObH7G6Qz8lvwwNpFIqHm-PW5o6mlo3Dk', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ggt6c4juOCzmVY5sG5r1Hrv23PCDLUqE-nnPBVYjAs4.jpg?width=108&crop=smart&auto=webp&s=0cdc4a0fc0bb5ef1e75c3757bef9404477d1d883', 'width': 108}, {'height': 121, 'url': 'h... |
Is a Q&A dataset absolutely necessary when fine-tuning an LLM? | 1 | [removed] | 2025-05-19T05:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kq4dlb/is_a_qa_dataset_absolutely_necessary_when/ | Cyp9715 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq4dlb | false | null | t3_1kq4dlb | /r/LocalLLaMA/comments/1kq4dlb/is_a_qa_dataset_absolutely_necessary_when/ | false | false | self | 1 | null |
Is it possible to use Qwen2.5-VL's vision encoder to generate pure image embeddings like CLIP or ViT? | 1 | [removed] | 2025-05-19T05:28:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3zve/is_it_possible_to_use_qwen25vls_vision_encoder_to/ | MysteriousAlps608 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3zve | false | null | t3_1kq3zve | /r/LocalLLaMA/comments/1kq3zve/is_it_possible_to_use_qwen25vls_vision_encoder_to/ | false | false | self | 1 | null |
[OSS] Containerized llama.cpp + Ollama backend runner for RunPod serverless (easy LLM deployment) | 6 | I'm sharing an open-source project I built called `runpod-llm` \- a containerized setup for running LLMs on RunPod, with minimal config and full support for both llama.cpp and Ollama backends.
# ⚙️ What It Does
* Lets you spin up an LLM container on RunPod (e.g., serverless GPU) with a few env vars
* Supports both `l... | 2025-05-19T05:19:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3v0u/oss_containerized_llamacpp_ollama_backend_runner/ | zeeb0t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3v0u | false | null | t3_1kq3v0u | /r/LocalLLaMA/comments/1kq3v0u/oss_containerized_llamacpp_ollama_backend_runner/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'Cwyggr1WckApp_4o7_KbiFTNs608MlRMpv0_7qmC4DA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/AYRIfGXAmZYxTJ8LI1HWwu47ZkRAjkUVrKlvfPtUrUE.jpg?width=108&crop=smart&auto=webp&s=2d9f764a1f630a1418fd6620bb5937a95ca75c9d', 'width': 108}, {'height': 108, 'url': 'h... |
Challenges in Fine-Tuning LLMs on Large Codebases | 1 | [removed] | 2025-05-19T05:09:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3pm9/challenges_in_finetuning_llms_on_large_codebases/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3pm9 | false | null | t3_1kq3pm9 | /r/LocalLLaMA/comments/1kq3pm9/challenges_in_finetuning_llms_on_large_codebases/ | false | false | self | 1 | null |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T05:06:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3nwv/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3nwv | false | null | t3_1kq3nwv | /r/LocalLLaMA/comments/1kq3nwv/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T05:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3n5d/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3n5d | false | null | t3_1kq3n5d | /r/LocalLLaMA/comments/1kq3n5d/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
I created a program that create personalized playlist from a large playlist using LLM | 1 | [removed] | 2025-05-19T04:54:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3h1b/i_created_a_program_that_create_personalized/ | MoodOdd9657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3h1b | false | null | t3_1kq3h1b | /r/LocalLLaMA/comments/1kq3h1b/i_created_a_program_that_create_personalized/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OfgZLP4gv_k8jMG04ZbCW0i7qucIwr7BuybdnlYYlaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=108&crop=smart&auto=webp&s=ae90ffa3a9a53e2d7d87797b9966ee3578b0c628', 'width': 108}, {'height': 108, 'url': 'h... |
My post about fine-tuning has been removed by the filter. | 1 | [removed] | 2025-05-19T04:52:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3gcm/my_post_about_finetuning_has_been_removed_by_the/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3gcm | false | null | t3_1kq3gcm | /r/LocalLLaMA/comments/1kq3gcm/my_post_about_finetuning_has_been_removed_by_the/ | false | false | self | 1 | null |
I created a program that create personalized playlist from a large playlist using LLM( Hope it helps you organize your chaotic playlist) | 1 | [removed] | 2025-05-19T04:51:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kq3fhm/i_created_a_program_that_create_personalized/ | MoodOdd9657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq3fhm | false | null | t3_1kq3fhm | /r/LocalLLaMA/comments/1kq3fhm/i_created_a_program_that_create_personalized/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'OfgZLP4gv_k8jMG04ZbCW0i7qucIwr7BuybdnlYYlaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-XsCps4n1z5hfvbsp4fv0t0IizynYLRMj-i2RTS1yjQ.jpg?width=108&crop=smart&auto=webp&s=ae90ffa3a9a53e2d7d87797b9966ee3578b0c628', 'width': 108}, {'height': 108, 'url': 'h... |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T04:37:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kq37pu/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq37pu | false | null | t3_1kq37pu | /r/LocalLLaMA/comments/1kq37pu/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T04:35:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kq36b5/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq36b5 | false | null | t3_1kq36b5 | /r/LocalLLaMA/comments/1kq36b5/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
SAGA - Semantic And Graph-enhanced Authoring | 21 | I'd like to share a little project I've been actively working on for the last couple of weeks called SAGA. It is still very much under development, so I'd love to know your thoughts about it!
SAGA (Semantic And Graph-enhanced Authoring) is a sophisticated AI-powered creative writing system designed to generate full-le... | 2025-05-19T04:23:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kq2zgg/saga_semantic_and_graphenhanced_authoring/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq2zgg | false | null | t3_1kq2zgg | /r/LocalLLaMA/comments/1kq2zgg/saga_semantic_and_graphenhanced_authoring/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'h... |
SAGA: Semantic And Graph-enhanced Authoring | 1 | [removed] | 2025-05-19T04:19:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kq2wx6/saga_semantic_and_graphenhanced_authoring/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq2wx6 | false | null | t3_1kq2wx6 | /r/LocalLLaMA/comments/1kq2wx6/saga_semantic_and_graphenhanced_authoring/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'h... |
I made a tool to efficiently find optimal parameters | 49 | TLDR: https://github.com/kooshi/TaguchiBench
Taguchi lets you change multiple variables at once to test a bunch of stuff quickly, and I made a tool to do it for AI and other stuff
---
I've been waking up inspired often recently, with the multiplying effect of Claude and Gemini, I can explore ideas as fast as I come ... | 2025-05-19T04:19:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kq2wr0/i_made_a_tool_to_efficiently_find_optimal/ | Kooshi_Govno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq2wr0 | false | null | t3_1kq2wr0 | /r/LocalLLaMA/comments/1kq2wr0/i_made_a_tool_to_efficiently_find_optimal/ | false | false | self | 49 | {'enabled': False, 'images': [{'id': 'pqOdkXftXSgOnRklCwdQYtWnW6Aq7pS-skqM5PzrA-0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RHEjmD2Uv6HsIiItPBnrOe_TAPyct-iifl0dQKXTIvk.jpg?width=108&crop=smart&auto=webp&s=1f76682be88866ec2a0714eb6ccb052f89af2058', 'width': 108}, {'height': 108, 'url': 'h... |
SAGA: Semantic And Graph-enhanced Authoring | 1 | [removed] | 2025-05-19T04:16:38 | https://github.com/Lanerra/saga | MariusNocturnum | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kq2vdl | false | null | t3_1kq2vdl | /r/LocalLLaMA/comments/1kq2vdl/saga_semantic_and_graphenhanced_authoring/ | false | false | 1 | {'enabled': False, 'images': [{'id': '0V29OFL9XjKhu7pB82_qXO3IeOsVcMk5AhsL2AK4HqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QmuLGRVq_Hg8OYJ-AhdIAUNcJdwxHtu3tKwEAXfAhB4.jpg?width=108&crop=smart&auto=webp&s=04407b3d983c12adada102ccea1df4032cfb5857', 'width': 108}, {'height': 108, 'url': 'h... | |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T04:00:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kq2kuo/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq2kuo | false | null | t3_1kq2kuo | /r/LocalLLaMA/comments/1kq2kuo/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
Challenges in Fine-Tuning LLMs on Large Proprietary Codebases | 1 | [removed] | 2025-05-19T03:57:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kq2jf2/challenges_in_finetuning_llms_on_large/ | SaladNo6817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq2jf2 | false | null | t3_1kq2jf2 | /r/LocalLLaMA/comments/1kq2jf2/challenges_in_finetuning_llms_on_large/ | false | false | self | 1 | null |
Who wants to buy to run a local LLM? Please contact me. | 0 | 2025-05-19T03:39:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kq27u1/who_wants_to_buy_to_run_a_local_llm_please/ | Reasonable-Climate66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq27u1 | false | null | t3_1kq27u1 | /r/LocalLLaMA/comments/1kq27u1/who_wants_to_buy_to_run_a_local_llm_please/ | false | false | 0 | null | ||
Qwen Web Dev just got even better! One click to deploy! | 1 | [removed] | 2025-05-19T03:15:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kq1ssl/qwen_web_dev_just_got_even_better_one_click_to/ | No_Banana_5663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq1ssl | false | null | t3_1kq1ssl | /r/LocalLLaMA/comments/1kq1ssl/qwen_web_dev_just_got_even_better_one_click_to/ | false | false | self | 1 | null |
The first author of the ParScale paper discusses how they turned ParScale from an idea into reality | 74 | Because many friends have given feedback that Zhihu cannot be accessed without registration, I am simply using a translation plugin to translate posts from Zhihu into English and taking screenshots.
The original author is keytoyze, who holds all rights to the article. The original address is:
[www.zhihu.com/ques... | 2025-05-19T02:55:28 | https://www.reddit.com/r/LocalLLaMA/comments/1kq1g7s/the_first_author_of_the_parscale_paper_discusses/ | Dr_Karminski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq1g7s | false | null | t3_1kq1g7s | /r/LocalLLaMA/comments/1kq1g7s/the_first_author_of_the_parscale_paper_discusses/ | false | false | 74 | null | |
Where can I find this AO3 dataset for creative writing LLM | 1 | [removed] | 2025-05-19T01:55:54 | https://huggingface.co/datasets/nyuuzyou/archiveofourown | EastPanic647 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kq0cfx | false | null | t3_1kq0cfx | /r/LocalLLaMA/comments/1kq0cfx/where_can_i_find_this_ao3_dataset_for_creative/ | false | false | 1 | {'enabled': False, 'images': [{'id': '6pBxabgD7OhKKdtNMAWcuXn2kajwWkXmJ38aOm5Jx8M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=108&crop=smart&auto=webp&s=82c14b58a1ea066ceb2285ad939d1ae35ddce74c', 'width': 108}, {'height': 116, 'url': 'h... | |
Where can I find this AO3 dataset for creative writing LLM | 1 | [removed] | 2025-05-19T01:54:13 | https://huggingface.co/datasets/nyuuzyou/archiveofourown | EastPanic647 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kq0bai | false | null | t3_1kq0bai | /r/LocalLLaMA/comments/1kq0bai/where_can_i_find_this_ao3_dataset_for_creative/ | false | false | 1 | {'enabled': False, 'images': [{'id': '6pBxabgD7OhKKdtNMAWcuXn2kajwWkXmJ38aOm5Jx8M', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_gcd8T9jnslCz83hv3axTJ6bD2j8tD5ikhOUeJc1fdc.jpg?width=108&crop=smart&auto=webp&s=82c14b58a1ea066ceb2285ad939d1ae35ddce74c', 'width': 108}, {'height': 116, 'url': 'h... | |
Is Qwen 2.5 Coder Instruct still the best option for local coding with 24GB VRAM? | 47 | Is Qwen 2.5 Coder Instruct still the best option for local coding with 24GB VRAM, or has that changed since Qwen 3 came out? I haven't noticed a coding model for it, but it's possible other models have come and gone that I've missed that handle Python better than Qwen 2.5. | 2025-05-19T01:40:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kq029v/is_qwen_25_coder_instruct_still_the_best_option/ | MrWeirdoFace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kq029v | false | null | t3_1kq029v | /r/LocalLLaMA/comments/1kq029v/is_qwen_25_coder_instruct_still_the_best_option/ | false | false | self | 47 | null |
Are there any models that I can run locally with only 2 GB of RAM? | 0 | Hello, this may be a very dumb question, but are there any LLMs that I can run locally on my potato PC? Or are they all RAM-hogging, with the only way to run them being through an expensive cloud computing service? | 2025-05-19T01:34:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kpzy8g/are_there_any_models_that_i_can_run_locally_with/ | LaidBackDev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpzy8g | false | null | t3_1kpzy8g | /r/LocalLLaMA/comments/1kpzy8g/are_there_any_models_that_i_can_run_locally_with/ | false | false | self | 0 | null |
I built an open-source AI-powered library for web testing with Llama/Mistral | 1 | [removed] | 2025-05-19T01:03:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kpzdjx/i_built_an_opensource_aipowered_library_for_web/ | p0deje | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpzdjx | false | null | t3_1kpzdjx | /r/LocalLLaMA/comments/1kpzdjx/i_built_an_opensource_aipowered_library_for_web/ | false | false | self | 1 | null |
To think or to no_think with Qwen3 | 17 | Lately I got a 5090 and been experimenting with Qwen3-32B at Q5 (unsloth). With Flash attention and KV cache quantization at Q8, I am able to get up to 32k token window while fully occupying the GPU memory (30-31 GB). It gives a generation speed of 50 t/s which is very impressive. I am using that with Roocode via Visua... | 2025-05-19T01:00:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kpzbvl/to_think_or_to_no_think_with_qwen3/ | SandboChang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpzbvl | false | null | t3_1kpzbvl | /r/LocalLLaMA/comments/1kpzbvl/to_think_or_to_no_think_with_qwen3/ | false | false | self | 17 | null |
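For reference, the setup the post above describes (Q5 model, Flash Attention, Q8 KV-cache quantization, 32k context, fully offloaded) maps to a llama.cpp `llama-server` launch roughly like this — the model filename is illustrative, and the post does not say which runtime it actually uses:

```shell
# Qwen3-32B at Q5 with Flash Attention (-fa), Q8 KV cache (-ctk/-ctv),
# 32k context, and all layers offloaded to the GPU (-ngl 99).
# Model filename is illustrative.
llama-server -m Qwen3-32B-Q5_K_M.gguf \
  -ngl 99 -fa \
  -ctk q8_0 -ctv q8_0 \
  -c 32768
```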
Can I split my GPU VRAM? | 1 | [removed] | 2025-05-19T00:53:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kpz6u4/can_i_split_my_gpu_vram/ | Sufficient_Bit_3312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpz6u4 | false | null | t3_1kpz6u4 | /r/LocalLLaMA/comments/1kpz6u4/can_i_split_my_gpu_vram/ | false | false | self | 1 | null |
How can I improve this subtitle translator prompt? | 5 | Hello, I've been trying to use AI models on OpenRouter in order to translate subtitles. My script will break the subtitle file into chunks and feed them to the LLM one by one. After a bit of testing I found Deepseek V3 0324 to yield the best results. However, it'll still take multiple tries for it to translate it prope... | 2025-05-19T00:31:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kpyrrs/how_can_i_improve_this_subtitle_translator_prompt/ | OneSteelTank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpyrrs | false | null | t3_1kpyrrs | /r/LocalLLaMA/comments/1kpyrrs/how_can_i_improve_this_subtitle_translator_prompt/ | false | false | self | 5 | null |
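The chunking step the post above describes can be sketched like this — function name and chunk size are illustrative, and the actual OpenRouter request is omitted:

```python
def chunk_srt(srt_text: str, cues_per_chunk: int = 20) -> list[str]:
    """Split an SRT file into request-sized chunks of whole cues."""
    # SRT cues are separated by a blank line; split and drop empties
    cues = [c.strip() for c in srt_text.strip().split("\n\n") if c.strip()]
    # Regroup into fixed-size chunks so each request stays well under
    # the model's context window and never cuts a cue in half
    return ["\n\n".join(cues[i:i + cues_per_chunk])
            for i in range(0, len(cues), cues_per_chunk)]
```

Keeping cue boundaries intact matters here: a chunk that splits a timestamp from its text tends to produce malformed output that fails to parse back into SRT.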
Qwen released new paper and model: ParScale, ParScale-1.8B-(P1-P8) | 468 | The original text says, 'We theoretically and empirically establish that scaling with P parallel streams is comparable to scaling the number of parameters by O(log P).' Does this mean that a 30B model can achieve the effect of a 45B model? | 2025-05-19T00:24:28 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpyn8g | false | null | t3_1kpyn8g | /r/LocalLLaMA/comments/1kpyn8g/qwen_released_new_paper_and_model_parscale/ | false | false | 468 | {'enabled': True, 'images': [{'id': 'PyGaUo1WVJJTNJTihagMB-1J-8iG1Q6G3HjZt2t5Foc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/7q0xsc86um1f1.png?width=108&crop=smart&auto=webp&s=1e8068b081f67db09e13530f196c6274a5008fca', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/7q0xsc86um1f1.pn... | ||
Riffusion Ai music generator Ai voices spoken word, Shakespeare "All the World's a Stage", Abraham Lincoln ordering Pizza, German, Russian Spanish Singing/spoken word. I clone these Riffusion Ai voices of emotion and use in Zonos to create various types of voices for male and female. | 5 | 2025-05-18T23:29:35 | https://v.redd.it/zmpy3wuajm1f1 | Extension-Fee-8480 | /r/LocalLLaMA/comments/1kpxk18/riffusion_ai_music_generator_ai_voices_spoken/ | 1970-01-01T00:00:00 | 0 | {} | 1kpxk18 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/zmpy3wuajm1f1/DASHPlaylist.mpd?a=1750332580%2CMzU3MWVlZDM5MjJiNDhiNTcwOTExM2E1YjJlZjU2ZTBlNDBlMzliYjViZGYwMWRkY2Y3NDRjNmM1OWFjMzFiOA%3D%3D&v=1&f=sd', 'duration': 600, 'fallback_url': 'https://v.redd.it/zmpy3wuajm1f1/DASH_720.mp4?source=fallback', 'h... | t3_1kpxk18 | /r/LocalLLaMA/comments/1kpxk18/riffusion_ai_music_generator_ai_voices_spoken/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MzRsNGkwdmFqbTFmMQig-YZTkSc6LjowdLkLgQB-FI1hgO_IdhK5hGYwlsFQ.png?width=108&crop=smart&format=pjpg&auto=webp&s=7caf8eb74603f95f408dd7ba2c3eb7c56941e... | ||
I built a tool to profile LLM energy usage on Macs programmatically (down to the line of code) | 11 | If you want to measure LLM energy consumption on Macs, you have options like powermetrics (a CLI tool that periodically prints energy usage to your terminal) or Activity Monitor.
These work fine if you just want a high-level glance at your LLM's energy usage, but if you want more precise measurement (like seeing *... | 2025-05-18T23:19:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kpxd7t/i_built_a_tool_to_profile_llm_energy_usage_on/ | cachehit_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpxd7t | false | null | t3_1kpxd7t | /r/LocalLLaMA/comments/1kpxd7t/i_built_a_tool_to_profile_llm_energy_usage_on/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '-YYyL_j0bV3W5TR_FNmATZI7qBS4xDSm4VOnHHmYJXg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LKirANG9XZbpKhq9T4bV1x4DpOpo63CrIDXxzKImYQ8.jpg?width=108&crop=smart&auto=webp&s=d81ec25fcb4923413147f6ac1234cf1a0c0cb375', 'width': 108}, {'height': 108, 'url': 'h... |
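For context, the `powermetrics` route the post compares against boils down to sampling the CLI's text output and scraping it; a minimal parser sketch (the line format is assumed from Apple Silicon output and may differ across macOS versions):

```python
import re

# powermetrics (run e.g. as `sudo powermetrics --samplers cpu_power -i 1000`)
# prints lines like "CPU Power: 1500 mW"; the exact format is assumed here
POWER_LINE = re.compile(r"^(CPU|GPU|ANE) Power:\s*(\d+)\s*mW", re.MULTILINE)

def parse_power(sample: str) -> dict[str, int]:
    """Map subsystem name -> power draw in milliwatts for one sample."""
    return {dev: int(mw) for dev, mw in POWER_LINE.findall(sample)}
```

This only gives whole-process granularity at the sampling interval, which is exactly the limitation the post's tool is trying to get past.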
Can I pool VRAM of the new nvidia workstation GPU's for local models? | 1 | [removed] | 2025-05-18T23:15:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kpxab3/can_i_pool_vram_of_the_new_nvidia_workstation/ | tyflips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpxab3 | false | null | t3_1kpxab3 | /r/LocalLLaMA/comments/1kpxab3/can_i_pool_vram_of_the_new_nvidia_workstation/ | false | false | self | 1 | null |
Qwen 3 14B gguf "chat"? | 1 | [removed] | 2025-05-18T23:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kpx67j/qwen_3_14b_gguf_chat/ | Effective_Owl7362 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpx67j | false | null | t3_1kpx67j | /r/LocalLLaMA/comments/1kpx67j/qwen_3_14b_gguf_chat/ | false | false | self | 1 | null |
"After constant IPTV issues in Canada, this one finally delivered" | 1 | [removed] | 2025-05-18T22:58:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kpwxq4/after_constant_iptv_issues_in_canada_this_one/ | Any-Passion625 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpwxq4 | false | null | t3_1kpwxq4 | /r/LocalLLaMA/comments/1kpwxq4/after_constant_iptv_issues_in_canada_this_one/ | false | false | self | 1 | null |
Unlock Qwen3's Full Power: cot_proxy for Easy Mode Switching, Parameter Control & Clean Outputs! | 39 | Hey AI Devs & Qwen3 Users! 👋
Struggling to effectively use Qwen3 models with their hybrid reasoning (`/think`) and normal (`/no_think`) modes? It can be a real challenge when each mode needs different sampling parameters, and tools like Cline or RooCode don't offer that fine-grained control.
That's where `cot_proxy`... | 2025-05-18T22:35:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kpwgjy/unlock_qwen3s_full_power_cot_proxy_for_easy_mode/ | ben1984th | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpwgjy | false | null | t3_1kpwgjy | /r/LocalLLaMA/comments/1kpwgjy/unlock_qwen3s_full_power_cot_proxy_for_easy_mode/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'Ypsh0bt2sx_9RWDd-holqz-jR_IsFaeSixPwaViweRs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_G3QCLTdRHjxiIHAdv9pyGbcJVsySFIJVKNgmIthdpU.jpg?width=108&crop=smart&auto=webp&s=7009670e65ffcee32f51561d60cb583610807f53', 'width': 108}, {'height': 108, 'url': 'h... |
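`cot_proxy` is its own project with its own configuration; as a rough sketch of the two transformations a proxy like it automates — injecting Qwen3's documented `/think`/`/no_think` soft switch into the prompt and stripping reasoning blocks from the reply (helper names here are made up):

```python
import re

def apply_mode(messages: list[dict], think: bool) -> list[dict]:
    """Append Qwen3's soft switch to the last user message."""
    tag = "/think" if think else "/no_think"
    out = [dict(m) for m in messages]
    for msg in reversed(out):
        if msg["role"] == "user":
            msg["content"] = f"{msg['content']} {tag}"
            break
    return out

def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from a model response."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()
```

Doing this in a proxy means clients like Cline or RooCode never have to know about the mode switch or see the raw reasoning.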
Unlimited text-to-speech using Kokoro-JS, 100% local, 100% open source | 177 | 2025-05-18T22:26:10 | https://streaming-kokoro.glitch.me/ | paranoidray | streaming-kokoro.glitch.me | 1970-01-01T00:00:00 | 0 | {} | 1kpw9nw | false | null | t3_1kpw9nw | /r/LocalLLaMA/comments/1kpw9nw/unlimited_texttospeech_using_kokorojs_100_local/ | false | false | default | 177 | null | |
Looking for lightweight open-source LLM for Egyptian Arabic real estate assistant (on Colab) | 1 | [removed] | 2025-05-18T22:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kpw2k2/looking_for_lightweight_opensource_llm_for/ | Ok-Watercress-451 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpw2k2 | false | null | t3_1kpw2k2 | /r/LocalLLaMA/comments/1kpw2k2/looking_for_lightweight_opensource_llm_for/ | false | false | self | 1 | null |
Best local LLaMA model for coding + fine-tuning on M2 Max (64 GB) & Zed Editor? | 2 | Hey everyone, I’m experimenting with running a LLaMA-style model 100% locally on my MacBook Pro M2 Max (64 GB RAM), and I have a few questions before I dive in:
1. Which model for coding?
•I work mainly in Astro, React and modern JS/TS stacks, and we all know how these stacks update every week.
•I’m torn between sma... | 2025-05-18T21:54:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kpvju2/best_local_llama_model_for_coding_finetuning_on/ | webmero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpvju2 | false | null | t3_1kpvju2 | /r/LocalLLaMA/comments/1kpvju2/best_local_llama_model_for_coding_finetuning_on/ | false | false | self | 2 | null |
Best local llm to run on a 16gb MacBook Pro M4 | 1 | [removed] | 2025-05-18T21:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kpvgam/best_local_llm_to_run_on_a_16gb_macbook_pro_m4/ | combo-user | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpvgam | false | null | t3_1kpvgam | /r/LocalLLaMA/comments/1kpvgam/best_local_llm_to_run_on_a_16gb_macbook_pro_m4/ | false | false | self | 1 | null |
How to choose a TTS model for your voice agent | 0 | https://comparevoiceai.com/blog/how-to-choose-tts-voice-ai-model | 2025-05-18T21:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kpv0ga/how_to_choose_a_tts_model_for_your_voice_agent/ | Excellent-Effect237 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpv0ga | false | null | t3_1kpv0ga | /r/LocalLLaMA/comments/1kpv0ga/how_to_choose_a_tts_model_for_your_voice_agent/ | false | false | self | 0 | null |
How does num_gpu work in Ollama? Why Ollama keeps using the GPU after the model stopped? | 0 | Hello guys, i'm confused, i hope you guys can help me.
If i run **Qwen3 30B A3B** with num\_gpu **maxed out**, i get **2-3 T/s** with **90% GPU** usage and **20% CPU** usage.
If i run it at **default**, i get **12-17 T/s** with **60% GPU** usage and **50%** **CPU** usage.
While if i run **Gemma 3 12B QAT** with num... | 2025-05-18T21:24:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kpuvnn/how_does_num_gpu_work_in_ollama_why_ollama_keeps/ | S4lVin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpuvnn | false | null | t3_1kpuvnn | /r/LocalLLaMA/comments/1kpuvnn/how_does_num_gpu_work_in_ollama_why_ollama_keeps/ | false | false | self | 0 | null |
Serve 1 LLM with different prompts for Visual Studio Code? | 1 | How do you guys tackle this scenario?
I'd like to have VSCode run Continue or Copilot or something else with both "Chat" and "Autocomplete/Fill in the middle" but instead of running 2 models, simply run the same instruct model with different system prompts or what not.
I'm not very experienced with Ollama and LMStudi... | 2025-05-18T20:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kptmbk/serve_1_llm_with_different_prompts_for_visual/ | windozeFanboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kptmbk | false | null | t3_1kptmbk | /r/LocalLLaMA/comments/1kptmbk/serve_1_llm_with_different_prompts_for_visual/ | false | false | self | 1 | null |
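One low-tech option, assuming an OpenAI-compatible server (llama-server, vLLM, etc.): keep a single loaded model and vary only the system prompt per role. A sketch — model name and prompts are illustrative, and note that real autocomplete usually goes through a completion endpoint with the model's fill-in-the-middle tokens rather than a chat prompt:

```python
def make_payload(model: str, system_prompt: str, user_text: str) -> dict:
    """Build a chat-completions request body for one shared local model."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
    }

CHAT_SYSTEM = "You are a helpful coding assistant. Explain and discuss code."
FIM_SYSTEM = "Complete the code at the cursor. Output only code, no prose."

chat_req = make_payload("qwen2.5-coder-7b", CHAT_SYSTEM, "Why does this loop allocate?")
fim_req = make_payload("qwen2.5-coder-7b", FIM_SYSTEM, "def add(a, b):\n    ")
```

Since both payloads name the same model, the server keeps one copy in VRAM and only the conversation framing changes between chat and autocomplete requests.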
Can I pool VRAM of the new nvidia workstation GPU's for local models? | 1 | [removed] | 2025-05-18T19:59:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kpsxdh/can_i_pool_vram_of_the_new_nvidia_workstation/ | tyflips | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpsxdh | false | null | t3_1kpsxdh | /r/LocalLLaMA/comments/1kpsxdh/can_i_pool_vram_of_the_new_nvidia_workstation/ | false | false | self | 1 | null |
Unsloth phi4 reasoning plus Q6 has big problems with thinking compared to QWQ3. Should I use unsloth PHI4 Reasoning Q6? | 1 | [removed] | 2025-05-18T19:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kpsoqi/unsloth_phi4_reasoning_plus_q6_has_big_problems/ | Hot_Watercress5440 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpsoqi | false | null | t3_1kpsoqi | /r/LocalLLaMA/comments/1kpsoqi/unsloth_phi4_reasoning_plus_q6_has_big_problems/ | false | false | self | 1 | null |
Can I pool VRAM of the new nvidia workstation GPU's for local models? | 1 | [removed] | 2025-05-18T19:41:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kpsiv8/can_i_pool_vram_of_the_new_nvidia_workstation/ | Careless-Wrongdoer82 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpsiv8 | false | null | t3_1kpsiv8 | /r/LocalLLaMA/comments/1kpsiv8/can_i_pool_vram_of_the_new_nvidia_workstation/ | false | false | self | 1 | null |
A doubt | 1 | [removed] | 2025-05-18T19:35:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kpsdd8/a_doubt/ | Relative_Ability_220 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpsdd8 | false | null | t3_1kpsdd8 | /r/LocalLLaMA/comments/1kpsdd8/a_doubt/ | false | false | self | 1 | null |
How to choose STT model for your Voice agent | 0 | 2025-05-18T19:21:38 | https://comparevoiceai.com/blog/how-to-choose-stt-voice-ai-model | Excellent-Effect237 | comparevoiceai.com | 1970-01-01T00:00:00 | 0 | {} | 1kps1z4 | false | null | t3_1kps1z4 | /r/LocalLLaMA/comments/1kps1z4/how_to_choose_stt_model_for_your_voice_agent/ | false | false | default | 0 | null | |
Skeptical about the increased focus on STEM and CoT | 78 | With the release of Qwen3, I’ve been growing increasingly skeptical about the direction many labs are taking with CoT and STEM focused LLMs. With Qwen3, every model in the lineup follows a hybrid CoT approach and has a heavy emphasis on STEM tasks. This seems to be part of why the models feel “overcooked”. I have seen ... | 2025-05-18T19:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kprsun/skeptical_about_the_increased_focus_on_stem_and/ | Quazar386 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kprsun | false | null | t3_1kprsun | /r/LocalLLaMA/comments/1kprsun/skeptical_about_the_increased_focus_on_stem_and/ | false | false | self | 78 | null |
minimum parameter model needed for rag? can i do it without llama | 1 | [removed] | 2025-05-18T18:52:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kprdez/minimum_parameter_model_needed_for_rag_can_i_do/ | ExtremeAcceptable289 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kprdez | false | null | t3_1kprdez | /r/LocalLLaMA/comments/1kprdez/minimum_parameter_model_needed_for_rag_can_i_do/ | false | false | self | 1 | null |
Geekbench equivalent for local LLM perf | 1 | [removed] | 2025-05-18T18:32:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kpqwh8/geekbench_equivalent_for_local_llm_perf/ | Friendly_Writer_8549 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpqwh8 | false | null | t3_1kpqwh8 | /r/LocalLLaMA/comments/1kpqwh8/geekbench_equivalent_for_local_llm_perf/ | false | false | self | 1 | null |
MLX vs. UD GGUF | 13 | Not sure if this is useful to anyone else, but I benchmarked Unsloth's Qwen3-30B-A3B Dynamic 2.0 GGUF against the MLX version. Both models are the 8-bit quantization. Both are running on LM Studio with the recommended Qwen 3 settings for samplers and temperature.
Results from the same thinking prompt:
\- MLX: 3,516 t... | 2025-05-18T18:27:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kpqrzz/mlx_vs_ud_gguf/ | cspenn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpqrzz | false | null | t3_1kpqrzz | /r/LocalLLaMA/comments/1kpqrzz/mlx_vs_ud_gguf/ | false | false | self | 13 | null |
Orange Pi AI Studio pro is now available. 192gb for ~2900$. Anyone knows how it performs and what can be done with it? | 55 | There was some speculation about it some months ago in this thread: https://www.reddit.com/r/LocalLLaMA/comments/1im141p/orange_pi_ai_studio_pro_mini_pc_with_408gbs/
Seems it can be ordered now on AliExpress (96gb for ~2600$, 192gb for ~2900$, but I couldn't find any english reviews or more info on it than what was sp... | 2025-05-18T18:17:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kpqk4c/orange_pi_ai_studio_pro_is_now_available_192gb/ | MarinatedPickachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpqk4c | false | null | t3_1kpqk4c | /r/LocalLLaMA/comments/1kpqk4c/orange_pi_ai_studio_pro_is_now_available_192gb/ | false | false | self | 55 | null |
Best ultra low budget GPU for 70B and best LLM for my purpose | 1 | [removed] | 2025-05-18T18:16:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kpqj7e/best_ultra_low_budget_gpu_for_70b_and_best_llm/ | ExtensionAd182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpqj7e | false | null | t3_1kpqj7e | /r/LocalLLaMA/comments/1kpqj7e/best_ultra_low_budget_gpu_for_70b_and_best_llm/ | false | false | self | 1 | null |
Optimizing llama-server for RTX 509 + RTX 4090 | 1 | [removed] | 2025-05-18T18:11:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kpqell/optimizing_llamaserver_for_rtx_509_rtx_4090/ | Lumpy-Flamingo6802 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpqell | false | null | t3_1kpqell | /r/LocalLLaMA/comments/1kpqell/optimizing_llamaserver_for_rtx_509_rtx_4090/ | false | false | self | 1 | null |
What ai is best for Chinese to English translation currently? | 2 | [removed] | 2025-05-18T18:03:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kpq8e4/what_ai_is_best_for_chinese_to_english/ | Civil_Candidate_824 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpq8e4 | false | null | t3_1kpq8e4 | /r/LocalLLaMA/comments/1kpq8e4/what_ai_is_best_for_chinese_to_english/ | false | false | self | 2 | null |
What do you think of Arcee's Virtuoso Large and Coder Large? | 3 | I'm testing them through OpenRouter and they look pretty good. Anyone using them? | 2025-05-18T17:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kpq099/what_do_you_think_of_arcees_virtuoso_large_and/ | Sky_Linx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpq099 | false | null | t3_1kpq099 | /r/LocalLLaMA/comments/1kpq099/what_do_you_think_of_arcees_virtuoso_large_and/ | false | false | self | 3 | null |
What's the best local model for M2 32gb Macbook (Audio/Text) in May 2025? | 0 | I'm looking to process private interviews (10 - 2 hour interviews) I conducted with victims of abuse for a research project. This must be done locally for privacy. Once it's in the LLM I want to see how it compares to human raters as far as assessing common themes. What's the best local model for transcribing and then ... | 2025-05-18T17:43:55 | https://www.reddit.com/r/LocalLLaMA/comments/1kppr0t/whats_the_best_local_model_for_m2_32gb_macbook/ | SinkThink5779 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kppr0t | false | null | t3_1kppr0t | /r/LocalLLaMA/comments/1kppr0t/whats_the_best_local_model_for_m2_32gb_macbook/ | false | false | self | 0 | null |
Resurrecting the Dead starting with Jacque Fresco | 1 | [removed] | 2025-05-18T17:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kppm4f/resurrecting_the_dead_starting_with_jacque_fresco/ | Longjumping-You-7118 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kppm4f | false | null | t3_1kppm4f | /r/LocalLLaMA/comments/1kppm4f/resurrecting_the_dead_starting_with_jacque_fresco/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'cwstZZAcNq2LFiN6j5DFN9V9MIMP9cdJ7CHn883KRl8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/h1XXfghEWBLBrMK5p9xt8ro65OuQEZ0-_vFht_QGMBM.jpg?width=108&crop=smart&auto=webp&s=aed2a4a36510d044c78fcf720fe226c71d056754', 'width': 108}, {'height': 108, 'url': 'h... |
Handwriting OCR (HTR) | 12 | Has anyone experimented with using VLMs like Qwen2.5-VL to OCR handwriting? I have had better results on full pages of handwriting with unpredictable structure (old travel journals with dates in the margins or elsewhere, for instance) using Qwen than with traditional OCR or even more recent methods like TrOCR.
I belie... | 2025-05-18T17:33:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kppihw/handwriting_ocr_htr/ | dzdn1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kppihw | false | null | t3_1kppihw | /r/LocalLLaMA/comments/1kppihw/handwriting_ocr_htr/ | false | false | self | 12 | null |
MSI PC with NVIDIA GB10 Superchip - 6144 CUDA Cores and 128GB LPDDR5X Confirmed | 110 | ASUS, Dell, and Lenovo have released their versions of the Nvidia DGX Spark, and now MSI has as well.
[https://en.gamegpu.com/iron/msi-showed-edgeexpert-ms-c931-s-nvidia-gb10-superchip-confirmed-6144-cuda-yader-i-128-gb-lpddr5x](https://en.gamegpu.com/iron/msi-showed-edgeexpert-ms-c931-s-nvidia-gb10-superchip-confirmed-614... | 2025-05-18T17:28:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kppdhb/msi_pc_with_nvidia_gb10_superchip_6144_cuda_cores/ | shakhizat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kppdhb | false | null | t3_1kppdhb | /r/LocalLLaMA/comments/1kppdhb/msi_pc_with_nvidia_gb10_superchip_6144_cuda_cores/ | false | false | self | 110 | null |
is Qwen 30B-A3B the best model to run locally right now? | 121 | I recently got into running models locally, and just some days ago Qwen 3 got launched.
I saw a lot of posts about Mistral, Deepseek R1, and Llama, but since Qwen 3 got released recently, there isn't much information about it. But reading the benchmarks, it looks like Qwen 3 outperforms all the other models, and also ... | 2025-05-18T17:18:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kpp5op/is_qwen_30ba3b_the_best_model_to_run_locally/ | S4lVin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpp5op | false | null | t3_1kpp5op | /r/LocalLLaMA/comments/1kpp5op/is_qwen_30ba3b_the_best_model_to_run_locally/ | false | false | self | 121 | null |
Better alternatives to searxng for web scraping / search? | 1 | [removed] | 2025-05-18T17:17:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kpp4qt/better_alternatives_to_searxng_for_web_scraping/ | gnulinux-pony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpp4qt | false | null | t3_1kpp4qt | /r/LocalLLaMA/comments/1kpp4qt/better_alternatives_to_searxng_for_web_scraping/ | false | false | self | 1 | null |
Cherry Studio is now my favorite frontend | 86 | I've been looking for an open source LLM frontend desktop app for a while that did everything; rag, web searching, local models, connecting to Gemini and ChatGPT, etc. Jan AI has a lot of potential but the rag is experimental and doesn't really work for me. Anything LLM's rag for some reason has never worked for me, wh... | 2025-05-18T17:11:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kpozhd/cherry_studio_is_now_my_favorite_frontend/ | ConsistentCan4633 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpozhd | false | null | t3_1kpozhd | /r/LocalLLaMA/comments/1kpozhd/cherry_studio_is_now_my_favorite_frontend/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'He5VG53rTBjWbNk1_UdCjYukNuT1UhGRClb6ecDAOwM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/asw6R0ibq6fWJLI0jTiqq5MWe_ZOda7dhXjccGwW8KM.jpg?width=108&crop=smart&auto=webp&s=6c9b9a17a1cba0f4382bf80f06bb3715c6dc44e3', 'width': 108}, {'height': 108, 'url': 'h... |
Hosting a code model | 0 | What is the best coding model right now with large context, mainly i use js, node, php, html, tailwind. I have 2 x rtx 3090, so with reasonable speed and good context size? | 2025-05-18T17:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kpoww0/hosting_a_code_model/ | pyrolols | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpoww0 | false | null | t3_1kpoww0 | /r/LocalLLaMA/comments/1kpoww0/hosting_a_code_model/ | false | false | self | 0 | null |
Memory for ai | 0 | I've been working with AI for a little over a week. I made a conscious decision and decided I was going to dive in. I've done coding in the past so I gravitated in that direction pretty quickly and was able to finish a couple small projects.
Very quickly I started to get a feel for the limitations of how much it can ... | 2025-05-18T17:07:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kpowdx/memory_for_ai/ | michaelkeithduncan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpowdx | false | null | t3_1kpowdx | /r/LocalLLaMA/comments/1kpowdx/memory_for_ai/ | false | false | self | 0 | null |
Multiple concurrent users accessing a local LLM 🦙🦙🦙🦙 | 1 | I did a bit of research with the help of AI and it seems that it should work fine, but I haven't yet tested it and put it to real use. So I'm hoping someone who has can share their experience.
It seems that LLMs (even with 1 GPU and 1 model loaded) can be used with multiple, concurrent users and the performance will... | 2025-05-18T17:04:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kpotei/multiple_concurrent_user_accessing_to_local_llm/ | Prestigious-Use5483 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpotei | false | null | t3_1kpotei | /r/LocalLLaMA/comments/1kpotei/multiple_concurrent_user_accessing_to_local_llm/ | false | false | self | 1 | null |
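The usual mechanism behind this: servers such as llama-server and vLLM accept concurrent HTTP requests and batch them on the GPU, so the client side only needs plain concurrency. A sketch with the request stubbed out (swap `server_call` for a real HTTP POST to your endpoint):

```python
from concurrent.futures import ThreadPoolExecutor

def ask_many(server_call, prompts: list[str], max_workers: int = 8) -> list[str]:
    """Fire prompts at the server concurrently; result order matches input.

    server_call stands in for an HTTP request to an OpenAI-compatible
    endpoint; continuous batching on the server handles the overlap, so
    per-user throughput degrades gracefully rather than serializing.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(server_call, prompts))
```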
Requesting help with my thesis | 0 | TLDR: Are the models I have linked comparable if I were to feed them the same dataset, with the same instructions/prompt and ask them to make a decision? The documents I intend to feed them are very large (probably around 20-30k tokens), which leads be to suspect some level of performance degradation. Is there a way to... | 2025-05-18T17:03:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kposu0/requesting_help_with_my_thesis/ | Nissepelle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kposu0 | false | null | t3_1kposu0 | /r/LocalLLaMA/comments/1kposu0/requesting_help_with_my_thesis/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... |
Contribution to ollama-python: decorators, helper functions and simplified creation tool | 0 | Hi, guys, I posted this on the official ollama Reddit but I decided to post it here too! (This post was written in Portuguese)
I made a commit to ollama-python with the aim of making it easier to create and use custom tools. You can now use simple decorators to register functions:
@ollama_tool – for synchronous funct... | 2025-05-18T16:52:31 | https://github.com/ollama/ollama-python/pull/516 | chavomodder | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kpoj6x | false | null | t3_1kpoj6x | /r/LocalLLaMA/comments/1kpoj6x/contribution_to_ollamapython_decorators_helper/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bPOdod-cBJBOtdjmJXLlGWxEW_dSxDsdqhZEtPFYJhA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0WHEreexf2DJMw78A-6XfudwOUYNJRPPM2H2EZ2R2b8.jpg?width=108&crop=smart&auto=webp&s=c5164269da6e88b84384a3c242adf2062ae64efb', 'width': 108}, {'height': 108, 'url': 'h... | |
(5K t/s prefill 1K t/s gen) High throughput with Qwen3-30B on VLLM and it's smart enough for dataset curation! | 78 | We've just started offering Qwen3-30B-A3B and internally it is being used for dataset filtering and curation. The speeds you can get out of it are extremely impressive running on VLLM and RTX 3090s!
I feel like Qwen3-30B is being overlooked in terms of where it can be really useful. Qwen3-30B might be a small regress... | 2025-05-18T16:50:29 | Arli_AI | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpohfm | false | null | t3_1kpohfm | /r/LocalLLaMA/comments/1kpohfm/5k_ts_prefill_1k_ts_gen_high_throughput_with/ | false | false | 78 | {'enabled': True, 'images': [{'id': 'noBOS2WsirZltIIGJuWLC37Ycea91JT987C1SoEI288', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png?width=108&crop=smart&auto=webp&s=b0b7bd6d5549522912a51fedbeb44fcbee80419d', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/4o2ohg30kk1f1.png... | ||
Plz help me setup my local LLM | 1 | [removed] | 2025-05-18T16:37:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kpo6ha/plz_help_me_setup_my_local_llm/ | InformationRadiant43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpo6ha | false | null | t3_1kpo6ha | /r/LocalLLaMA/comments/1kpo6ha/plz_help_me_setup_my_local_llm/ | false | false | self | 1 | null |
Gemini's Long Context MoE Architecture (Hypothesized) | 0 | Gemini's Long Context MoE Architecture (Hypothesized):
Sharing how I think (hypothesis) Gemini models achieve their 1-10 Million long context window. With details to clues to support the same.
Ensemble of Expert (EoE) or Mesh of Expert (MeoE) with common/shared long (1-10M) context window
Gemini's 1M+ token MoE like... | 2025-05-18T16:34:01 | ditpoo94 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpo3vl | false | null | t3_1kpo3vl | /r/LocalLLaMA/comments/1kpo3vl/geminis_long_context_moe_architecture_hypothesized/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'LW1ET0OFjMp8HjO16TgXDhOL2d6DlwPdKweVdshs5QM', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png?width=108&crop=smart&auto=webp&s=f7f5beab95f3956b6be51b1de3a300fa26ee63c5', 'width': 108}, {'height': 164, 'url': 'https://preview.redd.it/7yyw9esihk1f1.png... | ||
Applying LoRA in exllamav2 | 1 | [removed] | 2025-05-18T16:31:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kpo1sd/applying_lora_in_exllamav2/ | Hotel_West | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpo1sd | false | null | t3_1kpo1sd | /r/LocalLLaMA/comments/1kpo1sd/applying_lora_in_exllamav2/ | false | false | self | 1 | null |