| title | score | selftext | created | url | author |
|---|---|---|---|---|---|
| OSS benchmarks | 4 |  | 2025-08-05T17:04:51 | /r/LocalLLaMA/comments/1mieuov/oss_benchmarks/ | smsp2021 |
| OpenAI playground / live chat for their open model | 0 |  | 2025-08-05T17:04:13 | https://www.gpt-oss.com/ | petuman |
| OpenAI releases gpt-oss 120b and 20b reasoning models | 1 | [deleted] | 2025-08-05T17:03:55 | /r/LocalLLaMA/comments/1mietrf/openai_releases_gptoss_120b_and_20b_reasoning/ | [deleted] |
| Open models by OpenAI | 0 |  | 2025-08-05T17:03:14 | https://openai.com/open-models/ | garg |
| OpenAI OSS release link | 0 |  | 2025-08-05T17:03:10 | https://openai.com/open-models/ | Freonr2 |
| gpt-oss - a openai Collection | 0 |  | 2025-08-05T17:03:00 | https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4 | Dark_Fire_12 |
| GitHub - Website-Crawler: Extract data from websites in LLM ready JSON or CSV format. Crawl or Scrape entire website with Website Crawler | 0 | Extract data from websites with just three lines of code | 2025-08-05T17:02:51 | https://github.com/pc8544/Website-Crawler | PsychologicalTap1541 |
| OpenAI oss | 1 | https://huggingface.co/openai/gpt-oss-120b | 2025-08-05T17:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mierss/openai_oss/ | smsp2021 |
| openai/gpt-oss-120b · Hugging Face | 462 |  | 2025-08-05T17:00:37 | https://huggingface.co/openai/gpt-oss-120b | ShreckAndDonkey123 |
| GitHub - openai/harmony: Renderer for the harmony response format to be used with gpt-oss | 3 |  | 2025-08-05T16:59:24 | https://github.com/openai/harmony | EmberElement |
| GitHub - openai/gpt-oss: gpt-oss-120b and gpt-oss-20b are two open-weight language models by OpenAI | 5 |  | 2025-08-05T16:59:17 | https://github.com/openai/gpt-oss | ShreckAndDonkey123 |
| OpenAI GPT OSS: 21B & 117B models (3.6B & 5.1B active) | 40 | GPT OSS is a hugely anticipated open-weights release by OpenAI, designed for powerful reasoning, agentic tasks, and versatile developer use cases. It comprises two models: a big one with 117B parameters ([gpt-oss-120b](https://hf.co/openai/gpt-oss-120b)), and a smaller one with 21B parameters ([gpt-oss-20b](https://hf... | 2025-08-05T16:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mieksq/openai_gpt_oss_21b_117b_models_36b_51b_active/ | Xhehab_ |
| OpenAI GPT OSS 21B and 117B total parameters, with 3.6B and 5.1B active parameters [Apache 2.0, with a small complementary use policy] | 1 | GPT OSS is a hugely anticipated open-weights release by OpenAI, designed for powerful reasoning, agentic tasks, and versatile developer use cases. It comprises two models: a big one with 117B parameters (gpt-oss-120b), and a smaller one with 21B parameters (gpt-oss-20b). Both are mixture-of-experts (MoEs) and use a 4-b... | 2025-08-05T16:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mieg8s/openai_gpt_oss_21b_and_117b_total_parameters_with/ | Xhehab_ |
| Release v4.55.0: New openai GPT OSS model! · huggingface/transformers | 108 |  | 2025-08-05T16:46:13 | https://github.com/huggingface/transformers/releases/tag/v4.55.0 | lomero |
| OPENAI OPENSOURCE MODEL LEAKED BEFORE RELEASE | 3 | The model set to release today by openai is "gpt-oss-120b".<br>It is currently unreleased but for those of you using other coding tools you can access the model through an openai compatible endpoint on [https://cloud.cerebras.ai/](https://cloud.cerebras.ai/).<br>The model is currently unlisted and hidden, but it is st... | 2025-08-05T16:45:32 | https://www.reddit.com/r/LocalLLaMA/comments/1miebln/openai_opensource_model_leaked_before_release/ | x8ko_dev |
| Does OpenRouter support image generation models? | 3 | Hi. I know this is "local" Llama, but don't know a better place to ask this: Does OpenRouter support image generation models in its REST interface? In a broader sense, is image generation possible in an OpenAI-compatible API? All examples I saw are about images in the input, not the output :/<br>Thanks | 2025-08-05T16:40:21 | https://www.reddit.com/r/LocalLLaMA/comments/1mie6pd/does_openrouter_support_image_generation_models/ | ihatebeinganonymous |
| Anthropic released Claude Opus 4.1 | 3 | Seems nobody cares<br>[Model Card](https://www.anthropic.com/news/claude-opus-4-1)<br>(two benchmark screenshots) | 2025-08-05T16:36:05 | https://www.reddit.com/r/LocalLLaMA/comments/1mie2jy/anthropic_released_claude_opus_41/ | nekofneko |
| Hello!!! | 0 | Hello... if this is not the right place to ask such a question I apologize. I found in my garage my old "toaster": i5 4570k, 16GB RAM and an RX470-4GB. Can I run any local models on this old junk? Thank you in advance. | 2025-08-05T16:35:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mie1ub/hello/ | warmarduk |
| AMD AI MAX+ 395 with NVIDIA? | 5 | For the life of me I can't figure out how to use the 96gb VRAM alongside a discrete NVIDIA GPU. I have tried Vulkan, ROCm, and CUDA llama.cpp runtimes to no avail. Anyone else have this setup? | 2025-08-05T16:34:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mie0s2/amd_ai_max_395_with_nvidia/ | kkzzzz |
| Introduce OpenAI Harmony - a response format designed for GPT-OSS model series | 11 | [https://github.com/openai/harmony](https://github.com/openai/harmony)<br>Now we finally understand the meaning and purpose of the "channel" in the leaked system prompt of ChatGPT | 2025-08-05T16:29:09 | https://www.reddit.com/r/LocalLLaMA/comments/1midvvn/introduce_openai_harmony_a_response_format/ | nekofneko |
| Build and deploy a Travel Deal application using the Groq Cloud, Firecrawl API, and the Hugging Face ecosystem. | 1 | Kimi K2 is a state-of-the-art open-source agentic AI model that is rapidly gaining attention across the tech industry. Developed by Moonshot AI, a fast-growing Chinese company, Kimi K2 delivers performance on par with leading proprietary models like Claude 4 Sonnet, but with the flexibility and accessibility of open-so... | 2025-08-05T16:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1midunx/build_and_deploy_a_travel_deal_application_using/ | kingabzpro |
| GPT-OSS today! | 152 | Keep an eye on these links!<br>https://github.com/openai/harmony<br>https://openai.com/open-models<br>https://gpt-oss.com | 2025-08-05T16:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1midu35/gptoss_today/ | Jawshoeadan |
| new open weight model from OpenAI will have computer use | 5 |  | 2025-08-05T16:24:34 | https://x.com/_aidan_clark_/status/1952760702122557684 | mvp525 |
| GPT-OSS today? | 341 | because this is almost merged [https://github.com/ggml-org/llama.cpp/pull/15091](https://github.com/ggml-org/llama.cpp/pull/15091) | 2025-08-05T16:14:54 | /r/LocalLLaMA/comments/1midi67/gptoss_today/ | jacek2023 |
| How many layers does GLM 4.5 Air have? | 6 | My upgraded RAM kit is arriving later today and I'm excited to try the new GLM 4.5 Air GGUF model that I downloaded from Unsloth 4K_XL (68GB).<br>I am using an RTX 3090 (24GB) GPU and the RAM that I ordered is 96GB (2x48GB) DDR5 6600Mhz CL32.<br>Does anyone know how many GPU layers I should use in my situation and how many... | 2025-08-05T16:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/1middv4/how_many_layers_does_glm_45_air_have/ | Prestigious-Use5483 |
| We're ready | 0 | [https://x.com/sama/status/1952759361417466016](https://x.com/sama/status/1952759361417466016) | 2025-08-05T16:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1midcs5/were_ready/ | Silent-Apple5026 |
| Why sudden demand in older quadro cards? | 0 | What am I missing? I've been eyeing a quadro P6000 for weeks now. The prices hovered between mid $400 to higher $500. Suddenly I see a spike in pricing and nothing available below mid $700 with most priced at 1k. This is a card released 10 years ago. What changed or was released that makes this suddenly appealing? I ... | 2025-08-05T15:30:24 | https://www.reddit.com/r/LocalLLaMA/comments/1miccql/why_sudden_demand_in_older_quadro_cards/ | AggressiveChange420 |
| Best balanced setup for 2-3000$ right now (Alternative to Strix Halo 128GB)? | 6 | Yet another build question - I know, but the last similar thread I've found was from 2024 and I figured things might have changed since then.<br>I'm still waiting for Strix Halo with 128GB of unified memory to actually become available and in the meantime I thought that maybe there are better alternatives in that price poin... | 2025-08-05T15:30:08 | https://www.reddit.com/r/LocalLLaMA/comments/1micci9/best_balanced_setup_for_23000_right_now/ | KontoOficjalneMR |
| Llama.cpp: Add GPT-OSS | 345 |  | 2025-08-05T15:25:57 | https://github.com/ggml-org/llama.cpp/pull/15091 | atgctg |
| Kitten-TTS : Smallest ever TTS model (25MB, 15M params), runs on CPU | 55 | I just checked out Kitten-TTS, an open-sourced TTS model 1/5th the size of Kokoro 82M, and giving decent enough results. The model is optimized for CPU and looks great given its size. Also, the inference is quite fast and is able to generate samples within seconds on a CPU as well.<br>HuggingFace: [https://huggingfa... | 2025-08-05T15:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1mibuho/kittentts_smallest_ever_tts_model_25mb_15m_params/ | Technical-Love-8479 |
| Just got the new RTX 6000 Blackwell 96VRAM GB What is the best LLM to run? | 0 | What is the smartest LLM for daily tasks I can run with it + for code review? | 2025-08-05T15:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1mibpt2/just_got_the_new_rtx_6000_blackwell_96vram_gb/ | Pitiful_Gene_3648 |
| Extending Llama 3s tokenizer vocabulary | 1 | How do I extend Llama 3's tokeniser?<br>It seems that it uses tiktoken for tokenisation and that we can't directly train a tokeniser from tiktoken.<br>How do people normally do this? | 2025-08-05T15:02:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mibmdd/extending_llama_3s_tokenizer_vocabulary/ | Awkward-Quiet5795 |
| Flown under the Radar: Eloisa-Qwen3-8B | 32 | Not my finetune, but thought Eloisa-Qwen3-8B by nbeerbower was a surprisingly competent model. It's one of the only finetunes trained on Qwen3-R1-SLERP-Q3T-8B, which is a slerp merge between the original Qwen3 instruct, and the Deepseek R1 0528 Qwen3 distill, using the superior qwen3 tokenizer, getting the best benefit... | 2025-08-05T14:52:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mibd4n/flown_under_the_radar_eloisaqwen38b/ | lemon07r |
| PyTorch on ROCm v6.5.0rc (gfx1151 / AMD Strix Halo / Ryzen AI Max+ 395) Detecting Only 15.49GB VRAM Despite 96GB Usable | 7 | Hey Guys,<br>I'm running into an issue where PyTorch built for ROCm (v6.5.0rc from [scottt/rocm-TheRock](https://github.com/scottt/rocm-TheRock/releases/tag/v6.5.0rc-pytorch)) on an AMD Strix Halo machine (gfx1151) is only detecting **15.49 GB** of V... | 2025-08-05T14:46:51 | https://www.reddit.com/r/LocalLLaMA/comments/1mib7l9/pytorch_on_rocm_v650rc_gfx1151_amd_strix_halo/ | ashwin3005 |
| II-Search-4B: model tuned for reasoning with search tools | 86 | Most search models need the cloud.<br>II-Search-4B doesn’t.<br>4B model tuned for reasoning with search tools, built for local use.<br>Performance of models 10x its size.<br>Search that is small, smart, and open.<br>II-Search-4B: https://huggingface.co/Intelligent-Internet/II-Search-4B<br>II-Search-CIR-4B: https://huggingface.co/I... | 2025-08-05T14:33:02 | /r/LocalLLaMA/comments/1miaugk/iisearch4b_model_tuned_for_reasoning_with_search/ | ResearchCrafty1804 |
| Native audio input LLM | 6 | Are there any decent LLMs that I can run locally to do STT that requires some wider context understanding than a typical STT model?<br>For example I have some audio recordings of conversations that contain multiple speakers and use some names and terminology that whisper etc. would struggle to understand. I have tested u... | 2025-08-05T14:19:32 | https://www.reddit.com/r/LocalLLaMA/comments/1miahhw/native_audio_input_llm/ | reddysteady |
| Kaggle is considering further opening its evaluation platform to allow the community to add their own game environments. | 19 |  | 2025-08-05T14:18:17 | /r/LocalLLaMA/comments/1miagcn/kaggle_is_considering_further_opening_its/ | Loud_Possibility_148 |
| Kaggle is considering further opening its evaluation platform to allow the community to add their own game environments. | 1 |  | 2025-08-05T14:15:04 | /r/LocalLLaMA/comments/1miadgr/kaggle_is_considering_further_opening_its/ | Loud_Possibility_148 |
| Kaggle is considering opening up its simulation system further to allow the community to add their own games. | 4 |  | 2025-08-05T14:09:09 | /r/LocalLLaMA/comments/1mia81s/kaggle_is_considering_opening_up_its_simulation/ | Loud_Possibility_148 |
| I built a free tool to streamline the local LLM workflow (HW check, model management, multi-model chat). Seeking feedback from serious users. | 0 | Hey r/LocalLLaMA,<br>Full disclosure, this is my second attempt at posting this. My first try this morning wasn't formatted well, as I've never really posted on Reddit before, so my apologies for that. I'm hoping to get it right this time.<br>I've been deep in the local AI scene and got frustrated with the constant frictio... | 2025-08-05T14:07:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mia63x/i_built_a_free_tool_to_streamline_the_local_llm/ | OkLoad5267 |
| An opensource platform to learn python and generative AI | 5 | I know many of you guys are prodigies, but yeah! There are beginners, right? You can't learn generative AI or AI if you don't write code yourself; that is final and that is reality. The way to do it, of course: copy the code from an LLM (like ChatGPT) and write it yourself, write it, write it, and write it, that's how you... | 2025-08-05T13:51:43 | https://www.reddit.com/r/LocalLLaMA/comments/1mi9rz1/an_opensource_platform_to_learn_python_and/ | Nandakishor_ml |
| Advice on running Qwen3-Coder-30B-A3B locally | 7 | I'm trying to run the Qwen3-Coder-30B-A3B-Instruct model on my own hardware and I'm looking for recommendations from anyone who's had success with similar setups.<br>**The model**<br>* Qwen/Qwen3-Coder-30B-A3B-Instruct<br>* 30.5B total parameters with 3.3B active MOE<br>* Available on Hugging Face<br>* I'm fine using quantized form... | 2025-08-05T13:40:25 | https://www.reddit.com/r/LocalLLaMA/comments/1mi9i1g/advice_on_running_qwen3coder30ba3b_locally/ | medi6 |
| git commit message generator | 0 | i have been using it with Claude Code for one month, really boosts my productivity. share here.<br>`pip install gitme-cli`<br>[https://pypi.org/project/gitme-cli/](https://pypi.org/project/gitme-cli/) | 2025-08-05T13:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mi9h1y/git_commit_message_generator/ | MinimalisticStoic |
| GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning | 6 | [GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning](https://arxiv.org/abs/2507.19457)<br>and my poorman's implementation<br>[https://github.com/wangjing0/gepa-optimizer.git](https://github.com/wangjing0/gepa-optimizer.git) | 2025-08-05T13:36:06 | https://www.reddit.com/r/LocalLLaMA/comments/1mi9eay/gepa_reflective_prompt_evolution_can_outperform/ | MinimalisticStoic |
| Google's LangExtract work with other major LLM providers | 4 | Google's LangExtract work with other major LLM providers<br>[https://github.com/wangjing0/langextract](https://github.com/wangjing0/langextract) | 2025-08-05T13:33:32 | https://www.reddit.com/r/LocalLLaMA/comments/1mi9c1p/googles_langextract_work_with_other_major_llm/ | MinimalisticStoic |
| Qwen-Image: Crafting with Native Text Rendering | 7 |  | 2025-08-05T13:17:26 | https://qwenlm.github.io/blog/qwen-image/ | ChiliPepperHott |
| Qwen3 Coder vs. Kimi K2 vs. Sonnet 4 Coding Comparison (Tested on Qwen CLI) | 143 | Alibaba released Qwen3‑Coder (480B → 35B active) alongside Qwen Code CLI, a complete fork of Gemini CLI for agentic coding workflows specifically adapted for Qwen3 Coder. I tested it head-to-head with Kimi K2 and Claude Sonnet 4 in practical coding tasks using the same CLI via **OpenRouter** to keep things consistent f... | 2025-08-05T13:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1mi8lbl/qwen3_coder_vs_kimi_k2_vs_sonnet_4_coding/ | shricodev |
| m1 max vs m1 ultra at 64 gb of ram | 0 | Would I see much benefit to running m1 ultra over m1 max at 64GB of RAM? I know ultra can go up to 128, but for this comparison I am just looking at 64GB models. | 2025-08-05T12:53:04 | https://www.reddit.com/r/LocalLLaMA/comments/1mi8dmu/m1_max_vs_m1_ultra_at_64_gb_of_ram/ | Wild_Warning3716 |
| MultiNRC: A Challenging and Native Multilingual Reasoning Evaluation Benchmark for LLMs | 5 | Abstract:<br>> Although recent Large Language Models (LLMs) have shown rapid improvement on reasoning benchmarks in English, the evaluation of such LLMs' multilingual reasoning capability across diverse languages and cultural contexts remains limited. Existing multilingual reasoning benchmarks are typically constructed b... | 2025-08-05T12:51:48 | https://arxiv.org/abs/2507.17476 | Balance- |
| Try Chatgpt agent and GLM4.5 on slides making | 10 | I try to make a poster for Yellowstone park using both Chatgpt agent (plus mode) and glm4.5 (on z.ai). The prompt is like: "Make me a four-season poster of Yellowstone Park, four sheets, to attract everyone to visit. A sophisticated design style"<br>Chatgpt agent used the browser for 10 minutes and returned the result<br>https... | 2025-08-05T12:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/1mi8bkb/try_chatgpt_agent_and_glm45_on_slides_making/ | Apart-River475 |
| Has anybody used Letta (former MemGPT) successfully with local models? | 2 | Hello, lately I have been experimenting for the first time with the various tool calling / MCP services and I have been trying to use ollama or vllm as the model providers.<br>Unfortunately I'm running into lots of issues because from what I understand the number of models that support tool calling and are usable self h... | 2025-08-05T12:47:31 | https://www.reddit.com/r/LocalLLaMA/comments/1mi8917/has_anybody_used_letta_former_memgpt_succesfully/ | LAWN_Red |
| gpustack llama-box on raspberry pi 5 cluster | 2 | Hi,<br>I've been looking for distributed inference that easily integrates with non-Apple devices and runs on CPUs for a while now. Recently, I came across gpustack, and llama-box as one of its backends.<br>On their website, they [state](https://docs.gpustack.ai/0.6/installation/cpu/online-installation/#__tabbed_2_2):<br>> In ... | 2025-08-05T12:34:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mi7ymi/gpustack_llamabox_on_raspberry_pi_5_cluster/ | Tartarus116 |
| Best quantisation method, libraries and tools? | 2 | Hi, I wanted to experiment around with quantisation and get to know more about it in depth. Hence I'll be using the Gemma 3 1B QAT unquantised model. Which one should I use: there is an Int4 version and a Q4_0 version. Which libraries or tools should I use to quantise? Just need guidance on it. | 2025-08-05T12:26:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mi7sg0/best_quantisation_method_libraries_and_tools/ | R46H4V |
| DeepSeek R1 vs. V3 - Going Head-To-Head In AI Roleplay | 15 | When it comes to AI roleplay, people have had both good and bad experiences with DeepSeek R1 and DeepSeek V3. We wanted to examine how DeepSeek R1 vs. V3 perform in roleplay when they go head-to-head against each other under different scenarios.<br>This little deep-dive will help you figure out which model will give you ... | 2025-08-05T12:22:52 | https://rpwithai.com/deepseek-r1-vs-v3-for-roleplay/ | RPWithAI |
| Looking to buy a new machine - Mac vs NVIDIA | 0 | So I am looking to buy a new machine for running AI locally, preferably under 1200USD.<br>My end goal is to use AI to assist me with everything in life including work but I would like it to run locally.<br>I have traditionally been a Windows-Android kinda guy but I am leaning towards buying Apple Mac Studio and run on my W... | 2025-08-05T12:18:23 | https://www.reddit.com/r/LocalLLaMA/comments/1mi7lwi/looking_to_buy_a_new_machine_mac_vs_nvidia/ | KindlyAnything1996 |
| New llama.cpp options make MoE offloading trivial: `--n-cpu-moe` | 292 | No more need for super-complex regular expressions in the -ot option! Just do `--cpu-moe` or `--n-cpu-moe #` and reduce the number until the model no longer fits on the GPU. | 2025-08-05T12:04:17 | https://github.com/ggml-org/llama.cpp/pull/15077 | Pristine-Woodpecker |
| SmallThinker trained on DeepSeek? | 0 | "Unlike traditional approaches that mainly compress existing models built for clouds, we architect SmallThinker from the ground up to thrive within these limitations."<br>I like the idea of a simple model for on-device usage. It seems DeepSeek was probably used for a lot of the training, right? | 2025-08-05T12:04:17 | https://www.reddit.com/gallery/1mi7bec | TheWingsOfWar |
| VS Code plugins that can handle XML tool calling? | 1 | I'm dabbling with Qwen Coder and Roo, but it looks like the model was trained to do tool calls in XML instead of the more common JSON. Would Cline do better there? It doesn't seem to work as well with local models. | 2025-08-05T12:00:19 | https://www.reddit.com/r/LocalLLaMA/comments/1mi789a/vs_code_plugins_that_can_handle_xml_tool_calling/ | sciencewarrior |
| What are the best LLMs to transcribe Japanese audio to English? | 4 | Looking to transcribe Japanese vocals in a track - wondering what the best LLM is to transcribe it to English?<br>track is this: https://www.youtube.com/watch?v=ZGWgRa95xv8<br>I also have the audio file. | 2025-08-05T11:27:57 | https://www.reddit.com/r/LocalLLaMA/comments/1mi6lek/what_are_the_best_llms_to_transcribe_japanese/ | nomiimon |
| Actual replacements for perplexity, notebookLm | 5 | So I'm sure you all have seen a million posts of yet another ollama vibe coded frontend, yet another I made xyz but free!<br>But I'm not looking for a fly by night tool. What alternative open source projects are actually really good, well maintained and active? I'm thinking about replacements for perplexity that replicat... | 2025-08-05T11:14:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mi6c0x/actual_replacements_for_perplexity_notebooklm/ | zelkovamoon |
| Fast and local open source TTS engine. 20+ languages, multiple voices. Model size 25MB to 65MB. Can train on new voices. | 164 | Fast and local TTS engine. 20+ languages, multiple voices. Model size 25MB to 65MB (based on the language). Can train on new voices.<br>Github Link: [https://github.com/OHF-Voice/piper1-gpl](https://github.com/OHF-Voice/piper1-gpl) | 2025-08-05T11:13:52 | https://v.redd.it/4f9mf37ap6hf1 | phone_radio_tv |
| The Chess Arena pairings for today's Kaggle exhibition are out, commentary by grandmasters like Hikaru Nakamura! | 109 |  | 2025-08-05T11:13:33 | /r/LocalLLaMA/comments/1mi6bkf/the_chess_arena_pairings_for_todays_kaggle/ | Final_Wheel_7486 |
| Ollama RESTful API and code interpreter | 0 | I have a running Ollama server on a remote linux machine. Is it possible to include in an Ollama REST API request a directive that allows the use of code interpreter? If yes, can someone point me to any documentation? I tried looking at https://ollama.qubitpi.org/api/ without success. I found instructions for tool use,... | 2025-08-05T10:51:09 | https://www.reddit.com/r/LocalLLaMA/comments/1mi5wk1/ollama_restful_api_and_code_interpreter/ | ConiglioPipo |
| Mi50 32gb (Working config, weirdness and performance) | 15 | Thought I'd share some knowledge after a week with an Mi50 32gb bought from Ebay. Was originally supposed to be a response but hyper-focus took over and this is more suited as a post.<br>It arrived new-looking. Anti-static bag, not a speck of dust and plastic peel still on the AMD Instinct branded shroud. Mine came with an... | 2025-08-05T10:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1mi5s6w/mi50_32gb_working_config_weirdness_and_performance/ | Danternas |
| [Feedback Wanted] Introducing Configen: 100% Free AI Configuration Agent for PCs & Clouds | 1 | [removed] | 2025-08-05T10:32:37 | https://www.reddit.com/r/LocalLLaMA/comments/1mi5lbh/feedback_wanted_introducing_configen_100_free_ai/ | krypta89 |
| AI Agent Human Feedback within Tool Use | 2 | Hey all,<br>I'm hoping someone can help me.<br>Currently, I'm creating an agentic workflow.<br>My agent has a tool called `interact_with_customer`.<br>With this tool, the agent should be able to communicate with the customer.<br>That means the method should send a message to the frontend and also wait until a response is re... | 2025-08-05T10:21:46 | https://www.reddit.com/r/LocalLLaMA/comments/1mi5er2/ai_agent_human_feedback_within_tool_use/ | Zealousideal_Sir_328 |
| From Large to Super-Tiny: End-to-End Optimization for Cost-Efficient LLMs | 2 |  | 2025-08-05T09:40:37 | https://arxiv.org/abs/2504.13471 | juanviera23 |
| why is "everyone" here cynics? | 0 | I do not mean any offense, I don't mean to say that you are wrong about it, I am just really curious!<br>This subreddit seems to be the most technical of all the subreddits I spend time in, and it is my understanding that people here generally have a very cynical way of looking at the world, or at least the tech world.<br>once again, I am n... | 2025-08-05T09:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mi4emi/why_is_everyone_here_cynics/ | Trick_Ad_4388 |
| Kitten TTS Web Demo | 58 | I made a quick web demo of the new [Kitten TTS](https://www.reddit.com/r/LocalLLaMA/comments/1mhyzp7/kitten_tts_sota_supertiny_tts_model_less_than_25/). Loads the model up using transformers.js in the browser, running fully locally client-side: https://clowerweb.github.io/kitten-tts-web-demo/<br>Repo: https://github.com/... | 2025-08-05T09:04:22 | https://www.reddit.com/r/LocalLLaMA/comments/1mi45h1/kitten_tts_web_demo/ | CommunityTough1 |
| 🔥GPT-5 is coming... one day, according to Altman's cosmic calendar | 0 |  | 2025-08-05T08:57:59 | /r/LocalLLaMA/comments/1mi41q9/gpt5_is_coming_one_day_according_to_altmans/ | Ok_Ninja7526 |
| 🔥GPT-5 is coming... eventually, according to Altman's cosmic calendar | 1 | > we have a ton of stuff to launch over the next couple of months--new models, products, features, and more. please bear with us through some probable hiccups and capacity crunches. although it may be slightly choppy, we think you'll really love what we... | 2025-08-05T08:54:59 | https://www.reddit.com/r/LocalLLaMA/comments/1mi405u/gpt5_is_coming_eventually_according_to_altmans/ | Ok_Ninja7526 |
| The translation capability of GLM4.5 for Chinese slang. | 19 | I find that GLM4.5 can successfully understand and translate slang in Chinese. Take an example from the Seed-X-Challenge benchmark: the source text is "离谱她妈给离谱开门 离谱到家了", and this sentence needs to be translated in a way that captures its extreme absurdity, rather than being translated literally.<br>The translation result ... | 2025-08-05T08:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/1mi3igq/the_translation_capability_of_glm45_for_chinese/ | OddUnderstanding1633 |
| Can I fine-tune GLM-4.5 Air via MLX? | 0 | Since the release of GLM 4.5, I've seen many contributors working hard to support it in llama.cpp.<br>However, as far as I remember, a series of quant models were registered on the MLX community almost on day zero in GLM's case.<br>1. Can the safetensors of a usual MOE model be easily converted to quants using MLX? Or did Apple provid... | 2025-08-05T08:20:14 | https://www.reddit.com/r/LocalLLaMA/comments/1mi3has/can_i_finetune_glm45_air_via_mlx/ | Desperate-Sir-5088 |
| Hugging Face \| Dragonfly | 0 |  | 2025-08-05T08:18:52 | https://d7y.io/docs/next/operations/integrations/hugging-face/ | MoneyPowerNexis |
| OCR Recognition and ASCII Generation of Medical Prescription | 5 | I was having a very tough time getting OCR of medical prescriptions to work. Medical prescriptions have so many different formats. Conversion to JSON directly causes issues. So to preserve the structure and the semantic meaning I thought to convert it to ASCII.<br>[https://limewire.c... | 2025-08-05T08:08:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mi3b19/ocr_recognition_and_ascii_generation_of_medical/ | Rukelele_Dixit21 |
| Raw text file not starting Lora training | 0 |  | 2025-08-05T07:18:09 | /r/LocalLLaMA/comments/1mi2izf/raw_text_file_not_starting_lora_training/ | vulgar1171 |
| How does someone with programming exp get started with LLMs? | 2 | For a bit of context, I'm a software developer with 4 years of exp, in dotnet, and I've worked with python as well. My goal is to hit the ground running by creating projects using LLMs. I feel like the way to learn is by doing the thing, but I'm a bit lost on getting started.<br>For the most part there seems to b... | 2025-08-05T07:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/1mi2ebo/how_does_someone_with_programming_exp_get_started/ | parleG_OP |
| How does someone with programming exp get started in learning LLM? | 1 | [removed] | 2025-08-05T07:07:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mi2d2g/how_does_someone_with_programming_exp_get_started/ | parleG_OP |
| Anyone here figured out how to reliably extract formulas from PDFs? | 2 | Hey folks!<br>I've been testing a few document parsers to extract formulas from PDFs (like scientific papers, math-heavy docs, etc). Tried **Docling**, but the results are not great so far. Especially struggling with keeping the formula structure intact.<br>Curious if anyone here has found a good method or tool that actua... | 2025-08-05T06:53:18 | https://www.reddit.com/r/LocalLLaMA/comments/1mi24mj/anyone_here_figured_out_how_to_reliably_extract/ | duke_x91 |
| Thoughts on Georg Zoeller | 0 | Quite critical of LLMs… | 2025-08-05T06:51:20 | https://www.reddit.com/r/LocalLLaMA/comments/1mi23im/thoughts_on_georg_zoeller/ | Salt_Armadillo8884 |
| Confused About TPS Needs for On-Device LLM: 5 vs 30 TPS for Voice? | 1 | I'm working on a robot that uses a server-based LLM for voice conversations, but I'm planning to add an on-device LLM as a fallback when there's no internet connection.<br>Here are the current specs:<br>* **CPU**: Cortex-A53 x 4 @ 1.8GHz<br>* **RAM**: 8GB LPDDR4<br>* **OS**: Android (AOSP-based)<br>I've asked models like ChatGPT a... | 2025-08-05T06:39:27 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1wwr/confused_about_tps_needs_for_ondevice_llm_5_vs_30/ | Public_Paint8683 |
Qwen-image now supported in ComfyUI | 76 | At last after wait of few hours, ComfyUI now has support for Qwen-Image. Its from their [git repo](https://github.com/comfyanonymous/ComfyUI/pull/9179).
| 2025-08-05T06:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1vov/qwenimage_now_supported_in_comfyui/ | Lopsided_Dot_4557 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1vov | false | null | t3_1mi1vov | /r/LocalLLaMA/comments/1mi1vov/qwenimage_now_supported_in_comfyui/ | false | false | self | 76 | {'enabled': False, 'images': [{'id': '__JsZtcXYjOJBdf-yj1H7jt4OhvbL9fq0fnYqZ8VZTE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/__JsZtcXYjOJBdf-yj1H7jt4OhvbL9fq0fnYqZ8VZTE.png?width=108&crop=smart&auto=webp&s=8b771b90ef6b66d110ba5fc17e7800c3c4c7b89d', 'width': 108}, {'height': 108, 'url': 'h... |
Is llama.cpp sycl backend really worth it? | 5 | I have an old laptop i5 1145g7 11gen 2x8gb ddr4 ram iris xe igpu 8bg shared vram. I recently came across [intel article to run llms utilizing igpu in 11,12,13 gen](https://www.intel.com/content/www/us/en/developer/articles/technical/run-llms-on-gpus-using-llama-cpp.html). I have been trying to run [this](https://huggin... | 2025-08-05T06:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1tns/is_llamacpp_sycl_backend_really_worth_it/ | Sweet_Eggplant4659 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1tns | false | null | t3_1mi1tns | /r/LocalLLaMA/comments/1mi1tns/is_llamacpp_sycl_backend_really_worth_it/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'dxXECHarBSKz1hyLrBwlOcCTI-CwnnfIQLajz9bmY9Y', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/dxXECHarBSKz1hyLrBwlOcCTI-CwnnfIQLajz9bmY9Y.png?width=108&crop=smart&auto=webp&s=89ff655453a7c1ebf52a06d95507c865ac6aab18', 'width': 108}, {'height': 216, 'url': '... |
Exaone 4.0-1.2B is creating pretty wild fake-language stories when asked to write in any language other than English or Korean. | 9 | Prompts:
write a story in german
write a story in french
write a story in italian
write a story in japanese | 2025-08-05T06:13:12 | https://www.reddit.com/gallery/1mi1hl9 | cpldcpu | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mi1hl9 | false | null | t3_1mi1hl9 | /r/LocalLLaMA/comments/1mi1hl9/exaone_4012b_is_creating_pretty_wild_fake/ | false | false | 9 | null | |
Moving Beyond Prompt Engineering: Why Context Engineering is the Real Skill to Master | 0 | Hey everyone,
I've been developing with LLMs for some time, and I wanted to share a key insight that has fundamentally changed my approach to building reliable AI systems.
Initially, like many in the field, my focus was almost entirely on **prompt engineering**. I'd spend countless hours refining phrasing and instruc... | 2025-08-05T06:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1fp7/moving_beyond_prompt_engineering_why_context/ | YakoStarwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1fp7 | false | null | t3_1mi1fp7 | /r/LocalLLaMA/comments/1mi1fp7/moving_beyond_prompt_engineering_why_context/ | false | false | self | 0 | null |
DFloat11 Quantization for Qwen-Image Drops – Run It on 17GB VRAM with CPU Offloading! | 156 | 2025-08-05T06:09:21 | XMasterrrr | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mi1fdc | false | null | t3_1mi1fdc | /r/LocalLLaMA/comments/1mi1fdc/dfloat11_quantization_for_qwenimage_drops_run_it/ | false | false | default | 156 | {'enabled': True, 'images': [{'id': 'sv779zmy65hf1', 'resolutions': [{'height': 90, 'url': 'https://preview.redd.it/sv779zmy65hf1.png?width=108&crop=smart&auto=webp&s=4ebbb232e4b60bf6fe7db6c4eb2170131c369e86', 'width': 108}, {'height': 180, 'url': 'https://preview.redd.it/sv779zmy65hf1.png?width=216&crop=smart&auto=web...
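For context, the generic CPU-offloading mechanism this relies on, as a hedged diffusers sketch; Qwen-Image support in your diffusers build is assumed, and the DFloat11 checkpoint has its own loading path (see its model card), so this shows only the standard offload API:

```python
# Sketch: diffusers CPU offloading with Qwen-Image. Idle submodules are
# paged to system RAM, so peak VRAM drops well below the full-model
# footprint at the cost of PCIe transfer time. Prompt/steps illustrative.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # trades VRAM for host<->device transfers

image = pipe("a watercolor fox reading a book", num_inference_steps=30).images[0]
image.save("fox.png")
```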
Exaone 4.0-1.2B is creating pretty wild fake-language stories when asked to write in any language other than English or Korean | 1 | Prompts:
`Write a story in german`
`Write a story in italian`
`Write a story in french`
`Write a story in japanese` | 2025-08-05T06:08:42 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1ez0/exaone_4012b_is_creating_pretty_wild_fake/ | cpldcpu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1ez0 | false | null | t3_1mi1ez0 | /r/LocalLLaMA/comments/1mi1ez0/exaone_4012b_is_creating_pretty_wild_fake/ | false | false | self | 1 | null |
I made a free tool to check if your rig is ready for Ollama and other local LLMs | 1 | Hey folks,
I got tired of guessing whether my machine could handle various models, so I built a small GUI app with some features I wish I had from day one.
Here’s what it does:
Scans your GPU, VRAM, RAM, CPU (AVX2), disk space — all in one screen.
Tells you which models will run perfectly, which are on the edge, an... | 2025-08-05T06:08:28 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1eu6/i_made_a_free_tool_to_check_if_your_rig_is_ready/ | OkLoad5267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1eu6 | false | null | t3_1mi1eu6 | /r/LocalLLaMA/comments/1mi1eu6/i_made_a_free_tool_to_check_if_your_rig_is_ready/ | false | false | self | 1 | null |
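For readers who want the gist without the GUI, a minimal sketch of the checks such a tool performs; the 20% overhead rule and the example model size are assumptions:

```python
# Probe RAM, VRAM, and AVX2, then compare against a model's rough
# memory need (GGUF file size + ~20% is a common rule of thumb).
import psutil
import torch

ram_gb = psutil.virtual_memory().total / 1e9
vram_gb = (torch.cuda.get_device_properties(0).total_memory / 1e9
           if torch.cuda.is_available() else 0.0)
with open("/proc/cpuinfo") as f:          # Linux-only AVX2 probe
    avx2 = "avx2" in f.read()

model_gb = 4.6                            # e.g. ~8B at Q4_K_M, illustrative
need_gb = model_gb * 1.2
tier = "GPU" if vram_gb >= need_gb else "CPU" if ram_gb >= need_gb else "none"
print(f"RAM {ram_gb:.0f}GB VRAM {vram_gb:.0f}GB AVX2={avx2} -> runs on: {tier}")
```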
What should I pick ? 5090 or Asus GX10 or Halo Strix MiniPC at similar prices | 0 | Hi all,
I'm a frequent reader but too poor to actually invest.
With all the new models and upcoming hardware releases, I think it's time to start planning.
My use case is quite straightforward: just code-agent work and design-doc (md/mermaid) generation. With the rise of AI tools I'm actually spending more and more time... | 2025-08-05T06:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1mi1bic/what_should_i_pick_5090_or_asus_gx10_or_halo/ | quyetnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi1bic | false | null | t3_1mi1bic | /r/LocalLLaMA/comments/1mi1bic/what_should_i_pick_5090_or_asus_gx10_or_halo/ | false | false | self | 0 | null
MTP with GLM 4.5 Air on Mac possible? | 1 | I see in the release notes that the GLM model supports Multi-Token Prediction (MTP), but I'm unsure how to actually make use of it. I'm currently using the 4-bit quant (MLX) on Mac through LM Studio, and it supports speculative decoding with a draft model, but that is different from what GLM's MTP does, right?
I also see disc... | 2025-08-05T05:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1mi0y3u/mtp_with_glm_45_air_on_mac_possible/ | TangerineRough4628 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi0y3u | false | null | t3_1mi0y3u | /r/LocalLLaMA/comments/1mi0y3u/mtp_with_glm_45_air_on_mac_possible/ | false | false | self | 1 | null |
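To clarify the distinction the post is circling: draft-model speculative decoding (what LM Studio exposes) and GLM's MTP both draft-and-verify, but MTP drafts from an extra head of the big model itself, so it needs engine-level support rather than a second model. A conceptual sketch of the generic draft-verify loop, with hypothetical `draft_model`/`big_model` stand-ins:

```python
# Conceptual sketch of speculative decoding: a cheap draft model
# proposes k tokens, the big model verifies them in one pass, and the
# longest agreeing prefix is kept. The model objects are hypothetical.
def speculative_step(big_model, draft_model, context, k=4):
    proposed, ctx = [], list(context)
    for _ in range(k):                          # draft k cheap tokens
        tok = draft_model.greedy_next(ctx)      # hypothetical API
        proposed.append(tok)
        ctx.append(tok)
    # One big-model pass scores all proposed positions plus one extra.
    verified = big_model.next_tokens(context, k + 1)  # hypothetical API
    accepted = []
    for p, v in zip(proposed, verified):
        if p != v:
            accepted.append(v)                  # big model's correction
            break
        accepted.append(p)
    else:
        accepted.append(verified[k])            # free bonus token
    return accepted                             # >=1 token per big pass
```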
How do I convert a .xml file to a .json file to train my LLM? | 0 | If there is a dataset, or pages from Wikipedia, in .xml format, what do you use to convert it into an Alpaca-style .json format? | 2025-08-05T05:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/1mi0wkg/how_do_i_convert_a_xml_file_to_a_json_file_to/ | vulgar1171 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi0wkg | false | null | t3_1mi0wkg | /r/LocalLLaMA/comments/1mi0wkg/how_do_i_convert_a_xml_file_to_a_json_file_to/ | false | false | self | 0 | null
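One common answer, as a sketch: stream the XML with the standard library and emit Alpaca-style records. The `page`/`title`/`text` tag names follow Wikipedia's dump schema; the instruction template and filenames are assumptions to adapt:

```python
# Stream a Wikipedia-style XML dump into Alpaca-format JSON records.
import json
import xml.etree.ElementTree as ET

records = []
for _, elem in ET.iterparse("dump.xml"):        # placeholder filename
    if elem.tag.endswith("page"):
        title = elem.findtext(".//{*}title", default="")
        text = elem.findtext(".//{*}text", default="")
        records.append({
            "instruction": f"Write an encyclopedia article about {title}.",
            "input": "",
            "output": text,
        })
        elem.clear()                            # keep memory flat on big dumps

with open("alpaca.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```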
What is the best current model for roleplay if I have 8GB VRAM (RX 6600 XT), 16GB RAM, and a Ryzen 3600? | 0 | I know about the rule of researching beforehand, but I didn't find any satisfactory answers. Right now, after setting up koboldcpp and SillyTavern, I use Dolphin 2.6 Mistral 7B, which was recommended to me by DeepSeek; I installed everything with its help, and because of that I didn't at first think about searchi... | 2025-08-05T05:31:24 | https://www.reddit.com/r/LocalLLaMA/comments/1mi0sr3/what_is_the_best_current_model_for_roleplay_if_i/ | Mental_Budget_5085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi0sr3 | false | null | t3_1mi0sr3 | /r/LocalLLaMA/comments/1mi0sr3/what_is_the_best_current_model_for_roleplay_if_i/ | false | false | self | 0 | null
generated using Qwen | 187 | 2025-08-05T05:20:23 | https://www.reddit.com/gallery/1mi0luy | Vision--SuperAI | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1mi0luy | false | null | t3_1mi0luy | /r/LocalLLaMA/comments/1mi0luy/generated_using_qwen/ | false | false | 187 | null | ||
Introducing VAC (ViktorADAM Core) - An Open-Source AGI Reasoning Prototype | 1 | [removed] | 2025-08-05T05:16:45 | https://www.reddit.com/r/LocalLLaMA/comments/1mi0jh0/introducing_vac_viktoradam_core_an_opensource_agi/ | VAC-AGI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mi0jh0 | false | null | t3_1mi0jh0 | /r/LocalLLaMA/comments/1mi0jh0/introducing_vac_viktoradam_core_an_opensource_agi/ | false | false | self | 1 | null |
Anthropic's CEO dismisses open source as 'red herring' - but his reasoning seems to miss the point entirely! | 394 | From Dario Amodei's recent interview on Big Technology Podcast discussing open source AI models. Thoughts on this reasoning?
Source: [https://x.com/jikkujose/status/1952588432280051930](https://x.com/jikkujose/status/1952588432280051930) | 2025-08-05T05:06:08 | MrJiks | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mi0co2 | false | null | t3_1mi0co2 | /r/LocalLLaMA/comments/1mi0co2/anthropics_ceo_dismisses_open_source_as_red/ | false | false | default | 394 | {'enabled': True, 'images': [{'id': '9z1vbpnsu4hf1', 'resolutions': [{'height': 127, 'url': 'https://preview.redd.it/9z1vbpnsu4hf1.png?width=108&crop=smart&auto=webp&s=e84a5f122eeace4cb08d0cfe862ccf07e8d62424', 'width': 108}, {'height': 254, 'url': 'https://preview.redd.it/9z1vbpnsu4hf1.png?width=216&crop=smart&auto=we... | |
Finding a local model for text table QA | 0 | Task example, with the question being "What was net sales by reportable segment in Europe in 2016?" and a table in a text format like the following:
| | 2018 | Change | 2017 | Change | 2016 |
|---|---|---|---|---|---|
| Net Sales by Reportable Segment: | | | | | |
Americ... | 2025-08-05T03:59:30 | https://www.reddit.com/r/LocalLLaMA/comments/1mhz4jl/finding_a_local_model_for_text_table_qa/ | Sea_Pomegranate_7803 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhz4jl | false | null | t3_1mhz4jl | /r/LocalLLaMA/comments/1mhz4jl/finding_a_local_model_for_text_table_qa/ | false | false | self | 0 | null |
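One approach that tends to help regardless of model choice: re-serialize the flattened table as clean Markdown before prompting. A hedged sketch with llama-cpp-python; the model path and cell values are placeholders:

```python
# Rebuild the table as Markdown, then ask the question against it.
from llama_cpp import Llama

table = (
    "| Segment | 2018 | 2017 | 2016 |\n"
    "|---|---|---|---|\n"
    "| Americas | ... | ... | ... |\n"
    "| Europe | ... | ... | ... |\n"
)
question = "What was net sales by reportable segment in Europe in 2016?"
prompt = f"Answer using only the table below.\n\n{table}\nQuestion: {question}\nAnswer:"

llm = Llama(model_path="model.gguf", n_ctx=4096)  # placeholder path
print(llm(prompt, max_tokens=64)["choices"][0]["text"])
```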
[Student Unsloth Help] Save to GGUF Taking Forever with Gemma 3 4B Vision + Unsloth on WSL (Single 4090) | 0 | Hi everyone, I'm a student working on a project involving fine-tuning the **Gemma 3 4B Vision** model using **Unsloth** on a local WSL setup with a single NVIDIA RTX 4090. I'm running into a major issue where the `save_pretrained_gguf` function is taking **over 480 minutes** with no output, and I could really use some ... | 2025-08-05T03:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1mhz2sc/student_unsloth_help_save_to_gguf_taking_forever/ | LeastExperience1579 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhz2sc | false | null | t3_1mhz2sc | /r/LocalLLaMA/comments/1mhz2sc/student_unsloth_help_save_to_gguf_taking_forever/ | false | false |  | 0 | null |
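For reference, the usual Unsloth export sequence as a hedged sketch; exact behavior varies by Unsloth version, the checkpoint path is a placeholder, and whether GGUF export covers the vision tower of a multimodal model depends on your version. Splitting the merge and the quantization into separate calls at least makes visible which phase is eating the 480 minutes:

```python
# Load the fine-tuned adapter, merge to 16-bit, then quantize to GGUF.
# Long silent waits usually happen in the merge step, before llama.cpp's
# converter even starts. FastVisionModel is assumed for Gemma 3 Vision.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "lora_checkpoint",               # placeholder: your adapter dir
    load_in_4bit=True,
)
model.save_pretrained_merged("merged", tokenizer)
model.save_pretrained_gguf("gguf_out", tokenizer, quantization_method="q4_k_m")
```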
Kitten TTS : SOTA Super-tiny TTS Model (Less than 25 MB) | 1,947 | **Model introduction:**
Kitten ML has released the open-source code and weights for a preview of their new TTS model.
Github: [https://github.com/KittenML/KittenTTS](https://github.com/KittenML/KittenTTS)
Huggingface: [https://huggingface.co/KittenML/kitten-tts-nano-0.1](https://huggingface.co/KittenML/kitten-tts-nano-0.1)
... | 2025-08-05T03:52:26 | https://v.redd.it/vdfv5uihi4hf1 | ElectricalBar7464 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1mhyzp7 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/vdfv5uihi4hf1/DASHPlaylist.mpd?a=1756957961%2CNjIxM2ZmMjFlZDNhYTUwNjIxOGY0YjAyOGYwZjRjM2VhZjU2YTdjZmIwZGJkYmVhZmJkMzNkYmVkYzA2OTAwMg%3D%3D&v=1&f=sd', 'duration': 60, 'fallback_url': 'https://v.redd.it/vdfv5uihi4hf1/DASH_720.mp4?source=fallback', 'ha... | t3_1mhyzp7 | /r/LocalLLaMA/comments/1mhyzp7/kitten_tts_sota_supertiny_tts_model_less_than_25/ | false | false | 1,947 | {'enabled': False, 'images': [{'id': 'ajlvMGd2aWhpNGhmMUpar5lWZvhVHx9_BWGYhGbOyuld4cLO275_Q90LHrwX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ajlvMGd2aWhpNGhmMUpar5lWZvhVHx9_BWGYhGbOyuld4cLO275_Q90LHrwX.png?width=108&crop=smart&format=pjpg&auto=webp&s=eea9ab012c1208b48649b975de77487ec7063... | |
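The usage, per the project README (hedged: this is a preview release and the API may change); the voice name is one of the shipped presets:

```python
# Generate speech fully on CPU with the ~25 MB nano model.
import soundfile as sf
from kittentts import KittenTTS

m = KittenTTS("KittenML/kitten-tts-nano-0.1")
audio = m.generate(
    "This high quality TTS model works without a GPU.",
    voice="expr-voice-2-f",           # one of the preset voices
)
sf.write("output.wav", audio, 24000)  # model outputs 24 kHz audio
```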
Kitten TTS - SOTA Super-Tiny TTS Model (Less than 25 MB) | 2 |
**Model introduction:**
Kitten ML has released the open-source code and weights for a preview of their new TTS model.
https://reddit.com/link/1mhyvpv/video/hjatkvebh4hf1/player
Github: [https://github.com/KittenML/KittenTTS](https://github.com/KittenML/KittenTTS)
Huggingface: [https://huggingface.co/KittenML/kitt... | 2025-08-05T03:46:44 | https://www.reddit.com/r/LocalLLaMA/comments/1mhyvpv/kitten_tts_sota_supertiny_tts_model_less_than_25/ | ElectricalBar7464 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhyvpv | false | null | t3_1mhyvpv | /r/LocalLLaMA/comments/1mhyvpv/kitten_tts_sota_supertiny_tts_model_less_than_25/ | false | false | self | 2 | null |
Best tool to prioritize workloads sharing with LLM? | 0 | I have a beast of a machine: [https://www.reddit.com/r/nvidia/comments/1mf0yal/2xl40s\_2x6000\_ada\_4xrtx\_6000\_pro\_build/](https://www.reddit.com/r/nvidia/comments/1mf0yal/2xl40s_2x6000_ada_4xrtx_6000_pro_build/)
However, most of the time I am running heavy CUDA workloads. I want to run an LLM locally on the CPU and o... | 2025-08-05T03:34:38 | https://www.reddit.com/r/LocalLLaMA/comments/1mhyn3c/best_tool_to_prioritize_workloads_sharing_with_llm/ | Ill_Recipe7620 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1mhyn3c | false | null | t3_1mhyn3c | /r/LocalLLaMA/comments/1mhyn3c/best_tool_to_prioritize_workloads_sharing_with_llm/ | false | false | self | 0 | null
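A low-tech option that often suffices, as a sketch: start the CPU-bound LLM server normally, then use psutil to lower its scheduling priority and pin it to a subset of cores so it never starves the CUDA jobs' host threads. The process name is an assumption about the setup:

```python
# Deprioritize and pin a CPU-bound LLM server process with psutil.
import psutil

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "llama-server":     # assumed process name
        proc.nice(10)                           # lower CPU priority (Unix)
        proc.cpu_affinity(list(range(16)))      # pin to cores 0-15 (Linux)
        print(f"reconfigured pid {proc.pid}")
```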