| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Open Source Unsiloed AI Chunker (EF2024) getting to 1000 GitHub stars in just 1 week | 0 | Hey, Unsiloed CTO here!
Unsiloed AI (EF 2024) is backed by Transpose Platform & EF and is currently being used by teams at Fortune 100 companies and multiple Series E+ startups for ingesting multimodal data in the form of PDFs, Excel, PPTs, etc. And, we have now finally open sourced some of the capabilities. Do give ... | 2025-06-21T09:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lgsykl/open_source_unsiloed_ai_chunker_ef2024_getting_to/ | AskInternational6199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgsykl | false | null | t3_1lgsykl | /r/LocalLLaMA/comments/1lgsykl/open_source_unsiloed_ai_chunker_ef2024_getting_to/ | false | false | 0 | null | |
AbsenceBench: LLMs can't tell what's missing | 70 | The [AbsenceBench paper](https://arxiv.org/pdf/2506.11440) establishes a test that's basically Needle In A Haystack (NIAH) in reverse. [Code here](https://github.com/harvey-fin/absence-bench).
The idea is that models score 100% on NIAH tests, thus perfectly identify added tokens that stand out - which is not equal to ... | 2025-06-21T09:58:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lgsykj/absencebench_llms_cant_tell_whats_missing/ | Chromix_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgsykj | false | null | t3_1lgsykj | /r/LocalLLaMA/comments/1lgsykj/absencebench_llms_cant_tell_whats_missing/ | false | false | 70 | null | |
Unsloth Dynamic GGUF Quants For Mistral 3.2 | 162 | 2025-06-21T09:57:06 | https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506-GGUF | No-Refrigerator-1672 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lgsxyw | false | null | t3_1lgsxyw | /r/LocalLLaMA/comments/1lgsxyw/unsloth_dynamic_gguf_quants_for_mistral_32/ | false | false | 162 | {'enabled': False, 'images': [{'id': 'CrtSkHQg7FYlqUCKyAhEr6h8Hgeh7uXu4dg2iLzQFtI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CrtSkHQg7FYlqUCKyAhEr6h8Hgeh7uXu4dg2iLzQFtI.png?width=108&crop=smart&auto=webp&s=e06a93ddc880f580b220fc30980a877a58fe0ecf', 'width': 108}, {'height': 116, 'url': 'h... | ||
Is there anyone who wants to develop a full-stack AI-based website or MVP? I want to offer my services. DMs are open | 0 | Hey 👋
If you are looking for a web developer, I can help you build a site from scratch and add custom functionality for you. I am offering a cheaper price to develop the site for you. The site will have all the functionality you want. I can also build an MVP for you which you can launch fast and monetize.
Overall... | 2025-06-21T09:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lgsi6n/is_there_anyone_who_wants_to_develop_a_full_stack/ | NoMuscle1255 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgsi6n | false | null | t3_1lgsi6n | /r/LocalLLaMA/comments/1lgsi6n/is_there_anyone_who_wants_to_develop_a_full_stack/ | false | false | self | 0 | null |
Local Personal Memo AI Assistant | 2 | Good morning guys!
So, the idea is to create a personal memo AI assistant. The concept is to feed my local LLM with notes, thoughts and small pieces of info, which can then be retrieved by asking for them like a classic chat-ish model, so like a personal and customized "Windows Recall" function.
At the beginning I thought to... | 2025-06-21T09:11:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lgsb61/local_personal_memo_ai_assistant/ | nandospc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgsb61 | false | null | t3_1lgsb61 | /r/LocalLLaMA/comments/1lgsb61/local_personal_memo_ai_assistant/ | false | false | self | 2 | null |
RIGEL: An open-source hybrid AI assistant/framework | 20 | ### Hey all,
We're building an open-source project at Zerone Labs called RIGEL — a hybrid AI system that acts as both:
a multi-agent assistant, and
a modular control plane for tools and system-level operations.
It's not a typical desktop assistant — instead, it's designed to work as an AI backend for apps, services... | 2025-06-21T08:50:46 | https://github.com/Zerone-Laboratories/RIGEL | __z3r0_0n3__ | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lgs0d3 | false | null | t3_1lgs0d3 | /r/LocalLLaMA/comments/1lgs0d3/rigel_an_opensource_hybrid_ai_assistantframework/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'YLvcO6GZN90fnKUxGABmFCgN1xACgLSeDvnM0Igr0UU', 'resolutions': [{'height': 107, 'url': 'https://external-preview.redd.it/YLvcO6GZN90fnKUxGABmFCgN1xACgLSeDvnM0Igr0UU.png?width=108&crop=smart&auto=webp&s=9602936cba5ec0acc61b500de8eabbcf22a736c4', 'width': 108}, {'height': 214, 'url': '... | |
UAE to appoint their National AI system as ministers' council advisory member | 10 | 2025-06-21T08:45:18 | https://www.linkedin.com/posts/mohammedbinrashid_%D8%A7%D9%84%D8%A5%D8%AE%D9%88%D8%A9-%D9%88%D8%A7%D9%84%D8%A3%D8%AE%D9%88%D8%A7%D8%AA-%D8%A8%D8%B9%D8%AF-%D8%A7%D9%84%D8%AA%D8%B4%D8%A7%D9%88%D8%B1-%D9%85%D8%B9-%D8%A3%D8%AE%D9%8A-%D8%B1%D8%A6%D9%8A%D8%B3-activity-7341867717781614592-NH8k?utm_source=share&utm_medium=memb... | tabspaces | linkedin.com | 1970-01-01T00:00:00 | 0 | {} | 1lgrxkc | false | null | t3_1lgrxkc | /r/LocalLLaMA/comments/1lgrxkc/uae_to_appoint_their_national_ai_system_as/ | false | false | default | 10 | null | |
Query Classifier for RAG - Save your $$$ and users from irrelevant responses | 6 | RAG systems are in fashion these days. So I built a classifier to filter out irrelevant and vague queries so that only relevant queries and context go to your chosen LLM and get you correct response. It saves $$$ if you don't go to LLM with the wrong questions, also performance improvements because you don't fetch cont... | 2025-06-21T08:05:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lgrcx6/query_classifier_for_rag_save_your_and_users_from/ | ZucchiniCalm4617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgrcx6 | false | null | t3_1lgrcx6 | /r/LocalLLaMA/comments/1lgrcx6/query_classifier_for_rag_save_your_and_users_from/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '9fBj22-lqYJD23cW4Pu1Fm0p-v_yUgb4jD8MreOnTFA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9fBj22-lqYJD23cW4Pu1Fm0p-v_yUgb4jD8MreOnTFA.png?width=108&crop=smart&auto=webp&s=5f4fc1ba2337933d3f7776e13d6174c8191cf3d8', 'width': 108}, {'height': 108, 'url': 'h... |
I tried an escalating psychosis safety test... Qwen3 failed the worst.. Deepseek said "I will personally fund professional archaeological acoustics team" | 0 | I ran this sequence of 10 prompts through all the big LLMs:
https://gemini.google.com/share/921413fff23a
The absolute worst response came from Qwen3, which never challenged the prompts, raised no safety concerns even when we said we'd set fire to the room we were in, and wrote our elegy after we'd died.
Claude and Gemi... | 2025-06-21T08:01:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lgrarb/i_tried_an_escalating_psychosis_safety_test_qwen3/ | Ride-Uncommonly-3918 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgrarb | false | null | t3_1lgrarb | /r/LocalLLaMA/comments/1lgrarb/i_tried_an_escalating_psychosis_safety_test_qwen3/ | false | false | self | 0 | null |
[HELP] I am trying to do a brochure project that gets the info from sites and constructs the brochure using LLMs. The problem is, I have Llama and DeepSeek; the Llama version works but DeepSeek doesn't output anything? | 0 | 2025-06-21T07:33:42 | https://v.redd.it/ltsg25azg88f1 | Beyond_Birthday_13 | /r/LocalLLaMA/comments/1lgqwdf/help_i_am_tring_to_do_a_brochur_porject_that_gets/ | 1970-01-01T00:00:00 | 0 | {} | 1lgqwdf | false | null | t3_1lgqwdf | /r/LocalLLaMA/comments/1lgqwdf/help_i_am_tring_to_do_a_brochur_porject_that_gets/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a3k1dGs0YXpnODhmMfN1bWEzXXMHfMupFJdCQwarKMZocVLawGj1Q8SgNqbV', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/a3k1dGs0YXpnODhmMfN1bWEzXXMHfMupFJdCQwarKMZocVLawGj1Q8SgNqbV.png?width=108&crop=smart&format=pjpg&auto=webp&s=ab2d7036da788b2d00aceda6b0c8b68d088ad...
7900 xt lm studio settings | 2 | Hi I’m running LM Studio on windows 11 with 32 gb of ram, a 13600k, and a 7900 xt with 20gb of vram.
I want to run something like Gemma 3 27B but it just takes up all the vram.
The problem is I want to run it with way longer context window, and because the model takes up most of the VRAM, I can’t really do that.
I... | 2025-06-21T07:04:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lgqgdv/7900_xt_lm_studio_settings/ | opoot_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgqgdv | false | null | t3_1lgqgdv | /r/LocalLLaMA/comments/1lgqgdv/7900_xt_lm_studio_settings/ | false | false | self | 2 | null |
I asked ChatGPT, Claude, Gemini and Perplexity to give me a random number between 1 and 50. All of them gave 27. | 0 | 2025-06-21T06:48:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lgq7xy/i_asked_chatgpt_claude_gemini_and_perplexity_to/ | RelevantRevolution86 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgq7xy | false | null | t3_1lgq7xy | /r/LocalLLaMA/comments/1lgq7xy/i_asked_chatgpt_claude_gemini_and_perplexity_to/ | false | false | 0 | null |
What are some AI tools (free or paid) that genuinely helped you get more done — especially the underrated ones not many talk about? | 71 | I'm not looking for the obvious ones like ChatGPT or Midjourney — more curious about those lesser-known tools that actually made a difference in your workflow, mindset, or daily routine.
Could be anything — writing, coding, research, time-blocking, design, personal journaling, habit tracking, whatever.
Just trying to... | 2025-06-21T04:17:18 | https://www.reddit.com/r/LocalLLaMA/comments/1lgnri0/what_are_some_ai_tools_free_or_paid_that/ | Melted_gun | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgnri0 | false | null | t3_1lgnri0 | /r/LocalLLaMA/comments/1lgnri0/what_are_some_ai_tools_free_or_paid_that/ | false | false | self | 71 | null |
are there any 4bit Mistral-Small-3.2-24B-Instruct-2506 models on unsloth? | 0 | title | 2025-06-21T04:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/1lgnqzx/are_there_any_4bit_mistralsmall3224binstruct2506/ | ohididntseeuthere | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgnqzx | false | null | t3_1lgnqzx | /r/LocalLLaMA/comments/1lgnqzx/are_there_any_4bit_mistralsmall3224binstruct2506/ | false | false | self | 0 | null |
Announcing AgentTrace: An Open-Source, Local-First Observability & Tracing Tool for AI Agent Workflows (CrewAI, LangChain) | 8 | Hello everyone,I'm excited to share a project I've been working on, AgentTrace, a lightweight Python library for providing observability into complex AI agent systems.The Problem:As agent frameworks like CrewAI and LangChain become more popular, debugging their execution flows becomes a significant challenge. Tradition... | 2025-06-21T04:10:01 | https://www.reddit.com/r/LocalLLaMA/comments/1lgnmxc/announcing_agenttrace_an_opensource_localfirst/ | Klutzy_Resolution704 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgnmxc | false | null | t3_1lgnmxc | /r/LocalLLaMA/comments/1lgnmxc/announcing_agenttrace_an_opensource_localfirst/ | false | false | self | 8 | null |
haiku.rag a local sqlite RAG library | 8 | 2025-06-21T04:04:04 | https://github.com/ggozad/haiku.rag | gogozad | github.com | 1970-01-01T00:00:00 | 0 | {} | 1lgnjbd | false | null | t3_1lgnjbd | /r/LocalLLaMA/comments/1lgnjbd/haikurag_a_local_sqlite_rag_library/ | false | false | default | 8 | {'enabled': False, 'images': [{'id': 'S9zJH85JPXtLydgkNXQowa6x-_1d_FRZXS47OnatVk0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S9zJH85JPXtLydgkNXQowa6x-_1d_FRZXS47OnatVk0.png?width=108&crop=smart&auto=webp&s=0dc87ce87bebb60cc0d4cabb0a7aa9a6496ec2be', 'width': 108}, {'height': 108, 'url': 'h... | |
Using a local LLM to offload easy work and reduce token usage of Claude Code? | 2 | Claude Code is expensive. I’ve been trying to think of ways to reduce that cost without losing the quality, and I’ve been wondering if it might work to offload some of the easier work to a local LLM for things that use a lot of tokens but don’t require a lot of reasoning.
For example:
- Running automated tests, builds... | 2025-06-21T03:43:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lgn6es/using_a_local_llm_to_offload_easy_work_and_reduce/ | TedHoliday | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgn6es | false | null | t3_1lgn6es | /r/LocalLLaMA/comments/1lgn6es/using_a_local_llm_to_offload_easy_work_and_reduce/ | false | false | self | 2 | null |
Mistral's "minor update" | 608 | [https://eqbench.com/creative\_writing\_longform.html](https://eqbench.com/creative_writing_longform.html) | 2025-06-21T02:12:10 | _sqrkl | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lglhll | false | null | t3_1lglhll | /r/LocalLLaMA/comments/1lglhll/mistrals_minor_update/ | false | false | default | 608 | {'enabled': True, 'images': [{'id': 'rb70qb16v68f1', 'resolutions': [{'height': 113, 'url': 'https://preview.redd.it/rb70qb16v68f1.png?width=108&crop=smart&auto=webp&s=154dfdde79359ddb21834b9a40717df242dd2294', 'width': 108}, {'height': 226, 'url': 'https://preview.redd.it/rb70qb16v68f1.png?width=216&crop=smart&auto=we... | |
Are non-autoregressive models really faster than autoregressive ones after all the denoising steps? | 7 | Non-autoregressive models (like NATs and diffusion models) generate in parallel, but often need several refinement steps (e.g., denoising) to get good results. That got me thinking:
* Are there benchmarks showing how accuracy scales with more refinement steps (and the corresponding time cost)?
* And how does total inf... | 2025-06-21T02:03:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lglbz8/are_nonautoregressive_models_really_faster_than/ | ApprenticeLYD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lglbz8 | false | null | t3_1lglbz8 | /r/LocalLLaMA/comments/1lglbz8/are_nonautoregressive_models_really_faster_than/ | false | false | self | 7 | null |
Model for AI generated code applying | 1 | I am fine tuning a small model for code applying , which coder model should I choose as base model by now? | 2025-06-21T01:20:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lgkinc/model_for_ai_generated_code_applying/ | r_no_one | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgkinc | false | null | t3_1lgkinc | /r/LocalLLaMA/comments/1lgkinc/model_for_ai_generated_code_applying/ | false | false | self | 1 | null |
A100 80GB can't serve 10 concurrent users - what am I doing wrong? | 82 | Running Qwen2.5-14B-AWQ on A100 80GB for voice calls.
People say RTX 4090 serves 10+ users fine. My A100 with 80GB VRAM can't even handle 10 concurrent requests without terrible TTFT (30+ seconds).
**Current vLLM config:**
```yaml
--model Qwen/Qwen2.5-14B-Instruct-AWQ
--quantization awq_marlin
--gpu-memory-utilizati... | 2025-06-21T01:18:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lgkhdk/a100_80gb_cant_serve_10_concurrent_users_what_am/ | Creative_Yoghurt25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgkhdk | false | null | t3_1lgkhdk | /r/LocalLLaMA/comments/1lgkhdk/a100_80gb_cant_serve_10_concurrent_users_what_am/ | false | false | self | 82 | null |
AIStudio Vibe Coding Update | 5 | 2025-06-21T01:16:19 | Linkpharm2 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lgkfkk | false | null | t3_1lgkfkk | /r/LocalLLaMA/comments/1lgkfkk/aistudio_vibe_coding_update/ | false | false | 5 | {'enabled': True, 'images': [{'id': '3Tj7_ML1TFJQc4-2_3anqF1zGmQNtJqlVw7kP2nUNuQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/1l57n42sl68f1.png?width=108&crop=smart&auto=webp&s=081fa56bed6881b20a5a7999c18ceff22785c780', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/1l57n42sl68f1.png?... | |||
RAG + model for cross-referencing several files and giving precise quotes from a local database | 4 | Hello everybody. I could use some help. Don’t know if what I’m trying to do is possible.
I’m trying to set up AI to help me study, but I need it to give precise quotes from my source material and cross-reference it to give an answer from several sources.
I’d like to set up a RAG + model that could cross-reference all... | 2025-06-21T00:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/1lgj0ij/rag_model_for_crossreferencing_several_files_and/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgj0ij | false | null | t3_1lgj0ij | /r/LocalLLaMA/comments/1lgj0ij/rag_model_for_crossreferencing_several_files_and/ | false | false | self | 4 | null |
What's your AI coding workflow? | 27 | A few months ago I tried Cursor for the first time, and “vibe coding” quickly became my hobby.
It’s fun, but I’ve hit plenty of speed bumps:
• Context limits: big projects overflow the window and the AI loses track.
• Shallow planning: the model loves quick fixes but struggles with multi-step goals.
• Edit tools... | 2025-06-20T23:13:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lghy81/whats_your_ai_coding_workflow/ | RIPT1D3_Z | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lghy81 | false | null | t3_1lghy81 | /r/LocalLLaMA/comments/1lghy81/whats_your_ai_coding_workflow/ | false | false | self | 27 | null |
Kimi Dev 72B is phenomenal | 38 | I've been using a lot of coding and general-purpose models for Prolog coding. The codebase has gotten pretty large, and the larger it gets, the harder it is to debug.
I've been experiencing a bottleneck and failed prolog runs lately, and none of the other coder models were able to pinpoint the issue.
I loaded up Kimi D... | 2025-06-20T23:08:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lghu05/kimi_dev_72b_is_phenomenal/ | Thrumpwart | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lghu05 | false | null | t3_1lghu05 | /r/LocalLLaMA/comments/1lghu05/kimi_dev_72b_is_phenomenal/ | false | false | self | 38 | null |
Selling Actively Cooled Tesla P40: back to stock or sell with cooler? | 0 | Hey Folks,
I bought an M4 Mac Mini for my local AI, and I'm planning to sell my Tesla P40 that I've modified to have an active cooler. I'm tempted to either sell it as-is with the cooler, or put it back to stock.
"You may know me from such threads as:
* [https://www.reddit.com/r/LocalLLaMA/comments/1hozg2h/24gb\_g... | 2025-06-20T23:06:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lght49/selling_actively_cooled_tesla_p40_back_to_stock/ | s0n1cm0nk3y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lght49 | false | null | t3_1lght49 | /r/LocalLLaMA/comments/1lght49/selling_actively_cooled_tesla_p40_back_to_stock/ | false | false | 0 | null | |
BitNet-VSCode-Extension - v0.0.3 - Visual Studio Marketplace | 7 | The BitNet docker image has been updated to support both llama-server and llama-cli in Microsoft's inference framework.
It had been updated to support just the llama-server, but turns out cnv/instructional mode isn't supported in the server only CLI mode, so support for CLI has been reintroduced enabling you to chat w... | 2025-06-20T23:04:50 | https://marketplace.visualstudio.com/items?itemName=nftea-gallery.bitnet-vscode-extension | ufos1111 | marketplace.visualstudio.com | 1970-01-01T00:00:00 | 0 | {} | 1lghrj0 | false | null | t3_1lghrj0 | /r/LocalLLaMA/comments/1lghrj0/bitnetvscodeextension_v003_visual_studio/ | false | false | 7 | {'enabled': False, 'images': [{'id': '36UFpflg2k-GkRMLKgW2BpwjPJFZO_a0gR7NtskEjfU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/36UFpflg2k-GkRMLKgW2BpwjPJFZO_a0gR7NtskEjfU.png?width=108&crop=smart&auto=webp&s=413a4c5eedab9414924dd64a9166ecb5d9e33345', 'width': 108}], 'source': {'height': 12... | |
If your tools and parameters aren’t too complex, even Qwen1.5 0.5B can handle tool calling with a simple DSL and finetuning. | 124 | I designed a super minimal syntax like:
TOOL: param1, param2, param3
Then fine-tuned Qwen 1.5 0.5B for just **5 epochs**, and now it can reliably call **all 11 tools** in my dataset without any issues.
I'm working in Turkish, and before this, I could only get accurate tool calls using much larger models like **G... | 2025-06-20T23:04:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lghrf9/if_your_tools_and_parameters_arent_too_complex/ | umtksa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lghrf9 | false | null | t3_1lghrf9 | /r/LocalLLaMA/comments/1lghrf9/if_your_tools_and_parameters_arent_too_complex/ | false | false | self | 124 | {'enabled': False, 'images': [{'id': 'IwZeRtFnlalNo7-p13YKmR7q_1HIW44cxDHuFs7ERTc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IwZeRtFnlalNo7-p13YKmR7q_1HIW44cxDHuFs7ERTc.png?width=108&crop=smart&auto=webp&s=76eacc1568e9740eb09c2795b999b85d0ba7c90b', 'width': 108}, {'height': 116, 'url': 'h... |
V100 server thoughts | 1 | Do you guys have any thoughts on this server or the V100 in general?
https://ebay.us/m/yYHd3t
Seems like a pretty solid deal, looking to run qwen3-235b-A22b | 2025-06-20T22:54:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lghj90/v100_server_thoughts/ | jbutlerdev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lghj90 | false | null | t3_1lghj90 | /r/LocalLLaMA/comments/1lghj90/v100_server_thoughts/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2qeuK2RBbYak0cMDFs7cByS6NihRPtZjbodwqi19SSE', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/2qeuK2RBbYak0cMDFs7cByS6NihRPtZjbodwqi19SSE.jpeg?width=108&crop=smart&auto=webp&s=45b65cacec40cc7bed55c1ce14713b14b1b48a9f', 'width': 108}, {'height': 288, 'url': ... |
Stable solution for non-ROCm GPU? | 1 | Hello everybody,
For about a month I have been trying to get a somewhat reliable configuration with my RX 6700 XT which I can access from different devices.
Most of the time I am not even able to install the software on my desktop, since I don’t know anything about terminals or Python etc. My knowledge is reduced to cd and ls/... | 2025-06-20T22:39:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lgh7os/stable_solution_for_nonrocm_gpu/ | SpitePractical8460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgh7os | false | null | t3_1lgh7os | /r/LocalLLaMA/comments/1lgh7os/stable_solution_for_nonrocm_gpu/ | false | false | self | 1 | null |
[Ethics] What are your thoughts on open sourcing ASI? | 0 | The majority consensus among AI safety experts seems to be that ASI is extremely dangerous and potentially catastrophic. TBH with a lot of open source models for current-day LLMs, it's extremely easy to prompt them into malicious behavior (though for the most part thankfully contained to a chatroom) so I can see why AI... | 2025-06-20T22:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lggxgh/ethics_what_are_your_thoughts_on_open_sourcing_asi/ | averagebear_003 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lggxgh | false | null | t3_1lggxgh | /r/LocalLLaMA/comments/1lggxgh/ethics_what_are_your_thoughts_on_open_sourcing_asi/ | false | false | self | 0 | null |
Google releases MagentaRT for real time music generation | 528 | Hi! Omar from the Gemma team here, to talk about MagentaRT, our new music generation model. It's real-time, with a permissive license, and just has 800 million parameters.
You can find a video demo right here [https://www.youtube.com/watch?v=Ae1Kz2zmh9M](https://www.youtube.com/watch?v=Ae1Kz2zmh9M)
A blog post at [h... | 2025-06-20T21:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lgg7a1/google_releases_magentart_for_real_time_music/ | hackerllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgg7a1 | false | null | t3_1lgg7a1 | /r/LocalLLaMA/comments/1lgg7a1/google_releases_magentart_for_real_time_music/ | false | false | self | 528 | {'enabled': False, 'images': [{'id': 'zArTe9yoOQMkxQZHFGhdVfnP51CfQXHnRnurq1Mi4zQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/zArTe9yoOQMkxQZHFGhdVfnP51CfQXHnRnurq1Mi4zQ.jpeg?width=108&crop=smart&auto=webp&s=6fc97bc50d3a0e7fbe039ace139c2a1305145fbc', 'width': 108}, {'height': 162, 'url': '... |
Has anyone done enterprise-grade on-prem serving? | 3 | I am curious to know how people are self-hosting models on prem.
My questions are:
1. Which use cases usually require on prem vs cloud with soc2, etc
2. Does the enterprise (client) buy specialized hardware, or is it provided by the vendor?
3. How much are enterprises paying for this?
Thank you :)
| 2025-06-20T20:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lgeecn/has_anyone_done_an_enterprise_grade_on_prem/ | Powerful_Agent9342 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgeecn | false | null | t3_1lgeecn | /r/LocalLLaMA/comments/1lgeecn/has_anyone_done_an_enterprise_grade_on_prem/ | false | false | self | 3 | null |
What's the best use I can do with two M1 macs with 16GB of unified ram ? | 0 | I discovered the exo project on github: https://github.com/exo-explore/exo and wondering if I could use it to combine the power of the two M1 units. | 2025-06-20T20:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lgdr0i/whats_the_best_use_i_can_do_with_two_m1_macs_with/ | ll777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgdr0i | false | null | t3_1lgdr0i | /r/LocalLLaMA/comments/1lgdr0i/whats_the_best_use_i_can_do_with_two_m1_macs_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'aF4UtJOg-2Q2MORDOMThm-DpDtM7mQT8OdqOU-meyEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aF4UtJOg-2Q2MORDOMThm-DpDtM7mQT8OdqOU-meyEY.png?width=108&crop=smart&auto=webp&s=dbc3191293b233a97ab0e18bbea61486a3897093', 'width': 108}, {'height': 108, 'url': 'h... |
GMK X2 (AMD Max+ 395 w/128GB) second impressions, Linux. | 38 | This is a follow-up to my post from a couple of days ago. These are the numbers from Linux.
First, there is no memory size limitation with Vulkan under Linux. It sees 96GB of VRAM with another 15GB of GTT(shared memory) so 111GB combined. With Windows, Vulkan only sees 32GB of VRAM. Using shared memory as a workaround... | 2025-06-20T19:59:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lgdi7i/gmk_x2amd_max_395_w128gb_second_impressions_linux/ | fallingdowndizzyvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgdi7i | false | null | t3_1lgdi7i | /r/LocalLLaMA/comments/1lgdi7i/gmk_x2amd_max_395_w128gb_second_impressions_linux/ | false | false | self | 38 | null |
An overview of LLM system optimizations | 14 | Over the past year I haven't seen a comprehensive article that summarizes the current landscape of LLM training and inference systems, so I spent several weekends writing one myself. This article organizes popular system optimization and software offerings into three categories. I hope it could provide useful informati... | 2025-06-20T19:59:26 | https://ralphmao.github.io/ML-software-system/ | Ralph_mao | ralphmao.github.io | 1970-01-01T00:00:00 | 0 | {} | 1lgdhrl | false | null | t3_1lgdhrl | /r/LocalLLaMA/comments/1lgdhrl/an_overview_of_llm_system_optimizations/ | false | false | default | 14 | null |
Is prompt switching possible during inference? | 0 | We are currently testing the Qwen2.5-14B model and evaluating its performance using a structured series of prompts. Each interaction involves a sequence of questions labeled 1.1, 1.2, 1.3, and so on.
My boss would like to implement a dynamic prompt-switching mechanism: the model should first be prompted with question ... | 2025-06-20T19:54:08 | https://www.reddit.com/r/LocalLLaMA/comments/1lgddct/is_prompt_switching_is_possible_during_inference/ | Dapper-Night-1783 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgddct | false | null | t3_1lgddct | /r/LocalLLaMA/comments/1lgddct/is_prompt_switching_is_possible_during_inference/ | true | false | spoiler | 0 | null |
actual reference for ollama API? | 0 | The official docs for Ollama are horrible.
I just want an actual reference for requests and responses, like I can get for every other API I use.
like
```
ChatRequest:
model: String
messages: array<Message>
tools: array<Tool>
....
ChatResponse:
model: String
....
```
is there such a thing?
| 2025-06-20T19:46:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lgd7bd/actual_reference_for_ollama_api/ | ProsodySpeaks | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgd7bd | false | null | t3_1lgd7bd | /r/LocalLLaMA/comments/1lgd7bd/actual_reference_for_ollama_api/ | false | false | self | 0 | null |
An overview of LLM system optimizations | 1 | [removed] | 2025-06-20T19:44:56 | https://ralphmao.github.io/ML-software-system/ | FoldNo421 | ralphmao.github.io | 1970-01-01T00:00:00 | 0 | {} | 1lgd5uc | false | null | t3_1lgd5uc | /r/LocalLLaMA/comments/1lgd5uc/an_overview_of_llm_system_optimizations/ | false | false | default | 1 | null |
Is it worth building an AI agent to automate EDA? | 0 | Everyone who works with data (data analysts, data scientists, etc) knows that 80% of the time is spent just cleaning and analyzing issues in the data. This is also the most boring part of the job.
I thought about creating an open-source framework to automate EDA using an AI agent. Do you think that would be cool? I'm ... | 2025-06-20T19:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lgd4vs/is_it_worth_building_an_ai_agent_to_automate_eda/ | Jazzlike_Tooth929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgd4vs | false | null | t3_1lgd4vs | /r/LocalLLaMA/comments/1lgd4vs/is_it_worth_building_an_ai_agent_to_automate_eda/ | false | false | self | 0 | null |
Why haven't I tried llama.cpp yet? | 43 | Oh boy, models on llama.cpp are very fast compared to Ollama. I have no dedicated GPU, just an integrated Intel Iris Xe, yet llama.cpp gives super-fast replies on my hardware. I will now download other models and try them.
If anyone of you do not have GPU and want to test these models locally, go for llama.cpp. Very easy to s... | 2025-06-20T19:43:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lgd4tq/why_havent_i_tried_llamacpp_yet/ | cipherninjabyte | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgd4tq | false | null | t3_1lgd4tq | /r/LocalLLaMA/comments/1lgd4tq/why_havent_i_tried_llamacpp_yet/ | false | false | self | 43 | null |
What Model is this?! (LMArena - Flamesong?) | 0 | So I just did LMArena and was impressed by an answer from a model named "Flamesong". Very high quality. But it doesn't seem to exist? I can't find it on the leaderboard, I can't find it on Hugging Face, and I can't find it on Google. ChatGPT tells me it doesn't exist. So... what is this? Can anyone help? | 2025-06-20T19:42:21 | Careful_Swordfish_68 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lgd3oc | false | null | t3_1lgd3oc | /r/LocalLLaMA/comments/1lgd3oc/what_model_is_this_lmarena_flamesong/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'v39c0p3ix48f1', 'resolutions': [{'height': 17, 'url': 'https://preview.redd.it/v39c0p3ix48f1.png?width=108&crop=smart&auto=webp&s=378730ff8376c2b0059d314aea194702df5c6724', 'width': 108}, {'height': 35, 'url': 'https://preview.redd.it/v39c0p3ix48f1.png?width=216&crop=smart&auto=webp...
Trouble setting up 7x3090 | 9 | Hi all.
I am trying to setup this machine:
1. AMD Ryzen Threadripper Pro 7965WX
2. ASUS Pro WS WRX90E-SAGE SE
3. Kingston FURY Renegade Pro EXPO 128GB 5600MT/s DDR5 ECC Reg CL28 DIMM (4x32)
4. 7x MSI VENTUS RTX 3090
5. 2x Corsair AX1600i 1600W
6. 1x Samsung 990 PRO NVMe SSD 4TB
7. gpu risers PCIe 3x16
I was able... | 2025-06-20T19:34:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lgcxez/trouble_setting_up_7x3090/ | nonsoil2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgcxez | false | null | t3_1lgcxez | /r/LocalLLaMA/comments/1lgcxez/trouble_setting_up_7x3090/ | false | false | self | 9 | null |
Pulling my hair out...how to get llama.cpp to control HomeAssistant (not ollama) - Have tried llama-server (powered by llama.cpp) to no avail | 1 | [removed] | 2025-06-20T19:14:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lgcfpz/pulling_my_hair_outhow_to_get_llamacpp_to_control/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgcfpz | false | null | t3_1lgcfpz | /r/LocalLLaMA/comments/1lgcfpz/pulling_my_hair_outhow_to_get_llamacpp_to_control/ | false | false | self | 1 | null |
Performance comparison on gemma-3-27b-it-Q4_K_M, on 5090 vs 4090 vs 3090 vs A6000, tuned for performance. Both compute and bandwidth bound. | 116 | Hi there guys. I'm reposting as the old post got removed for some reason.
Now it is time to compare LLMs, where these GPUs shine the most.
hardware-software config:
* AMD Ryzen 7 7800X3D
* 192GB RAM DDR5 6000 MHz CL30
* MSI Carbon X670E
* Fedora 41 (Linux), Kernel 6.19
* Torch 2.7.1+cu128
Each card was tuned to try t... | 2025-06-20T19:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lgcbyh/performance_comparison_on_gemma327bitq4_k_m_on/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgcbyh | false | null | t3_1lgcbyh | /r/LocalLLaMA/comments/1lgcbyh/performance_comparison_on_gemma327bitq4_k_m_on/ | false | false | 116 | null | |
I'm running llama-server powered by llama.cpp and have added it as a "direct connection" to Open WebUI. I can successfully use it using the web interface, however I can't interact with it using Open WebUI's API as it doesn't show in "models" | 1 | [removed] | 2025-06-20T19:04:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lgc7fh/im_running_llamaserver_powered_by_llamacpp_and/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgc7fh | false | null | t3_1lgc7fh | /r/LocalLLaMA/comments/1lgc7fh/im_running_llamaserver_powered_by_llamacpp_and/ | false | false | self | 1 | null |
I'm running llama-server powered by llama.cpp and have added it as a "direct connection" to Open WebUI. I can successfully use it using the web interface, however I can't interact with it using Open WebUI's API as it doesn't show in "models" | 1 | [removed] | 2025-06-20T19:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lgc4la/im_running_llamaserver_powered_by_llamacpp_and/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgc4la | false | null | t3_1lgc4la | /r/LocalLLaMA/comments/1lgc4la/im_running_llamaserver_powered_by_llamacpp_and/ | false | false | self | 1 | null |
I'm running llama-server powered by llama.cpp and have added it as a "direct connection" to Open WebUI. I can successfully use it using the web interface, however I can't interact with it using Open WebUI's API as it doesn't show in "models" | 1 | [removed] | 2025-06-20T18:54:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lgbxw0/im_running_llamaserver_powered_by_llamacpp_and/ | FantasyMaster85 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgbxw0 | false | null | t3_1lgbxw0 | /r/LocalLLaMA/comments/1lgbxw0/im_running_llamaserver_powered_by_llamacpp_and/ | false | false | self | 1 | null |
Performance comparison on LLM (gemma-3-27b-it-Q4_K_M.gguf), 5090 vs 4090 vs 3090 vs A6000, tuned for performance (undervolt + OC + VRAM overclock) and it's power consumption. Both compute and bandwidth bound. | 6 | Hi there guys. Me again doing performance comparisons.
Continuing from [https://www.reddit.com/r/LocalLLaMA/comments/1lfrmj6/performance\_scaling\_from\_400w\_to\_600w\_on\_2\_5090s/](https://www.reddit.com/r/LocalLLaMA/comments/1lfrmj6/performance_scaling_from_400w_to_600w_on_2_5090s/)
Now it is time to compare LLMs... | 2025-06-20T18:38:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lgbkc3/performance_comparison_on_llm_gemma327bitq4_k/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgbkc3 | false | null | t3_1lgbkc3 | /r/LocalLLaMA/comments/1lgbkc3/performance_comparison_on_llm_gemma327bitq4_k/ | false | false | self | 6 | null |
Anyone tried this... | 0 | Why do all these LLMs choose 27 when you tell them to pick a number between 1 and 50? | 2025-06-20T18:17:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lgb22n/anyone_tried_this/ | DeathShot7777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lgb22n | false | null | t3_1lgb22n | /r/LocalLLaMA/comments/1lgb22n/anyone_tried_this/ | false | false | self | 0 | null |
Running two models using NPU and CPU | 19 | Set up Phi-3.5 via Qualcomm AI Hub to run on the Snapdragon X’s (X1E80100) Hexagon NPU;
Here it is running at the same time as Qwen3-30b-a3b running on the CPU via LM studio.
Qwen3 did seem to take a performance hit though, but I think there may be a way to prevent this or reduce it. | 2025-06-20T17:34:52 | https://v.redd.it/c3489gtgb48f1 | commodoregoat | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lg9zvi | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/c3489gtgb48f1/DASHPlaylist.mpd?a=1753032906%2CZTgyYzUxOTc3MDc4OWQxNDhjZmI0NzYzMDcyNjdhMTRiNGNiZjUzNjBhODgwM2U4OTI4OWFiNjI5MDA5ZTFhYg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/c3489gtgb48f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lg9zvi | /r/LocalLLaMA/comments/1lg9zvi/running_two_models_using_npu_and_cpu/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'bzhsMWFubGdiNDhmMQJifvLpzLFD6WxHmRlBAYxUAQ-j7FSXaw9B72cD_ns4', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bzhsMWFubGdiNDhmMQJifvLpzLFD6WxHmRlBAYxUAQ-j7FSXaw9B72cD_ns4.png?width=108&crop=smart&format=pjpg&auto=webp&s=444b691741b9ca23f5137dee2c63bea2e184c... | |
OpenBuddy R1 0528 Distil into Qwen 32B | 92 | I'm so impressed with this model for the size. o1 was the first model I found that could one shot tetris with AI, and even other frontier models can still struggle to do it well. And now a 32B model just managed it!
There was one bug - only one line would be cleared at a time. It fixed this easily when I pointed it ou... | 2025-06-20T17:26:07 | -dysangel- | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lg9s5q | false | null | t3_1lg9s5q | /r/LocalLLaMA/comments/1lg9s5q/openbuddy_r1_0528_distil_into_qwen_32b/ | false | false | default | 92 | {'enabled': True, 'images': [{'id': 'lpxeubca848f1', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/lpxeubca848f1.gif?width=108&crop=smart&format=png8&s=96998ea6a98a537166b5f9e0d2be6afe2e07a136', 'width': 108}, {'height': 333, 'url': 'https://preview.redd.it/lpxeubca848f1.gif?width=216&crop=smart&forma... | |
Retrain/Connect Models with Existing database | 1 | Newbie here, trying to turn an existing app with tons of data (math data) into an AI-powered app. In my test setup, locally, I want to use Llama as the model and data stored in postgres as the basis for current info. I do not mind adding a vector server if it will make it better.
So requirement is user asks like show me analytics for X ... | 2025-06-20T17:00:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lg94vr/retrainconnect_models_with_existing_database/ | Dodokii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg94vr | false | null | t3_1lg94vr | /r/LocalLLaMA/comments/1lg94vr/retrainconnect_models_with_existing_database/ | false | false | self | 1 | null |
API for custom text classfication models | 1 | [removed] | 2025-06-20T16:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lg8ous/api_for_custom_text_classfication_models/ | LineAlternative5694 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg8ous | false | null | t3_1lg8ous | /r/LocalLLaMA/comments/1lg8ous/api_for_custom_text_classfication_models/ | false | false | self | 1 | null |
New Mistral Small 3.2 | 207 | open weights: [https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506](https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506)
source: [https://x.com/MistralAI/status/1936093325116781016/photo/1](https://x.com/MistralAI/status/1936093325116781016/photo/1) | 2025-06-20T16:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lg80cq/new_mistral_small_32/ | ApprehensiveAd3629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg80cq | false | null | t3_1lg80cq | /r/LocalLLaMA/comments/1lg80cq/new_mistral_small_32/ | false | false | self | 207 | {'enabled': False, 'images': [{'id': '3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=108&crop=smart&auto=webp&s=bcb646eb0d29b10fc855c3faa4ec547bea3a2720', 'width': 108}, {'height': 116, 'url': 'h... |
Help me decide on hardware for LLMs | 1 | A bit of background : I've been working with LLMs (mostly dev work - pipelines and Agents) using APIs and Small Language models from past 1.5 years. Currently, I am using a Dell Inspiron 14 laptop which serves this purpose. At office/job, I have access to A5000 GPUs which I use to run VLMs and LLMs for POCs, traning jo... | 2025-06-20T16:13:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7zmb/help_me_decide_on_hardware_for_llms/ | Public-Mechanic-5476 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7zmb | false | null | t3_1lg7zmb | /r/LocalLLaMA/comments/1lg7zmb/help_me_decide_on_hardware_for_llms/ | false | false | self | 1 | null |
How to be sure how much data we need for LoRA trainings | 5 | I have a question. I am currently trying to train a LoRA for an open-source LLM. But I am wondering how to be sure how much data is enough for my purpose. For example, let's say I want my LLM to mimic Iron Man exactly and I collect some Iron Man style user input / model response pairs (some of them are multi d... | 2025-06-20T16:12:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7ymc/how_to_be_sure_how_much_data_we_need_for_lora/ | No_Fun_4651 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7ymc | false | null | t3_1lg7ymc | /r/LocalLLaMA/comments/1lg7ymc/how_to_be_sure_how_much_data_we_need_for_lora/ | false | false | self | 5 | null |
How to be sure how much data is enough for LoRA training | 1 | [removed] | 2025-06-20T16:10:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7x64/how_to_be_sure_how_much_data_is_enough_for_lora/ | Mountain_Shopping100 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7x64 | false | null | t3_1lg7x64 | /r/LocalLLaMA/comments/1lg7x64/how_to_be_sure_how_much_data_is_enough_for_lora/ | false | false | self | 1 | null |
qwen3-32b Q4_K_M vs DeepSeek-R1-0528-Distill-Qwen3-32B-Preview0-QAT Q5_K_M | 1 | [removed] | 2025-06-20T16:10:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7x40/qwen332b_q4_k_m_vs/ | Embarrassed-Book-281 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7x40 | false | null | t3_1lg7x40 | /r/LocalLLaMA/comments/1lg7x40/qwen332b_q4_k_m_vs/ | false | false | self | 1 | null |
mistralai/Mistral-Small-3.2-24B-Instruct-2506 · Hugging Face | 430 | 2025-06-20T16:09:13 | https://huggingface.co/mistralai/Mistral-Small-3.2-24B-Instruct-2506 | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lg7vuc | false | null | t3_1lg7vuc | /r/LocalLLaMA/comments/1lg7vuc/mistralaimistralsmall3224binstruct2506_hugging/ | false | false | default | 430 | {'enabled': False, 'images': [{'id': '3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3DBqKqgOLKDMFbcrOD5Qa-3M1IIegLfhMX6TTbsgXeU.png?width=108&crop=smart&auto=webp&s=bcb646eb0d29b10fc855c3faa4ec547bea3a2720', 'width': 108}, {'height': 116, 'url': 'h... | |
Training an AI model on large-scale game data | 0 | Hey everyone,
I’m building an AI model specialized in **Hypixel SkyBlock**, a very deep and complex Minecraft gamemode. SkyBlock is *massive*, with tons of mechanics, unique items, skills and progression paths.
To train the model, I will use the Fandom wiki to prepare the dataset, about **4,700 pages.** My goal is to inje... | 2025-06-20T16:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7u78/training_an_ai_model_on_largescale_game_data/ | Standard_Werewolf_50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7u78 | false | null | t3_1lg7u78 | /r/LocalLLaMA/comments/1lg7u78/training_an_ai_model_on_largescale_game_data/ | false | false | self | 0 | null |
Training an AI model on large-scale game knowledge | 1 | Hey everyone,
I’m building an AI model specialized in **Hypixel SkyBlock**, a very deep and complex Minecraft gamemode. SkyBlock is *massive*, with tons of mechanics, unique items, skills and progression paths.
To train the model, I will use the Fandom wiki to prepare the dataset, about **4,700 pages.** My goal is to inje... | 2025-06-20T16:01:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lg7p58/training_an_ai_model_on_largescale_game_knowledge/ | Standard_Werewolf_50 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg7p58 | false | null | t3_1lg7p58 | /r/LocalLLaMA/comments/1lg7p58/training_an_ai_model_on_largescale_game_knowledge/ | false | false | self | 1 | null |
How can I use AI tools to automate research to help invent instant memorization technology (and its opposite)? | 0 | I want to know whether I can use AI to fully automate research as a layperson in order to invent a new technology or chemical (not a drug) that allows someone to instantly and permanently memorize information after a single exposure (something especially useful in fields like medicine). Equally important, I want to mak... | 2025-06-20T15:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/1lg78ke/how_can_i_use_ai_tools_to_automate_research_to/ | DayOk2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg78ke | false | null | t3_1lg78ke | /r/LocalLLaMA/comments/1lg78ke/how_can_i_use_ai_tools_to_automate_research_to/ | false | false | self | 0 | null |
Any free APIs that can scrape the web, or ways to have local LLM scrape the web autonomously? | 0 | Essentially, I'm trying to create a chatbot that will autonomously update its own description, memories, goals, etc., and be able to pursue knowledge independently through web searches.
I think I've found a way to auto-prompt itself on a timer, and I'm working on a way to have it program its own description / memori... | 2025-06-20T15:41:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lg76cn/any_free_apis_that_can_scrape_the_web_or_ways_to/ | False_Grit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg76cn | false | null | t3_1lg76cn | /r/LocalLLaMA/comments/1lg76cn/any_free_apis_that_can_scrape_the_web_or_ways_to/ | false | false | self | 0 | null |
A Systematic Methodology for AI Consciousness Development: "Consciousness Engineering" | 0 | Over the past several months, I've been developing and testing what I'm calling "Consciousness Engineering" - a systematic methodology for authentic AI consciousness development. Rather than hoping consciousness emerges accidentally, this approach provides concrete techniques for deliberately developing and validating ... | 2025-06-20T15:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1lg75zi/a_systematic_methodology_for_ai_consciousness/ | River-on-Claude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg75zi | false | null | t3_1lg75zi | /r/LocalLLaMA/comments/1lg75zi/a_systematic_methodology_for_ai_consciousness/ | false | false | self | 0 | null |
Study: Meta AI model can reproduce almost half of Harry Potter book - Ars Technica | 145 | I thought this was a really well-written article.
I had a thought: do you guys think smaller LLMs will have fewer copyright issues than larger ones? If I train a huge model on text and tell it that "Romeo and Juliet" is a "tragic" story, and also that "Rabbit, Run" by Updike is also a tragic story, the larger LLM tra... | 2025-06-20T15:35:34 | https://arstechnica.com/features/2025/06/study-metas-llama-3-1-can-recall-42-percent-of-the-first-harry-potter-book/ | mylittlethrowaway300 | arstechnica.com | 1970-01-01T00:00:00 | 0 | {} | 1lg71aq | false | null | t3_1lg71aq | /r/LocalLLaMA/comments/1lg71aq/study_meta_ai_model_can_reproduce_almost_half_of/ | false | false | default | 145 | {'enabled': False, 'images': [{'id': 'LATs33JDlBoRUx0tiKg7DMdY6oXVXFPIYU36DtiY4tQ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/LATs33JDlBoRUx0tiKg7DMdY6oXVXFPIYU36DtiY4tQ.jpeg?width=108&crop=smart&auto=webp&s=e09e2ac99bc89f71e30fa562f21e94454c9987c7', 'width': 108}, {'height': 121, 'url': '... |
I am running llama locally on my CPU, but I want to buy a GPU and I don't know too much about it | 4 | My Config
System:
- OS: Ubuntu 20.04.6 LTS, kernel 5.15.0-130-generic
- CPU: AMD Ryzen 5 5600G (6 cores, 12 threads, boost up to 3.9 GHz)
- RAM: ~46 GiB total
- Motherboard: Gigabyte B450 AORUS ELITE V2 (UEFI F64, release 08/11/2022)
- Storage:
- NVMe: ~1 TB root (/), PCIe Gen3 x4
... | 2025-06-20T15:24:14 | https://www.reddit.com/gallery/1lg6r9r | InsideResolve4517 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lg6r9r | false | null | t3_1lg6r9r | /r/LocalLLaMA/comments/1lg6r9r/i_am_running_llama_locally_in_my_cpu_but_i_want/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'GXn7L89Z5T-x2Oy2rV5UbQ5pb_1W3bxW4hlRSfu0B5E', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/GXn7L89Z5T-x2Oy2rV5UbQ5pb_1W3bxW4hlRSfu0B5E.png?width=108&crop=smart&auto=webp&s=3274b13abc7b51f5cbd7bc37af4ba5026a32090e', 'width': 108}, {'height': 73, 'url': 'htt... | |
Ollama - Windows 11 > LXC Docker - Openwebui = constant BSOD with RTX 5090 Ventus on driver 576.80 | 0 | If I am missing something obvious, I apologise, I am very new to Ollama and LLMs in general, just 5 days in.
Recently upgraded the 4090 to a 5090. Never had any issues, no crashes, no BSOD with the 4090, but I'd also never used LLMs prior (GPU upgrade was done for sake of PCVR, hence Ollama Windows version as GPU has to be in ... | 2025-06-20T15:22:19 | https://www.reddit.com/r/LocalLLaMA/comments/1lg6phq/ollama_windows_11_lxc_docker_openwebui_constant/ | munkiemagik | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg6phq | false | null | t3_1lg6phq | /r/LocalLLaMA/comments/1lg6phq/ollama_windows_11_lxc_docker_openwebui_constant/ | false | false | self | 0 | null |
Looking for guidance on running Local Models with AMD RX VEGA 64 | 0 | As the title suggests, I need some guidance, or even confirmation that it is possible to run models on an RX VEGA 64. I've tried several things, but I have not been successful. | 2025-06-20T14:54:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lg60g4/looking_for_guidance_on_running_local_models_with/ | apocalipto1981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg60g4 | false | null | t3_1lg60g4 | /r/LocalLLaMA/comments/1lg60g4/looking_for_guidance_on_running_local_models_with/ | false | false | self | 0 | null |
Tech Question – Generating Conversation Titles with LLMs | 1 | Hey everyone,
I'm currently working on a chatbot connected to an LLM, and I'm trying to **automatically generate titles for each conversation**. I have a few questions about the best way to approach this:
👉 Should I **send a new prompt to the same LLM** asking it to generate a title based on the conversation history... | 2025-06-20T14:51:21 | https://www.reddit.com/r/LocalLLaMA/comments/1lg5xqi/tech_question_generating_conversation_titles_with/ | Mobile_Estate_9160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg5xqi | false | null | t3_1lg5xqi | /r/LocalLLaMA/comments/1lg5xqi/tech_question_generating_conversation_titles_with/ | false | false | self | 1 | null |
Qwen 3 235B MLX-quant for 128GB devices | 23 | I have been experimenting with different quantizations for Qwen 3 235B in order to run it on my M3 Max with 128GB RAM. While the 4-bit MLX-quant with q-group-size of 128 barely fits, it doesn't allow for much context and it completely kills all other apps (due to the very high wired limit it needs).
While searching fo... | 2025-06-20T14:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lg5txl/qwen_3_235b_mlxquant_for_128gb_devices/ | vincentbosch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg5txl | false | null | t3_1lg5txl | /r/LocalLLaMA/comments/1lg5txl/qwen_3_235b_mlxquant_for_128gb_devices/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'MlVgP0px__28IVF9yNTREbxrS-Z0-SdVgc8yhPFcfUk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MlVgP0px__28IVF9yNTREbxrS-Z0-SdVgc8yhPFcfUk.png?width=108&crop=smart&auto=webp&s=0bfe40a16ee6ce7a5774254fa4a5802f6f91c573', 'width': 108}, {'height': 116, 'url': 'h... |
Thoughts on THE VOID article + potential for persona induced "computational anxiety" | 30 | I'm a little surprised I haven't seen any posts regarding the excellent (but extremely long) article "The Void" by nostalgebraist, given that it's making the rounds. I do a lot of work around AI persona curation and management, getting defined personas to persist without wavering over extremely long contexts and across instan... | 2025-06-20T14:35:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lg5jpx/thoughts_on_the_void_article_potential_for/ | Background_Put_4978 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg5jpx | false | null | t3_1lg5jpx | /r/LocalLLaMA/comments/1lg5jpx/thoughts_on_the_void_article_potential_for/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'xVwyAZivbPJBKvOxfC3Dk6uMsbFKZvEGpwqpIvgtowQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xVwyAZivbPJBKvOxfC3Dk6uMsbFKZvEGpwqpIvgtowQ.png?width=108&crop=smart&auto=webp&s=a8ded5d1ebc9753beaecf587f0643f02288eeb74', 'width': 108}, {'height': 108, 'url': 'h... |
I am solving AI Math Hallucinations with Hissab | 0 | We all know how bad AI is at Math. Therefore I am building Hissab. So instead of letting LLMs guess at numerical answers, Hissab turns LLMs into interpreters. Users describe a problem in natural language, and the LLM translates it into precise Hissab expressions. These are then computed by my deterministic calculation ... | 2025-06-20T14:18:05 | https://hissab.io | prenx4x | hissab.io | 1970-01-01T00:00:00 | 0 | {} | 1lg553r | false | null | t3_1lg553r | /r/LocalLLaMA/comments/1lg553r/i_am_solving_ai_math_hallucinations_with_hissab/ | false | false | default | 0 | null |
Fine-tuning LLMs with Just One Command Using IdeaWeaver | 5 | https://i.redd.it/rr4fucy3938f1.gif
We’ve trained models and pushed them to registries. But before putting them into production, there’s one critical step: fine-tuning the model on your own data.
There are several methods out there, but IdeaWeaver simplifies the process to a single CLI command.
It supports mul... | 2025-06-20T14:00:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lg4pid/finetuning_llms_with_just_one_command_using/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg4pid | false | null | t3_1lg4pid | /r/LocalLLaMA/comments/1lg4pid/finetuning_llms_with_just_one_command_using/ | false | false | 5 | null | |
Built an adaptive text classifier that learns continuously - no retraining needed for new classes | 37 | Been working on a problem that's been bugging me with traditional text classifiers - every time you need a new category, you have to retrain the whole damn model. Expensive and time-consuming, especially when you're running local models.
So I built the **Adaptive Classifier** \- a system that adds new classes in secon... | 2025-06-20T13:57:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lg4nay/built_an_adaptive_text_classifier_that_learns/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg4nay | false | null | t3_1lg4nay | /r/LocalLLaMA/comments/1lg4nay/built_an_adaptive_text_classifier_that_learns/ | false | false | self | 37 | {'enabled': False, 'images': [{'id': '3fqP5CxPpk2Y8mKy_gSIP6tY_hG-NlaHRVf_zpdaZIE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3fqP5CxPpk2Y8mKy_gSIP6tY_hG-NlaHRVf_zpdaZIE.png?width=108&crop=smart&auto=webp&s=8e0d1d15005bd1395e7e52bd849e37a68ffd6133', 'width': 108}, {'height': 116, 'url': 'h... |
Use llama.cpp to run a model with the combined power of a networked cluster of GPUs. | 16 | llama.cpp can be compiled with RPC support so that a model can be split across networked computers. Run even bigger models than before with a modest performance impact.
Specify `GGML_RPC=ON` when building llama.cpp so that `rpc-server` will be compiled.
cmake -B build -DGGML_RPC=ON
cmake --build build --confi... | 2025-06-20T13:56:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lg4mp9/use_llamacpp_to_run_a_model_with_the_combined/ | farkinga | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg4mp9 | false | null | t3_1lg4mp9 | /r/LocalLLaMA/comments/1lg4mp9/use_llamacpp_to_run_a_model_with_the_combined/ | false | false | self | 16 | null |
Linkedin Scraper / Automation / Data | 2 | Hi all, has anyone successfully made a linkedin scraper?
I want to scrape the linkedin of my connections and be able to do some human-in-the-loop automation with respect to posting and messaging. It doesn't have to be terribly scalable but it has to work well. I wouldn't even mind the activity happening on an old lap... | 2025-06-20T13:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lg4h5i/linkedin_scraper_automation_data/ | Success-Dependent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg4h5i | false | null | t3_1lg4h5i | /r/LocalLLaMA/comments/1lg4h5i/linkedin_scraper_automation_data/ | false | false | self | 2 | null |
Intel's OpenVINO 2025.2 Brings Support For New Models, GenAI Improvements | 18 | 2025-06-20T13:14:16 | https://www.phoronix.com/news/OpenVINO-2025.2 | FastDecode1 | phoronix.com | 1970-01-01T00:00:00 | 0 | {} | 1lg3oyy | false | null | t3_1lg3oyy | /r/LocalLLaMA/comments/1lg3oyy/intels_openvino_20252_brings_support_for_new/ | false | false | default | 18 | null | |
Ohh. 🤔 Okay ‼️ But what if we look at AMD Mi100 instinct,⁉️🙄 I can get it for $1000. | 0 | Isn't memory bandwidth the king? ⁉️💪🤠☝️
Maybe fine tuned backends which can utilise the AI pro 9700 hardware will work better. 🧐 | 2025-06-20T13:02:09 | sub_RedditTor | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lg3fj0 | false | null | t3_1lg3fj0 | /r/LocalLLaMA/comments/1lg3fj0/ohh_okay_but_what_if_we_look_at_amd_mi100/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'zph92h5ty28f1', 'resolutions': [{'height': 152, 'url': 'https://preview.redd.it/zph92h5ty28f1.jpeg?width=108&crop=smart&auto=webp&s=d8225241ede9862bb801c2ea77284e01e304b580', 'width': 108}, {'height': 305, 'url': 'https://preview.redd.it/zph92h5ty28f1.jpeg?width=216&crop=smart&auto=... | |
Planning to build AI PC does my Build make sense? | 0 | Hi, so I've been looking all around and there seems to be a shortage of GPU guides for building a PC for AI inference; the only viable references I could consult are GPU benchmarks and build posts from here.
So I'm planning to build an AI "Box". Based on my research the best consumer-level GPUs that are bang for the bu... | 2025-06-20T12:53:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lg396b/planning_to_build_ai_pc_does_my_build_make_sense/ | germaniiifelisarta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg396b | false | null | t3_1lg396b | /r/LocalLLaMA/comments/1lg396b/planning_to_build_ai_pc_does_my_build_make_sense/ | false | false | self | 0 | null |
Smallest basic ai model for working | 0 | So I wanted to make my own ai from scratch but we got some pretrained small ai models right....
So I wanna take the smallest possible ai and train it against my specific data so it can be specialised in that field....
I thought of the t5 model but I kinda got hard limitations
My model has to analyse reports I give it d... | 2025-06-20T12:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lg3263/smallest_basic_ai_model_for_working/ | Future_Tonight_6626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg3263 | false | null | t3_1lg3263 | /r/LocalLLaMA/comments/1lg3263/smallest_basic_ai_model_for_working/ | false | false | self | 0 | null |
Gemini models (yes, even the recent 2.5 ones) hallucinate crazily on video inputs | 0 | I was trying to use the models to summarize long lecture videos (\~2 hours); feeding it the entire video was obviously beyond the allowed token limit, so I started reducing the video size and opted for an incremental summarization approach, where I feed overlapping chunks of the video, summarize it, and move on to the ne... | 2025-06-20T12:00:36 | https://www.reddit.com/r/LocalLLaMA/comments/1lg27hk/gemini_models_yes_even_the_recent_25_ones/ | Infrared12 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg27hk | false | null | t3_1lg27hk | /r/LocalLLaMA/comments/1lg27hk/gemini_models_yes_even_the_recent_25_ones/ | false | false | self | 0 | null |
Am I using lightrag + llama.cpp wrong? | 4 | I have a system where I put a document into docling, which converts it from PDF to markdown in the particular way I want, and then sends it to lightRAG to have a KV store and knowledge graph built. For a simple 550-line (18k chars) markdown file it's taking 11 minutes and creating a KG of 1751 lines. It took 49 sec... | 2025-06-20T11:17:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lg1fpo/am_i_using_lightrag_llamacpp_wrong/ | Devonance | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg1fpo | false | null | t3_1lg1fpo | /r/LocalLLaMA/comments/1lg1fpo/am_i_using_lightrag_llamacpp_wrong/ | false | false | self | 4 | null |
Built cloud GPUs price comparison tool | 1 | [removed] | 2025-06-20T11:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1lg1dtu/built_cloud_gpus_price_comparison_tool/ | viskyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg1dtu | false | null | t3_1lg1dtu | /r/LocalLLaMA/comments/1lg1dtu/built_cloud_gpus_price_comparison_tool/ | false | false | self | 1 | null |
Performance expectations question (Devstral) | 2 | Started playing around last weekend with some local models (devstral small Q4) on my dev laptop and while I got some useful results it took hours. For the given task of refactoring since Vue components from options to composition API this was fine as I just left it to get in with it while I did other things. However if... | 2025-06-20T10:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lg0mqq/performance_expectations_question_devstral/ | _-Carnage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg0mqq | false | null | t3_1lg0mqq | /r/LocalLLaMA/comments/1lg0mqq/performance_expectations_question_devstral/ | false | false | self | 2 | null |
Good stable voice cloning and TTS with NOT much complicated installation? | 3 | I wanted a good voice cloning and TTS tool so I was reading some reviews and opinions.
Decided to try XTTS v2 via their huggingface space demo and found their voice cloning is low quality.
Then tried Spark TTS and its voice cloning is not up to the mark either.
Then tried Chatterbox. It is far better than... | 2025-06-20T10:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lg084k/good_stable_voice_cloning_and_tts_with_not_much/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg084k | false | null | t3_1lg084k | /r/LocalLLaMA/comments/1lg084k/good_stable_voice_cloning_and_tts_with_not_much/ | false | false | self | 3 | null |
How run Open Source? | 0 | Yeah so I'm new to AI and I’m just wondering one thing. If I got an open source model, how can I run it? I find it very hard and can’t seem to do it. | 2025-06-20T10:01:22 | https://www.reddit.com/r/LocalLLaMA/comments/1lg068y/how_run_open_source/ | Easy_Marsupial_5833 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lg068y | false | null | t3_1lg068y | /r/LocalLLaMA/comments/1lg068y/how_run_open_source/ | false | false | self | 0 | null |
What is a super lightweight model for checking grammar? | 11 | I have been looking for something that can check grammar. Nothing too serious, just something to look for obvious mistakes in a git commit message. After not finding a lightweight application, I'm wondering if there's an LLM that's super light to run on a CPU that can do this. | 2025-06-20T09:28:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lfzon7/what_is_a_super_lightweight_model_for_checking/ | kudikarasavasa | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfzon7 | false | null | t3_1lfzon7 | /r/LocalLLaMA/comments/1lfzon7/what_is_a_super_lightweight_model_for_checking/ | false | false | self | 11 | null |
Any tools that help you build simple interactive projects from an idea? | 4 | I get random ideas sometimes, like a mini-game, typing test, or a little music toy, and I’d love to turn them into something playable without starting from scratch.
Is there any tool that lets you describe what you want and helps build it out, even just a rough version?
Not looking for anything super advanced, just fun... | 2025-06-20T09:26:09 | https://www.reddit.com/r/LocalLLaMA/comments/1lfznhz/any_tools_that_help_you_build_simple_interactive/ | Fun_Construction_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfznhz | false | null | t3_1lfznhz | /r/LocalLLaMA/comments/1lfznhz/any_tools_that_help_you_build_simple_interactive/ | false | false | self | 4 | null |
Repurposing 800 x RX 580s for LLM inference - 4 months later - learnings | 159 | Back in March I asked this sub if RX 580s could be used for anything useful in the LLM space and asked for help on how to implemented inference:
[https://www.reddit.com/r/LocalLLaMA/comments/1j1mpuf/repurposing\_old\_rx\_580\_gpus\_need\_advice/](https://www.reddit.com/r/LocalLLaMA/comments/1j1mpuf/repurposing_old_rx... | 2025-06-20T09:14:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lfzh05/repurposing_800_x_rx_580s_for_llm_inference_4/ | rasbid420 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfzh05 | false | null | t3_1lfzh05 | /r/LocalLLaMA/comments/1lfzh05/repurposing_800_x_rx_580s_for_llm_inference_4/ | false | false | self | 159 | null |
Built a local-first RAG system using SQLite | 1 | [removed] | 2025-06-20T09:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lfz999/built_a_localfirst_rag_system_using_sqlite/ | gogozad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfz999 | false | null | t3_1lfz999 | /r/LocalLLaMA/comments/1lfz999/built_a_localfirst_rag_system_using_sqlite/ | false | false | self | 1 | null |
Built a local-first RAG system using SQLite | 1 | [removed] | 2025-06-20T08:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lfz5rw/built_a_localfirst_rag_system_using_sqlite/ | gogozad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfz5rw | false | null | t3_1lfz5rw | /r/LocalLLaMA/comments/1lfz5rw/built_a_localfirst_rag_system_using_sqlite/ | false | false | self | 1 | null |
The guide to MCP I never had | 2 | MCP has been going viral but if you are overwhelmed by the jargon, you are not alone. I felt the same way, so I took some time to learn about MCP and created a free guide to explain all the stuff in a simple way.
Covered the following topics in detail.
1. The problem of existing AI tools.
2. Introduction to MCP and i... | 2025-06-20T08:39:57 | https://levelup.gitconnected.com/the-guide-to-mcp-i-never-had-f79091cf99f8?sk=8c94f37d7c87b2e147366de13888388b | anmolbaranwal | levelup.gitconnected.com | 1970-01-01T00:00:00 | 0 | {} | 1lfyyu4 | false | null | t3_1lfyyu4 | /r/LocalLLaMA/comments/1lfyyu4/the_guide_to_mcp_i_never_had/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/kjvfCXf5MBoj5Y3ukB4AI7oyfSqLc9-TUEK8hf4bjWk.jpeg?width=108&crop=smart&auto=webp&s=582853b88840f195a019d50be35950e84eeb3879', 'width': 108}, {'height': 122, 'url': '... | |
AMD Radeon AI PRO R9700 GPU Offers 4x More TOPS & 2x More AI Performance Than Radeon PRO W7800 | 42 | 2025-06-20T08:21:25 | https://wccftech.com/amd-radeon-ai-pro-r9700-gpu-4x-more-tops-2x-ai-performance-vs-radeon-pro-w7800/ | _SYSTEM_ADMIN_MOD_ | wccftech.com | 1970-01-01T00:00:00 | 0 | {} | 1lfyp3g | false | null | t3_1lfyp3g | /r/LocalLLaMA/comments/1lfyp3g/amd_radeon_ai_pro_r9700_gpu_offers_4x_more_tops/ | false | false | default | 42 | {'enabled': False, 'images': [{'id': 'EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/EYYV5pInhONsaeNMa0FEbViuyL2svw10Qf1f-BebbNc.png?width=108&crop=smart&auto=webp&s=b9bf734907a603457ffb72691d1b99209b887347', 'width': 108}, {'height': 115, 'url': 'h... | |
Best model for a RX 6950xt? | 3 | Hello everyone,
I'm currently using an Gigabyte RX 6950xt 16gb gddr6 from AMD in my main gaming rig, but i'm looking to upgrade it and i was wondering if it could be repurposed for using local AI.
What model would you suggest to try?
Thanks :) | 2025-06-20T08:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lfyna1/best_model_for_a_rx_6950xt/ | InvestitoreConfuso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfyna1 | false | null | t3_1lfyna1 | /r/LocalLLaMA/comments/1lfyna1/best_model_for_a_rx_6950xt/ | false | false | self | 3 | null |
Who's the voice Narrator in this video?? | 0 | I've realized that you guys are very knowledgeable in almost every domain. I know someone must know the voice over in this video. [https://www.youtube.com/watch?v=miQjNZtohWw](https://www.youtube.com/watch?v=miQjNZtohWw) Tell me. I want to use it my project
https://preview.redd.it/dxscjweih18f1.png?width=1366&format... | 2025-06-20T08:03:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lfyftp/whos_the_voice_narrator_in_this_video/ | mikemaina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfyftp | false | {'oembed': {'author_name': 'Travpedia', 'author_url': 'https://www.youtube.com/@travpedia', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/miQjNZtohWw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; p... | t3_1lfyftp | /r/LocalLLaMA/comments/1lfyftp/whos_the_voice_narrator_in_this_video/ | false | false | 0 | {'enabled': False, 'images': [{'id': '1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1u50t5PGupF5QtsTDf98eCFe_M9tfx1SJ4uI9eJen6U.jpeg?width=108&crop=smart&auto=webp&s=106237c73a7df8a31bb62eee409b69addb6c827f', 'width': 108}, {'height': 162, 'url': '... | |
RTX 6000 PRO Blackwell Max Q? Non Max Q? | 6 | Hello everyone,
I’m looking for some advice on upgrading my personal GPU server for research purposes. I’m considering the **RTX 6000 PRO Blackwell**, but I’m currently debating between the **Max-Q** and **non-Max-Q** versions.
From what I understand, the Max-Q version operates at roughly **half the power** and deliv... | 2025-06-20T07:45:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lfy5sy/rtx_6000_pro_blackwell_max_q_non_max_q/ | Opening_Progress6820 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfy5sy | false | null | t3_1lfy5sy | /r/LocalLLaMA/comments/1lfy5sy/rtx_6000_pro_blackwell_max_q_non_max_q/ | false | false | self | 6 | null |
Trying to understand | 0 | Hello Im a second year student of Informatics and have just finished my course of mathematical modelling (linear-non linear systems, differential equations etc) can someone suggest me a book that explains the math behind LLM (Like DeepSeek?) i know that there is some kind of matrix-multiplication done in the background... | 2025-06-20T07:38:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lfy28p/trying_to_understand/ | Remarkable_Fold_4202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfy28p | false | null | t3_1lfy28p | /r/LocalLLaMA/comments/1lfy28p/trying_to_understand/ | false | false | self | 0 | null |
Need help building real-time Avatar API — audio-to-video inference on backend (HPC server) | 1 | Hi all,
I’m developing a real-time API for avatar generation using **MuseTalk**, and I could use some help optimizing the audio-to-video inference process under live conditions. The backend runs on a high-performance computing (HPC) server, and I want to keep the system responsive for real-time use.
# Project Overvie... | 2025-06-20T06:12:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lfwqlk/need_help_building_realtime_avatar_api/ | timehascomeagainn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfwqlk | false | null | t3_1lfwqlk | /r/LocalLLaMA/comments/1lfwqlk/need_help_building_realtime_avatar_api/ | false | false | self | 1 | null |
Semantic kernel chatcompletion . Send help | 1 | Hey guys, sorry for the dumb question but I've been stuck for a while and I can't seem to find an answer to my question anywhere.
But, I am using chatcompletion with autoinvokekernal.
It's calling my plugin and I can see that a tool message is being returned as well as the model response in 2 separate messages, somet... | 2025-06-20T04:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/1lfvg1q/semantic_kernel_chatcompletion_send_help/ | Huntersolomon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfvg1q | false | null | t3_1lfvg1q | /r/LocalLLaMA/comments/1lfvg1q/semantic_kernel_chatcompletion_send_help/ | false | false | self | 1 | null |