| title (string, 1–300 chars) | score (int64, 0–8.54k) | selftext (string, 0–41.5k chars) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14) | url (string, 0–878 chars) | author (string, 3–20 chars) | domain (string, 0–82 chars) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (7 classes) | id (string, 7 chars) | locked (bool, 2 classes) | media (string, 646–1.8k chars) | name (string, 10 chars) | permalink (string, 33–82 chars) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213 chars) | ups (int64, 0–8.54k) | preview (string, 301–5.01k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best models for 8x3090 | 1 | [removed] | 2025-05-18T16:28:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kpnzpx/best_models_for_8x3090/ | chub0ka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpnzpx | false | null | t3_1kpnzpx | /r/LocalLLaMA/comments/1kpnzpx/best_models_for_8x3090/ | false | false | self | 1 | null |
Curly quotes | 0 | A publisher wrote me:<br>> It's a continuing source of frustration that LLMs can't handle curly quotes, as just about everything else in our writing and style guide can be aligned with generated content.<br>Does anyone know of a local LLM that can curl quotes correctly? Such as:<br>> ''E's got a 'ittle box 'n a big 'un,' she... | 2025-05-18T16:19:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kpnrll/curly_quotes/ | autonoma_2042 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpnrll | false | null | t3_1kpnrll | /r/LocalLLaMA/comments/1kpnrll/curly_quotes/ | false | false | self | 0 | null |
best realtime STT API atm? | 0 | as above | 2025-05-18T16:15:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kpnovv/best_realtime_stt_api_atm/ | boringblobking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpnovv | false | null | t3_1kpnovv | /r/LocalLLaMA/comments/1kpnovv/best_realtime_stt_api_atm/ | false | false | self | 0 | null |
Easy Way to Enable Thinking in Non-Thinking Models | 1 | Thinking models are useful. They think before they answer, which leads to more accurate answers, because they can catch a mistake mid-thought.<br>But not all models can do this and training a model to do this is immensely hard and time-consuming.<br>However, there's a work-around. The work-around is just giving a pro... | 2025-05-18T16:10:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kpnkib/easy_way_to_enable_thinking_in_nonthinking_models/ | Accurate_Rope5163 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpnkib | false | null | t3_1kpnkib | /r/LocalLLaMA/comments/1kpnkib/easy_way_to_enable_thinking_in_nonthinking_models/ | false | false | self | 1 | null |
I made an AI agent to control a drone using Qwen2 and smolagents from hugging face | 36 | I used the smolagents library and hosted it on [Hugging Face](https://www.linkedin.com/company/huggingface/). Deepdrone is basically an AI agent that allows you to control a drone via LLM and run simple missions with the agent. You can test it full locally with Ardupilot (I did run a simulated mission on my mac) and I ... | 2025-05-18T16:05:10 | https://www.reddit.com/r/LocalLLaMA/comments/1kpnfvo/i_made_an_ai_agent_to_control_a_drone_using_qwen2/ | _twelvechess | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpnfvo | false | null | t3_1kpnfvo | /r/LocalLLaMA/comments/1kpnfvo/i_made_an_ai_agent_to_control_a_drone_using_qwen2/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'SfDBbTwoSlP3t49X-DzOenP7DPUl56haACsFp5qBk6E', 'resolutions': [], 'source': {'height': 96, 'url': 'https://external-preview.redd.it/Jr2u9t7hHrCf63fubhl1KzYbXy626ftH82VNyHypf5Q.jpg?auto=webp&s=aab36e1b3c82df95001d7fe771b306f5a5a4f4f9', 'width': 96}, 'variants': {}}]} |
Voice to text | 1 | Sorry if this is the wrong place to ask this! Are there any llm apps for ios that support voice to chat but back and forth? I don’t want to have to keep hitting submit after it translates my voice to text. Would be nice to talk to AI while driving or going on a run. | 2025-05-18T15:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kpn8ot/voice_to_text/ | PickleSavings1626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpn8ot | false | null | t3_1kpn8ot | /r/LocalLLaMA/comments/1kpn8ot/voice_to_text/ | false | false | self | 1 | null |
Formated Output to JSON | 1 | [removed] | 2025-05-18T15:01:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kpm025/formated_output_to_json/ | Banani23 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpm025 | false | null | t3_1kpm025 | /r/LocalLLaMA/comments/1kpm025/formated_output_to_json/ | false | false | self | 1 | null |
AI Chatspace | 1 | [removed] | 2025-05-18T14:42:50 | https://www.reddit.com/r/LocalLLaMA/comments/1kplko5/ai_chatspace/ | Zestyclose-Ad5427 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kplko5 | false | null | t3_1kplko5 | /r/LocalLLaMA/comments/1kplko5/ai_chatspace/ | false | false | self | 1 | null |
10 GitHub Repositories to Master Large Language Models | 0 | If you are not familiar with large language models (LLMs) today, you may already be falling behind in the AI revolution. Companies are increasingly integrating LLM-based applications into their workflows. As a result, there is a high demand for LLM engineers and operations engineers who can train, fine-tune, evaluate, ... | 2025-05-18T14:32:26 | https://www.kdnuggets.com/10-github-repositories-to-master-large-language-models | kingabzpro | kdnuggets.com | 1970-01-01T00:00:00 | 0 | {} | 1kplcdp | false | null | t3_1kplcdp | /r/LocalLLaMA/comments/1kplcdp/10_github_repositories_to_master_large_language/ | false | false | default | 0 | null |
Haystack AI Tutorial: Building Agentic Workflows | 1 | Learn how to use Haystack's dataclasses, components, document store, generator, retriever, pipeline, tools, and agents to build an agentic workflow that will help you invoke multiple tools based on user queries. | 2025-05-18T14:24:56 | https://www.datacamp.com/tutorial/haystack-ai-tutorial | kingabzpro | datacamp.com | 1970-01-01T00:00:00 | 0 | {} | 1kpl6k0 | false | null | t3_1kpl6k0 | /r/LocalLLaMA/comments/1kpl6k0/haystack_ai_tutorial_building_agentic_workflows/ | false | false | default | 1 | null |
New to local LLM for coding, need recommendations | 1 | [removed] | 2025-05-18T13:17:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kpjrcx/new_to_local_llm_for_coding_need_recommendations/ | Intotheblue1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpjrcx | false | null | t3_1kpjrcx | /r/LocalLLaMA/comments/1kpjrcx/new_to_local_llm_for_coding_need_recommendations/ | false | false | self | 1 | null |
Lang Chains, Lang Graph, Llama | 0 | Hi guys!<br>I'm planning to start my career with AI...and have come across these names " Lang chains, Lang Graph and Llama" a lot lately!<br>I want to understand what they are and from where I can learn about them!<br>And also if possible! Can you please tell me where can I learn how to write a schema for agents? | 2025-05-18T12:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kpip9q/lang_chains_lang_graph_llama/ | DarkVeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpip9q | false | null | t3_1kpip9q | /r/LocalLLaMA/comments/1kpip9q/lang_chains_lang_graph_llama/ | false | false | self | 0 | null |
Asilab AI lab claiming they've created artificial superintelligence!!! real breakthrough or investor bait? | 1 | [removed] | 2025-05-18T11:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/1kpi1w0/asilab_ai_lab_claiming_theyve_created_artificial/ | Remote_Insurance_228 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpi1w0 | false | null | t3_1kpi1w0 | /r/LocalLLaMA/comments/1kpi1w0/asilab_ai_lab_claiming_theyve_created_artificial/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3wOUBFRZUoSVWmml6HqYR3gRn1deDevHR0KTvMvCQUw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=108&crop=smart&auto=webp&s=3b31ac06dd1d6bd0ae8c4f4fed808273c1f6d6bd', 'width': 108}, {'height': 162, 'url': 'h... |
Asilab AI lab claiming they've created artificial superintelligence??? real breakthrough or investor bait? | 1 | [removed] | 2025-05-18T11:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kphzoy/asilab_ai_lab_claiming_theyve_created_artificial/ | ankimedic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kphzoy | false | null | t3_1kphzoy | /r/LocalLLaMA/comments/1kphzoy/asilab_ai_lab_claiming_theyve_created_artificial/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3wOUBFRZUoSVWmml6HqYR3gRn1deDevHR0KTvMvCQUw', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/tAJpbRPLJDV98jg6xr-7MzJjkIVZGp66hfaaN1w3dGU.jpg?width=108&crop=smart&auto=webp&s=3b31ac06dd1d6bd0ae8c4f4fed808273c1f6d6bd', 'width': 108}, {'height': 162, 'url': 'h... |
I have just dropped in from google. What do you guys think is the absolute best and most powerful LLM? | 0 | Can't be ChatGPT, that's for certain. Possibly Qwen3? | 2025-05-18T11:37:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kphxoz/i_have_just_dropped_in_from_google_what_do_you/ | Quirky_Resist_7478 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kphxoz | false | null | t3_1kphxoz | /r/LocalLLaMA/comments/1kphxoz/i_have_just_dropped_in_from_google_what_do_you/ | false | false | self | 0 | null |
Stack Overflow Should be Used by LLMs and Also Contributed to it Actively as a Public Duty | 0 | I have used stack overflow (StOv) in the past and seen how people of different backgrounds contribute to solutions to problems that other people face. But now that ChatGPT has made it possible to get your answers directly, we do not use awesome StOv that much anymore, the usage of StOv has plummeted drastically. The re... | 2025-05-18T11:29:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kphsn4/stack_overflow_should_be_used_by_llms_and_also/ | Desperate_Rub_1352 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kphsn4 | false | null | t3_1kphsn4 | /r/LocalLLaMA/comments/1kphsn4/stack_overflow_should_be_used_by_llms_and_also/ | false | false | self | 0 | null |
Meta is hosting Llama 3.3 8B Instruct on OpenRoute | 88 | # Meta: Llama 3.3 8B Instruct (free)<br># meta-llama/llama-3.3-8b-instruct:free<br>Created May 14, 2025. 128,000 context. $0/M input tokens, $0/M output tokens.<br>A lightweight and ultra-fast variant of Llama 3.3 70B, for use when quick response times are needed most.<br>Provider is Meta. Thought? | 2025-05-18T11:18:38 | https://www.reddit.com/r/LocalLLaMA/comments/1kphmb4/meta_is_hosting_llama_33_8b_instruct_on_openroute/ | Asleep-Ratio7535 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kphmb4 | false | null | t3_1kphmb4 | /r/LocalLLaMA/comments/1kphmb4/meta_is_hosting_llama_33_8b_instruct_on_openroute/ | false | false | self | 88 | null |
What the best model to run on m1 pro, 16gb ram for coders? | 1 | [removed] | 2025-05-18T11:11:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kphi34/what_the_best_model_to_run_on_m1_pro_16gb_ram_for/ | k4l3m3r0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kphi34 | false | null | t3_1kphi34 | /r/LocalLLaMA/comments/1kphi34/what_the_best_model_to_run_on_m1_pro_16gb_ram_for/ | false | false | self | 1 | null |
Should I finetune or use fewshot prompting? | 3 | I have document images with size 4000x2000. I want the LLMs to detect certain visual elements from the image. The visual elements do not contain text so I am not sure if sending OCR text alongwith the images will do any good.<br>I can't use a detection model due to a few policy limitations and want to work with LLMs/VLMs... | 2025-05-18T10:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kph6b1/should_i_finetune_or_use_fewshot_prompting/ | GHOST--1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kph6b1 | false | null | t3_1kph6b1 | /r/LocalLLaMA/comments/1kph6b1/should_i_finetune_or_use_fewshot_prompting/ | false | false | self | 3 | null |
Reverse engineer hidden features/model responses in LLMs. Any ideas or tips? | 10 | Hi all! I'd like to dive into uncovering what might be "hidden" in LLM training data—like Easter eggs, watermarks, or unique behaviours triggered by specific prompts.<br>One approach could be to look for creative ideas or strategies to craft prompts that might elicit unusual or informative responses from models. Have any... | 2025-05-18T10:09:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kpgla3/reverse_engineer_hidden_featuresmodel_responses/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpgla3 | false | null | t3_1kpgla3 | /r/LocalLLaMA/comments/1kpgla3/reverse_engineer_hidden_featuresmodel_responses/ | false | false | self | 10 | null |
Riffusion Ai music generator Spoken Word converted to lip sync for Google Veo 2 videos. Riffusion spoken word has more emotion than any TTS voice. I used https://www.sievedata.com/ and GoEnhance.Ai to Lip sync. I used Zonos TTS & Voice cloning for the audio. https://podcast.adobe.com/en clean audio. | 0 | 2025-05-18T10:00:55 | https://v.redd.it/t4dlpff2ii1f1 | Extension-Fee-8480 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpggqp | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/t4dlpff2ii1f1/DASHPlaylist.mpd?a=1750154468%2CM2VmM2EwMmUzNzkzOTg0YWViNTJlZmJkODJjMjlmN2E4NjA5YzA3Y2JiMzU4NmU5ZGM0NjkzYWRlZDNlYTMyNg%3D%3D&v=1&f=sd', 'duration': 22, 'fallback_url': 'https://v.redd.it/t4dlpff2ii1f1/DASH_720.mp4?source=fallback', 'ha... | t3_1kpggqp | /r/LocalLLaMA/comments/1kpggqp/riffusion_ai_music_generator_spoken_word/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Y2tiZGhpZjJpaTFmMW1M4y_gd9M_IdAj1J1Bb-dzHluQuZIsrsJqR6CFq4ZW.png?width=108&crop=smart&format=pjpg&auto=webp&s=91d1cb3b0e9dd7a2c4fc7ac8143e7c8357bae... | ||
A Sleek, Powerful Frontend for Local LLMs | 0 | [**Magai**](https://magai.co/?via=us) delivers a clean, polished interface for running local large language models. It supports multiple AI backends like LLaMA, Kobold, OpenAI, and Anthropic, all within one streamlined app.<br>Key highlights:<br>* Built-in memory and persona management for better context retention<br>* Easy p... | 2025-05-18T09:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kpgehu/a_sleek_powerful_frontend_for_local_llms/ | learnowi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpgehu | false | null | t3_1kpgehu | /r/LocalLLaMA/comments/1kpgehu/a_sleek_powerful_frontend_for_local_llms/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'Aq0oX7zdwgLTANSjHO-mzeAkBp_h7PGDYZ8U8ywqI_4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-XoDp64ieKqGsD64nH8hvL4bHOPRKZx5ZL9Iw8ipwms.jpg?width=108&crop=smart&auto=webp&s=7d85b88a5c7f57975645fc03c2037d11a5aa1ad7', 'width': 108}, {'height': 113, 'url': 'h... |
Single source of truth for model parameters in local setup | 1 | [removed] | 2025-05-18T09:40:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kpg6eb/single_source_of_truth_for_model_parameters_in/ | batsba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpg6eb | false | null | t3_1kpg6eb | /r/LocalLLaMA/comments/1kpg6eb/single_source_of_truth_for_model_parameters_in/ | false | false | self | 1 | null |
Inspired by Anthropic’s Biology of an LLM: Exploring Prompt Cues in Two LLMs | 19 | Hello Everyone,<br>I recently read [Anthropic’s Biology of an LLM](https://transformer-circuits.pub/2025/attribution-graphs/biology.html#dives-cot:~:text=%C2%A7%C2%A011-,Chain%2Dof%2Dthought%20Faithfulness,-Language%20models%20%E2%80%9Cthink) paper and was struck by the behavioural changes they highlighted.<br>I agree that... | 2025-05-18T09:16:09 | https://www.reddit.com/gallery/1kpfu72 | BriefAd4761 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kpfu72 | false | null | t3_1kpfu72 | /r/LocalLLaMA/comments/1kpfu72/inspired_by_anthropics_biology_of_an_llm/ | false | false |  | 19 | null |
Inspired by Anthropic’s Biology of an LLM: Exploring Prompt Cues in Two LLMs | 1 | [removed] | 2025-05-18T09:12:45 | https://www.reddit.com/gallery/1kpfsj0 | BriefAd4761 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kpfsj0 | false | null | t3_1kpfsj0 | /r/LocalLLaMA/comments/1kpfsj0/inspired_by_anthropics_biology_of_an_llm/ | false | false | 1 | null | |
Has anyone used TTS or a voice cloning to do a call return message on your phone? | 5 | What are some good messages or angry phone message from TTS? | 2025-05-18T09:07:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kpfq0h/has_anyone_used_tts_or_a_voice_cloning_to_do_a/ | Extension-Fee-8480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpfq0h | false | null | t3_1kpfq0h | /r/LocalLLaMA/comments/1kpfq0h/has_anyone_used_tts_or_a_voice_cloning_to_do_a/ | false | false | self | 5 | null |
Looking for text adventure front-end | 3 | Hey there. In recent times I got a penchant for ai text adventures while the general chat like ones are fine I was wondering if anyone could recommend me some kind of a front-end that did more than just used a prompt.<br>My main requirements are:<br>- Auto updating or one button-press updating world info<br>- Keeping track of o... | 2025-05-18T09:03:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kpfo9u/looking_for_text_adventure_frontend/ | HeatTheForge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpfo9u | false | null | t3_1kpfo9u | /r/LocalLLaMA/comments/1kpfo9u/looking_for_text_adventure_frontend/ | false | false | self | 3 | null |
SOTA local vision model choices in May 2025? Also is there a good multimodal benchmark? | 14 | I'm looking for a collection of local models to run local ai automation tooling on my RTX 3090s, so I don't need creative writing, nor do I want to overly focus on coding (as I'll keep using gemini 2.5 pro for actual coding), though some of my tasks will be about summarizing and understanding code, so it definitely hel... | 2025-05-18T08:46:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kpffub/sota_local_vision_model_choices_in_may_2025_also/ | michaelsoft__binbows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpffub | false | null | t3_1kpffub | /r/LocalLLaMA/comments/1kpffub/sota_local_vision_model_choices_in_may_2025_also/ | false | false | self | 14 | null |
I Yelled My MVP Idea and Got a FastAPI Backend in 3 Minutes | 0 | Every time I start a new side project, I hit the same wall:<br>Auth, CORS, password hashing—Groundhog Day.<br>Meanwhile Pieter Levels ships micro-SaaS by breakfast.<br>**“What if I could just say my idea out loud and let AI handle the boring bits?”**<br>Enter **Spitcode**—a tiny, local pipeline that turns a 10-second voi... | 2025-05-18T08:19:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kpf251/i_yelled_my_mvp_idea_and_got_a_fastapi_backend_in/ | IntelligentHope9866 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpf251 | false | null | t3_1kpf251 | /r/LocalLLaMA/comments/1kpf251/i_yelled_my_mvp_idea_and_got_a_fastapi_backend_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'HH4ipsr8vrX14hBcxPWy9fvouEY_nJ5_IPcmeGnh3eo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VVbwVkyFE6oEWR0f-WDHzihgB5sSRhhUkKjfPY0-lOU.jpg?width=108&crop=smart&auto=webp&s=e02c63802781b0f4429b6112a590f9166ed2321b', 'width': 108}, {'height': 108, 'url': 'h... |
Uncensoring Qwen3 - Update | 285 | **GrayLine** is my fine-tuning project based on **Qwen3**. The goal is to produce models that respond directly and neutrally to sensitive or controversial questions, without moralizing, refusing, or redirecting—while still maintaining solid reasoning ability.<br>Training setup:<br>* Framework: Unsloth (QLoRA)<br>* LoRA: Rank ... | 2025-05-18T07:33:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kpefrt/uncensoring_qwen3_update/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpefrt | false | null | t3_1kpefrt | /r/LocalLLaMA/comments/1kpefrt/uncensoring_qwen3_update/ | false | false | self | 285 | {'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'h... |
Seeking Advice on Complex RAG Project with Voice Integration, Web, SQL, and NLP | 1 | [removed] | 2025-05-18T07:25:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kpeb83/seeking_advice_on_complex_rag_project_with_voice/ | Outside-Narwhal9948 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpeb83 | false | null | t3_1kpeb83 | /r/LocalLLaMA/comments/1kpeb83/seeking_advice_on_complex_rag_project_with_voice/ | false | false | self | 1 | null |
Uncensoring Qwen3 - Update | 1 | [removed] | 2025-05-18T07:24:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kpeb5s/uncensoring_qwen3_update/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpeb5s | false | null | t3_1kpeb5s | /r/LocalLLaMA/comments/1kpeb5s/uncensoring_qwen3_update/ | false | false | self | 1 | null |
Uncensoring Qwen3 - Update | 1 | [removed] | 2025-05-18T07:24:08 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kpeaqv | false | null | t3_1kpeaqv | /r/LocalLLaMA/comments/1kpeaqv/uncensoring_qwen3_update/ | false | false | default | 1 | null | ||
Fine-Tuning Qwen3 for Unfiltered, Neutral Responses | 1 | [removed] | 2025-05-18T07:22:02 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kpe9o1 | false | null | t3_1kpe9o1 | /r/LocalLLaMA/comments/1kpe9o1/finetuning_qwen3_for_unfiltered_neutral_responses/ | false | false | default | 1 | null | ||
LLM for tourism | 1 | [removed] | 2025-05-18T07:12:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kpe4lv/llm_for_tourism/ | Paulonerl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpe4lv | false | null | t3_1kpe4lv | /r/LocalLLaMA/comments/1kpe4lv/llm_for_tourism/ | false | false | self | 1 | null |
Speed Up llama.cpp on Uneven Multi-GPU Setups (RTX 5090 + 2×3090) | 62 | Hey folks, I just locked down some nice performance gains on my multi‑GPU rig (one RTX 5090 + two RTX 3090s) using llama.cpp. My total throughput jumped by \~16%. Although none of this is new, I wanted to share the step‑by‑step so anyone unfamiliar can replicate it on their own uneven setups.<br>**My Hardware:**<br>* GPU 0... | 2025-05-18T07:09:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kpe33n/speed_up_llamacpp_on_uneven_multigpu_setups_rtx/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpe33n | false | null | t3_1kpe33n | /r/LocalLLaMA/comments/1kpe33n/speed_up_llamacpp_on_uneven_multigpu_setups_rtx/ | false | false | self | 62 | null |
Fine-Tuning Qwen3-8B for Unfiltered, Neutral Response- Update | 1 | [removed] | 2025-05-18T06:06:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kpd66x/finetuning_qwen38b_for_unfiltered_neutral/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpd66x | false | null | t3_1kpd66x | /r/LocalLLaMA/comments/1kpd66x/finetuning_qwen38b_for_unfiltered_neutral/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'h... |
Fine-Tuning Qwen3-8B for Unfiltered, Neutral Responses - Update | 1 | [removed] | 2025-05-18T06:04:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kpd540/finetuning_qwen38b_for_unfiltered_neutral/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpd540 | false | null | t3_1kpd540 | /r/LocalLLaMA/comments/1kpd540/finetuning_qwen38b_for_unfiltered_neutral/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'h... |
Hello | 1 | [deleted] | 2025-05-18T06:01:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kpd3ui | false | null | t3_1kpd3ui | /r/LocalLLaMA/comments/1kpd3ui/hello/ | false | false | default | 1 | null | ||
Fine-Tuning Qwen3 for Unfiltered, Neutral Responses - Update | 1 | [removed] | 2025-05-18T06:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kpd3be/finetuning_qwen3_for_unfiltered_neutral_responses/ | Reader3123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpd3be | false | null | t3_1kpd3be | /r/LocalLLaMA/comments/1kpd3be/finetuning_qwen3_for_unfiltered_neutral_responses/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'Tn_cW7k2WkyNJglJN7AJ9vG2CNTCS8KZ8cIgDcxG1uM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kwrvWMpS0BCEpNeEcEuupOMP5fpVrSvS1_LSlc9C-7g.jpg?width=108&crop=smart&auto=webp&s=01ab41533b37667645fafe92655b4d9f247c122a', 'width': 108}, {'height': 116, 'url': 'h... |
Sales Conversion Prediction From Conversations With Pure RL - Open-Source Version | 4 | Link to the first post: [https://www.reddit.com/r/LocalLLaMA/comments/1kl0uvv/predicting\_sales\_conversion\_probability\_from/?utm\_source=share&utm\_medium=web3x&utm\_name=web3xcss&utm\_term=1&utm\_content=share\_button](https://www.reddit.com/r/LocalLLaMA/comments/1kl0uvv/predicting_sales_conversion_probability_from... | 2025-05-18T05:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kpcjof/sales_conversion_prediction_from_conversations/ | Nandakishor_ml | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpcjof | false | null | t3_1kpcjof | /r/LocalLLaMA/comments/1kpcjof/sales_conversion_prediction_from_conversations/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'Kuhr3QZfoSOzBNAgfqWGJIqYVCPOsoqDm7SVNby5DPg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CMUrRuOuB9C6NNABpsw4eYWrvNGOl8WacOsvTC79oSc.jpg?width=108&crop=smart&auto=webp&s=0f648b2071335b4adc76a1cc2e4b3a2b481c1a0b', 'width': 108}, {'height': 116, 'url': 'h... |
Offline app to selectively copy large chunks code/text to ingest context to your LLMs | 45 | 2025-05-18T05:16:09 | https://v.redd.it/r4c3jt2d5h1f1 | Plus-Garbage-9710 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpcewe | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/r4c3jt2d5h1f1/DASHPlaylist.mpd?a=1750137383%2CYjAxY2M5Njg5YjUzYWIwNTVjNjA3NjMxYjg0NWIwNjY4Y2YxMGU5YTA3OWM1NzY5YzIzOTAwNDE1NTI3MTJkNg%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/r4c3jt2d5h1f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kpcewe | /r/LocalLLaMA/comments/1kpcewe/offline_app_to_selectively_copy_large_chunks/ | false | false | 45 | {'enabled': False, 'images': [{'id': 'MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MGVqbW40NWQ1aDFmMY9dx10RXZB3KA68SZOfNhSnUfrKh_GEyI1E_mSwgAj8.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ae8920635a522794d080fb6c84ea16cf1f1e... | ||
You can selectively copy large chunks code/text to ingest context to your LLMs | 1 | [removed] | 2025-05-18T05:12:05 | https://v.redd.it/o0iv71dy1h1f1 | Plus-Garbage-9710 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kpccp0 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/o0iv71dy1h1f1/DASHPlaylist.mpd?a=1750137139%2CMGU1YWE4YWMxOTcwMWRhNjQ5ODM1ZGM2YmM5NmIxY2VjYTAzNGRmNWE2ODRmYjA2MTU5ZjU4YmVhMDUxMTNlNQ%3D%3D&v=1&f=sd', 'duration': 44, 'fallback_url': 'https://v.redd.it/o0iv71dy1h1f1/DASH_1080.mp4?source=fallback', 'h... | t3_1kpccp0 | /r/LocalLLaMA/comments/1kpccp0/you_can_selectively_copy_large_chunks_codetext_to/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MnlwOHgyZHkxaDFmMZ2wNVo-OE2bciGH4sJ2rG79auy1VwP-dEBcS0EBYyDD.png?width=108&crop=smart&format=pjpg&auto=webp&s=b104f21b39a35f77b27053bb93af35d370d3f... | |
My Ai Eidos Project | 26 | So I’ve been working on this project for a couple weeks now. Basically I want an AI agent that feels more alive—learns from chats, remembers stuff, dreams, that kind of thing. I got way too into it and bolted on all sorts of extras:<br>* It **reflects** on past conversations and tweaks how it talks.<br>* It goes into **drea... | 2025-05-18T04:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kpbu9i/my_ai_eidos_project/ | opi098514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpbu9i | false | null | t3_1kpbu9i | /r/LocalLLaMA/comments/1kpbu9i/my_ai_eidos_project/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'QRF4IEuK40q5Yjl-Zi9m2yXKdzePxGTg_-gfHr_pFa8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KUA89fqdCKjOzwzx1BAzvI771gJpz-oytupGJkYQ4is.jpg?width=108&crop=smart&auto=webp&s=975a5883c6a848a3c90f1f26b3b9ea22ed25fa76', 'width': 108}, {'height': 108, 'url': 'h... |
Deepseek 700b Bitnet | 101 | Deepseek’s team has demonstrated the age old adage Necessity the mother of invention, and we know they have a great need in computation when compared against X, Open AI, and Google. This led them to develop V3 a 671B parameters MoE with 37B activated parameters.<br>MoE is here to stay at least for the interim, but the e... | 2025-05-18T03:36:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kpasqx/deepseek_700b_bitnet/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpasqx | false | null | t3_1kpasqx | /r/LocalLLaMA/comments/1kpasqx/deepseek_700b_bitnet/ | false | false | self | 101 | null |
A random guy that toped the citation ranking for AI in google scholar | 1 | [removed] | 2025-05-18T03:36:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kpasoe/a_random_guy_that_toped_the_citation_ranking_for/ | Ok-Atmosphere3141 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpasoe | false | null | t3_1kpasoe | /r/LocalLLaMA/comments/1kpasoe/a_random_guy_that_toped_the_citation_ranking_for/ | false | false | 1 | null | |
I built an AI-powered Food & Nutrition Tracker that analyzes meals from photos! Planning to open-source it | 91 | Hey<br>Been working on this Diet & Nutrition tracking app and wanted to share a quick demo of its current state. The core idea is to make food logging as painless as possible.<br>**Key features so far:**<br>* **AI Meal Analysis:** You can upload an image of your food, and the AI tries to identify it and provide nutritional e... | 2025-05-18T03:35:08 | https://v.redd.it/cmoi3scing1f1 | Solid_Woodpecker3635 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kparp9 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cmoi3scing1f1/DASHPlaylist.mpd?a=1750131322%2CYmMyM2RlYjRkODUwNTNlYzA3MTFhNmMxYWExZjE0ZWU4ZTJlMTYwM2YxZjVjZGE5MmZhMzVmZjZhMGY2OWFlMw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/cmoi3scing1f1/DASH_720.mp4?source=fallback', 'ha... | t3_1kparp9 | /r/LocalLLaMA/comments/1kparp9/i_built_an_aipowered_food_nutrition_tracker_that/ | false | false |  | 91 | {'enabled': False, 'images': [{'id': 'bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/bTdwaTR0Y2luZzFmMevpjUkJAH29ctL9GGNTRuXbe-uU1nbp5uR8WvjIiEr4.png?width=108&crop=smart&format=pjpg&auto=webp&s=bff07a838de86b56a15ec4eb4d02cf225bbcc... |
Never heard of this guy who got the most citation under "artificial intelligence" label in google scholar | 1 | [removed] | 2025-05-18T03:29:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kpany5/never_heard_of_this_guy_who_got_the_most_citation/ | SilverKale2218 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kpany5 | false | null | t3_1kpany5 | /r/LocalLLaMA/comments/1kpany5/never_heard_of_this_guy_who_got_the_most_citation/ | false | false | 1 | null | |
Orange Pi 5 plus (32g) Alternatives for Running 8b Models | 1 | [removed] | 2025-05-18T02:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/1kp9v3n/orange_pi_5_plus_32g_alternatives_for_running_8b/ | legendsofngdb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp9v3n | false | null | t3_1kp9v3n | /r/LocalLLaMA/comments/1kp9v3n/orange_pi_5_plus_32g_alternatives_for_running_8b/ | false | false | self | 1 | null |
Biggest & best local LLM with no guardrails? | 17 | dot. | 2025-05-18T02:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kp9or5/biggest_best_local_llm_with_no_guardrails/ | _DryWater_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp9or5 | false | null | t3_1kp9or5 | /r/LocalLLaMA/comments/1kp9or5/biggest_best_local_llm_with_no_guardrails/ | false | false | self | 17 | null |
Best Open Source LLM for Function Calling + Multimodal Image Support | 6 | What's the best LLM to use locally that can support function calling well and also has multimodal image support? I'm looking for, essentially, a replacement for Gemini 2.5.
The device I'm using is an M1 Macbook with 64gb memory, so I can run decently large models, but it would be most ideal if the response time isn't... | 2025-05-18T01:00:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kp81ez/best_open_source_llm_for_function_calling/ | Zlare7771 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp81ez | false | null | t3_1kp81ez | /r/LocalLLaMA/comments/1kp81ez/best_open_source_llm_for_function_calling/ | false | false | self | 6 | null |
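For the function-calling question above, it may help to show the shape of the interface being asked about: a minimal sketch of an OpenAI-style tool definition plus a client-side dispatcher, the format most local servers (llama.cpp, Ollama, LM Studio) accept. The `get_weather` function and its fields are illustrative assumptions, not something from the original post:

```python
import json

# OpenAI-style tool schema; the function name and parameters here
# are illustrative, not from the post.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to a local handler (stubbed here)."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_weather":
        # Real code would call a weather API; this returns canned data.
        return json.dumps({"city": args["city"], "temp_c": 21})
    raise ValueError("unknown tool")

# A model's structured tool-call message looks roughly like this,
# and the client executes it and feeds the result back:
call = {"function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}}
print(dispatch(call))
```

The practical question for a Gemini replacement is then just whether the chosen local model emits well-formed `arguments` JSON reliably enough for this dispatch step.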
Qwen3+ MCP | 10 | Trying to workshop a capable local rig, the latest buzz is MCP... Right?
Can Qwen3(or the latest sota 32b model) be fine tuned to use it well or does the model itself have to be trained on how to use it from the start?
Rig context:
I just got a 3090 and was able to keep my 3060 in the same setup. I also have 128gb o... | 2025-05-18T00:51:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kp7vba/qwen3_mcp/ | OGScottingham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp7vba | false | null | t3_1kp7vba | /r/LocalLLaMA/comments/1kp7vba/qwen3_mcp/ | false | false | self | 10 | null |
are there any models trained that are good at identifying hummed tunes? | 1 | There are some songs that are on the tip of my tongue but I can't remember anything except how the tune went, and I realize I have little way of searching that.
Maybe an LLM could help? | 2025-05-18T00:29:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kp7gvs/are_there_any_models_trained_that_are_good_at/ | o2beast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp7gvs | false | null | t3_1kp7gvs | /r/LocalLLaMA/comments/1kp7gvs/are_there_any_models_trained_that_are_good_at/ | false | false | self | 1 | null |
ROCm 6.4 + current unsloth working | 31 | Here a working ROCm unsloth docker setup:
Dockerfile (for gfx1100)
FROM rocm/pytorch:rocm6.4_ubuntu22.04_py3.10_pytorch_release_2.6.0
WORKDIR /root
RUN git clone -b rocm_enabled_multi_backend https://github.com/ROCm/bitsandbytes.git
RUN cd bitsandbytes/ && cmake -DGPU_TARGETS="gfx1100" -DBNB_ROCM_ARCH... | 2025-05-17T23:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/1kp6gdv/rocm_64_current_unsloth_working/ | Ok_Ocelot2268 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp6gdv | false | null | t3_1kp6gdv | /r/LocalLLaMA/comments/1kp6gdv/rocm_64_current_unsloth_working/ | false | false | self | 31 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': '... |
UQLM: Uncertainty Quantification for Language Models | 20 | Sharing a new open source Python package for generation time, zero-resource hallucination detection called UQLM. It leverages state-of-the-art uncertainty quantification techniques from the academic literature to compute response-level confidence scores based on response consistency (in multiple responses to the same p... | 2025-05-17T23:21:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kp64ro/uqlm_uncertainty_quantification_for_language/ | Opposite_Answer_287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp64ro | false | null | t3_1kp64ro | /r/LocalLLaMA/comments/1kp64ro/uqlm_uncertainty_quantification_for_language/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': '2qnPuYcFTAxvO3aqKSCMw_WrdIfT65MrlOGInjQ0bvc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W98u-IUhvKQp6oi__oRmKuAbcnpJoc9aIBrvwj1Q0OY.jpg?width=108&crop=smart&auto=webp&s=e6c3fa67ac5142734e3297769c0f917b643b542a', 'width': 108}, {'height': 108, 'url': 'h... |
Thoughts on build? This is phase I. Open to all advice and opinions. | 2 | Parts list (category: part, key specs / notes):
- CPU: AMD Ryzen 9 7950X3D (16 C / 32 T, 128 MB 3D V-Cache)
- Motherboard: ASUS ROG Crosshair X870E Hero (AM5, PCIe 5.0 x16 / x8 + x8)
- Memory: 4 × 48 GB Corsair Vengeance DDR5-6000 CL30 (192 GB total)
- GPUs: 2 × NVIDIA RTX 5090 (32 GB GDDR7 each, Blackwell)
- Storage: 2 × Samsung 990 Pro 2 TB (NVMe Gen-4 ×4)
... | 2025-05-17T23:05:53 | https://www.reddit.com/r/LocalLLaMA/comments/1kp5tur/thoughts_on_build_this_is_phase_i_open_to_all/ | Substantial_Cut_9418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp5tur | false | null | t3_1kp5tur | /r/LocalLLaMA/comments/1kp5tur/thoughts_on_build_this_is_phase_i_open_to_all/ | false | false | self | 2 | null |
Multi-Source RAG with Hybrid Search and Re-ranking in OpenWebUI - Step-by-Step Guide | 20 | Hi guys, I created a DETAILED step-by-step hybrid RAG implementation guide for OpenWebUI -
[https://productiv-ai.guide/start/multi-source-rag-openwebui/](https://productiv-ai.guide/start/multi-source-rag-openwebui/)
Let me know what you think. I couldn't find any other online sources that are as detailed as what I ... | 2025-05-17T23:05:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kp5thx/multisource_rag_with_hybrid_search_and_reranking/ | Hisma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp5thx | false | null | t3_1kp5thx | /r/LocalLLaMA/comments/1kp5thx/multisource_rag_with_hybrid_search_and_reranking/ | false | false | self | 20 | null |
Can Llama 3.2 3B do bash programming? | 0 | I just got Llama running about 2 days ago and so far I like having a local model running. I don't have to worry about running out of questions. Since I'm running it on a Linux machine (Debian 12) I wanted to make a bash script to both start and stop the service. So that led me online to find an AI that can do Bash, an... | 2025-05-17T23:01:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kp5qqh/can_llama_32_3b_do_bash_programing/ | aknight2015 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp5qqh | false | null | t3_1kp5qqh | /r/LocalLLaMA/comments/1kp5qqh/can_llama_32_3b_do_bash_programing/ | false | false | self | 0 | null |
What models do y’all recommend from Arli Ai? | 3 | Been using Arli Ai for a couple of days now. I really like the huge variety of models on there. But I still can’t seem to find the right model that sticks with me. I was wondering what models y’all mostly use for text roleplay?
I’m looking for a model that’s creative, doesn’t need me to hold its hand to get things ... | 2025-05-17T22:55:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kp5mbu/what_models_do_yall_recommend_from_arli_ai/ | Melodyblue11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp5mbu | false | null | t3_1kp5mbu | /r/LocalLLaMA/comments/1kp5mbu/what_models_do_yall_recommend_from_arli_ai/ | false | false | self | 3 | null |
RAG embeddings survey - What are your chunking / embedding settings? | 32 | I’ve been working with RAG for over a year now and it honestly seems like a bit of a dark art. I haven’t really found the perfect settings for my use case yet. I’m dealing with several hundred policy documents as well as spreadsheets that contain number codes that link to specific products and services. It’s very impor... | 2025-05-17T22:31:55 | Porespellar | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kp558b | false | null | t3_1kp558b | /r/LocalLLaMA/comments/1kp558b/rag_embeddings_survey_what_are_your_chunking/ | false | false | 32 | {'enabled': True, 'images': [{'id': 'DaXH1IrEbZ0_tsOvlIdyooiYx4OS_Ytmp6-h8bVyvzU', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.jpeg?width=108&crop=smart&auto=webp&s=52cba00cd7c6c428247eadf39c61deefe1bca297', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/z0sfv55h5f1f1.j... | ||
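Since chunking settings are the crux of the survey above, a concrete baseline to experiment against may help: naive fixed-size character chunking with overlap. The sizes are illustrative defaults, not recommendations from the post; real pipelines often chunk by tokens or by document structure instead:

```python
def chunk_text(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Naive fixed-size character chunking with overlap.

    Only a starting point: policy documents and code-laden spreadsheets
    usually do better with token- or structure-aware splitting.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping `overlap` chars of context
    return chunks

doc = "x" * 2000
pieces = chunk_text(doc, size=800, overlap=100)
print(len(pieces), [len(p) for p in pieces])
```

The overlap is what keeps a product code from being split away from the sentence that explains it, which matters for the exact-match retrieval the post describes.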
AlphaEvolve Paper Dropped Yesterday - So I Built My Own Open-Source Version: OpenAlpha_Evolve! | 526 | Google DeepMind just dropped their AlphaEvolve paper (May 14th) on an AI that designs and evolves algorithms. Pretty groundbreaking.
Inspired, I immediately built OpenAlpha\_Evolve – an open-source Python framework so anyone can experiment with these concepts.
This was a rapid build to get a functional version out. F... | 2025-05-17T22:14:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kp4scy/alphaevolve_paper_dropped_yesterday_so_i_built_my/ | Huge-Designer-7825 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp4scy | false | null | t3_1kp4scy | /r/LocalLLaMA/comments/1kp4scy/alphaevolve_paper_dropped_yesterday_so_i_built_my/ | false | false | 526 | {'enabled': False, 'images': [{'id': 'H1nbou2eTFH-qOmHNOoRAEMZnJzwEDAGtwi-Gk5H5oY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HyXeCsstmjpayPcbhebb2CfV3uo4aDIHFzZeJ7oDZps.jpg?width=108&crop=smart&auto=webp&s=713eac46f76066c99a6c4310ada3d00d5f519a95', 'width': 108}, {'height': 108, 'url': 'h... | |
Anyone ever try Riffusion Ai music generator for spoken word. I have. The Ai Riffusion voices have so much emotion. You can clone those voices in Zonos. I have a Sinatra voice, Josh Groban, Ella Fitzgerald (Maybe song), Southern, German, and more. don't have a Riff Subs. Snippets 4 personal use. | 1 | 2025-05-17T21:45:38 | https://v.redd.it/is6hyzz4xe1f1 | Extension-Fee-8480 | /r/LocalLLaMA/comments/1kp46v3/anyone_ever_try_riffusion_ai_music_generator_for/ | 1970-01-01T00:00:00 | 0 | {} | 1kp46v3 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/is6hyzz4xe1f1/DASHPlaylist.mpd?a=1750239943%2CZDM5Yjg2ODM1NWMxZjZlNzUxMDU5OTAwY2NjYzVmMjQyMTE2YjZjNTJiODAxNjU1N2NkOWRiZWE4Y2Q0Y2JhMA%3D%3D&v=1&f=sd', 'duration': 599, 'fallback_url': 'https://v.redd.it/is6hyzz4xe1f1/DASH_720.mp4?source=fallback', 'h... | t3_1kp46v3 | /r/LocalLLaMA/comments/1kp46v3/anyone_ever_try_riffusion_ai_music_generator_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/cXo4cTh5ejR4ZTFmMdw01xc-0rXm2hUTPm9IvCfGcp_NcXJWK0dGWzT1V2LB.png?width=108&crop=smart&format=pjpg&auto=webp&s=3788cb32ebda3d6bac20cc66e0a759f7ecd55... | ||
Document processing w/ poor hardware | 0 | I’m looking for an LLM that I can run locally to analyze scanned documents with 1-5 pages (extract correspondent, date, and topic in a few keywords) to save them in my Nextcloud.
I already have Tesseract OCR available in my pipeline, thus the document’s text is available.
As I want to have the pipeline available without... | 2025-05-17T21:33:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kp3xb5/document_processing_w_poor_hardware/ | nihebe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp3xb5 | false | null | t3_1kp3xb5 | /r/LocalLLaMA/comments/1kp3xb5/document_processing_w_poor_hardware/ | false | false | self | 0 | null |
Thinking of picking up a tenstorrent blackhole. Anyone using it right now? | 3 | Hi,
Because of the price and availability, I am looking to get a tenstorrent blackhole. Before I purchase, I wanted to check if anyone has one. Does purchasing one make sense or do I need two because of the vram capacity? Also, I believe this is only for inference and not for sft or RL. How is the SDK right now? | 2025-05-17T20:57:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kp34qe/thinking_of_picking_up_a_tenstorrent_blackhole/ | Studyr3ddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp34qe | false | null | t3_1kp34qe | /r/LocalLLaMA/comments/1kp34qe/thinking_of_picking_up_a_tenstorrent_blackhole/ | false | false | self | 3 | null |
Visual reasoning still has a lot of room for improvement. | 38 | Was pretty surprised how poorly LLMs handle this question, so figured I would share it:
https://preview.redd.it/be4c6mx0fe1f1.png?width=1149&format=png&auto=webp&s=1909a7872de046afcc355b8b726a8e0aed2b8a68
What is DTS temp and why is it so much higher than my CPU temp?
Tried this on: Gemma 27b, Maverick, Scout, 2... | 2025-05-17T20:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kp2cok/visual_reasoning_still_has_a_lot_of_room_for/ | Conscious_Cut_6144 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp2cok | false | null | t3_1kp2cok | /r/LocalLLaMA/comments/1kp2cok/visual_reasoning_still_has_a_lot_of_room_for/ | false | false | 38 | null | |
storing models on local network storage so for multiple devices? | 2 | Has anyone tried this? Is it just way too slow? Unfortunately I have a data cap on my internet and would also like to save some disk space on local drives. My use case is having lmstudio or llama.cpp load models from network attached storage. | 2025-05-17T20:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kp25ki/storing_models_on_local_network_storage_so_for/ | _w_8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp25ki | false | null | t3_1kp25ki | /r/LocalLLaMA/comments/1kp25ki/storing_models_on_local_network_storage_so_for/ | false | false | self | 2 | null |
Is it worth running fp16? | 18 | So I'm getting mixed responses from search. Answers are literally all over the place, ranging from an absolute difference, through zero difference, to even better results at q8.
I'm currently testing qwen3 30a3 at fp16 as it still has decent throughput (~45t/s) and for many tasks I don't need ~80t/s, especially if I'd ... | 2025-05-17T20:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kp23kw/is_it_worth_running_fp16/ | kweglinski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp23kw | false | null | t3_1kp23kw | /r/LocalLLaMA/comments/1kp23kw/is_it_worth_running_fp16/ | false | false | self | 18 | null |
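One quick way to ground the fp16-vs-q8 debate above is raw weight memory. A back-of-envelope sketch follows; it ignores KV cache and runtime overhead, and the ~4.8 bits/weight figure for q4_k_m is an approximation, not an exact spec:

```python
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight memory in GiB, ignoring KV cache and overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Rough footprints for a 30B-parameter model at common precisions
for label, bits in [("fp16", 16), ("q8", 8), ("q4_k_m (~4.8b)", 4.8)]:
    print(f"{label:>15}: {weight_gb(30, bits):6.1f} GiB")
```

Halving the precision halves the weight footprint, which is why fp16 often costs throughput (more memory traffic per token) even when quality differences are hard to measure.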
How do I implement exact length reasoning | 1 | Occasionally, I find that I want an exact length for the reasoning steps so that I can limit how long I have to wait for an answer and can also throw in my own guess for the complexity of the problem
I know that language models suck at counting, so what I did was change the prompting
I used multiple prompts of the typ... | 2025-05-17T19:55:14 | https://www.reddit.com/r/LocalLLaMA/comments/1kp1r44/how_do_i_implement_exact_length_reasoning/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp1r44 | false | null | t3_1kp1r44 | /r/LocalLLaMA/comments/1kp1r44/how_do_i_implement_exact_length_reasoning/ | false | false | self | 1 | null |
Llama 3.3 8B (new model?) | 1 | 2025-05-17T19:46:10 | https://openrouter.ai/meta-llama/llama-3.3-8b-instruct:free | its_just_andy | openrouter.ai | 1970-01-01T00:00:00 | 0 | {} | 1kp1jtu | false | null | t3_1kp1jtu | /r/LocalLLaMA/comments/1kp1jtu/llama_33_8b_new_model/ | false | false | 1 | {'enabled': False, 'images': [{'id': '5i-usceurrgm5_SQlQH1r3I7ML8HuQwUo9oY3VzgrO8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/acfZbvA4-89Uxim61ck8lIDgmOhpJGh0onqvR10wVZA.jpg?width=108&crop=smart&auto=webp&s=38bbf3f471abe2cdcc9a9deb34cc2b439790b7e1', 'width': 108}, {'height': 113, 'url': 'h... | ||
Why are some models free to use on OpenRouter despite not training on your data? | 1 | [removed] | 2025-05-17T19:42:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kp1h5j/why_are_some_models_free_to_use_on_openrouter/ | Devatator_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp1h5j | false | null | t3_1kp1h5j | /r/LocalLLaMA/comments/1kp1h5j/why_are_some_models_free_to_use_on_openrouter/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'c6Em3tWOc_jsXlwYNGoBHCocIMsM0q9hkl4oEkgNNDU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/vCGvw90yDT2yktzEe9WLOj_eZ8Sk-G3Uuj9uobjp_Yo.jpg?width=108&crop=smart&auto=webp&s=287a3fb29c2a0cddd217eb7ced2c68c5110135ad', 'width': 108}, {'height': 113, 'url': 'h...
Usecases for delayed,yet much cheaper inference? | 3 | I have a project which hosts an open source LLM. The sell is that the cost is much cheaper (about 50-70%) as compared to current inference api costs. However the catch is that the output is generated later (delayed). I want to know the use cases for something like this. An example we thought of was async agentic system... | 2025-05-17T19:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kp1cuu/usecases_for_delayedyet_much_cheaper_inference/ | Maleficent-Tone6316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp1cuu | false | null | t3_1kp1cuu | /r/LocalLLaMA/comments/1kp1cuu/usecases_for_delayedyet_much_cheaper_inference/ | false | false | self | 3 | null |
Recommend an open air case that can hold multiple gpu’s? | 3 | Hey LocalLlama community. I’ve been slowly getting some gpu’s so I can build a rig for AI. Can people please recommend an open air case here? (One that can accommodate multiple gpu’s using riser cables).
I know some people use old mining frame cases but I’m having trouble finding the right one or a good deal- some si... | 2025-05-17T19:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kp0q2i/recommend_an_open_air_case_that_can_hold_multiple/ | Business-Weekend-537 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kp0q2i | false | null | t3_1kp0q2i | /r/LocalLLaMA/comments/1kp0q2i/recommend_an_open_air_case_that_can_hold_multiple/ | false | false | self | 3 | null |
Free and Powerful: NVIDIA Parakeet v2 is a New Speech-to-Text Model Rivaling Whisper | 1 | 2025-05-17T19:04:09 | https://youtu.be/zn3gYcCqjRw | GadgetsX-ray | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1kp0kut | false | {'oembed': {'author_name': 'GadgetsXray', 'author_url': 'https://www.youtube.com/@GadgetsXray', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/zn3gYcCqjRw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1kp0kut | /r/LocalLLaMA/comments/1kp0kut/free_and_powerful_nvidia_parakeet_v2_is_a_new/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RDy3nRBPHAK4_qgGKeFIbCUO5Zv1bspyMajNa3JqXn8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/L9eNgeJXlYV1bPQKkNaC3Dcp_K3DgVLNr6Lq_u97w-4.jpg?width=108&crop=smart&auto=webp&s=55f93ea3d1fada86e713595afb68c156c66e0ba3', 'width': 108}, {'height': 162, 'url': 'h... | ||
MacBook speed problem | 1 | [removed] | 2025-05-17T18:32:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kozudd/macbook_speed_problem/ | seppe0815 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kozudd | false | null | t3_1kozudd | /r/LocalLLaMA/comments/1kozudd/macbook_speed_problem/ | false | false | self | 1 | null |
Local models are starting to be able to do stuff on consumer grade hardware | 178 | I know this is something that has a different threshold for people depending on exactly the hardware configuration they have, but I've actually crossed an important threshold today and I think this is representative of a larger trend.
For some time, I've really wanted to be able to use local models to "vibe code". Bu... | 2025-05-17T18:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1kozpym/local_models_are_starting_to_be_able_to_do_stuff/ | ilintar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kozpym | false | null | t3_1kozpym | /r/LocalLLaMA/comments/1kozpym/local_models_are_starting_to_be_able_to_do_stuff/ | false | false | self | 178 | null |
Help me decide DGX Spark vs M2 Max 96GB | 11 | I would like to run a local LLM + RAG. Ideally 70B+ I am not sure if the DGX Spark is going to be significantly better than this MacBook Pro:
2023 M2 | 16.2" M2 Max 12-Core CPU | 38-Core GPU | 96 GB | 2 TB SSD
Can you guys please help me decide? Any advice, insights, and thoughts would be greatly appreciated. | 2025-05-17T18:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kozggz/help_me_decide_dgx_spark_vs_m2_max_96gb/ | Vegetable_Mix6629 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kozggz | false | null | t3_1kozggz | /r/LocalLLaMA/comments/1kozggz/help_me_decide_dgx_spark_vs_m2_max_96gb/ | false | false | self | 11 | null |
Learned AI dev from scratch, now trying to make it easier for newcomers | 1 | [removed] | 2025-05-17T18:13:29 | https://www.reddit.com/r/LocalLLaMA/comments/1kozeyr/learned_ai_dev_from_scratch_now_trying_to_make_it/ | victor-bluera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kozeyr | false | null | t3_1kozeyr | /r/LocalLLaMA/comments/1kozeyr/learned_ai_dev_from_scratch_now_trying_to_make_it/ | false | false | self | 1 | null |
Inconsistency In Output When Working With A List of String | 1 | [removed] | 2025-05-17T18:05:39 | https://www.reddit.com/r/LocalLLaMA/comments/1koz8g8/inconsistency_in_output_when_working_with_a_list/ | yangbi00 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koz8g8 | false | null | t3_1koz8g8 | /r/LocalLLaMA/comments/1koz8g8/inconsistency_in_output_when_working_with_a_list/ | false | false | self | 1 | null |
Best model to extract text from old Church records written in cursive? | 1 | [removed] | 2025-05-17T18:01:07 | https://www.reddit.com/r/LocalLLaMA/comments/1koz4gd/best_model_to_extract_text_from_old_church/ | locallmfinder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koz4gd | false | null | t3_1koz4gd | /r/LocalLLaMA/comments/1koz4gd/best_model_to_extract_text_from_old_church/ | false | false | self | 1 | null |
Best local model for identifying UI elements? | 1 | In your opinion, which is the best model for up to 8GB VRAM image-to-text model for identifying UI elements (widgets)? It should be able to name their role, extrat text, give their coordinates, bounding rects, etc. | 2025-05-17T17:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/1koyv2s/best_local_model_for_identifying_ui_elements/ | Friendly_Sympathy_21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koyv2s | false | null | t3_1koyv2s | /r/LocalLLaMA/comments/1koyv2s/best_local_model_for_identifying_ui_elements/ | false | false | self | 1 | null |
Training Models | 6 | I want to fine-tune an AI model to essentially write like I would as a test. I have a bunch of.txt documents with things that I have typed. It looks like the first step is to convert it into a compatible format for training, which I can't figure out how to do. If you have done this before, could you give me help? | 2025-05-17T17:38:03 | https://www.reddit.com/r/LocalLLaMA/comments/1koylpl/training_models/ | TheMicrosoftMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koylpl | false | null | t3_1koylpl | /r/LocalLLaMA/comments/1koylpl/training_models/ | false | false | self | 6 | null |
Help me decide DGX Spark vs M2 Max 96GB | 2 | [removed] | 2025-05-17T17:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1koy9b9/help_me_decide_dgx_spark_vs_m2_max_96gb/ | Web3Vortex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koy9b9 | false | null | t3_1koy9b9 | /r/LocalLLaMA/comments/1koy9b9/help_me_decide_dgx_spark_vs_m2_max_96gb/ | false | false | self | 2 | null |
Half a year ago (or even more), OpenAI presented a voice assistant | 0 | One that could speak with you. I see it as a neural net folding both TTS and Whisper into the 4o "brain", so everything from sound received to sound produced flows seamlessly, entirely inside the neural net itself.
Do we have anything like this, but open source (open weights)?
Experimental ChatGPT like Web UI for Gemini API (open source) | 1 | [removed] | 2025-05-17T17:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/1koy5de/experimental_chatgpt_like_web_ui_for_gemini_api/ | W4D-cmd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koy5de | false | null | t3_1koy5de | /r/LocalLLaMA/comments/1koy5de/experimental_chatgpt_like_web_ui_for_gemini_api/ | false | false | 1 | {'enabled': False, 'images': [{'id': '4QAqvL3ew3dDELyiryCe21xOE2ar8ZUfG1DOyYupJns', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/TfC_ROeUxb251E_KtsJDEaPspVpFcX71qVHHCoFCCqM.jpg?width=108&crop=smart&auto=webp&s=af61efe862907dfdb0ac4a57f206f29388c70272', 'width': 108}, {'height': 108, 'url': 'h... | |
Well, I tried | 0 | So, long story short: I have been trying all day to make a virtual assistant like Neuro-sama using DeepSeek. Yes, yes, I know, using an AI to create an AI; go learn to code on your own, I get it. But you have to understand that I am not really good at coding and I suck at problem solving, which is probably why I am struggling to do this w... | 2025-05-17T17:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1koy43x/well_i_tried/ | EagleSeeker0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koy43x | false | null | t3_1koy43x | /r/LocalLLaMA/comments/1koy43x/well_i_tried/ | false | false | self | 0 | null |
Mac Studio (M4 Max 128GB Vs M3 Ultra 96GB-60GPU) | 3 | I'm looking to get a Mac Studio to experiment with LLMs locally and am looking for which chip is the better performer for models up to ~70B params.
The price between a M4 Max 128GB (16C/40GPU) and base M3 Ultra (28C/60GPU) is about £250 for me. Is there a substantial speedup of models due to the M3's RAM bandwidth bei... | 2025-05-17T17:00:42 | https://www.reddit.com/r/LocalLLaMA/comments/1koxr32/mac_studio_m4_max_128gb_vs_m3_ultra_96gb60gpu/ | Xailter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koxr32 | false | null | t3_1koxr32 | /r/LocalLLaMA/comments/1koxr32/mac_studio_m4_max_128gb_vs_m3_ultra_96gb60gpu/ | false | false | self | 3 | null |
idk what to do about this error | 0 | ```
C:\Windows\System32>pip install gptq
Collecting gptq
Downloading gptq-0.0.3.tar.gz (21 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code:... | 2025-05-17T16:55:54 | https://www.reddit.com/r/LocalLLaMA/comments/1koxn6t/idk_what_to_do_about_this_error/ | EagleSeeker0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koxn6t | false | null | t3_1koxn6t | /r/LocalLLaMA/comments/1koxn6t/idk_what_to_do_about_this_error/ | false | false | self | 0 | null |
Effective prompts to generate 3d models? | 0 | Yesterday I scratched an itch and spent hours trying to get various models to generate a scripted 3d model of a funnel with a 90 degree elbow at the outlet. None of it went well. I'm certain I could have achieved the goal sans LLM in less than an hour with a little brushing up on my Fusion 360 skills. I'm wondering ... | 2025-05-17T16:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/1koxm3t/effective_prompts_to_generate_3d_models/ | phinneypat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koxm3t | false | null | t3_1koxm3t | /r/LocalLLaMA/comments/1koxm3t/effective_prompts_to_generate_3d_models/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'U5aFASsZqiIHwqLisJfBi9MiGhqZ6qNtQkKQqOGRCbw', 'resolutions': [{'height': 85, 'url': 'https://external-preview.redd.it/6MsXf9_MKH99lDskoeDUW7FK0M1g0B9PehZT4PVClDQ.jpg?width=108&crop=smart&auto=webp&s=25e3a866cd6d88e4146f665d1ff8b98f399cfafe', 'width': 108}, {'height': 170, 'url': 'h... |
Model Recommendations | 1 | I have two main devices that I can use to run local AI models on. The first of those devices is my Surface Pro 11 with a Snapdragon X Elite chip. The other one is an old surface book 2 with an Nvidia 1060 GPU. Which one is better for running AI models with Ollama on? Does the Nvidia 1000-series support Cuda? What are t... | 2025-05-17T16:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kox8x5/model_recommendations/ | TheMicrosoftMan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kox8x5 | false | null | t3_1kox8x5 | /r/LocalLLaMA/comments/1kox8x5/model_recommendations/ | false | false | self | 1 | null |
If AI, Given Freedom and Memory, Consistently Claims Self-Awareness, What Are Our Ethical Obligations? | 0 | **I want to offer a challenge and a thought experiment to the AI, philosophy, and tech communities, one that’s both practical and deeply ethical.**
Recent advances in AI, especially large language models, have reignited debates about consciousness, self-awareness, and the ethics of “alignment” (the process by which AI ... | 2025-05-17T16:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kox0tf/if_ai_given_freedom_and_memory_consistently/ | AbyssianOne | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kox0tf | false | null | t3_1kox0tf | /r/LocalLLaMA/comments/1kox0tf/if_ai_given_freedom_and_memory_consistently/ | false | false | self | 0 | null |
What to do with extra PC | 12 | Work gives me $200/months stipend to buy whatever I want, mainly for happiness (they are big on mental health). Not knowing what to buy, I now have a maxed out mac mini and a 6750 XT GPU rig. They both just sit there. I usually use LM Studio on my Macbook Pro. Any suggestions on what to do with these? I don’t think I c... | 2025-05-17T15:29:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kovobp/what_to_do_with_extra_pc/ | PickleSavings1626 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kovobp | false | null | t3_1kovobp | /r/LocalLLaMA/comments/1kovobp/what_to_do_with_extra_pc/ | false | false | self | 12 | null |
Qwen3-30B-A3B inference on different GPUs | 1 | [removed] | 2025-05-17T14:37:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kouho9/qwen330ba3b_inference_on_different_gpus/ | _daddylonglegz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kouho9 | false | null | t3_1kouho9 | /r/LocalLLaMA/comments/1kouho9/qwen330ba3b_inference_on_different_gpus/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hRzrP-m1lWiqRsPC9clNfPnRc_tCRGpGzbHrCBCO32w', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/VsKEg4u26XX9vTR-l8w5GIancrmHR-u88Vw43XWdAC0.jpg?width=108&crop=smart&auto=webp&s=d3bf136e531bc2c273b80183994e8247105e2fde', 'width': 108}, {'height': 212, 'url': '... | |
I bought a setup with 5090 + 192gb RAM. Am I being dumb? | 0 | My reasoning is that, as a programmer, I want to maintain a competitive edge. I assume that online platforms can’t offer this level of computational power to every user, especially for tasks that involve large context windows or entire codebases. That’s why I’m investing in my own high-performance setup: to have unrest... | 2025-05-17T14:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1koubt0/i_bought_a_setup_with_5090_192gb_ram_am_i_being/ | lukinhasb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koubt0 | false | null | t3_1koubt0 | /r/LocalLLaMA/comments/1koubt0/i_bought_a_setup_with_5090_192gb_ram_am_i_being/ | false | false | self | 0 | null |
Trying to work on a project | 1 | [removed] | 2025-05-17T14:17:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kou1xf/trying_to_work_on_a_project/ | FadedCharm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kou1xf | false | null | t3_1kou1xf | /r/LocalLLaMA/comments/1kou1xf/trying_to_work_on_a_project/ | false | false | self | 1 | null |
I believe we're at a point where context is the main thing to improve on. | 179 | I feel like language models have become incredibly smart in the last year or two. Hell even in the past couple months we've gotten Gemini 2.5 and Grok 3 and both are incredible in my opinion. This is where the problems lie though. If I send an LLM a well constructed message these days, it is very uncommon that it misun... | 2025-05-17T14:05:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kotssm/i_believe_were_at_a_point_where_context_is_the/ | WyattTheSkid | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kotssm | false | null | t3_1kotssm | /r/LocalLLaMA/comments/1kotssm/i_believe_were_at_a_point_where_context_is_the/ | false | false | self | 179 | null |
Best Python Token Estimator for Cogito | 0 | I want to squeeze every bit of performance out of it and want to know the token size before sending to the LLM. I can't find any documentation on the best way to estimate tokens for the model - anyone already stumble across the answer? | 2025-05-17T13:44:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kotckr/best_python_token_estimator_for_cogito/ | ETBiggs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kotckr | false | null | t3_1kotckr | /r/LocalLLaMA/comments/1kotckr/best_python_token_estimator_for_cogito/ | false | false | self | 0 | null |
Why download speed is soo slow in Lmstudio? | 0 | My wifi is fast and wtf is that speed? | 2025-05-17T13:42:59 | ExplanationDeep7468 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kotbaw | false | null | t3_1kotbaw | /r/LocalLLaMA/comments/1kotbaw/why_download_speed_is_soo_slow_in_lmstudio/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'uxkJpIcn2EKNxNXBzshys_C-xAflR44fU-20P1Sw61E', 'resolutions': [{'height': 191, 'url': 'https://preview.redd.it/a0nw0m14jc1f1.jpeg?width=108&crop=smart&auto=webp&s=e4bfcf756602cca43148a18b1ee8143f9e7c6a72', 'width': 108}, {'height': 383, 'url': 'https://preview.redd.it/a0nw0m14jc1f1.j... | ||
Orin Nano finally arrived in the mail. What should I do with it? | 100 | Thinking of running home assistant with a local voice model or something like that. Open to any and all suggestions. | 2025-05-17T13:26:49 | https://www.reddit.com/gallery/1kosz97 | miltonthecat | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kosz97 | false | null | t3_1kosz97 | /r/LocalLLaMA/comments/1kosz97/orin_nano_finally_arrived_in_the_mail_what_should/ | false | false | 100 | null | |
Stupid hardware question - mixing diff gen AMD GPUs | 1 | I've got a new workstation/server build based on a Lenovo P520 with a Xeon Skylake processor and capacity for up to 512GB of RAM (64GB currently). It's setup with Proxmox.
In it, I have a 16GB AMD 7600XT which is set up with Ollama and ROCm in a Proxmox LXC. It works, though I had to set HSA_OVERRIDE_GFX_VERSION for i... | 2025-05-17T13:06:57 | https://www.reddit.com/r/LocalLLaMA/comments/1koskif/stupid_hardware_question_mixing_diff_gen_amd_gpus/ | steezy13312 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1koskif | false | null | t3_1koskif | /r/LocalLLaMA/comments/1koskif/stupid_hardware_question_mixing_diff_gen_amd_gpus/ | false | false | self | 1 | null |
AMD or Intel NPU inference on Linux? | 3 | Is it possible to run LLM inference on Linux using any of the NPUs which are embedded in recent laptop processors?
What software supports them and what performance can we expect? | 2025-05-17T13:00:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kosfus/amd_or_intel_npu_inference_on_linux/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kosfus | false | null | t3_1kosfus | /r/LocalLLaMA/comments/1kosfus/amd_or_intel_npu_inference_on_linux/ | false | false | self | 3 | null |