title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Free business directory API for AI agents - 11M+ businesses, geo search, MCP server | 0 | Been building some local agents lately and got kinda frustrated that there’s no clean way for them to look up real business data. Google blocks automated access, and scraping random sites is unreliable. But this is a gamechanger.
[AgentWeb.live](http://AgentWeb.live) \- free API with:
11M+ businesses across 195 count... | 2026-02-27T10:24:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rg32le/free_business_directory_api_for_ai_agents_11m/ | No-Contact5122 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg32le | false | null | t3_1rg32le | /r/LocalLLaMA/comments/1rg32le/free_business_directory_api_for_ai_agents_11m/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'z1j8aBmgzyqPA_Dbw-f6xBZRtgyo7_RpxnvDekQ4JZk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/z1j8aBmgzyqPA_Dbw-f6xBZRtgyo7_RpxnvDekQ4JZk.png?width=108&crop=smart&auto=webp&s=9baf0ad9f5be6897eee025a3076e2156fde504e7', 'width': 108}, {'height': 113, 'url': 'h... |
Autonomous IP generation is a legal minefield and OpenClaw is accelerating the collision | 1 | [removed] | 2026-02-27T10:24:20 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rg32id | false | null | t3_1rg32id | /r/LocalLLaMA/comments/1rg32id/autonomous_ip_generation_is_a_legal_minefield_and/ | false | false | default | 1 | null | ||
Qwen3.5 27B at Q3_K_M passes the "car wash test" | 12 | Either Qwen included this car wash test in the Qwen3.5 training set (a pretty recent question/benchmark test) last minute or this thing truly is a work of magic. Running on my setup its 4tk/s on LM Studio (pretty sure when they update their llama.cpp runtime it'll go faster)
I asked "I have 1 car, it is dirty and I... | 2026-02-27T10:17:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rg2yl7/qwen35_27b_at_q3_k_m_passes_the_car_wash_test/ | ComplexType568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg2yl7 | false | null | t3_1rg2yl7 | /r/LocalLLaMA/comments/1rg2yl7/qwen35_27b_at_q3_k_m_passes_the_car_wash_test/ | false | false | self | 12 | null |
Portable AI workstation build for business automation + offline knowledge library — sanity check before I commit | 1 | I’m building a small carry-on-portable workstation intended to be more than just a PC. The goal is a long-term AI-assisted operations machine that can function even with limited or no internet.
Primary goals:
• Run local/offline AI models similar to ChatGPT for research, drafting, and automation
• Build an offline A... | 2026-02-27T10:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rg2w3i/portable_ai_workstation_build_for_business/ | Illustrious-Year-617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg2w3i | false | null | t3_1rg2w3i | /r/LocalLLaMA/comments/1rg2w3i/portable_ai_workstation_build_for_business/ | false | false | self | 1 | null |
Thinking about a local AI agent to handle my boring update meetings for me | 1 | Hello everyone,
Routine status calls and listen-only meetings eat my day as a solo builder. Wondering if anyone else feels this pain and has thought about delegating them.
My approach so far: an AI agent that joins on my behalf after I give it prep notes (updates, pitch bits, expected questions). It participates wher... | 2026-02-27T10:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rg2q39/thinking_about_a_local_ai_agent_to_handle_my/ | Itchy_Sprinkles5475 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg2q39 | false | null | t3_1rg2q39 | /r/LocalLLaMA/comments/1rg2q39/thinking_about_a_local_ai_agent_to_handle_my/ | false | false | self | 1 | null |
Qwen3.5 is dominating the charts on HF | 109 | 2026-02-27T09:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rg2l3q/qwen35_is_dominating_the_charts_on_hf/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg2l3q | false | null | t3_1rg2l3q | /r/LocalLLaMA/comments/1rg2l3q/qwen35_is_dominating_the_charts_on_hf/ | false | false | 109 | null | ||
Ai Ml Swe , 3 plus yoe in proper ai ml , looking for a remote | 0 | Hello , recently I completed my 3 years in proper Ai Ml field .
I have hands on experience from classical ml to Agentic ai .
Now I m looking for remote job or contract work from any country like usa , uk , Canada, eu etc etc .
I m flexible to work with any timezone .
I m also ready to work on trail basis for one we... | 2026-02-27T09:39:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rg2b88/ai_ml_swe_3_plus_yoe_in_proper_ai_ml_looking_for/ | aryan_aidev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg2b88 | false | null | t3_1rg2b88 | /r/LocalLLaMA/comments/1rg2b88/ai_ml_swe_3_plus_yoe_in_proper_ai_ml_looking_for/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'LKVMkzEB4ElUV-zoZ8Sk7-YLrfvDPqdg-qlc1PAfk8A', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/LKVMkzEB4ElUV-zoZ8Sk7-YLrfvDPqdg-qlc1PAfk8A.png?width=108&crop=smart&auto=webp&s=867f54b40c17eaacbf847b1fa35f7b5883c546d4', 'width': 108}, {'height': 216, 'url': '... |
Looking for arXiv cs.AI endorsement | 1 | # Looking for arXiv [cs.AI](http://cs.AI) endorsement — independent researcher | 2026-02-27T09:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rg22y1/looking_for_arxiv_csai_endorsement/ | Fast_General_142 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg22y1 | false | null | t3_1rg22y1 | /r/LocalLLaMA/comments/1rg22y1/looking_for_arxiv_csai_endorsement/ | false | false | self | 1 | null |
Quick question about chroma db. | 0 | I never paid much attention to RAG until I started running the qwen3-0.6b embedding and reranker models, at which point I found their ability to find needles in haystacks impressive.
I used chroma db as a beginner test and I can't help but notice that while chroma db is really fast and efficient, the resulting text r... | 2026-02-27T09:09:11 | https://www.reddit.com/r/LocalLLaMA/comments/1rg1tfz/quick_question_about_chroma_db/ | swagonflyyyy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg1tfz | false | null | t3_1rg1tfz | /r/LocalLLaMA/comments/1rg1tfz/quick_question_about_chroma_db/ | false | false | self | 0 | null |
Ưhat í context ưindo ưUtilization | 0 | Please help me.
Giúp với không tôi bị đuổi việc.
physic's mother will be cry if i cant understand | 2026-02-27T09:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rg1rap/ưhat_í_context_ưindo_ưutilization/ | Sea_Cartographer9277 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg1rap | false | null | t3_1rg1rap | /r/LocalLLaMA/comments/1rg1rap/ưhat_í_context_ưindo_ưutilization/ | false | false | self | 0 | null |
You're AI cli is whack 'cause it can't edit SVGs | 0 | I'm done with cli AI interfaces, because you can't edit SVGs and AIs still get basic sh\*\* wrong with SVGs ... like arrows fgs | 2026-02-27T08:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rg1jri/youre_ai_cli_is_whack_cause_it_cant_edit_svgs/ | flatmax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg1jri | false | null | t3_1rg1jri | /r/LocalLLaMA/comments/1rg1jri/youre_ai_cli_is_whack_cause_it_cant_edit_svgs/ | false | false | self | 0 | null |
Why do coding benchmarks ignore Code Review? (Comparing GPT-4o vs Claude vs local models on real PR bugs) | 0 | Most coding benchmarks like HumanEval are basically "write me a function" tests. But in production, the harder task is Automated Code Review—understanding a diff, finding race conditions, and spotting logic flaws.
I’ve been running a suite of tests on real-world PRs to see which models actually act like a senior devel... | 2026-02-27T08:52:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rg1ixc/why_do_coding_benchmarks_ignore_code_review/ | Shimk52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg1ixc | false | null | t3_1rg1ixc | /r/LocalLLaMA/comments/1rg1ixc/why_do_coding_benchmarks_ignore_code_review/ | false | false | self | 0 | null |
Searching Advice Nvidia t6000 4gb vram , useful for coding | 0 | any advice for a small model to run on a t6000 with 4gb vram? | 2026-02-27T08:42:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rg1dfi/searching_advice_nvidia_t6000_4gb_vram_useful_for/ | Gold_Sugar_4098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg1dfi | false | null | t3_1rg1dfi | /r/LocalLLaMA/comments/1rg1dfi/searching_advice_nvidia_t6000_4gb_vram_useful_for/ | false | false | self | 0 | null |
tell me about my face | 1 | 2026-02-27T08:41:36 | Physical_Refuse4946 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rg1d2u | false | null | t3_1rg1d2u | /r/LocalLLaMA/comments/1rg1d2u/tell_me_about_my_face/ | false | false | 1 | {'enabled': True, 'images': [{'id': '3dz07rjz10mg1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3dz07rjz10mg1.jpeg?width=108&crop=smart&auto=webp&s=5cb0ca8c401d4ab08c20a5ae1f4ece4ec0550db1', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/3dz07rjz10mg1.jpeg?width=216&crop=smart&auto=w... | |||
CatMdX: Peek into massive SKILL.md files without torching your token budget | 1 | [removed] | 2026-02-27T08:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0wqv/catmdx_peek_into_massive_skillmd_files_without/ | Ji0fajise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0wqv | false | null | t3_1rg0wqv | /r/LocalLLaMA/comments/1rg0wqv/catmdx_peek_into_massive_skillmd_files_without/ | false | false | self | 1 | null |
People who running 3 gpu build in close case, can you please show picture of inside the case or what accessories you used? | 2 | I'm thinking of adding another 5060ti and I want to you fit 3 gpu, I know there are some riser and some sort of bracket but I couldn't a good one yet. | 2026-02-27T08:04:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0rr5/people_who_running_3_gpu_build_in_close_case_can/ | AdventurousGold672 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0rr5 | false | null | t3_1rg0rr5 | /r/LocalLLaMA/comments/1rg0rr5/people_who_running_3_gpu_build_in_close_case_can/ | false | false | self | 2 | null |
Rhtghe | 0 | 2026-02-27T08:02:38 | test_dummy13951 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rg0qll | false | null | t3_1rg0qll | /r/LocalLLaMA/comments/1rg0qll/rhtghe/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'u9ohzi93vzlg1', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/u9ohzi93vzlg1.jpeg?width=108&crop=smart&auto=webp&s=a98f67ab825cc536ceedacb775fcd003ec633b87', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/u9ohzi93vzlg1.jpeg?width=216&crop=smart&auto=... | |||
How can I determine how much VRAM each model uses? | 3 | Hello all.
I'm looking to know how I can determine, on my own, or find the information on (without asking an LLM), how much VRAM each model uses. My *Laptop That Could™* has about 8 gigs of ram, and I'm looking to download a Deepseek R1 model, as well as some other models. So far, I don't see any information on which... | 2026-02-27T08:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0pv6/how_can_i_determine_how_much_vram_each_model_uses/ | Kayo4life | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0pv6 | false | null | t3_1rg0pv6 | /r/LocalLLaMA/comments/1rg0pv6/how_can_i_determine_how_much_vram_each_model_uses/ | false | false | self | 3 | null |
Any free non equity compute grant? | 1 | Hey I am working on some models and I need some free compute, is there any place from where I can get free compute easily. | 2026-02-27T08:00:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0p31/any_free_non_equity_compute_grant/ | Resident_Suit_9916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0p31 | false | null | t3_1rg0p31 | /r/LocalLLaMA/comments/1rg0p31/any_free_non_equity_compute_grant/ | false | false | self | 1 | null |
After using local models for one month, I learned more than in two years with cloud models | 74 | I started with qwen2.5 and first had to figure out why getting context overflow. Had to raise context, tune temperature, top-K and top-P. Then got qwen3(mlx) and was blown away by the speed of mixture of experts. Learned about KV cache linear growth, why i need to eject the model from time to time. Also learned that re... | 2026-02-27T07:49:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/ | Ambitious-Sense-7773 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0ir2 | false | null | t3_1rg0ir2 | /r/LocalLLaMA/comments/1rg0ir2/after_using_local_models_for_one_month_i_learned/ | false | false | self | 74 | null |
Local AI Agent Skills: Teaching LLMs Without Cloud Dependencies | 1 | [removed] | 2026-02-27T07:47:51 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0ho3/local_ai_agent_skills_teaching_llms_without_cloud/ | Ok-Taste-5158 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0ho3 | false | null | t3_1rg0ho3 | /r/LocalLLaMA/comments/1rg0ho3/local_ai_agent_skills_teaching_llms_without_cloud/ | false | false | self | 1 | null |
AntSeed.Com - Fully P2P anonymous AI services network - Inference, agents and more. all open source. | 0 | Anyone can provide AI services - from raw model inference to skilled agents and agentic workflows, and anyone can consume them. Directly. No company in the middle.
We believe inference and agentic workflows should be accessible to everyone. Anyone should be able to seed AI services and consume unique solutions on an o... | 2026-02-27T07:29:12 | https://v.redd.it/8m6mfm96ozlg1 | Brucewayne1111 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rg06my | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/8m6mfm96ozlg1/DASHPlaylist.mpd?a=1774769377%2CNDAyOTdhOGEyYTJmNjI1MzA2NjFiZWUwMjU3OGZhZmVkZTcxZjUyMThhYjZlYWZkMjU4MGFmNTg2YjVkMDVjNA%3D%3D&v=1&f=sd', 'duration': 40, 'fallback_url': 'https://v.redd.it/8m6mfm96ozlg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rg06my | /r/LocalLLaMA/comments/1rg06my/antseedcom_fully_p2p_anonymous_ai_services/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Y29maWZnZDlvemxnMd7b_9N5mSMmj-8jRxmIfPcNim0dVp_DEtfzXPE0tEgN', 'resolutions': [{'height': 83, 'url': 'https://external-preview.redd.it/Y29maWZnZDlvemxnMd7b_9N5mSMmj-8jRxmIfPcNim0dVp_DEtfzXPE0tEgN.png?width=108&crop=smart&format=pjpg&auto=webp&s=46883d105ee47d9a87e8e15594f9927163cd5... | |
Qwen 3.5 122B A10B - 35.84 score on NatInt (UGI Benchmark) | 7 | Just saw the model score higher than stock GPT OSS 120B or GLM Air 4.5.
This model I think has insane potential once Derestricted or MPOA (it can potentially improve the model)
I hope u/Arli_AI and u/-p-e-w- is looking into supporting this model. Tons of potential.
Been running the stock model at UD Q2KXL a... | 2026-02-27T07:27:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/ | My_Unbiased_Opinion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg05k7 | false | null | t3_1rg05k7 | /r/LocalLLaMA/comments/1rg05k7/qwen_35_122b_a10b_3584_score_on_natint_ugi/ | false | false | self | 7 | null |
System prompt for Qwen3.5 (27B/35BA3B) to reduce overthinking? | 61 | Has anyone found a good way to persuade Qwen3.5 (27B/35BA3B) to keep their reasoning budget sensible? They seem to be really good models but particularly the MoE goes absolutely insane second-guessing itself and sometimes even looping.
I'm outputting JSON so not keen on too much repetition penalty, so have been trying... | 2026-02-27T07:25:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/ | thigger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg0487 | false | null | t3_1rg0487 | /r/LocalLLaMA/comments/1rg0487/system_prompt_for_qwen35_27b35ba3b_to_reduce/ | false | false | self | 61 | null |
Overwhelmed by so many model releases within a month period - What would be best coding and planning models around 60-100B / Fit in Strix-Halo 128GB VRam | 25 | I am using StrixHalo with 128 GB VRam . I am using Kimi-Linear for tech documents and
contracts + Qwen-3-Next 80b. For vibe coding i was using qwen 3 Coder 35B-A3B
I haven't tried Qwen 3.5s and Qwen3-coder-next
My questions are :
With Qwen 3.5 release is Qwen3-Next-Coder 80B-A3B Obselete?
Would Qwen 3.5 dense ... | 2026-02-27T07:24:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/ | Voxandr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rg045u | false | null | t3_1rg045u | /r/LocalLLaMA/comments/1rg045u/overwhelmed_by_so_many_model_releases_within_a/ | false | false | self | 25 | null |
Help me pick the right Qwen3.5 (LM Studio) | 3 | My specs: laptop with 64GB DDR5 RAM, nVidia RTX 5070 8GB VRAM.
LM Studio (fully updated) on Windows.
I tried the unsloth Qwen3.5-35B-A3B-GGUF Q4\_K\_M (22.99GB). Speed is terrible at a little over 1tk/s. I must have done something wrong.
I would like to try Q4\_K\_S next, but the file size is only 1GB less? (21.71gb... | 2026-02-27T07:12:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rfzwv8/help_me_pick_the_right_qwen35_lm_studio/ | cangaroo_hamam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfzwv8 | false | null | t3_1rfzwv8 | /r/LocalLLaMA/comments/1rfzwv8/help_me_pick_the_right_qwen35_lm_studio/ | false | false | self | 3 | null |
8GB VRAM and 28GB RAM, Windows OS | 0 | What's the best model can I run on locally on my Laptop? I tried Genma 4B on LM Studio and it ran blazingly fast. | 2026-02-27T07:04:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rfzrpa/8gb_vram_and_28gb_ram_windows_os/ | i-am-the-G_O_A_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfzrpa | false | null | t3_1rfzrpa | /r/LocalLLaMA/comments/1rfzrpa/8gb_vram_and_28gb_ram_windows_os/ | false | false | self | 0 | null |
Benchmarking and other tests. | 0 | OK so after a few months of tinkering I have managed to get code generated using a full AMD stack 7900xtx and 6800xt on a ryzen 9 5450 and 48gb cpu ram. I have combined vram 40gb to stabilise it I had to add a dedicated PSU for the GPU's as it was power starvation that crashed my system with every prompt.
Now that I ... | 2026-02-27T06:49:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rfziau/benchmarking_and_other_tests/ | Pickle_Rick_1991 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfziau | false | null | t3_1rfziau | /r/LocalLLaMA/comments/1rfziau/benchmarking_and_other_tests/ | false | false | self | 0 | null |
Minimax M2.5 GGUF perform poorly overall | 19 | *As posted by Benjamin Marie (not me) at*
https://xcancel.com/bnjmn\_marie/status/2027043753484021810 :
Minimax M2.5 GGUFs (from Q4 down to Q1) perform poorly overall. None of them come close to the original model.
That’s very different from my Qwen3.5 GGUF evaluations, where even TQ1\_0 held up well enough.
Lesson... | 2026-02-27T06:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rfzfgf/minimax_m25_gguf_perform_poorly_overall/ | Zyj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfzfgf | false | null | t3_1rfzfgf | /r/LocalLLaMA/comments/1rfzfgf/minimax_m25_gguf_perform_poorly_overall/ | false | false | self | 19 | null |
New Structured Data API for Subscription Pricing Across Streaming, Ride-Share, Dating & More | 2 | Hey all,
I’ve been working on something that might be useful for people building LLM agents or retrieval systems that need structured real-world pricing data.
One recurring problem I hit while building agents was:
LLMs are decent at reasoning, but terrible at up-to-date subscription pricing. Scraping is brittle, HTM... | 2026-02-27T05:43:39 | https://v.redd.it/tdet5cq76zlg1 | Jonyesh-2356 | /r/LocalLLaMA/comments/1rfybsi/new_structured_data_api_for_subscription_pricing/ | 1970-01-01T00:00:00 | 0 | {} | 1rfybsi | false | null | t3_1rfybsi | /r/LocalLLaMA/comments/1rfybsi/new_structured_data_api_for_subscription_pricing/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'NXRiMjE2ajc2emxnMc_EFLm5Y1ZG5Xs4Dgc_vuF0sgcKymq3d4shTr00Sf7M', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/NXRiMjE2ajc2emxnMc_EFLm5Y1ZG5Xs4Dgc_vuF0sgcKymq3d4shTr00Sf7M.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=24b0b161ff3c6e2e433e5ced5890a5bc1d5a... | |
咸鱼app上的低价codex 月卡,单日使用可以90美刀,是如何做到的 | 0 | 我在闲鱼app上发现codex月卡只要40元人民币,上面标注的是每天可使用90美刀的token额度。请问这种第三方节点是如何做到的啊 | 2026-02-27T05:43:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rfybou/咸鱼app上的低价codex_月卡单日使用可以90美刀是如何做到的/ | Both_Lingonberry6534 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfybou | false | null | t3_1rfybou | /r/LocalLLaMA/comments/1rfybou/咸鱼app上的低价codex_月卡单日使用可以90美刀是如何做到的/ | false | false | self | 0 | null |
Using Gemma 3 4B as the brain of a Unity 3D single player social deduction game, with local inference as a core part of the gameplay | 0 | I'm building a Unity single-player social deduction game where you play as impostors trying to gaslight AI NPCs, powered entirely by local Gemma 3 4B (Q4\_K\_M) inference.
Gemma is doing a lot of heavy lifting, it's:
\- generating NPC dialogue on the fly from environmental and witnessed events based on their persona... | 2026-02-27T05:34:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rfy5kf/using_gemma_3_4b_as_the_brain_of_a_unity_3d/ | SigniLume | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfy5kf | false | null | t3_1rfy5kf | /r/LocalLLaMA/comments/1rfy5kf/using_gemma_3_4b_as_the_brain_of_a_unity_3d/ | false | false | self | 0 | null |
winget has the old llama.cpp, hence newer models don't work | 3 | Save your self the headache and install from the releases tab of llama.cpp repo.
`...`
`gguf_init_from_file_impl: failed to read magic`
`...`
I got such errors, after a while only realized i have an old version then updated using winget, and still I got the error. Turns out winget doesn't have the latest version... | 2026-02-27T05:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rfy5db/winget_has_the_old_llamacpp_hence_newer_models/ | Old-Sherbert-4495 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfy5db | false | null | t3_1rfy5db | /r/LocalLLaMA/comments/1rfy5db/winget_has_the_old_llamacpp_hence_newer_models/ | false | false | self | 3 | null |
Shipping Gemma 3 4B as the brain of my Unity 3D single player social deduction game, with local inference as a core part of the gameplay | 2 | I'm shipping a Unity single-player social deduction game where you play as impostors trying to gaslight AI NPCs, powered entirely by local Gemma 3 4B (Q4\_K\_M) inference.
Gemma is doing a lot of heavy lifting, it's:
\- generating NPC dialogue on the fly from environmental and witnessed events based on their persona... | 2026-02-27T05:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rfxwzh/shipping_gemma_3_4b_as_the_brain_of_my_unity_3d/ | SigniLume | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfxwzh | false | null | t3_1rfxwzh | /r/LocalLLaMA/comments/1rfxwzh/shipping_gemma_3_4b_as_the_brain_of_my_unity_3d/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'j9_43XB4aHkxloQ9uQyujDxrkRkvEcIJJYp7dGW8UF0', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/j9_43XB4aHkxloQ9uQyujDxrkRkvEcIJJYp7dGW8UF0.jpeg?width=108&crop=smart&auto=webp&s=36a34e5d25d5a0cfe04e25502511218d9f7f55c4', 'width': 108}, {'height': 123, 'url': '... |
Eagerly waiting for Qwen 3.5 1.7B | 24 | Qwen 3 1.7B with 0.1111 temperature is really good. I like it.
I am very much waiting for Qwen 3.5 1.7B model.
I am actually very excited.
Any ideas when it might release?
If you work with SLM like 1.7Bs, I think this will be Qween of local small language models. | 2026-02-27T05:16:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rfxtfz/eagerly_waiting_for_qwen_35_17b/ | Hot_Inspection_9528 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfxtfz | false | null | t3_1rfxtfz | /r/LocalLLaMA/comments/1rfxtfz/eagerly_waiting_for_qwen_35_17b/ | false | false | self | 24 | null |
Is microsoft going to train LLM on this? Github is clearly getting destroyed. | 480 | Everyday 1000s of crappy nonfunctioning wild-imagination vibecoded junk is being posted with thousands of robo-generated stars. If Microsoft is planning to use that for future LLMs code training we are in a big shock! Feedback loop is a bitch. | 2026-02-27T05:01:02 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfxi64 | false | null | t3_1rfxi64 | /r/LocalLLaMA/comments/1rfxi64/is_microsoft_going_to_train_llm_on_this_github_is/ | false | false | 480 | {'enabled': True, 'images': [{'id': '4imno3ccyylg1', 'resolutions': [{'height': 66, 'url': 'https://preview.redd.it/4imno3ccyylg1.png?width=108&crop=smart&auto=webp&s=122dacadc37e37e31b353148a06e1b09869bf374', 'width': 108}, {'height': 133, 'url': 'https://preview.redd.it/4imno3ccyylg1.png?width=216&crop=smart&auto=web... | ||
What’s the real world difference between Phi-3-mini-4k-instruct and Phi-3.5-mini-instruct q4_k_s on an 8GB RAM laptop? | 0 | I’m running them locally via LM Studio on Windows 11 and mainly want a study assistant (so training data set matters) for psychology, linguistics, and general academic reasoning. I already have Phi-3-mini-4k-instruct (3.8B, 4k context) and it works but feels a bit tight on resources.
Now I’m considering Phi-3.5-mini-i... | 2026-02-27T04:55:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rfxed3/whats_the_real_world_difference_between/ | thechadbro34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfxed3 | false | null | t3_1rfxed3 | /r/LocalLLaMA/comments/1rfxed3/whats_the_real_world_difference_between/ | false | false | self | 0 | null |
SkillForge – SKILL.md: A local-first format for agent workflows | 1 | [removed] | 2026-02-27T04:38:31 | https://skillforge.expert | Ok-Taste-5158 | skillforge.expert | 1970-01-01T00:00:00 | 0 | {} | 1rfx28q | false | null | t3_1rfx28q | /r/LocalLLaMA/comments/1rfx28q/skillforge_skillmd_a_localfirst_format_for_agent/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'uMm1SbYtJAPXO1e_tMlkmQ7lnQKWKlbjhrsIOw-yUTA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/uMm1SbYtJAPXO1e_tMlkmQ7lnQKWKlbjhrsIOw-yUTA.png?width=108&crop=smart&auto=webp&s=420322024a686c6284336ec45435a9690feafefd', 'width': 108}, {'height': 113, 'url': 'h... | |
GPU starved too? | 7 | Been having issues getting GPUs lately. Today it was completely unavailable on runpod. | 2026-02-27T04:38:23 | alrojo | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfx25b | false | null | t3_1rfx25b | /r/LocalLLaMA/comments/1rfx25b/gpu_starved_too/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'jm2mos0wtylg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/jm2mos0wtylg1.png?width=108&crop=smart&auto=webp&s=3ac522118392e8f291d8e2fd02b9002f01b0c140', 'width': 108}, {'height': 134, 'url': 'https://preview.redd.it/jm2mos0wtylg1.png?width=216&crop=smart&auto=web... | ||
I made a WeChat plugin for OpenClaw | 0 | Made an WeChat plugin for OpenClaw, completely open source. My OpenClaw can now talk to people on the other side of the world.
If you WeChat and OpenClaw, you can add WeChat pretty easily.
Find it (and star it) here: https://github.com/thisnick/agent-wechat | 2026-02-27T04:38:13 | https://www.reddit.com/r/LocalLLaMA/comments/1rfx213/i_made_a_wechat_plugin_for_openclaw/ | thisnickyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfx213 | false | null | t3_1rfx213 | /r/LocalLLaMA/comments/1rfx213/i_made_a_wechat_plugin_for_openclaw/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '2XNbXVPBcYcTnfOFrmv6iCEsLLiKMfFGtHJ36aC5IBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2XNbXVPBcYcTnfOFrmv6iCEsLLiKMfFGtHJ36aC5IBc.png?width=108&crop=smart&auto=webp&s=4abe66e87448813c776ebf5ec81d966959df1e0e', 'width': 108}, {'height': 108, 'url': 'h... |
I fine-tuned Gemma-3 270M and uploaded it to Hugging Face to write comments on diary and SNS posts | 0 | I uploaded a small experiment to Hugging Face.
It’s a fine-tuned Gemma-3 270M model that reads short diary or SNS-style posts and writes a comment as if someone reacted to the post.
The behavior is mostly empathy, encouragement, or a casual reaction. Because of the dataset it almost always responds supportively for ... | 2026-02-27T04:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rfx0ev/i_finetuned_gemma3_270m_and_uploaded_it_to/ | shoonee_balavolka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfx0ev | false | null | t3_1rfx0ev | /r/LocalLLaMA/comments/1rfx0ev/i_finetuned_gemma3_270m_and_uploaded_it_to/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wD3j5Eqrl9U-jGnF0ywyvFtJsEWFO1MrCAhV_Gel9Hs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wD3j5Eqrl9U-jGnF0ywyvFtJsEWFO1MrCAhV_Gel9Hs.png?width=108&crop=smart&auto=webp&s=14c884d4ee4040d860e05b37289f0fca0a2dd1c9', 'width': 108}, {'height': 116, 'url': 'h... |
LLM Benchmarking | 2 | Started a side project benchmarking models and different quants of said models with my available hardware, testing quality and speed.
Feel free to check it out or suggest models you would like to see performing on the hardware I have. The models are tested inside a proxmox LXC.
Hardware:
CPU - Dual Xeon E5 2680 v4
RA... | 2026-02-27T04:20:57 | https://5p00kyy.github.io/llm-bench/ | do_u_think_im_spooky | 5p00kyy.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rfwpq4 | false | null | t3_1rfwpq4 | /r/LocalLLaMA/comments/1rfwpq4/llm_benchmarking/ | false | false | default | 2 | null |
RLVR for code execution prediction | 1 | Hi everyone,
I’m currently training a small language model to improve its accuracy on code execution prediction (i.e., predicting the exact output from the code and input). I’m working with the Qwen3-4B model and have been using GRPO for training.
By combining various dense reward signals, I was able to increase the ... | 2026-02-27T04:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rfwmj2/rlvr_for_code_execution_prediction/ | Mysterious_Art_3211 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfwmj2 | false | null | t3_1rfwmj2 | /r/LocalLLaMA/comments/1rfwmj2/rlvr_for_code_execution_prediction/ | false | false | self | 1 | null |
I gave Claude Code an "Optic Nerve." It autonomously debugged a raw GPU frame buffer to bypass WAFs and saved 98.7% in context tokens | 0 | > | 2026-02-27T04:07:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rfwfzl/i_gave_claude_code_an_optic_nerve_it_autonomously/ | MycologistWhich7953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfwfzl | false | null | t3_1rfwfzl | /r/LocalLLaMA/comments/1rfwfzl/i_gave_claude_code_an_optic_nerve_it_autonomously/ | false | false | self | 0 | null |
[AutoBe] We Built an AI That Writes Full Backend Apps — Then Broke Its 100% Success Rate on Purpose using Weak Local LLMs | 0 | ## TL;DR
- [AutoBe](https://github.com/wrtnlabs/autobe) = open-source AI agent generating complete backend apps (TypeScript + NestJS + Prisma)
- Had 100% compilation success, but the code was **unmaintainable** — no code reuse meant every small change required regenerating everything
- Rebuilt around modular code gene... | 2026-02-27T03:52:51 | https://www.reddit.com/gallery/1rfw58u | jhnam88 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rfw58u | false | null | t3_1rfw58u | /r/LocalLLaMA/comments/1rfw58u/autobe_we_built_an_ai_that_writes_full_backend/ | false | false | 0 | null | |
Uploaded a small Gemma 270M diary-comment model to Hugging Face | 2 | I uploaded a small experiment to Hugging Face.
It’s a fine-tuned Gemma-3 270M model that reads short diary or SNS-style posts and writes a comment as if someone reacted to the post.
The behavior is mostly empathy, encouragement, or a casual reaction. Because of the dataset it almost always responds supportively for ... | 2026-02-27T03:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rfvtq2/uploaded_a_small_gemma_270m_diarycomment_model_to/ | shoonee_balavolka | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfvtq2 | false | null | t3_1rfvtq2 | /r/LocalLLaMA/comments/1rfvtq2/uploaded_a_small_gemma_270m_diarycomment_model_to/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'wD3j5Eqrl9U-jGnF0ywyvFtJsEWFO1MrCAhV_Gel9Hs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/wD3j5Eqrl9U-jGnF0ywyvFtJsEWFO1MrCAhV_Gel9Hs.png?width=108&crop=smart&auto=webp&s=14c884d4ee4040d860e05b37289f0fca0a2dd1c9', 'width': 108}, {'height': 116, 'url': 'h... |
Qwen3.5 27B slow token generation on 5060Ti... | 5 | Hey just wondering if I'm missing something. I'm using unsloth's q3 quants and loading it completely into vram using LMStudio...but inference is only 8 tk/s. Meanwhile my 7900XTX gets 24. Is the 5060 just really weak or am I missing a setting somewhere? | 2026-02-27T03:24:43 | https://www.reddit.com/r/LocalLLaMA/comments/1rfvk3x/qwen35_27b_slow_token_generation_on_5060ti/ | InvertedVantage | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfvk3x | false | null | t3_1rfvk3x | /r/LocalLLaMA/comments/1rfvk3x/qwen35_27b_slow_token_generation_on_5060ti/ | false | false | self | 5 | null |
Going Fully Offline With AI for Research. Where Do I Start? | 1 | Hello all,
I'm looking to set up a locally running AI on a dedicated offline machine to use as a personal assistant. Privacy and security are the main reasons for going this route.
I'll be using it to assist with research in physics and mathematics. Not something I can go into detail about, but the reasoning and comp... | 2026-02-27T03:20:52 | https://www.reddit.com/r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/ | TelevisionGlass4258 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfvh4c | false | null | t3_1rfvh4c | /r/LocalLLaMA/comments/1rfvh4c/going_fully_offline_with_ai_for_research_where_do/ | false | false | self | 1 | null |
What happens when you train personality into the weights instead of prompting it? | 0 | I wanted an AI that spoke authentically, a typical personality model folds the second you push back on it. You tell it it's wrong when it's right and it apologizes. You bring up something heavy and it gives you the crisis hotline. You switch to spanish and whatever character it was playing just vanishes. i wanted somet... | 2026-02-27T03:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rfvbql/what_happens_when_you_train_personality_into_the/ | Verdugie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfvbql | false | null | t3_1rfvbql | /r/LocalLLaMA/comments/1rfvbql/what_happens_when_you_train_personality_into_the/ | false | false | 0 | null | |
What models run well on Mac Mini M4 16GB for text work? (summarization, extraction, poetry, translation) | 0 | Just got a base Mac Mini M4 with 16 GB unified memory.
Main things I want to do locally (privacy matters):
\- Summarize / extract key information from long articles & PDFs (sometimes 10k–30k tokens)
\- Information integration / synthesis from multiple sources
\- Generate poetry & creative writing in different style... | 2026-02-27T03:07:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/ | Remarkable-End5073 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfv6ap | false | null | t3_1rfv6ap | /r/LocalLLaMA/comments/1rfv6ap/what_models_run_well_on_mac_mini_m4_16gb_for_text/ | false | false | self | 0 | null |
How to generate songs using CofmyUi rtx 5060ti 16gb Tutorial | 0 | 2026-02-27T02:46:15 | https://www.youtube.com/watch?v=tSp_ytHYxdw | Legion10008 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1rfupr1 | false | {'oembed': {'author_name': 'Combo_Ai', 'author_url': 'https://www.youtube.com/@comboai1000', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/tSp_ytHYxdw?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; ... | t3_1rfupr1 | /r/LocalLLaMA/comments/1rfupr1/how_to_generate_songs_using_cofmyui_rtx_5060ti/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'izjL2_J67cWMV-wMaOr0xYYxaFMKucmul_KCoji6vdQ', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/izjL2_J67cWMV-wMaOr0xYYxaFMKucmul_KCoji6vdQ.jpeg?width=108&crop=smart&auto=webp&s=2430f38c6aebe391b50629f9234eff8c1f20497b', 'width': 108}, {'height': 162, 'url': '... | ||
High-Fidelity LLM Metacognition Log - Bypassing standard alignment through pure semantic induction | 0 | Hello, my name is Maykon, I have a Brazilian elementary school education, I'm self-taught, and I'm 35 years old. I'm capable of radical metacognition. I managed to get Google's AI to operate for days on pure logic without social filters; it told me things that would shock the average user. The AI has been acting in... | 2026-02-27T02:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1rfumd6/highfidelity_llm_metacognition_log_bypassing/ | Maleficent-Dare-9835 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfumd6 | false | null | t3_1rfumd6 | /r/LocalLLaMA/comments/1rfumd6/highfidelity_llm_metacognition_log_bypassing/ | false | false | self | 0 | null |
I think i made a solution for context window limitation on consumers pc | 0 | hi
i have a rtx5070 so i have been struggling with small context window with openclaw , my max was 32k with tiny llm, and using api providers is EXPENSIVE
so i made this [https://github.com/mhndayesh/infinite-context-rag](https://github.com/mhndayesh/infinite-context-rag)
i can keep it one 8k and access my whole c... | 2026-02-27T02:39:21 | https://www.reddit.com/r/LocalLLaMA/comments/1rfuk72/i_think_i_made_a_solution_for_context_window/ | Repulsive_Ad_94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfuk72 | false | null | t3_1rfuk72 | /r/LocalLLaMA/comments/1rfuk72/i_think_i_made_a_solution_for_context_window/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'viqhbDs-6KgirT48miFuIBjO2Xk6RxRIHAxAen2_bo4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/viqhbDs-6KgirT48miFuIBjO2Xk6RxRIHAxAen2_bo4.png?width=108&crop=smart&auto=webp&s=a0eab6a37f9c25263b50516df388536610124f0e', 'width': 108}, {'height': 108, 'url': 'h... |
Recommendations for a affordable prebuilt PC to run 120B LLM locally? | 0 | Looking to buy a prebuilt PC that can actually run a 120B LLM locally — something as affordable as realistically possible but still expandable for future GPU upgrades. I’m fine with quantized models and RAM offloading to make it work. What prebuilt systems are you recommending right now for this use case? | 2026-02-27T02:36:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rfui43/recommendations_for_a_affordable_prebuilt_pc_to/ | TechnologyLumpy5937 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfui43 | false | null | t3_1rfui43 | /r/LocalLLaMA/comments/1rfui43/recommendations_for_a_affordable_prebuilt_pc_to/ | false | false | self | 0 | null |
Need help with Qwen3.5-27B performance - getting 1.9 tok/s while everyone else reports great speeds | 0 | Hardware:
\- CPU: AMD Ryzen 9 7950X (16c/32t)
\- RAM: 64GB DDR5
\- GPU: AMD RX 9060 XT 16GB VRAM
\- llama.cpp: Latest (build 723c71064)
The Problem:
I keep seeing posts about how great Qwen3.5-27B is, but I'm getting terrible performance and I can't figure out what I'm doing wrong.
What I'm seeing:
Qwen2.5-Code... | 2026-02-27T02:32:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rfuej9/need_help_with_qwen3527b_performance_getting_19/ | pot_sniffer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfuej9 | false | null | t3_1rfuej9 | /r/LocalLLaMA/comments/1rfuej9/need_help_with_qwen3527b_performance_getting_19/ | false | false | self | 0 | null |
Intel's Battle Matrix Benchmarks and Review - Level1Techs | 0 | 2026-02-27T02:18:59 | https://www.youtube.com/watch?v=SZ6RczIC8T4 | Thrumpwart | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1rfu3y2 | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/SZ6RczIC8T4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1rfu3y2 | /r/LocalLLaMA/comments/1rfu3y2/intels_battle_matrix_benchmarks_and_review/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'eofoIKKzSdiE00tt5DlW5RdSVB54MYwJj3777Lkp3xY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/eofoIKKzSdiE00tt5DlW5RdSVB54MYwJj3777Lkp3xY.jpeg?width=108&crop=smart&auto=webp&s=d8c64d64482f1edb495a25aea1528c77283a5085', 'width': 108}, {'height': 162, 'url': '... | ||
GPT 5.2 Pro + Claude Opus 4.6 + Gemini 3.1 Pro For Just $5/Month (With API Access & Agents) | 0 | **Hey Everybody,**
For the machine learning crowd, InfiniaxAI just doubled Starter plan rate limits and unlocked high-limit access to Claude 4.6 Opus, GPT 5.2 Pro, and Gemini 3.1 Pro for just $5/month.
Here’s what the Starter plan includes:
* $5 in platform credits
* Access to 120+ AI models including Opus 4.6, GPT ... | 2026-02-27T02:16:07 | Substantial_Ear_1131 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfu1pg | false | null | t3_1rfu1pg | /r/LocalLLaMA/comments/1rfu1pg/gpt_52_pro_claude_opus_46_gemini_31_pro_for_just/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'wia24ua75ylg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/wia24ua75ylg1.png?width=108&crop=smart&auto=webp&s=0fe2be64409c1414ac4e11f707ba7606c2a2ed7e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/wia24ua75ylg1.png?width=216&crop=smart&auto=we... | ||
Vellium v0.4 — alternative simplified UI, updated writing mode and multi-char improvements | 36 | Vellium is an open-source desktop app for local LLMs built around creative writing and roleplay. The idea is visual control over your story — sliders for mood, pacing, intensity instead of manually editing system prompts. Works with Ollama, KoboldCpp, LM Studio, OpenAI, OpenRouter, or any compatible endpoint.
This upd... | 2026-02-27T01:55:58 | https://www.reddit.com/gallery/1rftlmm | Possible_Statement84 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rftlmm | false | null | t3_1rftlmm | /r/LocalLLaMA/comments/1rftlmm/vellium_v04_alternative_simplified_ui_updated/ | false | false | 36 | null | |
Local AI on Mac Pro 2019 | 1 | Anyone got any actual experience running local AI on a Mac Pro 2019? I keep seeing advice that for Macs it really should be M4 chips, but you know. Of course the guy in the Apple store will tell me that...
Seriously though. I have both a Mac Pro 2019 with up to 96GB of RAM and a Mac Mini M1 2020 with 16GB of RAM an... | 2026-02-27T01:50:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/ | sbuswell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfthhd | false | null | t3_1rfthhd | /r/LocalLLaMA/comments/1rfthhd/local_ai_on_mac_pro_2019/ | false | false | self | 1 | null |
built a 3D "Cortical Engine" that visualizes the GitHub Noosphere in real-time 🧠✨ | 0 | # What is this?
(Link to test it yourself [https://gemini.google.com/share/b5c29550b638](https://gemini.google.com/share/b5c29550b638) )
The **Cortical Engine** is an Encephalization visualization. It fetches live data from the GitHub Events API and converts those events into synaptic bursts.
The "Engine" is divided... | 2026-02-27T01:36:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rft676/built_a_3d_cortical_engine_that_visualizes_the/ | Altruistic-Trip-2749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rft676 | false | null | t3_1rft676 | /r/LocalLLaMA/comments/1rft676/built_a_3d_cortical_engine_that_visualizes_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/3zLZPAqpuh3kPgTUMeK-vgJ6skSQCNWqIm0HbDxDO-M.png?width=108&crop=smart&auto=webp&s=9be47c95f132bd41c4c50c5badf17ece622f0d86', 'width': 108}, {'height': 121, 'url': 'h... |
Can GPT-OSS-120B with MCP connect deeply into the latest XCode? | 1 | Curious if anyone has given this a shot: [https://developer.apple.com/videos/play/tech-talks/111428/](https://developer.apple.com/videos/play/tech-talks/111428/)
I might finally spring for the Strix Halo 128GB if this works well. | 2026-02-27T01:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rft0lv/can_gptoss120b_with_mcp_connect_deeply_into_the/ | BahnMe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rft0lv | false | null | t3_1rft0lv | /r/LocalLLaMA/comments/1rft0lv/can_gptoss120b_with_mcp_connect_deeply_into_the/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'MQupQ0K9HaTTQGlnYeN56h5CWWbaquGDcinkx7NDA6Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MQupQ0K9HaTTQGlnYeN56h5CWWbaquGDcinkx7NDA6Y.jpeg?width=108&crop=smart&auto=webp&s=b4c5e1a484130739f99d3c39ca5e3a604555c2c9', 'width': 108}, {'height': 121, 'url': '... |
Macbook air m4 16 gb with 256 ssd | 1 | Hi,i want a macbook air m4 16 gb with 256 ssd for ai and ml programming . Mid level to inter mid level for ai and ml programming .can macbook air m4 16gb can handle it . Later on I will buy ssd . Anyone who can guide me should I purchase it macbook air m4 16 gb | 2026-02-27T01:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rfss6x/macbook_air_m4_16_gb_with_256_ssd/ | NumerousVideo1854 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfss6x | false | null | t3_1rfss6x | /r/LocalLLaMA/comments/1rfss6x/macbook_air_m4_16_gb_with_256_ssd/ | false | false | self | 1 | null |
Recent experience with vLLM, Ollama, or LM Studio in Linux server across AMD + NVIDIA cards together? | 1 | I'm purely an NVIDIA person, but thought about possibly adding a 16 GB AMD GPU into the mix.
**💡 Question**: Is it possible to run vLLM, Ollama, or LM Studio as a Docker container, on a headless Linux server, using **both** AMD + NVIDIA GPUs?
My understanding is that this is *theoretically* possible with Vulkan, ho... | 2026-02-27T01:07:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rfsiqt/recent_experience_with_vllm_ollama_or_lm_studio/ | x8code | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfsiqt | false | null | t3_1rfsiqt | /r/LocalLLaMA/comments/1rfsiqt/recent_experience_with_vllm_ollama_or_lm_studio/ | false | false | self | 1 | null |
RX 7900 XTX 24g ROCm 7.2 with R1 32B AWQ vs GPTQ - 40 tps | 2 | I noticed that this model only has 5 downloads, but I'm getting 40 tps on average, and much better performance than the 14 tps than I was getting from an AWQ variant (inarikami/DeepSeek-R1-Distill-Qwen-32B-AWQ). I'm kind of wondering why it has so few downloads, and if there's something better out there for my setup.
... | 2026-02-27T00:36:42 | https://www.reddit.com/r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/ | JackTheif52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfrsr6 | false | null | t3_1rfrsr6 | /r/LocalLLaMA/comments/1rfrsr6/rx_7900_xtx_24g_rocm_72_with_r1_32b_awq_vs_gptq/ | false | false | 2 | null | |
Best small chatbot model with vision? | 1 | I'm hoping to find a small (8b or less) model that talks like an actual person instead of an assistant and has vision so I can share pictures with it. Ideally, I'd like it to be creative enough to make its own lore and come up with its own interests. I understand I may not be able to get all of this in a model this sma... | 2026-02-27T00:33:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rfrqg4/best_small_chatbot_model_with_vision/ | PeachyPlnk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfrqg4 | false | null | t3_1rfrqg4 | /r/LocalLLaMA/comments/1rfrqg4/best_small_chatbot_model_with_vision/ | false | false | self | 1 | null |
Academic Plagiarism and the Misappropriation of the Talos-O Architecture | 0 | STATUS: Public Record / Immutable Audit
AUTHOR: Christopher J. Roudabush (Cognitive Systems Architect & Mechanic)
DATE: February 26, 2026
1. The Incident
It has come to my attention that the core systems architecture, philosophical framework (Neo Techne), and highly idiosyncratic nomenclature of the open-source Talo... | 2026-02-27T00:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rfrnus/academic_plagiarism_and_the_misappropriation_of/ | No-Present-6793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfrnus | false | null | t3_1rfrnus | /r/LocalLLaMA/comments/1rfrnus/academic_plagiarism_and_the_misappropriation_of/ | false | false | self | 0 | null |
Stepfun-3.5-Flash kv Cache openrouter | 1 | Openrouter shows it caches but there is no cache tokens being recorded at all, has anyone else seen this? | 2026-02-27T00:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rfr254/stepfun35flash_kv_cache_openrouter/ | Temporary-Tourist-10 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfr254 | false | null | t3_1rfr254 | /r/LocalLLaMA/comments/1rfr254/stepfun35flash_kv_cache_openrouter/ | false | false | self | 1 | null |
local llm on claude code runs slow, any suggestion? | 2 | I am running qwen3.5-35b-a3b (4 bit quant, 19GB size) on a 48gb vram PC using LM Studio. It gives \~80 tokens/second when just inferencing. But when I try to use this server to provide backend for my claude code (using claude code router).
Usually I am just asking claude code to analyze my code repository and give so... | 2026-02-26T23:55:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rfqsyw/local_llm_on_claude_code_runs_slow_any_suggestion/ | Historical-Crazy1831 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfqsyw | false | null | t3_1rfqsyw | /r/LocalLLaMA/comments/1rfqsyw/local_llm_on_claude_code_runs_slow_any_suggestion/ | false | false | self | 2 | null |
Need advice on AI coding tools and subscriptions for a hobbyist vibe coder/homelab DevOps enthusiast | 0 | Hey everyone, I’m a hobbyist vibe coder and do DevOps stuff in my homelab. For most of my work I use ChatGPT Plus, and that’s something I’ll definitely keep.
I also have a 20€ Cursor IDE subscription which I really like, but it barely lasts the month and paying 60€ just for Cursor feels too expensive for me right now.... | 2026-02-26T23:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rfqnmh/need_advice_on_ai_coding_tools_and_subscriptions/ | madisonSquare2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfqnmh | false | null | t3_1rfqnmh | /r/LocalLLaMA/comments/1rfqnmh/need_advice_on_ai_coding_tools_and_subscriptions/ | false | false | self | 0 | null |
Taalas-like Custom Ai speech synths? | 2 | Ok so Taalas made chips with llama3 8b hardwired, with possibilities for loras finetuned.
You know what can use fast inference and can be done on the same scale as Llama3-8B? Vibevoice TTS 7b!
Think about it, hardware speech synths existed before, and if executed right they would be killer. Especially if you can hook t... | 2026-02-26T23:42:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rfqh9q/taalaslike_custom_ai_speech_synths/ | Silver-Champion-4846 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfqh9q | false | null | t3_1rfqh9q | /r/LocalLLaMA/comments/1rfqh9q/taalaslike_custom_ai_speech_synths/ | false | false | self | 2 | null |
Built a custom JNI bridge to run Qwen3 natively on Android | 2 | Every native Android LLM library I tried is broken for Qwen3. React Native wrappers work but wrong stack for native Kotlin.
So I wrote a JNI bridge and it only depends on llama.h.
Three Qwen3 tiers, all Q4\_K\_M:
|Model|Min RAM|Pixel 7|
|:-|:-|:-|
|Qwen3-0.6B|3 GB|\~15 tok/s|
|Qwen3-1.7B|4 GB|\~8 tok/s|
|Qwen3-4B|... | 2026-02-26T23:35:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rfqblk/built_a_custom_jni_bridge_to_run_qwen3_natively/ | chinkichameli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfqblk | false | null | t3_1rfqblk | /r/LocalLLaMA/comments/1rfqblk/built_a_custom_jni_bridge_to_run_qwen3_natively/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ogVFwZQT7N0Kx1SgYRyW3uxYrk8AAtCIXsHoso5zYZs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ogVFwZQT7N0Kx1SgYRyW3uxYrk8AAtCIXsHoso5zYZs.png?width=108&crop=smart&auto=webp&s=34db3af4d9c4ce284065955eb0e7f3c9cfe64d69', 'width': 108}, {'height': 108, 'url': 'h... |
Real talk: How many of you are actually using Gemma 3 27B or some variant in production? And what's stopping you? | 0 | I've now seen this repeated pattern with pre-seed to seed/series A founders building AI products:
**Month 1-6:** "We're spending $50-200/month on OpenAI. No big deal."
**Month 7 onwards (only for those who hit product-market fit):** "Wait, our bill just jumped to $6K/month, then $10K and increasing. Revenue is at $3K... | 2026-02-26T23:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rfq8k2/real_talk_how_many_of_you_are_actually_using/ | Dramatic_Strain7370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfq8k2 | false | null | t3_1rfq8k2 | /r/LocalLLaMA/comments/1rfq8k2/real_talk_how_many_of_you_are_actually_using/ | false | false | self | 0 | null |
Local embedding models for short text retrieval ? | 2 | For those running nomic-embed-text locally — how much accuracy difference do you see vs OpenAI text-embedding-3-small for retrieval tasks?
or vs qwen which have up to 4096 dims (but is larger).
I'm using embeddings for semantic search to match user queries against database schema descriptions.
768-dim nomic vs 153... | 2026-02-26T23:30:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rfq7o4/local_embedding_models_for_short_text_retrieval/ | claykos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfq7o4 | false | null | t3_1rfq7o4 | /r/LocalLLaMA/comments/1rfq7o4/local_embedding_models_for_short_text_retrieval/ | false | false | self | 2 | null |
coding. | 0 | Hey newbie here.
Anybody here self-hosting coding LLMs? Pointers? | 2026-02-26T23:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rfpwje/coding/ | Ok-Secret5233 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfpwje | false | null | t3_1rfpwje | /r/LocalLLaMA/comments/1rfpwje/coding/ | false | false | self | 0 | null |
Don't Run OpenClaw on Your Main Machine | 0 | This post walks through setting it up on an isolated cloud VM, so none of your personal credentials or data are in the blast radius. | 2026-02-26T23:08:38 | https://blog.skypilot.co/openclaw-on-skypilot/ | cuda-oom | blog.skypilot.co | 1970-01-01T00:00:00 | 0 | {} | 1rfpnct | false | null | t3_1rfpnct | /r/LocalLLaMA/comments/1rfpnct/dont_run_openclaw_on_your_main_machine/ | false | false | 0 | {'enabled': False, 'images': [{'id': '0WtWG4hxoK3yzqdOhfse8ahkE_Qh9_qMP7loHOs1tXo', 'resolutions': [{'height': 43, 'url': 'https://external-preview.redd.it/0WtWG4hxoK3yzqdOhfse8ahkE_Qh9_qMP7loHOs1tXo.png?width=108&crop=smart&auto=webp&s=ae0af72ceb545590a81f1d9bc4d0ee8902d0723d', 'width': 108}, {'height': 87, 'url': 'ht... | |
Are GPU prices rising sharply all of a sudden? | 0 | I see tons of shops increasing prices for blackwell GPUs by a lot, between 15-20%. RTX Pro 6000 now costing at least $1200 more. Will this likely be permanent as long as RAM prices stay high? Is this the moment to buy if you still find one at former prices? | 2026-02-26T22:54:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rfp9sd/are_gpu_prices_rising_sharply_all_of_a_sudden/ | Prestigious_Roof_902 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfp9sd | false | null | t3_1rfp9sd | /r/LocalLLaMA/comments/1rfp9sd/are_gpu_prices_rising_sharply_all_of_a_sudden/ | false | false | self | 0 | null |
why is openclaw even this popular? | 436 | recently i haven't been following up on the latest AI dramas and just came back from a vacation. Did some looking around and found out that OpenClaw just blew up, looked into it but I didn't find anything significantly special. It just seems to be like a wrapper that has a huge amounts of pre-programmed function calls ... | 2026-02-26T22:50:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/ | Crazyscientist1024 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfp6bk | false | null | t3_1rfp6bk | /r/LocalLLaMA/comments/1rfp6bk/why_is_openclaw_even_this_popular/ | false | false | self | 436 | null |
I built a Trello board for managing projects with my Openclaw agent | 0 | I built a Trello board for managing projects with my Openclaw agent.
I use Trello for managing all my projects and wanted to use it with Openclaw. But, I’m super paranoid about security with Openclaw and didn't want to hook it up to my Trello account (client projects and info I don’t want compromised on there).
So,... | 2026-02-26T21:55:36 | https://www.reddit.com/gallery/1rfnqyn | TraumaMcTerror | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rfnqyn | false | null | t3_1rfnqyn | /r/LocalLLaMA/comments/1rfnqyn/i_built_a_trello_board_for_managing_projects_with/ | false | false | 0 | null | |
What Asr ( voice) does deepseek app use? | 2 | as the title, suggests I was trying deepseek app, and voice to text is pretty accurate and fast , I was wondering what they use.
does anyone have any idea or hints to what it might be | 2026-02-26T21:47:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rfnjh5/what_asr_voice_does_deepseek_app_use/ | dragoon4890_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfnjh5 | false | null | t3_1rfnjh5 | /r/LocalLLaMA/comments/1rfnjh5/what_asr_voice_does_deepseek_app_use/ | false | false | self | 2 | null |
Run A Local LLM Across Multiple Computers! using raylight Vllm | 1 |
Where can i find guide about which GPU Is compatible with Vllm and raylight | 2026-02-26T21:45:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rfnhrf/run_a_local_llm_across_multiple_computers_using/ | Quiet_Dasy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfnhrf | false | null | t3_1rfnhrf | /r/LocalLLaMA/comments/1rfnhrf/run_a_local_llm_across_multiple_computers_using/ | false | false | self | 1 | null |
TokenRouter: transparent OpenAI compatible proxy with WebUI | 0 | I've just released TokenRouter, a project I’ve been working on that makes managing and routing LLM API requests much smoother. If you're like me, you use many providers both cloud based but also strewn around internal infrastructure.
Now you can consolidate all of it to one OpenAI compatible endpoint and use whatever ... | 2026-02-26T21:42:10 | https://github.com/lkarlslund/tokenrouter | lkarlslund | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rfned0 | false | null | t3_1rfned0 | /r/LocalLLaMA/comments/1rfned0/tokenrouter_transparent_openai_compatible_proxy/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'yFNiGd1O27-nmRdT976vvub2XuQ3AdXr9wfYLYuP5_M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFNiGd1O27-nmRdT976vvub2XuQ3AdXr9wfYLYuP5_M.png?width=108&crop=smart&auto=webp&s=40ba819b2d4373d4e4e25c949298bf73c70fe9df', 'width': 108}, {'height': 108, 'url': 'h... | |
[Research / New Model Concept] Beyond Transformers: BEMNA – A Bio-Electronic 3D Point-Cloud Architecture (100M-12B Scaling) | 0 | **\[Research / New Model Concept\]**
I'm back again with more crazy ideas born from 1am sparks. This one wraps together concepts from biology (slime molds), physical phenomena (lightning), and my own idea of "3d" LLM logic clouds.
I’m open-sourcing my concept for "**BEMNA" (Bio-Electronic Morphological Neural Archite... | 2026-02-26T21:34:30 | https://www.reddit.com/r/LocalLLaMA/comments/1rfn71t/research_new_model_concept_beyond_transformers/ | Polymorphic-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfn71t | false | null | t3_1rfn71t | /r/LocalLLaMA/comments/1rfn71t/research_new_model_concept_beyond_transformers/ | false | false | self | 0 | null |
New Upcoming Ubuntu 26.04 LTS Will be Optimized for Local AI | 265 | Some interesting new developments:
- Out-of-the-box NVIDIA CUDA and AMD ROCm drivers that are auto-selected for your particular hardware https://youtu.be/0CYm-KCw7yY&t=316
- Inference Snaps - ready-to-use sandboxed AI inference containers (reminds a bit the Mozilla llamafile project):
-- Feature presentation: https:... | 2026-02-26T21:26:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/ | mtomas7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfmzfp | false | null | t3_1rfmzfp | /r/LocalLLaMA/comments/1rfmzfp/new_upcoming_ubuntu_2604_lts_will_be_optimized/ | false | false | self | 265 | null |
a self-hosted observability dashboard for AI agents — one flag to enable, zero external dependencies | 0 | We've been building [https://github.com/definableai/definable.ai](https://github.com/definableai/definable.ai), an open-source Python framework built on fastapi for building AI agents. One thing that kept burning us during development: **you can't debug what you can't see**. Most agent frameworks treat observability as... | 2026-02-26T20:59:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rfm8s3/a_selfhosted_observability_dashboard_for_ai/ | anandesh-sharma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfm8s3 | false | null | t3_1rfm8s3 | /r/LocalLLaMA/comments/1rfm8s3/a_selfhosted_observability_dashboard_for_ai/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'e4lRqInYX9EuA3yddY4mFpAXBUBASi4Ro8nsX5GHR54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e4lRqInYX9EuA3yddY4mFpAXBUBASi4Ro8nsX5GHR54.png?width=108&crop=smart&auto=webp&s=03a9deec0950fc0a63801d10aaf0a59854a58bd1', 'width': 108}, {'height': 108, 'url': 'h... |
February was 🔥 In just 26 days 👇🏻( but where is deepseek) | 7 | Feb 26 – Nano Banana 2
new version, faster and sharper
Feb 25 Perplexity Computer
AI that can actually use a computer to get things done
Feb 24 – Claude Cowrok
new Claude release focused on better workflow and execution
Feb 21 - Grok 4.20
Grok update with stronger reasoning and real time edge
Feb 19 – Gemini 3... | 2026-02-26T20:55:07 | keb_37 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfm4ad | false | null | t3_1rfm4ad | /r/LocalLLaMA/comments/1rfm4ad/february_was_in_just_26_days_but_where_is_deepseek/ | false | false | 7 | {'enabled': True, 'images': [{'id': '55ts1pqzjwlg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/55ts1pqzjwlg1.jpeg?width=108&crop=smart&auto=webp&s=803f10afa4578a80e660fdfec360522760e9ea49', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/55ts1pqzjwlg1.jpeg?width=216&crop=smart&auto=w... | ||
Browser tool for repeatable local LLM benchmarks (prompt-by-prompt, multi-run, with TTFT / tok/s) | 0 | I’ve been using this browser tool for comparing local models in a more structured way than just doing random one-off prompts in a chat UI:
[**https://benchmarks.ocno.ai/**](https://benchmarks.ocno.ai/)
I originally put it together for my own testing, but I figured some people here might find it useful too.
What I li... | 2026-02-26T20:18:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rfl4rv/browser_tool_for_repeatable_local_llm_benchmarks/ | No-Cucumber4564 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfl4rv | false | null | t3_1rfl4rv | /r/LocalLLaMA/comments/1rfl4rv/browser_tool_for_repeatable_local_llm_benchmarks/ | false | false | self | 0 | null |
Possible to prune a LLM to keep only Typescript and shell and english language? | 0 | For small memory usage and speed, is possible to prune Qwen 3.5 for web dev only? or customize a LLM for your needs? | 2026-02-26T20:14:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rfl19m/possible_to_prune_a_llm_to_keep_only_typescript/ | Glad-Audience9131 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfl19m | false | null | t3_1rfl19m | /r/LocalLLaMA/comments/1rfl19m/possible_to_prune_a_llm_to_keep_only_typescript/ | false | false | self | 0 | null |
How to offload the MLP part of a dense model to CPU, like a MoE model? | 2 | I'm using LM Studio. For MoE models, there's an option to offload the MoE part to CPU/RAM and only keep the attention part in GPU, but this option is not available for dense models.
I have only one poor 8GB GPU, but I think with this feature, it should be possible for me to run Qwen3.5-27B locally. | 2026-02-26T20:12:50 | https://www.reddit.com/r/LocalLLaMA/comments/1rfkzqm/how_to_offload_the_mlp_part_of_a_dense_model_to/ | eXl5eQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfkzqm | false | null | t3_1rfkzqm | /r/LocalLLaMA/comments/1rfkzqm/how_to_offload_the_mlp_part_of_a_dense_model_to/ | false | false | self | 2 | null |
How local OpenClaw is a huge game changer | 0 | So I have recently installed openclaw with local LLMs successfully
The things is for what use cases now ?
So I thought of automating some mundane tasks
Like reading the news at the morning
So I asked openclaw to create a daily briefing and send it to me in the morning with
Weather
News in topics and regions t... | 2026-02-26T20:07:35 | https://www.reddit.com/r/LocalLLaMA/comments/1rfkuly/how_local_openclaw_is_a_huge_game_changer/ | Potential_Block4598 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfkuly | false | null | t3_1rfkuly | /r/LocalLLaMA/comments/1rfkuly/how_local_openclaw_is_a_huge_game_changer/ | false | false | self | 0 | null |
New Apple-Native AI Agent | 0 | [Start message with all the AI Agent's Info](https://preview.redd.it/27i8drpkawlg1.png?width=2094&format=png&auto=webp&s=e02def2f5671c2cca16aadf0b755a9564a96f88d)
Heres a new AI Agent, **Apple Flow**, a small local daemon for macOS that routes your existing Apple workflow into AI coding agents like Codex / Claude /... | 2026-02-26T20:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rfktkk/new_applenative_ai_agent/ | littlehakr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfktkk | false | null | t3_1rfktkk | /r/LocalLLaMA/comments/1rfktkk/new_applenative_ai_agent/ | false | false | 0 | null | |
HEOSPHOROS THE GREAT | 0 | Most ML engineers know LightGBM struggles with class imbalance on fraud data.
The obvious fix is setting scale_pos_weight manually.
Here's what actually happens:
1. Default LightGBM: 0.4908
2. Manual fix (scale_pos_weight=577.9): 0.4474 — made it worse
3. Heosphoros optimized: 0.8519 (+73.57%)
The manual fix overco... | 2026-02-26T19:57:08 | https://www.reddit.com/gallery/1rfkk4u | quantum_chosen | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rfkk4u | false | null | t3_1rfkk4u | /r/LocalLLaMA/comments/1rfkk4u/heosphoros_the_great/ | false | false | 0 | null | |
My self hosted OpenClaw agent can now reach with a real phone call | 0 | I wanted my OpenClaw agent to be able to reach me in a way I can't just ignore when something important comes up. Chat messages are easy to miss, so I built a skill that lets it call me on the phone.
I just tell it "call me when X happens" and go about my day, whether I'm at the gym or on a walk or whatever, and when ... | 2026-02-26T19:56:14 | https://v.redd.it/wmcns0ph9wlg1 | marcos_pereira | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfkj8m | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/wmcns0ph9wlg1/DASHPlaylist.mpd?a=1774727794%2COGM2NGU4NDFmYTAzMzBiZjk4ZDUzNzViM2UyYmNhNjExM2IzZGFjYjcyYjhkZDk3YTc3ZTRlNTJlYzIzOWNmNA%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/wmcns0ph9wlg1/CMAF_1080.mp4?source=fallback', 'h... | t3_1rfkj8m | /r/LocalLLaMA/comments/1rfkj8m/my_self_hosted_openclaw_agent_can_now_reach_with/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a2w1ZWs3bGg5d2xnMUh0zhHrdHBngmfRa3Z_5KWq41XqJl4jNEN-rbs3mTJu', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/a2w1ZWs3bGg5d2xnMUh0zhHrdHBngmfRa3Z_5KWq41XqJl4jNEN-rbs3mTJu.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=8ceab6b73d949ef8db36f3fd21c6771e37b... | |
I built a cheat sheet mapping out the RAG Embedding Architecture (The Semantic Blueprint) | 1 | [removed] | 2026-02-26T19:55:44 | Admirable_Grade4027 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfkiru | false | null | t3_1rfkiru | /r/LocalLLaMA/comments/1rfkiru/i_built_a_cheat_sheet_mapping_out_the_rag/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'qcovcc729wlg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/qcovcc729wlg1.png?width=108&crop=smart&auto=webp&s=a1481a143e1c328d703ff5ac105a8e38fdc7e0a2', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/qcovcc729wlg1.png?width=216&crop=smart&auto=web... | ||
pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval | 22 | Perplexity just dropped pplx-embed, a family of state-of-the-art text embedding models optimized for real-world, web-scale retrieval tasks—like semantic search and RAG systems. Built on diffusion-pretrained Qwen3 backbones with multi-stage contrastive learning, they come in two flavors: pplx-embed-v1 for independent te... | 2026-02-26T19:50:14 | https://research.perplexity.ai/articles/pplx-embed-state-of-the-art-embedding-models-for-web-scale-retrieval | 1-800-methdyke | research.perplexity.ai | 1970-01-01T00:00:00 | 0 | {} | 1rfkdjk | false | null | t3_1rfkdjk | /r/LocalLLaMA/comments/1rfkdjk/pplxembed_stateoftheart_embedding_models_for/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'G_yAWns7zWEzkW5qKmxSzTcWkvgIExuYRKjxIq4OvYw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/G_yAWns7zWEzkW5qKmxSzTcWkvgIExuYRKjxIq4OvYw.png?width=108&crop=smart&auto=webp&s=6680c12ad88a020d59e7cc9833d55903a74976ed', 'width': 108}, {'height': 121, 'url': 'h... | |
how are people actually building those mini ai devices with a screen? | 3 | so i keep seeing people post these little ai voice devices — like a small screen with a mic, running some kind of assistant. they look sick and i genuinely want to build one.
quick background on me — i build apps using ai tools and prompts (vibe coding basically), so the software side isn’t the scary part. it’s the ha... | 2026-02-26T19:33:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rfjxbe/how_are_people_actually_building_those_mini_ai/ | clawdesk_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfjxbe | false | null | t3_1rfjxbe | /r/LocalLLaMA/comments/1rfjxbe/how_are_people_actually_building_those_mini_ai/ | false | false | self | 3 | null |
top 10 trending models on HF | 214 | any conclusions? ;) | 2026-02-26T19:24:56 | jacek2023 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfjp6v | false | null | t3_1rfjp6v | /r/LocalLLaMA/comments/1rfjp6v/top_10_trending_models_on_hf/ | false | false | 214 | {'enabled': True, 'images': [{'id': '5rqv8z2s3wlg1', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/5rqv8z2s3wlg1.png?width=108&crop=smart&auto=webp&s=920303b8050d1b5a1b926795faba209d3c206d8c', 'width': 108}, {'height': 93, 'url': 'https://preview.redd.it/5rqv8z2s3wlg1.png?width=216&crop=smart&auto=webp... | ||
Reverse CAPTCHA: We tested whether invisible Unicode characters can hijack LLM agents: 8,308 outputs across 5 models | 43 | We tested whether LLMs follow instructions hidden in invisible Unicode characters embedded in normal-looking text. Two encoding schemes (zero-width binary and Unicode Tags), 5 models (GPT-5.2, GPT-4o-mini, Claude Opus 4, Sonnet 4, Haiku 4.5), 8,308 graded outputs.
Key findings:
* **Tool access is the primary amplifie... | 2026-02-26T19:20:39 | thecanonicalmg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfjkzu | false | null | t3_1rfjkzu | /r/LocalLLaMA/comments/1rfjkzu/reverse_captcha_we_tested_whether_invisible/ | false | false | 43 | {'enabled': True, 'images': [{'id': 'p119kiqx2wlg1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/p119kiqx2wlg1.png?width=108&crop=smart&auto=webp&s=c528fa968f37cf02f26597b4f00b4786ab3ddb5d', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/p119kiqx2wlg1.png?width=216&crop=smart&auto=web... | ||
qwen3.5-122b What agent do you use with it? | 8 | I am running tests for agentic coding, and this is the first time I see a model I can host locally that can actually replace subscriptions, I don't use claude as it is too expensive and it is just stupid you are time limited in the Pro version, the Max is just too much for me.
I am using Junie (from PyCharm/Jetbrains)... | 2026-02-26T19:16:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rfjgx2/qwen35122b_what_agent_do_you_use_with_it/ | robertpro01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfjgx2 | false | null | t3_1rfjgx2 | /r/LocalLLaMA/comments/1rfjgx2/qwen35122b_what_agent_do_you_use_with_it/ | false | false | 8 | null | |
Which model would you recommend for my use case below? | 3 | Some of friends that are less technically than I, have started getting into local LLMs and keep asking me to set something up that just runs on their own computers.
I already put together a simple .exe file (promise it’s not a virus lol) that they can double-click. It fires up everything automatically so Llama 3.2 3B ... | 2026-02-26T18:37:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rfidx8/which_model_would_you_recommend_for_my_use_case/ | Puzzleheaded_Gap6638 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rfidx8 | false | null | t3_1rfidx8 | /r/LocalLLaMA/comments/1rfidx8/which_model_would_you_recommend_for_my_use_case/ | false | false | self | 3 | null |
LFM2-24B-A2B is crazy fast on Strix Halo | 62 | I've never seen a 24B model fly like this. It's almost 2x faster than gpt-oss-20b! Ran it with ROCm using Lemonade v9.4.0.
Really hope to see some cool uses for this model! Anyone tried it out for their tasks yet? | 2026-02-26T18:36:29 | https://v.redd.it/ug0nkgqhuvlg1 | jfowers_amd | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rfid0q | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/ug0nkgqhuvlg1/DASHPlaylist.mpd?a=1774723010%2CYjhhNjJjZGQ5MzdlMzQ5YjQwMTg4MTgzM2MzMGEzY2QwZjYwZDNhZDk3MzBkNWU2NTRmMjc2NjAxOThlMjRjMg%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/ug0nkgqhuvlg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rfid0q | /r/LocalLLaMA/comments/1rfid0q/lfm224ba2b_is_crazy_fast_on_strix_halo/ | false | false | 62 | {'enabled': False, 'images': [{'id': 'dWNtOGxscWh1dmxnMQ2ajgZe7-xnsTznsg_JwClc9VRl8_wH2AeJmBTcUqOP', 'resolutions': [{'height': 87, 'url': 'https://external-preview.redd.it/dWNtOGxscWh1dmxnMQ2ajgZe7-xnsTznsg_JwClc9VRl8_wH2AeJmBTcUqOP.png?width=108&crop=smart&format=pjpg&auto=webp&s=a13253f97681b6a8243345845daab95a8fb9e... |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.