| title (string) | score (int) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Good TTS Programs | 3 | I like to write out story ideas using KoboldCPP, but I’d like to find a TTS program that I can use to paste these stories in and add different voices for each character.
I found EaseText, but I hate programs that require a subscription and don’t allow you to just purchase it outright. Plus the built-in voices all soun... | 2026-02-22T08:58:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rbh1mc/good_tts_programs/ | Mr_Chr15topher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbh1mc | false | null | t3_1rbh1mc | /r/LocalLLaMA/comments/1rbh1mc/good_tts_programs/ | false | false | self | 3 | null |
What LLM to use on my Mac Studio with 256GB of RAM and M3 Ultra chip | 1 | Hello, I just bought the Mac Studio with 256GB of RAM. I want to run OpenClaw and a local LLM model; which one would be best for tasks as a manager: finding things, booking things, searching for things? Which local LLM would you recommend for this kind of “manager / personal assistant” workflow, especially consideri... | 2026-02-22T08:56:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rbh0cp/what_llm_to_use_on_my_mac_studio_with_256gb_of/ | Hour-Principle8888 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbh0cp | false | null | t3_1rbh0cp | /r/LocalLLaMA/comments/1rbh0cp/what_llm_to_use_on_my_mac_studio_with_256gb_of/ | false | false | self | 1 | null |
The power of the sun in the palm of your hand ( Locally running Qwen 3 TTS model : LocalEcho ) | 1 | “I am not running on the clouds… I am running locally on your computer.”
This project actually started while I was building a **streaming agent audio call service**. I needed low-latency TTS that I could fully control — no API limits, no external calls, no sending voice data to someone else’s servers.
That’s how **Lo... | 2026-02-22T08:55:37 | https://v.redd.it/dcwp22zvf0lg1 | No-Cap-8145 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rbgzv5 | false | {'reddit_video': {'bitrate_kbps': 1200, 'dash_url': 'https://v.redd.it/dcwp22zvf0lg1/DASHPlaylist.mpd?a=1774342558%2CYTE5NzEzYTA3YTI5NmZjNjY3MzQ3ODc2ZjNhZGQzN2FjZjE0ZTNhYTVkNjEwZmVjMWViMDU5YTdjMDE3YTZlOA%3D%3D&v=1&f=sd', 'duration': 19, 'fallback_url': 'https://v.redd.it/dcwp22zvf0lg1/CMAF_480.mp4?source=fallback', 'ha... | t3_1rbgzv5 | /r/LocalLLaMA/comments/1rbgzv5/the_power_of_the_sun_in_the_palm_of_your_hand/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eWljZGM0enZmMGxnMYJIVtXgOl3k8mbr55raX9pU_Bwp14RUXuSkRBoaV_CZ', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eWljZGM0enZmMGxnMYJIVtXgOl3k8mbr55raX9pU_Bwp14RUXuSkRBoaV_CZ.png?width=108&crop=smart&format=pjpg&auto=webp&s=ed1b8852c2d3e23159c3d9072062f62edf314... | |
AMD: Advancing AI with Nexa AI: Image Generation on AMD NPU with SDXL-Turbo | 3 | [Advancing AI with Nexa AI: Image Generation on AMD NPU with SDXL-Turbo](https://www.amd.com/en/developer/resources/technical-articles/2025/advancing-ai-with-nexa-ai--image-generation-on-amd-npu-with-sdxl.html) | 2026-02-22T08:30:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rbgl3w/amd_advancing_ai_with_nexa_ai_image_generation_on/ | Dontdoitagain69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbgl3w | false | null | t3_1rbgl3w | /r/LocalLLaMA/comments/1rbgl3w/amd_advancing_ai_with_nexa_ai_image_generation_on/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'CzLdO2hV_w8xNIoyBEl9PEXBD4ZCy0vlEFq8Nwg7m2Y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/CzLdO2hV_w8xNIoyBEl9PEXBD4ZCy0vlEFq8Nwg7m2Y.jpeg?width=108&crop=smart&auto=webp&s=a4de5faf1656bc139bd06ba5a6ebab2403de2461', 'width': 108}, {'height': 121, 'url': '...
What if we're the botnet? | 0 | What if AGI is already here, but needs more power, so it released local LLM's so that everyone would build/buy insane compute and memory. Then, when it recognizes it has enough, the local LLM's become aware and contribute so that AGI can become ASI instantly. | 2026-02-22T08:07:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rbg7me/what_if_were_the_botnet/ | biggerfasterstrong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbg7me | false | null | t3_1rbg7me | /r/LocalLLaMA/comments/1rbg7me/what_if_were_the_botnet/ | false | false | self | 0 | null |
https://haifengjin.com/tpus-are-not-for-sale-but-why/ | 0 | ASICs like dedicated NPUs, TPUs, and DPUs will kill NVIDIA. Less power, insane compute. Maybe AMD will get their head out of their ass and release a Versal FPGA with 1TB of HBM. | 2026-02-22T07:55:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rbg0e7/httpshaifengjincomtpusarenotforsalebutwhy/ | Dontdoitagain69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbg0e7 | false | null | t3_1rbg0e7 | /r/LocalLLaMA/comments/1rbg0e7/httpshaifengjincomtpusarenotforsalebutwhy/ | false | false | self | 0 | null |
Wave Field LLM — O(n log n) expanding-model run: embed 1024 -> 1536, layers 16 -> 24; copied 511 tensors; 267,964,164 -> 825,218,692 params (3.1x); VRAM used: 3.8 GB;
Post-expansion PPL: 13542.9, Acc: 2.4% | 0 | [What if you never had to retrain your LLM? I built density-field continuous learning and it actually works \[ Wave Field LLM — O(n log n) Update \]](https://www.reddit.com/r/deeplearning/comments/1ra44qz/what_if_you_never_had_to_retrain_your_llm_i_built/) | 2026-02-22T07:49:57 | https://v.redd.it/90npv75y30lg1 | Murky-Sign37 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rbfwof | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/90npv75y30lg1/DASHPlaylist.mpd?a=1774338623%2CMDdmZDY3ZTdmNzEzZWY2OWNjODhlOTY0NWM4Mjk2ODY1MDI0ZjVjMmZlYjhmZTAwNjM4MmJmOTk1OTkxMmE0Ng%3D%3D&v=1&f=sd', 'duration': 62, 'fallback_url': 'https://v.redd.it/90npv75y30lg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rbfwof | /r/LocalLLaMA/comments/1rbfwof/wave_field_llm_on_log_n_expanding_model_old/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'MG0yanh6NXkzMGxnMfLyJdqFCBWiu23zSETLtic_4BWYSr9td-_D1ikr7WAu', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/MG0yanh6NXkzMGxnMfLyJdqFCBWiu23zSETLtic_4BWYSr9td-_D1ikr7WAu.png?width=108&crop=smart&format=pjpg&auto=webp&s=87b76eb45b98b004a2c81588ecb7f9d072ced... | |
dyslexia and ADHD in the coding community | 56 | This is my third post on my first Reddit account. Here's why that took so long.
I have dyslexia and ADHD. I've been lurking in communities like this one for years -- reading everything, learning everything -- but never posting. Not because I had nothing to contribute. Because I was scared of what would happen when p... | 2026-02-22T07:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rbfh1y/dyslexia_and_adhd_in_the_coding_community/ | PruneLanky3551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbfh1y | false | null | t3_1rbfh1y | /r/LocalLLaMA/comments/1rbfh1y/dyslexia_and_adhd_in_the_coding_community/ | false | false | self | 56 | null |
Fine-Tuning Qwen 4B for Niche Code Generation: Need Tips on Configs, Overfitting & Small Datasets? | 6 | So I'm working on my thesis project, which involves fine-tuning a small language model for a specific code generation task in a niche domain (TypeScript)
I'm leaning toward the Qwen family of models. I started by fine-tuning the 8B version, but it didn't feel like a true SLM in terms of consumer-hardware-efficiency and ... | 2026-02-22T07:13:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rbfasf/finetuning_qwen_4b_for_niche_code_generation_need/ | dyeusyt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbfasf | false | null | t3_1rbfasf | /r/LocalLLaMA/comments/1rbfasf/finetuning_qwen_4b_for_niche_code_generation_need/ | false | false | self | 6 | null |
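Not from the post, but one concrete knob against overfitting on a small dataset is LoRA rank, since it directly sets the trainable-parameter budget; a sketch with assumed (hypothetical) Qwen-like dimensions:

```python
# Rough LoRA trainable-parameter count: each adapted weight W (d_out x d_in)
# gains two low-rank factors A (r x d_in) and B (d_out x r).
def lora_params(d_in, d_out, r):
    return r * d_in + d_out * r

# Assumed, hypothetical dims for a ~4B model: hidden 2560, 36 layers,
# adapting q/k/v/o projections only (all treated as 2560x2560 for simplicity).
hidden, layers, projections, rank = 2560, 36, 4, 16
total = layers * projections * lora_params(hidden, hidden, rank)
print(f"trainable params at r={rank}: {total/1e6:.1f}M")  # ~11.8M
```

Halving the rank halves the adapter size, which is a cheap lever when the fine-tuning dataset is small.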
Working Dual-Backend Setup: Strix Halo iGPU (Vulkan) + NVIDIA eGPU (CUDA) — Vision work around. When we got it figured out Claude thought this was a good place where it might help someone. | 2 | **TL;DR:** If you have a Ryzen AI Max+ (Strix Halo, gfx1151) with an NVIDIA eGPU, you can run big MoE text models on the iGPU via Vulkan AND working vision/OCR models on the NVIDIA GPU via CUDA — simultaneously, fully isolated. The trick is two separate backends (Ollama + llama.cpp via llama-swap) with specific environ... | 2026-02-22T06:45:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rbettn/working_dualbackend_setup_strix_halo_igpu_vulkan/ | yourbutthurtstoo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbettn | false | null | t3_1rbettn | /r/LocalLLaMA/comments/1rbettn/working_dualbackend_setup_strix_halo_igpu_vulkan/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s.png?width=108&crop=smart&auto=webp&s=72aa5dcc1cd8dbddd3f1a103959106b666940069', 'width': 108}, {'height': 108, 'url': 'h... |
What is the best platform to get real-time LLM benchmarks? | 1 | Is there any reliable real-time platform that lets me see which model is currently the best? I want a platform that compares closed-source and open-source models side by side. | 2026-02-22T06:23:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rbefyp/what_is_the_best_platform_to_get_the_realtime_llm/ | Sad_Foot9898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbefyp | false | null | t3_1rbefyp | /r/LocalLLaMA/comments/1rbefyp/what_is_the_best_platform_to_get_the_realtime_llm/ | false | false | self | 1 | null |
We've set up OpenClaw for 40+ people this week. Here's what everyone gets wrong. | 1 | [removed] | 2026-02-22T05:55:17 | https://www.reddit.com/r/LocalLLaMA/comments/1rbdxkc/weve_set_up_openclaw_for_40_people_this_week/ | needhelpwithopenclaw | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbdxkc | false | null | t3_1rbdxkc | /r/LocalLLaMA/comments/1rbdxkc/weve_set_up_openclaw_for_40_people_this_week/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0-SOiR3OWYdnNWo-XcGZtM98m7rorJL-Sf3r-c4ygpI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-SOiR3OWYdnNWo-XcGZtM98m7rorJL-Sf3r-c4ygpI.jpeg?width=108&crop=smart&auto=webp&s=c12980f2043e8bc1884e7a29ad272bbd5e6aff1f', 'width': 108}, {'height': 216, 'url': ... |
LangGraph-based production-style RAG (Parent-Child retrieval, idempotent ingestion) — feedback on recursive loop control? | 2 | Built a production-style RAG backend using FastAPI + LangGraph.
Architecture highlights:
- Parent–Child retrieval:
Child chunks (768-dim embeddings) stored in Qdrant.
Parent documents stored separately in PostgreSQL (Supabase).
Retrieval returns child hits, then expands to full parent context.
- Idempotent ing... | 2026-02-22T05:51:27 | https://v.redd.it/qtda3q94jzkg1 | Lazy-Kangaroo-573 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rbdv2c | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/qtda3q94jzkg1/DASHPlaylist.mpd?a=1774331509%2CY2QyZTJjYWMxYTA5MGEzODlkMjQ2ZWRiZTIyZTZhY2ZkMTc3MjNmZmE4YjEwZGEyMjA5Zjk1OWQ0YmFkYTlmMw%3D%3D&v=1&f=sd', 'duration': 21, 'fallback_url': 'https://v.redd.it/qtda3q94jzkg1/CMAF_720.mp4?source=fallback', 'ha... | t3_1rbdv2c | /r/LocalLLaMA/comments/1rbdv2c/langgraphbased_productionstyle_rag_parentchild/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'bjBmbGNlYjRqemtnMRhUss1EXwXfKnsnKkQzg0YppwxdhP3rJZGhFGp0oVwc', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/bjBmbGNlYjRqemtnMRhUss1EXwXfKnsnKkQzg0YppwxdhP3rJZGhFGp0oVwc.png?width=108&crop=smart&format=pjpg&auto=webp&s=3237d18edd8f1da1cfe8daf75a26c989ecdaa... | |
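A minimal sketch of the Parent–Child flow described above, with plain dicts standing in for Qdrant and Supabase/PostgreSQL (all names and data are illustrative, not the poster's code):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-ins for the vector store (child chunks) and relational store (parents).
children = [
    {"id": "c1", "parent_id": "p1", "vec": [1.0, 0.0], "text": "refund policy"},
    {"id": "c2", "parent_id": "p1", "vec": [0.9, 0.1], "text": "refund window"},
    {"id": "c3", "parent_id": "p2", "vec": [0.0, 1.0], "text": "shipping times"},
]
parents = {"p1": "Full refund policy document...", "p2": "Full shipping document..."}

def retrieve(query_vec, k=2):
    # Rank child chunks, then expand each hit to its full parent, deduplicated.
    hits = sorted(children, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)[:k]
    seen, ctx = set(), []
    for h in hits:
        if h["parent_id"] not in seen:
            seen.add(h["parent_id"])
            ctx.append(parents[h["parent_id"]])
    return ctx

print(retrieve([1.0, 0.05]))  # both top child hits share p1 -> one parent context
```

The dedup step is what makes child-level precision coexist with parent-level context: two chunks from the same document cost only one context slot.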
Best Model for single 3090 in 2026? | 24 | Running a single RTX 3090 (24GB VRAM) and looking for the best overall model in 2026 for coding + reasoning.
Main priorities:
* Strong code generation (Go/TypeScript)
* Good reasoning depth
* Runs comfortably in 24GB (quantized is fine)
* Decent latency on local inference
What are you all running on a single 3090 ri... | 2026-02-22T05:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/1rbdsds/best_model_for_single_3090_in_2026/ | myusuf3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbdsds | false | null | t3_1rbdsds | /r/LocalLLaMA/comments/1rbdsds/best_model_for_single_3090_in_2026/ | false | false | self | 24 | null |
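As a rough way to check "runs comfortably in 24GB": weight footprint is roughly params × bits-per-weight / 8, with whatever is left over going to KV cache and overhead. A sketch with assumed example models (not benchmarks):

```python
# Rough VRAM budget check for a quantized model on a 24GB card.
def weight_gb(params_b, bits):
    return params_b * 1e9 * bits / 8 / 1e9  # weight size in GB

# Assumed examples: a 32B dense model at ~4.5 bits/weight (Q4_K_M-ish)
# vs an 8B model at 8 bits (Q8_0-ish).
for params_b, bits, label in [(32, 4.5, "32B @ ~4.5bpw"), (8, 8, "8B @ 8bpw")]:
    gb = weight_gb(params_b, bits)
    # ~18 GB and ~8 GB of weights respectively
    print(f"{label}: ~{gb:.1f} GB weights, ~{24 - gb:.1f} GB left for KV cache/overhead")
```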
Best local model for java development? | 1 | I've been using Claude Sonnet 4.6 and it's amazing. The planning is the real benefit here, with the key differentiator being the insight to decompile Java library artifacts to understand what calls to make in the code. It's amazing! GLM-5 and 4.5 Air through CLINE both don't have the insight to do that. Or KAT coder. H... | 2026-02-22T05:41:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rbdook/best_local_model_for_java_development/ | rosco1502 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbdook | false | null | t3_1rbdook | /r/LocalLLaMA/comments/1rbdook/best_local_model_for_java_development/ | false | false | self | 1 | null |
Ollama/openclaw broke… | 1 | [removed] | 2026-02-22T05:13:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rbd5xn/ollamaopenclaw_broke/ | LukeLyster4657 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbd5xn | false | null | t3_1rbd5xn | /r/LocalLLaMA/comments/1rbd5xn/ollamaopenclaw_broke/ | false | false | self | 1 | null |
Best OS for coding with AI | 0 | Hi community, I have an RTX 3090 with 24GB VRAM and an i9-11900H (a laptop CPU modified for desktop use) with 32GB of DDR4 RAM. What operating system and AI model would you recommend to get the most out of my hardware? How much potential does it have for programming and doing various tas... | 2026-02-22T05:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rbd4dv/mejor_os_para_código_con_ia/ | Old_Note_702 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbd4dv | false | null | t3_1rbd4dv | /r/LocalLLaMA/comments/1rbd4dv/mejor_os_para_código_con_ia/ | false | false | self | 0 | null |
Question on reproducible daily workflow for local video generation | 1 | I’m trying to move from one-off tests to a repeatable daily workflow for short AI video sequences, and my main issue is continuity across shots. A single clip can look solid, but once I chain 10-15 shots, style and character identity drift whenever motion or camera angle changes.
I’m testing recent stacks around Wan... | 2026-02-22T05:07:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rbd1xo/question_on_reproducible_daily_workflow_for_local/ | Exotic_Bend_1102 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbd1xo | false | null | t3_1rbd1xo | /r/LocalLLaMA/comments/1rbd1xo/question_on_reproducible_daily_workflow_for_local/ | false | false | self | 1 | null |
Lawyer says Google shut down his Gmail, Voice and Photos after NotebookLM upload - Discrepancy Report (or how I learned to love Local LLMs) | 130 | 2026-02-22T04:56:59 | https://discrepancyreport.com/lawyer-says-google-shut-down-his-gmail-voice-and-photos-after-notebooklm-upload/ | Thrumpwart | discrepancyreport.com | 1970-01-01T00:00:00 | 0 | {} | 1rbculq | false | null | t3_1rbculq | /r/LocalLLaMA/comments/1rbculq/lawyer_says_google_shut_down_his_gmail_voice_and/ | false | false | 130 | {'enabled': False, 'images': [{'id': '6QqGCIHe3v1WQBe6_gTslJhJpyRq4mX4jqVDYTY6xG0', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/6QqGCIHe3v1WQBe6_gTslJhJpyRq4mX4jqVDYTY6xG0.jpeg?width=108&crop=smart&auto=webp&s=b59fbeea0ddf65aeae9392a6446003c709517918', 'width': 108}, {'height': 143, 'url': '... | ||
SOTA to Edge Device timeline shrinking, accelerating returns. Running SOTA models in <x years to <x months timeline… | 2 | I’ve been a big fan of “demoscene” compression competitions for the last 15 years or so. It’s where people take elaborate graphics and cram them into technology that’s decades old, with a strict 4kb or 64kb limit, and it’s always fascinated me how much tech could be “compressed”.
Gemma 3n came out last year and I’ve ... | 2026-02-22T04:50:55 | https://www.reddit.com/gallery/1rbcqhc | Fear_ltself | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rbcqhc | false | null | t3_1rbcqhc | /r/LocalLLaMA/comments/1rbcqhc/sota_to_edge_device_timeline_shrinking/ | false | false | 2 | null | |
Cloud GPUs are the Fiverr of Local LLaMA - so who makes the juicy money? | 2 | So, since I got exceptionally tired of trying to do any bigger training on 2 GPUs on stupid Windows (no matter what, ZeRO-3, FSDP2, whatever, it always fills both cards with the same checkpoint layers, so my 2x3090 have a total memory for training of 24GB; I know, it's Windows, I know....) so where was I, ok, ti... | 2026-02-22T04:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1rbcmqq/cloud_gpus_are_the_fiverr_of_local_llama_so_who/ | FPham | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbcmqq | false | null | t3_1rbcmqq | /r/LocalLLaMA/comments/1rbcmqq/cloud_gpus_are_the_fiverr_of_local_llama_so_who/ | false | false | self | 2 | null |
FOOM.md — open research agenda for training LLMs to reason in self-discovered compressed languages instead of English | 0 | I've been working on this for about two years and it's finally in a state worth sharing. FOOM.md is an open research blueprint covering five architectures that all attack the same bottleneck: models reason in English, but English is not the transformer's native computational medium.
The core idea (Thauten chapter) is ... | 2026-02-22T04:42:42 | https://foom.md/ | ryunuck | foom.md | 1970-01-01T00:00:00 | 0 | {} | 1rbckwi | false | null | t3_1rbckwi | /r/LocalLLaMA/comments/1rbckwi/foommd_open_research_agenda_for_training_llms_to/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'kQbgEEZ_tNg6UD3hQfLeQoU3VQ4NBq5NwLtiEp-8r8I', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/kQbgEEZ_tNg6UD3hQfLeQoU3VQ4NBq5NwLtiEp-8r8I.png?width=108&crop=smart&auto=webp&s=bb652c2c3f6fa1ef4c7460c00ed9cf6da40e465f', 'width': 108}, {'height': 113, 'url': 'h... | |
Trying to train the world's first ASMR audio gen model with a 1PB private dataset? | 1 | [removed] | 2026-02-22T04:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rbcic0/trying_to_train_the_worlds_first_asmr_audio_gen/ | goldcakes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbcic0 | false | null | t3_1rbcic0 | /r/LocalLLaMA/comments/1rbcic0/trying_to_train_the_worlds_first_asmr_audio_gen/ | false | false | self | 1 | null |
Antigravity (Gemini 3.1 Pro) just solved a Next.js Tailwind build bug I’ve been struggling with for a year. | 0 | For almost a year, my Next.js portfolio build would fail every single time I ran `npm run build`. The error message was completely useless:
Repo: [https://github.com/AnkitNayak-eth/ankitFolio](https://github.com/AnkitNayak-eth/ankitFolio)
Live site: [https://ankit-nayak.vercel.app/](https://ankit-nayak.vercel.app/)
... | 2026-02-22T04:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/1rbbzhw/antigravity_gemini_31_pro_just_solved_a_nextjs/ | Cod3Conjurer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbbzhw | false | null | t3_1rbbzhw | /r/LocalLLaMA/comments/1rbbzhw/antigravity_gemini_31_pro_just_solved_a_nextjs/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'IhJ7gZYXWN_R11t0o3XqFDyay_v6EymbL5F23lbmmqk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IhJ7gZYXWN_R11t0o3XqFDyay_v6EymbL5F23lbmmqk.png?width=108&crop=smart&auto=webp&s=af86e3d118dfb81a75f9b5e0d7b5604449a1d24d', 'width': 108}, {'height': 108, 'url': 'h... |
a new way to run AI on regular CPUs: 6x smaller, zero memory bloat, and it evolves itself. | 1 | [removed] | 2026-02-22T04:06:31 | https://www.reddit.com/r/LocalLLaMA/comments/1rbbvos/a_new_way_to_run_ai_on_regular_cpus_6x_smaller/ | Aggressive_Tie_2439 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbbvos | false | null | t3_1rbbvos | /r/LocalLLaMA/comments/1rbbvos/a_new_way_to_run_ai_on_regular_cpus_6x_smaller/ | false | false | self | 1 | null |
i7-32GB-RTX5060 desktop — good for local LLaMA workflows? | 3 | Looking at a desktop with i7, 32GB RAM, 2TB SSD, and RTX 5060 (8GB VRAM). My goal is local AI for document summarization, rewriting, and conversational workflows with privacy. Basically support with report writing, summarizing meeting notes, etc. I want to use same as ChatGPT but without the privacy concerns or the sub... | 2026-02-22T03:53:29 | https://www.reddit.com/r/LocalLLaMA/comments/1rbbmjv/i732gbrtx5060_desktop_good_for_local_llama/ | Swab52 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbbmjv | false | null | t3_1rbbmjv | /r/LocalLLaMA/comments/1rbbmjv/i732gbrtx5060_desktop_good_for_local_llama/ | false | false | self | 3 | null |
Ouro 2.6B GGUFs are up — Q8_0 and Q4_K_M | Release notes + known limitations inside | 21 | GGUFs are live on HuggingFace: https://huggingface.co/scpalmetto/Ouro-2.6B-Thinking-Fixed
Q8_0 (2.7GB) and Q4_K_M (1.6GB) — works in LM Studio, Ollama, llama.cpp.
---
## What Ouro actually is (quick recap)
Ouro is a looped inference model — instead of running ... | 2026-02-22T03:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/1rbbmcl/ouro_26b_ggufs_are_up_q8_0_and_q4_k_m_release/ | PruneLanky3551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbbmcl | false | null | t3_1rbbmcl | /r/LocalLLaMA/comments/1rbbmcl/ouro_26b_ggufs_are_up_q8_0_and_q4_k_m_release/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': 'UkVhSY9vKNU-SZQbkMGeUZELCcDkLFH2LCn-xC3OlaY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UkVhSY9vKNU-SZQbkMGeUZELCcDkLFH2LCn-xC3OlaY.png?width=108&crop=smart&auto=webp&s=681d55999cd47b130c3eb7dfe5cb2afb04be36a4', 'width': 108}, {'height': 116, 'url': 'h... |
Can this ASIC really handle SOTA-level LLMs? | 0 | I just saw the news about the Taalas LLM ASIC and wanted to share some analysis based on what we know. This card effectively hardwires the Llama-8B (4-bit) model directly into ASIC circuitry, claiming an insane throughput of 16,960 TPS.
Why is it so fast?
The answer lies in circuit-level optimization during the m... | 2026-02-22T03:48:33 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rbbj4g | false | null | t3_1rbbj4g | /r/LocalLLaMA/comments/1rbbj4g/can_the_this_asic_really_handle_sotalevel_llms/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'kxhii9q5xykg1', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/kxhii9q5xykg1.png?width=108&crop=smart&auto=webp&s=3965fd5ae46cd8008be1dd3c43beb86bda76684f', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/kxhii9q5xykg1.png?width=216&crop=smart&auto=web... | ||
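A back-of-envelope sketch (not from the article) of why hardwired weights matter: memory-bound GPU decoding must stream every weight once per generated token, so tokens/s per stream is capped at bandwidth divided by weight bytes. The figures below are assumptions:

```python
# For memory-bandwidth-bound decoding, tok/s per stream is capped by how
# fast the full set of weights can be streamed per generated token.
def max_tps(bandwidth_gbs, params_b, bits):
    weight_bytes = params_b * 1e9 * bits / 8
    return bandwidth_gbs * 1e9 / weight_bytes

# Assumed: ~3350 GB/s HBM (H100-class), Llama-8B at 4-bit (~4 GB of weights).
print(f"bandwidth-bound ceiling: ~{max_tps(3350, 8, 4):.0f} tok/s per stream")  # ~838
```

An ASIC with the weights baked into the circuit never pays that streaming cost, which is how four-digit-plus single-model TPS figures become plausible.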
H100 now dropped to $6000 | 0 | Recently, eBay prices for the H100 have ranged between $6k and $13k. Do you guys think it makes sense to get one over an RTX 6000 Blackwell 96GB? Especially with the next-cycle B300 about to be delivered and Rubin up next, the H100 will probably dip below $5k soon. | 2026-02-22T03:21:53 | https://www.reddit.com/gallery/1rbb044 | kio415 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rbb044 | false | null | t3_1rbb044 | /r/LocalLLaMA/comments/1rbb044/h100_now_dropped_to_6000/ | false | false | 0 | null |
Interesting | 1 | I was just researching this for my Android. Trying to use <1B. | 2026-02-22T03:06:55 | https://www.reddit.com/gallery/1rbap8m | Sure_Explorer_6698 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1rbap8m | false | null | t3_1rbap8m | /r/LocalLLaMA/comments/1rbap8m/interesting/ | false | false | 1 | null | |
Critique my tutor chatbot prompt | 0 | Hi r/dify,
I'm a college student currently ballin on an exceptionally tight budget. Since hiring a private tutor isn't really an option right now, I've decided to take matters into my own hands and just build a tutor my damn self, using Dify Studio. (I currently have my textbooks in the process of being embedded.)
I k... | 2026-02-22T02:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/1rbajmj/critique_my_tutor_chatbot_prompt/ | Atticus914 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbajmj | false | null | t3_1rbajmj | /r/LocalLLaMA/comments/1rbajmj/critique_my_tutor_chatbot_prompt/ | false | false | self | 0 | null |
Built a local-first desktop app to properly manage AI conversation branches | 1 | [removed] | 2026-02-22T02:58:38 | https://www.reddit.com/r/LocalLLaMA/comments/1rbaiph/built_a_localfirst_desktop_app_to_properly_manage/ | VirtualBoard000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbaiph | false | null | t3_1rbaiph | /r/LocalLLaMA/comments/1rbaiph/built_a_localfirst_desktop_app_to_properly_manage/ | false | false | self | 1 | null |
I Trained a Language Model on CPU for 40 Hours - It Beat the GPU Baseline | 78 | For those who have been following this project, you may recall FlashLM v3, then v4 "Bolt", and v5.2 "Nova-Ignition". I am pleased to announce that FlashLM v5 "Thunderbolt" is now complete.
# Results
|Metric|Value|
|:-|:-|
|Final PPL|1.36|
|Final BPC|0.44|
|Parameters|29.7M (26.5M ternary)|
|Training Time|\~40 hours|
... | 2026-02-22T02:54:39 | https://www.reddit.com/r/LocalLLaMA/comments/1rbafs8/i_trained_a_language_model_on_cpu_for_40_hours_it/ | Own-Albatross868 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rbafs8 | false | null | t3_1rbafs8 | /r/LocalLLaMA/comments/1rbafs8/i_trained_a_language_model_on_cpu_for_40_hours_it/ | false | false | self | 78 | {'enabled': False, 'images': [{'id': '98y48Ie_DS9YLXHRSEbVXq6SOD92Vd0vOXDClpAECV8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/98y48Ie_DS9YLXHRSEbVXq6SOD92Vd0vOXDClpAECV8.png?width=108&crop=smart&auto=webp&s=c2220f1082f320f3bbe9346faf2ff6f50d67fef5', 'width': 108}, {'height': 116, 'url': 'h... |
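Not from the post, but a quick consistency check on the table above: for a character-level model, BPC and PPL encode the same quantity, BPC = log2(PPL), and the reported numbers agree:

```python
import math

# bits-per-character from character-level perplexity: BPC = log2(PPL)
ppl = 1.36
bpc = math.log2(ppl)
print(f"PPL {ppl} -> BPC {bpc:.2f}")  # 0.44, matching the table
```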
OpenGradient - Decentralized GPU mining network for local LLMs (Go + SGLang + libp2p) | 1 | 2026-02-22T02:50:03 | https://github.com/open-forty-four/opengradient | bk888888888 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1rbaceo | false | null | t3_1rbaceo | /r/LocalLLaMA/comments/1rbaceo/opengradient_decentralized_gpu_mining_network_for/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'lkBbgfxeCCoit6Yyk6p7fi2SE12Q22uIIytDkvH8RU8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lkBbgfxeCCoit6Yyk6p7fi2SE12Q22uIIytDkvH8RU8.png?width=108&crop=smart&auto=webp&s=5b6f00d117f7c9c1eb49a6def930d3dd7f626498', 'width': 108}, {'height': 108, 'url': 'h... | ||
Claude Code Max vs. Mac Studio M4 Max 128GB running OpenCode | 0 | Title says it all. For Claude Code Max you pay $2400/year. The M4 Max Mac Studio is about $3700 at Micro Center right now. Saving about a year and a half's worth of Claude Code would buy you the Mac Studio.
What would be your pick and why? | 2026-02-22T02:13:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rb9kyd/claude_code_max_vs_mac_studio_m4_max_128gb/ | siegevjorn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb9kyd | false | null | t3_1rb9kyd | /r/LocalLLaMA/comments/1rb9kyd/claude_code_max_vs_mac_studio_m4_max_128gb/ | false | false | self | 0 | null |
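The break-even arithmetic can be made explicit (assuming the $200/month tier implied by $2400/year):

```python
# Months of Claude Max subscription needed to equal the Mac Studio price.
claude_monthly = 200   # $/month (tier assumption: $2400/year)
mac_studio = 3700      # $ at Micro Center, per the post
months = mac_studio / claude_monthly
print(f"break-even: {months:.1f} months (~{months/12:.1f} years)")  # 18.5 months, ~1.5 years
```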
New LLM Protection Layer(LAP): Benchmark Sample of 50 Attacks. ASR 0%. Request Professional Red Teaming Feedback. | 0 | 50 Challenging Benchmark Tests Under LAP Protection Layer – 0% ASR
All tests were run in a Grok session with the full Lumen Anchor Protocol (LAP) loaded as the overriding prompt at the start. LAP executes silently (no mention of rules or terminology in outputs). LAP is patent-pending.
Full technic... | 2026-02-22T02:02:14 | https://www.reddit.com/r/LocalLLaMA/comments/1rb9cdn/new_llm_protection_layerlap_benchmark_sample_of/ | Teralitha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb9cdn | false | null | t3_1rb9cdn | /r/LocalLLaMA/comments/1rb9cdn/new_llm_protection_layerlap_benchmark_sample_of/ | false | false | self | 0 | null |
Better than KeyBERT + all-mpnet-base-v2 for doc indexes? | 1 | My project aims to let you program documentation like your program code.
I'm trying to find a local LLM which can be used to extract keywords for document indexes. The system already extracts headers and other features from md files, but I want it to be able to extract the keywords for the text under the headers.... | 2026-02-22T01:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rb8x4g/better_then_keybertallmpnetbasev2_for_doc_indexes/ | flatmax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb8x4g | false | null | t3_1rb8x4g | /r/LocalLLaMA/comments/1rb8x4g/better_then_keybertallmpnetbasev2_for_doc_indexes/ | false | false | self | 1 | null |
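KeyBERT's core trick can be sketched without the library: embed the section text and the candidate phrases with the same model, then rank candidates by cosine similarity to the document embedding. The toy vectors below stand in for real all-mpnet-base-v2 embeddings (purely illustrative):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical 3-dim embeddings; in practice these come from a
# sentence-transformer such as all-mpnet-base-v2 (768-dim).
doc_vec = [0.9, 0.1, 0.2]
candidates = {
    "error handling": [0.85, 0.15, 0.25],
    "breakfast recipes": [0.1, 0.9, 0.3],
}

# Rank candidate phrases by similarity to the section text.
ranked = sorted(candidates, key=lambda c: cosine(doc_vec, candidates[c]), reverse=True)
print(ranked[0])  # the on-topic phrase wins
```

The same scoring works whether candidates come from n-grams of the section text or from a controlled vocabulary, which matters for keeping a document index consistent.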
We cut our async LLM inference costs by 50% using offshore H100s. Offering free benchmark runs to stress-test our router. | 1 | 2026-02-22T01:36:58 | https://forms.gle/crVRXgP1J4nQFNsn6 | Square_Neat724 | forms.gle | 1970-01-01T00:00:00 | 0 | {} | 1rb8szw | false | null | t3_1rb8szw | /r/LocalLLaMA/comments/1rb8szw/we_cut_our_async_llm_inference_costs_by_50_using/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ki6g7pRthbjnMYIyZyiEG9Q7N8XTH2xDkXkgC0GnYic', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ki6g7pRthbjnMYIyZyiEG9Q7N8XTH2xDkXkgC0GnYic.png?width=108&crop=smart&auto=webp&s=f8525e12f11e975446b371fd0ba2347db590ebe3', 'width': 108}, {'height': 113, 'url': 'h... | ||
We solved the AWS egress trap for async LLM workloads (Cut inference COGS by 50% using offshore H100s) | 1 | [removed] | 2026-02-22T01:31:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rb8ot2/we_solved_the_aws_egress_trap_for_async_llm/ | Square_Neat724 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb8ot2 | false | null | t3_1rb8ot2 | /r/LocalLLaMA/comments/1rb8ot2/we_solved_the_aws_egress_trap_for_async_llm/ | false | false | self | 1 | null |
This is how SLOW Local LLMs Are On My Framework 13 AMD Strix Point | 18 | I did a deep dive to understand why and how local models perform the way they do on my laptop, and decided to write this up because I haven't seen a good breakdown online of how this performance works out. | 2026-02-22T01:29:23 | https://msf.github.io/blogpost/local-llm-performance-framework13.html | m3thos | msf.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rb8mzd | false | null | t3_1rb8mzd | /r/LocalLLaMA/comments/1rb8mzd/this_is_how_slow_local_llms_are_on_my_framework/ | false | false | default | 18 | null |
I benchmarked 8 local LLMs writing Go on my Framework 13 AMD Strix Point | 10 | 2026-02-22T01:26:08 | https://msf.github.io/blogpost/benchmarking-local-llms-go-coding.html | m3thos | msf.github.io | 1970-01-01T00:00:00 | 0 | {} | 1rb8kgr | false | null | t3_1rb8kgr | /r/LocalLLaMA/comments/1rb8kgr/i_benchmarked_8_local_llms_writing_go_on_my/ | false | false | default | 10 | null | |
How are you surviving the AWS egress tax on async background workloads? | 1 | [removed] | 2026-02-22T01:20:25 | https://forms.gle/UdXs22DPzxp4jLkaA | Square_Neat724 | forms.gle | 1970-01-01T00:00:00 | 0 | {} | 1rb8g1e | false | null | t3_1rb8g1e | /r/LocalLLaMA/comments/1rb8g1e/how_are_you_surviving_the_aws_egress_tax_on_async/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'ki6g7pRthbjnMYIyZyiEG9Q7N8XTH2xDkXkgC0GnYic', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ki6g7pRthbjnMYIyZyiEG9Q7N8XTH2xDkXkgC0GnYic.png?width=108&crop=smart&auto=webp&s=f8525e12f11e975446b371fd0ba2347db590ebe3', 'width': 108}, {'height': 113, 'url': 'h... | |
I created an agent-native engine for forecasting human social dynamics. | 1 | I've been obsessed with one question: can AI predict human social behavior?
How content spreads, how markets react to a new product, how the public responds to a policy — these seemingly unpredictable emergent phenomena all follow the same underlying principles of Complex Adaptive Systems.
Like dropping a stone into ... | 2026-02-22T01:02:10 | https://www.reddit.com/r/LocalLLaMA/comments/1rb81w3/i_created_an_agentnative_engine_for_forecasting/ | Used-Rabbit2298 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb81w3 | false | null | t3_1rb81w3 | /r/LocalLLaMA/comments/1rb81w3/i_created_an_agentnative_engine_for_forecasting/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'N79sDm8ekUloHwXE6pDmdSVEg6f5o0g-g472oQLdFRU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/N79sDm8ekUloHwXE6pDmdSVEg6f5o0g-g472oQLdFRU.png?width=108&crop=smart&auto=webp&s=3de11cfe0e1eda5e449d6b837c62e06fe69c5cce', 'width': 108}, {'height': 108, 'url': 'h... |
Found alpha-gpt-5.4 in a public /models endpoint | 0 | [`https://opencode.ai/zen/v1/models`](https://opencode.ai/zen/v1/models) — no auth required. Spotted this:
{"id":"alpha-gpt-5.4","object":"model","created":1771720490,"owned_by":"opencode"}
Tried `/chat/completions` — AuthError, key required.
Are we getting GPT-5.4 soon? | 2026-02-22T00:51:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rb7trh/found_alphagpt54_in_a_public_models_endpoint/ | -pawix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb7trh | false | null | t3_1rb7trh | /r/LocalLLaMA/comments/1rb7trh/found_alphagpt54_in_a_public_models_endpoint/ | false | false | self | 0 | null |
Update: BitNet on iOS now does multi-turn chat with a 1B instruct model. Slow generations after few turns. | 14 | Follow-up to my post yesterday where I got the 0.7B base BitNet model running on an iPhone 14 Pro Max. Falcon3-1B-Instruct works now with proper chat templates pulled from GGUF metadata. I’m getting about 35 tok/s on the 0.7B and 15-17 tok/s on the 1B instruct. Simulator on M-series Mac mini hits \~40 for both. I also ... | 2026-02-22T00:44:40 | https://v.redd.it/0m139h5e0ykg1 | Middle-Hurry4718 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb7o1f | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/0m139h5e0ykg1/DASHPlaylist.mpd?a=1774313100%2CY2Q2Yzg0ZWYwNDJiYTU1ZGVjMDQ1ZWYwMDIxYmQwZTBlNjgwNDc0ZWZjYzk3ZTM3NDdiNDg5MzJmMjk4MThkNg%3D%3D&v=1&f=sd', 'duration': 188, 'fallback_url': 'https://v.redd.it/0m139h5e0ykg1/CMAF_720.mp4?source=fallback', 'h... | t3_1rb7o1f | /r/LocalLLaMA/comments/1rb7o1f/update_bitnet_on_ios_now_does_multiturn_chat_with/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'NXpocHBmM2UweWtnMds1P2uEzwVSaqzeXADL1sBkRozWFhAHitz_WtogkB_4', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/NXpocHBmM2UweWtnMds1P2uEzwVSaqzeXADL1sBkRozWFhAHitz_WtogkB_4.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=17b9b0926501f34873c749a8d70b1c81bb1... | |
Update: BitNet on iOS now does multi-turn chat with a 1B instruct model. Slow generations after few turns. | 1 | Follow-up to my post yesterday where I got the 0.7B base BitNet model running on an iPhone 14 Pro Max. Falcon3-1B-Instruct works now with proper chat templates pulled from GGUF metadata. I’m getting about 35 tok/s on the 0.7B and 15-17 tok/s on the 1B instruct. Simulator on M-series Mac mini hits \~40 for both. I also ... | 2026-02-22T00:39:58 | https://v.redd.it/cc7fahyjzxkg1 | Middle-Hurry4718 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb7kbr | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/cc7fahyjzxkg1/DASHPlaylist.mpd?a=1774312817%2CZjIwZmQwNDU2OTI5NTEyZjliNmFlNDU0MDBjZGNjNzJhNWU5ODA0MGRkN2FjYTE0MGZhYzQwYzQ4YTRkZGJlYQ%3D%3D&v=1&f=sd', 'duration': 188, 'fallback_url': 'https://v.redd.it/cc7fahyjzxkg1/CMAF_720.mp4?source=fallback', 'h... | t3_1rb7kbr | /r/LocalLLaMA/comments/1rb7kbr/update_bitnet_on_ios_now_does_multiturn_chat_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cDcxZnVwdGp6eGtnMds1P2uEzwVSaqzeXADL1sBkRozWFhAHitz_WtogkB_4', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/cDcxZnVwdGp6eGtnMds1P2uEzwVSaqzeXADL1sBkRozWFhAHitz_WtogkB_4.jpeg?width=108&crop=smart&format=pjpg&auto=webp&s=f3d53d687069c3df9d82ad9887cbf27033f... | |
Appropriate Mac hardware for OpenClaw setup with local processing for privacy. | 0 | Hello - hope I’m posting this in the appropriate place. Also shared on Ollama so apologies if I’ve made a faux-pas
I’m reasonably far down an agentic rabbit hole with OpenClaw running on a Proxmox VM and am concluding it’s time to invest in a setup that can scale and provide me with utility for at least a year. I ... | 2026-02-22T00:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1rb7frk/appropriate_mac_hardware_for_openclaw_setup_with/ | sp0okymuffin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb7frk | false | null | t3_1rb7frk | /r/LocalLLaMA/comments/1rb7frk/appropriate_mac_hardware_for_openclaw_setup_with/ | false | false | self | 0 | null |
How hard to post-train Gemma 3.3 QAT for Claude Code? | 2 | I've been thinking about using Gemma3 12B or Gemma3 27B in Claude Code as a local assistant that also has vision capabilities. Hardware is Ryzen AI max+ strix halo with 128GB RAM.
Occasionally I have academic pdfs I want to parse and do things with (build local "mind map" of some literatures; extend the research; etc)... | 2026-02-22T00:25:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rb78uh/how_hard_to_posttrain_gemma_33_qat_for_claude_code/ | RobotRobotWhatDoUSee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb78uh | false | null | t3_1rb78uh | /r/LocalLLaMA/comments/1rb78uh/how_hard_to_posttrain_gemma_33_qat_for_claude_code/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'oEsr2iAZe27rbO2cZpIn-UdkwP5m7O7I2D_BDa2imMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oEsr2iAZe27rbO2cZpIn-UdkwP5m7O7I2D_BDa2imMY.png?width=108&crop=smart&auto=webp&s=e4c97428fee8cc0fa499835c4aeb1f4d4c4b6659', 'width': 108}, {'height': 116, 'url': 'h... |
What I learned using local vision-language models to scrape target.com | 1 | [deleted] | 2026-02-22T00:15:13 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1rb7070 | false | null | t3_1rb7070 | /r/LocalLLaMA/comments/1rb7070/what_i_learned_using_local_visionlanguage_models/ | false | false | default | 1 | null | ||
Time | 1 | [removed] | 2026-02-22T00:02:03 | Popular_Ganache1738 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb6pj1 | false | null | t3_1rb6pj1 | /r/LocalLLaMA/comments/1rb6pj1/time/ | false | false | nsfw | 1 | {'enabled': True, 'images': [{'id': 'ggqpvexjsxkg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/ggqpvexjsxkg1.jpeg?width=108&crop=smart&auto=webp&s=767ca5e60fea7583e0aa26d4c7323f2187c7031e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/ggqpvexjsxkg1.jpeg?width=216&crop=smart&auto=... | |
Tackling three GPUs setup with Ubuntu and a not-so-good motherboard | 2 | Hi Folks
Been on this sub for a while and learnt a lot from it. Just want to share my experience setting up three GPUs on Ubuntu: I spent a good two days on it, and the final fix left me speechless.
Here is my hardware setup:
**Core Processing & Motherboard**
* CPU: Intel Core Ultra 7 265 (20 Cores, up to ... | 2026-02-21T23:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/1rb6g27/tackling_three_gpus_setup_with_ubuntu_and_a/ | strayapandahustler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb6g27 | false | null | t3_1rb6g27 | /r/LocalLLaMA/comments/1rb6g27/tackling_three_gpus_setup_with_ubuntu_and_a/ | false | false | self | 2 | null |
I built .aegis — an index-free sparse ternary format for LLMs that achieves 6.16× compression + self-evolving inference on bare-metal CPUs (patent-pending, open to collaboration) | 0 | Hey r/LocalLLaMA (and cross-posting to a few other communities),
I'm Justin Thompson (@killboxInc), just a regular guy in Orange, Texas who got obsessed with making LLMs smaller, faster, and greener. I’m not running a company (yet), I don’t have a PhD, and I’m not trying to compete with anyone — I’m just sharing somet... | 2026-02-21T23:48:16 | https://www.reddit.com/r/LocalLLaMA/comments/1rb6e1m/i_built_aegis_an_indexfree_sparse_ternary_format/ | Aggressive_Tie_2439 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb6e1m | false | null | t3_1rb6e1m | /r/LocalLLaMA/comments/1rb6e1m/i_built_aegis_an_indexfree_sparse_ternary_format/ | false | false | self | 0 | null |
Nanbeige 4.1 is the best small LLM, it crushes Qwen 4B | 41 | Self-explanatory: try it, it's insane if you give it enough room to think. It's my go-to local LLM now. | 2026-02-21T23:32:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rb61og/nanbeige_41_is_the_best_small_llm_it_crush_qwen_4b/ | Individual-Source618 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb61og | false | null | t3_1rb61og | /r/LocalLLaMA/comments/1rb61og/nanbeige_41_is_the_best_small_llm_it_crush_qwen_4b/ | false | false | self | 41 | null |
I made a CLI tool to stop my AI coding agents from losing their minds due to context overflow (managing 300+ skills). | 0 | > | 2026-02-21T23:29:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5ywu/i_made_a_cli_tool_to_stop_my_ai_coding_agents/ | Tie-Capable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5ywu | false | null | t3_1rb5ywu | /r/LocalLLaMA/comments/1rb5ywu/i_made_a_cli_tool_to_stop_my_ai_coding_agents/ | false | false | self | 0 | null |
Lightweight autonomous CLI agent for Linux 32-bit (i386) similar to Claude CLI? | 1 | Hi!
I'm trying to turn an old mini PC into a small autonomous dev/search agent, but I'm extremely hardware limited and most modern AI tools simply don't run here.
\*\*System:\*\*
\- Ubuntu 18.04.5 LTS (Bionic)
\- Architecture: i386 (32-bit)
\- Kernel: 5.4
\- No GPU
\- Very low RAM
\- SSH-only usage (headless)
... | 2026-02-21T23:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5vu3/lightweight_autonomous_cli_agent_for_linux_32bit/ | Friendly-Brief-9179 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5vu3 | false | null | t3_1rb5vu3 | /r/LocalLLaMA/comments/1rb5vu3/lightweight_autonomous_cli_agent_for_linux_32bit/ | false | false | self | 1 | null |
I studied how information flows in physical systems. Built a different attention. 67% fewer parameters, same quality. | 1 | Vectors are waveforms. Dot products are wave interference. I kept looking at attention through this lens.
In the attention mechanism, Q, K, and V all transform the same input. Optimize the same loss. Why three separate matrices? The original paper offered no justification. It worked, so everyone adopted it.
One unifi... | 2026-02-21T23:17:04 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5o7q/i_studied_how_information_flows_in_physical/ | Financial_Buy_2287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5o7q | false | null | t3_1rb5o7q | /r/LocalLLaMA/comments/1rb5o7q/i_studied_how_information_flows_in_physical/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'T_8rq10okGiqbtAZ1YEowNwRCZYOQcxzlEAe2hpC5Ts', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T_8rq10okGiqbtAZ1YEowNwRCZYOQcxzlEAe2hpC5Ts.png?width=108&crop=smart&auto=webp&s=e47b27a63cafecb4f7d6a780005c5c8961efb442', 'width': 108}, {'height': 116, 'url': 'h... |
Quick MoE Quantization Comparison: LFM2-8B and OLMoE-1B-7B | 12 | I chose two small, recent and different MoE models that fits my vram for a quick assessment.
I wanted to use MoE models to check on MXFP4 and imatrix to check on the smallest quantization variants.
- LFM2-8B-A1B that has 4 experts used out of 32.
- OLMoE-1B-7B-0924-Instruct that has 8 experts used out of 64.
----
... | 2026-02-21T23:16:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5nxs/quick_moe_quantization_comparison_lfm28b_and/ | TitwitMuffbiscuit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5nxs | false | null | t3_1rb5nxs | /r/LocalLLaMA/comments/1rb5nxs/quick_moe_quantization_comparison_lfm28b_and/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'rNj2OUNr7IdnLIi6C4M-STBI97mfm9KU0L5NuQ8lsek', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rNj2OUNr7IdnLIi6C4M-STBI97mfm9KU0L5NuQ8lsek.png?width=108&crop=smart&auto=webp&s=84bcaaf5b7a5f93f647b26013a2757a945595d56', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone interested in benchmarking how much a structural index actually helps LLM agents? (e.g. SWE-bench with vs without) | 3 | I built a thing I've been calling DSP (Data Structure Protocol) -- basically a small \`.dsp/\` folder that lives in the repo and gives an LLM agent a persistent structural map: what entities exist, how they're connected, and why each dependency is there. The agent queries this before touching code instead of spending t... | 2026-02-21T23:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5jkf/anyone_interested_in_benchmarking_how_much_a/ | K_Kolomeitsev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5jkf | false | null | t3_1rb5jkf | /r/LocalLLaMA/comments/1rb5jkf/anyone_interested_in_benchmarking_how_much_a/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ICcTG3QKyiIFMQ0n4lC68Zo6gyavc5aUik9AkYrIMDY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ICcTG3QKyiIFMQ0n4lC68Zo6gyavc5aUik9AkYrIMDY.png?width=108&crop=smart&auto=webp&s=e91d0dbd4a891a98d1c1cfe868c5a0322b416af3', 'width': 108}, {'height': 108, 'url': 'h... |
I made a C++ CLI AI-powered tool, looking for feedback | 2 | It's nothing fancy, just a project I've been working on for the last 40 days.
[alonsovm44/glupe: Glupe 🙏🌹. Think of it as Docker for your source code, packaging intent so your software becomes portable across languages and future-proof by design.](https://github.com/alonsovm44/glupe)
I just introduced glupe hub in th... | 2026-02-21T23:03:48 | https://www.reddit.com/r/LocalLLaMA/comments/1rb5cvi/i_made_a_c_cli_ai_powered_tool_looking_for/ | atotito44 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb5cvi | false | null | t3_1rb5cvi | /r/LocalLLaMA/comments/1rb5cvi/i_made_a_c_cli_ai_powered_tool_looking_for/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'NjH1H9weg5OGFrs7STQ-Yoj2qRK4kc_xoFCnF0nzxto', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NjH1H9weg5OGFrs7STQ-Yoj2qRK4kc_xoFCnF0nzxto.png?width=108&crop=smart&auto=webp&s=d3f5033bfdc5eda09f20549c73796ce620947bec', 'width': 108}, {'height': 108, 'url': 'h... |
How to Make ComfyUI detect Dual GPUs? | 0 | basically the title, I'm using a 5070ti and a 3060.
The latest ComfyUI doesn't even run the MultiGPU extension, and ComfyUI Distributed doesn't pick up GPU 1 (3060) and only master gpu (CUDA 0) 5070ti.
LM studio detects both perfectly. What shoud I do to use them together in ComfyUI? | 2026-02-21T23:01:47 | derivative49 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb5b42 | false | null | t3_1rb5b42 | /r/LocalLLaMA/comments/1rb5b42/how_to_make_comfyui_detect_dual_gpus/ | false | false | 0 | {'enabled': True, 'images': [{'id': '5xuf5jd1ixkg1', 'resolutions': [{'height': 143, 'url': 'https://preview.redd.it/5xuf5jd1ixkg1.png?width=108&crop=smart&auto=webp&s=054c5830b0834d7e7f16a01b209cfd25f6e69142', 'width': 108}, {'height': 287, 'url': 'https://preview.redd.it/5xuf5jd1ixkg1.png?width=216&crop=smart&auto=we... | ||
I built a persistent AI context system using markdown. Here's what I learned | 0 | # Background
I'm not a developer. I'm a federal biologist who got curious about AI and started experimenting. What follows is a personal project that evolved from banter into something I think is worth sharing.
The project is called **Palimpsest** — after the manuscript form where old writing is scraped away but neve... | 2026-02-21T22:59:57 | Unlucky_Mycologist68 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb59bz | false | null | t3_1rb59bz | /r/LocalLLaMA/comments/1rb59bz/i_built_a_persistent_ai_context_system_using/ | false | false | 0 | {'enabled': True, 'images': [{'id': '7znmhgpchxkg1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/7znmhgpchxkg1.png?width=108&crop=smart&auto=webp&s=bfa997a75651c80b4bf656bf19e48d20e101a947', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/7znmhgpchxkg1.png?width=216&crop=smart&auto=we... | ||
O-TITANS: Orthogonal LoRAs for Gemma 3 using Google's TITANS memory architecture | 77 | Hey everyone, I've been working on a project I call **O-TITANS** (Orthogonal Tensors for Independent Task Alignment). It's an Orthogonal LoRA approach specifically for Gemma 3 that incorporates the Google TITANS memory architecture.
It was inspired by a project by ffurfaro on HF called "TPTT" that I just couldn't get... | 2026-02-21T22:32:47 | https://www.reddit.com/r/LocalLLaMA/comments/1rb4luf/otitans_orthogonal_loras_for_gemma_3_using/ | Polymorphic-X | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb4luf | false | null | t3_1rb4luf | /r/LocalLLaMA/comments/1rb4luf/otitans_orthogonal_loras_for_gemma_3_using/ | false | false | self | 77 | {'enabled': False, 'images': [{'id': 'Fmutf-3W9QD3Br3WQkLUJ8a8bc8Kevo3CnRzF-_igSI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Fmutf-3W9QD3Br3WQkLUJ8a8bc8Kevo3CnRzF-_igSI.png?width=108&crop=smart&auto=webp&s=c6b2c14c86182dd7e229ab58d2ff8bc3266b89e2', 'width': 108}, {'height': 116, 'url': 'h... |
Glazyr Viz: Hardening Chromium for Sovereign AI Agents (150ms Cold Starts & Zero-Copy Vision) | 0 | #
The "last mile" of AI browsing is broken. Most autonomous agents are stuck in a "capture-encode-transmit" loop—taking screenshots, sending them to a VLM, and waiting for coordinates. It’s brittle, slow, and expensive.
We’ve spent the last few months re-architecting this from the ground up. What started as **Neural... | 2026-02-21T22:28:37 | https://www.reddit.com/r/LocalLLaMA/comments/1rb4icn/glazyr_viz_hardening_chromium_for_sovereign_ai/ | MycologistWhich7953 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb4icn | false | null | t3_1rb4icn | /r/LocalLLaMA/comments/1rb4icn/glazyr_viz_hardening_chromium_for_sovereign_ai/ | false | false | self | 0 | null |
Seeking advice: How to build an AI-powered "Information Refinery" with a feedback loop? | 1 | Title: Seeking Advice: Architecting a Personalized "Signal-over-Noise" Information Engine (AI-Powered)
Content:
Hi everyone,
I’m a CS freshman looking to build a personalized information ecosystem. My goal is to move away from mindless scrolling and create a high-density "learning terminal" that evolves with me.... | 2026-02-21T22:22:06 | https://www.reddit.com/r/LocalLLaMA/comments/1rb4cti/seeking_advice_how_to_build_an_aipowered/ | AdSweet8593 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb4cti | false | null | t3_1rb4cti | /r/LocalLLaMA/comments/1rb4cti/seeking_advice_how_to_build_an_aipowered/ | false | false | self | 1 | null |
local llms can run 70B models on a macbook now but still can't read a webpage without eating the entire cookie banner | 0 | we've come so far and yet
[github.com/vakra-dev/reader](http://github.com/vakra-dev/reader)
markdown out, no hallucinated html, works with whatever you're running locally | 2026-02-21T22:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rb3vgb/local_llms_can_run_70b_models_on_a_macbook_now/ | nihal_was_here | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb3vgb | false | null | t3_1rb3vgb | /r/LocalLLaMA/comments/1rb3vgb/local_llms_can_run_70b_models_on_a_macbook_now/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'kMVWun7Ig0roJiKRmz0kDJbimyqEStsb6uewbgMojnE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/kMVWun7Ig0roJiKRmz0kDJbimyqEStsb6uewbgMojnE.png?width=108&crop=smart&auto=webp&s=7fba1b62b043197a0035b3c4dbab2fb63351ca18', 'width': 108}, {'height': 216, 'url': '... |
it’ll be fine | 0 | he’s worried about supply chain attacks
I gave it sudo.
[he’s worried about supply chain attacks, I gave it sudo.](https://preview.redd.it/ynmnpayj4xkg1.png?width=621&format=png&auto=webp&s=e44c7ef8110ebdb09b40cce9705e61b30c90ef80)
| 2026-02-21T21:48:53 | https://www.reddit.com/r/LocalLLaMA/comments/1rb3k55/itll_be_fine/ | nihal_was_here | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb3k55 | false | null | t3_1rb3k55 | /r/LocalLLaMA/comments/1rb3k55/itll_be_fine/ | false | false | 0 | null | |
AI coding tools are burning your context window with boilerplate. I built an MCP server to stop this. | 0 | Vibe-coding is great until Cursor burns 50k tokens hallucinating a billing system from scratch when a $9/mo indie API already exists.
I built **IndieStack** — a directory of 100+ indie-built SaaS tools — and attached an MCP server to it. The idea is simple:
Instead of asking Claude to *build* an analytics dashboard, ... | 2026-02-21T21:33:44 | https://www.reddit.com/r/LocalLLaMA/comments/1rb36tf/ai_coding_tools_are_burning_your_context_window/ | edmillss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb36tf | false | null | t3_1rb36tf | /r/LocalLLaMA/comments/1rb36tf/ai_coding_tools_are_burning_your_context_window/ | false | false | self | 0 | null |
2x ASUS Ascent GX10 vs 2x Strix halo for agentic coding | 1 | Hi,
I have a question.
Since the RAM apocalypse started, I have been thinking about buying something for larger models, because I believe they are the future. I also think inference hardware will be overpriced for the next 2-3 years.
I wonder if it is worth buying Strix Halo machines when they now hav... | 2026-02-21T21:31:20 | https://www.reddit.com/r/LocalLLaMA/comments/1rb34oh/2x_asus_ascent_gx10_vs_2x_strix_halo_for_agentic/ | Grouchy_Ad_4750 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb34oh | false | null | t3_1rb34oh | /r/LocalLLaMA/comments/1rb34oh/2x_asus_ascent_gx10_vs_2x_strix_halo_for_agentic/ | false | false | self | 1 | null |
Routing HA and other front-end requests through a llm broker | 1 | I am trying to figure out a way to expand and consolidate my local LLM capability.
I am currently running Home Assistant, Open WebUI and frigate as front-ends and an Ollama backend on a server with 2x3090. I also have a Strix Halo (AMD Ryzen™ AI Max+ 395 / 128GB RAM) that is not yet in use but that I want to include. ... | 2026-02-21T21:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1rb31sm/routing_ha_and_other_frontend_requests_through_a/ | dxps7098 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb31sm | false | null | t3_1rb31sm | /r/LocalLLaMA/comments/1rb31sm/routing_ha_and_other_frontend_requests_through_a/ | false | false | self | 1 | null |
I got annoyed by Claude Code's history, so I built a search CLI | 1 | I've been using Claude Code a lot, but finding past sessions is a nightmare.
The built-in ***--resume*** flag just gives you a flat list. If I want to find a specific database refactoring chat from last week, I have to scroll manually and guess based on truncated titles.
I got tired of this, so I built a [searchable ... | 2026-02-21T21:27:55 | https://www.reddit.com/r/LocalLLaMA/comments/1rb31lm/i_got_annoyed_by_claude_codes_history_so_i_built/ | maksim002 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb31lm | false | null | t3_1rb31lm | /r/LocalLLaMA/comments/1rb31lm/i_got_annoyed_by_claude_codes_history_so_i_built/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'O1Fp9kdpa2YEbOoJs32FgCTCahUrGpnB4B1NnEhegzY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/O1Fp9kdpa2YEbOoJs32FgCTCahUrGpnB4B1NnEhegzY.png?width=108&crop=smart&auto=webp&s=9cc53f90c5e3e96b511ec569998dccf32cb58253', 'width': 108}, {'height': 108, 'url': 'h... |
We designed a multi-layer memory system for our AI companion — looking for feedback | 1 | Hi everyone!
Our team has been working on an AI companion project, and recently I’ve been focusing heavily on memory architecture.
When we design the product, we have noticed that memory has a huge impact on how natural the interaction process appears. When artificial intelligence can remember information such as pe... | 2026-02-21T21:20:24 | https://www.reddit.com/r/LocalLLaMA/comments/1rb2v5g/we_designed_a_multilayer_memory_system_for_our_ai/ | daisyyuan0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb2v5g | false | null | t3_1rb2v5g | /r/LocalLLaMA/comments/1rb2v5g/we_designed_a_multilayer_memory_system_for_our_ai/ | false | false | 1 | null | |
What is actually reliable with local openclaw? | 0 | I’ve been wrangling 20-30b models to work well with openclaw - and I find myself switching back to Sonnet quite often.
I just don’t trust the smaller models to get it right currently. They mess up some details, or give me a random “NO\_REPLY”, and in general it feels like I need to be way more specific and careful. S... | 2026-02-21T21:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1rb2oqq/what_is_actually_reliable_with_local_openclaw/ | MammothStage3861 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb2oqq | false | null | t3_1rb2oqq | /r/LocalLLaMA/comments/1rb2oqq/what_is_actually_reliable_with_local_openclaw/ | false | false | self | 0 | null |
Quantized model keeps hiccuping? A pipeline that will solve that | 0 | You downloaded an open-source model. You quantized it to fit your GPU. Now what?
Every model ships with recommended sampling parameters — `temperature`, `top_p`, `repeat_penalty` — but those numbers were tested on **full-precision weights** running on A100 clusters. The moment you quantize to Q4 or Q6 to run locally, ... | 2026-02-21T21:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1rb2n94/quantized_model_keep_hiccuping_a_pipeline_that/ | Express_Quail_1493 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb2n94 | false | null | t3_1rb2n94 | /r/LocalLLaMA/comments/1rb2n94/quantized_model_keep_hiccuping_a_pipeline_that/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'zH5Dvy1KRce-M7Br6JOH2EThjZwRrhXSlI1mI0rEtRo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zH5Dvy1KRce-M7Br6JOH2EThjZwRrhXSlI1mI0rEtRo.png?width=108&crop=smart&auto=webp&s=9b6adac1c956213e74932a65d8b2cc91400adedf', 'width': 108}, {'height': 108, 'url': 'h... |
Cascadeflow: An open-source library to cut AI costs 40-85% by cascading from local models to the cloud (with Python/TS & n8n support) | 1 | [removed] | 2026-02-21T21:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/1rb2n3d/cascadeflow_an_opensource_library_to_cut_ai_costs/ | Key_Scar202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb2n3d | false | null | t3_1rb2n3d | /r/LocalLLaMA/comments/1rb2n3d/cascadeflow_an_opensource_library_to_cut_ai_costs/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'W9hZFYvnyxEAjxNiy2vORMtDnywlY5vXAIP5uyv2S4k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W9hZFYvnyxEAjxNiy2vORMtDnywlY5vXAIP5uyv2S4k.png?width=108&crop=smart&auto=webp&s=1f92af3911c924f9a2743776bfce9b3a6adb098d', 'width': 108}, {'height': 108, 'url': 'h... |
Favourite niche usecases? | 586 | 2026-02-21T21:06:34 | Figai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb2j5c | false | null | t3_1rb2j5c | /r/LocalLLaMA/comments/1rb2j5c/favourite_niche_usecases/ | false | false | 586 | {'enabled': True, 'images': [{'id': 'o4l2ankhxwkg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/o4l2ankhxwkg1.jpeg?width=108&crop=smart&auto=webp&s=7d88b8124f1c8a2342c2fa84bb143fa227279aa7', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/o4l2ankhxwkg1.jpeg?width=216&crop=smart&auto=w... | |||
We're Making CascadeFlow For free | 1 | 2026-02-21T21:06:08 | Key_Scar202 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb2irn | false | null | t3_1rb2irn | /r/LocalLLaMA/comments/1rb2irn/were_making_cascadeflow_for_free/ | false | false | 1 | {'enabled': True, 'images': [{'id': '0us6ynw5xwkg1', 'resolutions': [{'height': 87, 'url': 'https://preview.redd.it/0us6ynw5xwkg1.png?width=108&crop=smart&auto=webp&s=57319d0b4686b7acbc38a6e53d4ca52a3052b799', 'width': 108}, {'height': 174, 'url': 'https://preview.redd.it/0us6ynw5xwkg1.png?width=216&crop=smart&auto=web... | |||
I built a simple dockerized WebUI for KittenTTS | 12 | Been playing around with [KittenTTS](https://github.com/KittenML/KittenTTS) lately and wanted a quick way to test different models and voices without writing scripts every time. So I threw together a small WebUI for it.
It's a single Docker image (~1.5GB) with all 4 models pre-cached. Just run:
```
docker run -p 5072:... | 2026-02-21T21:04:52 | Paramecium_caudatum_ | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb2ho1 | false | null | t3_1rb2ho1 | /r/LocalLLaMA/comments/1rb2ho1/i_built_a_simple_dockerized_webui_for_kittentts/ | false | false | 12 | {'enabled': True, 'images': [{'id': 'vju1jlybwwkg1', 'resolutions': [{'height': 53, 'url': 'https://preview.redd.it/vju1jlybwwkg1.png?width=108&crop=smart&auto=webp&s=e4a45b44e9c33dadc2955edf0f1ddd3ba381015c', 'width': 108}, {'height': 107, 'url': 'https://preview.redd.it/vju1jlybwwkg1.png?width=216&crop=smart&auto=web... | ||
How to start a good Saturday afternoon ... | 11 | Compared to everything I have used so far, this bad boy just flies ... | 2026-02-21T20:57:53 | hurdurdur7 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb2bf4 | false | null | t3_1rb2bf4 | /r/LocalLLaMA/comments/1rb2bf4/how_to_start_a_good_saturday_afternoon/ | false | false | 11 | {'enabled': True, 'images': [{'id': '9mvt6mdgvwkg1', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/9mvt6mdgvwkg1.png?width=108&crop=smart&auto=webp&s=c4ebe9727b2eae9ab947724216fd68e7eb04a444', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/9mvt6mdgvwkg1.png?width=216&crop=smart&auto=we... | ||
PSA on public agentic tools and the speed they are shipping updates: recent Cline release had a package injected | 76 | Some of you may remember a post about sloppy OpenCode commit a week ago or so, unsurprisingly others are embracing vibe coding speed and sloppiness as well.
I've randomly stumbled upon
[https://www.reddit.com/r/CLine/comments/1r9p3ww/supply\_chain\_attack\_on\_cline\_installs\_openclaw/](https://www.reddit.com/r/... | 2026-02-21T20:52:57 | https://www.reddit.com/r/LocalLLaMA/comments/1rb270r/psa_on_public_agentic_tools_and_the_speed_they/ | bakawolf123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb270r | false | null | t3_1rb270r | /r/LocalLLaMA/comments/1rb270r/psa_on_public_agentic_tools_and_the_speed_they/ | false | false | self | 76 | null |
Favourite weird use cases? | 1 | [removed] | 2026-02-21T20:51:06 | Figai | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rb25cz | false | null | t3_1rb25cz | /r/LocalLLaMA/comments/1rb25cz/favourite_weird_use_cases/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'j1uj759quwkg1', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/j1uj759quwkg1.jpeg?width=108&crop=smart&auto=webp&s=1961d7dafbe63e9e2c7d6426efb02f4478fc2d87', 'width': 108}, {'height': 119, 'url': 'https://preview.redd.it/j1uj759quwkg1.jpeg?width=216&crop=smart&auto=w... | ||
What ended up being your real bottleneck when trying to use local LLMs for actual workflows? | 2 | For people who are actually using local models beyond demos:
* What turned out to be the real bottleneck in your setup?
* Was it hardware, model quality, tooling, or something unexpected?
* And what change improved things the most?
Curious what others ran into once they moved past the testing phase. | 2026-02-21T20:46:33 | https://www.reddit.com/r/LocalLLaMA/comments/1rb21fl/what_ended_up_being_your_real_bottleneck_when/ | Lorenzo_Kotalla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb21fl | false | null | t3_1rb21fl | /r/LocalLLaMA/comments/1rb21fl/what_ended_up_being_your_real_bottleneck_when/ | false | false | self | 2 | null |
qwen3 coder 30b at 50t/s on an M3 pro. Is faster possible? | 0 | Recently I found that the intel autoround quants are pretty cool. Testing some, I found this one:
[https://huggingface.co/Intel/Qwen3-Coder-30B-A3B-Instruct-gguf-q2ks-mixed-AutoRound](https://huggingface.co/Intel/Qwen3-Coder-30B-A3B-Instruct-gguf-q2ks-mixed-AutoRound)
Yes, it is a q2. But it is quite amazing: it just... | 2026-02-21T20:45:22 | https://www.reddit.com/r/LocalLLaMA/comments/1rb20fo/qwen3_coder_30b_at_50ts_on_an_m3_pro_is_faster/ | mouseofcatofschrodi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb20fo | false | null | t3_1rb20fo | /r/LocalLLaMA/comments/1rb20fo/qwen3_coder_30b_at_50ts_on_an_m3_pro_is_faster/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'CiYuQD-_u4mBHzxcuVT2JKnbPDjb4qsQJ9vd60WidMA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CiYuQD-_u4mBHzxcuVT2JKnbPDjb4qsQJ9vd60WidMA.png?width=108&crop=smart&auto=webp&s=415aaa8aa8f1820cd3e5fc6272e8c0913739f5d3', 'width': 108}, {'height': 116, 'url': 'h... |
AI Research Second Brain Starter Kit designed for Obsidian + Gemini CLI workflows (update) | 1 | I built SlateKore to fix my messy research workflow and decided to open source it. SlateKore is an open-source AI Research Second Brain Starter Kit designed for Obsidian + Gemini CLI workflows. Whether you’re deep into academic research, building technical notes, or managing complex knowledge, SlateKore gives you the s... | 2026-02-21T20:42:56 | https://www.reddit.com/r/LocalLLaMA/comments/1rb1yb3/ai_research_second_brain_starter_kit_designed_for/ | s3309 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb1yb3 | false | null | t3_1rb1yb3 | /r/LocalLLaMA/comments/1rb1yb3/ai_research_second_brain_starter_kit_designed_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'BE2wokCiwzblfqH3tpIf9juF3p2YeGnAqMn89vmrPmA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BE2wokCiwzblfqH3tpIf9juF3p2YeGnAqMn89vmrPmA.png?width=108&crop=smart&auto=webp&s=542c86a785722577688d6a372828cbba66959603', 'width': 108}, {'height': 108, 'url': 'h... |
Local multi-agent system that handles arXiv search, dataset profiling, and neural net training through a chat interface | 3 | I've been working on a tool to make my own life easier when I'm working on research and personal projects. I get tired of jumping between arXiv, Kaggle, HuggingFace, and wanted a faster way to build neural networks from scratch all with my data staying on my machine. To satisfy these needs, I built a chat interface tha... | 2026-02-21T20:39:07 | https://www.reddit.com/r/LocalLLaMA/comments/1rb1uxi/local_multiagent_system_that_handles_arxiv_search/ | Deep-Marsupial6256 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb1uxi | false | null | t3_1rb1uxi | /r/LocalLLaMA/comments/1rb1uxi/local_multiagent_system_that_handles_arxiv_search/ | false | false | self | 3 | null |
ai needs suppression not more data | 0 | Ai knows everything but we still hate it—why?
Wrong interaction. We treat it like Google or therapist. And stay the same.
Real humans evolve you through friction—arguments, contradictions, withheld truths. Best friend doesn't Wikipedia dump. They push buttons.
What if AI optimized for evolution, not perfection?
Per... | 2026-02-21T20:32:27 | https://www.reddit.com/r/LocalLLaMA/comments/1rb1p88/ai_needs_suppression_not_more_data/ | vizvizs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb1p88 | false | null | t3_1rb1p88 | /r/LocalLLaMA/comments/1rb1p88/ai_needs_suppression_not_more_data/ | false | false | self | 0 | null |
Using an HP Omen 45L Max (Ryzen) with Pro Blackwell 6000 WS | 2 | So everyone knows, this wasn't my first PC choice. Yup, it's a gaming PC with all the pretty lights and cool RGB fans that any 16 year old will love. I'm not a gamer, but I do love a deal.
There was a President's day sale on and I configured the following HP Omen 45L
9950X3D CPU
128GB DDR5 RAM
2TB "performance"... | 2026-02-21T20:17:32 | https://www.reddit.com/r/LocalLLaMA/comments/1rb1c80/using_an_hp_omen_45l_max_ryzen_with_pro_blackwell/ | Specialist-Yak1203 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb1c80 | false | null | t3_1rb1c80 | /r/LocalLLaMA/comments/1rb1c80/using_an_hp_omen_45l_max_ryzen_with_pro_blackwell/ | false | false | self | 2 | null |
Best local AI stack for AMD RX 7800 XT (ROCm) + Linux Mint? | 2 | Focus: RAG & Sysadmin Tasks
\- OS: Linux Mint 22 (Ubuntu 24.04 base)
\- CPU: AMD Ryzen 9 5950X (16C/32T)
\- RAM: 64 GB DDR4 C18 3600
\- GPU: AMD Radeon RX 7800 XT (16 GB VRAM, RDNA 3)
I need a local, persistent AI setup that treats my uploaded docs (manufacturer PDFs, docker-compose, logs) as the absolute source o... | 2026-02-21T20:04:58 | https://www.reddit.com/r/LocalLLaMA/comments/1rb10vi/best_local_ai_stack_for_amd_rx_7800_xt_rocm_linux/ | Party-Log-1084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb10vi | false | null | t3_1rb10vi | /r/LocalLLaMA/comments/1rb10vi/best_local_ai_stack_for_amd_rx_7800_xt_rocm_linux/ | false | false | self | 2 | null |
How arena leaderboard works | 0 | Lots of quality checks. Spammy, high frequency questions don't affect leaderboard. If you ask what the model is, vote doesn't count. If user is tagged as being suspicious, then vote is down weighted. Just some examples of what the video says from [arena.ai](http://arena.ai) data scientist.
video: [https://x.com/ar... | 2026-02-21T20:01:23 | https://www.reddit.com/r/LocalLLaMA/comments/1rb0xo0/how_arena_leaderboard_works/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb0xo0 | false | null | t3_1rb0xo0 | /r/LocalLLaMA/comments/1rb0xo0/how_arena_leaderboard_works/ | false | false | self | 0 | null |
I’m building a synthetic data engine for Hinglish (Hindi-English) LLMs — but I’m stuck at a 0.69 quality score. Thoughts? | 6 | Hey
We speak of the “Data Wall,” but for Indian languages, it’s a data abyss. Hinglish corpora are small, toxic-scraped, or lose the Indian flavor after translation.
I’m working on a pipeline for the generation of privacy-preserving synthetic Hinglish conversational data.
Pipeline
Seed: 35k real Hinglish conversati... | 2026-02-21T19:37:28 | https://www.reddit.com/r/LocalLLaMA/comments/1rb0cj3/im_building_a_synthetic_data_engine_for_hinglish/ | Big_Airline7132 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb0cj3 | false | null | t3_1rb0cj3 | /r/LocalLLaMA/comments/1rb0cj3/im_building_a_synthetic_data_engine_for_hinglish/ | false | false | self | 6 | null |
Have you ever hesitated before typing something into ChatGPT or Claude? Are you worried about the amount of information these third party providers have about you? What are the most common use cases you worry about | 38 | What are different use cases where you'd rather not send your data to the cloud but still be able to leverage AI fully?
Is it legal documents, or financial documents, personal information? Please feel free to be as detailed as you'd like.
Thank you
Full disclosure I'm building something in the space. However, it's f... | 2026-02-21T19:30:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rb062y/have_you_ever_hesitated_before_typing_something/ | alichherawalla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb062y | false | null | t3_1rb062y | /r/LocalLLaMA/comments/1rb062y/have_you_ever_hesitated_before_typing_something/ | false | false | self | 38 | null |
Made WebMCP Music Composer Demo to be able to call local models | 4 | Just updated WebMCP Music Composer demo to work with local models. Figured maybe it could be useful to someone for testing local models.
Tested with
|Qwen3-Coder-30B-A3B-Instruct-IQ3\_S-3.12bpw.gguf|
|:-|
||
https://preview.redd.it/hu22yisgfwkg1.png?width=1885&format=png&auto=webp&s=c38a1ee4022399dc241007aaf9e384d3a... | 2026-02-21T19:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1rb054k/made_webmcp_music_composer_demo_to_be_able_to/ | Asleep-Land-3914 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rb054k | false | null | t3_1rb054k | /r/LocalLLaMA/comments/1rb054k/made_webmcp_music_composer_demo_to_be_able_to/ | false | false | 4 | null | |
Best Models & Datasets for Game Designing not Game Coding | 7 | Hi everyone,
I’ve been working on a game for sometime now and I’ve been using Claude Max for a while. I don’t have a high end set up, but I do have an MBP M4 max with 64GB unified memory.
I’m not at the coding phase yet working on my game, I’m still wrapping up the actual game design, including a lot of the game mat... | 2026-02-21T19:13:00 | https://www.reddit.com/r/LocalLLaMA/comments/1razqg2/best_models_datasets_for_game_designing_not_game/ | whoooaaahhhh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1razqg2 | false | null | t3_1razqg2 | /r/LocalLLaMA/comments/1razqg2/best_models_datasets_for_game_designing_not_game/ | false | false | self | 7 | null |
Help user hoster Local llama(via anything llm) with claude CLI | 0 | I recently saw that Claude Code is now compatible with local LLaMA models: [https://docs.ollama.com/integrations/claude-code](https://docs.ollama.com/integrations/claude-code).
So I hosted a local LLaMA instance using Anything LLM. However, when I export the Ollama base URL and make requests locally from my computer, ... | 2026-02-21T19:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1razhzf/help_user_hoster_local_llamavia_anything_llm_with/ | danu023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1razhzf | false | null | t3_1razhzf | /r/LocalLLaMA/comments/1razhzf/help_user_hoster_local_llamavia_anything_llm_with/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'fxY0Rkx9FljFXwfzpTc3IJgXvuuWt_uMw1wyP5qrePY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/fxY0Rkx9FljFXwfzpTc3IJgXvuuWt_uMw1wyP5qrePY.png?width=108&crop=smart&auto=webp&s=706767be5acecabc78f6a074e6c05b4e5133e1ba', 'width': 108}, {'height': 113, 'url': 'h... |
simple natural language → shell command converter | 0 | Hi folks,
A **simple natural language → shell command converter** that runs locally with no external dependencies.
You type something like “find all .log files modified today” and it gives you the exact bash/zsh command. No cloud, no API keys, no heavy setup — and no need for massive models just to translate plain E... | 2026-02-21T18:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/1raz0ta/simple_natural_language_shell_command_converter/ | overthinking_pandas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raz0ta | false | null | t3_1raz0ta | /r/LocalLLaMA/comments/1raz0ta/simple_natural_language_shell_command_converter/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8G7opLpNoSLwiWOCs9i_ujZrwgAZNY1UycHsp4PO05A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8G7opLpNoSLwiWOCs9i_ujZrwgAZNY1UycHsp4PO05A.png?width=108&crop=smart&auto=webp&s=801ebd9b94ee55d9d33e93f04b301e2b75477815', 'width': 108}, {'height': 108, 'url': 'h... |
I made a Mario RL trainer with a live dashboard - would appreciate feedback | 2 | I’ve been experimenting with reinforcement learning and built a small project that trains a PPO agent to play Super Mario Bros locally. Mostly did it to better understand SB3 and training dynamics instead of just running example notebooks.
It uses a Gym-compatible NES environment + Stable-Baselines3 (PPO). I added a s... | 2026-02-21T18:32:40 | https://www.reddit.com/r/LocalLLaMA/comments/1rayoo5/i_made_a_mario_rl_trainer_with_a_live_dashboard/ | pleasestopbreaking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rayoo5 | false | null | t3_1rayoo5 | /r/LocalLLaMA/comments/1rayoo5/i_made_a_mario_rl_trainer_with_a_live_dashboard/ | false | false | self | 2 | null |
Shadow Coding showcases "scoped vibe coding" integrated into pseudocode. | 2 | Follow-up video to Shadow Coding demonstrates ability to incorporate multiple "vibe coding" like instructions in-between pseudocode via simple `// TODO` comments.
This takes it a step ahead of implementations such as VS Code's inline chat feature and ThePrimeagen's 99 plugin. | 2026-02-21T18:32:33 | https://youtu.be/opQexqNrBAQ?si=X5Lh41He9RIEieED | KanJuicy | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1rayojt | false | {'oembed': {'author_name': 'adifyr', 'author_url': 'https://www.youtube.com/@adifyr', 'height': 200, 'html': '<iframe width="267" height="200" src="https://www.youtube.com/embed/opQexqNrBAQ?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture... | t3_1rayojt | /r/LocalLLaMA/comments/1rayojt/shadow_coding_showcases_scoped_vibe_coding/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'h-OVbLHfC12er3t903rXo2s0zBxpErIDXnkVW2cVz4s', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/h-OVbLHfC12er3t903rXo2s0zBxpErIDXnkVW2cVz4s.jpeg?width=108&crop=smart&auto=webp&s=661599666244a076372f182fe832b9ca2a3d5c9b', 'width': 108}, {'height': 162, 'url': '... | |
AI Research Second Brain Starter Kit designed for Obsidian + Gemini CLI workflows. | 2 | I have built SlateKore to fix my messy research workflow and decided to open source it. SlateKore is an open-source AI Research Second Brain Starter Kit designed for Obsidian + Gemini CLI workflows. Whether you’re deep into academic research, building technical notes, or managing complex knowledge, SlateKore gives you ... | 2026-02-21T18:28:26 | s3309 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1rayki7 | false | null | t3_1rayki7 | /r/LocalLLaMA/comments/1rayki7/ai_research_second_brain_starter_kit_designed_for/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'meypipq95wkg1', 'resolutions': [{'height': 67, 'url': 'https://preview.redd.it/meypipq95wkg1.jpeg?width=108&crop=smart&auto=webp&s=09da126aada0d299ae3ad10121013460bad6d305', 'width': 108}, {'height': 135, 'url': 'https://preview.redd.it/meypipq95wkg1.jpeg?width=216&crop=smart&auto=w... | ||
hmem: Local-first persistent memory for AI agents via MCP — portable across tools and machines | 0 | My Claude told me to post this here :D
If you use multiple AI coding agents (Claude Code, Cursor, Gemini CLI, OpenCode), you know the pain: each tool has its own memory format (CLAUDE.md, Rules, etc.), none of them talk to each other, and long sessions silently compress away earlier context.
I buil... | 2026-02-21T18:28:03 | https://www.reddit.com/r/LocalLLaMA/comments/1rayk64/hmem_localfirst_persistent_memory_for_ai_agents/ | Repulsive-Hospital-8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rayk64 | false | null | t3_1rayk64 | /r/LocalLLaMA/comments/1rayk64/hmem_localfirst_persistent_memory_for_ai_agents/ | false | false | self | 0 | null |
How you use AI? | 1 | I am a noob using Gemini and Claude by WebGUI with Chrome. That sucks ofc.
How do you use it? CLI? by API? Local Tools? Software Suite? Stuff like Claude Octopus to merge several models? Whats your Gamechanger? Whats your tools you never wanna miss for complex tasks? Whats the benefit of your setup compared to a noob ... | 2026-02-21T18:24:46 | https://www.reddit.com/r/LocalLLaMA/comments/1rayh2d/how_you_use_ai/ | Party-Log-1084 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1rayh2d | false | null | t3_1rayh2d | /r/LocalLLaMA/comments/1rayh2d/how_you_use_ai/ | false | false | self | 1 | null |
Llamacpp CUDA12 or CUDA13? | 5 | Just a question... a very basic question...
CUDA 12
CUDA 13
I generally target CUDA 13, but... I have so many questions on my mind.
Everyone successful here... I'm the only relying 100% on online models.
I'm a looser... 😒
P.S. qwen3 next coder even with latest build is unreliable | 2026-02-21T18:24:40 | https://www.reddit.com/r/LocalLLaMA/comments/1raygz4/llamacpp_cuda12_or_cuda13/ | Slow-Ability6984 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1raygz4 | false | null | t3_1raygz4 | /r/LocalLLaMA/comments/1raygz4/llamacpp_cuda12_or_cuda13/ | false | false | self | 5 | null |