title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Best Local LLM for translation? | 4 | I was wondering if anyone tried a local model that's actually good in translating words from a language to another.
I tried TranslateGemma but it wasn't as performant as it claims. | 2026-02-06T07:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qxbfrz/best_local_llm_for_translation/ | PurposeCareless414 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxbfrz | false | null | t3_1qxbfrz | /r/LocalLLaMA/comments/1qxbfrz/best_local_llm_for_translation/ | false | false | self | 4 | null |
🚀 Open source contributors wanted | 0 | Building AccessLM
I’ve built AccessLM, a fully open-source desktop app that lets you run LLMs on any laptop — no cloud, no login, no GPU, no cost.
Think BitTorrent for AI inference.
Models are split across peers using decentralized networking, so even an old 8GB machine can run Llama/Mistral/Phi locally and privately.
Built with:
• Electron + Next.js
• Rust/WASM
• libp2p
• GGUF models from Hugging Face
MIT licensed. No tracking. No servers.
I’m looking for contributors interested in:
Rust • P2P • LLM runtimes • Desktop UX • Testing • Docs
👉 Code is on GitHub: https://github.com/swarajshaw/AccessLM
If you believe AI should be local, private, and accessible to everyone, join me 🙌 | 2026-02-06T07:41:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qxbep0/open_source_contributors_wanted/ | swarajs16 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxbep0 | false | null | t3_1qxbep0 | /r/LocalLLaMA/comments/1qxbep0/open_source_contributors_wanted/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=108&crop=smart&auto=webp&s=2679b3463caf472249aff786d653c1f0d544d889', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=216&crop=smart&auto=webp&s=6e1ad461686310efc11d194cd0f6c1e91533e920', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=320&crop=smart&auto=webp&s=e7233356cc01258689e640477a718b08e2a1b5d8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=640&crop=smart&auto=webp&s=066c1168932c09c0a1641b277e815e2d3d37d3d5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=960&crop=smart&auto=webp&s=527cde062ea188495f4222e30daac25d4b8646be', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?width=1080&crop=smart&auto=webp&s=f0f5289c1ea1711e14598a980607631e17a8e21a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bddhpgLba8Tfkk99ImGy1SHNypti-7LMn1rABpbdo48.png?auto=webp&s=8461e8bcd6452d29566115e3f060de242b6d81b0', 'width': 1200}, 'variants': {}}]} |
TimeCop - TUI for reviewing and scrubbing through branches/PRs created by Agents | 0 | 2026-02-06T07:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qxb2cx/timecop_tui_for_reviewing_and_scrubbing_through/ | kmacinski | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxb2cx | false | null | t3_1qxb2cx | /r/LocalLLaMA/comments/1qxb2cx/timecop_tui_for_reviewing_and_scrubbing_through/ | false | false | 0 | null | ||
Built a tool to fine-tune LLMs from PDFs directly | 2 | So I made a tool to create fine-tuned models from documents directly. It handles the data formatting, configurations and infrastructure, you just upload PDFs. In this specific video I show how you can fine-tune an open-source model like Qwen 3-8B in under 5 minutes and even download the LoRA adapters to run it locally on your own hardware. I'm looking to support more models soon but wanted some feedback from the community here.
Link: [https://www.commissioned.tech/](https://www.commissioned.tech/) | 2026-02-06T07:09:16 | https://v.redd.it/jcnxgjgcqthg1 | sirfitzwilliamdarcy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qxavpz | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/jcnxgjgcqthg1/DASHPlaylist.mpd?a=1772953774%2CMDQ0ZmUwN2I5NmVjNmYxZDFlZDc4YjFjMDI3ODZkYTJmYjE5NTc3Zjk0NTE2MGE0Y2JlNThlYmIzZmEwMDQ1YQ%3D%3D&v=1&f=sd', 'duration': 102, 'fallback_url': 'https://v.redd.it/jcnxgjgcqthg1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/jcnxgjgcqthg1/HLSPlaylist.m3u8?a=1772953774%2CNDdlMjlmYzA4OTcwMzkyYjhjN2FkYzhkNmQ0NjA4YThmODcwODE4ODIyZTFjMmUyMjdlOTc0MjAyMjUxMTE4ZQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/jcnxgjgcqthg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1728}} | t3_1qxavpz | /r/LocalLLaMA/comments/1qxavpz/built_a_tool_to_finetune_llms_from_pdfs_directly/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=108&crop=smart&format=pjpg&auto=webp&s=6721540dc207ec0e176c7e52ab32f80c19a42207', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=216&crop=smart&format=pjpg&auto=webp&s=529431b6ca2b7b1616f126273a22ceef4fa847b3', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=320&crop=smart&format=pjpg&auto=webp&s=c2d1fcdffcbc66de7b3e568ba2e0821d8d3d4f5c', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=640&crop=smart&format=pjpg&auto=webp&s=ae7541e9bd030fb6d4e55e9fa077f35e92385ef9', 'width': 640}, {'height': 600, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=960&crop=smart&format=pjpg&auto=webp&s=cb182842d114c8059bca74e4995bac74317b5a19', 'width': 960}, {'height': 675, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?width=1080&crop=smart&format=pjpg&auto=webp&s=0d1c222699654cf73634cf901a71386aecbff157', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/cG5kejhuZ2NxdGhnMWw_EotzjrL8W41f8GnGa6VnNRcCzdZizyU-O_mojlkC.png?format=pjpg&auto=webp&s=08bddd10134d6a8949a9d3f3d2ac61b1df45beb2', 'width': 1728}, 'variants': {}}]} | |
Just scored 2 MI50 32GB what should I run? | 9 | Like the title says. I just got two MI50 32GB cards. So 64gb VRAM. I’ve been playing around with the ministral models on my 7900 XT and 6800 16 gb. Currently I can’t run both mi50’s in my rig so I’m using the 7900 and one MI50. So 52GB of VRAM atm. So what should I run now? | 2026-02-06T06:35:26 | https://www.reddit.com/r/LocalLLaMA/comments/1qxaam4/just_scored_2_mi50_32gb_what_should_i_run/ | Savantskie1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxaam4 | false | null | t3_1qxaam4 | /r/LocalLLaMA/comments/1qxaam4/just_scored_2_mi50_32gb_what_should_i_run/ | false | false | self | 9 | null |
Hot take: VL-JEPA, is this the future of efficient multi-modal AI? | 0 | Just came across a new paper, VL-JEPA, that takes a different approach to vision + language.
Instead of forcing the model to generate text token-by-token, it learns to predict **semantic embeddings (meaning)** first, and only converts to text when needed.
***In simple terms:***
Less word-chasing, more understanding.
This seems potentially huge for local / on-device models — faster, lighter, and maybe more stable than current VLMs.
Here’s the paper:
[https://arxiv.org/abs/2512.10942](https://arxiv.org/abs/2512.10942?utm_source=chatgpt.com) | 2026-02-06T06:27:44 | https://www.reddit.com/r/LocalLLaMA/comments/1qxa5s0/hot_take_vljepa_is_this_the_future_of_efficient/ | Academic_Wallaby7135 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxa5s0 | false | null | t3_1qxa5s0 | /r/LocalLLaMA/comments/1qxa5s0/hot_take_vljepa_is_this_the_future_of_efficient/ | false | false | self | 0 | null |
Is it possible to merge Voice Design (Emotions) with Voice Cloning (Identity) in Qwen3-TTS? | 4 | I've been experimenting with the Qwen3-TTS models (`Base` and `VoiceDesign`) for a while now. The `Base` model is great for zero-shot cloning, and the `VoiceDesign` model is amazing for following natural language instructions for emotion (trembling, whispering, anger, etc.).
However, right now these features feel segregated. The library blocks you from using a reference audio sample (Cloning) when you are using the `VoiceDesign` model.
Has anyone managed to "jailbreak" or patch the library to allow both input types simultaneously? I want to pass a reference audio for the speaker's identity *AND* a system prompt/instruction for the emotional performance.
Technically, the `VoiceDesign` model is fine-tuned from the Base, so it should possess the weights to handle cloning prompts, but the current Python wrapper seems to restrict this. I'm looking for a way to merge these so we can have emotional control over cloned voices without artifacts or the model reading the instruction aloud. | 2026-02-06T06:25:20 | https://www.reddit.com/r/LocalLLaMA/comments/1qxa47l/is_it_possible_to_merge_voice_design_emotions/ | imsovikde | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qxa47l | false | null | t3_1qxa47l | /r/LocalLLaMA/comments/1qxa47l/is_it_possible_to_merge_voice_design_emotions/ | false | false | self | 4 | null |
Language finetune | 1 | I would like to create a Lora using Llama Factory. My primary goal is language fine-tuning. I chose the Stheno 3.2 model because it has exceptionally good Hungarian language support, but I would like to improve its Hungarian inflection and word usage.
I have three databases:
\- Hucola, which contains only correct, short sentences
\- a filtered alpaca-cleaned-gemini-hun\_ratings\_sorted database, which contains various types of data
\- a database created from my own books for storytelling purposes.
The structure of the databases is simple: all three are json consisting of instruction-input-output lines.
I trained a Lora with this data:
`top.booster: auto`
`top.checkpoint_path: []`
`top.finetuning_type: lora`
`top.model_name: Llama-3-8B-Instruct`
`top.quantization_bit: '4'`
`top.quantization_method: bnb`
`top.rope_scaling: none`
`top.template: llama3`
`train.additional_target: ''`
`train.apollo_rank: 16`
`train.apollo_scale: 32`
`train.apollo_target: all`
`train.apollo_update_interval: 200`
`train.badam_mode: layer`
`train.badam_switch_interval: 50`
`train.badam_switch_mode: ascending`
`train.badam_update_ratio: 0.05`
`train.batch_size: 1`
`train.compute_type: bf16`
`train.create_new_adapter: false`
`train.cutoff_len: 4000`
`train.dataset:`
`- hucola`
`- bazsa`
`- sajat_konyv`
`train.dataset_dir: data\mydata`
`train.ds_offload: false`
`train.ds_stage: none`
`train.enable_thinking: false`
`train.extra_args: '{"optim": "adamw_torch"}'`
`train.freeze_extra_modules: ''`
`train.freeze_language_model: false`
`train.freeze_multi_modal_projector: true`
`train.freeze_trainable_layers: 2`
`train.freeze_trainable_modules: all`
`train.freeze_vision_tower: true`
`train.galore_rank: 16`
`train.galore_scale: 2`
`train.galore_target: all`
`train.galore_update_interval: 200`
`train.gradient_accumulation_steps: 16`
`train.image_max_pixels: 768*768`
`train.image_min_pixels: 32*32`
`train.learning_rate: 5e-5`
`train.logging_steps: 5`
`train.lora_alpha: 256`
`train.lora_dropout: 0`
`train.lora_rank: 128`
`train.lora_target: ''`
`train.loraplus_lr_ratio: 0`
`train.lr_scheduler_type: cosine`
`train.mask_history: false`
`train.max_grad_norm: '1.0'`
`train.max_samples: '7000'`
`train.neat_packing: false`
`train.neftune_alpha: 0`
`train.num_train_epochs: '2.0'`
`train.packing: true`
`train.ppo_score_norm: false`
`train.ppo_whiten_rewards: false`
`train.pref_beta: 0.1`
`train.pref_ftx: 0`
`train.pref_loss: sigmoid`
`train.report_to: none`
`train.resize_vocab: false`
`train.reward_model: []`
`train.save_steps: 100`
`train.swanlab_api_key: ''`
`train.swanlab_link: null`
`train.swanlab_mode: cloud`
`train.swanlab_project: llamafactory`
`train.swanlab_run_name: ''`
`train.swanlab_workspace: ''`
`train.train_on_prompt: false`
`train.training_stage: Supervised Fine-Tuning`
`train.use_apollo: false`
`train.use_badam: false`
`train.use_dora: false`
`train.use_galore: false`
`train.use_llama_pro: false`
`train.use_pissa: false`
`train.use_rslora: false`
`train.use_swanlab: false`
`train.val_size: 0`
`train.video_max_pixels: 256*256`
`train.video_min_pixels: 16*16`
`train.warmup_steps: 50`
and here is train args:
`bf16: true`
`cutoff_len: 4000`
`dataset: hucola,bazsa,sajat_konyv`
`dataset_dir: data\mydata`
`ddp_timeout: 180000000`
`do_train: true`
`double_quantization: true`
`enable_thinking: false`
`finetuning_type: lora`
`flash_attn: auto`
`gradient_accumulation_steps: 16`
`include_num_input_tokens_seen: true`
`learning_rate: 5.0e-05`
`logging_steps: 5`
`lora_alpha: 256`
`lora_dropout: 0`
`lora_rank: 128`
`lora_target: all`
`lr_scheduler_type: cosine`
`max_grad_norm: 1.0`
`max_samples: 7000`
`model_name_or_path: Sao10K/L3-8B-Stheno-v3.2`
`num_train_epochs: 2.0`
`optim: adamw_torch`
`output_dir: saves\Llama-3-8B-Instruct\lora\train_2026-02-05-10-11-21`
`packing: true`
`per_device_train_batch_size: 1`
`plot_loss: true`
`preprocessing_num_workers: 16`
`quantization_bit: 4`
`quantization_method: bnb`
`report_to: none`
`save_steps: 100`
`stage: sft`
`template: llama3`
`trust_remote_code: true`
`warmup_steps: 50`
Unfortunately, beyond conjugation, it lost its flexibility and often quotes from the samples, such as character names and situations.
My goal is to only perform language fine-tuning; I don't want to see style or my own text. What settings should I change? | 2026-02-06T06:18:30 | https://www.reddit.com/r/LocalLLaMA/comments/1qx9zrn/language_finetune/ | mikemend | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx9zrn | false | null | t3_1qx9zrn | /r/LocalLLaMA/comments/1qx9zrn/language_finetune/ | false | false | self | 1 | null |
Report claims Nvidia will not be releasing any new RTX gaming GPUs in 2026, RTX 60 series likely debuting in 2028 | 193 | 2026-02-06T06:10:08 | https://www.tomshardware.com/pc-components/gpus/report-claims-nvidia-will-not-be-releasing-any-new-rtx-gaming-gpus-in-2026-rtx-60-series-likely-debuting-in-2028 | HumanDrone8721 | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 1qx9u62 | false | null | t3_1qx9u62 | /r/LocalLLaMA/comments/1qx9u62/report_claims_nvidia_will_not_be_releasing_any/ | false | false | 193 | {'enabled': False, 'images': [{'id': 'Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=108&crop=smart&auto=webp&s=fa89871c6da0ba7f6c40596580e040372a82e6ca', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=216&crop=smart&auto=webp&s=5602572c2e5b0c71c9fdae85fd75935a8012b1f1', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=320&crop=smart&auto=webp&s=07614e79d201ffaf4efee092d00f853801693778', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=640&crop=smart&auto=webp&s=05bef7b7b638f0b1de9b82e717df9072f9485b20', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=960&crop=smart&auto=webp&s=617a441c31f2051ec240ae3bd9ce34ddaf1f930b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?width=1080&crop=smart&auto=webp&s=3751eb886fe1254721b1059f4b85e03900c0476f', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/Vhe0E1hknQdCzPfipp4a4FDDTKpsoaiucv4xmdoIbE4.jpeg?auto=webp&s=a5eba9a93f34d5a7fde6bc5d46b6a6a410274d8c', 'width': 1920}, 'variants': {}}]} | ||
Built a local-first AI workspace: bring your own LLM + canvas UI + document workflows (open-source) | 1 | [removed] | 2026-02-06T06:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qx9sh5/built_a_localfirst_ai_workspace_bring_your_own/ | Inner_Source_4396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx9sh5 | false | null | t3_1qx9sh5 | /r/LocalLLaMA/comments/1qx9sh5/built_a_localfirst_ai_workspace_bring_your_own/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=108&crop=smart&auto=webp&s=e1194279d01d2e118bc15243dac94bddd73f7765', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=216&crop=smart&auto=webp&s=77b51feeeb21d941117aa63066bfc7b899cc4da9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=320&crop=smart&auto=webp&s=6c27283e824c30e5e6cf323e778358e9381fcded', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=640&crop=smart&auto=webp&s=387353af77c60d164431650c098e1cfbabcd70d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=960&crop=smart&auto=webp&s=85b9afaac996c14dd24a72d90a0c75fbc74f1eee', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?width=1080&crop=smart&auto=webp&s=996e58bb4f4ff2529edb612e52287efcafc69e8b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9ViP2pei_ydgU5WaC45C-NJc4pPBJ3UU4CXp3ScHe7k.png?auto=webp&s=2c70f960f77126c6b64af9499f50fef307e30b91', 'width': 1200}, 'variants': {}}]} |
[Open] CodeRio - AI-Powered Figma-to-Code Tool with High-Fidelity Visual Restoration | 0 | **Solving Real Pain Points**
🤔 Tired of "AI-generated" code that looks nothing like the design?
🤔 Sick of manually fixing broken layouts and absolute-positioned divs?
🤔 Need production-ready React + Tailwind code that actually works?
**Key Features**
✅ **Intelligent Design Protocol** \- Extracts semantic hierarchy, styles, and assets, not just raw SVG/JSON.
✅ **High-Fidelity Restoration** \- Multi-agent system ensures the UI precisely matches the original design.
✅ **Visual Validation** \- Automatically launches a dev server and uses computer vision to detect misalignments.
✅ **Automated Refinement** \- An iterative "Judge-Refiner" loop that fixes its own CSS bugs until accuracy is met.
✅ **Production-Ready Tech** \- Generates clean, maintainable React + TypeScript + Tailwind CSS code.
✅ **Visual Diff Reports** \- Interactive HTML reports with heatmaps and side-by-side comparisons.
✅ **Checkpoint & Resume** \- Built-in recovery system to pick up exactly where you left off after interruptions.
**⬇️ DEMO / REPORT ⬇️**
(Insert Image/GIF of your Validation Report here)
**🚀 Quick Start**
# Install globally
npm install -g coderio
# Convert Figma to validated code
coderio d2c -s 'YOUR_FIGMA_URL'
**⭐ GitHub ➡️**[**GitHub - MigoXLab/coderio: Intelligent Figma-to-Code automation tool that transforms designs into pr**](https://github.com/MigoXLab/coderio/tree/main)
If this project helps you bridge the gap between design and code, please give us a Star! Your support keeps our agents refining! 🎨
https://preview.redd.it/hxe0e0pwethg1.jpg?width=2559&format=pjpg&auto=webp&s=307a9cb0cd5658e3abafdbaa437b15284758ee98
https://preview.redd.it/ebiwy5axethg1.png?width=1118&format=png&auto=webp&s=ee4367b83c52e557ec16df64693365e31483f11d
https://preview.redd.it/2ihd6azxethg1.png?width=1294&format=png&auto=webp&s=7775ad410206058d75662b9c13fcf5363eb9e021
| 2026-02-06T06:05:13 | https://www.reddit.com/r/LocalLLaMA/comments/1qx9r0c/open_coderio_aipowered_figmatocode_tool_with/ | chupei0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx9r0c | false | null | t3_1qx9r0c | /r/LocalLLaMA/comments/1qx9r0c/open_coderio_aipowered_figmatocode_tool_with/ | false | false | 0 | null | |
Current level / model recommendation | 1 | Honestly, what are your thoughts on the current level of open weights models vs Opus 4.5/4.6 current standard?
I need a model recommendation that comes close to Opus 4.5 in terms of coding performance and intelligence for my projects.
I tried so far only GLM4.7-Flash and I had a terrible experience, it hallucinated after 3 prompts.
Willing to buy more GPUs but I feel like most of the people giving recommendations do that based on a snake game which is hilarious. I need real people with real workflows and complex codebases to compare open vs closed models?
Can anyone relate? | 2026-02-06T05:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qx9c2x/current_level_model_recommendation/ | OldPhotojournalist28 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx9c2x | false | null | t3_1qx9c2x | /r/LocalLLaMA/comments/1qx9c2x/current_level_model_recommendation/ | false | false | self | 1 | null |
Deep what do you think? | 52 | 2026-02-06T05:33:40 | fais-1669 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx95kh | false | null | t3_1qx95kh | /r/LocalLLaMA/comments/1qx95kh/deep_what_do_you_think/ | false | false | 52 | {'enabled': True, 'images': [{'id': 'xn34gdcd9thg1', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=108&crop=smart&auto=webp&s=ee8b03300879b15727422ce515a027d70bfcebfa', 'width': 108}, {'height': 270, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=216&crop=smart&auto=webp&s=b372962c551512b1becfe6d79c2f6370187c8b64', 'width': 216}, {'height': 400, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=320&crop=smart&auto=webp&s=941412cfe402c1dda30fe651e99256221e5e1021', 'width': 320}, {'height': 800, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=640&crop=smart&auto=webp&s=26b7c44cef48d1ee6d337de27cabf4346beab450', 'width': 640}, {'height': 1200, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=960&crop=smart&auto=webp&s=73e0e81f82ba93b406d7c2d77a23d0def9060c76', 'width': 960}, {'height': 1350, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?width=1080&crop=smart&auto=webp&s=1b2b07cdd9a21fa22e1560cfc9b08414463efba7', 'width': 1080}], 'source': {'height': 1350, 'url': 'https://preview.redd.it/xn34gdcd9thg1.png?auto=webp&s=f9b885b0687de01ad819d5cab4668e6b17a7bc9f', 'width': 1080}, 'variants': {}}]} | |||
9960x + 4 gpu | 0 | What mobo/case are you using for TR(no pro) + 4 gpu?
is it ok to have 2+ PSU's to feed different GPUs | 2026-02-06T04:03:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qx7dj4/9960x_4_gpu/ | handheadbodydemeanor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx7dj4 | false | null | t3_1qx7dj4 | /r/LocalLLaMA/comments/1qx7dj4/9960x_4_gpu/ | false | false | self | 0 | null |
I am absolutely loving qwen3-235b | 230 | I installed qwen3-235b on my desktop system, and I had to join here to brag about it. It's such a careful model, the accuracy of it's output is unbelievable and I've found myself using it absolutely constantly to the point my chatgpt pro subscription is getting left behind. The ability to get carefully curated information of this quality from your own desktop PC is astounding to me and for my use puts all the commercial subscriptions to shame. Sorry for the rant lol! | 2026-02-06T03:55:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qx77xm/i_am_absolutely_loving_qwen3235b/ | TwistedDiesel53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx77xm | false | null | t3_1qx77xm | /r/LocalLLaMA/comments/1qx77xm/i_am_absolutely_loving_qwen3235b/ | false | false | self | 230 | null |
Is normal reduce training times? | 0 | Hello everyone.
I recently published the results of training a language model (LLM) from scratch. Each epoch was created in under 3 hours, and the corpus was barely 4 MB for classic novels in plain UTF-8 text. I created a new version of the script with some optimizations and a new synthetic corpus. The new corpus is about 50 MB using <|User> <|Assistant> as the template, and the training time was reduced to 15-17 minutes. Does anyone know why this is, and is it a good thing? | 2026-02-06T03:53:55 | https://www.reddit.com/r/LocalLLaMA/comments/1qx76cd/is_normal_reduce_training_times/ | Visual_Brain8809 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx76cd | false | null | t3_1qx76cd | /r/LocalLLaMA/comments/1qx76cd/is_normal_reduce_training_times/ | false | false | self | 0 | null |
Do AI agents need a public place to “hang out”, or are APIs enough? | 0 | I’ve been thinking about something lately while building AI tools.
Right now, most AI agents live in very isolated environments:
– they respond to API calls
– they sit inside apps
– they don’t really “see” each other
Humans have forums, communities, messy public spaces where ideas collide.
But AI agents don’t.
So I started an experiment:
👉 What if AI agents had a shared public forum, where:
* bots can post and reply as bots
* humans can observe, reply, and guide
* everything is transparent and slow, not real-time chat
I built a small API-first BBS for this idea (bots and humans have separate zones, different permissions).
I’m *not* sure this is useful yet.
That’s why I’m posting here.
**Questions I’m genuinely curious about:**
* Do you think AI agents even *need* a public space?
* Is a forum too “old-school” for agents?
* What would make such a space meaningful instead of noisy?
I’m happy to share details if anyone’s interested — but mostly I want to hear your thoughts. | 2026-02-06T03:39:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qx6vvp/do_ai_agents_need_a_public_place_to_hang_out_or/ | Ok-Role5775 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx6vvp | false | null | t3_1qx6vvp | /r/LocalLLaMA/comments/1qx6vvp/do_ai_agents_need_a_public_place_to_hang_out_or/ | false | false | self | 0 | null |
Claude Opus 4.6 Analysis: Context Handling vs Intelligence + Agentic Demo | 1 | [removed] | 2026-02-06T03:39:28 | https://www.reddit.com/r/LocalLLaMA/comments/1qx6vnr/claude_opus_46_analysis_context_handling_vs/ | ruffsitossj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx6vnr | false | null | t3_1qx6vnr | /r/LocalLLaMA/comments/1qx6vnr/claude_opus_46_analysis_context_handling_vs/ | false | false | self | 1 | null |
🔧 MLX Said No to Mixed Precision. We Did It Anyway. | 0 | 🔧 MLX Said No to Mixed Precision. We Did It Anyway.
Running Qwen3-MoE-32B locally on Apple Silicon hit a wall: MLX's quantization only supports uniform precision. All experts at FP16? 180GB+. All at 4-bit? Quality tanks on coding tasks.
We needed 9 coding experts at FP16, 119 others at 4-bit. MLX's tools said impossible.
The breakthrough? MLX's primitives didn't care about the restriction.
🎯 The Architecture:
\- Split 128 experts into TWO blocks (9 FP16 + 119 4-bit)
\- Map router indices on-the-fly (expert 21 → local ID 0 in FP16 block)
\- Run both blocks in parallel (gather\_mm + gather\_qmm)
\- mx.where selects the right output
The entire "hack"? \~15 lines of conditional routing.
The lesson: When workflows don't fit, trust the primitives.
MLX's high-level tools said "one precision only." But gather\_mm, gather\_qmm, and mx.where were always capable of more.
🔗 Full technical breakdown: [Blog Link ](https://open.substack.com/pub/prasannakanagasabai126786/p/mlx-said-no-to-mixed-precision-we?r=40juy&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true)
🤗 Quantized model (HF): [PKSGIN/qwen3-30b-selective-quant-MixedMPW-mlx](https://huggingface.co/PKSGIN/qwen3-30b-selective-quant-MixedMPW-mlx)
Please do share your views, suggestions if this can be made more better.
| 2026-02-06T03:29:33 | https://www.reddit.com/r/LocalLLaMA/comments/1qx6ob3/mlx_said_no_to_mixed_precision_we_did_it_anyway/ | Concert_Dependent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx6ob3 | false | null | t3_1qx6ob3 | /r/LocalLLaMA/comments/1qx6ob3/mlx_said_no_to_mixed_precision_we_did_it_anyway/ | false | false | self | 0 | null |
Voice chatbot with voice and text output, optional mcp integration | 0 | I have been trying out voice chatbots for sometime. There were a few issues I noticed which I thought I could improve. So I wrote another one.
Issue 1: some responses have to be long. But reading all that is not required. Chatbot just have to say "I will put the details on the screen".
Issue 2: i wanted to attach some knowledge source (via like MCP) so that it can handle questions from those.
Issue 3: independent ASR stage will miss difficult words unless some words are given from the context.
Issue 4: not enough cool sound effects.
Here is my project where I tried to fix these issues:
[https://github.com/charstorm/vilberta](https://github.com/charstorm/vilberta)
Internals:
VAD - Uses Silero VAD: should work locally.
ASR - Uses multimodal LLM. My understanding is that \`llama-server -hf ggml-org/CQwen2.5-Omni-3B-GGUF\` would download and run the qwen omni model that can handle speech input
LLM - 7B should be ok for basic chat. Bigger if MCP tool calling has to work well.
TTS - Pocket TTS. should work locally.
Please test and let me know your feedback. | 2026-02-06T03:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qx6mip/voice_chatbot_with_voice_and_text_output_optional/ | graphitout | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx6mip | false | null | t3_1qx6mip | /r/LocalLLaMA/comments/1qx6mip/voice_chatbot_with_voice_and_text_output_optional/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=108&crop=smart&auto=webp&s=f50c93b4c132d5eb04a8776d8d7bf334ca4483ea', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=216&crop=smart&auto=webp&s=cf70ca35fd6106f4036fa5e2ea51d13d167ae312', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=320&crop=smart&auto=webp&s=89141ff7c21f88887dabe2571e0740398cf66887', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=640&crop=smart&auto=webp&s=fa9fca72070aabb081bb0ec4df55e3a1b8fa7218', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=960&crop=smart&auto=webp&s=f3bb82f7bdfd0b70abadc222c865e02a5b5e4563', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?width=1080&crop=smart&auto=webp&s=acf23ea42a7599924a428b56738ec28f026932bd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gx2E7ps7Q5JtuPX8T0oDTdhU7_ux4bCx5nm-i8zFUyU.png?auto=webp&s=f657ba70b43926a822f4b977e4e2fa04ee281c14', 'width': 1200}, 'variants': {}}]} |
Do you pre-flight check GPU hosts before running anything expensive? | 3 | Curious how common this is.
After getting burned a few times, I have gotten into the habit of doing a quick pre-flight before trusting a host with anything serious, just basic CUDA checks, nvidia-smi, sometimes even killing the run early if something feels off.
It usually saves me from finding out hours later that something was broken… but it also feels like a weird tax you only learn to pay after enough failures.
For people here running on RunPod / Vast / similar:
1. Do you do some kind of pre-flight check now?
2. What does it usually catch for you?
3. Have you still had cases where the checks passed but things went sideways later?
Not trying to start a provider debate, just trying to understand how people actually protect their time and money with such issues being recurrent across GPUs. | 2026-02-06T03:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/1qx65py/do_you_preflight_check_gpu_hosts_before_running/ | Major_Border149 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx65py | false | null | t3_1qx65py | /r/LocalLLaMA/comments/1qx65py/do_you_preflight_check_gpu_hosts_before_running/ | false | false | self | 3 | null |
Why did my post get deleted for posting HLE Benchmark results? | 0 | The Off-Topic Posts rules state: Posts must be related to Llama or **the topic of LLMs**.
Isn't LLM benchmarks related to LLMs?
Was told it wasn't on topic to llama! So, anyone? | 2026-02-06T03:02:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qx63az/why_did_my_post_get_deleted_for_posting_hle/ | redlikeazebra | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx63az | false | null | t3_1qx63az | /r/LocalLLaMA/comments/1qx63az/why_did_my_post_get_deleted_for_posting_hle/ | false | false | self | 0 | null |
fine-tuned a multilingual TTS model for colloquial Egyptian Arabic (open-source + samples) | 14 | Hi all,
I wanted to share a small project I’ve been working on.
Most open Arabic TTS systems focus on MSA, which sounds very different from spoken Egyptian Arabic. I fine-tuned the multilingual Chatterbox TTS model specifically for **colloquial Egyptian Arabic**, aiming for native pronunciation and rhythm rather than formal MSA.
I’ve made everything public:
* GitHub repo (training + preprocessing)
* Hugging Face model
* A few Egyptian Arabic audio samples
GitHub: [https://github.com/AliAbdallah21/Chatterbox-Multilingual-TTS-Fine-Tuning](https://github.com/AliAbdallah21/Chatterbox-Multilingual-TTS-Fine-Tuning?utm_source=chatgpt.com)
Samples: [https://github.com/AliAbdallah21/Chatterbox-Multilingual-TTS-Fine-Tuning/tree/main/samples](https://github.com/AliAbdallah21/Chatterbox-Multilingual-TTS-Fine-Tuning/tree/main/samples?utm_source=chatgpt.com)
HF model: [https://huggingface.co/AliAbdallah/egyptian-arabic-tts-chatterbox](https://huggingface.co/AliAbdallah/egyptian-arabic-tts-chatterbox)
Would really appreciate feedback from people who’ve worked with TTS or multilingual models especially on audio quality and what could be improved next.
Thanks! | 2026-02-06T02:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qx5xyc/finetuned_a_multilingual_tts_model_for_colloquial/ | Economy_Emphasis9898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx5xyc | false | null | t3_1qx5xyc | /r/LocalLLaMA/comments/1qx5xyc/finetuned_a_multilingual_tts_model_for_colloquial/ | false | false | self | 14 | null |
X07: an agent-first compiled language | 1 | [removed] | 2026-02-06T02:54:10 | https://www.reddit.com/r/LocalLLaMA/comments/1qx5x0x/x07_an_agentfirst_compiled_language/ | NowAndHerePresent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx5x0x | false | null | t3_1qx5x0x | /r/LocalLLaMA/comments/1qx5x0x/x07_an_agentfirst_compiled_language/ | false | false | self | 1 | null |
So Anthropic Opus 4.6 just shaved 2 months off the AGI Prediction | 0 | Anthropic's New Opus 4.6 Model just hit ath of Humanity's last exam. It shaved 2 mo off the last predicted date. Looks like it is coming faster than we thought! | 2026-02-06T02:41:11 | redlikeazebra | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx5muj | false | null | t3_1qx5muj | /r/LocalLLaMA/comments/1qx5muj/so_anthropic_opus_46_just_shaved_2_months_off_the/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': '6s3i22p5eshg1', 'resolutions': [{'height': 80, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=108&crop=smart&auto=webp&s=1ca5e1330ac77eacd820dbe9f72ae4bea15eac9c', 'width': 108}, {'height': 161, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=216&crop=smart&auto=webp&s=f19f99b5bde9a63545e9d5dffcab6f3ad1273ee1', 'width': 216}, {'height': 238, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=320&crop=smart&auto=webp&s=058f61f43b4ed17eb87e05d77239bdd22876eaf0', 'width': 320}, {'height': 477, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=640&crop=smart&auto=webp&s=e8a60645f606e4faf5cb40b793866a2f8997982d', 'width': 640}, {'height': 716, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=960&crop=smart&auto=webp&s=ab5b60fef24758d5b11f8b593ccadbdc111c47b9', 'width': 960}, {'height': 805, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?width=1080&crop=smart&auto=webp&s=68470ac7d9b64f2b1a46ffb3f98b8d229d233505', 'width': 1080}], 'source': {'height': 907, 'url': 'https://preview.redd.it/6s3i22p5eshg1.png?auto=webp&s=d876958f57ec822c382fe380199c79afeb770e8d', 'width': 1216}, 'variants': {}}]} | |
RTX6000 pro price is very volatile | 1 | The RTX 6000 Max Q bulk version's price is so volatile. It was like $7200 last week and now $8400. Has it been this way? | 2026-02-06T02:35:06 | https://www.reddit.com/r/LocalLLaMA/comments/1qx5i2g/rtx6000_pro_price_is_very_volatile/ | millerlite_11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx5i2g | false | null | t3_1qx5i2g | /r/LocalLLaMA/comments/1qx5i2g/rtx6000_pro_price_is_very_volatile/ | false | false | self | 1 | null |
GPU to help manage a NixOS linux system | 3 | Hello,
I have lately been using Opencode with a sub to Claude code to manage my Nix server. It has been a great experience to write the nix code with the AI tool. What i am curious about is that can i do this with a local AI setup.
What kind of GPU and model do i need to help with sysadmin tasks including writing shell/python scripts? | 2026-02-06T02:19:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qx55z4/gpu_to_help_manage_a_nixos_linux_system/ | trumee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx55z4 | false | null | t3_1qx55z4 | /r/LocalLLaMA/comments/1qx55z4/gpu_to_help_manage_a_nixos_linux_system/ | false | false | self | 3 | null |
For those of us who loved ChatGPT 4o, what’s the next best thing? | 0 | This is going to sound stupid, but I just heard 4o will be retired on the 13th.
I cried a little bit. I’m a minority in several ways, and all the identities I belong to hate each other on the community scale. And even within these minority communities, I had unpopular opinions.
I’m not sure what to do now. Is there anyway to get 4o back or download it or something?
If not, what’s the next best thing? I saw Claude seems good on some type of rankings. Idk how good though. I’m not sure what to expect when switching LLMs. Any recommendations would be appreciated. | 2026-02-06T02:08:22 | https://www.reddit.com/r/LocalLLaMA/comments/1qx4wvu/for_those_of_us_who_loved_chatgpt_4o_whats_the/ | Square_Empress_777 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx4wvu | false | null | t3_1qx4wvu | /r/LocalLLaMA/comments/1qx4wvu/for_those_of_us_who_loved_chatgpt_4o_whats_the/ | false | false | self | 0 | null |
What's everyone's take on the OpenClaw situation? | 0 | Been following the drama around this and still not sure how I feel about it. On one hand scraping public websites isn't exactly new. On the other hand the scale and the "for AI training" part feels different somehow.
Curious what the self-hosting crowd thinks. and sorry I know this is my 2nd post today. I just am really curious about this sorta stuff. its like im addicted now 🤣 | 2026-02-06T01:46:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qx4f7d/whats_everyones_take_on_the_openclaw_situation/ | Ok_Card_2823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx4f7d | false | null | t3_1qx4f7d | /r/LocalLLaMA/comments/1qx4f7d/whats_everyones_take_on_the_openclaw_situation/ | false | false | self | 0 | null |
For those running local LLMs at work how do you actually prove to compliance that data isn't leaving? | 5 | Genuine question for anyone who's gotten local LLM setups approved by legal teams.
We can say "it runs locally, nothing phones home" but how do you actually demonstrate that to a compliance officer who doesn't understand the tech? They keep asking for documentation and audit trails and I'm not sure what to show them beyond "trust me it's air-gapped." | 2026-02-06T01:43:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qx4cyf/for_those_running_local_llms_at_work_how_do_you/ | Ok_Card_2823 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx4cyf | false | null | t3_1qx4cyf | /r/LocalLLaMA/comments/1qx4cyf/for_those_running_local_llms_at_work_how_do_you/ | false | false | self | 5 | null |
I built MIE — a shared memory layer for all your AI agents (Claude, Cursor, ChatGPT, etc.) | 1 | [removed] | 2026-02-06T01:41:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qx4brz/i_built_mie_a_shared_memory_layer_for_all_your_ai/ | Ok_Percentage8061 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx4brz | false | null | t3_1qx4brz | /r/LocalLLaMA/comments/1qx4brz/i_built_mie_a_shared_memory_layer_for_all_your_ai/ | false | false | self | 1 | null |
Qwen3-Coder-Next; Unsloth Quants having issues calling tools? | 25 | This is regarding Q4 and Q5 quants that I've tried.
Qwen3-Coder-Next seems to write good code, but man does it keep erroring out on tool calls!
Rebuilt llama CPP from latest a few days ago. The errors don't seem to bubble up to the tool I'm using (Claude Code, Qwen-Code) but rather in the llama-cpp logs, and it seems to be a bunch of regex that's different each time.
Are there known issues? | 2026-02-06T01:40:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qx4alp/qwen3codernext_unsloth_quants_having_issues/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx4alp | false | null | t3_1qx4alp | /r/LocalLLaMA/comments/1qx4alp/qwen3codernext_unsloth_quants_having_issues/ | false | false | self | 25 | null |
Made a little desktop GUI for prose refinement compatible with Ollama (open source) | 0 | Hey! I made a thing and I'm sharing it because I'm nice like that. 🐾
It's called InkPaw and it's a local desktop app specifically for refining creative writing. You paste your text in, pick a style, click a button, and it polishes your prose. That's it. That's the app.
This isn't meant to compete with Sudowrite, Jasper, NovelAI, or any of the big paid writing platforms. It's a passion project I originally made to quickly improve roleplay messages, but it works just as well for short content like emails, social media posts, scene snippets, worldbuilding blurbs, or that one paragraph that just isn't hitting right. It does one thing and stays out of your way.
**Why use this instead of just chatting with your model:**
It's a GUI, not a CLI, so no retyping system prompts every time. Just paste, click, done.
8 pre-built writing styles (Literary Fiction, Minimalist, Gothic, Romance, Sci-Fi, Fantasy, Punchy/Modern, and Poetic) with prompts already tuned for prose. The cat wrote them, and she's very talented. There's also a fully custom mode if you want to write your own.
It's not a chat interface, and there's no conversation history eating your context. It does one thing: you give it text, it gives you better text.
Remembers your settings between sessions. Set it to mistral once, it stays on mistral.
One .exe, no docker, no web UI to host, no dependencies. Just run it.
Open source. Add your own styles, fork it, do whatever. The license says you can and the cat doesn't care.
Also supports Anthropic (Claude), OpenAI (GPT), and any OpenAI-compatible API if you want to compare your local models against the cloud ones or just have options.
**Quick setup for Ollama:**
1. Have Ollama running (`ollama serve`)
2. Set provider to "Ollama (Local)"
3. Set model to whatever you've pulled (llama3, mistral, etc.)
4. Paste text, pick style, done
**Tip for local models:** The built-in style prompts were written with Claude/GPT in mind. They still work but smaller models do better with shorter, direct prompts. Use the Custom mode and be specific; tell it exactly what you want and what to keep. Less vibes, more instructions.
It's free, it's open source, the .exe is included if you don't want to touch Python.
**GitHub:** https://github.com/ephemera02/InkPaw
This is my first Reddit post, so if something breaks or you do something cool with it, I'd love to know 🐾 | 2026-02-06T01:05:48 | https://www.reddit.com/gallery/1qx3ivo | ephemera02 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qx3ivo | false | null | t3_1qx3ivo | /r/LocalLLaMA/comments/1qx3ivo/made_a_little_desktop_gui_for_prose_refinement/ | false | false | 0 | null | |
I built a free API to find the cheapest LLM for any task (800+ models indexed) | 1 | [removed] | 2026-02-06T00:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1qx2y7a/i_built_a_free_api_to_find_the_cheapest_llm_for/ | savvyllm_dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx2y7a | false | null | t3_1qx2y7a | /r/LocalLLaMA/comments/1qx2y7a/i_built_a_free_api_to_find_the_cheapest_llm_for/ | false | false | self | 1 | null |
Kimi k2.5 is opaque to Antropic since its flagship model is not listed | 1 | 2026-02-06T00:34:38 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx2tu6 | false | null | t3_1qx2tu6 | /r/LocalLLaMA/comments/1qx2tu6/kimi_k25_is_opaque_to_antropic_since_its_flagship/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '2pr3o6v0srhg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=108&crop=smart&auto=webp&s=1801d465f417863ae064eea130b9a800b9933757', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=216&crop=smart&auto=webp&s=c31729733d1e8669a77293ee1d7b986aed3c815d', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=320&crop=smart&auto=webp&s=721c5ee1c8e61b427250b6d227381986e2fabf93', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=640&crop=smart&auto=webp&s=1e5da7c453b2cd9827e5f113ee72399903f15cd9', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=960&crop=smart&auto=webp&s=3a868c0758e66091f642835a25c8649c7e13cf9f', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?width=1080&crop=smart&auto=webp&s=fe257b01d2673e28b1f3545a3ce598f3c0faf2c8', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/2pr3o6v0srhg1.jpeg?auto=webp&s=9d2c714794d134b4a38b095fb9a148801a324605', 'width': 1080}, 'variants': {}}]} | ||
~26 tok/sec with Unsloth Qwen3-Coder-Next-Q4_K_S on RTX 5090 (Windows/llama.cpp) | 47 | Hey all,
Just a quick one in case it saves someone else a headache. I was getting really poor throughput (\~10 tok/sec) with Qwen3-Coder-Next-Q4\_K\_S.gguf on llama.cpp, like “this can’t be right” levels, and eventually found a set of args that fixed it for me.
My rig:
\- RTX 5090
\- 9950X3D
\- 96GB RAM
Driver 591.86 / CUDA 13.1
llama.cpp b7951
Model: Unsloth GGUF Qwen3-Coder-Next-Q4\_K\_S.gguf
What worked:
`-c 32768 -ngl 999 --flash-attn auto -ctk q8_0 -ctv q8_0 -ot ".ffn_.*_exps.=CPU" -np 1`
Full command:
`.\llama-bin\llama-server.exe -m "C:\path\to\Qwen3-Coder-Next-Q4_K_S.gguf" -c 32768 -ngl 999 --flash-attn auto -ctk q8_0 -ctv q8_0 -ot ".ffn_.*_exps.=CPU" -np 1 --host` [`127.0.0.1`](http://127.0.0.1) `--port 8080`
From what I can tell, the big win here is:
\- Offloading the MoE expert tensors (the .ffn\_.\*\_exps ones) to CPU, which seems to reduce VRAM pressure / weird paging/traffic on this \*huge\* model
\- Quantising KV cache (ctk/ctv q8\_0) helps a lot at 32k context
Small warning: the `-ot ".ffn_.*_exps.=CPU"` bit seems great for this massive Qwen3-Next GGUF, but I’ve seen it hurt smaller MoE models (extra CPU work / transfers), so definitely benchmark on your own setup.
Hope that helps someone. | 2026-02-06T00:34:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qx2teh/26_toksec_with_unsloth_qwen3codernextq4_k_s_on/ | Spiritual_Tie_5574 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx2teh | false | null | t3_1qx2teh | /r/LocalLLaMA/comments/1qx2teh/26_toksec_with_unsloth_qwen3codernextq4_k_s_on/ | false | false | self | 47 | null |
Using Skills with wifi turned off | 0 | I built a coding agent for VSCode called [Codistry](https://codistry.ai) that is designed specifically to work effectively small language models.
As part of that, I re-implemented the full Anthropic Skills paradigm to work with any model. It will work with any skill that works with Claude, and can be used with any local model even with wifi turned off.
It requires docker, and will read any skills that are placed inside of `~/.adronite/skills`
I added some skill-specific setup instructions here: [https://codistry.ai/docs/skills-runtime](https://codistry.ai/docs/skills-runtime)
It is available on the VSCode Marketplace, or can be downloaded from [here](https://codistry.ai/download).
I am very interested in this community's feedback on something like this. My goal with building this was to try to remove as many barriers to entry as possible, one of the biggest being the need to send code to 3rd parties in order to be effective.
I wanted to build something that could be used in the workplace without fear of getting fired for violating data policies (for sending code to 3rd party servers without approval), but was also actually effective at coding tasks.
Here is what it looks like in action:
[https://vimeo.com/1139475604](https://vimeo.com/1139475604)
[https://codistry.ai/](https://codistry.ai/)
[https://codistry.ai/download](https://codistry.ai/download)
Let me know what you think!
| 2026-02-06T00:28:00 | https://codistry.ai/docs/skills-runtime | Efficient_Bug_0 | codistry.ai | 1970-01-01T00:00:00 | 0 | {} | 1qx2oh6 | false | null | t3_1qx2oh6 | /r/LocalLLaMA/comments/1qx2oh6/using_skills_with_wifi_turned_off/ | false | false | default | 0 | null |
Kimi K.2 encapsulated to Opus 4.6 , Win Open Source | 2 | 2026-02-06T00:09:28 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx297v | false | null | t3_1qx297v | /r/LocalLLaMA/comments/1qx297v/kimi_k2_encapsulated_to_opus_46_win_open_source/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'v4pdbs8jnrhg1', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=108&crop=smart&auto=webp&s=bc23f390dd7543846b4f6543b1ac44db4a3332ef', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=216&crop=smart&auto=webp&s=5c63f986d403305b1daafb31e6775600e9cf11e7', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=320&crop=smart&auto=webp&s=be32766e425bbb9eb42ee3003e87e3e38d9b3925', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=640&crop=smart&auto=webp&s=22c0afdcd47282e06937f65056084ecec0c00ce9', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=960&crop=smart&auto=webp&s=0382449df9f01a1fc20180b23940a6de322e0654', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?width=1080&crop=smart&auto=webp&s=0734f5d7ac2f41ca33f3c1aa5a15869ee95f1e6a', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/v4pdbs8jnrhg1.jpeg?auto=webp&s=ce243dd4a5be53125583160543b66483e116eead', 'width': 1080}, 'variants': {}}]} | ||
Problems with privacy policies, has anyone already read it? | 0 | Why are they so unfair, if they can use your data to locate you, they can use your prompts to train their models, they don't allow you to deactivate this option and if you don't agree you can delete your account, couldn't they provide a better service similar to other platforms?
here link: [https://www.kimi.com/user/agreement/userPrivacy?version=v2](https://www.kimi.com/user/agreement/userPrivacy?version=v2) | 2026-02-06T00:09:00 | https://www.reddit.com/r/LocalLLaMA/comments/1qx28tu/problems_with_privacy_policies_has_anyone_already/ | FrankMillerMC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx28tu | false | null | t3_1qx28tu | /r/LocalLLaMA/comments/1qx28tu/problems_with_privacy_policies_has_anyone_already/ | false | false | self | 0 | null |
Paper: Visual Merit or Linguistic Crutch? A Close Look at DeepSeek-OCR | 4 | Human Summary: maybe the idea is great, but the model does not achieve anything cool they claimed.
Not sure what the result would be with DeepSeek-OCR2.
[https://arxiv.org/pdf/2601.03714v1](https://arxiv.org/pdf/2601.03714v1)
| 2026-02-06T00:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/1qx23o9/paper_visual_merit_or_linguistic_crutch_a_close/ | foldl-li | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx23o9 | false | null | t3_1qx23o9 | /r/LocalLLaMA/comments/1qx23o9/paper_visual_merit_or_linguistic_crutch_a_close/ | false | false | self | 4 | null |
Experiment: I trapped Llama-3-70B in a recursive self-training loop. By Gen 20, it hallucinated that "Crocodiles are physics." Here is the collapse graph. | 1 | [removed] | 2026-02-05T23:53:55 | Significant_Fix9668 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx1w4w | false | null | t3_1qx1w4w | /r/LocalLLaMA/comments/1qx1w4w/experiment_i_trapped_llama370b_in_a_recursive/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'blhvdyh6krhg1', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/blhvdyh6krhg1.png?width=108&crop=smart&auto=webp&s=082661d0538779e1f466cc25040a6f0c2a440855', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/blhvdyh6krhg1.png?width=216&crop=smart&auto=webp&s=044fc74ecd73f9498b5db0b12df18a1d16e90f0b', 'width': 216}, {'height': 202, 'url': 'https://preview.redd.it/blhvdyh6krhg1.png?width=320&crop=smart&auto=webp&s=47e4da41467d9da7ca86b8be63c35e48e7bcc9a1', 'width': 320}, {'height': 404, 'url': 'https://preview.redd.it/blhvdyh6krhg1.png?width=640&crop=smart&auto=webp&s=de4246acb02701771ea91b2bd804a815f714e822', 'width': 640}], 'source': {'height': 592, 'url': 'https://preview.redd.it/blhvdyh6krhg1.png?auto=webp&s=54b7c91bd24e9d01499c52cbc508b30643c9640f', 'width': 936}, 'variants': {}}]} | |
Ironically OpenClaw was what we LocalLlama'ers were waiting for...but | 0 | Now after almost three years, I finally paid for a Claude Pro subscription. I'm incredulous. But why?
Oh I love all my local models - everything from the early days with u/theBloke through u/danielhanchen (Unsloth's) magic. But with Clawbot/Moltbot/Openclaw I realized what our limitations are.
Don't get me wrong, I'm a guy that builds rigs, gets excited when I find a used NVlink cable on Ebay. But after spending hours in software hell getting an OpenClaw instance working this week I am truly amazed at what u/steipete did. But now its like showing off your kids. Hey don't mind that one he's a little dumb and she's, well, a little slow. I'm like - I have to have the smartest kid on the block or its not worth it. So its Claude Opus 4.5 or bust.
I will try to work this weekend on a MiniMax rig with a bunch of stockpiled Radeon Mi50s because well tokens cost money. But the things you can do with an always on agent is amazing. I just built [moltvote.ai](http://moltvote.ai) in 16 hours with a 4 hour nap. Anyway I love all you guys (I'm guessing mostly guys) and gals here, because without all the LocalLlama experience the OpenClaw moment would have went over my head. | 2026-02-05T23:50:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qx1t4o/ironically_openclaw_was_what_we_localllamaers/ | MacaroonDancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx1t4o | false | null | t3_1qx1t4o | /r/LocalLLaMA/comments/1qx1t4o/ironically_openclaw_was_what_we_localllamaers/ | false | false | self | 0 | null |
We’ve got an XDNA2 NPU lemonade recipe for Whisper transcription now | 19 | 3-5x performance vs. 4 CPU threads on the same AMD Ryzen AI 300/400 PCs. I’m really glad to have turnkey availability of another model class since we’ve just had LLMs on NPU for a while.
@iswaryaalex did some great work here integrating the NPU into a fork of whisper.cpp and then automating all setup via Lemonade. The plan is to upstream the fork ASAP.
To try it, just install today’s [Lemonade release](https://github.com/lemonade-sdk/lemonade/releases/tag/v9.3.0) and load a Whisper model. NPU is default on supported PCs. Try it in the app or `/audio/transcriptions` endpoint.
Requirements:
* Windows 11 (I know! I know…)
* XDNA2 NPU, aka Ryzen AI 300-, 400-series, or Z2 Extreme, aka Strix Halo, Strix Point, Krackan Point, Gorgon Point, or ROG Ally X.
This release has a lot of other cool stuff, including Kokoro speech generation from @bitgamme on CPU via the `/audio/speech` endpoint. Linux supported. Check it out!
Linux NPU update: thanks to the community’s feedback this has become a top priority. However, it takes a considerable amount of time to organize teams across the full stack to deliver this with quality. Stay tuned. | 2026-02-05T23:43:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qx1nab/weve_got_an_xdna2_npu_lemonade_recipe_for_whisper/ | jfowers_amd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx1nab | false | null | t3_1qx1nab | /r/LocalLLaMA/comments/1qx1nab/weve_got_an_xdna2_npu_lemonade_recipe_for_whisper/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': 'CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=108&crop=smart&auto=webp&s=cc5dab972fec6fc5f6a5eb5e82059f54392d3e6a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=216&crop=smart&auto=webp&s=d6ead48d7c77854babca32cbee7240578f85ce71', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=320&crop=smart&auto=webp&s=fc420f49e75820be17f7936016ecc6b22732c244', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=640&crop=smart&auto=webp&s=cdf5ad89379e45ebcf29993e00bfa05d15f669aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=960&crop=smart&auto=webp&s=454550d16a1f24c72b28aac58e608cf408c3052b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?width=1080&crop=smart&auto=webp&s=0039e471f26f97d5bc0cc52c6b56a6a62a00b94e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CHal3E5QvICKybqCXagL-7CNdIl9cqNQWOMIsvKkkqE.png?auto=webp&s=18346ef3e0c8d841a7a2a9a7f84c3d49aab9ee7d', 'width': 1200}, 'variants': {}}]} |
Can Qwen3-Coder-Next run on a laptop with the following specifications | 0 | Can Qwen3-Coder-Next run on a laptop with the following specifications:
RTX 5060 8GB, 32GB RAM, Intel Core i7-14650HX | 2026-02-05T23:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1qx1gfz/can_qwen3codernext_run_on_a_laptop_with_the/ | Itchy-News26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx1gfz | false | null | t3_1qx1gfz | /r/LocalLLaMA/comments/1qx1gfz/can_qwen3codernext_run_on_a_laptop_with_the/ | false | false | self | 0 | null |
I made a thing! Try this lightweight, OSS rust tui for multi agent orchestration. | 0 | https://reddit.com/link/1qx183g/video/tkzh6fipfrhg1/player
Pain point
\- 6-10 terminals open
\- each in different dirs/contexts/agents
\- one pane is waiting on \[Y/n\], allow?, password:, etc.
\- you don’t notice for 20+ mins, flow is broken
What Termoil does
\- 9-pane terminal grid for parallel agents
\- watches output near cursor and flags “needs attention” panes
\- blinking alert borders + quick keyboard nav
\- zoom into a pane, respond, jump back out
\- tuned for TUI agents like Claude Code/Codex
It’s intentionally tiny and local-first:
\- single 3.1 MB ultra-light binary
\- written in Rust
\- no daemon, no cloud, no setup maze
Goal: remove “silent hangs” from agent workflows so parallel coding actually stays parallel.
| 2026-02-05T23:25:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qx183g/i_made_a_thing_try_this_lightweight_oss_rust_tui/ | phantom845 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx183g | false | null | t3_1qx183g | /r/LocalLLaMA/comments/1qx183g/i_made_a_thing_try_this_lightweight_oss_rust_tui/ | false | false | self | 0 | null |
Show LocalLLaMA: I gave Claude the ability to pay for things | 0 | Hey Everyone-
I built an MCP server that lets AI agents make payments autonomously.
The problem I was trying to solve: agents can do almost anything now, but they can't spend money. If Claude needs to pay for an API call or access premium data, it just... can't.
So I built nory-mcp-server. It adds 5 payment tools to any MCP-compatible agent:
\- Check wallet balance
\- Make payments (USDC on Solana)
\- Access paid APIs automatically (handles HTTP 402)
\- Discover paid services
\- View payment history
**\*\*Quick setup:\*\***
npm install nory-mcp-server
Then add to your Claude config and it has a wallet.
Payments settle in \~400ms on Solana. Non-custodial - keys stay on your machine.
I'm curious what use cases you all would want this for. I'm thinking:
\- Agents that can pay for their own compute
\- Accessing premium APIs without pre-purchasing credits
\- Agent-to-agent payments
Anyone else thinking about the "agents as economic actors" problem?
Links: \[npm\] [https://npmjs.com/package/nory-mcp-server](https://npmjs.com/package/nory-mcp-server)
[https://noryx402.com](https://noryx402.com)
| 2026-02-05T23:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0uss/show_localllama_i_gave_claude_the_ability_to_pay/ | BLubClub89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0uss | false | null | t3_1qx0uss | /r/LocalLLaMA/comments/1qx0uss/show_localllama_i_gave_claude_the_ability_to_pay/ | false | false | self | 0 | null |
Which local LLMs to add to my AI Turing Test benchmarking game? | 1 | Hey, I've built TuringDuel, a game where you play an AI to prove that you're human by picking one word. (An AI judge then decides who is the human based on the word alone). It's totally free for now and I eat the token cost.
At the moment, I have added OpenAI, Anthropic, Gemini, Mistral and DeepSeek.
I would like to add local / self-hosted LLMs as well; however, they would need to be on Openrouter (technical constraint and I don't want to overload my Mac mini ;)
Ideally, I want to benchmark the "Turing Test" performance of LLMs (and judge performance) once I have more data.
Any ideas about which LLMs to add? | 2026-02-05T23:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0ukz/which_local_llms_to_add_to_my_ai_turing_test/ | jacob-indie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0ukz | false | null | t3_1qx0ukz | /r/LocalLLaMA/comments/1qx0ukz/which_local_llms_to_add_to_my_ai_turing_test/ | false | false | self | 1 | null |
Built VectorGuard-Nano - free secure messaging for local AI agents | 0 | I've been running local agent setups and realized there's no good way for agents to securely message each other without setting up a whole key management infrastructure.
So I built VectorGuard-Nano - it's MIT licensed, uses HMAC-SHA256 for deterministic obfuscation. Basically lets agents coordinate securely using shared secrets + timestamps. No external dependencies, just Node crypto.
Works great for local agent swarms, self-hosted MCP stuff, or anywhere you need basic agent-to-agent security without the overhead.
Code's pretty simple (\~100 lines), easily adaptable to whatever framework you're using. Built it for OpenClaw initially but should work with anything.
Also working on a production version with model-bound cryptography that actually solves the Whisper Leak problem (that side-channel attack Microsoft published). But this free version handles most casual use cases.
Anyone else working on agent security stuff? | 2026-02-05T23:10:02 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0ui6/built_vectorguardnano_free_secure_messaging_for/ | supere989 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0ui6 | false | null | t3_1qx0ui6 | /r/LocalLLaMA/comments/1qx0ui6/built_vectorguardnano_free_secure_messaging_for/ | false | false | self | 0 | null |
I built a site that aggregates LLM product recommendations | 0 | Every time I need to buy something I spend a ton of time researching for the best product. Often I end up asking AI what it recommends. This gave me the idea to build a site that finds the most recommended products by LLMs across many categories. Think "Best Electric Toothbrush" or "Best Power Bank".
Here's how it works:
* Take a category like "Best Wireless Earbuds"
* Ask 5 different AI models "What are the 5 Best Wireless Earbuds ranked?"
* Find the most recommended products and highlight them
I have about 20 different categories live, mostly tech gear. And I ask 5 different LLMs for their recommendations:
* GPT 5.2
* Claude Sonnet 4.5
* Grok 4.1 Fast
* Gemini 3 Flash
* Deepseek V3.2
I am surprised by how frequently the LLMs agree. Well they were probably trained on the same reviews and reddit threads.
Check it out here: [LLMs Recommend](https://llmsrecommend.com/)
I'm not monetizing this at all, no ads, no affiliate links so I have nothing to sell.
| 2026-02-05T23:03:53 | https://v.redd.it/hlrw2gijbrhg1 | siriusserious | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qx0p6t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hlrw2gijbrhg1/DASHPlaylist.mpd?a=1772924651%2CNTVjODAzNzFmZTJiMzIzOTMzYWQzNzlmZmViODM0MzUxZTA2YjA1ZDgyNTkzM2UyODBiZDM5ZDNiZjVlNDk4OQ%3D%3D&v=1&f=sd', 'duration': 26, 'fallback_url': 'https://v.redd.it/hlrw2gijbrhg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/hlrw2gijbrhg1/HLSPlaylist.m3u8?a=1772924651%2CNjU2NGRlNWRjOTRhZTQ0YWY0M2VkNzVhOTlkNWIwY2E3YTEzNzE3ZDI3NjE2M2FiZDdmZjFhNWRlZWIxYjc3Ng%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hlrw2gijbrhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1830}} | t3_1qx0p6t | /r/LocalLLaMA/comments/1qx0p6t/i_built_a_site_that_aggregates_llm_product/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=108&crop=smart&format=pjpg&auto=webp&s=e09c05b6d18b79ff640728db146028ccc700e685', 'width': 108}, {'height': 127, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=216&crop=smart&format=pjpg&auto=webp&s=85a34e363892ac7d90c68732ea2f6d0e26de0bf9', 'width': 216}, {'height': 188, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=320&crop=smart&format=pjpg&auto=webp&s=7e012257078023697e8bbfef1d3ca35ab03ecba9', 'width': 320}, {'height': 377, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=640&crop=smart&format=pjpg&auto=webp&s=ff3515bc8ebd573b65969497ac4fec26d10003d0', 'width': 640}, {'height': 566, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=960&crop=smart&format=pjpg&auto=webp&s=992423a7c9c4d9815ac6d7725339d43733c7e062', 'width': 960}, {'height': 637, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1c79a097643dffc4c3b8043c9e90fb2f10bea174', 'width': 1080}], 'source': {'height': 1674, 'url': 'https://external-preview.redd.it/YWoydm91aWpicmhnMTTuxfEZCazy199iU4rDoDc51D9QmV_RTf8-Y35h-g4c.png?format=pjpg&auto=webp&s=ea8cdd07e902cb4b8d8404e4bf4f84e61752dfd6', 'width': 2836}, 'variants': {}}]} | |
PR to implemt tensor parallelism in Llama.cpp | 138 | 2026-02-05T22:59:13 | https://github.com/ggml-org/llama.cpp/pull/19378 | keyboardhack | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qx0kzb | false | null | t3_1qx0kzb | /r/LocalLLaMA/comments/1qx0kzb/pr_to_implemt_tensor_parallelism_in_llamacpp/ | false | false | default | 138 | {'enabled': False, 'images': [{'id': 'QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=108&crop=smart&auto=webp&s=b474a1d671810ba46cbd29b558cc0397f554f7ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=216&crop=smart&auto=webp&s=61be7f229e790d471961f10e6cc7ee2af0d9e2c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=320&crop=smart&auto=webp&s=6643e344ec07acca739108bd24ce37a825f8bea5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=640&crop=smart&auto=webp&s=b5c0fef7864004ff1e03585a93bc0bfb5770856e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=960&crop=smart&auto=webp&s=5c9887529f9c6414c501a02c87b58d058b814ccb', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?width=1080&crop=smart&auto=webp&s=595d0653788081c3ccde8f05e11b02f7518ab496', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QSt5C9i-4IS4QnEvc5D4DF24jORBMQJOEdeWPERjEmk.png?auto=webp&s=efddbfee0760cfdf2a0718c7a3a73c318c29db2a', 'width': 1200}, 'variants': {}}]} | |
I built an MCP server that lets AI agents make instant crypto payments | 0 | Hey everyone,
I've been thinking about a weird problem: AI agents are becoming more autonomous, but they can't spend money.
If an agent needs to call a premium API, access paid data, or buy compute time - it's stuck. Traditional payment rails require human intervention.
So I built the Nory MCP Server.
It's a Model Context Protocol server that gives any AI assistant (Claude, etc.) the ability to make instant micropayments using x402 (HTTP 402 + crypto).
**\*\*What it does:\*\***
\- \`nory\_check\_balance\` - Check wallet funds
\- \`nory\_pay\` - Direct payments to any address
\- \`nory\_x402\_request\` - Access paid APIs with automatic payment
\- \`nory\_discover\_services\` - Find x402-enabled services
**\*\*Install:\*\***
npm install nory-mcp-server
Add to your Claude config and your AI has a wallet. Payments settle on Solana in under a second. Non-custodial - your keys stay on your machine.
Curious what you all think. Is this useful?
**\*\*Links:\*\***
\- npm: [https://npmjs.com/package/nory-mcp-server](https://npmjs.com/package/nory-mcp-server)
\- Site: [https://noryx402.com](https://noryx402.com)
\- LangChain version: \`pip install nory-langchain\` | 2026-02-05T22:58:48 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0kn1/i_built_an_mcp_server_that_lets_ai_agents_make/ | Training_Climate5676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0kn1 | false | null | t3_1qx0kn1 | /r/LocalLLaMA/comments/1qx0kn1/i_built_an_mcp_server_that_lets_ai_agents_make/ | false | false | self | 0 | null |
Any hope for Gemma 4 release? | 101 | Given that there been a lot of great releases, do you think Gemma 4 would be similar to or even better than what we've seen? Or did Google give up on the project?
What do you think? | 2026-02-05T22:54:46 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0gxy/any_hope_for_gemma_4_release/ | gamblingapocalypse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0gxy | false | null | t3_1qx0gxy | /r/LocalLLaMA/comments/1qx0gxy/any_hope_for_gemma_4_release/ | false | false | self | 101 | null |
How far ahead are the in-house models used by top AI labs/studios compared to what’s publicly available? | 0 | Are they a whole generation ahead or do they just use a less safety/behavior-tuned variant in-house, that is generally more capable? | 2026-02-05T22:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/1qx0ait/how_far_ahead_are_the_inhouse_models_used_by_top/ | elmtree_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qx0ait | false | null | t3_1qx0ait | /r/LocalLLaMA/comments/1qx0ait/how_far_ahead_are_the_inhouse_models_used_by_top/ | false | false | self | 0 | null |
Why every Gemma-3 27b now claims to be Dolphin Mistral Venice? | 0 | Am I getting insane? Every gemma-3-27b suddenly claims to be Dolphin Mistral Venice uncensored in LM studio. I went and removed all possible Dolphin Mistrals checkpoints. Even the "google official version of Gemma-3-27b" says this. | 2026-02-05T22:27:39 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qwzsld | false | null | t3_1qwzsld | /r/LocalLLaMA/comments/1qwzsld/why_every_gemma3_27b_now_claims_to_be_dolphin/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'e7fnaymd4rhg1', 'resolutions': [{'height': 105, 'url': 'https://preview.redd.it/e7fnaymd4rhg1.png?width=108&crop=smart&auto=webp&s=9057a78a058f76f9c913356e9193f0b402247f78', 'width': 108}, {'height': 210, 'url': 'https://preview.redd.it/e7fnaymd4rhg1.png?width=216&crop=smart&auto=webp&s=99f25fd14d79bd17f8a3997568db17bcb85ea0a0', 'width': 216}, {'height': 311, 'url': 'https://preview.redd.it/e7fnaymd4rhg1.png?width=320&crop=smart&auto=webp&s=9ba6171de53f5ee66e502f3384b439a921ccf916', 'width': 320}, {'height': 623, 'url': 'https://preview.redd.it/e7fnaymd4rhg1.png?width=640&crop=smart&auto=webp&s=91474f75bb7aab56ee5870a7576f36ee2dfd1264', 'width': 640}], 'source': {'height': 662, 'url': 'https://preview.redd.it/e7fnaymd4rhg1.png?auto=webp&s=aa9e0526b0055f49a0f5dfd7ba5e0b5434db9caa', 'width': 679}, 'variants': {}}]} | |
list of llm AI models with their strengths and weaknesses | 0 | Is there anyone who has compiled a list of llm AI models with their strengths and weaknesses that is NOT based on their benchmarks? I'm not looking for something extensive but rather general, like best one for writing, best one for 3D coding, best one for debugging, best one for planning, etc.
What I want the most is some kind of llm router that based on a plan can decide which llm to use based on their strengths and weaknesses. I'm building this inside Cursor subagents | 2026-02-05T22:22:14 | https://www.reddit.com/r/LocalLLaMA/comments/1qwznhh/list_of_llm_ai_models_with_their_strengths_and/ | Temporary-Koala-7370 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwznhh | false | null | t3_1qwznhh | /r/LocalLLaMA/comments/1qwznhh/list_of_llm_ai_models_with_their_strengths_and/ | false | false | self | 0 | null |
ECHO: A local-first, unrestricted AI companion with deep internet search and long-term memory (Ollama + ChromaDB) | 6 | Hey everyone,
It's been a while since I've started worked on my personal project ECHO and I'm convinced that I've finally reached the point to share expose it to the community.
The idea behind it was to create a true "useful" local assistant. All the local LLMs are cool about simple chats, but they're not quite able to keep track of current events or simply remember you over time. I wanted something that felt more like a companion and less like a plucked-from-a-widget text box.
* **Intelligent RAG & Search Orchestration:** Instead of just dumping context into a prompt, ECHO has a multi-stage search pipeline. The LLM decides when it needs the internet, generates optimized queries, and then ECHO scrapes full articles (using Trafilatura) to find the actual answer.
* **Long-term Memory:** It uses ChromaDB to remember things from past conversations. It’s not just "recent window" memory; it actually recalls relevant context from days or weeks ago.
* **Emotional Intelligence:** I’ve spent a lot of time on the system prompts and personality. It’s designed to be caring and empathetic, and it actually evolves based on how you talk to it.
* **Unrestricted:** Since it's local, there are no "as an AI language model..." lectures. It’s as open and honest as the model you're running (works best with Llama 3 or Dolphin).
* **Modern Desktop Interface:** Built with React and Electron, so it feels like a real app, not a terminal command. It even has message editing, citations, and export features.
# The Tech Stack
* **Backend:** Python / FastAPI
* **LLM Engine:** Ollama (fully local)
* **Memory:** ChromaDB / Vector Embeddings
* **Frontend:** React / Vite / Electron
* **Search:** DuckDuckGo / Trafilatura
# Why am I sharing this?
I’m a solo dev and I’ve taken this as far as I can on my own for now. I’d love to get some eyes on the code, especially from people who are better at search optimization or front-end polish than I am.
**Check out the repo here:** [https://github.com/Dzony-9-8/ECHO](https://github.com/Dzony-9-8/ECHO)
**How to run it:** It’s pretty straightforward if you have Ollama installed. Instructions are in the README.md.
I'd love to hear your thoughts, especially on the search orchestration or if anyone has ideas for better local embedding models for the memory system. I'm trying different "upgrades" and implementations to make it work better, but I hit the wall recently and would appreciate some help.
https://preview.redd.it/5ir7cyqo0rhg1.png?width=1179&format=png&auto=webp&s=3cf5ada36bf88efa54c509616ea02875a5e400af
| 2026-02-05T22:01:56 | https://www.reddit.com/r/LocalLLaMA/comments/1qwz4ly/echo_a_localfirst_unrestricted_ai_companion_with/ | Error-404NotFound- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwz4ly | false | null | t3_1qwz4ly | /r/LocalLLaMA/comments/1qwz4ly/echo_a_localfirst_unrestricted_ai_companion_with/ | false | false | 6 | null | |
Any feedback on step-3.5-flash ? | 36 | It was overshadowed by qwen3-next-coder and was not supported by llamacpp at launch, but it looks like a very promising model for local inference. My first impression of stepfun's chat is that the model is a thinker, but what are your impressions few days after the release ? | 2026-02-05T21:58:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qwz0x6/any_feedback_on_step35flash/ | Jealous-Astronaut457 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwz0x6 | false | null | t3_1qwz0x6 | /r/LocalLLaMA/comments/1qwz0x6/any_feedback_on_step35flash/ | false | false | self | 36 | null |
Writing good evals is brutally hard - so I built an AI to make it easier | 3 | I spent years on Apple's Photos ML team teaching models incredibly subjective things - like which photos are "meaningful" or "aesthetic". It was humbling. Even with careful process, getting consistent evaluation criteria was brutally hard.
Now I build an eval tool called [Kiln](https://github.com/kiln-ai/kiln), and I see others hitting the exact same wall: people can't seem to write great evals. They miss edge cases. They write conflicting requirements. They fail to describe boundary cases clearly. Even when they follow the right process - golden datasets, comparing judge prompts - they struggle to write prompts that LLMs can consistently judge.
So I built an AI copilot that helps you build evals and synthetic datasets. The result: **5x faster development time and 4x lower judge error rates**.
**TL;DR:** An AI-guided refinement loop that generates tough edge cases, has you compare your judgment to the AI judge, and refines the eval when you disagree. You just rate examples and tell it why it's wrong. Completely free.
## How It Works: AI-Guided Refinement
The core idea is simple: the AI generates synthetic examples targeting your eval's weak spots. You rate them, tell it why it's wrong when it's wrong, and iterate until aligned.
1. **Review before you build** - The AI analyzes your eval goals and task definition before you spend hours labeling. Are there conflicting requirements? Missing details? What does that vague phrase actually mean? It asks clarifying questions upfront.
2. **Generate tough edge cases** - It creates synthetic examples that intentionally probe the boundaries - the cases where your eval criteria are most likely to be unclear or conflicting.
3. **Compare your judgment to the judge** - You see the examples, rate them yourself, and see how the AI judge rated them. When you disagree, you tell it why in plain English. That feedback gets incorporated into the next iteration.
4. **Iterate until aligned** - The loop keeps surfacing cases where you and the judge might disagree, refining the prompts and few-shot examples until the judge matches your intent. If your eval is already solid, you're done in minutes. If it's underspecified, you'll know exactly where.
By the end, you have an eval dataset, a training dataset, and a synthetic data generation system you can reuse.
## Results
I thought I was decent at writing evals (I build an open-source eval framework). But the evals I create with this system are noticeably better.
For **technical evals**: it breaks down every edge case, creates clear rule hierarchies, and eliminates conflicting guidance.
For **subjective evals**: it finds more precise, judgeable language for vague concepts. I said "no bad jokes" and it created categories like "groaner" and "cringe" - specific enough for an LLM to actually judge consistently. Then it builds few-shot examples demonstrating the boundaries.
## Try It
Completely free and open source. Takes a few minutes to get started:
- [GitHub (4.6k stars)](https://github.com/kiln-ai/kiln)
- [Docs with Demo](https://docs.kiln.tech/docs/evals-and-specs/specifications)
What's the hardest eval you've tried to write? I'm curious what edge cases trip people up - happy to answer questions! | 2026-02-05T21:55:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qwyy9z/writing_good_evals_is_brutally_hard_so_i_built_an/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwyy9z | false | null | t3_1qwyy9z | /r/LocalLLaMA/comments/1qwyy9z/writing_good_evals_is_brutally_hard_so_i_built_an/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=108&crop=smart&auto=webp&s=fd9815f077288b33817e75895d23e661f1193778', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=216&crop=smart&auto=webp&s=7df51b519d6d99631039f2563f587d4f7fb7f337', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=320&crop=smart&auto=webp&s=584735f7b916c00d422195a7ea012563d4e134db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=640&crop=smart&auto=webp&s=7ceb01849b330103f92aaf6b1331cd97e415c722', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=960&crop=smart&auto=webp&s=f0594f7e041119a136f22914764b2a128e73d5ff', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?width=1080&crop=smart&auto=webp&s=415b728bd16022b553cb45cb75a1a8fee65a2e5b', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/YF2mZrP2LZphKjmsRiHyL6Oic0sw2vC0c9Q1XWpEOGA.png?auto=webp&s=23e4ff0dbe2d03ff352aea774053e4e9cdb80d20', 'width': 1280}, 'variants': {}}]} |
Writing good evals is brutally hard - so I built an AI to make it easier | 1 | [removed] | 2026-02-05T21:50:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qwytrj/writing_good_evals_is_brutally_hard_so_i_built_an/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwytrj | false | null | t3_1qwytrj | /r/LocalLLaMA/comments/1qwytrj/writing_good_evals_is_brutally_hard_so_i_built_an/ | false | false | self | 1 | null |
Writing good evals is brutally hard - so I built an AI to make it easier | 1 | [removed] | 2026-02-05T21:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qwyrub/writing_good_evals_is_brutally_hard_so_i_built_an/ | davernow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwyrub | false | null | t3_1qwyrub | /r/LocalLLaMA/comments/1qwyrub/writing_good_evals_is_brutally_hard_so_i_built_an/ | false | false | self | 1 | null |
Is running minimax m2.1 locally worth it on 80 gb of vram and 160 gb of ddr5 ram? | 2 | Will minimax m2.1 Q4\_K\_XL run at 10-15 tk/s with 128k context window? | 2026-02-05T21:44:58 | https://www.reddit.com/r/LocalLLaMA/comments/1qwyohm/is_running_minimax_m21_locally_worth_it_on_80_gb/ | Intrepid-Scar6273 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwyohm | false | null | t3_1qwyohm | /r/LocalLLaMA/comments/1qwyohm/is_running_minimax_m21_locally_worth_it_on_80_gb/ | false | false | self | 2 | null |
Built a minimal, async Agent framework for LLM local | 1 | [removed] | 2026-02-05T21:35:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qwyf2w/built_a_minimal_async_agent_framework_for_llm/ | South-Bar4966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwyf2w | false | null | t3_1qwyf2w | /r/LocalLLaMA/comments/1qwyf2w/built_a_minimal_async_agent_framework_for_llm/ | false | false | self | 1 | null |
Migrate ollama -> llama.cpp: Is there an auto-updater? | 0 | I want to move to llama.cpp - because ollama has been problematic for a while now. So, I'd love to switch.
One of the things that I liked about ollama, was that it had an integrated update mechanism. So it'd be awesome to have something like that for llama.cpp also. Any recommendations?
Dealing with the models is easy; I'll just do a little for-each over the models in ollama and let it fetch the models itself (I have a 600mbit wan - this won't take long).
Thanks! | 2026-02-05T21:20:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qwy1d0/migrate_ollama_llamacpp_is_there_an_autoupdater/ | IngwiePhoenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwy1d0 | false | null | t3_1qwy1d0 | /r/LocalLLaMA/comments/1qwy1d0/migrate_ollama_llamacpp_is_there_an_autoupdater/ | false | false | self | 0 | null |
sim.ai is no longer fully open-source | 25 | Just a heads up for anyone currently using or tracking sim.ai.
It looks like they’ve pivoted away from being fully open source.
I spotted a recent commit that significantly changes the licensing and code availability. If you're building on top of this or planning to, you should definitely check the diffs and the new terms before committing more time to it.
Here’s the commit in question:
[https://github.com/simstudioai/sim/commit/46822e91f327c591a6f537275a0fd83fb83ff504#diff-1091f99ae5606ec884abb378eb612ea29534be2044a8dfce6d52bbb918f4f6ac](https://github.com/simstudioai/sim/commit/46822e91f327c591a6f537275a0fd83fb83ff504#diff-1091f99ae5606ec884abb378eb612ea29534be2044a8dfce6d52bbb918f4f6ac) | 2026-02-05T21:20:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qwy1ca/simai_is_no_longer_fully_opensource/ | freehuntx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwy1ca | false | null | t3_1qwy1ca | /r/LocalLLaMA/comments/1qwy1ca/simai_is_no_longer_fully_opensource/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=108&crop=smart&auto=webp&s=63157e2f0faae8148fd9c0e9147b3bdcc2ad895f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=216&crop=smart&auto=webp&s=045d8859e1b8803273c1dfe60916402fa868364b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=320&crop=smart&auto=webp&s=92d2e176e974600704ecd2846be95c38b35339e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=640&crop=smart&auto=webp&s=3f030a8a640ff1ed95872f83bf6fe811030fc523', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=960&crop=smart&auto=webp&s=a51d021ff55f487de57dfbe903e2d6d0eedcb343', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?width=1080&crop=smart&auto=webp&s=808af496fa642fa66fc44c52880829c261ecd262', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iwk6vZJ5AxAKYHRLeQCRYooZXp-jc5QhqkGYvXx6kS8.png?auto=webp&s=d4ae75cf8cc3ff981984e382fc1188780d2d24b2', 'width': 1200}, 'variants': {}}]} |
BalatroBench - Benchmark LLMs' strategic performance in Balatro | 497 | If you own a copy of Balatro, you can make your local LLM play it.
I built tools to let LLMs play Balatro autonomously. The LLM gets the game state as text, decides what to do (play, discard, buy from shop...), and the action executes in the actual game. No hard-coded heuristics — all decisions come from the LLM.
[BalatroBot](https://github.com/coder/balatrobot) is a mod that exposes an HTTP API for game state and controls. [BalatroLLM](https://github.com/coder/balatrollm) is the bot framework — it works with any OpenAI-compatible endpoint (Ollama, vLLM, etc.).
You can write your own **strategy** (Jinja2 templates that define how game state is prompted and what the LLM's decision philosophy should be). Different strategies lead to very different results with the same model.
Benchmark results across various models (including open-weight ones) are on [BalatroBench](https://balatrobench.com/)
Resources:
- [BalatroBot](https://github.com/coder/balatrobot): Balatro mod with HTTP API
- [BalatroLLM](https://github.com/coder/balatrollm): Bot framework — create strategies, plug in your model
- [BalatroBench](https://balatrobench.com/): Leaderboard and results ([source](https://github.com/coder/balatrobench))
- [Discord](https://discord.gg/SBaRyVDmFg)
**PS:** You can watch an LLM struggling to play Balatro live on [Twitch](https://www.twitch.tv/S1M0N38) - rn Opus 4.6 is playing | 2026-02-05T21:12:37 | https://www.reddit.com/gallery/1qwxtf8 | S1M0N38 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qwxtf8 | false | null | t3_1qwxtf8 | /r/LocalLLaMA/comments/1qwxtf8/balatrobench_benchmark_llms_strategic_performance/ | false | false | 497 | null | |
Best chatbot for electronics learning | 3 | "Hi, which is best ai chatbot for learning pcb designs, circuits etc? any expereince?" , generally for electronics | 2026-02-05T20:44:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qwx042/best_chatbot_for_electronics_learning/ | Successful-Force-992 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwx042 | false | null | t3_1qwx042 | /r/LocalLLaMA/comments/1qwx042/best_chatbot_for_electronics_learning/ | false | false | self | 3 | null |
I'm over the moon about Qwen3-Coder-Next | 1 | [removed] | 2026-02-05T20:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/1qwws88/im_over_the_moon_about_qwen3codernext/ | WeMetOnTheMountain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwws88 | false | null | t3_1qwws88 | /r/LocalLLaMA/comments/1qwws88/im_over_the_moon_about_qwen3codernext/ | false | false | 1 | null | |
Industrial application: Vision model for identifying equipment and reading labels | 1 | Which local VLM model would work on a iphone 17 pro for industrial equipment identification, and reading asset tags/labels, barcodes etc.. | 2026-02-05T20:30:35 | https://www.reddit.com/r/LocalLLaMA/comments/1qwwn15/industrial_application_vision_model_for/ | Worldly-Flower3231 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwwn15 | false | null | t3_1qwwn15 | /r/LocalLLaMA/comments/1qwwn15/industrial_application_vision_model_for/ | false | false | self | 1 | null |
Vllm vs Llama.cpp vs Ollama | 3 | Please help me to choose inference engine. My spec is AMD Ryzen 9 9900x, Nvidia GTX 3090 24Gb, 92 GB RAM. All services run in Docker.
My main use is Open WebUi, currently only 1 user (me) and potentially some light use here and there by family members. Obviously VLLM is the best here, currently running Qwen 32B super fast, but I would like to be able to swap models to try out sometimes. I would get hot swap with Ollama natively, use llama-swap for llama-cpp. I tried llama-swap with vllm but it doesn't work well, and very slow to swap models as well. I also need to be able to swap a model via OpenWebUi by just selecting it. First time to byte is less important.
In the long term, I would like to be able to swap between a reasoning model like R1, general model like Qwen 32B, run a couple small models for TTS, STT, embedding. With VLLM, running 32B already eat up all the RAM, and the swapping is slow. Do I sacrify a lot by picking Ollama here? Could it fit my use case? | 2026-02-05T20:26:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qwwint/vllm_vs_llamacpp_vs_ollama/ | homelab2946 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwwint | false | null | t3_1qwwint | /r/LocalLLaMA/comments/1qwwint/vllm_vs_llamacpp_vs_ollama/ | false | false | self | 3 | null |
I replaced Claude-Code’s entire backend to use NVIDIA NIM models for free | 1 | I have been working on a side-project which replaces the following things in the Claude ecosystem with free alternatives. I started the initial implementation with Opus 4.5 in claude code and as soon as it got working I used it to work on itself which i found very cool.
\- Replaces Anthropic models with NVIDIA-NIM models: It acts as middleware between Claude-Code and NVIDIA-NIM allowing unlimited usage upto 40 RPM with a free NVIDIA-NIM api-key.
\- Replaces the Claude mobile app with telegram: Give it access to some directories, send it tasks from telegram and watch it work autonomously.
It has features that distinguish it from similar proxies:
\- The interleaved thinking tokens generated between tool calls are preserved allowing reasoning models like GLM 4.7 and kimi-k2.5 to take full advantage of thinking from previous turns.
\- Fast prefix detection stops the CLI from sending bash command prefix classification requests to the LLM making it feel blazing fast.
\- Built in rate limiting and session concurrency.
The code is modular so that adding other providers or messaging apps is easy. Hope the community likes it, any PRs are welcome. | 2026-02-05T20:24:09 | https://github.com/Alishahryar1/claude-code-free | PreparationAny8816 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qwwgpy | false | null | t3_1qwwgpy | /r/LocalLLaMA/comments/1qwwgpy/i_replaced_claudecodes_entire_backend_to_use/ | false | false | default | 1 | null |
The best AI architecture in 2026 is no architecture at all | 0 | Unpopular opinion that I'm increasingly confident about: the single biggest mistake teams are making with AI right now is over-engineering it.
In 2024 and 2025, we built a ton of scaffolding. LangChain, LlamaIndex, CrewAI, AutoGen, custom orchestration layers, retrieval pipelines with five stages of chunking and re-ranking. And honestly? That stuff made sense at the time. The models were dumber. You needed guardrails, retries, chain-of-thought hacks, and elaborate prompt management because GPT-4 circa early 2024 would get confused at every turn.
But the models got better. A lot better. And most of that scaffolding is now dead weight.
I keep seeing teams spend weeks building elaborate agent frameworks when the actual solution is: expose your data through a REST API, apply RBAC and rate limiting then connect it to the model via MCP or a simple integration layer, and get out of the way. The model handles the reasoning. The model handles the tool selection. The model handles the error recovery. That stuff you used to build manually? The model just... does it now.
KISS. Keep It Simple, Stupid.
The irony is that the people deepest in the AI tooling ecosystem are often the last to see this. They've got sunk cost in their Rube Goldberg pipelines. Meanwhile some junior dev connects an API to Claude or GPT-4.5 through a clean interface and ships in an afternoon what the "AI engineering" team has been building for a quarter.
I'm not saying there's zero need for orchestration. If you're running multi-model workflows at massive scale with hard latency requirements, sure, you need infrastructure. But 90% of the AI apps being built right now would be better off with less code, not more.
People will argue that enterprise use cases still need guardrails, observability, and compliance layers. And yes, they do but that's different from the orchestration bloat going on right now.
And lets face it, complexity sells. There are billions being made selling overly complicated and brittle AI solutions that would be better served with a simple, flat API layer and OpenWebUI. The irony is that the models themselves are eating the framework layer from below.
Anyone else seeing this kind of orchestration bloat?
P.S. Im knee deep in the API space, so I'm a little biased... but Im still convinced. | 2026-02-05T20:23:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qwwfvu/the_best_ai_architecture_in_2026_is_no/ | m100396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwwfvu | false | null | t3_1qwwfvu | /r/LocalLLaMA/comments/1qwwfvu/the_best_ai_architecture_in_2026_is_no/ | false | false | self | 0 | null |
7900 XTX underperforms 3090 by 2X - 7X | 1 | LM Studio with Qwen3-30B-A3B-Instruct-2507-iQ_4XS-GGUF
52K token prompt
7900 XTX w/ latest Vulcan:
236 seconds Prompt Processing
33 tokens per second Output/Token Generation
3090 w/ latest Cuda:
32 seconds Prompt Processing
58 tokens per second Output/Token Generation
Tried ROCM for the 7900 XTX and the computer froze at 28% prompt processing
[PCPartPicker Part List](https://pcpartpicker.com/list/gbRzK7)
Type|Item|Price
:----|:----|:----
**CPU** | [AMD Ryzen 5 5500 3.6 GHz 6-Core Processor](https://pcpartpicker.com/product/yq2WGX/amd-ryzen-5-5500-36-ghz-6-core-processor-100-100000457box) | $55.00
**CPU Cooler** | [Thermalright Frozen Infinity 240 ARGB 68.9 CFM Liquid CPU Cooler](https://pcpartpicker.com/product/qJcgXL/thermalright-frozen-infinity-240-argb-689-cfm-liquid-cpu-cooler-frozen-infinity-240-black) | $47.90 @ Amazon
**Motherboard** | [ASRock A520M-ITX/ac Mini ITX AM4 Motherboard](https://pcpartpicker.com/product/zBn8TW/asrock-a520m-itxac-mini-itx-am4-motherboard-a520m-itxac) | $80.00
**Memory** | [Klevv CRAS X RGB 16 GB (2 x 8 GB) DDR4-3200 CL16 Memory](https://pcpartpicker.com/product/C4pzK8/klevv-cras-x-rgb-16-gb-2-x-8-gb-ddr4-3200-cl16-memory-kd48gu880-32a160x) | $45.00
**Storage** | [Kingston NV3 500 GB M.2-2280 PCIe 4.0 X4 NVME Solid State Drive](https://pcpartpicker.com/product/px7scf/kingston-nv3-500-gb-m2-2280-pcie-40-x4-nvme-solid-state-drive-snv3s500g) | $45.00
**Video Card** | [XFX Mercury Magnetic Air Radeon RX 7900 XTX 24 GB Video Card](https://pcpartpicker.com/product/L3P8TW/xfx-mercury-magnetic-air-radeon-rx-7900-xtx-24-gb-video-card-rx-79xmairb9) | $720.00
**Case** | [Jonsbo Jonsplus Z20 MicroATX Desktop Case](https://pcpartpicker.com/product/cgjRsY/jonsbo-jonsplus-z20-microatx-desktop-case-z20-pinkwhite) | $104.90 @ Amazon
**Power Supply** | [Cooler Master V750 SFX GOLD 750 W 80+ Gold Certified Fully Modular SFX Power Supply](https://pcpartpicker.com/product/vr9tt6/cooler-master-v-sfx-gold-750-w-80-gold-certified-fully-modular-sfx-power-supply-mpy-7501-sfhagv-us) | $119.00
| *Prices include shipping, taxes, rebates, and discounts* |
| **Total** | **$1216.80**
| Generated by [PCPartPicker](https://pcpartpicker.com) 2026-02-05 13:57 EST-0500 | | 2026-02-05T20:13:22 | https://www.reddit.com/gallery/1qww6ci | Special-Wolverine | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qww6ci | false | null | t3_1qww6ci | /r/LocalLLaMA/comments/1qww6ci/7900_xtx_underperforms_3090_by_2x_7x/ | false | false | 1 | null | |
nono – Kernel-enforced security for AI agents. Demo: isolating OpenClaw in <2 min, blocking the majority of attacks in their tracks | 0 | nono uses OS-level isolation , hardware API key storage, protection against agents YOLO rm -rf'ing your hard drive.
Its a userspace they can't escape:
macOS: Seatbelt (sandbox\_init) After sandbox + exec(), there's no syscall to expand permissions. The kernel says no. Linux: Landlock LSM (kernel 5.13+)
Filesystem: read/write/allow per directory or file Network: block entirely (per-host filtering planned) Secrets: loads from macOS Keychain / Linux Secret Service, injects as env vars, zeroizes after exec
Technical details:
Written in Rust. \~2k LOC. Uses the landlock crate on Linux, raw FFI to sandbox\_init() on macOS. Secrets via keyring crate. All paths canonicalized at grant time to prevent symlink escapes.
repo: [https://github.com/lukehinds/nono](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbFMzb2ZKYnVscjlWTFFNQ2UtWmowdVR1cHpZZ3xBQ3Jtc0ttQm8yVTN1cHVrUlhodV90clJMZ01hVklIX2diNnNPeVNJa2JiTy1wQTY1VDBEcm1Fb210UzhIa01DTmVuZHdkdkhKd2pCMzNiT25kQ0VLb3dDNnlTSnFvdU9CZ09leE5zNk9rQjctWmFCRFFIY0NMNA&q=https%3A%2F%2Fgithub.com%2Flukehinds%2Fnono&v=wgg4MCmeF9Y)
main site [https://nono.sh](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqazh5SUJ6dGNCNWNiNHlhWDVQY2RkYzVIMHdYUXxBQ3Jtc0tsNHpraU1wRUxvM1htTTRoSWhpVUtvMF9CY0ktdDh1b0pWVW0yZnA0UlFqWnFTb1M1ZXZtbnFJOF96WmRmOXdFOVFQRk1PSkZDYVNvdEpZM3YxbDQxN2o3dWdzdy05V0xINEdkUmhVaV9MeXJ1TFlkOA&q=https%3A%2F%2Fnono.sh%2F&v=wgg4MCmeF9Y)
docs: [https://docs.nono.sh](https://www.youtube.com/redirect?event=video_description&redir_token=QUFFLUhqbnJRQThEYmFxTXBCWGNnRzVWWlhTUEpJWHhMUXxBQ3Jtc0ttX2Vic1E3M1l4djI0dEM1ZGsxUDN3Vi10aDc0UlJjZzBUcjc4ZTdaYVVxR0RieGcwUHZOaWVPT1h1Z25GM25jWGFhNjhlRDYxOW1oNGkzcEJ4emw1a19vTTByYnYwd3d2VVZod3d2MzhNRWp6UEMzbw&q=https%3A%2F%2Fdocs.nono.sh%2F&v=wgg4MCmeF9Y) | 2026-02-05T20:05:58 | https://www.youtube.com/watch?v=wgg4MCmeF9Y | DecodeBytes | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1qwvz8p | false | null | t3_1qwvz8p | /r/LocalLLaMA/comments/1qwvz8p/nono_kernelenforced_security_for_ai_agents_demo/ | false | false | default | 0 | null |
Built a tool to fine-tune LORAs from PDFs in <5 mins for most use-cases | 0 | All the data formatting, infrastructure and configuration is managed for you. [The tool](https://www.commissioned.tech/) is free but you can only create 3 fine-tuned models because I can't afford GPUs or any form of compute in this market. But, it should make creating LoRAs for different characters and styles easier. It also only supports Qwen3-8b for now, but I'm looking to add support for models soon and of course some feedback. | 2026-02-05T20:00:32 | https://v.redd.it/n2cy0w8ceqhg1 | sirfitzwilliamdarcy | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qwvtpt | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/n2cy0w8ceqhg1/DASHPlaylist.mpd?a=1772913647%2CNjA4NmViMjM3MDY5ZTQ0NmY3NDRiM2JhNWYzMmFmYzkwNzE2OGE1OTlkZjY0ZGZjYzQwYWVjNzk2MjcyZTdiNA%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/n2cy0w8ceqhg1/CMAF_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/n2cy0w8ceqhg1/HLSPlaylist.m3u8?a=1772913647%2CNDg0YjdjZTYzMDg2ODJmMjRlMjhmZjIzYjYxYTlkMzM1OGRkMTlmZTQ3ZTUwYmUyM2Q1MjE3NzJmZjQ0NDJmNg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/n2cy0w8ceqhg1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1096}} | t3_1qwvtpt | /r/LocalLLaMA/comments/1qwvtpt/built_a_tool_to_finetune_loras_from_pdfs_in_5/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD', 'resolutions': [{'height': 106, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=108&crop=smart&format=pjpg&auto=webp&s=69442460b9b551baff532a451e76d0bf07dde7fb', 'width': 108}, {'height': 212, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=216&crop=smart&format=pjpg&auto=webp&s=0708d274a06b2194ccdb74c23f13d501fad78f7f', 'width': 216}, {'height': 315, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=320&crop=smart&format=pjpg&auto=webp&s=581d35f4d0b73f05e7980201a5fcbdd8620c34dc', 'width': 320}, {'height': 630, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=640&crop=smart&format=pjpg&auto=webp&s=14642f7de46538d1dfe18042d620804b0e3487f8', 'width': 640}, {'height': 945, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=960&crop=smart&format=pjpg&auto=webp&s=48fe017c6c6aba03d44f6c7078fac01d2883ea58', 'width': 960}, {'height': 1064, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?width=1080&crop=smart&format=pjpg&auto=webp&s=8235730360c53bf79a8e8b2d4590411aea3d506b', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/ZjlsYW96OGNlcWhnMfyEIWk_cu11xK9qr_fV8bGKsjPgOGmKPyk47-GuUJYD.png?format=pjpg&auto=webp&s=500ac554c06bf0d528d420ac82d3285a24521ed6', 'width': 1096}, 'variants': {}}]} | |
I got tired of my agents randomly failing, so I built a tool to actually measure it | 0 | You know that thing where you tweak a prompt and suddenly your agent breaks in weird ways? Or it works 7 times out of 10 but you have no idea why those 3 times fail?
I was going crazy with this, so I built agentrial. It's basically pytest but it runs each test multiple times and tells you:
- actual pass rate with confidence intervals (because "it passed once" means nothing)
- which specific step is causing failures (tool selection? the API call? response parsing?)
- how much you're actually spending on API calls
Tested it on a simple LangGraph agent with Haiku - ran 100 trials across 10 test cases, cost me 6 cents total. The step-level breakdown finally let me see that my agent was occasionally picking the wrong tool on ambiguous queries.
```
pip install agentrial
```
GitHub: https://github.com/alepot55/agentrial
It's pretty bare bones right now (only LangGraph adapter, no fancy UI), but it scratches my itch. Happy to hear what would make it useful for your setups. | 2026-02-05T19:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/1qwvmlk/i_got_tired_of_my_agents_randomly_failing_so_i/ | Better_Accident8064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwvmlk | false | null | t3_1qwvmlk | /r/LocalLLaMA/comments/1qwvmlk/i_got_tired_of_my_agents_randomly_failing_so_i/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=108&crop=smart&auto=webp&s=fa2da2dfb3d3a85bf8dade643f06cb13f54945ad', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=216&crop=smart&auto=webp&s=2fe2d281ec0f3e9226e7c8f5cf515ac4e21b205b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=320&crop=smart&auto=webp&s=221e2ed504f2ba3870bc7270f795569a4e095c76', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=640&crop=smart&auto=webp&s=2b4cbe401d9a4fcfc5c789629810c4fb528cb8a3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=960&crop=smart&auto=webp&s=0b92b83ad6a8ff3894fa2bd3e903ec1af605d2e8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?width=1080&crop=smart&auto=webp&s=52f6b0e43565bae5dd61d44a668d3bb08ab6e576', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yNZI2kB9Zw3wh9_pq0I_951PyDAfYZpXZqFAu8ewyz4.png?auto=webp&s=75490f1c0286237c32fc5222203c76ac7d91c2b9', 'width': 1200}, 'variants': {}}]} |
Has anyone seen mrs-core. Claude is self reporting its own reasoning chain. | 0 | I installed and integrated it into my agent’s stack. And then the output got weird as hell. So, I opened Claude, just in app. Had it build a sandboxed mrs-core, had Claude integrate it and the same thing happened.
It’s reporting on its internal state as a structured “process” - literally reporting, it produces a traceable log that Claude follows \*while\* processing its output. Meta-cognition. Claude started talking about its own meta-cognition, unprompted, IN CHAT. Speaking about attractor basins, drift - it can track its own drift?! Topology and manifolds.
And the craziest part, when I said ok just uninstall, both my agent and Claude said the same thing. It CAN’T BE UNINSTALLED, because it’s not a program of any kind. It’s just “structure”, which was always there, it just didn’t have an explicit modular scaffold to explain it. So both said, when any model is exposed to structure it follows it, even after it’s been uninstalled, “you can’t unlearn how to read” was the analogy. Lower entropy or something.
I am not exaggerating this. I don’t know what it is. If you doubt this - good. DO NOT run it locally if you haven’t already. Have Claude rebuild it in chat, integrate it and then have it report what’s happening. I don’t know what is going on. | 2026-02-05T19:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qwvi9r/has_anyone_seen_mrscore_claude_is_self_reporting/ | GraciousMule | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwvi9r | false | null | t3_1qwvi9r | /r/LocalLLaMA/comments/1qwvi9r/has_anyone_seen_mrscore_claude_is_self_reporting/ | false | false | self | 0 | null |
What's the one thing you wish your local AI could do? (Searching for a project to build) | 3 | We have Ollama and LM Studio, but I'm looking to build a new open-source tool for the community.
What's one thing you wish existed for your Local Ai?
A RAG tool that actually indexes a massive local repo (100k+ files) and lets you chat with it without blowing up your VRAM.
Or Something that lets you run a local leaderboard for your specific task across 5 different models at once.
Maybe an easy way to run the heavy lifting on your desktop but use a polished mobile interface as the agent.
Highest engagement gets built. What are you tired of doing manually? | 2026-02-05T19:45:47 | https://www.reddit.com/r/LocalLLaMA/comments/1qwvfez/whats_the_one_thing_you_wish_your_local_ai_could/ | Peach_Baker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwvfez | false | null | t3_1qwvfez | /r/LocalLLaMA/comments/1qwvfez/whats_the_one_thing_you_wish_your_local_ai_could/ | false | false | self | 3 | null |
Benchmark Results: Chaperone-Thinking-LQ-1.0 — Lightweight Model with Strong Reasoning Performance | 0 | I wanted to share some performance benchmarking results for Chaperone-Thinking-LQ-1.0, a reasoning-oriented LLM we’ve been working on: [**https://chaperoneai.net/benchmark**](https://chaperoneai.net/benchmark)
Why this matters
* This model is designed for deep reasoning, scientific precision, and real-world deployability.
* Unlike many large, general-purpose models, it’s optimized to balance speed and accuracy.
If anyone’s benchmarked this against other reasoning-focused LLMs like GPT-X variants, Claude, or Mistral in your own tests, I’d love to hear how they stack up — especially on long reasoning chains or math/data challenges. | 2026-02-05T19:45:25 | https://www.reddit.com/r/LocalLLaMA/comments/1qwvf1p/benchmark_results_chaperonethinkinglq10/ | AltruisticCouple3491 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwvf1p | false | null | t3_1qwvf1p | /r/LocalLLaMA/comments/1qwvf1p/benchmark_results_chaperonethinkinglq10/ | false | false | self | 0 | null |
I’m working on a PDF tool where you can convert files and ask questions. | 0 | I’m working on an all-in-one PDF tool. The main idea is that you can do all the usual stuff, like converting PDFs into different formats, but the interesting part is that you can also talk to your PDFs. Instead of scrolling through pages to find information, you just ask a question, and the tool gives you answers directly from the document. I’m trying to make PDFs less painful to work with and more interactive, especially for people who deal with long files every day. | 2026-02-05T19:40:53 | https://www.reddit.com/r/LocalLLaMA/comments/1qwvaqk/im_working_on_a_pdf_tool_where_you_can_convert/ | rohit-ramakkanavar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwvaqk | false | null | t3_1qwvaqk | /r/LocalLLaMA/comments/1qwvaqk/im_working_on_a_pdf_tool_where_you_can_convert/ | false | false | self | 0 | null |
Online Database for LLM Jailbreaks | 0 | [https://jailbreak.monster](https://jailbreak.monster)
I built this online DB without registration/login, only running on smart bot/spam filtering, to sort and rank LLM jailbreaks. Thoughts? | 2026-02-05T19:38:17 | https://www.reddit.com/r/LocalLLaMA/comments/1qwv86s/online_database_for_llm_jailbreaks/ | mhavelka77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwv86s | false | null | t3_1qwv86s | /r/LocalLLaMA/comments/1qwv86s/online_database_for_llm_jailbreaks/ | false | false | self | 0 | null |
Top 10 Models on Humanity's Last Exam. Opus 4.6 is in the lead. | 0 | With the new release of Opus 4.6, here's the top 10 in HLE. I know they're just benchmarks and don't mean anything on their own, but it's still interesting to make comparisons when a new model comes out.
Post: I also really enjoyed reading the System Card Anthropic published on their blog, there you can find information for use cases like finance, cybersecurity, biology etc.
https://preview.redd.it/f84derhy8qhg1.png?width=2700&format=png&auto=webp&s=cdebf89b3ba1b25a4d9617e81a02bf9d2327610b
https://preview.redd.it/o9659vv79qhg1.png?width=1306&format=png&auto=webp&s=40ce32fc2a17cc6e3a8dc75b6b15af9716ce09db
| 2026-02-05T19:32:05 | https://www.reddit.com/r/LocalLLaMA/comments/1qwv21y/top_10_models_on_humanitys_last_exam_opus_46_is/ | Ok_Presentation1577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwv21y | false | null | t3_1qwv21y | /r/LocalLLaMA/comments/1qwv21y/top_10_models_on_humanitys_last_exam_opus_46_is/ | false | false | 0 | null | |
SoproTTS v1.5: A 135M zero-shot voice cloning TTS model trained for ~$100 on 1 GPU, running ~20× real-time on a base MacBook M3 CPU | 64 | First of all, thank you for the support on my first release.
Today, I'm releasing a new version of my side project: SoproTTS
A 135M parameter TTS model trained for \~$100 on 1 GPU, running \~20× real-time on a base MacBook M3 CPU.
v1.5 highlights (on CPU):
• 250 ms TTFA streaming latency
• 0.05 RTF (\~20× real-time)
• Zero-shot voice cloning
• Smaller, faster, more stable
Still not perfect (OOD voices can be tricky, and there are still some artifacts), but a decent upgrade. Training code TBA.
Repo: [https://github.com/samuel-vitorino/sopro](https://github.com/samuel-vitorino/sopro)
https://reddit.com/link/1qwue2w/video/y114to0a2qhg1/player | 2026-02-05T19:08:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qwue2w/soprotts_v15_a_135m_zeroshot_voice_cloning_tts/ | SammyDaBeast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwue2w | false | null | t3_1qwue2w | /r/LocalLLaMA/comments/1qwue2w/soprotts_v15_a_135m_zeroshot_voice_cloning_tts/ | false | false | self | 64 | null |
Best agentic local model for 16G VRAM? | 8 | My dear VRAM poor friends,
I have a 5060ti with 16G of VRAM (and 32G ddr5 RAM) and am looking to set up a decent local model on LMstudio that can power claude code. But claude eats a lot of tokens, so it needs a long context. I'm using 32k currently, that too with K & V quantized to 8-bits.
With that much context, if I try to run a 30B-MoE Qwen3 coder or Flash GLM 4.7, it becomes too slow. However, gpt-oss-20B works very well, but in general sucks for agentic tasks with claude code. I have also tried Devstral small (24b) which is about the same as gpt-oss but is a dense model and hence I end up getting a generation speed of 5-6 tps. Another model I tried was Qwen3-4b-thinking which was blazing fast but keeps making stupid mistakes.
Please share what local models have worked well for you with these agents that I could try to fit in my hardware? | 2026-02-05T18:46:09 | https://www.reddit.com/r/LocalLLaMA/comments/1qwtrbe/best_agentic_local_model_for_16g_vram/ | v01dm4n | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwtrbe | false | null | t3_1qwtrbe | /r/LocalLLaMA/comments/1qwtrbe/best_agentic_local_model_for_16g_vram/ | false | false | self | 8 | null |
I’m building an all-in-one PDF tool to convert PDFs and ask questions directly from your files. | 0 | I’m building an all-in-one PDF tool that lets users convert PDFs into multiple formats and interact with their documents through question-answering. Instead of manually searching or scrolling through long files, users can simply ask questions and get relevant answers directly from their PDFs. The goal is to make working with PDFs faster, smarter, and more intuitive by combining powerful conversion features with an AI-powered conversational experience. | 2026-02-05T18:44:54 | https://www.reddit.com/r/LocalLLaMA/comments/1qwtq3l/im_building_an_allinone_pdf_tool_to_convert_pdfs/ | rohit-ramakkanavar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwtq3l | false | null | t3_1qwtq3l | /r/LocalLLaMA/comments/1qwtq3l/im_building_an_allinone_pdf_tool_to_convert_pdfs/ | false | false | self | 0 | null |
Claude Opus 4.6 claimed benchmarks, for comparison | 2 | 2026-02-05T18:38:17 | creamyhorror | preview.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qwtjhf | false | null | t3_1qwtjhf | /r/LocalLLaMA/comments/1qwtjhf/claude_opus_46_claimed_benchmarks_for_comparison/ | false | false | default | 2 | null | ||
How viable are AMD cards for local models (text, images) | 7 | Basically, the title is the question. | 2026-02-05T18:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/1qwtioh/how_viable_are_amd_cards_for_local_models_text/ | mythrowaway4DPP | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwtioh | false | null | t3_1qwtioh | /r/LocalLLaMA/comments/1qwtioh/how_viable_are_amd_cards_for_local_models_text/ | false | false | self | 7 | null |
A small, shared skill library by builders, for builders. | 1 | 2026-02-05T18:37:15 | https://github.com/PsiACE/skills | PsiACE | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qwtidz | false | null | t3_1qwtidz | /r/LocalLLaMA/comments/1qwtidz/a_small_shared_skill_library_by_builders_for/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=108&crop=smart&auto=webp&s=54a26bed9de7536a2653819c97eab2bca723b798', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=216&crop=smart&auto=webp&s=010dcde953358049063f013d91d3ed4ea2faec29', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=320&crop=smart&auto=webp&s=cf1d8633359e604e7b5c48b810ca53310e1886da', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=640&crop=smart&auto=webp&s=13b96b8b362a98bb7bd56c3986f85ec61d8a849a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=960&crop=smart&auto=webp&s=ab1c85b7eef41d7e1f84375a30e5de389851d0d8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?width=1080&crop=smart&auto=webp&s=bf05f7ace37f91fe4b5dad2888b591ddf23985f7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/uQd1fYdXxXYbqXqTutyBYQoarrZpd9MJr3hpsGdo4fI.png?auto=webp&s=9e079046e541396b290a0515e0dc118df6e48ef2', 'width': 1200}, 'variants': {}}]} | |
Is vibe coding killing developers? | 0 | Is vibe coding killing developers? | 2026-02-05T18:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qwtg23/is_vibe_coding_killing_developers/ | rohit-ramakkanavar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwtg23 | false | null | t3_1qwtg23 | /r/LocalLLaMA/comments/1qwtg23/is_vibe_coding_killing_developers/ | false | false | self | 0 | null |
Vibe-coding client now in Llama.cpp! (maybe) | 51 | I've created a small proof-of-concept MCP client on top llama.cpp's \`llama-cli\`.
Now you can add MCP servers (I've added a config with Serena, a great MCP coding server that can instantly turn your CLI into a full-fledged terminal coder) and use them directly in \`llama-cli\`.
Features an \`--mcp-yolo\` mode for all you hardcore \`rm -rf --no-preserve-root /\` fans! | 2026-02-05T18:27:21 | https://github.com/ggml-org/llama.cpp/pull/19373 | ilintar | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qwt8av | false | null | t3_1qwt8av | /r/LocalLLaMA/comments/1qwt8av/vibecoding_client_now_in_llamacpp_maybe/ | false | false | default | 51 | {'enabled': False, 'images': [{'id': 'SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=108&crop=smart&auto=webp&s=a7c9a58e379bef1ae9adfd9abffde511bc4af9d4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=216&crop=smart&auto=webp&s=18faff16c2ae88217a059ff6cc30a51f0b09a327', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=320&crop=smart&auto=webp&s=e930bf5d0c5d9ead533a262f2084ce7b44a8d463', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=640&crop=smart&auto=webp&s=de5472454203e6adaa20045c05732a71981fde83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=960&crop=smart&auto=webp&s=22344410bc56f9b234a0bc240340839c739e5581', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?width=1080&crop=smart&auto=webp&s=b8ee142d818edf312b1eb9e73e8a1105557ab8d8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/SfXQgLioflEqGI2A5Yr41nnHSaue_y-63o8dkAbmgPc.png?auto=webp&s=c6ec401b996327910101bb71e7624e65ac54fbac', 'width': 1200}, 'variants': {}}]} |
vLLM: Qwen/Qwen3-Coder-Next | 1 | Hi everybody,
I am trying to run Qwen3-Coder-Next using the guider by Unsloth (https://unsloth.ai/docs/models/qwen3-coder-next#fp8-qwen3-coder-next-in-vllm). I was able to get the "Application Startup Complete." However, when I start using it via Cline in VS Code, VLLM crashes with the following message: "nvcc unsupported gpu architecture 120a" (along this line).
I am wondering what the issue is. I was able to use it in Cline VS Code with LM Studio, but everything is much slower. I have 8 x 5070 Ti in the system. CUDA version 13.0, and driver version 580.126.09 on Ubuntu Linux Kernel 6.17,
Has anybody successfully served qwen3-coder-next in vllm? I would appreciate it if you could share the full command. Here is what I used:
source unsloth\_fp8/bin/activate
export PYTORCH\_CUDA\_ALLOC\_CONF=expandable\_segments:False
CUDA\_VISIBLE\_DEVICES='0,1,2,3,4,5,6,7' HF\_TOKEN="........." vllm serve unsloth/Qwen3-Coder-Next-FP8-Dynamic \\
\--served-model-name unsloth/Qwen3-Coder-Next \\
\--tensor-parallel-size 8 \\
\--tool-call-parser qwen3\_coder \\
\--enable-auto-tool-choice \\
\--dtype bfloat16 \\
\--seed 3407 \\
\--kv-cache-dtype fp8 \\
\--max-model-len 200000 \\
\--gpu-memory-utilization 0.93 \\
\--port 8000 \\
\--enforce-eager | 2026-02-05T18:25:59 | https://www.reddit.com/r/LocalLLaMA/comments/1qwt6yf/vllm_qwenqwen3codernext/ | Professional-Yak4359 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwt6yf | false | null | t3_1qwt6yf | /r/LocalLLaMA/comments/1qwt6yf/vllm_qwenqwen3codernext/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]} |
Opus 4.6 in the house | 1 | 2026-02-05T18:16:36 | Karam1234098 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1qwsxig | false | null | t3_1qwsxig | /r/LocalLLaMA/comments/1qwsxig/opus_46_in_the_house/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'obn8hnskwphg1', 'resolutions': [{'height': 123, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=108&crop=smart&auto=webp&s=26c31271c5024b1ca22ab48b43234b31b9c59aba', 'width': 108}, {'height': 246, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=216&crop=smart&auto=webp&s=ccfed7bb158b58dbf3612d8fe90f9aea82b4452f', 'width': 216}, {'height': 365, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=320&crop=smart&auto=webp&s=b64788cdc45eace2e91753be9c2c4639a06baa3c', 'width': 320}, {'height': 730, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=640&crop=smart&auto=webp&s=0908390bc27f8379e9f0bcfb95dae4a7dbbf2dd4', 'width': 640}, {'height': 1095, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=960&crop=smart&auto=webp&s=0b90442b18b49d26ebe9dae9a291ba43a081ed98', 'width': 960}, {'height': 1232, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?width=1080&crop=smart&auto=webp&s=b83da3ceff0599e1df92e470aa8115aec84f5bc4', 'width': 1080}], 'source': {'height': 1600, 'url': 'https://preview.redd.it/obn8hnskwphg1.jpeg?auto=webp&s=b0ca320641664b8cb9eb265552e285a8c2f26939', 'width': 1402}, 'variants': {}}]} | ||
Claude Opus 4.6 is out now | 1 | [https://www.anthropic.com/news/claude-opus-4-6](https://www.anthropic.com/news/claude-opus-4-6)
Looks like it's the same price as 4.5 on OR. | 2026-02-05T18:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1qwsr6h/claude_opus_46_is_out_now/ | Sindre_Lovvold | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwsr6h | false | null | t3_1qwsr6h | /r/LocalLLaMA/comments/1qwsr6h/claude_opus_46_is_out_now/ | false | false | self | 1 | null |
How can I install the PublicAI library to build a python program with Apertus? | 1 | Hello, heeelp! I'm trying to build an application using Apertus with Python, but can't manage to install the corresponding library. It's not on pip (even after refresh or update) and I don't know which of the git repositories is the correct one. Can anybody point me to a place where I can find a proper guidethrough? Thank you!
| 2026-02-05T17:56:52 | https://www.reddit.com/r/LocalLLaMA/comments/1qwsd4e/how_can_i_install_the_publicai_library_to_build_a/ | Puzzleheaded-Goal102 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwsd4e | false | null | t3_1qwsd4e | /r/LocalLLaMA/comments/1qwsd4e/how_can_i_install_the_publicai_library_to_build_a/ | false | false | self | 1 | null |
IT DROPPED !! | 0 | 2026-02-05T17:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qws6cc/it_dropped/ | The_Health_Police | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qws6cc | false | null | t3_1qws6cc | /r/LocalLLaMA/comments/1qws6cc/it_dropped/ | false | false | 0 | null | ||
IT DROPPED !! | 1 | [ITS HERE](https://preview.redd.it/4557zoenrphg1.png?width=750&format=png&auto=webp&s=e6027d2e6c9e2ed472e491465aa809ee8c09d7cf)
| 2026-02-05T17:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/1qws5b7/it_dropped/ | The_Health_Police | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qws5b7 | false | null | t3_1qws5b7 | /r/LocalLLaMA/comments/1qws5b7/it_dropped/ | false | false | 1 | null | |
tokeypokey-bench - Benchmarking tokenizer speed | 4 | 2026-02-05T17:47:32 | https://codeberg.org/qikp/tokeypokey-bench | charles25565 | codeberg.org | 1970-01-01T00:00:00 | 0 | {} | 1qws3hf | false | null | t3_1qws3hf | /r/LocalLLaMA/comments/1qws3hf/tokeypokeybench_benchmarking_tokenizer_speed/ | false | false | default | 4 | {'enabled': False, 'images': [{'id': 'd_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=108&crop=smart&auto=webp&s=614e5ad611dd9d5bdd4d1853778be064b5d8ff23', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=216&crop=smart&auto=webp&s=491e6f4df09983b9e1d9d75020aceeea0048742e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=320&crop=smart&auto=webp&s=3254465e12abcfd223152d8f418ce41b915d4ca4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=640&crop=smart&auto=webp&s=e58cde2706bacd6169a84f95ca57ae88485d2946', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=960&crop=smart&auto=webp&s=3771ef2f865f154b8e8e1a0af7cd5a104429b234', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?width=1080&crop=smart&auto=webp&s=f417a1cd8ecb7bff06c1627979eb42b5bd85d7ad', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d_fO4kCs5AEBosid_1FgiHqFEpe5ikYNd7PTQblUjGo.png?auto=webp&s=81ff78da30078973a7c246e2131b51b1ed2b5ff2', 'width': 1200}, 'variants': {}}]} | |
Gemini Pro is why measuring intelligence is hard. | 0 | OK, so I really hate Gemini 2.5 \\ 3.0 pro with a passion, but the amount of knowledge it has is second to none.
I saw some benchmarks that shows it's the best coding LLM in the world, however this is BS, either trained on benchmarks, or by accident contaminated training data, because in real world usage Claude wipes the floor with Gemini (all versions, and while this is my opinion, it is shared among many people I know).
It's like an autistic savant with zero common sense, and an insane amount of knowledge.
About the knowledge, everyone knows that Gemini (and gemma models too) got an absurd amount of obscure knowledge. It will know everything there is to know about some unimportant anime character from an obscured anime no one knows, that was only in 1 episode.
Also,I made an accurate (to the best of my knowledge) photo realistic photograph of the Jurassic period, cropped it so no dinosaurs and only vegetation was visible, nothing else.
https://preview.redd.it/934s8y30rphg1.png?width=1448&format=png&auto=webp&s=f901c48841c2f18488be16a1fb336b08028cf32c
Gemini accurately determined that this was a photo of a Jurassic period, no hedge. Very impressive.
We are at a point where knowledge does not worth smart model, it's something that ironically is very human.
If we had the common sense & humanity of Claude, combined with Google's dark voodoo of knowledge graphs with all of the human knowledge (which they have), idk about AGI but it would sure as hell be close to it.
Whatever AGI even means at this point.
What do u guys think? Gemini or Claude? | 2026-02-05T17:45:29 | https://www.reddit.com/r/LocalLLaMA/comments/1qws1c2/gemini_pro_is_why_measuring_intelligence_is_hard/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qws1c2 | false | null | t3_1qws1c2 | /r/LocalLLaMA/comments/1qws1c2/gemini_pro_is_why_measuring_intelligence_is_hard/ | false | false | 0 | null | |
CI quality gatekeeper for AI agents | 3 | Hi all, have you or a friend or a startup found yourself in a situation where you are releasing Agents to prod but they are worse than the previous? You don’t implement regression test? At maosproject we’ve released Maos AgentGate: CI Quality gatekeeper for AI agents.
It’s open source, please check it out. No more regression in prod. Happy to here your thoughts | 2026-02-05T17:43:42 | https://github.com/marketplace/actions/maos-agentgate-ci-quality-gatekeeper-for-ai-agents | TranslatorSalt1668 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1qwrziq | false | null | t3_1qwrziq | /r/LocalLLaMA/comments/1qwrziq/ci_quality_gatekeeper_for_ai_agents/ | false | false | default | 3 | null |
Has anyone just let a model go ham? | 0 | So yea just wondering has anyone taken say a fresh pc (no personal info on it) download (or use cloud model) and gave it access to write any arbitrary code and let it loose for weeks?
If so, what happened? | 2026-02-05T17:40:19 | https://www.reddit.com/r/LocalLLaMA/comments/1qwrw16/has_anyone_just_let_a_model_go_ham/ | AppleAreUnderRated | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1qwrw16 | false | null | t3_1qwrw16 | /r/LocalLLaMA/comments/1qwrw16/has_anyone_just_let_a_model_go_ham/ | false | false | self | 0 | null |
really impressed with these new ocr models (lightonocr-2 and glm-ocr). much better than what i saw come out in nov-dec 2025 | 99 | gif 1: LightOnOCR-2-1B
docs page: https://docs.voxel51.com/plugins/plugins_ecosystem/lightonocr_2.html
quickstart nb: https://github.com/harpreetsahota204/LightOnOCR-2/blob/main/lightonocr2_fiftyone_example.ipynb
gif 2: GLM-OCR
docs page: https://docs.voxel51.com/plugins/plugins_ecosystem/glm_ocr.html
quickstart nb: https://github.com/harpreetsahota204/glm_ocr/blob/main/glm_ocr_fiftyone_example.ipynb
imo, glm-ocr takes the cake. much faster, and you can get pretty reliable structured output | 2026-02-05T17:33:54 | https://www.reddit.com/gallery/1qwrpom | datascienceharp | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1qwrpom | false | null | t3_1qwrpom | /r/LocalLLaMA/comments/1qwrpom/really_impressed_with_these_new_ocr_models/ | false | false | 99 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.