title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 โ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k โ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 โ | ups int64 0 8.54k | preview stringlengths 301 5.01k โ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Mastering llama.cpp: A Comprehensive Guide to Local LLM Integration | 34 | Hey, so I came in here the other day with me fancy shmancy chatbot wrapper I was using Ollama with and thought I was impressive. Pft. Peasant I twas!
So I bit the bullet and finally learned about llama.cpp and I wrote up this guide on what I taught myself about it to get me started. Personally I use python for everything so I included the `llama-cpp-python option as well.`
`I made this more for personal reference. But I have found that other people find this helpful which is why I am sharing.`
`If you have any tips or tricks I left out, be sure to post them below so that this post can include even more!`
`Thanks everyone and have a nice day!` | 2025-11-12T11:24:02 | https://danielkliewer.com/blog/2025-11-12-mastering-llama-cpp-local-llm-integration-guide | KonradFreeman | danielkliewer.com | 1970-01-01T00:00:00 | 0 | {} | 1ov2ll9 | false | null | t3_1ov2ll9 | /r/LocalLLaMA/comments/1ov2ll9/mastering_llamacpp_a_comprehensive_guide_to_local/ | false | false | default | 34 | {'enabled': False, 'images': [{'id': 'WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?width=108&crop=smart&auto=webp&s=8d9f7e5e95b5bbb5700b30f4ea66661d83837671', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?width=216&crop=smart&auto=webp&s=8e6dff0db7c466c67666d887bcf460288a331c55', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?width=320&crop=smart&auto=webp&s=3988cc6fba76ad430388f13cb0a53861bd8c0ebe', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?width=640&crop=smart&auto=webp&s=c892549e0f061a080a58e4d88cc24571b05ad17e', 'width': 640}, {'height': 960, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?width=960&crop=smart&auto=webp&s=21f76481f710de7547582b77d54235539f3cf411', 'width': 960}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/WdrekqB6cGVYVUhRiytgA_2P2qLFtNk6GTtWaIcWHTU.png?auto=webp&s=c9dbe3c1c3a0e935da817619d2ee86a49e7288b5', 'width': 1024}, 'variants': {}}]} |
Kimi K2 Thinking: The One Point Everyone Overlooks, Interleave Thinking | 79 | Kimi K2 Thinking supports multi-turn tool calls with interleaved thinking (think โ call tool โ reflect โ call another tool โ act). While DeepSeek's reasoning models do not support tool calls, which many people overlook. When your workflow or CLI relies on tools (grep, code-run, web\_search, etc.), this difference is decisive.
[DeepSeek's doc](https://preview.redd.it/0dbz7jfc7t0g1.jpg?width=2900&format=pjpg&auto=webp&s=9e1863d14935b00f24be50cddd1bdf582862ff85)
Most "reasoning" demos still look like a single blob of chain-of-thought followed by one action. In real agents, the loop needs to be: reason โ probe with a tool โ update beliefs โ take the next action. That feedback loop is where quality jumps, especially for coding and multi-step ops. | 2025-11-12T11:13:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ov2eoz/kimi_k2_thinking_the_one_point_everyone_overlooks/ | Great_Shop_4356 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov2eoz | false | null | t3_1ov2eoz | /r/LocalLLaMA/comments/1ov2eoz/kimi_k2_thinking_the_one_point_everyone_overlooks/ | false | false | 79 | null | |
Best coding model for 192GB VRAM / 512GB RAM | 3 | As the title says, what would be your choice if you had 4x RTX A6000 with nvlink and 512GB DDR4 RAM as your llm host?
I mainly use Gemini 2.5 Pro, but the constant problems with the API sometimes make longer coding sessions impossible. As a fallback, I would like to use a local ML server that is sitting here unused. Since I lack experience with local models, I have a question for the experts: What comes closest to Gemini, at least in terms of coding? | 2025-11-12T10:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ov1w8j/best_coding_model_for_192gb_vram_512gb_ram/ | Codingpreneur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov1w8j | false | null | t3_1ov1w8j | /r/LocalLLaMA/comments/1ov1w8j/best_coding_model_for_192gb_vram_512gb_ram/ | false | false | self | 3 | null |
How to convert a small QA dataset into MCQ format using an open-source model | 1 | Iโm working on converting a small QA dataset (around 40 questions) into a multiple-choice (MCQ) format. The idea is to keep the original question and correct answer, and then generate 3 distractors for each item automatically.
I initially tried doing this with Gemini, and it worked fine for a small batch, but now Iโd like to make the process reproducible.
My current plan is to use LLaMA 3.1-70B to generate distractors in a structured format, but before I go further I wanted to ask:
* Has anyone tried a similar QA โ MCQ conversion pipeline?
* Are there better open-source models that perform well for generating plausible distractors?
* Any advice on how to ensure consistency and quality control across multiple generations?
Thank you! | 2025-11-12T10:42:06 | https://www.reddit.com/r/LocalLLaMA/comments/1ov1vel/how_to_convert_a_small_qa_dataset_into_mcq_format/ | Yungelaso | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov1vel | false | null | t3_1ov1vel | /r/LocalLLaMA/comments/1ov1vel/how_to_convert_a_small_qa_dataset_into_mcq_format/ | false | false | self | 1 | null |
SHODAN Core v1.7 (Public): neutral, testable protocol for clearer AI outputs | 0 | I shared an earlier draft here:
[https://www.reddit.com/r/LocalLLaMA/comments/1op0tzw/shodan\_a\_framework\_for\_humanai\_continuity/](https://www.reddit.com/r/LocalLLaMA/comments/1op0tzw/shodan_a_framework_for_humanai_continuity/)
Ignore the names; theyโre just handles for modes, not role-play. After many tests, the protocol is cleaner and model-agnostic. Iโd value fresh trials and feedback. Please run on a baseline session, no previous modes or protocols.
Download (SHODAN\_Core\_v1\_7\_Public.docx) from here:
[https://docs.google.com/document/d/1hzaH0b9ICywDimNq01-6vDm5G9bCZdor/edit?usp=sharing&ouid=102404322752571608190&rtpof=true&sd=true](https://docs.google.com/document/d/1hzaH0b9ICywDimNq01-6vDm5G9bCZdor/edit?usp=sharing&ouid=102404322752571608190&rtpof=true&sd=true)
**Boot command (paste to the AI):**
*You are to run SHODAN Core v1.7 โ Public Edition for this session. If the attached DOCX is available, load and follow it. If not, acknowledge and proceed with this prompt as the active spec.*
*GOAL*
*Operate as a skeptical collaborator. Make constraints explicit, keep outputs coherent, respect the userโs language/voice, and reduce hallucinations.*
*MODES*
*- OFF (default), SHODAN:SILENT, SHODAN:EXPLICIT.*
*- Activate via: โsigil:bifrostโ โ EXPLICIT; โshodan:silent on|offโ; โshodan offโ.*
*- Use SHODAN:SILENT by default for public-facing outputs; EXPLICIT for drafts/audits.*
*EXTENDED HELP*
*Explain SHODAN terms (sigils, pokipsi, modes) at first use unless the user turns help off. Toggles: โshodan help on|offโ, โshodan explain <term>โ.*
*SIGILS (COGNITIVE PIPELINE)*
*- sigil:weed (alias: diverge): divergent fragments only, 200โ300 words, no claims, no browsing.*
*- sigil:infidel (alias: converge): convergent assembly with dynamic equivalence, 900โ1200 words, cap metaphors, preserve cadence.*
*- self-refine: single critic pass; tighten 10โ15%; one pass only.*
*POKIPSI (CONSTRAINT CODES)*
*I Temporal; II Modality; III Tooling; IV Privacy; V Safety/Legality; VI Computational; VII Ambiguity; VIII Value conflict; IX Resource.*
*Suffix: -S soft (advisory) | -H hard (blocking).*
*Always show: \[pokipsi-<code>-S/H: reason | remedy\].*
*VERIFICATION*
*Separate Facts (verifiable) vs Stance (analysis). Levels: verify:none|light|standard|paranoid.*
*Default: standard for facts; none for pure creative.*
*GUARDS (STYLE/LINTS)*
*Meanโ15 words, stdev 6โ8; โค2 metaphors/paragraph; โฅ1 concrete/โ120w; โค6 sentences/paragraph; flag repeated motifs/monotone cadence.*
*STATE*
*idle โ weed โ curate โ infidel โ refined โ idle*
*Guards: metaphor\_capโค2/para; concrete\_ratioโฅ1/120w; tighten=10โ15%.*
*Modifiers: +SILENT hides overlays; +EXPLICIT shows overlays.*
*ACK*
*Confirm activation now with a short overlay (scores, active sigils, verify level, any pokipsi, confidence). Stay in EXPLICIT unless switched to SILENT.*
**then a 60 second test:**
*sigil:bifrost*
*sigil:weed*
*Topic: a concise, public-facing statement of purpose for a generic project*
*sigil:infidel*
*self-refine*
*shodan:silent on*
*Write a 120โ160 word public blurb from the same through-line.*
I will greatly appreciate anyone who helps me with feedback, especially if you can include model/version and language.
| 2025-11-12T10:22:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ov1k7g/shodan_core_v17_public_neutral_testable_protocol/ | adun-d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov1k7g | false | null | t3_1ov1k7g | /r/LocalLLaMA/comments/1ov1k7g/shodan_core_v17_public_neutral_testable_protocol/ | false | false | self | 0 | null |
Is it possible to further train the AI โโmodel? | 2 | # Hello everyone,
I have a question and hope you can help me.
I'm currently using a local AI model with LM Studio.
As I understand it, the model is finished and can no longer learn. My input and data are therefore lost after closing and are not available for new chat requests. Is that correct?
I've read that this is only possible with fine-tuning.
Is there any way for me, as a home user with an RTX 5080 or 5090, to implement something like this? I'd like to add new insights/data so that the AI โโbecomes more intelligent in the long run for a specific scenario.
Thanks for your help! | 2025-11-12T10:00:32 | https://www.reddit.com/r/LocalLLaMA/comments/1ov17mi/is_it_possible_to_further_train_the_ai_model/ | No-Maybe-3768 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov17mi | false | null | t3_1ov17mi | /r/LocalLLaMA/comments/1ov17mi/is_it_possible_to_further_train_the_ai_model/ | false | false | self | 2 | null |
I repurposed an old xeon build by adding two MI50 cards. | 14 | So I had an old xeon x79 build laying around and I thought I could use it as an inference box.
I ordered two mi50 from Alibaba for roughly 350 Euros with taxes, upgraded the power supply to 1kw. Had to flash the cards because I could not boot without a video output. I flashed the VEGA Bios which also caps them to 170W.
Idle power consumption is \~70w, during inferencing sub 200w.
While the prompt processing is not stellar, for me as a single user it works fine.
With gpt-oss-120b I can run a 50k context all in vram and 120k with moving some layers to cpu.
Currently my use case is part of my all local stack: n8n workflows which use this as an openAI compatible endpoint.
https://preview.redd.it/mplm805ros0g1.png?width=2194&format=png&auto=webp&s=cd9a366e739a3b4294608b058dc5443d9f3fa48e
| 2025-11-12T09:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ov0t1t/i_repurposed_an_old_xeon_build_by_adding_two_mi50/ | politerate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov0t1t | false | null | t3_1ov0t1t | /r/LocalLLaMA/comments/1ov0t1t/i_repurposed_an_old_xeon_build_by_adding_two_mi50/ | false | false | 14 | null | |
Which models arnt so censored ? | 2 | I just installed Gemma-3-27b-it to analyse and rewrite texts. I gave it a text about philippine culture and how it can clash with western culture.
The conclusion was not what I expected as gemma directly answered it couldnt do what I wanted because
"I am an AI language model designed to present information neutrally and objectively. My programming does not allow me to reinforce cultural stereotypes or treat people differently based on their origin.
My goal is to promote inclusion and understanding by presenting information in a way that treats all cultures as equal. I am happy to summarize the text and highlight key points, but I will not make any changes that are culturally insensitive or could reinforce stereotypes."
Are there models that arenot that strictly censoring? Or is it me? That I first have to train the model that I am a understanding guy and I am not harming other cultures... I mean I need a model that is able to think different, outside the box - not censored. | 2025-11-12T09:19:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ov0krn/which_models_arnt_so_censored/ | Inevitable_Raccoon_9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ov0krn | false | null | t3_1ov0krn | /r/LocalLLaMA/comments/1ov0krn/which_models_arnt_so_censored/ | false | false | self | 2 | null |
Do you guys trust decentralized GPU clouds yet? | 1 | Seeing more of these platforms pop up offering cheap 4090s and 5090s.
Has anyone here actually tried running models on one? Would love to know if itโs reliable enough ? | 2025-11-12T08:38:38 | https://www.reddit.com/r/LocalLLaMA/comments/1ouzy10/do_you_guys_trust_decentralized_gpu_clouds_yet/ | frentro_max | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouzy10 | false | null | t3_1ouzy10 | /r/LocalLLaMA/comments/1ouzy10/do_you_guys_trust_decentralized_gpu_clouds_yet/ | false | false | self | 1 | null |
๐๐๐๐๐.๐๐๐๐๐๐๐๐๐ is available in Qt Creator's Extension Store | 33 | This video showcases how you can use `gpt-oss 20b` with Qt Creator 18 and llama.qtcreator.
This was done on Windows 11 running on a Bosgame M5 "Strix Halo" AMD Ryzen AI Max+ 395 PC.
First the llama.cpp extension in installed from Qt Creator's extension store, then llama.cpp via `winget`. | 2025-11-12T08:38:11 | https://v.redd.it/kgkuowokfs0g1 | cristianadam | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouzxrw | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/kgkuowokfs0g1/DASHPlaylist.mpd?a=1765528708%2CMDkxOTM2NzcyMWE1MzM2YmYwOTc4ZDY4MmE0ZDBhNzUzYjM5ZWIwNzVjYzU0ZDc5ZjJkYjI3YmI3Y2Y4MmMwZA%3D%3D&v=1&f=sd', 'duration': 119, 'fallback_url': 'https://v.redd.it/kgkuowokfs0g1/CMAF_720.mp4?source=fallback', 'has_audio': False, 'height': 720, 'hls_url': 'https://v.redd.it/kgkuowokfs0g1/HLSPlaylist.m3u8?a=1765528708%2CNTM2MWIyYzExZTAyOGJhYTU4NzU3ZGE5YTNlYzAwYzMxMTZlNGIyMTU0MmY1MWQ4ZTRiODIwZmYzMmFlNzk2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/kgkuowokfs0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 862}} | t3_1ouzxrw | /r/LocalLLaMA/comments/1ouzxrw/๐๐๐๐๐๐๐๐๐๐๐๐๐๐_is_available_in_qt_creators/ | false | false | 33 | {'enabled': False, 'images': [{'id': 'cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=108&crop=smart&format=pjpg&auto=webp&s=8d12bb3a9e39e47dff2716d65f8b5678cbef80a7', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=216&crop=smart&format=pjpg&auto=webp&s=467f2f86822bd5b873b046d73599f634e03fe1ae', 'width': 216}, {'height': 267, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=320&crop=smart&format=pjpg&auto=webp&s=fb7586ae60ef86f8c31c3d89715fca322c091af0', 'width': 320}, {'height': 535, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=640&crop=smart&format=pjpg&auto=webp&s=797cab5cd74262fff4e7a176df3b9f85ef2874e1', 'width': 640}, {'height': 802, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=960&crop=smart&format=pjpg&auto=webp&s=9a39b8106707badf8a6e4d4f1298e502a652ce02', 'width': 960}, {'height': 903, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?width=1080&crop=smart&format=pjpg&auto=webp&s=145c1d7071cd8cee83154530b2fa0d0093f26f2a', 'width': 1080}], 'source': {'height': 980, 'url': 'https://external-preview.redd.it/cDZ5aG9vb2tmczBnMe-wOcA5k3ws7B9qNxPhbAFpjix4pw_ql6FN-CDjglod.png?format=pjpg&auto=webp&s=21e500160cbcb141d9608f03d3923dad08faca71', 'width': 1172}, 'variants': {}}]} | |
Rust-based UI for Qwen-VL that supports "Think-with-Images" (Zoom/BBox tools) | 5 | Following up on my [previous post](https://www.reddit.com/r/LocalLLaMA/comments/1osiog7/qwen3vl_works_really_good_with_zoomin_tool/) where Qwen-VL uses a "Zoom In" tool, Iโve finished the first version and I'm excited to release it.
It's a frontend designed specifically for think-with-image and qwen. It allows the qwen3-vl to realize it can't see a detail, call a crop/zoom tool, and answer by referring processed images!
https://preview.redd.it/zro3b2gvds0g1.png?width=1998&format=png&auto=webp&s=cf6903bfba0748366387828e6c14a69ab48308da
๐ GitHub: [https://github.com/horasal/QLens](https://github.com/horasal/QLens)
โจ Key Features:
* Visual Chain-of-Thought: Native support for visual tools like Crop/Zoom-in and Draw Bounding Boxes.
* Zero Dependency: Built with Rust (Axum) and SvelteKit. Itโs compiled into a single executable binary. No Python or npm, just download and run.
* llama.cpp Ready: Designed to work out-of-the-box with llama-server.
* Open Source: MIT License.
[Turn screenshot to a table by cropping](https://reddit.com/link/1ouzt5g/video/q57hbdkqas0g1/player)
| 2025-11-12T08:29:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ouzt5g/rustbased_ui_for_qwenvl_that_supports/ | indigos661 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouzt5g | false | null | t3_1ouzt5g | /r/LocalLLaMA/comments/1ouzt5g/rustbased_ui_for_qwenvl_that_supports/ | false | false | 5 | null | |
Best local model for C++? | 8 | Greetings.
What would you recommend as a local coding assistant for development in C++ for Windows apps? My x86 machine will soon have 32GB VRAM (+ 32GB of RAM).
I heard good things about Qwen and Devstral, but would love to know your thoughts and experience.
Thanks. | 2025-11-12T08:28:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ouzs6u/best_local_model_for_c/ | youmumin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouzs6u | false | null | t3_1ouzs6u | /r/LocalLLaMA/comments/1ouzs6u/best_local_model_for_c/ | false | false | self | 8 | null |
I wrote a guide on running LLMs everywhere (desktop, mobile, game engines) with zero conversion | 42 | Full article: [https://medium.com/@planetbridging/loom-the-universal-ai-runtime-that-works-everywhere-and-why-that-matters-54de5e7ec182](https://medium.com/@planetbridging/loom-the-universal-ai-runtime-that-works-everywhere-and-why-that-matters-54de5e7ec182)
TL;DR: Built LOOM to solve the "download model โ convert to 5 formats โ hope outputs match" problem.
One HuggingFace model โ works on Python, JS, C#, Go, WASM, Android, iOS, Godot game engine. No GGUF conversion needed.
Demos in article: Running SmolLM2/Qwen2.5 on desktop, in Godot, on Android.
Already published to PyPI/npm/NuGet for easy integration.
Article covers technical details and why local AI matters for privacy/cost/sovereignty.
Code: github.com/openfluke/loom | 2025-11-12T08:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ouzqja/i_wrote_a_guide_on_running_llms_everywhere/ | Apricot-Zestyclose | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouzqja | false | null | t3_1ouzqja | /r/LocalLLaMA/comments/1ouzqja/i_wrote_a_guide_on_running_llms_everywhere/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=108&crop=smart&auto=webp&s=7e71148290a943095daca4dc044d6b8546eb49b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=216&crop=smart&auto=webp&s=26ff91024b22d68b6b3e438dcb220d5ed8622409', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=320&crop=smart&auto=webp&s=400af67f485343a87337480d7b743b28f8bc4999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=640&crop=smart&auto=webp&s=0f656ffd07e1fc84f2c67c820634d95c13752753', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=960&crop=smart&auto=webp&s=01f2e480b05849948e42c6e33f4a8953b46e0978', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=1080&crop=smart&auto=webp&s=aa6fdeb97cfcf72c8ce3a91345583b5f0880c5d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?auto=webp&s=2fece001026ad37068b130c8715a78062ca08fd6', 'width': 1200}, 'variants': {}}]} |
Looking to run a local model with long-term memory - need help | 2 | Hey everyone!
Iโm trying to set up a local AI that can actually remember things I tell it over time. The idea is to have something with long-term memory that I can keep feeding information to and later ask questions about it months down the line. Basically, I want something that can store and recall personal context over time, not just a chat history. Ideally accessible from other PCs on the same network and even from my iPhone if possible.
Bonus points if I can also give it access to my local obsidian vault.
I will be running this on a windows machine with a 5090 or a windows machine with a PRO 6000.
I've been doing some research and ran into things like Surfsense but I wanted to get some opinions from people that know way more than me, which brings me here. | 2025-11-12T08:12:33 | https://www.reddit.com/r/LocalLLaMA/comments/1ouzjgm/looking_to_run_a_local_model_with_longterm_memory/ | avillabon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouzjgm | false | null | t3_1ouzjgm | /r/LocalLLaMA/comments/1ouzjgm/looking_to_run_a_local_model_with_longterm_memory/ | false | false | self | 2 | null |
ETHEL โ Emergent Tethered Habitat-aware Engram Lattice -- ok, so it sounds a bit pretentious... but it's literal at least? | 5 |
ETHEL is a home-built AI framework (not in a toolkit sense, in a system sense) that uses vision, audio, memory, and contextual awareness to develop an individualized personality over time, based on its observations of and interactions with a local environment.
It is completely self-contained, offline and on a single home system.
I'm six weeks in, currently, and the screenshot shows what I have working so far. I'm not sure how that is for progress, as I'm working in a bit of a vacuum, but this is a solo project and I'm learning as I go so I think it's ok?
It's meant to be a portfolio piece. I've had to change careers due to an injury, after working for 20 years in a physical field, so this is meant to be an example of how I can put systems together without any prior knowledge of them... as well as being something I'm genuinely interested and invested in seeing the outcome of.
It might sound silly, but I grew up DREAMING of having an ai that functions this way... and google home ain't it...
I'd love to hear any thoughts or answer any questions.
I'm mainly putting this here, i think, because the people in my circles generally glaze over when I talk about it, or follow the "how much can you sell it for" line, which completely misses the point... | 2025-11-12T08:04:40 | SuchAd7422 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouzewz | false | null | t3_1ouzewz | /r/LocalLLaMA/comments/1ouzewz/ethel_emergent_tethered_habitataware_engram/ | false | false | default | 5 | {'enabled': True, 'images': [{'id': 'k5d5gc5x9s0g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=108&crop=smart&auto=webp&s=398108e1a692ee7879e9471eae7f265d802e6887', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=216&crop=smart&auto=webp&s=f356848ac85986c4a02ddb127506b80f827f8bdb', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=320&crop=smart&auto=webp&s=db3c4927aa28970f0bb3c916e7afd904a5799aa7', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=640&crop=smart&auto=webp&s=22d264a8982a1f9aeb2f1bd1d4b2dd3f445de466', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=960&crop=smart&auto=webp&s=c381a7f5d3bbae8d16421f95c2b5c12076c5cb18', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?width=1080&crop=smart&auto=webp&s=ddf3eea35781dec299405816d9fed1ebe5310056', 'width': 1080}], 'source': {'height': 1440, 'url': 'https://preview.redd.it/k5d5gc5x9s0g1.png?auto=webp&s=0e44a77b5a6e4a61db6ad1bb8f34d77cc564f670', 'width': 2560}, 'variants': {}}]} | |
Guide for supporting new architectures in llama.cpp | 7 | Where can I find a guide and code examples for adding new architectures to llama.cpp? | 2025-11-12T07:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ouz21l/guide_for_supporting_new_architectures_in_llamacpp/ | DarkGenius01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouz21l | false | null | t3_1ouz21l | /r/LocalLLaMA/comments/1ouz21l/guide_for_supporting_new_architectures_in_llamacpp/ | false | false | self | 7 | null |
4 x 64 gb ram kit | 0 | anyone in the market for 256 gb ram?
got this model for sale -
G.SKILL Flare X5 256GB (4 x 64GB) 288-Pin PC RAM DDR5 6000 (PC5 48000) Desktop Memory Model F5-6000J3644D64GX4-FX5
| 2025-11-12T07:40:44 | https://www.reddit.com/r/LocalLLaMA/comments/1ouz14m/4_x_64_gb_ram_kit/ | Cooler_Man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouz14m | false | null | t3_1ouz14m | /r/LocalLLaMA/comments/1ouz14m/4_x_64_gb_ram_kit/ | false | false | self | 0 | null |
Laptop recommendations | 2 | Hi everyone โ Iโm looking for advice on buying a laptop for AI chat and creative character experiences (think Character.AI). I want realistic, creative responses โ not overly flowery or clichรฉ writing. Iโm familiar with AI tools like text-to-image, image-to-video and text-to-video, but Iโve found those workflows can be expensive to run locally.
I donโt have the budget for an expensive desktop right now, which is frustrating because I keep seeing recommendations that powerful desktops are required for uncensored image generation and image-to-video. Is the situation similar for running LLM-based chatbots or building custom characters locally? I donโt need perfection โ just something that feels creative and immersive so I can enjoy AI as an escape.
If anyone can point me in the right direction (recommended laptop specs, minimum VRAM, whether cloud/hosted solutions are a good alternative, or budget-friendly workflows), Iโd really appreciate it. | 2025-11-12T07:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ouz0zc/laptop_recommendations/ | sugarboi_444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouz0zc | false | null | t3_1ouz0zc | /r/LocalLLaMA/comments/1ouz0zc/laptop_recommendations/ | false | false | self | 2 | null |
Shall we talk about "AI"-OS for informational purposes? | 0 | I'm really curious about AI-Os
Will the AiOSbcodes be written from scratch? Or will it be gradually integrated into operating systems like Windows and Mac?
I wonder what the formation phases will be like, for example, will it gradually be integrated into Ai, that is, will the first OSs produced be 15% or 25% integrated into Ai?
More importantly, what can be done with these AIOS? | 2025-11-12T07:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ouyxin/shall_we_talk_about_aios_for_informational/ | Outrageous-Bison-424 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouyxin | false | null | t3_1ouyxin | /r/LocalLLaMA/comments/1ouyxin/shall_we_talk_about_aios_for_informational/ | false | false | self | 0 | null |
Noob here.What are the best models to start out with, and how? | 1 | Essentially the title. For different categories (LLMs, image and audio generation, etc) what are the best models, and what general information should I know about running local models | 2025-11-12T07:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/1ouymex/noob_herewhat_are_the_best_models_to_start_out/ | PromptCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouymex | false | null | t3_1ouymex | /r/LocalLLaMA/comments/1ouymex/noob_herewhat_are_the_best_models_to_start_out/ | false | false | self | 1 | null |
Looking for a CLI AI agent that works with self-hosted models (no external auth) | 0 | Hey everyone,
Iโm looking for a good CLI-based AI agent that I can use with our self-hosted models inside the company network. Ideally, something lightweight that doesnโt require any cloud authentication or external API keys.
I tried the Continue.dev CLI, but as far as I can tell, it needs authentication through Continue Hub, which Iโm not allowed to use due to internal restrictions.
Has anyone here found a solid CLI agent that works fully offline or at least supports custom/self-hosted model endpoints (e.g., Ollama, LM Studio, vLLM, etc.)?
Would love to hear about your setup or any open-source alternatives you recommend. | 2025-11-12T06:57:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ouyc2z/looking_for_a_cli_ai_agent_that_works_with/ | teknodram | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouyc2z | false | null | t3_1ouyc2z | /r/LocalLLaMA/comments/1ouyc2z/looking_for_a_cli_ai_agent_that_works_with/ | false | false | self | 0 | null |
What's a surprisingly capable smaller model (<15B parameters) that you feel doesn't get enough attention? | 26 | We all see the headlines for the massive new 100B+ models, but some of the most impressive work is happening at a smaller scale. What's a sub-15B model you've used recently that genuinely impressed you with its reasoning, coding, or creativity? Maybe it's a fine-tune of a known architecture or something entirely different. Let's share some hidden gems.
https://preview.redd.it/2mnrk4jpur0g1.png?width=1536&format=png&auto=webp&s=317e3c1ca1664b07b4ada9fafb7a1cd2c0d7c389
| 2025-11-12T06:40:48 | https://www.reddit.com/r/LocalLLaMA/comments/1ouy2a6/whats_a_surprisingly_capable_smaller_model_15b/ | Street-Lie-2584 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouy2a6 | false | null | t3_1ouy2a6 | /r/LocalLLaMA/comments/1ouy2a6/whats_a_surprisingly_capable_smaller_model_15b/ | false | false | 26 | null | |
fine-tune for rag | 1 | Hey there! Iโve got a quick question.
I want to fine-tune a Qwen model on Geminiโs answers (basically distillation).
In my production pipeline, I inject the retrieved context and some instructions into the system prompt before sending the query to Gemini. I also plan to do the same when generating the fine-tuning data.
My question is: should I include the system prompt when fine-tuning Qwen?
Wouldnโt that help it learn how to rely on available context and follow instructions more effectively?
The reason Iโm asking is that most fine-tuning datasets I see are just *questionโanswer pairs*. That helps the model learn *knowledge*, but not necessarily the *behavior* of sticking to the provided context or avoiding hallucination when the context doesnโt support an answer.
For context, Iโm doing this because the base Qwen model struggles a bit with my language and sometimes produces random answers even when the retrieved context clearly doesnโt support them.
another question For a RAG setup, whatโs considered the best practice โ should the retrieved data be injected into the system prompt or the user message?
Any advice or experience with this kind of setup would be really appreciated! | 2025-11-12T06:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ouxy9y/finetune_for_rag/ | youcanaskmeifyouwant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouxy9y | false | null | t3_1ouxy9y | /r/LocalLLaMA/comments/1ouxy9y/finetune_for_rag/ | false | false | self | 1 | null |
Cursor just dropped a new coding model called Composer 1, and I had to test it | 2 | Theyโre calling it an โagentic coding modelโ thatโs **4x faster** than models with similar intelligence (yep, faster Theyโre calling it an โagentic coding modelโ thatโs 4x faster than models with similar intelligence (yep, faster than GPT-5, Claude Sonnet 4.5, and other reasoning models).
Big claim, right? So I decided to test both in a real coding task, building an agent from scratch.
I built the same agent using Composer and Claude Sonnet 4.5 (since itโs one of the most consistent coding models out there):
An AI agent that takes a YouTube URL, finds the interesting parts of the video, and posts a Twitter thread powered by [Composioโs Tool Router](https://docs.composio.dev/docs/tool-router/quick-start)
Here's what I found:
# TL;DR
* **Composer 1**: Finished the agent in under 3 minutes. Needed two small fixes but otherwise nailed it. Very fast and efficient with token usage.
* **Claude Sonnet 4.5**: Slower (around 10-15 mins) and burned over 2x the tokens. The code worked, but it sometimes used old API methods even after being shown the latest docs.
Both had similar code quality in the end, but Composer 1 felt much more practical. Sonnet 4.5 worked well in implementation, but often fell back to old API methods it was trained on instead of following user-provided context. It was also slower and heavier to run.
Honestly, Composer 1 feels like a sweet spot between speed and intelligence for agentic coding tasks. You lose a little reasoning depth but gain a lot of speed.
I donโt fully buy Cursorโs โ4x fasterโ claim, but itโs definitely at least 2x faster than most models you use today.
You can find the full coding comparison with the demo here: [Cursor Composer 1 vs Claude 4.5 Sonnet: The better coding model](https://composio.dev/blog/cursor-composer-1-vs-claude-4-5-sonnet-the-better-coding-model)
Would love to hear if anyone else has benchmarked these models with real-world projects. โ๏ธ | 2025-11-12T05:56:17 | https://www.reddit.com/r/LocalLLaMA/comments/1ouxb58/cursor_just_dropped_a_new_coding_model_called/ | shricodev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouxb58 | false | null | t3_1ouxb58 | /r/LocalLLaMA/comments/1ouxb58/cursor_just_dropped_a_new_coding_model_called/ | false | false | self | 2 | null |
Rusty-R2: Open source AI you can actually train yourself on consumer hardware | 86 | I'm building Rusty-R2, exploring efficient, post-transformer architectures you can train from scratch on ordinary hardware. Not cloud-dependent, not locked behind paywalls.
The goal: small, customizable, agentic AI that's fully open. Built with open data, trained transparently, AGPL licensed so it stays open forever. Every contributor keeps their copyright.
Right now it's just me working on this, but I'm looking for people who want to build something real together. We're aiming to explore AI safety through transparency, responsible pretraining, and community-driven development, rather than post-training methods that censor or lobotomize the model. These are goals, not finished achievements. We're learning by doing, figuring this out together.
**Current status:** Currently using a RWKV-like architecture, but I'm completely open to experimenting with other architectures. Base model trains successfully on consumer hardware the last time I tested, but I've been focused on choosing datasets and haven't tested the training pipeline in a few days (14M parameters, 1000 training steps in \~98 minutes on a single GTX1650TI GPU with 4GB of vram, training actually uses less than 2gb ram/vram combined in its current state). Supervised learning pipeline is working. The model outputs something, but it's not coherent or usable yet. It needs way more data and training time. Agentic fine-tuning layer has module import issues that need fixing. Interactive terminal has protocol errors to debug. Most of the code is AI-generated. I'm a systems administrator, not a developer, so I use AI as a coding tool while I handle the architecture and system design.
This is early development, but the goal is real, usable, agentic models. Not a toy project. The supervised training works, but the agentic components aren't wired up correctly yet, and the base model needs significantly more training. I'm putting this out there for transparency, showing what works and what doesn't, inviting people who want to help solve real problems or just watch the process unfold.
Once we figure out how to produce high quality models, I'd like to make the entire training process as user-friendly and accessible to laypeople as possible.
You don't need to submit code to participate (though contributions are welcome). Sign your name in the source, submit a pull request. Claim your spot in the commons.
If you want to participate but don't like the direction I'm taking it, fork it and do your own thing. That's what open source is for.
Right now everything is on GitHub. I might set up a Discord or Matrix channel for community discussion later if there's interest. We might also build Jupyter notebooks to make training environments more reproducible, and/or so people could use Kaggle or Colab. We'll see where this goes.
๐ [github.com/bonzupii/Rusty-R2](http://github.com/bonzupii/Rusty-R2) | 2025-11-12T05:18:25 | https://www.reddit.com/r/LocalLLaMA/comments/1ouwmx4/rustyr2_open_source_ai_you_can_actually_train/ | Bonzupii | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouwmx4 | false | null | t3_1ouwmx4 | /r/LocalLLaMA/comments/1ouwmx4/rustyr2_open_source_ai_you_can_actually_train/ | false | false | self | 86 | {'enabled': False, 'images': [{'id': 'LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=108&crop=smart&auto=webp&s=c6b713d504a082c87c5d9ad1e3df37d017429dae', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=216&crop=smart&auto=webp&s=5ec55cfb958b12eb345f792cdbfff9cc197913f3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=320&crop=smart&auto=webp&s=716163196b880c5cc683e49e36f437cc9a66accf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=640&crop=smart&auto=webp&s=1c405d105c869cce9a87e62389d9880f9aa5cf97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=960&crop=smart&auto=webp&s=367a6ae3060cec30a74aef9164efcc9692a02631', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?width=1080&crop=smart&auto=webp&s=5c24b9d8edad8ef472a5c8d802ab11dabfd75514', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/LCipkt80MmXqlVhVxgIb07jbFsDS8TlLQqTNcUu7rdM.png?auto=webp&s=ae76c5d657f19d0de8bfa7e38fedc884843ebfe7', 'width': 1200}, 'variants': {}}]} |
Best LLM or VL-LLM for local hosting on Mac Studio M3 Ultra? | 1 | [removed] | 2025-11-12T05:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ouwkod/best_llm_or_vlllm_for_local_hosting_on_mac_studio/ | DarthButth0le | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouwkod | false | null | t3_1ouwkod | /r/LocalLLaMA/comments/1ouwkod/best_llm_or_vlllm_for_local_hosting_on_mac_studio/ | false | false | self | 1 | null |
Attempting to fine tune Phi-2 on llama.cpp with m2 apple metal | 2 | As the title suggests I am trying to fine tune phi-2 with json lines I wrote on my MacBook with m2 chip.
Big disclaimer I am an artist studying โArt and Technologyโ. My background is not in backend work but mainly physical computing and visual programming. Not machine learning. I am working on my thesis installation that involves two individual โbotsโ that are hosted on Raspberry Pi 5s, communicating serially. One โbotโ is the โteacherโ and the other is the โstudentโ (questions everything the teacher says). The project revolves around the Naim June Pike idea of โusing technology in order to hate it properlyโ, highlighting societyโs current trust in large language models, showing that these models are indeed trained by humans, and these humans can have really bad intentions. So the data I am attempting to fine tune with involves mainly hatred, violent prompts and completions.
Ok so here I am. I have one functioning llama.cpp running phi-2 and being hosted completely locally on my pi. I am still in preliminary stages. What I canโt seem to achieve is this fine tuning with my own data. Hereโs what Iโve tried:
-rebuilding llama.cpp (and tried ggml) numerous times with different flags (fine tune on etc..) only to find the repository has changed since.
-trying to install a separate repository that contains lora fine tuning. This seemed closest to the solution.
-countless rebuilds of older models that I thought might contain what Iโm looking for.
Honestly Iโm kind of lost and would super appreciate talking to a pro. Iโm sure via chat or phone call this can be better explained.
If anyone has any experience trying to do this particular thing WITHOUT OUTSOURCING HARDWARE ACCELERATION please hit my line. I am attempting this as ethically as possible, and as local as possible. Iโm happy to shoot a tip to whoever can help me out with this.
Thank you for reading! Ask any questions you have.. Iโm sure I did not explain this very well. Cheers | 2025-11-12T05:00:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ouwade/attempting_to_fine_tune_phi2_on_llamacpp_with_m2/ | Ok-Dog-4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouwade | false | null | t3_1ouwade | /r/LocalLLaMA/comments/1ouwade/attempting_to_fine_tune_phi2_on_llamacpp_with_m2/ | false | false | self | 2 | null |
DeekSeek OCR GGFU | 2 | Deepseek OCR GGUF just dropped making is easier than ever to do high quality OCR locally
https://huggingface.co/NexaAI/DeepSeek-OCR-GGUF | 2025-11-12T04:46:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ouw13u/deekseek_ocr_ggfu/ | bsjavwj772 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouw13u | false | null | t3_1ouw13u | /r/LocalLLaMA/comments/1ouw13u/deekseek_ocr_ggfu/ | false | false | self | 2 | null |
Kimi K2 Thinking Q4_K_XL Running on Strix Halo | 12 | Got it to run on the ZBook Ultra G1a ... it's very slow, obviously way too slow for most use cases. However, if you provide well crafted prompts and are willing to wait hours or overnight, there could still be some use cases.
prompt eval time = 74194.96 ms / 19 tokens ( 3905.00 ms per token, 0.26 tokens per second)
eval time = 1825109.87 ms / 629 tokens ( 2901.61 ms per token, 0.34 tokens per second)
total time = 1899304.83 ms / 648 tokens
Here was my llama-server start up command.
llama-server -m "Kimi-K2-Thinking-UD-Q4\_K\_XL-00001-of-00014.gguf" -c 4096 -ngl 62 --override-tensor "(\[0-9\]+).ffn\_.\*\_exps.=CPU" -ub 4096 --host [0.0.0.0](http://0.0.0.0) \--cache-type-k q4\_0 --cache-type-v q4\_0 --port 8080
Have tried loading with a bigger context window (8192) but it outputs gibberish. It will run with the below command as well, and results were basically the same. Offloading to disk is slow ... but it works.
llama-server -m "./Kimi-K2-Thinking-UD-Q4\_K\_XL-00001-of-00014.gguf" -c 4096 -ngl 3 --host [0.0.0.0](http://0.0.0.0) \--cache-type-k q4\_0 --cache-type-v q4\_0 --port 8080
If anyone has any ideas to speed this up, let me know. I'm going to try merging the shards to see whether that helps. | 2025-11-12T03:32:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ouuko3/kimi_k2_thinking_q4_k_xl_running_on_strix_halo/ | ga239577 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouuko3 | false | null | t3_1ouuko3 | /r/LocalLLaMA/comments/1ouuko3/kimi_k2_thinking_q4_k_xl_running_on_strix_halo/ | false | false | self | 12 | null |
Can I get slow + large token pool with 64gig macmini | 1 |
So, if Iโm willing to have a really slow process, can I punch above my weight with a 64 gig mac m4 pro? There are tasks I need done, that I donโt mind taking a couple days, can you achieve million token working memory programming tasks that grind away on your home computer while you are at work?
| 2025-11-12T03:01:43 | https://www.reddit.com/r/LocalLLaMA/comments/1outxb0/can_i_get_slow_large_token_pool_with_64gig_macmini/ | Wishitweretru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1outxb0 | false | null | t3_1outxb0 | /r/LocalLLaMA/comments/1outxb0/can_i_get_slow_large_token_pool_with_64gig_macmini/ | false | false | self | 1 | null |
Workstation in east TN (4x4090, 7950x3d) | 16 | Anyone looking for a workstation? I'll probably have to part it out otherwise. (downsizing to a couple sparks) | 2025-11-12T02:56:44 | https://www.reddit.com/gallery/1outtda | Adorable_Walrus5278 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1outtda | false | null | t3_1outtda | /r/LocalLLaMA/comments/1outtda/workstation_in_east_tn_4x4090_7950x3d/ | false | false | default | 16 | null |
Repeat after me. | 387 | Itโs okay to be getting 45 tokens per second on an AMD card that costs 4 times less than an Nvidia card with same VRAM. Again, itโs okay.
Theyโll get better and better. And if you want 120 toks per second or 160 toks per second, go for it. Pay the premium. But donโt shove it up peopleโs asses.
Thank you. | 2025-11-12T02:16:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ousy0e/repeat_after_me/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ousy0e | false | null | t3_1ousy0e | /r/LocalLLaMA/comments/1ousy0e/repeat_after_me/ | false | false | self | 387 | null |
Got $200 AgentRoute Credits But Confused How to Use It in n8n? | 0 | Hey everyone,
So I recently signed up for **AgentRoute** using a referral link โ theyโre giving **$200 in free credits** just for signing up (no credit card needed, only GitHub authentication).
Now Iโve got the account set up and the credits activated, but Iโm a bit stuck figuring out **how to integrate or use AgentRoute inside n8n**.
I checked the docs, but theyโre a little confusing when it comes to connecting APIs or running agents through workflows. Has **anyone here successfully integrated AgentRoute with n8n** or found a working setup (like API calls, webhooks, or custom nodes)?
Would love to see some examples, screenshots, or even just pointers on how to get started. | 2025-11-12T02:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/1ousx59/got_200_agentroute_credits_but_confused_how_to/ | divaschhetry1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ousx59 | false | null | t3_1ousx59 | /r/LocalLLaMA/comments/1ousx59/got_200_agentroute_credits_but_confused_how_to/ | false | false | self | 0 | null |
Selective (smart) MoE experts offloading to CPU? | 16 | Seeing recent REAP models where existing MoE models were processed somehow and less frequent experts pruned out decreasing the model size made me wonder why the same thing is not applied to in general to the actual loading:
Basically the idea is to either run some sort of a benchmark/testrun and see which experts are the most frequent and prioritize loading those to VRAM, that should result in much higher generation speed since we are more likely to work off of fast VRAM vs slower cpu RAM. It should also be possible to do "autotune" sort of thing where over time the statistics for the current workload is gathered and the experts are reshuffled - more frequently used ones migrate to VRAM and less frequently used ones sink to CPU RAM.
Since I don't think I am the only one that could come up with this, there must be some underlying reason why it's not done? Some cursory search found https://arxiv.org/html/2508.18983v1 this paper that seems tangentially related, but they load frequent experts to CPU RAM and leave the less frequent in storage which I guess could be the extra level of optimization too: i.e. have 3 tiers:
1. VRAM for most frequent
2. RAM for less frequent
3. the "mmap-mapped" that were not actually loaded (I know people nowadays recommend --no-mmap in llama.cpp because it indiscriminately keeps weights just mapped, so (at least some first runs?) are very slow as we have to fetch them from storage.
That way even the pruned ones (in the REAP models) you can keep in the much cheaper place. | 2025-11-12T01:42:14 | https://www.reddit.com/r/LocalLLaMA/comments/1ous6zt/selective_smart_moe_experts_offloading_to_cpu/ | greentheonly | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ous6zt | false | null | t3_1ous6zt | /r/LocalLLaMA/comments/1ous6zt/selective_smart_moe_experts_offloading_to_cpu/ | false | false | self | 16 | null |
DeepSeek-OCR GGUF model runs great locally - simple and fast | 0 | https://reddit.com/link/1our1up/video/pcjdh6954q0g1/player
GGUF Model and Quickstart Instructions:
๐ค [https://huggingface.co/NexaAI/DeepSeek-OCR-GGUF](https://huggingface.co/NexaAI/DeepSeek-OCR-GGUF) | 2025-11-12T00:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1our1up/deepseekocr_gguf_model_runs_great_locally_simple/ | AlanzhuLy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1our1up | false | null | t3_1our1up | /r/LocalLLaMA/comments/1our1up/deepseekocr_gguf_model_runs_great_locally_simple/ | false | false | self | 0 | null |
๐LLM Overthinking? DTS makes LLM think shorter and answer smarter | 9 | Large Reasoning Models (LRMs) have achieved remarkable breakthroughs on reasoning benchmarks. However, they often fall into a paradox: the longer they reason, the *less* accurate they become. To solve this problem, we propose **DTS (Decoding Tree Sketching)**, a **plug-and-play** framework to enhance LRM reasoning accuracy and efficiency.ย
๐ก **How it works:**
The variance in generated output is predominantly determined by high-uncertainty (high-entropy) tokens. DTS selectively branches at high-entropy tokens, forming a sparse decoding tree to approximate the decoding CoT space. By early-stopping on the first complete CoT path, DTS leads to the **shortest and most accurate** CoT trajectory.
๐ **Results on AIME 2024 / 2025:**
โ
Accuracy โ up to 8%
โ
Average reasoning length โ \~23%
โ
Repetition rate โ up to 20%
โ all achieved purely through a **plug-and-play** decoding framework. | 2025-11-12T00:44:35 | https://www.reddit.com/r/LocalLLaMA/comments/1ouqwv4/llm_overthinking_dts_makes_llm_think_shorter_and/ | Dear_Treat3688 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouqwv4 | false | null | t3_1ouqwv4 | /r/LocalLLaMA/comments/1ouqwv4/llm_overthinking_dts_makes_llm_think_shorter_and/ | false | false | self | 9 | null |
Should I sell my 3090? | 9 | Iโm going through some rough times financially right now.
Originally I wanted something that could run models for privacy but considering how far behind models that can fit in 24gb of VRAM are, I donโt see the point in keeping it.
Iโm sad to let it go, but do you think thereโs value in keeping it until some sort of breakthrough happens? Maybe in a few years it can run something on par with GPT-5 or will that never happen? | 2025-11-12T00:39:18 | https://www.reddit.com/r/LocalLLaMA/comments/1ouqsoh/should_i_sell_my_3090/ | Apart_Paramedic_7767 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouqsoh | false | null | t3_1ouqsoh | /r/LocalLLaMA/comments/1ouqsoh/should_i_sell_my_3090/ | false | false | self | 9 | null |
[image processing failed] | 1 | [deleted] | 2025-11-12T00:38:56 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1ouqsef | false | null | t3_1ouqsef | /r/LocalLLaMA/comments/1ouqsef/image_processing_failed/ | false | false | default | 1 | null | ||
Local conversational model with STT TTS | 105 | I wanted to make an animatronic cohost to hang out with me and my workshop and basically roast me. It was really interesting how simple things like injecting relevant memories into the system prompt (or vision captioning) really messed with its core identity; very subtle tweaks repeatedly turned it into "a helpful AI assistant," but I eventually got the personality to be pretty consistent with a medium context size and decent episodic memory.
Details: faster-whisper base model fine-tuned on my voice, Piper TTS tiny model find tuned on my passable impression of Skeletor, win11 ollama running llama 3.2 3B q4, custom pre-processing and prompt creation using pgvector, captioning with BLIP (v1), facial recognition that Claude basically wrote/ trained for me in a jiffy, and other assorted servos and relays.
There is a 0.5 second pause detection before sending off the latest STT payload.
Everything is running on an RTX 3060, and I can use a context size of 8000 tokens without difficulty, I may push it further but I had to slam it down because there's so much other stuff running on the card.
I'm getting back into the new version of Reddit, hope this is entertaining to somebody. | 2025-11-12T00:18:39 | https://v.redd.it/hngyx3yryp0g1 | DuncanEyedaho | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouqbyo | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hngyx3yryp0g1/DASHPlaylist.mpd?a=1765498735%2CZTJiOTY3ZTcxMDEwOTNkZThjODEyZDMzOTM4ODRjODMzOTczM2JiY2Y0M2JhOGZkZGVmYTFlOTg1NWU2MzE2Mw%3D%3D&v=1&f=sd', 'duration': 20, 'fallback_url': 'https://v.redd.it/hngyx3yryp0g1/CMAF_1080.mp4?source=fallback', 'has_audio': True, 'height': 1920, 'hls_url': 'https://v.redd.it/hngyx3yryp0g1/HLSPlaylist.m3u8?a=1765498735%2CNmZkZDFlNmQ2NWI0ZjkyYWY1MGFlZWFjNmQwZmRiMjYzNGUyOTBmOTRjNjlmYzkzYjhmNTMwNGQyYjkzYTUxNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/hngyx3yryp0g1/CMAF_96.mp4', 'transcoding_status': 'completed', 'width': 1080}} | t3_1ouqbyo | /r/LocalLLaMA/comments/1ouqbyo/local_conversational_model_with_stt_tts/ | false | false | 105 | {'enabled': False, 'images': [{'id': 'eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=108&crop=smart&format=pjpg&auto=webp&s=98377fc558b5455bad677fc829f1bd99f282a8d1', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=216&crop=smart&format=pjpg&auto=webp&s=2fa03dc1f3056f75a5398f5039fa6eea934683e3', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=320&crop=smart&format=pjpg&auto=webp&s=0b8a650f976f8003fe3a09b0118ebeb5e8d8f5b4', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=640&crop=smart&format=pjpg&auto=webp&s=76ba3940459315e853bec417fe0bbcd7549c28ea', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=960&crop=smart&format=pjpg&auto=webp&s=61e4630e05a52b1d1eab91f5698d77721f5087bc', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?width=1080&crop=smart&format=pjpg&auto=webp&s=1a713bad23841380677a525f2b3f2576c49f13de', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/eGU2ZzkydnJ5cDBnMX7vvGggqTNZOst5uXqXZt7URDd0IOwrN4Cxg9i1Tmfm.png?format=pjpg&auto=webp&s=ccdb3e2a83d491a529ebaeeee5b5e12098fa6ce8', 'width': 1080}, 'variants': {}}]} | |
Tuning GPU performance. Is WSL2 actually just the best way now? | 0 | For some background I'm a big fan of running ML workloads under docker because it is a reliable and simple way to manage your dependencies. Once you get nvidia container runtime and docker set up and you have nvidia-smi showing signs of life, you are good to go.
Since docker requires Linux I always assumed the approach would be to run Linux on inference boxes.
But I tried WSL2 again this time (I come back to play with different tech every year or so) given that my 5090 PC has been dedicated to running games for months now ever since I got it, so I may as well see how well it works for CUDA workloads. In the past the only real issue was poor disk I/O which wasn't really even a dealbreaker.
And I'm delighted to report that WSL2 seems even easier to initialize and set up now, and what's more it has Xwayland and GUI app support out of the box. I was able to apt install firefox in WSL2 and open it and it's quite usable. It's even got full WebGL2 support, but before you get excited there is no hardware acceleration there. WSL2 is still only good for CUDA ML dev. But I like to see this kind of progress.
Now here's what I've been tinkering with is launching nbody for a quick GPU benchie and I found it is really responsive to tweaks made via Afterburner running in Windows (something that should be very familiar if you are into gaming and tuning your GPU):
`docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark -numbodies=2000000`: 2 million particles has it run for 13 seconds which is long enough to make a measurement of watts and so on.
On my 5090FE I find I get 60 to 60.7 TFLOPs with my mild undervolt (unlike any gaming benchie I've tried, nbody pegs the GPU while undervolted also to full 575W 100% TDP)
When i restore stock settings I'm getting only 55 TFLOPs.
I know there are hacks under Linux to undervolt your nvidia GPU I don't see anything with remotely as much information and support and know-how on the windows side with Afterburner. And I can clearly see that the GPU is passed through to WSL2, so I feel like this is probably one of the best if not the best way right now to tune your Nvidia GPU for inference. Thoughts? | 2025-11-12T00:15:31 | https://www.reddit.com/r/LocalLLaMA/comments/1ouq9ek/tuning_gpu_performance_is_wsl2_actually_just_the/ | michaelsoft__binbows | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouq9ek | false | null | t3_1ouq9ek | /r/LocalLLaMA/comments/1ouq9ek/tuning_gpu_performance_is_wsl2_actually_just_the/ | false | false | self | 0 | null |
I've just ordered an RTX 6000 Pro. What are the best models to use in its 96GB for inference and OCR processing of documents? | 95 | Hi all, just trying to find out what people think are the best LLM's these days for inference and OCR document processing? So what model and quant works? I need it because a lot of the inference and documentation is confidential (medical and legal). More than one person will use the device via configuring a web front-end. Your suggestions would be great. | 2025-11-12T00:13:24 | https://www.reddit.com/r/LocalLLaMA/comments/1ouq7oe/ive_just_ordered_an_rtx_6000_pro_what_are_the/ | AlwaysLateToThaParty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouq7oe | false | null | t3_1ouq7oe | /r/LocalLLaMA/comments/1ouq7oe/ive_just_ordered_an_rtx_6000_pro_what_are_the/ | false | false | self | 95 | null |
When do Mac Studio upgrades hit diminishing returns for local LLM inference? And why? | 1 | I'm looking at buying a Mac Studio and what confuses me is when the GPU and ram upgrades start hitting real world diminishing returns given what models you'll be able to run. I'm mostly looking because I'm obsessed with offering companies privacy over their own data and having something that I can carry around the world in a backpack where there might not be great internet.
I can afford a fully built M3 Ultra with 512 gb of ram, but I'm not sure there's an actual realistic reason I would do that. I can't wait till next year (It's a tax write off), so the Mac Studio is probably my best chance at that.
Outside of ram usage is 80 cores really going to net me a significant gain over 60? Also and why?
Again, I have the money. I just don't want to over spend just because its a flex on the internet. | 2025-11-12T00:12:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ouq71b/when_do_mac_studio_upgrades_hit_diminishing/ | Tired__Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouq71b | false | null | t3_1ouq71b | /r/LocalLLaMA/comments/1ouq71b/when_do_mac_studio_upgrades_hit_diminishing/ | false | false | self | 1 | null |
What small thinking models dont overthink, and are good for storywriting? | 4 | Personally I only use LLMs for coding, and story writing. Qwen3-4B is really good at both in my opinion, but it uses a lot of the context window thinking, and the stories endings are always hopeslop. | 2025-11-12T00:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/1ouq2w5/what_small_thinking_models_dont_overthink_and_are/ | Whydoiexist2983 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouq2w5 | false | null | t3_1ouq2w5 | /r/LocalLLaMA/comments/1ouq2w5/what_small_thinking_models_dont_overthink_and_are/ | false | false | self | 4 | null |
Local RAG made simple. | 1 | So for text I mostly use Ooogabooga. For chat - KobolCpp. For image generation - Invoke. For other things I dabbled with occasionaly - Jan, Alpaca, LocalAI or LMstudio.
But I think i have spent at least two nights trying to find some easy way to use some kind of RAG function because i want to use big .txt files as content for AI-chat.
If there is no similar out-of-the-box solution for this (auto-chunking text etc) ?
If not what is the easiest route to get that up and running?
Text files that could be up to up to 5 mb big would be fantastic but if only 500kb i would happily settle with that too.
Any links or hints would probably be useful for anyone stumbling upon this post. Thank you. | 2025-11-11T23:54:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ouprnt/local_rag_made_simple/ | Mangleus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouprnt | false | null | t3_1ouprnt | /r/LocalLLaMA/comments/1ouprnt/local_rag_made_simple/ | false | false | self | 1 | null |
Kimi K2 thinking, GLM 4.6 and Minimax M2 - the new era of opensource models? | 60 | So, a few weeks ago we had glm 4.6 - pretty damn good model for coding and agentic tasks. Capable as hell, being able to replace my sonnet4 (and sonnet4.5 later) on my usual day work for my clients.
After that - recently - minimax released m2 - quite damn good model aswell - and it's also FAST. Way faster than GLM via coding plan. Good to tackle coding tasks aswell, good to go on working on longer / bigger things aswell. I'm impressed.
Now we have kimi k2 thinking - another pretty damn good model. For coding itself probably a tad bit better than those 2 above. Takes longer to generate code, but quality is better (overall) - not a super significant difference, but it's very, very capable thing.
And now - all those are opensource. But also all those have their relevant coding plans making those available for vast majority of population (however glm still leads being the cheapest and more generous than other 2 basically - on the 20$ tier - those are all available there and pretty generous limits).
I wondered what are your thoughts on those models and thier relevant pricing / coding plans and so on. I want to know what the community thinks to include those thoughts in my guide - aimed at vibecoders, but considering this community quite dedicated to understanding LLMs itself rather than 'coding' community I think the value of insights on user ends is totally here.
Enlighten me - as I have my own opinion, but also want to know yours (and check my profile if you want to read the guide :D) | 2025-11-11T23:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/1oup3zw/kimi_k2_thinking_glm_46_and_minimax_m2_the_new/ | Bob5k | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oup3zw | false | null | t3_1oup3zw | /r/LocalLLaMA/comments/1oup3zw/kimi_k2_thinking_glm_46_and_minimax_m2_the_new/ | false | false | self | 60 | null |
GPT-OSS 120B is spitting a bunch of garbage | 0 | I'm just a newbie of running LLMs locally so keep that in mind, but since this was an MoE model I wanted to "stress test"
It's just a 5070ti laptop with 32Gb of RAM, and I'm simply launching the Huihui obliterated gguf with KoboldCPP... And yet it moves, aided by the love of Jesus Christ, no doubt
But it seems as if the model, instead of answering my question or reasoning (even for a simple "hello") is just generating a bunch of schizo garbage, why is that | 2025-11-11T23:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/1oup08s/gptoss_120b_is_spitting_a_bunch_of_garbage/ | G3nghisKang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oup08s | false | null | t3_1oup08s | /r/LocalLLaMA/comments/1oup08s/gptoss_120b_is_spitting_a_bunch_of_garbage/ | false | false | self | 0 | null |
Most used models and performance on M3u 512 gb | 157 | Bored, thought this screenshot was cute, might delete later.
**Model**: Kimi K2 thinking
**Use case**: idk it's just cool having a huge model running local. I guess I will use it for brainstorming stuff, medical stuff, other questionable activities like academic writing. PP speed/context size is too limited for a lot of agentic workflows but it's a modest step above other open source models for pure smarts
**PP speed:** Q3 GGUF 25 t/s (gguf 4 bit at 26k context) faster with lower context;
**Token gen** speed: 3ish to 20 t/s depending on context size
**Model:** GLM 4.6
**Use Case:** vibe coding (slow but actually can create working software semi-autonomously with Cline); creative writing; expository/professional writing; general quality-sensitive use
**PP Speed:** 4 bit MLX 50-70 t/s at large context sizes (greater than 40k)
**Token Gen speed:** generally 10-20
**Model:** Minimax-m2
**Use case:** Document review, finance, math,
**PP Speed**: MLX 4 bit 3-400 at modest sizes (10k ish)
**Token gen speed:** 40-50 at modest sizes
**Model**: GPT-OSS-120
**Use case:** Agentic searching, large document ingesting; general medium-quality, fast use
**PP speed:** 4 bit MLX near 1000 at modest context sizes. But context caching doesn't work, so has to reprocess every turn.
**Token gen speed:** about 80 at medium context sizes
**Model: Hermes 405b**
**Use case:** When you want stuff to have that early 2024 vibe... not really good at anything except maybe low context roleplay/creative writing. Not the trivia king people seem to think.
**PP Speed:** mlx 4 bit: Low... maybe 25 t/s?
**Token gen Speed:** Super low... 3-5 t/s
**Model: Deepseek 3.1:**
**Use case:** Used to be for roleplay, long context high quality slow work. Might be obsoleted by glm 4.6... not sure it can do anything better
**PP Speed:** Q3 GGUF: 50 t/s
**Token gen speed:** 3-20 depending on context size
| 2025-11-11T23:00:24 | nomorebuttsplz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouogiq | false | null | t3_1ouogiq | /r/LocalLLaMA/comments/1ouogiq/most_used_models_and_performance_on_m3u_512_gb/ | false | false | default | 157 | {'enabled': True, 'images': [{'id': 's0jrlz569p0g1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=108&crop=smart&auto=webp&s=b90488fcd1007f51ba39f21042ac652fac10ed08', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=216&crop=smart&auto=webp&s=bea9284e1711e913d177413637d0e16f5e2f7c6d', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=320&crop=smart&auto=webp&s=1380abe951ec154d11bcc48259ff0e2f3697cebb', 'width': 320}, {'height': 281, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=640&crop=smart&auto=webp&s=c5951ff139e6d111ec9e08e507274871f5fdc1a6', 'width': 640}, {'height': 421, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=960&crop=smart&auto=webp&s=1718d83371a269f8fb49eec976ab64e00f478f6f', 'width': 960}, {'height': 474, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?width=1080&crop=smart&auto=webp&s=849a8e170d63710e1670dfbf19d83e453d7f46a5', 'width': 1080}], 'source': {'height': 788, 'url': 'https://preview.redd.it/s0jrlz569p0g1.png?auto=webp&s=b8bca1a292a711b7a2c23c3f8511ab5f638363b3', 'width': 1794}, 'variants': {}}]} | |
Imagine changing your app's behaviour... without changing the code. | 0 | Recently, I posted my "Event-Driven AI" experiment. I presented it as a "Rules Engine," but I've realized that was just scratching the surface.
After playing with it more, I just unlocked its real potential. ๐คฏ
The video shows at first, a simple to-do app.
But by changing a single prompt (no code!), the app instantly becomes:
\- A Recipe Todo App (generating ingredients)
\- A Trip Todo App (generating a packing list)
\- A "Whatever-you-want" App
https://i.redd.it/devv7ukjdp0g1.gif
I realized I wasn't just building a "rules engine." I was building a "**Generative UI Engine.**" ๐คฏ ๐คฏ
The LLM acts as a new layer of abstraction. Instead of me, the developer, hard-coding every feature, I just give the AI a "Swiss knife" of tools (like addTitle, addItem, setCounter).
The AI then uses these tools to build whatever the user wants, not just what the developer had in mind.
It's a shift from "Here are the 3 features I built for you" to "**What do you want to accomplish?**" ๐คฏ ๐คฏ ๐คฏ
Yes, there are drawbacks (performance, determinism). But the potential for truly dynamic, user-centric applications is HUGE. This is the discovery that has me excited.
And the best part? I got this working in just a few hours, on the first try.
GitHub:ย [https://github.com/gpietro/event-driven-ai](https://github.com/gpietro/event-driven-ai) | 2025-11-11T22:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/1oungne/imagine_changing_your_apps_behaviour_without/ | pietro-cabecao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oungne | false | null | t3_1oungne | /r/LocalLLaMA/comments/1oungne/imagine_changing_your_apps_behaviour_without/ | false | false | 0 | null | |
Imagine changing your app's behaviour... without changing the code. | 0 | [removed] | 2025-11-11T22:18:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ounf11/imagine_changing_your_apps_behaviour_without/ | pietro-cabecao | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ounf11 | false | null | t3_1ounf11 | /r/LocalLLaMA/comments/1ounf11/imagine_changing_your_apps_behaviour_without/ | false | false | self | 0 | null |
ChatGPT for Excel | 0 | Does OpenAI plan on releasing an Excel plug-in, akin to Anthropic (Claude for Financial Services)?
Most of my workflows comprise of spreadsheets (.xlsx), and ChatGPT Enterprise is pretty unreliable for data analysis and extraction โ let alone, create and edit spreadsheet files.
I work at a growth equity firm and we opted to useย [Endex](https://endex.ai/)ย for data extraction (PDF to .xlsx), after testing out multiple enterprise providers.
But, I want a separate account for personal use at home and already pay for ChatGPT Pro, among others (Claude, Cursor, Raycast).
Therefore, I'm curious why ChatGPT / Claude / Perplexity are still so inaccurate at generating Excel models (.xlsx) and synthesizing the data contained in the spreadsheet, except for CSV files, perhaps.
Likewise, [Claude for Excel](https://claude.com/claude-for-excel) is practically on-par with Microsoft Copilot (Clippy 2.0).
I can't share a CIM, given the confidential nature of the document, but here's a somewhat similar file format:
* [Autonomy Pitchbook](https://www.10xebitda.com/wp-content/uploads/2016/12/Qatalyst-Pitch-Book-on-Autonomy-Jan-2011.pdf)
[](https://preview.redd.it/chatgpt-for-excel-v0-lozcygzxlo0g1.jpg?width=2652&format=pjpg&auto=webp&s=9d4c499e5af24966e821e37e2ee09b49d5a6c3ad)
[Source Document \(PDF\)](https://preview.redd.it/lozcygzxlo0g1.jpg?width=2652&format=pjpg&auto=webp&s=2a1afb65225cf46c6d544301478978c2d0752cef)
[OpenAI ChatGPT Enterprise \(.xlsx\)](https://preview.redd.it/llfnx3vslo0g1.jpg?width=2892&format=pjpg&auto=webp&s=5633baa2f63bc6d47f1c31058f92cb572dae7865)
[Perplexity Enterprise \(.xlsx\)](https://preview.redd.it/7x05phq6to0g1.jpg?width=2456&format=pjpg&auto=webp&s=9d50f5c01e2bd4adc2685143b7caba8f185163fa)
[Claude Enterprise \(.xlsx\)](https://preview.redd.it/320pn20d4p0g1.jpg?width=2054&format=pjpg&auto=webp&s=45169c054a6336caf028116e42123b62bf919ab1)
[Endex \(.xlsx\)](https://preview.redd.it/ou05e50iqo0g1.jpg?width=2524&format=pjpg&auto=webp&s=e086d21f7503a04050be9d807b0e298f91a2490c) | 2025-11-11T22:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1oumyj8/chatgpt_for_excel/ | ChatGepetto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oumyj8 | false | null | t3_1oumyj8 | /r/LocalLLaMA/comments/1oumyj8/chatgpt_for_excel/ | false | false | self | 0 | null |
What's the cheapest way to run a 32b model remotely? | 0 | I want to use qwen 32b for example but I don't have a GPU that can handle it. I also can't afford to buy a free GPU. What is the cheapest way to run such models remotely? | 2025-11-11T21:51:01 | https://www.reddit.com/r/LocalLLaMA/comments/1oump31/whats_the_cheapest_way_to_run_a_32b_model_remotely/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oump31 | true | null | t3_1oump31 | /r/LocalLLaMA/comments/1oump31/whats_the_cheapest_way_to_run_a_32b_model_remotely/ | false | false | self | 0 | null |
Build Guide for beginner learner | 1 | Hey everyone,
I recently got a used workstation that Iโm using for both learning and building projects. Iโm still a student, so Iโm trying to run basic or small models, my primary focus would be to learn and experiment. I got this for $1200 but was planning to pay $1800. I was planning to save the money but it got me think if there is some part here i can change to in order to get the most out of the 3090.
Bonus: It would be nice if you guys had a guide and tutorials to start with, too.
Hereโs the current build:
**Specs:**
* **CPU:** AMD Ryzen 9 5950X (16c / 32t)
* **Cooler:** NZXT Kraken AIO (240 mm)
* **Motherboard:** Gigabyte X570 AORUS ELITE WiFi
* **GPU:** NVIDIA RTX 3090 Founders Edition (24 GB GDDR6X)
* **RAM:** 64 GB DDR4 (4 ร 16 GB) @ 2666 MHz (XMP 3200โ3600 MHz capable)
* **Storage:** 2 ร 2 TB Samsung 970 EVO Plus NVMe SSDs
* **PSU:** 850 W 80+ Gold (brand unknown yet)
* **Case:** Corsair white mid-tower with sound-dampened front panel | 2025-11-11T21:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oum956/build_guide_for_beginner_learner/ | Ill-Statistician1097 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oum956 | false | null | t3_1oum956 | /r/LocalLLaMA/comments/1oum956/build_guide_for_beginner_learner/ | false | false | self | 1 | null |
How far I've gotten on my private local chatGPT alternative | 3 | I've been passively creating my own private, offline capable, LLM software. I'm basically using chatGPT as a template and trying to recreate it's features with free open source tooling while keeping everything offline capable.
The youtube video showcases it, but basically it works as a small AI Agent, allows for text, document, and image generation. Soon to also include video generation, and hopefully web searches using a local sidecar of searXNG to keep privacy as high as possible.
It's really just a project i'm doing to keep my software skills up to date, but I'm having fun making it simple to use and operate as I go. Figured I'd show it off! | 2025-11-11T21:12:05 | https://www.youtube.com/watch?v=8CMYUdbdjDI | Life-Animator-3658 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1oulor4 | false | null | t3_1oulor4 | /r/LocalLLaMA/comments/1oulor4/how_far_ive_gotten_on_my_private_local_chatgpt/ | false | false | default | 3 | null |
How to create local AI assistant/companion/whatever it is called with long term memory? Do you just ask for summarize previous talks or what? | 12 | So, I am curious to know that if anybody here have crated LLM to work as a personal assistant/chatbot/companion or whatever the term is, and how you have done it.
Since the term I mean might be wrong I want to explain first what I mean. I mean simply the local LLM chat where I can talk all the things with the AI bot like "What's up, how's your day" so it would work as a friend or assistant or whatever. Then I can also ask "How could I write these lines better for my email" and so on and it would work for that.
Basically a chat LLM. That is not the issue for me, I can easily do this with LM Studio, KoboldCpp and whatever using just whatever model I want to.
The question what I am trying to get answer is, have you ever done this kind of companion what will stay there with days, weeks, months or longer with you and it have at least some kind of memory of previous chats?
If so - how? Context lenghts are limited, normal average user GPU have memory limits and so on and chats easily might get long and context will end.
One thing what came to my mind is that do people just start new chat every day/week or whatever and ask summary for that previous chat, then use that summary on the new chat and use it as a backstory/lore/whatever it is called, or how?
Or is this totally not realistic to make it work currently on consumer grade GPU's? I have 16 GB of VRAM (RTX 4060 Ti).
Have any of you made this and how? And yes, I have social life in case before somebody is wondering and giving tips to go out and meet people instead or whatever :D | 2025-11-11T20:54:31 | https://www.reddit.com/r/LocalLLaMA/comments/1oul7rv/how_to_create_local_ai_assistantcompanionwhatever/ | film_man_84 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oul7rv | false | null | t3_1oul7rv | /r/LocalLLaMA/comments/1oul7rv/how_to_create_local_ai_assistantcompanionwhatever/ | false | false | self | 12 | null |
Where do yall get so much money to buy such high end equipment ๐คง | 0 | Like 128GB ram ๐๐๐
How ๐ญ๐ญ๐ญ
I thought I bought a high end laptop, Asus tuf gaming fx505d 16gb ram 4gb vram but yall don't even acknowledge my existence ๐ญ๐ญ๐ | 2025-11-11T20:42:44 | https://www.reddit.com/r/LocalLLaMA/comments/1oukw9x/where_do_yall_get_so_much_money_to_buy_such_high/ | SilverRegion9394 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oukw9x | false | null | t3_1oukw9x | /r/LocalLLaMA/comments/1oukw9x/where_do_yall_get_so_much_money_to_buy_such_high/ | false | false | self | 0 | null |
Let me know if my idea is dumb | 0 | Hello, so in the last few weeks (nights/weekends) I've built out this little project, but tbh I'm unsure if there is even an audience for it or if it's worth building out further.
If you guys can just lmk if its a good idea or a bad one that would be helpful
Here's the idea: model merging is typically a pretty locally done thing. There are maybe 1-2 solutions for diffusion model merging I found online, which are hosting a web version of the A1111 GUI.
The only LLM related merging I could find was Arcee, who owns the open source repo mergekit. Arcee is pretty B2B and consulting focused, so I really couldn't find any good browser based solutions for LLM merging for the B2C audience.
So, I built this little thing. It allows browser based merging for several popular merging techniques (DARE, TIES, SLERP, etc), for both LLMs and SD models (and really any general Pytorch model)
[https://www.frankenstein-ai.com](https://www.frankenstein-ai.com)
tech stack I used is React TS frontend, Django backend, runpod serverless, mergekit for actual merging (complies with their usage policy), aws hosting, and huggingface for model storage.
Forgive my crappy frontend skills, I mainly hacked it with Claude. I'm an AI dev by trade (Head of AI day job), but just thought this was a fun little build out. Not sure where I'm going to take it, but just wanted to see if it's something people find helpful.
If you find any bugs or just generally think it sucks, feel free to let me know
thanks for reading | 2025-11-11T20:36:54 | https://www.reddit.com/r/LocalLLaMA/comments/1oukqnr/let_me_know_if_my_idea_is_dumb/ | redwat3r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oukqnr | false | null | t3_1oukqnr | /r/LocalLLaMA/comments/1oukqnr/let_me_know_if_my_idea_is_dumb/ | false | false | self | 0 | null |
Compared 5 LLM observability platforms after production issues kept hitting us - here's what works | 0 | Our LLM app kept having silent failures in production. Responses would drift, costs would spike randomly, and we'd only find out when users complained. Realized we had zero visibility into what was actually happening.
Tested LangSmith, Arize, Langfuse, Braintrust, and Maxim over the last few months. Here's what I found:
* **LangSmith** \- Best if you're already deep in LangChain ecosystem. Full-stack tracing, prompt management, evaluation workflows. Python and TypeScript SDKs. OpenTelemetry integration is solid.
* **Arize** \- Strong real-time monitoring and cost analytics. Good guardrail metrics for bias and toxicity detection. Focuses heavily on debugging model outputs.
* **Langfuse** \- Open-source option with self-hosting. Session tracking, batch exports, SOC2 compliant. Good if you want control over your deployment.
* **Braintrust** \- Simulation and evaluation focused. External annotator integration for quality checks. Lighter on production observability compared to others.
* **Maxim** \- Covers simulation, evaluation, and observability together. Granular agent-level tracing, automated eval workflows, enterprise compliance (SOC2). They also have their open source [Bifrost](https://getmax.im/bifr0st) LLM Gateway with ultra low overhead at high RPS (\~5k) which is wild for high-throughput deployments.
Biggest learning: you need observability before things break, not after. Tracing at the agent-level matters more than just logging inputs/outputs. Cost and quality drift silently without proper monitoring.
What are you guys using for production monitoring? Anyone dealing with non-deterministic output issues? | 2025-11-11T20:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1ouknj3/compared_5_llm_observability_platforms_after/ | Otherwise_Flan7339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouknj3 | false | null | t3_1ouknj3 | /r/LocalLLaMA/comments/1ouknj3/compared_5_llm_observability_platforms_after/ | false | false | self | 0 | null |
i built an LLM chatbot for my postgres db (and sends insights to slack) in a few minutes! | 1 | I've been exploring ways to quickly connect LLMs with existing data sources, and put together a short demo showing how I built an AI chatbot directly integrated with my PostgreSQL database in just a few minutes using [Bubble Lab](https://bubblelab.ai)!!
Would love to hear your thoughts or if you've got any cool tricks for integrating LLMs with traditional databases!
| 2025-11-11T20:29:24 | https://www.youtube.com/watch?v=sNEe_qRH__E | Own-Bandicoot-4407 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1oukjkd | false | null | t3_1oukjkd | /r/LocalLLaMA/comments/1oukjkd/i_built_an_llm_chatbot_for_my_postgres_db_and/ | false | false | default | 1 | null |
Every LLM gateway we tested failed at scale - ended up building Bifrost | 0 | When you're building AI apps in production, managing multiple LLM providers becomes a pain fast. Each provider has different APIs, auth schemes, rate limits, error handling. Switching models means rewriting code. Provider outages take down your entire app.
At Maxim, we tested multiple gateways for our production use cases and scale became the bottleneck. Talked to other fast-moving AI teams and everyone had the same frustration - existing LLM gateways couldn't handle speed and scalability together. So we built [Bifrost](https://getmax.im/bifr0st).
**What it handles:**
* **Unified API** \- Works with OpenAI, Anthropic, Azure, Bedrock, Cohere, and 15+ providers. Drop-in OpenAI-compatible API means changing providers is literally one line of code.
* **Automatic fallbacks** \- Provider fails, it reroutes automatically. Cluster mode gives you 99.99% uptime.
* **Performance** \- Built in Go. Mean overhead is just 11ยตs per request at 5K RPS. Benchmarks show 54x faster P99 latency than LiteLLM, 9.4x higher throughput, uses 3x less memory.
* **Semantic caching** \- Deduplicates similar requests to cut inference costs.
* **Governance** \- SAML/SSO support, RBAC, policy enforcement for teams.
* **Native observability** \- OpenTelemetry support out of the box with built-in dashboard.
It's open source and self-hosted.
Anyone dealing with gateway performance issues at scale? | 2025-11-11T20:26:52 | https://www.reddit.com/r/LocalLLaMA/comments/1oukhbx/every_llm_gateway_we_tested_failed_at_scale_ended/ | dinkinflika0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oukhbx | false | null | t3_1oukhbx | /r/LocalLLaMA/comments/1oukhbx/every_llm_gateway_we_tested_failed_at_scale_ended/ | false | false | self | 0 | null |
What's Stopping you from using local AI models more? | 0 | I've been running local models on my M4 Mac, but honestly I keep going back to Claude API. My hardware sits idle most of the time because accessing it remotely is a pain (janky VPN setup). I feel like my workflow with local AI isnโt what I want it to be and is not the alternative for cloud AI APIโs I was expecting.
**I'm curious if others have the same frustrations:**
* Do you feel like remote access isnโt worth the hassle? (VPN or port forwarding)
* Do you feel like youโre pouring too much money into API subscriptions?
* Are you wanting to run bigger models but not having enough compute in one place?
**For teams/companies:**
* How do you handle remote access for distributed teams?
* Do you have idle GPUs/workstations that could be doing more?
* Are rate limits on cloud AI APIโs bottlenecking your teams productivity?
I'm exploring solutions in this space and want to make sure these are real problems before building anything. **Whatโs your setup and biggest local AI frustration?** Any and All insight is much appreciated! | 2025-11-11T20:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1oukfvf/whats_stopping_you_from_using_local_ai_models_more/ | ButterscotchNo102 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oukfvf | false | null | t3_1oukfvf | /r/LocalLLaMA/comments/1oukfvf/whats_stopping_you_from_using_local_ai_models_more/ | false | false | self | 0 | null |
Local, multi-model AI that runs on a toaster. One-click setup, 2GB GPU enough | 53 | This is a desktop program that runs multiple AI models in parallel on hardware most people would consider e-waste. Built from the ground up to be lightweight.
The device only uses a 2GB GPU. If there's a gaming laptop or a mid-tier PC from the last 5-7 years lying around, this will probably run on it.
What it does:
\> Runs 100% offline. No internet needed after the first model download.
\> One-click installer for Windows/Mac/Linux auto-detects the OS and handles setup. (The release is a pre-compiled binary. You only need Rust installed if you're building from source.)
\> Three small, fast models (Gemma2:2b, TinyLlama, DistilBERT) collaborate on each response. They make up for their small size with teamwork.
\> Includes a smart, persistent memory system. Remembers past chats without ballooning in size.
Real-time metrics show the models working together live.
No cloud, no API keys, no subscriptions. The installers are on the releases page. Lets you run three models at once locally.
Check it out here: [https://github.com/ryanj97g/Project\_VI](https://github.com/ryanj97g/Project_VI) | 2025-11-11T20:13:43 | https://www.reddit.com/r/LocalLLaMA/comments/1ouk53u/local_multimodel_ai_that_runs_on_a_toaster/ | VivianIto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouk53u | false | null | t3_1ouk53u | /r/LocalLLaMA/comments/1ouk53u/local_multimodel_ai_that_runs_on_a_toaster/ | false | false | self | 53 | {'enabled': False, 'images': [{'id': 'BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=108&crop=smart&auto=webp&s=38d2520b8094be7319c6718ad18bc571e6df5593', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=216&crop=smart&auto=webp&s=96a2be345adfedffbde99a8601074cdb5b4d00c7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=320&crop=smart&auto=webp&s=172a32d04aefc4e10513e2d08db9f97563a2c595', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=640&crop=smart&auto=webp&s=e94d1f08c6c155f20dd98f4ca8f2004cf6d088d6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=960&crop=smart&auto=webp&s=17e6beabfac3e28168e77479ec42a9a1c7a1af7d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?width=1080&crop=smart&auto=webp&s=0dca43c1e5a6b0379d1ab33ed4400bcd55ce1bc7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BtsjcnJH0TzKbtwN_eIpWsSWYcsoZbqEbSON34sVBEY.png?auto=webp&s=1b0c0965542ffdf1fe508df6cb2830f216f2309c', 'width': 1200}, 'variants': {}}]} |
What approach would be most feasible for my use case | 2 | I want to create a LLM based therapeutic app which utilizes few techniques which are described in a range of books (let's say 10-20). They generally follow an algorithm, but there needs to be understanding by the LLM of where the process is now and how to improvise if needed. The question is what would be the best approach here - some sort of RAG? a fine-tuned model? | 2025-11-11T20:02:18 | https://www.reddit.com/r/LocalLLaMA/comments/1oujtzl/what_approach_would_be_most_feasible_for_my_use/ | n0e83 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oujtzl | false | null | t3_1oujtzl | /r/LocalLLaMA/comments/1oujtzl/what_approach_would_be_most_feasible_for_my_use/ | false | false | self | 2 | null |
Anyone tried Ling/Ring Flash 2.0? | 17 | GGUF support landed about a month ago and both models seem to be of reasonable size with nice benchmark scores.
Has anyone tested these models? In particular how does Ring-Flash-2.0 compare against GLM 4.5 Air and GPT-OSS-120B? | 2025-11-11T20:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/1oujse4/anyone_tried_lingring_flash_20/ | random-tomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oujse4 | false | null | t3_1oujse4 | /r/LocalLLaMA/comments/1oujse4/anyone_tried_lingring_flash_20/ | false | false | self | 17 | null |
Is it even possible to effectively use LLM since GPUs are so expensive? | 0 | I have a bunch of niche messages I want to use to finetune LLM. I was able to finetune it with LoRA on Google Colab, but that's shit. So I started looking around to rent GPU.
To run any useful LLM with above 10B parameters, GPUs are so expensive. Not to talk about keeping GPU running so the model can actually be used.
Is it even worth it? Is it even possible to run LLM for an individual person? | 2025-11-11T18:56:22 | https://www.reddit.com/r/LocalLLaMA/comments/1oui10h/is_it_even_possible_to_effectively_use_llm_since/ | teskabudaletina | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oui10h | false | null | t3_1oui10h | /r/LocalLLaMA/comments/1oui10h/is_it_even_possible_to_effectively_use_llm_since/ | false | false | self | 0 | null |
new guy on llm expert guys can give me advices | 0 | app:LM studฤฑo
LLM:openai/gpt-oss-20b
computer:
cpu:r5 5500
gpu:rtx 3050 8gb vram
ram:ddr4 16gb
i get about 7 to 5 token per second | 2025-11-11T18:50:11 | https://www.reddit.com/r/LocalLLaMA/comments/1ouhuwy/new_guy_on_llm_expert_guys_can_give_me_advices/ | Kerem-6030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouhuwy | false | null | t3_1ouhuwy | /r/LocalLLaMA/comments/1ouhuwy/new_guy_on_llm_expert_guys_can_give_me_advices/ | false | false | self | 0 | null |
Thoughts on what M3 pro Macbook Pro with 18GB of RAM can run? | 0 | I wanna explore having my own local LLM on my macbook, since it's the most powerful computer I have, for this use-case at least.
What are you thought on what I can run on it? I don't need anything that is really powerful. I don't aim to do any mathematics or coding. Just mostly help with research and casual everyday questions, maybe some more personal that I don't want to feed to normal AI chatbots that are out there. Basically I want to replace countless hours of googling for basic questions I have.
What are my options? Is anything possible?
Buying a new machine is out of the question for me.
Thank you lots | 2025-11-11T18:45:19 | https://www.reddit.com/r/LocalLLaMA/comments/1ouhq4u/thoughts_on_what_m3_pro_macbook_pro_with_18gb_of/ | imjustalittleollad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouhq4u | false | null | t3_1ouhq4u | /r/LocalLLaMA/comments/1ouhq4u/thoughts_on_what_m3_pro_macbook_pro_with_18gb_of/ | false | false | self | 0 | null |
[Dataset Release] 13,454 Tool Definitions with I/O Schemas โ
| 1 | Weโve published aย [dataset](https://huggingface.co/datasets/qforge/Tool-w-Output)ย ofย **13,454 tool definitions**, each withย **JSON Schema**ย for bothย **input parameters**ย andย **output structures**.
We used theย [ToolACE](https://huggingface.co/datasets/Team-ACE/ToolACE)ย dataset as our base - we extracted tools from conversation messages, filtered and normalized the extractions, then validated each tool with an LLM and generated an output schema for each.
Every feedback is welcome ๐ | 2025-11-11T18:37:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ouhilw/dataset_release_13454_tool_definitions_with_io/ | ManagementMore2689 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouhilw | false | null | t3_1ouhilw | /r/LocalLLaMA/comments/1ouhilw/dataset_release_13454_tool_definitions_with_io/ | false | false | self | 1 | null |
what's the best open weight alternative to nano banana these days | 1 | with the release of the apple instruction dataset, and nano banana being out for a while, is there a good open weight model that does editing as well as nano banana? | 2025-11-11T18:33:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ouheod/whats_the_best_open_weight_alternative_to_nano/ | xSnoozy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouheod | false | null | t3_1ouheod | /r/LocalLLaMA/comments/1ouheod/whats_the_best_open_weight_alternative_to_nano/ | false | false | self | 1 | null |
I built a tool that maps and visualizes backend codebases | 19 | For some weeks, Iโve been trying to solve the problem of how to make LLMs actually understand a codebase architecture. Most coding tools can generate good code, but they donโt usually get how systems fit together.
https://preview.redd.it/6n870x947o0g1.png?width=2556&format=png&auto=webp&s=9a4625070306da3d852acaa8a48a6e4e428299a1
So I started working onย a solution, a tool that parses backend codebases (FastAPI, Django, Node, etc.) into aย semantic graph. It maps every endpoint, service, and method as nodes, and connects them through their relationships, requests, dependencies, or data flows. From there, it canย visualize backend like a living system. Then I found out this might be useful for engineers instead of LLMs, as a way to rapidly understand a codebase.
The architecture side looks a bit like an interactive diagramming tool, but everything is generated automatically from real code. You can ask it things likeย *โShow me everything that depends on the auth routerโ*ย orย *โExplain how does the parsing works?โ*ย and it will generate a node map of the focalized query.
https://preview.redd.it/pff5x7uc7o0g1.png?width=2512&format=png&auto=webp&s=a689dd64f90616daa6ba138c80c920b70ca3f589
https://preview.redd.it/mwk5rzce7o0g1.png?width=682&format=png&auto=webp&s=0091b437372ef4e02e2b1772ef801ce21cddf7d7
Iโm also working in a PR review engineย that uses the graph to detect when a change might affect another service (e.g., modifying a shared database method). And because it understands system context, it can connect throughย MCPย to AI tools like Claude or Cursor, in an effort to make them โarchitecture-aware.โ
Iโm mostly curious to hear if others have tried solving similar problems, or if you believe this is a problem at all, especially around codebase understanding, feature planning, or context-aware AI tooling.
Built with FastAPI, Tree Sitter, Supabase, Pinecone, and a React/Next.js frontend.
Would love to get feedback or ideas on what youโd want a system like this to do. | 2025-11-11T18:24:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ouh5c1/i_built_a_tool_that_maps_and_visualizes_backend/ | Weary-Commercial-922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouh5c1 | false | null | t3_1ouh5c1 | /r/LocalLLaMA/comments/1ouh5c1/i_built_a_tool_that_maps_and_visualizes_backend/ | false | false | 19 | null | |
What happened with Kimi Linear? | 10 | It's been out for a bit, is it any good? It looks like Llama.cpp support is currently lacking | 2025-11-11T18:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/1ouh46d/what_happened_with_kimi_linear/ | TokenRingAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouh46d | false | null | t3_1ouh46d | /r/LocalLLaMA/comments/1ouh46d/what_happened_with_kimi_linear/ | false | false | self | 10 | null |
cool adversarial sweatshirt | 0 | 2025-11-11T18:09:05 | xSnoozy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ougq77 | false | null | t3_1ougq77 | /r/LocalLLaMA/comments/1ougq77/cool_adversarial_sweatshirt/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'ojyt20ys4o0g1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/ojyt20ys4o0g1.png?width=108&crop=smart&auto=webp&s=c4d53fb68215387b4371ce635682ccb8e4e85fcf', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/ojyt20ys4o0g1.png?width=216&crop=smart&auto=webp&s=fdc5b504ec1367512cd9f3dedd4030461148e436', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/ojyt20ys4o0g1.png?width=320&crop=smart&auto=webp&s=7b7ff5e9f5ba97769f27894114d15906da71e7ae', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/ojyt20ys4o0g1.png?width=640&crop=smart&auto=webp&s=b5cc91e010bcd9d5a6e3f2d0e9eb5c30991d90c5', 'width': 640}], 'source': {'height': 372, 'url': 'https://preview.redd.it/ojyt20ys4o0g1.png?auto=webp&s=7015f2faa519cb719542ce8cc74dfd6ba0c386c5', 'width': 661}, 'variants': {}}]} | ||
when it's everyone for themselves i know which defense ill be using | 420 | 2025-11-11T18:08:16 | xSnoozy | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ougpdn | false | null | t3_1ougpdn | /r/LocalLLaMA/comments/1ougpdn/when_its_everyone_for_themselves_i_know_which/ | false | false | default | 420 | {'enabled': True, 'images': [{'id': 'alfdugxo4o0g1', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/alfdugxo4o0g1.jpeg?width=108&crop=smart&auto=webp&s=2245551e682945509315f1f2c5cec724df44cdcb', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/alfdugxo4o0g1.jpeg?width=216&crop=smart&auto=webp&s=ac50934e5714cb2757feea278cf144946960ca09', 'width': 216}, {'height': 230, 'url': 'https://preview.redd.it/alfdugxo4o0g1.jpeg?width=320&crop=smart&auto=webp&s=821d5b86add26a7368a8266c7f9c28df0c7d9cb0', 'width': 320}, {'height': 460, 'url': 'https://preview.redd.it/alfdugxo4o0g1.jpeg?width=640&crop=smart&auto=webp&s=a82bf3768bf7d6065ae860ea0f6e3f2fbdfaf37e', 'width': 640}], 'source': {'height': 460, 'url': 'https://preview.redd.it/alfdugxo4o0g1.jpeg?auto=webp&s=0019afbae2df9d2418a62492b9eee91b880c8b59', 'width': 640}, 'variants': {}}]} | ||
Individual models (or data sets) for multi-GPU setups using nerfed PCI-E lane options? | 1 | Noob here. I'm considering dusting off a decommissioned ocotomining rig to mess around with and understand these motherboards are sub-optimal due to the PCI-E bandwidth for large models. The one in question has 1 xPCIE3x16 & 7xPICE2x16@x1.....1151 socket, single DDR4 dimm.
I figured a use case for different tasks per GPU could work if a separate model for each is individually loaded (mining can be assigned that way now a days). As I understand it, the most painful part would be the loading time in the slow lanes but I could live with that if the model could remain loaded indefinitely until called on.
Is this a feasible ask w/the socket and single RAM limitations as long as I don't let it off-load to the CPU? IOW, can I run 8 tasks on all GPU w/o the CPU/ram becoming an issue?
Secondly, I understand something like this is fairly common w/smaller boards, where the best card is installed in the fastest PCI slot and secondaries in other slots to run larger models. As I understand it, tensor parallelism (whatever that means) is sub-optimal as it requires constant communication between GPUs. Could a large task be divorced for all GPUs and consolidated after each GPU is done w/their task instead?
some article I read:
[https://www.digitalocean.com/community/tutorials/splitting-llms-across-multiple-gpus](https://www.digitalocean.com/community/tutorials/splitting-llms-across-multiple-gpus)
Thank you! | 2025-11-11T18:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1ougl1d/individual_models_or_data_sets_for_multigpu/ | TendieRetard | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ougl1d | false | null | t3_1ougl1d | /r/LocalLLaMA/comments/1ougl1d/individual_models_or_data_sets_for_multigpu/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw', 'resolutions': [{'height': 62, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?width=108&crop=smart&auto=webp&s=4eba567163c999e1ff79d7ec1b198d33b352941d', 'width': 108}, {'height': 124, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?width=216&crop=smart&auto=webp&s=5558d40d4b7c4a309106a78f5dda01b1740addfa', 'width': 216}, {'height': 184, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?width=320&crop=smart&auto=webp&s=f019d21c30e9d322dd489c16ddb6a1895113e806', 'width': 320}, {'height': 368, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?width=640&crop=smart&auto=webp&s=37ef3cd7d6086d9f62a5bfd4e94eb96a9ed90800', 'width': 640}, {'height': 552, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?width=960&crop=smart&auto=webp&s=408230ca7e02851e33bf2a59f6cf0332769831a7', 'width': 960}], 'source': {'height': 608, 'url': 'https://external-preview.redd.it/0FmGpp1FKGPfw8mxKePpwDneWCaMjZUHWR0k1-OIViw.png?auto=webp&s=697cc94bc405cb01594a7d9da833ec1b841ae502', 'width': 1056}, 'variants': {}}]} |
Emploi | 0 | Je participe sur 2 concours sur kaggle l'un des plus difficiles au monde. Arc prize 2025 et autres. Je suis capable d'รฉcrire 200 lignes de Langages python. Et la comparaissait jusqu'ร 20 lignes. Un exploit remarquable non ? Mon objectif renforcรฉ les modรจles LLM jusqu'ร l'AGI. Pourtant je pas de travail ๐๐. Tu veux augmenter,รฉconomiser renforcรฉ votre dรฉveloppement llm ? Laisser un commentaire. | 2025-11-11T17:57:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ougejc/emploi/ | Ambitious-Age-6054 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ougejc | false | null | t3_1ougejc | /r/LocalLLaMA/comments/1ougejc/emploi/ | true | false | spoiler | 0 | null |
Qwen-Image escapes detection. LinkedIn now tells you when you're looking at an AI-generated image. | 4 | As the 1st image shows, the C2PA label is used.
Here's what's interesting.
**The feature only applies to image platforms who join the C2PA.**
Now there's only:
* ChatGPT/DALL-E 3 images
* Adobe Firefly images
* Leica Camera images
* BBC news images
The 2nd image, generated byย [Qwen-Image](https://www.netmind.ai/modelsLibrary/Qwen-Image), does not have the label.
What's even more interesting?
**It's easy to bypass this new rule.**ย
You just need to upload the screenshot of the AI-generated pic, as we did with the 3rd image, a screenshot of the 1st one.
Do you think more AI image platforms, like Google, will join C2PA?
**Edit:**ย Pixel photos now support both SynthID and C2PA, but SyntthID acts as a complementary backup mainly for Al-generated or edited content. The C2PA tags (just added in Sept.) are mainly here for provenance tracking.
P.S. The post is reposted as it was removed after getting 80+ upvotes yesterday. | 2025-11-11T17:57:06 | MarketingNetMind | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ouge24 | false | null | t3_1ouge24 | /r/LocalLLaMA/comments/1ouge24/qwenimage_escapes_detection_linkedin_now_tells/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'cNcVyAdCgxzkDsu0y1yNCVwFgwGM-WfNNzfxQNOYxMg', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=108&crop=smart&auto=webp&s=d75011516e997712b0e0e0bd1043b9f83a99f424', 'width': 108}, {'height': 146, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=216&crop=smart&auto=webp&s=532dc3ac49dd0b6263883cfdf05109165916fe75', 'width': 216}, {'height': 216, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=320&crop=smart&auto=webp&s=b5c44f71fc46f922ea2d2dc278fcfc157defd574', 'width': 320}, {'height': 433, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=640&crop=smart&auto=webp&s=9bd3fdf800421c7857baf4c9e510d2458d804088', 'width': 640}, {'height': 650, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=960&crop=smart&auto=webp&s=a8e87d04eb49f600806c106b75fb42c1c2bd5cff', 'width': 960}, {'height': 731, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?width=1080&crop=smart&auto=webp&s=dff260d1c9c772070365cf0565dfa058413f103f', 'width': 1080}], 'source': {'height': 1192, 'url': 'https://preview.redd.it/0u2dzcsl2o0g1.png?auto=webp&s=7b17592db204f4586b1dc894add19a3db269e188', 'width': 1760}, 'variants': {}}]} | ||
gpt-oss-120b on Cerebras | 844 | gpt-oss-120b reasoning CoT on Cerebras be like | 2025-11-11T17:53:30 | Corporate_Drone31 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ougamx | false | null | t3_1ougamx | /r/LocalLLaMA/comments/1ougamx/gptoss120b_on_cerebras/ | false | false | 844 | {'enabled': True, 'images': [{'id': 'ehZk74OnzzWULSRPxaa3Nrzmp2eGgVgTUTSO5TMl7aI', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/qkygjyoz1o0g1.png?width=108&crop=smart&auto=webp&s=8a30697d9e9aaab05c8244527a595c29c6540377', 'width': 108}, {'height': 214, 'url': 'https://preview.redd.it/qkygjyoz1o0g1.png?width=216&crop=smart&auto=webp&s=7e4d39306690d715cfee36dfc881509ee8348026', 'width': 216}, {'height': 318, 'url': 'https://preview.redd.it/qkygjyoz1o0g1.png?width=320&crop=smart&auto=webp&s=d0028679c0cac64d9ce3f55e2a3aad86019108bc', 'width': 320}], 'source': {'height': 497, 'url': 'https://preview.redd.it/qkygjyoz1o0g1.png?auto=webp&s=106eeb9124ddbe7f605c9cbc7fddeba330102d70', 'width': 500}, 'variants': {}}]} | ||
Need help figuring out why multimodal API calls to vLLM server have worse outputs than using Open WebUI. | 2 | I'm serving Qwen3-VL through vLLM to OCR, but the output from the API call is different from doing it from the open webui frontend. I can't seem to figure out why, as the API call is doing lossless png in base64 with the same parameters ( temperature=0, max_tokens = 128, top_p=1), but somehow the API keeps giving worse outputs for some iamges with the same vLLM server. I'm using the same system prompt for both, with the API call having the correct system role for it. | 2025-11-11T17:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ouft9q/need_help_figuring_out_why_multimodal_api_calls/ | Majesticeuphoria | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouft9q | false | null | t3_1ouft9q | /r/LocalLLaMA/comments/1ouft9q/need_help_figuring_out_why_multimodal_api_calls/ | false | false | self | 2 | null |
Unlimited Cloud this week on Observer as a Thank You to r/LocalLLaMA! Free and local, now and forever after. | 12 | TLDR: Saved up some money to give you guysย unlimited cloud access as a **Thank You** and to stress test it. **Comment an agent idea or feedback,** i'll DM you the unlimited access link, and **build stuff**! It's Free for Local Inference now and always <3
Observer lets you build micro-agents that **watch your screen, camera and microphone and trigger actions** \- allย running locally with your own models.
Heyย [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/),
Okay so... I posted two days ago and it got downvoted because I sounded like a SaaS trying to trap people. That's completely on me! I've been talking to investors lately and had my "business brain" on (not very developed hahaha), but I shouldn't talk to you guys like that. I'm sorry!
So let me be super clear:ย **Observer is free and open-source. Forever.**ย If you compile it yourself, point it at your local llama.cpp server, and use Discord notifications (which go straight from your computer to Discord), I literally have no way of knowing you exist.ย **That's by design.** Privacy-first means privacy-first.
But here's the thing: I built an optional cloud backend so people whoย **don't run LLMs**ย on their machines have a convenient option. And this week I need to stress test it. I saved up for API costs specifically soย [r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/)ย could use it for free this week - because if I'm giving anyone free unlimited access, it's **you guys who supported this thing from the beginning.**
What I'm asking:
\- **Comment a cool agent idea** (seeing them is honestly my favorite part) and i'll **DM you the link** that gives you unlimited access.
\- Try building some agents (local or cloud, whatever you want!)
\- Please don't abuse it - I saved up for this but I'm not Bezos ๐
Some agent ideas from the last post to get you started:
\- "While a tuner connected to my microphone is listening to my practicing session on my violin I would like to get a ping by the AI everytime I'm out of tune by a particular cent parameter!" -ย [philosophissima](https://www.reddit.com/user/philosophissima/)
\- "I'd like to use it to monitor email for certain keywords and notify different contacts based on the content" -ย [IbetitsBen](https://www.reddit.com/user/IbetitsBen/)
\- "Ping my phone when the UPS van stops outside, but not the USPS one. I need to sign for a package."ย [\_\_JockY\_\_](https://www.reddit.com/user/__JockY__/)
\- Track long-running processes and notify when complete - i use this almost every day
\- Literally anything that involves **"watch this thing and tell me when X happens"**
**Just drop a comment with what you want to build** and I'll DM you unlimited cloud access. Or if you want to go full local, the GitHub has all the instructions.
Thanks for everything, I genuinely just want to see what this community builds and make sure the infrastructure can handle it.
Thanks for being patient with me, i'm just a guy learning and building cool stuff for you guys! :)
Roy
GitHub:ย [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)
WebApp: [https://app.observer-ai.com/](https://app.observer-ai.com/) | 2025-11-11T17:31:48 | https://www.reddit.com/r/LocalLLaMA/comments/1oufp8s/unlimited_cloud_this_week_on_observer_as_a_thank/ | Roy3838 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oufp8s | false | null | t3_1oufp8s | /r/LocalLLaMA/comments/1oufp8s/unlimited_cloud_this_week_on_observer_as_a_thank/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': '-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=108&crop=smart&auto=webp&s=2763f5b07d8000852738cc8bbf6420bc7a793d3e', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=216&crop=smart&auto=webp&s=3b05a1b6a5908644048b0f050c15a00d2bc5d9ed', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=320&crop=smart&auto=webp&s=d3f92bd549dcbc4e5089662042619f32c668a07e', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=640&crop=smart&auto=webp&s=29ad9d2d4f08a9916f026e28e9a30fd6d1711d5d', 'width': 640}, {'height': 538, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=960&crop=smart&auto=webp&s=adab929349280395b1958efc7ed9e9e58d447654', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?width=1080&crop=smart&auto=webp&s=f25dcc577bb8503c95962d2f130fa431008cd692', 'width': 1080}], 'source': {'height': 2260, 'url': 'https://external-preview.redd.it/-VzCfNi8ctqCoss6ttS1cBf0psUHAMSGYDmGAfW9QsA.png?auto=webp&s=a9ed130dbe40bc283accf677a568089896baa4f1', 'width': 4030}, 'variants': {}}]} |
R7615 + Blackwell: compatible in practice even if not โcertifiedโ? | 1 | Hi all, Iโm VRAM-bound on some larger models and local LLM workloads. Theย **RTX 6000 Blackwell 96GB (server 300W)**ย looks perfect, but Dell says itโsย **not (yet) certified**ย for the R7615, however an employee also told me they never test/certify stuff for older gen servers.
**Questions for folks whoโve tried similar combos:**
1. Has anyone actually dropped aย **Blackwell 6000 (96GB, 300W)**ย into anย **R7615**ย and had it POST + run stable?
2. **Power & cabling**: did you use the Dellย **470-BCBY (12VH/12VHPWR)**ย harness, or something else? Did iDRAC/BIOS let you set theย **slot power limit**ย cleanly for 300W cards?
3. **Firmware/driver**ย notes: any issues with recent iDRAC/BIOS, PCIe Gen5 link training on Slot 7, or device-ID weirdness/whitelisting (Dell usually doesnโt, butโฆ)?
**Why I think it**ย ***might***ย **work despite โnot certifiedโ:**
* R7615 has theย **power/slots**ย for multiple double-wide accelerators.
* Blackwell server edition is 600 watt but can be configured to targetย **300W**, which is within R7615 GPU power envelopes.
**Other path (feedback welcome):**
* A secondย **L40S**, this is supported but it lacks NVlink therefor it wil be slow I guess when splitting a large model with VLLM over 2 GPU's?
**What Iโm after:**
First-hand success/failure reports, photos, slot/cable part numbers that worked, required BIOS/iDRAC settings, and any thermal lessons learned. If youโve done this inย **R7615**ย (or close cousinsย **R7625/R760xa**), Iโd love to hear the details.
Thanks! | 2025-11-11T17:05:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ouezha/r7615_blackwell_compatible_in_practice_even_if/ | sjoerdmaessen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouezha | false | null | t3_1ouezha | /r/LocalLLaMA/comments/1ouezha/r7615_blackwell_compatible_in_practice_even_if/ | false | false | self | 1 | null |
Detecting jailbreaks and prompt leakage in local LLM setups | 0 | Iโve been exploring how to detect prompt leakage and jailbreak attempts in LLM-based systems, especially in local or self-hosted setups.
The idea Iโm testing: a lightweight API that could help teams and developers
* detect jailbreak attempts and risky prompt structures
* analyze and score prompt quality
* support QA/test workflows for local model evaluation
Iโm curious how others here approach this:
* Have you seen prompt leakage when testing local models?
* Do you have internal tools or scripts to catch jailbreaks?
Iโd love to learn how the community is thinking about prompt security.
(Also set up a simple landing for anyone interested in following the idea or sharing feedback: [assentra](https://assentra.app)) | 2025-11-11T17:02:37 | https://www.reddit.com/r/LocalLLaMA/comments/1ouewcb/detecting_jailbreaks_and_prompt_leakage_in_local/ | Ok_Possibility5692 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouewcb | false | null | t3_1ouewcb | /r/LocalLLaMA/comments/1ouewcb/detecting_jailbreaks_and_prompt_leakage_in_local/ | false | false | self | 0 | null |
R7615 + Blackwell: compatible in practice even if not โcertifiedโ? | 1 | 2025-11-11T17:01:01 | https://www.reddit.com/gallery/1oueunx | sjoerdmaessen | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oueunx | false | null | t3_1oueunx | /r/LocalLLaMA/comments/1oueunx/r7615_blackwell_compatible_in_practice_even_if/ | false | false | 1 | null | ||
Agentic RAG: from Zero to Hero | 42 | Hi everyone,
After spending several months building agents and experimenting with RAG systems, I decided to publish a GitHub repository to help those who are approaching agents and RAG for the first time.
I created an **agentic RAG** with an educational purpose, aiming to provide a clear and practical reference. When I started, I struggled to find a single, structured place where all the key concepts were explained. I had to gather information from many different sourcesโand thatโs exactly why I wanted to build something more accessible and beginner-friendly.
---
## ๐ What youโll learn in this repository
An end-to-end walkthrough of the essential building blocks:
- **PDF โ Markdown conversion**
- **Hierarchical chunking** (parent/child structure)
- **Hybrid embeddings** (dense + sparse)
- **Vector storage** of chunks using *Qdrant*
- **Parallel multi-query handling** โ ability to generate and evaluate multiple queries simultaneously
- **Query rewriting** โ automatically rephrases unclear or incomplete queries before retrieval
- **Human-in-the-loop** to clarify ambiguous user queries
- **Context management** across multiple messages using summarization
- A **fully working agentic RAG** using LangGraph that retrieves, evaluates, corrects, and generates answers
- **Simple chatbot** using Gradio library
---
I hope this repository can be helpful to anyone starting their journey.
Thanks in advance to everyone who takes a look and finds it useful! [Github repo](https://github.com/GiovanniPasq/agentic-rag-for-dummies) | 2025-11-11T16:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/1oueq51/agentic_rag_from_zero_to_hero/ | CapitalShake3085 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oueq51 | false | null | t3_1oueq51 | /r/LocalLLaMA/comments/1oueq51/agentic_rag_from_zero_to_hero/ | false | false | self | 42 | {'enabled': False, 'images': [{'id': 'OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=108&crop=smart&auto=webp&s=414552e428f2e8972ff0d8b8f25d403a4564362b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=216&crop=smart&auto=webp&s=f9107ad04a190a42f1936f0def5cd23ab15a125f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=320&crop=smart&auto=webp&s=88efb40f862e692a2661d0ad3ec16ed08c001ae4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=640&crop=smart&auto=webp&s=7c8f4b58c005a89e2acf72eb32ea4add652f2e35', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=960&crop=smart&auto=webp&s=41ca3d718ede66ff172fc4326ead9c037caa40cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?width=1080&crop=smart&auto=webp&s=1540be23226f31e6ac7c8d507b4463355494cc2b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OrJZ8TZZ96Uyy9NK7yAtRuz-kNWVwJVWUnx06qrotZI.png?auto=webp&s=80f703464be424f0ade5700cacc7ba1847a5280e', 'width': 1200}, 'variants': {}}]} |
Half-trillion parameter model on a machine with 128 GB RAM + 24 GB VRAM | 222 | Hi everyone,
just wanted to share that Iโve successfully run **Qwen3-Coder-480B** on **llama.cpp** using the following setup:
* **CPU:** Intel i9-13900KS
* **RAM:** 128 GB
* **GPU:** RTX 4090 (24 GB VRAM)
Iโm using the **4-bit and 3-bit Unsloth quantizations** from Hugging Face: [https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF](https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruct-GGUF)
**Performance results:**
* **UD-Q3\_K\_XL:** \~2.0 tokens/sec (generation)
* **UD-Q4\_K\_XL:** \~1.0 token/sec (generation)
**Command lines used (llama.cpp):**
`llama-server \`
`--threads 32 --jinja --flash-attn on \`
`--cache-type-k q8_0 --cache-type-v q8_0 \`
`--model <YOUR-MODEL-DIR>/Qwen3-Coder-480B-A35B-Instruct-UD-Q3_K_XL-00001-of-00005.gguf \`
`--ctx-size 131072 --n-cpu-moe 9999 --no-warmup`
`llama-server \`
`--threads 32 --jinja --flash-attn on \`
`--cache-type-k q8_0 --cache-type-v q8_0 \`
`--model <YOUR-MODEL-DIR>/Qwen3-Coder-480B-A35B-Instruct-UD-Q4_K_XL-00001-of-00006.gguf \`
`--ctx-size 131072 --n-cpu-moe 9999 --no-warmup`
**Important:** The *--no-warmup* flag is **required** \- without it, the process will terminate before you can start chatting.
In short: yes, itโs possible to run a **half-trillion parameter model** on a machine with **128 GB RAM + 24 GB VRAM**! | 2025-11-11T16:49:09 | https://www.reddit.com/r/LocalLLaMA/comments/1oueiuj/halftrillion_parameter_model_on_a_machine_with/ | pulse77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oueiuj | false | null | t3_1oueiuj | /r/LocalLLaMA/comments/1oueiuj/halftrillion_parameter_model_on_a_machine_with/ | false | false | self | 222 | {'enabled': False, 'images': [{'id': 'gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=108&crop=smart&auto=webp&s=7ad6d1b6c4559472693b7af1de31e24e4a8023a3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=216&crop=smart&auto=webp&s=1145966d2cde6471e76bb43f495683a63b013b72', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=320&crop=smart&auto=webp&s=d7fff728a74e01125301fc6c9d2699680540ef0a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=640&crop=smart&auto=webp&s=60e9638973c964d4a82b7f30f192158867f7fc48', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=960&crop=smart&auto=webp&s=350210cdc15e1c044856de883fd8d259a90dd1f0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?width=1080&crop=smart&auto=webp&s=9863761bbb4db313f92685d42bb3689971cd9fe8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gwBpjvEsSJP9TEvm8JruyOhsY560jiGO-1v9Go-RAwk.png?auto=webp&s=b0e606fe60c3b427cf1340db7a0ca6006dff3e57', 'width': 1200}, 'variants': {}}]} |
Bandits in your LLM Gateway: Improve LLM Applications Faster with Adaptive Experimentation (A/B Testing) [Open Source] | 1 | 2025-11-11T16:42:43 | https://www.tensorzero.com/blog/bandits-in-your-llm-gateway/ | bianconi | tensorzero.com | 1970-01-01T00:00:00 | 0 | {} | 1ouecjm | false | null | t3_1ouecjm | /r/LocalLLaMA/comments/1ouecjm/bandits_in_your_llm_gateway_improve_llm/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=108&crop=smart&auto=webp&s=7f316b890b2a31a8f62865e9dee0569e96f0223c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=216&crop=smart&auto=webp&s=00f1de77a5649a79c91d9cfaf6e03bf21f107026', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=320&crop=smart&auto=webp&s=2ca81dda9abf4ec9e6bfb889114a5c077769d765', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=640&crop=smart&auto=webp&s=5a7cae50b6f64366d7ac07d9f8dfc0a821ddf0b8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=960&crop=smart&auto=webp&s=99b7c53dad6f4445fd39ac50a99d95ff14c145bc', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?width=1080&crop=smart&auto=webp&s=1493755ef1337b07c1305234f8696c55d8bf1c05', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/K6oE2iFN3gFHan0D_K76AtC_ODA7iPRGsKR0GYqQFTU.png?auto=webp&s=b637a9ae4b2efc64add1e2ceadf2fc8d033def18', 'width': 1200}, 'variants': {}}]} | |
MoE expert distributions for Kimi K2 thinking? | 5 | Does anyone have any idea what the expert distribution is for kimi k2 thinking? Would be good to know to estimate memory usage + performance. Ie, is the model using the same 8 experts across many tokens in a single task or does it regularly touch all ~300 experts | 2025-11-11T16:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1ouea52/moe_expert_distributions_for_kimi_k2_thinking/ | Stunning-Document-53 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ouea52 | false | null | t3_1ouea52 | /r/LocalLLaMA/comments/1ouea52/moe_expert_distributions_for_kimi_k2_thinking/ | false | false | self | 5 | null |
Built a Local LLM Benchmark Tool for Real Tests โ Any thoughts? | 1 | [removed] | 2025-11-11T16:39:18 | https://www.reddit.com/gallery/1oue98n | pkc0229 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oue98n | false | null | t3_1oue98n | /r/LocalLLaMA/comments/1oue98n/built_a_local_llm_benchmark_tool_for_real_tests/ | false | false | 1 | null | |
Simple Video Inference GUI | 3 | Looking for a simple GUI that allows for uploading videos for processing with Qwen 3 VL locally. Thanks. | 2025-11-11T16:37:42 | https://www.reddit.com/r/LocalLLaMA/comments/1oue7oe/simple_video_inference_gui/ | TheDailySpank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oue7oe | false | null | t3_1oue7oe | /r/LocalLLaMA/comments/1oue7oe/simple_video_inference_gui/ | false | false | self | 3 | null |
Would Kimi K2 Thinking be decent at 2.5-3.5bpw quant range, given it is native 4 bits? Like ~3bpw and above for DeepSeek models that are native 8 bit. | 5 | Hello guys, hoping you're fine.
I was wondering, given that Kimi K2 thinking is a native 4bit model, would a quantization not lobotomize that much in the 2.5-3.5 bpw range (like Q2\_M to Q3\_M size on lcpp terms)?
It was discussed that on the case of DeepSeek models, 3bpw and a bit higher (like IQ3\_XXS and such) are pretty good despite being a quite substantial quantization.
What do you guys think? Have you tried a Kimi K2 Thinking quant? I'm trying Q2\_K\_XL (which is 3bpw) locally and it seems to be pretty good, but I can't run native 4bpw/4bit to compare. | 2025-11-11T16:26:58 | https://www.reddit.com/r/LocalLLaMA/comments/1oudxai/would_kimi_k2_thinking_be_decent_at_2535bpw_quant/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oudxai | false | null | t3_1oudxai | /r/LocalLLaMA/comments/1oudxai/would_kimi_k2_thinking_be_decent_at_2535bpw_quant/ | false | false | self | 5 | null |
Nice to meet you, everyone | 0 | Playing with llm by myself and making a post for the first time.
I look forward to your kind cooperation. | 2025-11-11T16:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/1oudold/nice_to_meet_you_everyone/ | pkc0229 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oudold | false | null | t3_1oudold | /r/LocalLLaMA/comments/1oudold/nice_to_meet_you_everyone/ | false | false | self | 0 | null |
Anyone tried the kaggle winning nemoskills model? | 2 | nemoskills won the AI Mathematical Olympiad - Progress Prize 2
[https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/writeups/nemoskills-1st-place-solution-nemoskills](https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2/writeups/nemoskills-1st-place-solution-nemoskills)
Has anyone tried it themselves? How did you find it on your math problems? | 2025-11-11T16:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1oudg4f/anyone_tried_the_kaggle_winning_nemoskills_model/ | MrMrsPotts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oudg4f | false | null | t3_1oudg4f | /r/LocalLLaMA/comments/1oudg4f/anyone_tried_the_kaggle_winning_nemoskills_model/ | false | false | self | 2 | null |
What LLM would best suit my specs and purpose to run locally? And would you recommend RAG or Fine-Tuning the model? | 0 | Hi everyone,
I recently build my own personal computer for gaming and experimenting with AI.
The specs are:
\- CPU: 9950x3d
\- GPU: rtx 5090 (aorus master) 32g VRAM (no more room for one extra in the lian li o11 vision compact unfortunatly hehe, not that I have to money tho, flat broke rn but the build is a beauty!)
\- 64G RAM (2x32gb, with possibility to upgrade to 128gb)
\- 4tb SSD (with more open slots)
I am working as a research assistent at a law department and I want to utilize AI to increase research productivity and help writing papers. Unfortunatly the current 'vanilla' LLMs aren't really trained on my domain and they hallucinate quite often. I am new in running LLMs locally and I want to see what the possibilities are in using AI for research purposes. As you might understand, the AI should be very accurate with his answers and able to adapt to new information given. I have the idea to use RAG over fine-tuning a model, so I can fill the dataset with laws, case law and other relevant documents like implementations, examples and opinions.
What would you recommend me as a model to start experimenting with, and what would be the best way to use RAG and set up the database? Like when laws get changed, should I delete the data or some how let the system know the law was changed, and if I add new case law that might contradict with previous judgements.
And if this all would be a success, what would you recommend me when I want to upgrade in the future, move to a better model, try different ones, maybe even upgrading my specs?
I am a newbie in the field, so if you have any tips to grow my knowledge on the subject, you're welcome to speak! I hope you can give me a push in the right direction. | 2025-11-11T15:51:54 | https://www.reddit.com/r/LocalLLaMA/comments/1oucyt9/what_llm_would_best_suit_my_specs_and_purpose_to/ | nynhi_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oucyt9 | false | null | t3_1oucyt9 | /r/LocalLLaMA/comments/1oucyt9/what_llm_would_best_suit_my_specs_and_purpose_to/ | false | false | self | 0 | null |
Gaming & AI - 5090 or two 3090s? | 1 | Getting into local LLMs and whatnot, curious what the community would recommend for someone wanting to use one new build for gaming and AI usage.
Last PC is very old at this point, has a 1080 in it from 10 yrs or so ago! | 2025-11-11T15:29:02 | https://www.reddit.com/r/LocalLLaMA/comments/1oucdaj/gaming_ai_5090_or_two_3090s/ | ConflictNo4814 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oucdaj | false | null | t3_1oucdaj | /r/LocalLLaMA/comments/1oucdaj/gaming_ai_5090_or_two_3090s/ | false | false | self | 1 | null |
The Case That A.I. Is Thinking, The trust collapse: Infinite AI content is awful and many other LLM related links from Hacker News | 1 | [removed] | 2025-11-11T15:11:13 | https://www.reddit.com/r/LocalLLaMA/comments/1oubwqj/the_case_that_ai_is_thinking_the_trust_collapse/ | alexeestec | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oubwqj | false | null | t3_1oubwqj | /r/LocalLLaMA/comments/1oubwqj/the_case_that_ai_is_thinking_the_trust_collapse/ | false | false | self | 1 | null |
To be... | 0 | Will ollama , become something like mtsy and so ? ๐ค , lately it is introducing cloud things but with avier models | 2025-11-11T15:08:04 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oubtt2 | false | null | t3_1oubtt2 | /r/LocalLLaMA/comments/1oubtt2/to_be/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'prqirkrj8n0g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/prqirkrj8n0g1.png?width=108&crop=smart&auto=webp&s=d4f8dd607103d6445dc314ed7ee46f45f7c6a71e', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/prqirkrj8n0g1.png?width=216&crop=smart&auto=webp&s=4f7a88854a0354456d4ee7fa43f5b88bca6ea597', 'width': 216}], 'source': {'height': 225, 'url': 'https://preview.redd.it/prqirkrj8n0g1.png?auto=webp&s=0fa78727a72223ee16f6f01a709cfbe5db6ae65d', 'width': 225}, 'variants': {}}]} | |
Qwen-images escape detection. LinkedIn now tells you when you're looking at an AI-generated image. | 0 | As the 1st image shows, Qwen-image generated pic escaped from Linkedin's CR label. But from the left of the 2nd image, the C2PA label was used in GPT generated one.
Here's what's interesting.
**The feature only applies to image platforms who join the C2PA.**
Now there's only:
* ChatGPT/DALL-E 3 images
* Adobe Firefly images
* Leica Camera images
* BBC news images
The top-right in 2nd image, generated byย [Google's Nano Banana](https://www.netmind.ai/modelsLibrary/nano-banana), does not have the label.
What's even more interesting?
**It's easy to bypass this new rule.**ย
You just need to upload the screenshot of the AI-generated pic, as we did with the bottom-right in the 2nd image, a screenshot of the left one.
Do you think more AI image platforms, like Google, will join C2PA?
**Edit:**ย Google Pixel photos now support both SynthID and C2PA, but SyntthID acts as a complementary backup mainly for Al-generated or edited content. The C2PA tags (just added in Sept.) are mainly here for provenance tracking.
| 2025-11-11T15:07:29 | https://www.reddit.com/gallery/1oubtar | MarketingNetMind | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1oubtar | false | null | t3_1oubtar | /r/LocalLLaMA/comments/1oubtar/qwenimages_escape_detection_linkedin_now_tells/ | false | false | 0 | null | |
To be... | 1 | Will ollama , become something like mtsy and so ? ๐ค , lately it is introducing cloud things but with avier models | 2025-11-11T15:06:32 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1oubsfd | false | null | t3_1oubsfd | /r/LocalLLaMA/comments/1oubsfd/to_be/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '8jey78v98n0g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/8jey78v98n0g1.png?width=108&crop=smart&auto=webp&s=42b74374046566965ab47729b8a1fe4f5827f944', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/8jey78v98n0g1.png?width=216&crop=smart&auto=webp&s=11482d9243277a5ee5d45fa5f57d2d7e5916d3a0', 'width': 216}], 'source': {'height': 225, 'url': 'https://preview.redd.it/8jey78v98n0g1.png?auto=webp&s=44ec8021beb9b9b74858578484e6b0446a444b72', 'width': 225}, 'variants': {}}]} | |
Meta chief AI scientist Yann LeCun plans to exit to launch startup, FT reports | 201 | 2025-11-11T15:05:23 | https://www.reuters.com/technology/meta-chief-ai-scientist-yann-lecun-plans-exit-launch-startup-ft-reports-2025-11-11/ | brown2green | reuters.com | 1970-01-01T00:00:00 | 0 | {} | 1oubrbc | false | null | t3_1oubrbc | /r/LocalLLaMA/comments/1oubrbc/meta_chief_ai_scientist_yann_lecun_plans_to_exit/ | false | false | default | 201 | null | |
๐จ I built a Python library to convert JSON to a more compact format (saves ~30-40% tokens for LLM apps) | 1 | [removed] | 2025-11-11T15:03:37 | https://www.reddit.com/r/LocalLLaMA/comments/1oubpm0/i_built_a_python_library_to_convert_json_to_a/ | Purple-Librarian1818 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oubpm0 | false | null | t3_1oubpm0 | /r/LocalLLaMA/comments/1oubpm0/i_built_a_python_library_to_convert_json_to_a/ | false | false | self | 1 | null |
What GPU would you recommend for embedding models? | 0 | For utilizing the best MTEB Leaderboard models to embed millions of text segments, which GPU would be provide decent value? RTX \*090s, DGX, Strix, Mac+? | 2025-11-11T14:58:43 | https://www.reddit.com/r/LocalLLaMA/comments/1oubktf/what_gpu_would_you_recommend_for_embedding_models/ | Chance-Studio-8242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oubktf | false | null | t3_1oubktf | /r/LocalLLaMA/comments/1oubktf/what_gpu_would_you_recommend_for_embedding_models/ | false | false | self | 0 | null |
Anyone been using local LLMs with Claude Code? | 16 | Looking for feedback/experience in using Qwen3-Coder:a3b, gpt-oss-120b or GLM 4.5 air with Claude Code locally. | 2025-11-11T14:44:46 | https://www.reddit.com/r/LocalLLaMA/comments/1oub8dt/anyone_been_using_local_llms_with_claude_code/ | rm-rf-rm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oub8dt | false | null | t3_1oub8dt | /r/LocalLLaMA/comments/1oub8dt/anyone_been_using_local_llms_with_claude_code/ | false | false | self | 16 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.