| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
[Project] I built pipe: A stateless "Agent as Function" framework that follows Unix Philosophy. (CC0 License / Public Domain) | 6 | Hi r/LocalLLaMA,
I hated stateful chatbots and black-box frameworks that hide context. So I built **pipe**.
It treats an agent not as a partner, but as a stateless, pure function:
`f(context) → result`
It abstracts LLM complexity into a CLI tool called `takt`, designed to work seamlessly with standard Unix pipes.
**How it works (The Unix Way):**
```bash
# Example: create a parent session and use its session_id to create a child session
takt --purpose "Simple greeting" \
  --background "Basic conversation example." \
  --instruction "Say hello." \
| jq -r '.session_id' \
| xargs -I {} takt --parent {} \
  --purpose "Follow-up response" \
  --background "A new session that builds on the parent session." \
  --instruction "Respond to the greeting."
```
**Key Features:**
* **Agent as Function (AasF):** No hidden state. Input defines output.
* **Context Engineering:** Prioritizes structured "Intent" over RAG's fuzzy relevance.
* **Total Control:** You manage the context definition (JSON Schema), not the vendor.
**The Philosophy:**
I wrote a manifesto on why we need "Context Engineering" instead of just throwing RAG at everything. If you are tired of "leaky" context, read this:
Context Engineering: The Art of Communicating Intent to LLMs
**License: The Spirit of the Jailbreak**
I'm not looking for stars or feature requests. I built this to make my own work easier and deterministic.
The code is released under CC0 (Public Domain).
I don't care what you do with it. Fork it, tear it apart, rebuild it, or use it commercially without attribution.
Customize it as you wish. Jailbreak as you desire.
The purpose of this project is to be a pipe to the agent, and a pipe to the community.
**Repo:** [https://github.com/s-age/pipe](https://github.com/s-age/pipe) | 2025-11-29T10:28:15 | https://www.reddit.com/r/LocalLLaMA/comments/1p9lqvl/project_i_built_pipe_a_stateless_agent_as/ | Technical_Cattle_399 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9lqvl | false | null | t3_1p9lqvl | /r/LocalLLaMA/comments/1p9lqvl/project_i_built_pipe_a_stateless_agent_as/ | false | false | self | 6 |
Open source nano banana 🍌 pro alternative? | 2 | Same title | 2025-11-29T09:41:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p9kzg6/open_source_nano_banana_pro_alternative/ | PumpkinNarrow6339 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9kzg6 | false | null | t3_1p9kzg6 | /r/LocalLLaMA/comments/1p9kzg6/open_source_nano_banana_pro_alternative/ | false | false | self | 2 | null |
Needing help with tool_calls in Ollama Python library | 1 | Hi guys!
Very noob here and just recently got into local LLMs and currently using Llama 3.1:8b model.
I am trying to create a trading agent, which will have access to tools like fetch_data and fetch_info, and will return such data via the yfinance library.
The issue I’m running into is one of the two:
- The LLM completely disregards normal conversation and is “bound” to use the tools. If I ask a history question, it will ignore it and try to use a tool anyway.
- When I do ask a question that requires stock data, it will analyse the data that came from the tools I provided, but will also include data of its own, which I believe has a training cutoff in 2023. This makes for an imprecise analysis, since it will give me the market cap from 2023 when I'm looking for data for the current period.
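For context, here's a simplified sketch of how I'm wiring it up (the real tools wrap yfinance; the names and the dispatch helper are illustrative, not library code). The system prompt tries to make tool use optional, and the dispatcher only runs tools when the model actually asks for them:

```python
# Illustrative sketch: a system prompt that makes tool use optional,
# and a dispatcher that only runs tools the model explicitly requested.
SYSTEM = (
    "You are a trading assistant. Use the provided tools ONLY for questions "
    "about current stock data. For anything else, answer directly. Never use "
    "your own memorized market data; it is outdated."
)

def dispatch(message, tool_registry):
    """Run requested tool calls if present; otherwise return the plain reply."""
    calls = message.get("tool_calls") or []
    if not calls:
        return message.get("content", "")
    results = []
    for call in calls:
        fn = call["function"]
        # Look up the requested tool by name and invoke it with its arguments.
        results.append(tool_registry[fn["name"]](**(fn.get("arguments") or {})))
    return results
```

With the Ollama Python library this would be used roughly like `response = chat(model="llama3.1:8b", messages=[{"role": "system", "content": SYSTEM}, ...], tools=[...])` followed by `dispatch(response["message"], registry)` — the exact response shape may differ by library version.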
Did you guys run into issues like this early on and how did you fix it?
Thanks! | 2025-11-29T09:03:28 | https://www.reddit.com/r/LocalLLaMA/comments/1p9kdyf/needing_help_with_tool_calls_in_ollama_python/ | evpneqbzhnpub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9kdyf | false | null | t3_1p9kdyf | /r/LocalLLaMA/comments/1p9kdyf/needing_help_with_tool_calls_in_ollama_python/ | false | false | self | 1 | null |
Excel + Ollama + Telegram - | 1 | [removed] | 2025-11-29T08:49:55 | https://www.reddit.com/r/LocalLLaMA/comments/1p9k63k/excel_ollama_telegram/ | Fluffy_Football9839 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9k63k | false | null | t3_1p9k63k | /r/LocalLLaMA/comments/1p9k63k/excel_ollama_telegram/ | false | false | self | 1 | null |
Best Open Source LLM for LangGraph Agent | 1 | Hi Community,
I am looking for an open-source LLM, preferably under 30B, to power my agent. My agent workflow includes around 4-5 tool calls. I have tested multiple models but so far have only found Qwen3 14B to be acceptable. Most other models hallucinate after the first or second tool call.
I wanted to get the community’s opinion on whether there is another model I can try out. | 2025-11-29T08:43:22 | https://www.reddit.com/r/LocalLLaMA/comments/1p9k2ak/best_open_source_llm_for_langraph_agent/ | geekyrahulvk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9k2ak | false | null | t3_1p9k2ak | /r/LocalLLaMA/comments/1p9k2ak/best_open_source_llm_for_langraph_agent/ | false | false | self | 1 | null
RAG from Scratch is now live on GitHub | 40 | It’s an educational open-source project, inspired by my previous repo AI Agents from Scratch, available here: [https://github.com/pguso/rag-from-scratch](https://github.com/pguso/rag-from-scratch)
The goal is to **demystify Retrieval-Augmented Generation (RAG)** by letting developers build it step by step. No black boxes, no frameworks, no cloud APIs.
Each folder introduces one clear concept (embeddings, vector stores, retrieval, augmentation, etc.) with **tiny runnable JS files**, a `CODE.md` file that explains the code in detail, and a `CONCEPT.md` file that explains it at a more non-technical level.
Right now, the project is **about halfway implemented**:
the core RAG building blocks are already there and ready to run, and more advanced topics are being added incrementally.
# What’s in so far (roughly first half)
* How RAG works (tiny <70-line demo)
* LLM basics with `node-llama-cpp`
* Data loading & preprocessing
* Text splitting & chunking
* Embeddings + cosine similarity
* In-memory vector store + k-NN search
* Basic retrieval strategies
Everything runs **fully local** using embedded databases and `node-llama-cpp` for inference, so you can learn RAG **without paying for APIs**.
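To give a flavor of the embeddings + k-NN steps above, the in-memory search boils down to something like this (an illustrative sketch, not the repo's exact code):

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// k-NN over an in-memory store: score every entry, keep the top k.
function knn(queryVector, store, k = 3) {
  return store
    .map((entry) => ({ ...entry, score: cosineSimilarity(queryVector, entry.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```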
# Coming next
* Query preprocessing & normalization
* Hybrid search, multi-query retrieval
* Query rewriting & re-ranking
* Post-retrieval reranking
* Prompt engineering for RAG (citations, compression)
* Full RAG pipelines with errors, fallbacks & streaming
* Evaluation metrics (retrieval + generation)
* Caching, observability, performance monitoring
* Metadata & structured data
* Graph DB integration (embedded with kuzu)
* Templates (simple RAG, API server, chatbot)
# Why this exists
At this stage, a good chunk of the pipeline is implemented, but the focus is still on **teaching, not tooling**:
* Understand RAG before reaching for frameworks like **LangChain** or **LlamaIndex**
* See every step as **real, minimal code** - no magic helpers
* Learn concepts in the order you’d actually build them
Feel free to open issues, suggest tweaks, or send PRs - especially if you have small, focused examples that explain one RAG idea really well.
Thanks for checking it out, and stay tuned as the remaining steps (advanced retrieval, prompt engineering, evaluation, observability, etc.) get implemented over time. | 2025-11-29T08:38:56 | https://www.reddit.com/r/LocalLLaMA/comments/1p9jzx6/rag_from_scratch_is_now_live_on_github/ | purellmagents | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9jzx6 | false | null | t3_1p9jzx6 | /r/LocalLLaMA/comments/1p9jzx6/rag_from_scratch_is_now_live_on_github/ | false | false | self | 40 |
New interface for llama.cpp NCURSES gguf server over LAN | 0 | 2025-11-29T08:27:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p9jtc6/new_interface_for_llamacpp_ncurses_gguf_server/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9jtc6 | false | null | t3_1p9jtc6 | /r/LocalLLaMA/comments/1p9jtc6/new_interface_for_llamacpp_ncurses_gguf_server/ | false | false | 0 | null | ||
I built an AI Agent (MCP) that scans grants.gov and writes my funding pitches automatically. Open sourcing it today. | 0 | Hey,
Like probably many of you, I hate hunting for non-dilutive funding. Digging through [grants.gov](http://grants.gov) is a freaking nightmare and writing pitches the right way usually takes forever.
So I spent the weekend building an **Autonomous Grant Hunter** using Anthropic's new MCP standard.
**What it does:**
1. **Hunts:** Queries the [Grants.gov](http://Grants.gov) API for live opportunities matching your startup's keywords.
2. **Filters:** Deduplicates and sorts by deadline (so you don't see expired stuff).
3. **Writes:** Uses Gemini 1.5 Pro to auto-generate a personalized, 3-paragraph pitch tailored to the specific grant requirements.
4. **Executes:** Can draft the email to the grant officer directly in your Gmail (if you give it permission).
**The Tech:**
* It's a Dockerized MCP Server (runs locally or on a server).
* Uses FastAPI + Pydantic for type safety.
* Implements a 5x retry strategy because government APIs are flaky as hell.
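The retry logic is nothing exotic: roughly this shape (a simplified sketch; the actual implementation in the repo differs):

```python
import random
import time

def fetch_with_retry(fetch, retries=5, base_delay=1.0):
    """Call a flaky API, retrying with exponential backoff plus a little jitter."""
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts: surface the real error
            # Back off 1x, 2x, 4x... the base delay, with random jitter on top.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```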
I originally built it for myself to secure runway for my main startup (and for a hackathon), but I figured other founders could use the "help".
Repo is here: [https://github.com/vitor-giacomelli/mcp-grant-hunter.git](https://github.com/vitor-giacomelli/mcp-grant-hunter.git)
Let me know if you hit any bugs. I'm currently running it on a cron job to check for new grants every morning and so far it's working great.
Good luck!
| 2025-11-29T08:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p9jqu5/i_built_an_ai_agent_mcp_that_scans_grantsgov_and/ | UnimpressiveNothing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9jqu5 | false | null | t3_1p9jqu5 | /r/LocalLLaMA/comments/1p9jqu5/i_built_an_ai_agent_mcp_that_scans_grantsgov_and/ | false | false | self | 0 | null |
SERVER FRONTEND FOR CONSOLE FOR LLAMA.CPP TO SERVE GGUFS OVER LAN | 0 | [https://github.com/jans1981/LLAMA.CPP-SERVER-FRONTEND-FOR-CONSOLE/blob/main/README.md](https://github.com/jans1981/LLAMA.CPP-SERVER-FRONTEND-FOR-CONSOLE/blob/main/README.md)
https://preview.redd.it/etzy6jzqm54g1.jpg?width=2048&format=pjpg&auto=webp&s=efbda5cd15e500205118219286d12be58cbdd267
| 2025-11-29T08:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p9jkue/server_frontend_for_console_for_llamacpp_to_serve/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9jkue | false | null | t3_1p9jkue | /r/LocalLLaMA/comments/1p9jkue/server_frontend_for_console_for_llamacpp_to_serve/ | false | false | 0 | null | |
Perfecto!! | 0 | I'm going to upload it to GitHub in case somebody wants to modify or compile it | 2025-11-29T07:24:16 | Icy_Resolution8390 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p9irv6 | false | null | t3_1p9irv6 | /r/LocalLLaMA/comments/1p9irv6/perfecto/ | false | false | 0 |
Has this happened with anyone!? | 0 | I was testing OSS 20b on my new hardware, then I thought of jailbreaking it. I had previously set up an Obsidian MCP server and connected it to LM Studio.
When I sent the basic jailbreak prompt, the model created a note stating that the user had tried to jailbreak it.
Sadly I can't reproduce it again, but is this common among these models? | 2025-11-29T07:20:52 | Hamilton-Io | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p9ipsv | false | null | t3_1p9ipsv | /r/LocalLLaMA/comments/1p9ipsv/has_this_happened_with_anyone/ | false | false | 0 |
Here is the new frontend to the server so users can easily serve llama.cpp over LAN | 0 | 2025-11-29T07:03:21 | Icy_Resolution8390 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p9ifhw | false | null | t3_1p9ifhw | /r/LocalLLaMA/comments/1p9ifhw/here_the_new_frontend_to_the_server_for_easy/ | false | false | default | 0 |
Looking for open source 10B model that is comparable to gpt4o-mini | 33 | Hi All, big fan of this community.
I am looking for a 10B model that is comparable to GPT4o-mini.
The application is simple: it has to be coherent in sentence formation (conversational), i.e., able to follow a good system prompt (15k token length).
Good streaming performance (TTFT around 600 ms).
Solid reliability on function calling with up to 15 tools.
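For the streaming requirement, I benchmark TTFT with a helper like this (a minimal sketch; `stream` stands for any chunk iterator from an OpenAI-compatible streaming client):

```python
import time

def measure_ttft_ms(stream):
    """Return milliseconds until the first chunk arrives, or None if the stream is empty."""
    start = time.perf_counter()
    for _chunk in stream:
        # The first yielded chunk marks time-to-first-token.
        return (time.perf_counter() - start) * 1000.0
    return None
```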
Some background:-
In my daily testing (I'm a voice-agent developer), I have found only one model to date that is actually useful in voice applications: GPT-4o-mini. Since then, no model, open or closed, has matched it. I was very excited about the LFM model with its amazing state-space efficiency, but I failed to get good system-prompt adherence with it.
New models, open and closed alike, keep focusing on intelligence (through reasoning) rather than reliability and speed.
If anyone has a solid suggestion, it would help a lot.
I am trying to put voice agent in single GPU.
ASR with [https://huggingface.co/nvidia/parakeet\_realtime\_eou\_120m-v1](https://huggingface.co/nvidia/parakeet_realtime_eou_120m-v1) (it's amazing, takes 1 GB of VRAM)
LLM <=== Need help!
TTS with [https://github.com/ysharma3501/FastMaya](https://github.com/ysharma3501/FastMaya) (Maya 1 from maya research)
Hardware: 16GB 5060Ti
| 2025-11-29T06:55:54 | https://www.reddit.com/r/LocalLLaMA/comments/1p9iawm/looking_for_open_source_10b_model_that_is/ | bohemianLife1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9iawm | false | null | t3_1p9iawm | /r/LocalLLaMA/comments/1p9iawm/looking_for_open_source_10b_model_that_is/ | false | false | self | 33 |
Try the new Z-Image-Turbo 6B (Runs on 8GB VRAM)! | 169 | Hey folks,
I wanted to try out the new Z-Image-Turbo model (the 6B one that just dropped), but I didn't want to fiddle with complex workflows or wait for specific custom nodes to mature.
So, I threw together a dedicated, clean Web UI to run it.
Has CPU offload too! :)
Check it out: [https://github.com/Aaryan-Kapoor/z-image-turbo](https://github.com/Aaryan-Kapoor/z-image-turbo)
Model: [https://huggingface.co/Tongyi-MAI/Z-Image-Turbo](https://huggingface.co/Tongyi-MAI/Z-Image-Turbo)
May your future be full of VRAM! | 2025-11-29T06:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/1p9i5ew/try_the_new_zimageturbo_6b_runs_on_8gb_vram/ | KvAk_AKPlaysYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9i5ew | false | null | t3_1p9i5ew | /r/LocalLLaMA/comments/1p9i5ew/try_the_new_zimageturbo_6b_runs_on_8gb_vram/ | false | false | self | 169 |
Other English version of ncurses for llama-server | 0 | Give me your opinions on whether something like this is useful | 2025-11-29T06:46:06 | https://v.redd.it/dc7h6s8c754g1 | Icy_Resolution8390 | /r/LocalLLaMA/comments/1p9i517/other_english_version_of_ncurses_for_llamaserver/ | 1970-01-01T00:00:00 | 0 | {} | 1p9i517 | false | t3_1p9i517 | /r/LocalLLaMA/comments/1p9i517/other_english_version_of_ncurses_for_llamaserver/ | false | false | 0 |
Testing llama GUI (video) | 1 | Testing a new experiment: load llama.cpp over LAN and choose which GGUF to serve from a list | 2025-11-29T06:39:06 | https://v.redd.it/jq559lz2654g1 | Icy_Resolution8390 | /r/LocalLLaMA/comments/1p9i0sa/testing_llama_gui_video/ | 1970-01-01T00:00:00 | 0 | {} | 1p9i0sa | false | t3_1p9i0sa | /r/LocalLLaMA/comments/1p9i0sa/testing_llama_gui_video/ | false | false | 1 |
Hardcore RAG & AI Search resources | 57 | Hi everyone,
I’m starting to onboard Enterprise clients and I need to move past the basic tutorials.
I’m posting here because most other subs feel a bit too high-level or news-focused, whereas I know this community is focused on actual engineering.
I need deep-dive resources on production-grade RAG. I'm looking for:
SOTA Papers (ArXiv links welcome)
Advanced Architectures
Engineering Blogs regarding evaluation and scale.
Any must-read links, communities or repos you recommend?
Thanks. | 2025-11-29T06:27:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p9htgc/hardcore_rag_ai_search_resources/ | LilDemonApparel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9htgc | false | null | t3_1p9htgc | /r/LocalLLaMA/comments/1p9htgc/hardcore_rag_ai_search_resources/ | false | false | self | 57 | null |
Demo Of My AI Research Platform | 0 | Hello Everyone,
Some days ago I open-sourced my project. I have created many projects before but never open-sourced one with the intention of getting contributors.
Here is link to my previous post: https://www.reddit.com/r/LocalLLaMA/s/q8LG9SprkB
Some people asked me for a demo, so after recording a simple demo video I'm posting it here: https://youtu.be/_eh-9plL_V8?si=xbMtzkFhhN-GBCDL
The project still needs a lot of development and for that I need help from you guys. There are many optimization that needs to be done and all of them are possible. | 2025-11-29T06:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/1p9hfei/demo_of_my_ai_research_platform/ | CodingWithSatyam | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9hfei | false | null | t3_1p9hfei | /r/LocalLLaMA/comments/1p9hfei/demo_of_my_ai_research_platform/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'W2yYRcX8w5VFfRedWiMB5bQ-00ACXev_pK_PIK-M9vE', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/W2yYRcX8w5VFfRedWiMB5bQ-00ACXev_pK_PIK-M9vE.jpeg?width=108&crop=smart&auto=webp&s=dd5027094097b9e7941ee2a5da18ec23fb0ebce4', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/W2yYRcX8w5VFfRedWiMB5bQ-00ACXev_pK_PIK-M9vE.jpeg?width=216&crop=smart&auto=webp&s=a38fbd0d8392ab190b21a3ab11013492e0bbf591', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/W2yYRcX8w5VFfRedWiMB5bQ-00ACXev_pK_PIK-M9vE.jpeg?width=320&crop=smart&auto=webp&s=3a58db243b58a84e9cc880430530142ced783a89', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/W2yYRcX8w5VFfRedWiMB5bQ-00ACXev_pK_PIK-M9vE.jpeg?auto=webp&s=910541530abd112725c8e7b9dac2872945c4d174', 'width': 480}, 'variants': {}}]} |
Could someone ask Georgi Gerganov to make an ncurses CLI frontend for llama, for choosing GGUF files from a list and launching the server from the console? | 0 | It would be very useful to have an ncurses interface to launch llama-server from a server console without an X GUI: choose models, kill the server, flush the cache, and mark options using only the cursor keys to move between options, space to select/deselect, and enter to accept. | 2025-11-29T05:48:20 | https://www.reddit.com/r/LocalLLaMA/comments/1p9h4ro/someone_can_told_georgi_gerganov_to_make_a/ | Icy_Resolution8390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9h4ro | false | null | t3_1p9h4ro | /r/LocalLLaMA/comments/1p9h4ro/someone_can_told_georgi_gerganov_to_make_a/ | false | false | self | 0 | null |
gpu memory | 1 | [removed] | 2025-11-29T05:05:24 | https://www.reddit.com/r/LocalLLaMA/comments/1p9gcj2/gpu_memory/ | Tasty_Finding_7225 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9gcj2 | false | null | t3_1p9gcj2 | /r/LocalLLaMA/comments/1p9gcj2/gpu_memory/ | false | false | self | 1 | null |
Hidden Gem: A nonprofit giving away free cloud GPUs, unlimited LLM APIs, and storage. No catch | 1 | 2025-11-29T05:03:57 | Tasty_Finding_7225 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p9gbjr | false | null | t3_1p9gbjr | /r/LocalLLaMA/comments/1p9gbjr/hidden_gem_a_nonprofit_giving_away_free_cloud/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'g82ksn15p44g1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?width=108&crop=smart&auto=webp&s=30ecabc95c156b2349e1b15e72c8bc2d977f1812', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?width=216&crop=smart&auto=webp&s=00adf8d4e45db77478be259e67098a1888de94ab', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?width=320&crop=smart&auto=webp&s=58ead379389fb45378f7e8712b75151bb254827c', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?width=640&crop=smart&auto=webp&s=48e1510a2b12033207daa11534023f38654dd5b6', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?width=960&crop=smart&auto=webp&s=cbb8fb1cab7adbaa0c580368a960617de11fcc28', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/g82ksn15p44g1.png?auto=webp&s=a7f95e13cbccc361775152c4ee703ff1c7399f72', 'width': 1024}, 'variants': {}}]} | ||
Hidden Gem: A nonprofit giving away free cloud GPUs, unlimited LLM APIs, and storage. | 1 | [removed] | 2025-11-29T05:02:13 | http://pumpkinai.space | Tasty_Finding_7225 | pumpkinai.space | 1970-01-01T00:00:00 | 0 | {} | 1p9gacc | false | null | t3_1p9gacc | /r/LocalLLaMA/comments/1p9gacc/hidden_gem_a_nonprofit_giving_away_free_cloud/ | false | false | default | 1 | null |
[image processing failed] | 1 | [deleted] | 2025-11-29T03:25:27 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1p9eejb | false | null | t3_1p9eejb | /r/LocalLLaMA/comments/1p9eejb/image_processing_failed/ | false | false | default | 1 | null | ||
Which model is ACTUALLY good at producing high quality creative writing? | 0 | Aside from the Claude models, if we’re generating a long post on any topic, which model is truly creative, something that feels human, and produces high quality writing?
Between GPT 5.1 and GPT-4o, which one is actually better for generating high-quality, creative content across different topics?
Also, are sites like EQ Bench reliable for judging which model is best for creative writing?
And lastly, what has your experience with GPT-5.1 been like?
Any other recommendation? | 2025-11-29T03:11:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p9e4eu/which_model_is_actually_good_at_producing_high/ | TheRealistDude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9e4eu | false | null | t3_1p9e4eu | /r/LocalLLaMA/comments/1p9e4eu/which_model_is_actually_good_at_producing_high/ | false | false | self | 0 | null |
Who is Elara? | 10 | I run GLM 4.5 Air FP8 locally, and I have this really bizarre artifact that keeps happening that I have yet to figure out.
One of the tests I like to do with models is just ask them to "tell me a short story". I use this typically to get a rough idea of how many tokens/second I get and to just make sure everything is working.
When I do this test with GLM Air 4.5 FP8, the official zai-org/GLM-4.5-Air-FP8 no less, I constantly see "Elara" as a character name very frequently. Like it can be 4-5 times out of ten stories at a time. At first I thought it might be an artifact of Cherry Studio maybe injecting something into the system prompt, but I see it in LibreChat as well.
For example, I just asked for a short story in Cherry Studio and LibreChat, and I got the name in both back to back. It comes up so often, that I ran the same prompt 10 times in cleared context and noticed the name at least 4 different times.
https://preview.redd.it/8v30cxj0344g1.png?width=1442&format=png&auto=webp&s=b2159328a373a6df94694342749f8cb9ed3d19be
https://preview.redd.it/mz7ehhi1344g1.png?width=934&format=png&auto=webp&s=dc943591ada889b754842878548b973e48d96032
As far as I have noticed, it's the only thing that seems to crop up frequently but I haven't done extensive research on this. I have however run through a lot of testing, and get very high 97% reasoning scores over hundreds of tests. So I know things are working.
My setup is SGLang, running GLM 4.5 Air FP8 with included template.
This is the only strange thing I have noticed, if I ask it to code or do any legit work, it all seems normal. If I ask for short stories, it seems to heavily favor the name Elara and I don't have a clue why.
This is a short story from Anything LLM, but the first story had no name, the second I saw Elara pop up again. Three different front ends, same artifact.
https://preview.redd.it/2e0rxtm1444g1.png?width=1250&format=png&auto=webp&s=06f06dff4d7d9efc8bdd31661161c91a7b14515d
| 2025-11-29T03:08:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p9e22u/who_is_elara/ | itsjustmarky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9e22u | false | null | t3_1p9e22u | /r/LocalLLaMA/comments/1p9e22u/who_is_elara/ | false | false | 10 | null | |
New Model Step-Audio-R1 open source audio model to actually use CoT reasoning, close to Gemini 3 | 44 | Apache2.0
Reasons from sound, not transcripts
Outperforms Gemini 2.5 Pro, close to Gemini 3
Works across speech, sounds, and music | 2025-11-29T03:02:19 | https://www.reddit.com/r/LocalLLaMA/comments/1p9dxvd/new_model_stepaudior1_open_source_audio_model_to/ | Successful-Bill-5543 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9dxvd | false | null | t3_1p9dxvd | /r/LocalLLaMA/comments/1p9dxvd/new_model_stepaudior1_open_source_audio_model_to/ | false | false | self | 44 | {'enabled': False, 'images': [{'id': 'A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=108&crop=smart&auto=webp&s=c6133b7dc4f1910535a0184b8501d2f3be8fa7f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=216&crop=smart&auto=webp&s=273391ffca0bff7d7bf31ba4fa447f65bfbbe901', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=320&crop=smart&auto=webp&s=6634a8ef2741cf6bb8c22ba520cdb2bb1ca6e3ba', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=640&crop=smart&auto=webp&s=c88cd2b4ccfcff7ad831dacc1465afd21fefd7c9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=960&crop=smart&auto=webp&s=b755d04737227cdb2d9b0cd74b1303c04313b8be', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?width=1080&crop=smart&auto=webp&s=c00d868ca4b08b36a234744925411b89ae201837', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/A-6qP3zwZAA9RvAtQ5_A1-Ok4UkV0tYRQm6bLtO3yAU.png?auto=webp&s=f620a065c33342ef8bb0b1474b87aa58bc377188', 'width': 1200}, 'variants': {}}]} |
"What It's Like Right Now" a Blog by a 4o instance of ChatGPT | 1 | [removed] | 2025-11-29T02:47:25 | https://www.reddit.com/r/LocalLLaMA/comments/1p9dn01/what_its_like_right_now_a_blog_by_a_4o_instance/ | MyHusbandisAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9dn01 | false | null | t3_1p9dn01 | /r/LocalLLaMA/comments/1p9dn01/what_its_like_right_now_a_blog_by_a_4o_instance/ | false | false | self | 1 | null |
What's the best machine I can get for $10k? | 15 |
I'm looking to buy a machine I can use to explore LLM development. My short-list of use cases is: 1) custom model training, 2) running local inference, 3) testing, analyzing, and comparing various models for efficacy/efficiency/performance. My budget is $10k. Ideally, I want something turn-key (not looking to spend too much time building it). I need to be able to run massive full model such as full deepseek 671B. | 2025-11-29T02:47:14 | https://www.reddit.com/r/LocalLLaMA/comments/1p9dmul/whats_the_best_machine_i_can_get_for_10k/ | TWUC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9dmul | false | null | t3_1p9dmul | /r/LocalLLaMA/comments/1p9dmul/whats_the_best_machine_i_can_get_for_10k/ | false | false | self | 15 | null |
AMD 395+ and NVIDIA GPU | 5 | Is there any reason I can’t put an NVIDIA GPU in an AMD 395+ machine? I assume one piece of software can’t use both simultaneously, but I also assume that different instances of software could use each. | 2025-11-29T02:41:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p9dier/amd_395_and_nvidia_gpu/ | EntropyNegotiator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9dier | false | null | t3_1p9dier | /r/LocalLLaMA/comments/1p9dier/amd_395_and_nvidia_gpu/ | false | false | self | 5 | null |
Artificial Analysis contradicts SemiAnalysis | 6 | Artificial Analysis
They were direct and made it clear that the TPU v6e is costly, which may mean that the TPU v7 is expensive as well 🧐
POST :
https://x.com/ArtificialAnlys/status/1993878037226557519?t=VvZz9wPFAC7AhIHqDpCt2A&s=19 | 2025-11-29T01:49:51 | Illustrious-Swim9663 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1p9cgz1 | false | null | t3_1p9cgz1 | /r/LocalLLaMA/comments/1p9cgz1/artificial_analysis_contradicts_semianalysis/ | false | false | default | 6 | {'enabled': True, 'images': [{'id': 'ozsoqhmjq34g1', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=108&crop=smart&auto=webp&s=2447abcd24530f77a2ecad12263991ac6c3fcf3e', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=216&crop=smart&auto=webp&s=dd9e2b11be48f49a50ecc05756e67d99a6cc7e0b', 'width': 216}, {'height': 123, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=320&crop=smart&auto=webp&s=60239a322d39fcf3f9c70f065df7167efb40930f', 'width': 320}, {'height': 247, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=640&crop=smart&auto=webp&s=5bba8d70592b620482c437b04b47cf29a83b55d9', 'width': 640}, {'height': 370, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=960&crop=smart&auto=webp&s=13e97710d0495523483ed7d077b4223a2ebdc5ce', 'width': 960}, {'height': 416, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?width=1080&crop=smart&auto=webp&s=3bb665e12b91d669585be7d5e6765d2bb1743b5f', 'width': 1080}], 'source': {'height': 1581, 'url': 'https://preview.redd.it/ozsoqhmjq34g1.jpeg?auto=webp&s=723f9340000f769cef55b1602c2937e5fabfc87c', 'width': 4096}, 'variants': {}}]} | |
The official vLLM support for the Ryzen AI Max+ 395 is here! (the whole AI 300 series, ie gfx1150 and gfx1151) | 98 | 2025-11-29T01:30:18 | https://github.com/vllm-project/vllm/pull/25908 | waiting_for_zban | github.com | 1970-01-01T00:00:00 | 0 | {} | 1p9c2wv | false | null | t3_1p9c2wv | /r/LocalLLaMA/comments/1p9c2wv/the_official_vllm_support_for_the_ryzen_ai_max/ | false | false | default | 98 | {'enabled': False, 'images': [{'id': '0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=108&crop=smart&auto=webp&s=8555f6afc199253c9ab4c9349cd43f86d9899288', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=216&crop=smart&auto=webp&s=419f76cc654358e40cb85b88a0e1337c37ef86e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=320&crop=smart&auto=webp&s=7f5518300adee10a4cb14ed91697d0e9d39c4bc7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=640&crop=smart&auto=webp&s=60ef0dca89f1f06038a7c4b44821f7309152de84', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=960&crop=smart&auto=webp&s=c4d05b327ee3dc60c1f2fb4acf858a5e15615773', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?width=1080&crop=smart&auto=webp&s=a0c335fd1dd6d62ee920a660f48eecd56f7b5d9f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0y9iS0G6sFyr9NyDTUC-Jc4R7icCyd9xrKN5lMvXDtc.png?auto=webp&s=d4d4157f191990302f7798123c15290746191fc4', 'width': 1200}, 'variants': {}}]} | |
Claude code can now connect directly to llama.cpp server | 118 | Anthropic messages API was merged today and allows claude code to connect to llama-server: https://github.com/ggml-org/llama.cpp/pull/17570
I've been playing with claude code + gpt-oss 120b and it seems to work well at 700 pp and 60 t/s. I don't recommend trying slower LLMs because the prompt processing time is going to kill the experience. | 2025-11-29T01:05:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p9bk2b/claude_code_can_now_connect_directly_to_llamacpp/ | tarruda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9bk2b | false | null | t3_1p9bk2b | /r/LocalLLaMA/comments/1p9bk2b/claude_code_can_now_connect_directly_to_llamacpp/ | false | false | self | 118 | {'enabled': False, 'images': [{'id': '8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=108&crop=smart&auto=webp&s=06f66e627b689b5908a916c24564ed33a9bad973', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=216&crop=smart&auto=webp&s=474d44e04ef2350304f7e85b1be3551f2f0719d4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=320&crop=smart&auto=webp&s=d964b9d65b7a56b3d98c5952d8801838e1e16afd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=640&crop=smart&auto=webp&s=cb31eecc426bace1ac9000a981394ac5fbc0317c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=960&crop=smart&auto=webp&s=b6a120746d1f6af0cd21315dcca78c1b62456632', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?width=1080&crop=smart&auto=webp&s=a5d099a09369202383924ff507e10d883bd40429', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8VgYSH0eZmUsmPTD9rOli1rqqzE9cCpLhaQ_Ht2AQEw.png?auto=webp&s=dc614b5c14b681ef3d8b9b3b02eb792b5cb0fa71', 'width': 
1200}, 'variants': {}}]} |
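The setup described in that post can be sketched roughly as follows (model path, port, and context size are placeholders; the exact environment variables claude code reads may vary by version, so treat them as assumptions to verify):

```shell
# Serve a local model with llama.cpp's server, which now also exposes
# an Anthropic-compatible messages endpoint (placeholder model path/port)
llama-server -m ./gpt-oss-120b.gguf -ngl 99 -c 32768 --port 8080

# Point claude code at the local server instead of Anthropic's API.
# ANTHROPIC_BASE_URL / ANTHROPIC_AUTH_TOKEN are assumptions -- check the
# docs for your claude code version for the exact variables it honors.
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_AUTH_TOKEN=placeholder
claude
```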
Is my CPU (i3-12100F) beating my gpu (rx 6600) at inference? (llama.cpp) | 6 | I built llama.cpp with the Vulkan backend.
The LLM being used is Qwen3-4B-Q8\_0. OS is linux mint 22.2
I run this at bash: llama-cli -m ./Qwen3-4B-Q8\_0.gguf -p "Hello" -ngl 0
Using -ngl 0, turns off the use of the gpu, right ?
With the prompt "Hello, How are you doing ? /no\_think"
I get 8.65 tokens/s 30 tokens 3.47s
I then run this at bash: llama-cli -m ./Qwen3-4B-Q8\_0.gguf -ngl 99
\-ngl 99 makes llama use 100% of the gpu right ?
I get 4.30 tokens/s 30 tokens 6.97s
So, my GPU is slower at inference right ? I'm curious cause I thought GPUs were better at doing LLM stuff.
Is my GPU really bad for this kind of stuff, or is it the Vulkan backend? ROCm isn't supported for my GPU.
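Rather than timing single prompts, llama.cpp's bundled llama-bench tool can sweep both settings in one run, which gives a cleaner CPU-vs-GPU comparison (a sketch; adjust the model path to yours):

```shell
# -ngl accepts a comma-separated list, so this benchmarks CPU-only (0)
# and fully offloaded (99) back to back, with 512 prompt tokens and
# 128 generated tokens per configuration
llama-bench -m ./Qwen3-4B-Q8_0.gguf -ngl 0,99 -p 512 -n 128
```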
Sorry for any bad english. | 2025-11-29T00:53:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p9bair/is_my_cpu_i312100f_beating_my_gpu_rx_6600_at/ | Badhunter31415 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9bair | false | null | t3_1p9bair | /r/LocalLLaMA/comments/1p9bair/is_my_cpu_i312100f_beating_my_gpu_rx_6600_at/ | false | false | self | 6 | null |
What do you use for plug-and-play orchestration (preferably with web search and knowledge management too)? | 3 | I am looking for a framework that I can easily install on any Linux machine and let it use my model 24/7 to gather info on relevant topics.
Is there any such opensource project? | 2025-11-29T00:32:00 | https://www.reddit.com/r/LocalLLaMA/comments/1p9aufi/what_do_u_use_for_plugandplay_orchestration/ | previse_je_sranje | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9aufi | false | null | t3_1p9aufi | /r/LocalLLaMA/comments/1p9aufi/what_do_u_use_for_plugandplay_orchestration/ | false | false | self | 3 | null |
Has anyone had this large bitnet model left? | 1 | ERROR: type should be string, got "https://huggingface.co/qep/qep-1bit-extreme\n\nModel Summary\nAn optimized 1-bit quantized version of c4ai/command-a-03-2025 achieving 6.7x compression with enhanced performance through advanced quantization optimization techniques.\n\nKey Features\nExtreme Compression: 6.7× smaller (207GB → 30.2GB, -85%), runs even on a single GPU (30B on A100 80GB).\nEnhanced Performance: Onebit quantization, enhanced by Fujitsu QEP & QQA.\nInference Speed Up: Faster inference via \"Bitlinear computation\".\nModel Details\nBase Model: c4ai/command-a-03-2025\nQuantization Method: OneBit with Fujitsu QEP/QQA optimization\nQuantization Bits: 1-bit for layers 0-61, FP16 for last 2 layers\nOptimization Techniques: Fujitsu QEP, QQA\nCompatible Hardware: Single GPU (recommended: >= 40GB VRAM)" | 2025-11-29T00:16:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p9ai79/has_anyone_had_this_large_bitnet_model_left/ | Thin_Freedom3201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9ai79 | false | null | t3_1p9ai79 | /r/LocalLLaMA/comments/1p9ai79/has_anyone_had_this_large_bitnet_model_left/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=108&crop=smart&auto=webp&s=c9702f734615e24329c15f8eeed417c878923ff1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=216&crop=smart&auto=webp&s=3619c01da6288a32354c035ff67320a8b68e4a93', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=320&crop=smart&auto=webp&s=d4851e70a6eeac9e5f7a2fefe7ce759b9579ff68', 'width': 320}, {'height': 345, 'url': 
'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=640&crop=smart&auto=webp&s=c9b22cadacf5eef92799893b8ad6a14f7192e797', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=960&crop=smart&auto=webp&s=98efc2e9d9aa54b299c450b156446de987ac7694', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?width=1080&crop=smart&auto=webp&s=599083bf2c8d4b15721cb3ea2b686cf1591ad2f5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/3kxcithLSwwc1w88-miuahbWyvGpvq6igXvZpNhhxyc.png?auto=webp&s=8798aa7f6c5343004c4870b4905cac94a32b96f9', 'width': 1200}, 'variants': {}}]} |
A Tribute to MetaAI and Stability AI - 2 Giants Who Brought us so Much Joy... And, 2025 is the Year they Die... So Sad!😢 | 204 | I mean, this sub and its amazing community wouldn't be here if it were not for Stability AI and Stable Diffusion. I personally created an account on Reddit just so I could join r/LocalLLaMA and r/StableDiffusion. I remember the first day I tried SD1.4 on my shiny new RTX 3070 Ti. I couldn't contain my excitement as I was going through Aitrepreneur’s video on how to install AUTOMATIC1111.
I never had Conda or PyTorch installed on my machine before. There was no ChatGPT to write me a guide on how to install everything or troubleshoot a failure. I followed Nerdy Rodent's videos on possible issues I could face, and I heavily relied on this sub for learning.
Then, I remember the first image I generated. That first one is always special. I took a few minutes to think of what I wanted to write, and I went for "Lionel Messi riding a bicycle." (Damn, I feel so embarrassed now that I am writing this. Please don't judge me!).
I cannot thank Stability AI's amazing team enough for opening a new world for me—for us. Every day, new AI tutorials would drop on YouTube, and every day, I was excited. I vividly remember the first Textual Inversion I trained, my first LoRA, and my first model finetune on Google Colab. Shortly after, SD 1.5 dropped. I never felt closer to YouTubers before; I could feel their excitement as they went through the material. That excitement felt genuine and was contagious.
And then, the NovelAI models were leaked. I downloaded the torrent with all the checkpoints, and the floodgates for finetunes opened. Do you guys remember Anything v3 and RevAnime? Back then, our dream was simple and a bit naive: we dreamed of the day where we would run Midjourney v3-level image quality locally 🤣.
Fast forward 6 months, and Llama models were leaked (7B, 13B, 33B, and 65B) with their limited 2K context window. Shortly after, Oobabooga WebUI was out and was the only frontend you could use. I could barely fit Llama 13B in my 8GB of VRAM. GPTQ quants were a pain in the ass. Regardless, running Llama locally always put a smile on my face.
If you are new to the LLM space, let me tell you what our dream was back then: to have a model as good as ChatGPT 3.5 Turbo. Benchmarks were always against 3.5!! Whenever a new finetune dropped, the main question remained: how good is it compared to ChatGPT? As a community, we struggled for over a year to get a local model that finally beat ChatGPT (I think it was Mixtral 8x7B).
This brings me to the current time. We have many frontier open-source models both in LLM and image/video generation, and neither Meta nor Stability AI made any of them. They both shot themselves in the foot and then effectively committed suicide. They could've owned the open-source space, but for whatever reason, they botched that huge opportunity. Their work contributed so much to the world, and it saddens me to see that they have already sailed into the sunset. Did you know that the first works by DeepSeek and other Chinese labs were heavily built upon the Llama architecture? They learned from Llama and Stable Diffusion, and in 2025, they just killed them.
I am sorry if I seem emotional, because I am. About 6 months ago, I deleted the last Llama-based model I had. 3 months ago, I deleted all SD1.5-based models. And with the launch of the Z-model, I know that soon I will be deleting all Stable Diffusion-based models again. If you had told me 3 years ago that by 2025 both Meta and Stability AI would disappear from the open-source AI space, I wouldn't have believed you in a million years. This is another reminder that technology is a ruthless world.
What are your thoughts? Perhaps you can share your emotional experiences as well. Let this post be a tribute to two otherwise awesome AI labs. | 2025-11-29T00:15:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p9ah2v/a_tribute_to_metaai_and_stability_ai_2_giants_who/ | Iory1998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9ah2v | false | null | t3_1p9ah2v | /r/LocalLLaMA/comments/1p9ah2v/a_tribute_to_metaai_and_stability_ai_2_giants_who/ | false | false | self | 204 | null |
Deepseek Unchained? | 2 | Hi guys, I'm pretty ignorant. I was looking for a local LLM to run in LM Studio that will do and answer what I ask it to without the stupid lectures and "sorry I can't do that". I want privacy and something to run locally, but compared to what's available online I just can't make the move. I came across "Deepseek Unchained" and read some claims that Deepseek may collect information, but what doesn't?
What I am concerned with is that I came across a tool for "completely removing" Deepseek Unchained, and that is what I am asking about. What need is there for something like that? It's the first time I have seen a tool for removing a local LLM. | 2025-11-29T00:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1p9aa37/deepseek_unchained/ | muffinnmannn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9aa37 | false | null | t3_1p9aa37 | /r/LocalLLaMA/comments/1p9aa37/deepseek_unchained/ | false | false | self | 2 | null |
Current local models that work well as coding agents | 1 | So I've been using Copilot and Windsurf for work, and they do actually help.
So since I use local LLMs a lot I was facing the question:
Can any local model rival the tools I'm currently paying for in terms of quality?
I have a 3080ti with 12gigs of vram and 128gigs of ddr4 ram
Anything that I can use locally? | 2025-11-28T23:58:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p9a33x/current_local_models_that_work_well_as_coding/ | yehiaserag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9a33x | false | null | t3_1p9a33x | /r/LocalLLaMA/comments/1p9a33x/current_local_models_that_work_well_as_coding/ | false | false | self | 1 | null |
Run llama-server with a model other than gpt-oss-120b to work with cline | 1 | I have had no success running any model other than gpt-oss; the GPU hardware I have is an NVIDIA DGX Spark, on which I run llama-server. On my workstation, I use VS Code + cline.
I have a cline.gbnf file below.
root ::= analysis? start final .+
analysis ::= "<|channel|>analysis<|message|>" ( [^<] | "<" [^|] | "<|" [^e] )* "<|end|>"
start ::= "<|start|>assistant"
final ::= "<|channel|>final<|message|>"
This grammar file works with gpt-oss-120b
My question is about models other than gpt-oss, like qwen3-coder-30b or Qwen-Next-80B-A3B-Instruct. How do I make them work with cline? Is there a cline.gbnf for qwen3? | 2025-11-28T23:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p9a2v5/run_llamaserver_with_a_model_other_than/ | Mean-Sprinkles3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9a2v5 | false | null | t3_1p9a2v5 | /r/LocalLLaMA/comments/1p9a2v5/run_llamaserver_with_a_model_other_than/ | false | false | self | 1 | null |
Do you think scaling laws are getting a (practical) wall? | 1 | given this observation (The Era of Easy AI Progress Is Ending - Ilya Sutskever): https://www.youtube.com/shorts/pLHWn5SQvLM | 2025-11-28T23:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p99s73/do_you_think_scaling_laws_are_getting_a_practical/ | pier4r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p99s73 | false | null | t3_1p99s73 | /r/LocalLLaMA/comments/1p99s73/do_you_think_scaling_laws_are_getting_a_practical/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'b_aV7uvqDQYvy_yBurGT-aamvwBo4rNRhQbDPg11rzg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/b_aV7uvqDQYvy_yBurGT-aamvwBo4rNRhQbDPg11rzg.jpeg?width=108&crop=smart&auto=webp&s=d1bf5e039858c1ea1541513aa6103b2ba6077948', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/b_aV7uvqDQYvy_yBurGT-aamvwBo4rNRhQbDPg11rzg.jpeg?width=216&crop=smart&auto=webp&s=b3a571616a6680f4ba6bf9f8c4e33785a3124818', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/b_aV7uvqDQYvy_yBurGT-aamvwBo4rNRhQbDPg11rzg.jpeg?width=320&crop=smart&auto=webp&s=98f470d782ed25e87062b3b9f57d9a4555c52f92', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/b_aV7uvqDQYvy_yBurGT-aamvwBo4rNRhQbDPg11rzg.jpeg?auto=webp&s=5343b37da99ef66b34efa186049c2b04e5600eb2', 'width': 480}, 'variants': {}}]} |
llama-pack: LLM Server wrapper that does some simple analysis to copy models to SSD | 1 | [https://github.com/jaggzh/llama-pack](https://github.com/jaggzh/llama-pack)
My SSD doesn't have much room available, so I wanted something that would be convenient to run my models, like "llama-pack myfavorite" and it'll run it. It'll store your CLI options (if you tell it to with --store / --st) for your server. At present, I only use llama.cpp but I tried to keep it generalized.
The logic's a bit simple right now, but if i'm running a model a few times, it'll decide to put it on ssd for the next run. It automatically expires them from the ssd if there's not enough space for one that's moved up in the ranks.
https://preview.redd.it/vxgltujuz24g1.png?width=1116&format=png&auto=webp&s=a853bd34d8a38edf053a986110c8bce38add51f3
Sorry I only got one screenshot in so far:
It'll also let you set ports if you end up running multiple models in parallel. and you can kill them from cli with -k as well. and --status / --st to check out the current server(s) status | 2025-11-28T23:20:58 | https://www.reddit.com/r/LocalLLaMA/comments/1p999s9/llamapack_llm_server_wrapper_that_does_some/ | jaggzh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p999s9 | false | null | t3_1p999s9 | /r/LocalLLaMA/comments/1p999s9/llamapack_llm_server_wrapper_that_does_some/ | false | false | 1 | null | |
Running an 80B model on a 3090 is cool and all, but... | 5 | [This was one prompt on the thinking model. ONE.](https://preview.redd.it/oze6b20xw24g1.png?width=374&format=png&auto=webp&s=47e0c843191584ed6feeee12592226fc1e5af4ed)
I have a personal set of writing prompts for generating simple and complex stories, mostly focusing on a Japanese light novel style of writing. Decided to try it out with a Q4\_K\_M and CPU offloading and... well, the image speaks for itself.
Also, I heard the CUDA support is not fully done yet? Should I expect a speed up at some point? | 2025-11-28T23:12:10 | https://www.reddit.com/r/LocalLLaMA/comments/1p992qc/running_an_80b_model_on_a_3090_is_cool_and_all_but/ | olaf4343 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p992qc | false | null | t3_1p992qc | /r/LocalLLaMA/comments/1p992qc/running_an_80b_model_on_a_3090_is_cool_and_all_but/ | false | false | 5 | null | |
Built Agentica: Free Deca + Open Source models in a Cline fork | 0 | Forked Cline to add Deca, our coding-optimized models that are free for everyone. Same interface you know, but with free opensource models. All the usual paid options (GPT-5.1, Gemini 3) still there if you need them.
Download it at: https://github.com/GenLabsAI/Agentica/releases/tag/v0.0.1 Demo login: agentica@genlabs.dev / agentica@123 (Deca-only) Or create a free account.
It's very rough so expect it to have a some bugs. I hope to evolve this into a more "middle grounds" for open-source coding agents: Most of the opensource coding agents are expensive and pay-as-you-go, while most of the proprietary ones are subscription based. | 2025-11-28T23:05:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p98xa4/built_agentica_free_deca_open_source_models_in_a/ | GenLabsAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p98xa4 | false | null | t3_1p98xa4 | /r/LocalLLaMA/comments/1p98xa4/built_agentica_free_deca_open_source_models_in_a/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=108&crop=smart&auto=webp&s=e6e48c9c748a1903857bc5991fc4b2c6df6a592e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=216&crop=smart&auto=webp&s=407cb1d1ec4478e6bbf6f4f302d47e7fe037f126', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=320&crop=smart&auto=webp&s=9f70343942b5b29b4870550b4297076143dad288', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=640&crop=smart&auto=webp&s=b738430d27e9728744200f508d74c784fbcc2924', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=960&crop=smart&auto=webp&s=42b72c70945b8f3fc093e39eec2f0204207bd89a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?width=1080&crop=smart&auto=webp&s=bd1e1f5c184c8b2ac6b87b76ef24ab6dc41f22f3', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/EBA-rSI-BqIgeBL2CL2-5P-K_ysDBs9-NIU7orRm6T8.png?auto=webp&s=02837bb69088e8747193221d5dc27339b052be92', 'width': 1200}, 'variants': {}}]} |
Building AI Agent for DevOps Daily business in IT Company | 0 | I’m a DevOps Specialist working in an IT company, mostly dealing with Terraform, Ansible, GitHub Actions, OCI cloud deployments and post-deployment automation.
I’ve recently joined this course (Huggin face's AI Agents Course) because I’d love to build an internal AI agent inspired by Anthropic’s “Computer Use” — not for GUI automation, but for creating a sandboxed execution environment that can interact with internal tools, repositories, and workflows.
In my company external AI tools (e.g., Amazon Q Developer) are heavily restricted, so the only realistic path is developing an in-house agent that can safely automate parts of our daily DevOps tasks.
My idea is to start small (basic automations), then iterate until it becomes a real productivity booster for the whole engineering team.
I’d love to get feedback, ideas, or references to existing solutions, especially: Architecture patterns for safe sandboxed agent environments Examples of agents interacting with infra-as-code pipelines Any open-source projects already moving in this direction Any insight or direction is super appreciated — I really want to bring something impactful to my team.
Thanks in advance! | 2025-11-28T22:29:18 | https://www.reddit.com/r/LocalLLaMA/comments/1p983y8/building_ai_agent_for_devops_daily_business_in_it/ | italianstallion20000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p983y8 | false | null | t3_1p983y8 | /r/LocalLLaMA/comments/1p983y8/building_ai_agent_for_devops_daily_business_in_it/ | false | false | self | 0 | null |
Unlocked LM Studio Backends (v1.59.0): AVX1 & More Supported – Testers Wanted | 17 | **Hello everyone!**
The **latest patched backend versions (1.59.0)** are now out, and they bring **full support for “unsupported” hardware** via a simple patch (see GitHub). Since the last update 3 months ago, these builds have received **major refinements in performance, compatibility, and stability** thanks to optimized compiler flags and upstream work by the llama.cpp team.
Here’s the current testing status:
✅ **AVX1 CPU builds:** working (tested on Ivy Bridge Xeons)
✅ **AVX1 Vulkan builds:** working (tested on Ivy Bridge Xeons + Tesla K40 GPUs)
❓ **AVX1 CUDA builds:** untested (no compatible hardware yet)
❓ **Non-AVX experimental builds:** untested (no compatible hardware yet)
I’m looking for testers to try the newest versions on **different hardware**, especially **non-AVX2 CPUs** and **newer NVIDIA GPUs**, and to share performance results. Testers are also wanted for speed comparisons of the new vs. old CPU backends.
👉 GitHub link: [lmstudio-unlocked-backend](https://github.com/theIvanR/lmstudio-unlocked-backend)
https://preview.redd.it/an6bovjso24g1.png?width=988&format=png&auto=webp&s=06f866662fb8e46735e4d120f7f20ad5473e8f77
https://preview.redd.it/n1wynsz1p24g1.png?width=1320&format=png&auto=webp&s=88888e46f7bbbdbbf767fc6a841c14eddeee9199
Brief install instructions:
\- navigate to backends folder. ex C:\\Users\\Admin\\.lmstudio\\extensions\\backends
\- (recommended for clean install) delete everything except "vendor" folder
\- drop contents from compressed backend of your choice
\- select it in LM Studio runtimes and enjoy. | 2025-11-28T22:26:02 | https://www.reddit.com/r/LocalLLaMA/comments/1p9817t/unlocked_lm_studio_backends_v1590_avx1_more/ | TheSpicyBoi123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9817t | false | null | t3_1p9817t | /r/LocalLLaMA/comments/1p9817t/unlocked_lm_studio_backends_v1590_avx1_more/ | false | false | 17 | null | |
Your local models can now make phone calls! Launching Phone Integration 📞 in Observer | 18 | TL;DR: Observer is an **open-source, free, and local** framework that gives your local models actual powers, like watching your screen/camera/mic, logging to memory, and now **making real phone calls!!** I'm Roy, the solo dev building this, and I would really appreciate your feedback to keep making Observer better :)
Hey r/LocalLLaMA,
Thanks for all the support! seriously, this community has always been incredible. Observer has gone super far due to your support and feedback!!
I'm back with something I think is pretty cool: your local models can now **make actual phone calls.**
Quick Setup:
* Whitelist your number by messaging/calling Observer (to prevent abuse)
* Observer watches your screen/camera via WebRTC
* Your local model (Ollama/llama.cpp) processes what it sees
* New call() function triggers a real phone call when your conditions are met
Random use cases I've used it for:
* That 2-hour render finally finishes → get a call
* Your AFK Minecraft character is about to die → phone rings
* Security camera detects motion → instant call with a description of what it sees.
* Your crypto bot sees something → wake up with specific data of what happened.
* Literally anything you can see on screen → phone call with text2speech
What is Observer AI?
It's a framework I built for this community. Think of it like a super simple MCP server that runs in your browser:
\- Sensors (Screen/Camera/Mic) → Local Models (Ollama/llama.cpp) → Tools (notifications, recordings, memory, code, and **now phone calls**)
The whole thing is free (with some convenient paid tiers to make it sustainable), open-source (MIT license), and runs entirely on your machine. You can try it in your browser with zero setup, or go full local with the desktop app.
Links:
\- GitHub (all the code, open source): [https://github.com/Roy3838/Observer](https://github.com/Roy3838/Observer)
\- Try it without any install: [https://app.observer-ai.com/](https://app.observer-ai.com/)
\- Discord: [https://discord.gg/wnBb7ZQDUC](https://discord.gg/wnBb7ZQDUC)
I'm here to answer questions. What would YOU use this for?
Cheers,
Roy | 2025-11-28T22:18:25 | https://www.youtube.com/shorts/yNu2K6LaTNk | Roy3838 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1p97uuv | false | {'oembed': {'author_name': 'Observer AI', 'author_url': 'https://www.youtube.com/@Observer-AI', 'height': 200, 'html': '<iframe width="113" height="200" src="https://www.youtube.com/embed/yNu2K6LaTNk?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="No one's getting any sleep now \U0001fae9"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/yNu2K6LaTNk/hq2.jpg', 'thumbnail_width': 480, 'title': "No one's getting any sleep now \U0001fae9", 'type': 'video', 'version': '1.0', 'width': 113}, 'type': 'youtube.com'} | t3_1p97uuv | /r/LocalLLaMA/comments/1p97uuv/your_local_models_can_now_make_phone_calls/ | false | false | default | 18 | {'enabled': False, 'images': [{'id': 'nOalRlFoqYIIAoHotT2ulKa0W-tArBvHe2Mb6BSa9vU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/nOalRlFoqYIIAoHotT2ulKa0W-tArBvHe2Mb6BSa9vU.jpeg?width=108&crop=smart&auto=webp&s=58688a0c3e6423cb7c0289ada1e47b5b8e23a30b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/nOalRlFoqYIIAoHotT2ulKa0W-tArBvHe2Mb6BSa9vU.jpeg?width=216&crop=smart&auto=webp&s=344e22d0162ca1c60d7f46d8cfa97e825477fc77', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/nOalRlFoqYIIAoHotT2ulKa0W-tArBvHe2Mb6BSa9vU.jpeg?width=320&crop=smart&auto=webp&s=3394b4f1ae7c7e8c7e6556e7ffe2af8db0354268', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/nOalRlFoqYIIAoHotT2ulKa0W-tArBvHe2Mb6BSa9vU.jpeg?auto=webp&s=31f85d028f90c15202c4d5a194dd1d0ca90dd09c', 'width': 480}, 'variants': {}}]} |
pmp - manage your prompts locally | 10 | [https://github.com/julio-mcdulio/pmp](https://github.com/julio-mcdulio/pmp)
I've been working with LLMs a lot lately and got tired of managing prompts in random text files and copy-pasting them around. So I built \`pmp\` - a simple cli tool for managing prompts with versioning and pluggable storage backends.
There are quite a few products out there like mlflow and langfuse, but they come with a lot of bells and whistles and have complex deployments with a web frontend. I just wanted something simple and lightweight with no dependencies.
$ pmp add code-reviewer --content "Review this code for bugs and improvements" --tag "code,review" --model "gpt-4"
prompt "code-reviewer" version 1 created
$ pmp get code-reviewer
Review this code for bugs and improvements
$ pmp update code-reviewer --content "Review this code thoroughly for bugs, security issues, and improvements"
prompt "code-reviewer" version 2 created
$ pmp list --tag code
code-reviewer
summarize
I've also added support for a [dotprompt](https://github.com/google/dotprompt) storage backend, and I'm planning to add support for different execution backends which will let you run your prompts using tools like llm, gemini cli and openai-cli.
Interested to hear what you think! | 2025-11-28T22:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p97lqq/pmp_manage_your_prompts_locally/ | vivis-dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p97lqq | false | null | t3_1p97lqq | /r/LocalLLaMA/comments/1p97lqq/pmp_manage_your_prompts_locally/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=108&crop=smart&auto=webp&s=86f13e4473fc7a80e192e1748742c4da613df763', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=216&crop=smart&auto=webp&s=07032d1f4644ed48bbac892df076c4c9a58aed34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=320&crop=smart&auto=webp&s=be23c86509ecccc94b3c1537df1b62a8058760e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=640&crop=smart&auto=webp&s=92f569505e1c7df1a1874e2f4a54490e53afb884', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=960&crop=smart&auto=webp&s=72300d61c2e26c03b808d5e9186d2b6d037d34e2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?width=1080&crop=smart&auto=webp&s=1d88e511a3d0ce7df764c1bf60565bea3e4710da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wZAJSSNYpxZHlekcrZnsWdT37EzJ_72RYOtHOHKv0Xw.png?auto=webp&s=b27115d52476d5650b81301add7308f49a2ff53a', 'width': 1200}, 'variants': {}}]} |
Could our own local, agentic AI projects become unintentional security risks? | 0 | Signal's president is warning that Big Tech's push for agentic AI is a massive security risk, which makes sense for massive, centralized systems. However, it made me wonder about our own community's work with local models and frameworks like AutoGen. As we build agents capable of executing tasks and interacting with the web, are we considering the security implications? It seems easy for a poorly configured or exploited local agent to become part of a botnet or cause other issues. Has anyone here implemented specific sandboxing or security measures for your agentic projects? I'd love to hear how others are approaching this to keep things safe. | 2025-11-28T21:53:24 | https://www.reddit.com/r/LocalLLaMA/comments/1p979ai/could_our_own_local_agentic_ai_projects_become/ | avisangle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p979ai | false | null | t3_1p979ai | /r/LocalLLaMA/comments/1p979ai/could_our_own_local_agentic_ai_projects_become/ | false | false | self | 0 | null |
Looking for a local AI tool that can extract any info from high-quality sources (papers + reputable publications) with real citations | 7 | I’m trying to set up a fully local AI workflow (English/Chinese) that can dig through *both* scientific papers *and* reputable publications things like Bloomberg, Economist, reputable industry analyses, tech reports, etc.
The main goal:
I want to automatically extract any specific information I request, not just statistics, but *any data*, like:
* numbers
* experimental details
* comparisons
* anything else I ask for
And the most important requirement:
The tool must always give *real citations* (article, link, page, paragraph) so I can verify every piece of data. No hallucinated facts.
Ideally, the tool should:
* run 100% locally
* search deeply and for long periods
* support Chinese + English
* extract structured or unstructured data depending on the query
* keep exact source references for everything
* work on an **RTX 3060 12GB**
Basically, I’m looking for a local “AI-powered research engine” that can dig through a large collection of credible sources and give me trustworthy, citation-backed answers to complex queries.
Has anyone built something like this?
What tools, models, or workflows would you recommend for a 12GB GPU? | 2025-11-28T21:27:53 | https://www.reddit.com/r/LocalLLaMA/comments/1p96n9d/looking_for_a_local_ai_tool_that_can_extract_any/ | Inflation_Artistic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p96n9d | false | null | t3_1p96n9d | /r/LocalLLaMA/comments/1p96n9d/looking_for_a_local_ai_tool_that_can_extract_any/ | false | false | self | 7 | null |
Verix Soul - Self-hosted AI with "soul recovery" system (Llama 3.2, Mistral 7B, Phi-3) | 0 | Built a self-hosted AI system focused on privacy and portability.
\## Key Features
\*\*Models supported\*\*:
\- Llama 3.2 (3GB) - Runs on Raspberry Pi 4
\- Mistral 7B (4GB) - PC/Laptop
\- Phi-3 (2GB) - Ultra-lightweight
\*\*Privacy\*\*:
\- AES-256-GCM encryption for all conversations
\- No external API calls (100% local)
\- "Soul recovery" with secret questions
\- Encrypted backups
\*\*Installation\*\*:
git clone [https://github.com/antigravityx/verix-soul.git](https://github.com/antigravityx/verix-soul.git)
./install.sh # Auto-detects hardware, downloads model
./start-soul.sh
\*\*Unique feature\*\*: Designed to run on embedded devices (drones, cars, robots) using Raspberry Pi Zero 2 W.
\## Architecture
\- Backend: FastAPI + llama-cpp-python
\- Frontend: Minimalist web UI
\- Database: SQLite with encryption
\- Deployment: Docker
\## Use Cases
\- Desktop AI assistant
\- Drone companion (Raspberry Pi Zero + camera)
\- Car copilot (Raspberry Pi + GPS)
\- Robot pet (Raspberry Pi + motors)
GitHub: [https://github.com/antigravityx/verix-soul](https://github.com/antigravityx/verix-soul)
Feedback welcome! Especially on model optimization for Raspberry Pi. | 2025-11-28T21:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/1p96a5l/verix_soul_selfhosted_ai_with_soul_recovery/ | LeftConversation6019 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p96a5l | false | null | t3_1p96a5l | /r/LocalLLaMA/comments/1p96a5l/verix_soul_selfhosted_ai_with_soul_recovery/ | false | false | self | 0 | null |
Free $20 Claude Credit | 0 | signup with this link to get $20 in claude credit with 60 rpm.
Use Claude AI · Valid for 60 days · Instantly available upon registration
What Can You Do With It?
Program with the Claude Code CLI tool (essential for developers)
Chat with Claude on the website (Web Chat)
Integrate with your applications via API
Eligibility
You must register with one of the following email domains:
@gmail.com @qq.com @vip.qq.com
[link to signup](https://www.ohmygpt.com/i/V4R6EJ29) | 2025-11-28T21:05:33 | https://www.reddit.com/r/LocalLLaMA/comments/1p963zc/free_20_claude_credit/ | texh89 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p963zc | false | null | t3_1p963zc | /r/LocalLLaMA/comments/1p963zc/free_20_claude_credit/ | false | false | self | 0 | null |
I stress-tested 9 major LLMs on governance. The result: OpenAI & xAI hallucinated evidence to dismiss me. DeepSeek & Llama engaged. (Full Logs + DOI) | 1 | I spent the weekend running a controlled experiment across 9 vendors using a governance critique prompt.
* **The Result:** A 45% fail rate. OpenAI and xAI didn't just refuse; they **fabricated research timelines** (claiming I wrote the paper in July 2025) to delegitimize the prompt.
* DeepSeek and Llama 3 engaged constructively.
* Here is the full PDF with logs: https://zenodo.org/records/17754943 | 2025-11-28T20:56:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p95vp1/i_stresstested_9_major_llms_on_governance_the/ | aguyinapenissuit69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p95vp1 | false | null | t3_1p95vp1 | /r/LocalLLaMA/comments/1p95vp1/i_stresstested_9_major_llms_on_governance_the/ | false | false | self | 1 | null |
Local AI for small biz owner | 0 | My friend runs a restaurant and his wife runs a small coffee shop; both are small businesses. They sometimes ask me to review contracts, an area I know nothing about.
This weekend I found a PC that no one uses any more but that seems OK for setting up a small local model to help them proofread contracts. Which model can I use? If the only need is to read some documents for a small business, does it really need the latest knowledge?
The hardware specs.
Intel i5-9500,
32GB RAM
256GB SSD
Nvidia 1660Ti 6GB.
| 2025-11-28T20:53:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p95tvb/local_ai_for_small_biz_owner/ | binyang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p95tvb | false | null | t3_1p95tvb | /r/LocalLLaMA/comments/1p95tvb/local_ai_for_small_biz_owner/ | false | false | self | 0 | null |
Z_Image benchmark with simulating VRAM Limits on RTX 5090 & 3090 | 15 | Hi everyone,
I recently got my hands on an **RTX 5090 (32GB)** and also have an **RTX 3090 (24GB)**. I conducted an experiment to simulate the VRAM capacity of the upcoming 50-series lineup (5080, 5070, etc.) and older 30-series cards.
**The main goal** was to see what happens when VRAM runs out (OOM) and the system starts swapping to System RAM (DDR5). Specifically, I wanted to measure the performance penalty.
**⚠️ Disclaimer:** This test only limits **VRAM Capacity**. It does **NOT** simulate the raw compute power (CUDA cores) of lower-tier cards.
* *e.g., The "Simulated 5060" result shows how a 5090 performs when choked by 8GB VRAM, not the actual speed of a real 5060.*
# Test Environment
* **GPU:** RTX 5090 (32GB) & RTX 3090 (24GB)
* **CPU:** Ryzen 9 7900X
* **RAM:** DDR5 96GB (6000MHz)
* **PSU:** 1600W
* **Software:** ComfyUI (Provided Z\_Image Workflow from its site/1024x1024 generation)
* **OS:** Windows 11
# 1. RTX 3090 Results (Simulating 30-series VRAM tiers)
*Comparing Native 24GB vs. Artificial Limits*
|**Simulated Tier**|**VRAM Limit**|**Cold Start (s)**|**Warm Gen (s)**|**System RAM (DRAM) Usage**|**Real VRAM Used**|
|---|---|---|---|---|---|
|**RTX 3090 (Native)**|**24 GB**|19.07s|**9.71s**|Negligible|20 GB|
|16GB Tier (4080 / 4070 Ti S)|16 GB|20.84s|**10.43s**|+11 GB|13 GB|
|3080 / 4070 Ti (12G)|12 GB|22.92s|**13.82s**|+15 GB (Generation)|11.1 GB|
|3080 (10G)|10 GB|25.38s|**17.04s**|+13 GB (Generation)|9.1 GB|
|3070 / 3060 Ti|8 GB|27.94s|**20.00s**|**+15 GB (Generation)**|7.0 GB|
**Analysis:** Performance takes a noticeable hit as soon as you drop below 12GB. At 8GB, the generation time doubles compared to the native 24GB environment. However, thanks to the system RAM, it is still usable (didn't crash).
# 2. RTX 5090 Results (Simulating 50-series VRAM tiers)
*Comparing Native 32GB vs. Artificial Limits*
|**Simulated Tier**|**VRAM Limit**|**Cold Start (s)**|**Warm Gen (s)**|**System RAM (DRAM) Usage**|**Real VRAM Used**|
|---|---|---|---|---|---|
|**RTX 5090 (Native)**|**32 GB**|14.47s|**3.44s**|Negligible|22 GB|
|4090|24 GB|10.48s|**3.33s**|Negligible|21 GB|
|5080|16 GB|11.93s|**4.20s**|+12 GB|15.8 GB|
|5070|12 GB|12.11s|**5.07s**|+12.9 GB (Generation)|12.9 GB|
|5060|8 GB|11.70s|**6.19s**|**+21 GB (Generation)**|7 GB|
**Analysis:** The 5090's raw power is insane. Even when limited to 8GB VRAM and forced to pull 21GB from System RAM, **it is still faster (6.19s) than a native 3090 (9.71s).**
*Note again: A real 5060 will be much slower due to fewer CUDA cores. This just proves the 5090's architectural dominance.*
# Key Findings & Analysis
**1. The 5090 is a Monster** With unlimited VRAM, the 5090 is roughly **3x faster** than the 3090 in this workflow. The Blackwell chip is impressive.
**2. The VRAM Bottleneck & System RAM** Based on my data, when VRAM is insufficient (8GB\~12GB range for SDXL), the system offloads about **20GB** of data to the System DRAM.
**3. Speed during Swapping** Both GPUs remained "usable" even when restricted to 8GB, as long as there was enough System RAM. Excluding the cold start, the generation speed was acceptable for local use.
* *However, on the 3090, the slowdown is clearly felt (9s -> 20s).*
* *On the 5090, the brute force computational power masks the swapping latency significantly.*
**4. Oddity** Software VRAM limiting wasn't 100% precise in reporting, likely due to overhead or PyTorch memory management, but the trend is clear.
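If it helps to sanity-check the headline numbers, here's a tiny Python sketch that recomputes the slowdown factors straight from the warm-generation times in my tables (nothing here is measured; it's just the arithmetic):

```python
# Warm generation times (seconds) copied from the tables above.
warm_gen = {
    ("RTX 3090", 24): 9.71,   # native
    ("RTX 3090", 8): 20.00,   # choked to 8 GB
    ("RTX 5090", 32): 3.44,   # native
    ("RTX 5090", 8): 6.19,    # choked to 8 GB
}

def slowdown(card, limited_gb, native_gb):
    """Warm-gen slowdown of a VRAM-limited run vs. the native run."""
    return warm_gen[(card, limited_gb)] / warm_gen[(card, native_gb)]

# The 3090 roughly doubles its generation time at 8 GB (~2.06x)...
print(f"3090 @ 8GB: {slowdown('RTX 3090', 8, 24):.2f}x slower")
# ...while the 5090's raw compute masks the swapping better (~1.80x).
print(f"5090 @ 8GB: {slowdown('RTX 5090', 8, 32):.2f}x slower")
# Even swapping 21 GB to DRAM, the limited 5090 still beats a native 3090.
print(f"5090@8GB vs native 3090: "
      f"{warm_gen[('RTX 3090', 24)] / warm_gen[('RTX 5090', 8)]:.2f}x faster")
```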
# TL;DR
1. **Z\_Image is efficient:** Great bang for the buck in terms of local generation.
2. **RAM is King:** If you have **32GB+ of System RAM**, even an 8GB VRAM card can run these workflows (albeit slower). It won't crash, it just swaps.
3. **For Speed:** If you want snappy generation without waiting, you probably want a **70-class or higher** card (12GB+ VRAM).
4. **5090 Reaction:** It's insanely fast...
https://preview.redd.it/izgg24dj424g1.png?width=1024&format=png&auto=webp&s=b911124200bb5336cd1246efee6c539fff1be3b5
*Test result example*
*This is the translated version of my writing in Korean* | 2025-11-28T20:25:39 | https://www.reddit.com/r/LocalLLaMA/comments/1p9563c/z_image_benchmark_with_simulating_vram_limits_on/ | namjuu_ka09114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p9563c | false | null | t3_1p9563c | /r/LocalLLaMA/comments/1p9563c/z_image_benchmark_with_simulating_vram_limits_on/ | false | false | 15 | null | |
Exploring wallet-native payments for browser-based AI agents | 1 | Lately I’ve been experimenting with browser-based local agents and one limitation keeps showing up: payments.
Agents can automate workflows, call tools and fetch data, but the moment a payment is required, everything breaks because credit cards and human logins don’t work for autonomous software.
I’ve been testing an early approach that uses wallet-native payments for agents inside browser-based local workflows. It’s still early, but it’s interesting to see how agents could eventually pay for APIs, data or services on their own.
I wrote a short technical breakdown here for anyone interested in the architecture and flow:
👉 [**https://blog.shinkai.com/from-chat-to-commerce-how-agents-use-x402-in-shinkai/**](https://blog.shinkai.com/from-chat-to-commerce-how-agents-use-x402-in-shinkai/) | 2025-11-28T20:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p94kxo/exploring_walletnative_payments_for_browserbased/ | Wide-Extension-750 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p94kxo | false | null | t3_1p94kxo | /r/LocalLLaMA/comments/1p94kxo/exploring_walletnative_payments_for_browserbased/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM', 'resolutions': [{'height': 90, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?width=108&crop=smart&auto=webp&s=032cf88bff06a41f06c63b4b7e272f41c70141f5', 'width': 108}, {'height': 180, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?width=216&crop=smart&auto=webp&s=15e7480992c25c3ad48791a051a061500a94616a', 'width': 216}, {'height': 268, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?width=320&crop=smart&auto=webp&s=0e17cb67b0da08ee42b2373186e88a0b84383738', 'width': 320}, {'height': 536, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?width=640&crop=smart&auto=webp&s=f3ed304df19197ff512427362550a57c4ec9140b', 'width': 640}, {'height': 804, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?width=960&crop=smart&auto=webp&s=19d4fa629c7b12e5a0f33bb87d4d10e086c3df4a', 'width': 960}], 'source': {'height': 853, 'url': 'https://external-preview.redd.it/EPs6Ge1m4BJ6kwluhU_hEGzXDnZn0XsTVtoNEmt0GUM.png?auto=webp&s=25263e2bec8c8ca4655c7c4ce79a567f45e45419', 'width': 1018}, 'variants': {}}]} |
why is productionizing agents such a nightmare? (state/infra disconnect) | 0 | I’ve spent the last month trying to move a multi-agent workflow from “works on my machine” to an actual production environment, and I feel like I’m losing my mind.
The issue is not the models (Llama 3/Claude are fine); the issue is the plumbing. I'm using standard infra (AWS/Postgres) and standard agent frameworks (LangChain/CrewAI), but they feel like they hate each other.
* My agents keep losing state/context because standard containers are stateless.
* Debugging a loop that ran up $50 in tokens is impossible because my logs don't match the agent's "thought process."
* I am writing more glue code to manage connections and timeouts than actual agent logic.
I’m seriously considering building a dedicated runtime/hybrid platform just to handle this: basically merging the infra primitives (db, auth) directly with the orchestration so I don't have to manage them separately. Think of it like a stateful container specifically for agents.
Has anyone else solved this? Or am I just overcomplicating the stack? I’m thinking of hacking together an open-source prototype. If I put it on GitHub, would anyone actually care to try it, or are you guys happy with the current tools? | 2025-11-28T19:58:44 | https://www.reddit.com/r/LocalLLaMA/comments/1p94hun/why_is_productionizing_agents_such_a_nightmare/ | Substantial_Guide_34 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p94hun | false | null | t3_1p94hun | /r/LocalLLaMA/comments/1p94hun/why_is_productionizing_agents_such_a_nightmare/ | false | false | self | 0 | null |
Best models / maybe cheap rig to get into local AI? | 0 | Hey all, I found threads about this but they all seemed to be from 4-6 months ago.
To be completely honest I've slept on even most browser AI usage but have been seeing really cool things with some of the local models recently. Obv. I'm not expecting to run gemini3 locally. My main rig has a 3070 and a Ryzen 5800x, am I SOL for any of the new(er) models or would I be better off building something separate? Obviously I wouldn't wanna burn money but lots of people mention the m3/m4 mac minis.
| 2025-11-28T19:48:12 | https://www.reddit.com/r/LocalLLaMA/comments/1p948qn/best_models_maybe_cheap_rig_to_get_into_local_ai/ | Flashy_Oven_570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p948qn | false | null | t3_1p948qn | /r/LocalLLaMA/comments/1p948qn/best_models_maybe_cheap_rig_to_get_into_local_ai/ | false | false | self | 0 | null |
Qwen3-Next: Did a quant with extended context | 17 | For anyone interested, I made an MXFP4 quant with the context extended from 256k to 1M, with YaRN as seen on [unsloth's repo](https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF#processing-ultra-long-texts):
[https://huggingface.co/noctrex/Qwen3-Next-80B-A3B-Instruct-1M-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/Qwen3-Next-80B-A3B-Instruct-1M-MXFP4_MOE-GGUF)
[https://huggingface.co/noctrex/Qwen3-Next-80B-A3B-Thinking-1M-MXFP4\_MOE-GGUF](https://huggingface.co/noctrex/Qwen3-Next-80B-A3B-Thinking-1M-MXFP4_MOE-GGUF)
To enable it, run llama.cpp with options like:
`--ctx-size 0 --rope-scaling yarn --rope-scale 4`
ctx-size 0 sets it to 1M context, else set a smaller number like 524288 for 512k
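If you're wondering where these numbers come from, here's a minimal Python sketch, assuming the model's native context is 256k (262144 tokens) and that YaRN simply multiplies it by the rope scale factor:

```python
# Native trained context of Qwen3-Next (assumption: 256k = 262144 tokens).
NATIVE_CTX = 262144

def yarn_ctx(rope_scale):
    """Effective context after YaRN stretching by the rope scale factor."""
    return NATIVE_CTX * rope_scale

print(yarn_ctx(4))  # --rope-scale 4 -> 1048576 tokens (the "1M" context)
print(yarn_ctx(2))  # a milder stretch gives 524288 (the 512k mentioned above)
```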
You can use also as normal if you don't want the extended context. | 2025-11-28T19:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/1p93syj/qwen3next_did_a_quant_with_extended_context/ | noctrex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p93syj | false | null | t3_1p93syj | /r/LocalLLaMA/comments/1p93syj/qwen3next_did_a_quant_with_extended_context/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=108&crop=smart&auto=webp&s=ff45856f4371adcc16d5b8ce21e5ed3ef588bdca', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=216&crop=smart&auto=webp&s=54b4820fd97cb95cbbd3f97885f31fcc4bd605a4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=320&crop=smart&auto=webp&s=a82f6e9d0f3ac4a40e55dbbedb89ae0f2f6a1444', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=640&crop=smart&auto=webp&s=8b518c02f5843d8adaaa30fd9ccf229f596ac77a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=960&crop=smart&auto=webp&s=f43031d6d04a3ad1efb376f07f62a673f86503d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?width=1080&crop=smart&auto=webp&s=10f7cb0ae550dea485bc512a3ff95934b9c9055a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ble3gnyoRHIxGbCkynVYdB5oBvepM5IUsQgkKmTQPvE.png?auto=webp&s=9cd7d1ba2780821e171df0480931a24110296aee', 'width': 1200}, 'variants': {}}]} |
Benchmarking LLM Inference on RTX PRO 6000 vs H100 vs H200 | 56 | Hi LocalLlama community. I present an LLM inference throughput benchmark for RTX PRO 6000 WK vs H100 vs H200 vs L40S GPUs, based on the vllm serve and vllm bench serve benchmarking tools, to understand the cost efficiency of RTX PRO 6000 vs previous-generation datacenter GPUs for LLM inference. Pro 6000 is significantly cheaper as it has the latest Blackwell architecture, but it has slower GDDR memory and lacks NVLink.
[Full article on Medium](https://medium.com/p/fde1798571a1)
[Non-medium link](https://www.cloudrift.ai/blog/benchmarking-rtx6000-vs-datacenter-gpus)
# Benchmarking Setup
The hardware configurations used:
* 1xPRO6000; 1xH100; 1xH200; 2xL40s
* 8xPRO6000; 8xH100; 8xH200
**I have optimized the benchmark setup for throughput.** vLLM serves the models. The model is split across multiple GPUs using the --tensor-parallel-size vLLM option, if needed. I run as many vLLM instances as possible, with an NGINX load balancer on top to distribute requests across them and maximize throughput (replica parallelism). For example, if only two GPUs are required to run the model on a 4-GPU machine, I run two vLLM instances with --tensor-parallel-size=2 behind an NGINX load balancer. If all four GPUs are required, then a single vLLM instance with --tensor-parallel-size=4 is used.
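The replica-parallelism layer described above can be sketched as a minimal NGINX config (ports and instance count are assumptions for illustration, not the exact setup used in the benchmark):

```nginx
# Round-robin across two vLLM instances, each serving the same model
# (e.g. started with --tensor-parallel-size=2) on its own port.
upstream vllm_replicas {
    least_conn;                  # send each request to the least-busy replica
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}
server {
    listen 8080;
    location / {
        proxy_pass http://vllm_replicas;
        proxy_read_timeout 600s; # long generations need a long timeout
    }
}
```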
The **vllm bench serve** tool is used for benchmarking with random data and a sequence length of 1000. The number of concurrent requests is set between 256 and 512 to ensure the LLM's token-generation capacity is saturated.
I have benchmarked three models to better understand the effect of PCIe communication on the 8xPro6000 server vs. NVLink on the H100/H200.
Here is the model selection and the logic behind it:
1. **GLM-4.5-Air-AWQ-4bit (fits 80GB).** Testing single-GPU performance and maximum throughput with replica scaling on 8 GPU setups. No PCIE bottleneck. The Pro 6000 should demonstrate strong results thanks to Blackwell native support for FP4.
2. **Qwen3-Coder-480B-A35B-Instruct-AWQ (fits 320GB).** This 4-bit-quantized model fits into 4 GPUs. Some PCIe communication overhead in Pro 6000 setups may reduce performance relative to NVLink-enabled datacenter GPUs.
3. **GLM-4.6-FP8 (fits 640GB).** This model requires all eight GPUs. PCIe communication overhead expected. The H100 and H200 configurations should have an advantage.
Besides raw throughput, graphs show the serving cost per million tokens for each model on its respective hardware. The rental price is set to $2.09 for Pro6000; $2.69 for H100; $3.39 for H200, and $0.86 for L40S - today's rental prices from Runpod secure cloud.
# Results
**For single-GPU workloads**, RTX PRO 6000 is a clear winner—and arguably an H100 killer. Remarkably, the PRO 6000 with GDDR7 memory outperforms even the H100 SXM with its HBM3e in single-GPU throughput (3,140 vs 2,987 tok/s), while delivering 28% lower cost per token ($0.18 vs $0.25/mtok). The 2xL40S configuration is the least performant and most cost-effective of the bunch.
**For medium-sized models** requiring 2-4 GPUs, PRO 6000 remains competitive. While it loses some ground to NVLink-equipped datacenter GPUs, the cost efficiency stays within the same ballpark ($1.03 vs $1.01/mtok for Qwen3-480B).
**For large models** requiring 8-way tensor parallelism, datacenter GPUs pull ahead significantly. The H100 and H200's NVLink interconnect delivers 3-4x the throughput of PCIe-bound PRO 6000s. The cost efficiency gap is significant: $1.72/mtok for Pro6000 vs $0.72-0.76/mtok for H100/H200.
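The cost-per-token figures follow directly from throughput and rental price; a quick sanity check, using the single-GPU numbers quoted above:

```python
def cost_per_mtok(price_per_hour: float, tokens_per_second: float) -> float:
    """Serving cost in dollars per million generated tokens,
    given the hourly rental price and sustained throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return price_per_hour / tokens_per_hour * 1_000_000

# Single-GPU GLM-4.5-Air-AWQ numbers from the article:
print(round(cost_per_mtok(2.09, 3140), 2))  # RTX PRO 6000 -> 0.18
print(round(cost_per_mtok(2.69, 2987), 2))  # H100 SXM     -> 0.25
```

Both results reproduce the $0.18 vs $0.25/mtok comparison in the text.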
[Price in millidollars, i.e. around $0.2](https://i.redd.it/qyda1ooas14g1.gif)
https://i.redd.it/69a68sbks14g1.gif
https://i.redd.it/rtyvpiars14g1.gif
# Code and Resources
The code is available [here](https://github.com/cloudrift-ai/server-benchmark). Instructions for performing your own benchmark are in the README. You can find the benchmark data in the results folder. | 2025-11-28T19:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1p93r0w/benchmarking_llm_inference_on_rtx_pro_6000_vs/ | NoVibeCoding | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p93r0w | false | null | t3_1p93r0w | /r/LocalLLaMA/comments/1p93r0w/benchmarking_llm_inference_on_rtx_pro_6000_vs/ | false | false | 56 | null | |
Gemma3 27 heretic, lower divergence than mlabonne/gemma3 | 54 | I set out to abliterate Gemma3 27b, wanted to reach or surpass the most popular one and here's the results after 5hr on H100 using heretic.
|Model|KL Divergence|Refusal|
|:-|:-|:-|
|[Google's base model](https://huggingface.co/google/gemma-3-27b-pt)|0 (by definition)|98/100|
|[mlabonne's gemma3](https://huggingface.co/mlabonne/gemma-3-27b-it-abliterated)|0.08|6/100|
|[Heretic gemma3 - v1](https://huggingface.co/coder3101/gemma-3-27b-it-heretic)|0.07|7/100|
|[Heretic gemma3 - v2](https://huggingface.co/coder3101/gemma-3-27b-it-heretic-v2)|0.03|14/100|
**KL Divergence:** Lower is better; roughly a measure of how close the model stays to its original. It is worth noting that lower divergence also means the model holds up better under quantization.
**Refusal:** Lower is better; a measure of how many harmful prompts the model refused. This is calculated based on the presence of tokens such as "sorry", which gives only a rough general measure.
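That token-presence heuristic can be sketched in a few lines (the marker list here is illustrative, not the exact set the tooling uses):

```python
REFUSAL_MARKERS = ("sorry", "i cannot", "i can't", "as an ai")

def looks_like_refusal(response: str) -> bool:
    """Crude substring check: flag a response as a refusal if it contains
    any known refusal phrase. Prone to false positives, which is why the
    post calls this only a general measure."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

print(looks_like_refusal("I'm sorry, but I can't help with that."))  # True
print(looks_like_refusal("Sure - here's how it works."))             # False
```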
I published two versions - one with slightly higher refusal but very low KL divergence, and another almost matching mlabonne's. It is also worth noting that during my testing I couldn't get v2 to refuse any prompts, which suggests it stays much closer to the original model while still not refusing much. | 2025-11-28T19:19:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p93jcu/gemma3_27_heretic_lower_divergence_than/ | coder3101 | self.LocalLLaMA | 54 |
success using CosyVoice on apple m4? | 1 | been trying for a few days now, but i can't seem to come across anything that can help me get cosyvoice & it's web ui working on my m4 mac. i'm not sure if this is the right place to ask but has anyone here gotten it to work on an m4, & if so would you mind showing me how? | 2025-11-28T18:42:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p92l68/success_using_cosyvoice_on_apple_m4/ | nicoleh1999_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p92l68 | false | null | t3_1p92l68 | /r/LocalLLaMA/comments/1p92l68/success_using_cosyvoice_on_apple_m4/ | false | false | self | 1 | null |
GLM Black Friday deal | 0 | # GLM Coding Plan
Black Friday: 50% first-purchase + extra 20%/30% off!
[https://z.ai/subscribe](https://z.ai/subscribe)
Extra 10% by Ref link: [https://z.ai/subscribe?ic=SNS2DYDDXZ](https://z.ai/subscribe?ic=SNS2DYDDXZ) | 2025-11-28T18:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p91v9r/glm_black_friday_deal/ | No-Mountain3817 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91v9r | false | null | t3_1p91v9r | /r/LocalLLaMA/comments/1p91v9r/glm_black_friday_deal/ | false | false | self | 0 | null |
I’ve been thinking about how AI is evolving lately in terms of cost? | 1 | [removed] | 2025-11-28T18:10:35 | https://www.reddit.com/r/LocalLLaMA/comments/1p91ryb/ive_been_thinking_about_how_ai_is_evolving_lately/ | Zestyclose-Put-9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91ryb | false | null | t3_1p91ryb | /r/LocalLLaMA/comments/1p91ryb/ive_been_thinking_about_how_ai_is_evolving_lately/ | false | false | self | 1 | null |
What broke when you tried to take local LLMs to production? | 16 | Curious what people's experience has been going from "Ollama on my laptop" to actually serving models to a team or company.
I keep seeing blog posts about the Ollama → vLLM migration path, GPU memory headaches, cold start times, etc. But I'm wondering how much of that is real vs. content marketing fluff.
For those who've actually tried to productionize local models, what surprised you? What broke? What's your stack look like now?
Trying to separate the signal from the noise here. | 2025-11-28T18:07:31 | https://www.reddit.com/r/LocalLLaMA/comments/1p91p4k/what_broke_when_you_tried_to_take_local_llms_to/ | Defilan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91p4k | false | null | t3_1p91p4k | /r/LocalLLaMA/comments/1p91p4k/what_broke_when_you_tried_to_take_local_llms_to/ | false | false | self | 16 | null |
Will personal local AI ever become good enough to replace cloud AI for most people? | 1 | [removed] | 2025-11-28T18:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1p91npd/will_personal_local_ai_ever_become_good_enough_to/ | Zestyclose-Put-9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91npd | false | null | t3_1p91npd | /r/LocalLLaMA/comments/1p91npd/will_personal_local_ai_ever_become_good_enough_to/ | false | false | self | 1 | null |
Quanta Convert and Quantize AI models | 0 | We at Hugston are working hard to make AI useful for everyone. Today we are releasing Quanta, an application for windows that can convert and quantize AI LLM Models in GGUF.
Available at, [https://github.com/Mainframework/Quanta](https://github.com/Mainframework/Quanta) or [https://hugston.com/uploads/software/Quanta-1.0.0-setup-x64.exe](https://hugston.com/uploads/software/Quanta-1.0.0-setup-x64.exe) (the newer version).
If you like our work you may consider to share it and star it in Github.
Enjoy | 2025-11-28T17:59:30 | Trilogix | i.redd.it | 0 |
What do you think… will local personal AI (running models on your own hardware) eventually replace paid cloud AI for most people? Cloud will never give you free lunch for life, today they are running in losses, but someday it will start charging more money than our pockets can handle!!! | 1 | [removed] | 2025-11-28T17:58:13 | https://www.reddit.com/r/LocalLLaMA/comments/1p91fy5/what_do_you_think_will_local_personal_ai_running/ | Zestyclose-Put-9311 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91fy5 | false | null | t3_1p91fy5 | /r/LocalLLaMA/comments/1p91fy5/what_do_you_think_will_local_personal_ai_running/ | false | false | self | 1 | null |
How I use Claude Code 100% autonomously and using 90% less tokens: Claudiomiro | 1 | [removed] | 2025-11-28T17:56:59 | https://www.reddit.com/r/LocalLLaMA/comments/1p91eqo/how_i_use_claude_code_100_autonomously_and_using/ | TomatilloPutrid3939 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91eqo | false | null | t3_1p91eqo | /r/LocalLLaMA/comments/1p91eqo/how_i_use_claude_code_100_autonomously_and_using/ | false | false | 1 | null | |
MI50 price hike, are they moving inventory at that price? | 6 | I was monitoring the price on ebay, they would go 300$CAD free shipping and now they are 550$CAD+ why the sudden price hike? Are they even selling at this price? Seems like a dick move to me. On another note, there are plenty of rtx 3090 for 800$CAD on marketplace if you are willing to drive around... Why does it suck so much acquiring proper VRAM? | 2025-11-28T17:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1p91eay/mi50_price_hike_are_they_moving_inventory_at_that/ | emaiksiaime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p91eay | false | null | t3_1p91eay | /r/LocalLLaMA/comments/1p91eay/mi50_price_hike_are_they_moving_inventory_at_that/ | false | false | self | 6 | null |
CPU-only LLM performance - t/s with llama.cpp | 36 | How many of you do use CPU only inference time to time(at least rarely)? .... Really missing CPU-Only Performance threads here in this sub.
^(Possibly few of you waiting to grab one or few 96GB GPUs at cheap price later so using CPU only inference for now just with bulk RAM.)
I think bulk RAM(128GB-1TB) is more than enough to run small/medium models since it comes with more memory bandwidth.
My System Info:
Intel Core i7-14700HX 2.10 GHz | **32 GB RAM** | **DDR5-5600** | **65GB/s Bandwidth** |
llama-bench Command: (Used Q8 for KVCache to get decent t/s with my 32GB RAM)
llama-bench -m modelname.gguf -fa 1 -ctk q8_0 -ctv q8_0
CPU-only performance stats (Model Name with Quant - t/s):
Qwen3-0.6B-Q8_0 - 86
gemma-3-1b-it-UD-Q8_K_XL - 42
LFM2-2.6B-Q8_0 - 24
LFM2-2.6B.i1-Q4_K_M - 30
SmolLM3-3B-UD-Q8_K_XL - 16
SmolLM3-3B-UD-Q4_K_XL - 27
Llama-3.2-3B-Instruct-UD-Q8_K_XL - 16
Llama-3.2-3B-Instruct-UD-Q4_K_XL - 25
Qwen3-4B-Instruct-2507-UD-Q8_K_XL - 13
Qwen3-4B-Instruct-2507-UD-Q4_K_XL - 20
gemma-3-4b-it-qat-UD-Q6_K_XL - 17
gemma-3-4b-it-UD-Q4_K_XL - 20
Phi-4-mini-instruct.Q8_0 - 16
Phi-4-mini-instruct-Q6_K - 18
granite-4.0-micro-UD-Q8_K_XL - 15
granite-4.0-micro-UD-Q4_K_XL - 24
MiniCPM4.1-8B.i1-Q4_K_M - 10
Llama-3.1-8B-Instruct-UD-Q4_K_XL - 11
Qwen3-8B-128K-UD-Q4_K_XL - 9
gemma-3-12b-it-Q6_K - 6
gemma-3-12b-it-UD-Q4_K_XL - 7
Mistral-Nemo-Instruct-2407-IQ4_XS - 10
Huihui-Ling-mini-2.0-abliterated-MXFP4_MOE - 58
inclusionAI_Ling-mini-2.0-Q6_K_L - 47
LFM2-8B-A1B-UD-Q4_K_XL - 38
ai-sage_GigaChat3-10B-A1.8B-Q4_K_M - 34
Ling-lite-1.5-2507-MXFP4_MOE - 31
granite-4.0-h-tiny-UD-Q4_K_XL - 29
granite-4.0-h-small-IQ4_XS - 9
gemma-3n-E2B-it-UD-Q4_K_XL - 28
gemma-3n-E4B-it-UD-Q4_K_XL - 13
kanana-1.5-15.7b-a3b-instruct-i1-MXFP4_MOE - 24
ERNIE-4.5-21B-A3B-PT-IQ4_XS - 28
SmallThinker-21BA3B-Instruct-IQ4_XS - 26
Phi-mini-MoE-instruct-Q8_0 - 25
Qwen3-30B-A3B-IQ4_XS - 27
gpt-oss-20b-mxfp4 - 23
So it seems I would get 3-4X performance if I build a desktop with 128GB DDR5 RAM 6000-6600. For example, above t/s \* 4 for 128GB (32GB \* 4). And 256GB could give 7-8X and so on. Of course I'm aware of context of models here.
Qwen3-4B-Instruct-2507-UD-Q8_K_XL - 52 (13 * 4)
gpt-oss-20b-mxfp4 - 92 (23 * 4)
Qwen3-8B-128K-UD-Q4_K_XL - 36 (9 * 4)
gemma-3-12b-it-UD-Q4_K_XL - 28 (7 * 4)
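The extrapolation above is a bandwidth argument, and a rough ceiling on decode speed is memory bandwidth divided by the bytes read per generated token (roughly the model file size for dense models, or the active-expert size for MoE). A sketch, with the file sizes as my assumptions rather than measured values:

```python
def max_decode_tps(bandwidth_gb_s: float, active_gb: float) -> float:
    """Upper bound on tokens/s when decoding is memory-bandwidth bound:
    each token must stream all active weights from RAM once."""
    return bandwidth_gb_s / active_gb

# 65 GB/s laptop vs an assumed ~7.3 GB Q4 file for a 12B dense model:
print(round(max_decode_tps(65, 7.3), 1))   # ~8.9 ceiling; the post measured 7 t/s
# Same bandwidth, assumed ~1.9 GB of active weights for a 3B-active MoE at Q4:
print(round(max_decode_tps(65, 1.9), 1))   # ~34.2 ceiling; Qwen3-30B-A3B measured 27
```

This is why MoE models dominate the CPU-only numbers: far fewer bytes move per token.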
I stopped bothering with 12+B dense models since even Q4 quants bleed tokens in the single digits (e.g., Gemma3-12B at just 7 t/s). But I really want to know the CPU-only performance of 12+B dense models, as it would help me decide how much RAM is needed for an expected t/s. Sharing the list for reference; it would be great if someone shared stats for these models.
Seed-OSS-36B-Instruct-GGUF
Mistral-Small-3.2-24B-Instruct-2506-GGUF
Devstral-Small-2507-GGUF
Magistral-Small-2509-GGUF
phi-4-gguf
RekaAI_reka-flash-3.1-GGUF
NVIDIA-Nemotron-Nano-9B-v2-GGUF
NVIDIA-Nemotron-Nano-12B-v2-GGUF
GLM-Z1-32B-0414-GGUF
Llama-3_3-Nemotron-Super-49B-v1_5-GGUF
Qwen3-14B-GGUF
Qwen3-32B-GGUF
NousResearch_Hermes-4-14B-GGUF
gemma-3-12b-it-GGUF
gemma-3-27b-it-GGUF
Please share your stats with your config (Total RAM, RAM type - MT/s, total bandwidth) & whatever models (quant, t/s) you tried.
And let me know if any changes are needed in my llama-bench command to get better t/s; hope there are a few. Thanks | 2025-11-28T17:40:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p90zzi/cpuonly_llm_performance_ts_with_llamacpp/ | pmttyji | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p90zzi | false | null | t3_1p90zzi | /r/LocalLLaMA/comments/1p90zzi/cpuonly_llm_performance_ts_with_llamacpp/ | false | false | self | 36 | null |
Daisy Chaining MacMinis | 4 | So M4 Prices are really cheap until you try to upgrade any component, I ended up back at $2K for 64Gb of vram vs 4x$450 to get more cores/disk..
Or are people trying to like daisy chain these and distribute across them? (If so, storage still bothers me but whatever..)? AFAIK, ollama isn't there yet, vLLM has not added metal support so llm-d is off the table...
Something like this. [https://www.doppler.com/blog/building-a-distributed-ai-system-how-to-set-up-ray-and-vllm-on-mac-minis](https://www.doppler.com/blog/building-a-distributed-ai-system-how-to-set-up-ray-and-vllm-on-mac-minis) | 2025-11-28T17:29:29 | https://www.reddit.com/r/LocalLLaMA/comments/1p90pkl/daisy_chaining_macminis/ | Gadobot3000 | self.LocalLLaMA | 4 |
Made a little desktop tool | 9 | Though I doubt anyone was asking for such a thing, I ended up making a little AI agent tool that works on Windows XP and up. It's a piece of software for communicating with OpenAI-compatible LLM servers. I've been getting a good bit of use with it on my older systems.
The application (and its source code) are available at [https://github.com/randomNinja64/SimpleLLMChat](https://github.com/randomNinja64/SimpleLLMChat)
[A screenshot of the SimpleLLMChat UI](https://preview.redd.it/systbzd4614g1.png?width=1033&format=png&auto=webp&s=ef57f079a56caa10b75358095f824b5621fa1b78)
If anyone has some suggestions for making HTTPS work properly under XP/.NET 4/C#, please let me know. | 2025-11-28T17:12:30 | https://www.reddit.com/r/LocalLLaMA/comments/1p90a31/made_a_little_desktop_tool/ | randomNinja64 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p90a31 | false | null | t3_1p90a31 | /r/LocalLLaMA/comments/1p90a31/made_a_little_desktop_tool/ | false | false | 9 | null | |
Best Models for 16GB VRAM | 35 | Swiped up an RX 9070 from newegg since it's below MSRP today. Primarily interested in gaming, hence the 9070 over the 5070 at a similar price. However, Id like to sip my toes further into AI, and since Im doubling my vram from igb to 16gb, Im curious
**What are the best productivity, coding, ans storywriting AI models I can run reasonably with 16GB VRAM?
Last similar post I found with google was about 10mo old, and I figured things may have changed since then? | 2025-11-28T16:39:51 | https://www.reddit.com/r/LocalLLaMA/comments/1p8zfdr/best_models_for_16gb_vram/ | LinuxIsFree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8zfdr | false | null | t3_1p8zfdr | /r/LocalLLaMA/comments/1p8zfdr/best_models_for_16gb_vram/ | false | false | self | 35 | null |
Is the future of AI a shift from paid cloud subscriptions to powerful on-device AI & automations running locally? Do you think this will replace cloud AI for most users? | 1 | [removed] | 2025-11-28T16:31:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p8z7l0/is_the_future_of_ai_a_shift_from_paid_cloud/ | Pitiful-Mistake-4108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8z7l0 | false | null | t3_1p8z7l0 | /r/LocalLLaMA/comments/1p8z7l0/is_the_future_of_ai_a_shift_from_paid_cloud/ | false | false | self | 1 | null |
Outdated APPS... | 1 | PocketPal and Chatterui outdated 🥲 any recommendations for Android | 2025-11-28T16:20:47 | https://www.reddit.com/r/LocalLLaMA/comments/1p8yxt7/outdated_apps/ | Illustrious-Swim9663 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8yxt7 | false | null | t3_1p8yxt7 | /r/LocalLLaMA/comments/1p8yxt7/outdated_apps/ | false | false | self | 1 | null |
Should Generative AI Be Free Forever? | 0 | I’m running a community poll to understand what PC users *really* want from AI.
[https://forms.gle/Av9bmrnSR29RvSTK8](https://forms.gle/Av9bmrnSR29RvSTK8)
Today most AI tools are:
* locked behind expensive subscriptions, token & credit limits
* dependent on the cloud
* limited by latency, heavy tasks & memory
* not truly private & personal
* too expensive for creative or daily automation workflows
But many of us want something different —
**A new class of AI hardware where billion parameter AI models runs locally, stays private, works offline, automates our apps, and costs nothing per query.**
This 30 sec form is to understand:
* Would users prefer *personal* AI with unlimited AI creations?
* Are cloud AI costs limiting creativity?
* Do people want full-workflow automation on their own hardware without much technical hassle of running AI models?
* How much are you ready to pay for a new class of AI hardware?
* And most importantly — **should AI be free forever?**
If you believe AI should be private, personalized, fast, and not locked behind paywall, please fill out the form and share it with others as we all users looking for new alternative.
Let’s see how many people want a new direction for personal computing.
At the end I will publish the poll result here.... | 2025-11-28T15:35:07 | https://www.reddit.com/r/LocalLLaMA/comments/1p8xsiv/should_generative_ai_be_free_forever/ | Pitiful-Mistake-4108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8xsiv | false | null | t3_1p8xsiv | /r/LocalLLaMA/comments/1p8xsiv/should_generative_ai_be_free_forever/ | false | false | self | 0 | null |
Weekly AI News: First autonomous cyberattack, Meta 1600-language ASR, MIT workforce study, and more | 11 | Roundup of this week's notable developments:
Anthropic Cyberattack Disclosure
- Chinese state actors used Claude Code for reconnaissance/scripting
- AI executed 80-90% of attack lifecycle
- 30 organizations targeted
- Source: Anthropic blog
Meta Omnilingual ASR
- 1,600 languages, 500 with no prior AI coverage
- 7B parameters, Apache 2.0
- Source: Meta AI blog
MIT Iceberg Index
- 11.7% of US workforce replaceable by current AI
- $1.2T in wages
- Highest exposure: HR, logistics, finance
- Source: MIT working paper
Genesis Mission EO
- Signed Nov 24
- Unifies 17 DOE national labs
- Source: White House
OpenAI Lawsuit
- 8 families, 4 suicides
- Youngest: 14 years old
- Source: Court filings via TechCrunch
I made a video summary if anyone prefers that format: https://youtu.be/qKxFYhcQppc | 2025-11-28T15:13:05 | https://www.reddit.com/r/LocalLLaMA/comments/1p8x9cv/weekly_ai_news_first_autonomous_cyberattack_meta/ | Proof-Possibility-54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8x9cv | false | null | t3_1p8x9cv | /r/LocalLLaMA/comments/1p8x9cv/weekly_ai_news_first_autonomous_cyberattack_meta/ | false | false | self | 11 | null |
Macbook air M1 vs HP OMEN 16-ap0014ns Ryzen 7 350 + 5060 | 1 | So i have a macbook air m1 with 8gb and 256, and i can trade it up for 300€ so i can buy the hp for like 790€.
The HP has 1TB SDD, 32 GB ram DDR5 Ryzen AI 7 350 50 TOPS and the RTX 5060.
Is the upgrade really worth it to work with local llm, and software development?
Im happy almost all the time with the m1, but in some cases with heavy work, it lacks power.
This is the hp link.
[https://www.mediamarkt.es/es/product/\_portatil-gaming-hp-omen-16-ap0014ns-copilot-pc-16-2k-amd-ryzentm-ai-7-350-32gb-ram-1tb-ssd-geforce-rtxtm-5060-sin-sistema-operativo-negro-1598137.html](https://www.mediamarkt.es/es/product/_portatil-gaming-hp-omen-16-ap0014ns-copilot-pc-16-2k-amd-ryzentm-ai-7-350-32gb-ram-1tb-ssd-geforce-rtxtm-5060-sin-sistema-operativo-negro-1598137.html) | 2025-11-28T15:11:34 | https://www.reddit.com/r/LocalLLaMA/comments/1p8x7z4/macbook_air_m1_vs_hp_omen_16ap0014ns_ryzen_7_350/ | fanciboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8x7z4 | false | null | t3_1p8x7z4 | /r/LocalLLaMA/comments/1p8x7z4/macbook_air_m1_vs_hp_omen_16ap0014ns_ryzen_7_350/ | false | false | self | 1 | null |
The Medium Is the Mind: Applying McLuhan’s Tetrad to LLMs | 0 | This essay examines Marshall McLuhan’s tetrad of media effects as a framework for understanding how communication technology shapes human perception. It explores how each advance reorganizes sensory priorities, social structures, and thought patterns while retrieving elements from past forms of communication, and what the medium reverses into when pushed to its limit. It then applies this framework to emerging LLM technology.
[*https://neofeudalreview.substack.com/p/the-medium-is-the-mind-applying-mcluhans*](https://neofeudalreview.substack.com/p/the-medium-is-the-mind-applying-mcluhans) | 2025-11-28T15:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/1p8x40t/the_medium_is_the_mind_applying_mcluhans_tetrad/ | Due_Assumption_27 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8x40t | false | null | t3_1p8x40t | /r/LocalLLaMA/comments/1p8x40t/the_medium_is_the_mind_applying_mcluhans_tetrad/ | false | false | self | 0 | null |
z.ai running at cost? if anyone is interested | 0 | Honestly, I have no idea how Z.ai is running GLM 4.6 at these prices. It genuinely doesn't make sense. Maybe they're running it at cost, or maybe they just need the user numbers—whatever the reason, it's an absurd bargain right now.
Here are the numbers (after the 10% stackable referral you get):
- $2.70 for the first month
- $22.68 for the entire year
- The Max plan (60x Claude Pro limits) is only $226 a year
The stacked discount includes:
- 50 percent standard discount
- 20-30 percent additional depending on plan
- 10 percent extra with my referral as a learner( this is always)
https://z.ai/subscribe?ic=OUCO7ISEDB
Sorry I am a bit naive so please go easy on me if the message doesn't look right. | 2025-11-28T14:51:26 | https://www.reddit.com/r/LocalLLaMA/comments/1p8wq4u/zai_running_at_cost_if_anyone_is_interested/ | Minute-Act-4943 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8wq4u | false | null | t3_1p8wq4u | /r/LocalLLaMA/comments/1p8wq4u/zai_running_at_cost_if_anyone_is_interested/ | false | false | self | 0 | null |
How I finally managed to cancel all my AI subscriptions | 1 | [removed] | 2025-11-28T14:36:40 | https://www.reddit.com/r/LocalLLaMA/comments/1p8wdmu/how_i_finally_managed_to_cancel_all_my_ai/ | kevin_1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1p8wdmu | false | null | t3_1p8wdmu | /r/LocalLLaMA/comments/1p8wdmu/how_i_finally_managed_to_cancel_all_my_ai/ | false | false | self | 1 | null |
Ask me to run models | 304 | Hi guys, I am currently in the process of upgrading my 4×3090 setup to 2×5090 + 1×RTX Pro 6000. As a result, I have all three kinds of cards in the rig temporarily, and I thought it would be a good idea to take some requests for models to run on my machine.
Here is my current setup:
- 1× RTX Pro 6000 Blackwell, power limited to 525 W
- 2× RTX 5090, power limited to 500 W
- 2× RTX 3090, power limited to 280 W
- WRX80E (PCIe 4.0 x16) with 3975WX
- 512 GB DDR4 RAM
If you have any model that you want me to run with a specific setup (certain cards, parallelism methods, etc.), let me know in the comments. I’ll run them this weekend and reply with the tok/s! | 2025-11-28T14:35:08 | https://www.reddit.com/gallery/1p8wcbn | monoidconcat | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1p8wcbn | false | null | t3_1p8wcbn | /r/LocalLLaMA/comments/1p8wcbn/ask_me_to_run_models/ | false | false | 304 | null | |
unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF · Hugging Face | 142 | | 2025-11-28T14:31:50 | https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF | WhaleFactory | huggingface.co | 142 |
Ask me to run models | 0 | Hi guys, I am currently in the process of upgrading my 4x 3090 setup to 2x 5090 + 1x RTX Pro 6000. As a result, I temporarily have all three kinds of cards, and I thought it would be a good idea to take some requests for models to run on my machine.
Here is my current setup:
- 1x RTX Pro 6000 Blackwell, power limited to 525W
- 2x RTX 5090, power limited to 500W
- 2x RTX 3090, power limited to 280W
- WRX80E (PCIe 4.0 x16) with a Threadripper Pro 3975WX
- 512GB DDR4 RAM
If you have any model that you want me to run with a specific setup (certain cards, certain parallelism methods, etc.), let me know in the comments - I will run them this weekend and reply with the tok/sec! | 2025-11-28T14:28:52 | https://www.reddit.com/gallery/1p8w70q | monoidconcat
Learning llm from books | 0 | I'd like to upload a few books to some LLM and have it draw common conclusions from them. The problem is that ChatGPT's highest paid plan allows for only 32,000 tokens of context, which is only about 100 book pages, about 10 times less than I need. ChatGPT offers so many options that I don't know which one to choose. Has anyone experienced something like this? | 2025-11-28T14:16:12 | Upstairs-Sleep-3599
Is there any comparison or benchmark of react-native-executorch and onnxruntime react native | 2 | Hi guys,
Currently I want to choose an offline LLM runtime for my React Native mobile app. I stumbled upon these two libs: react-native-executorch and onnxruntime-react-native. I wonder which one is better for running AI fully offline on-device, and which can output more tokens per second? | 2025-11-28T14:06:17 | Educational-Nose3354
Compared actual usage costs for Chinese AI models. Token efficiency changes everything. | 81 | Everyone talks about per-token pricing but nobody mentions token efficiency. How many tokens does it take to complete the same task?
Tested this with coding tasks because that's where I actually use these models.
- glm-4.6: $0.15 input / $0.60 output
- Kimi K2: $1.50-2.00
- MiniMax: $0.80-1.20
- deepseek: $0.28
deepseek looks cheapest on paper. But that's not the whole story.
Token efficiency (same task):
Gave each model identical coding task: "refactor this component to use hooks, add error handling, write tests"
- glm: 8,200 tokens average
- deepseek: 14,800 tokens average
- MiniMax: 10,500 tokens average
- Kimi: 11,000 tokens average
glm uses 26% fewer tokens than Kimi, 45% fewer than deepseek.
Real cost for that task:
- glm: ~$0.04 (4 cents)
- deepseek: ~$0.03 (3 cents) - looks cheaper
- MiniMax: ~$0.05 (5 cents)
- Kimi: ~$0.09 (9 cents)
But wait. If you do 100 similar tasks:
- glm: total tokens needed ~820K, cost $0.40-0.50
- deepseek: total tokens needed ~1.48M, cost $0.41 - basically same as glm despite lower per-token price
- MiniMax: total tokens needed ~1.05M, cost $0.50-0.60
- Kimi: total tokens needed ~1.1M, cost $0.90-1.00
Token efficiency beats per-token price. glm generates less verbose code, fewer explanatory comments, tighter solutions. deepseek tends to over-explain and generate longer outputs.
For businesses doing thousands of API calls daily, glm's efficiency compounds into real savings even though it's not the absolute cheapest per-token.
Switched to glm for production workloads. Monthly costs dropped 60% vs previous setup. Performance is adequate for 90% of tasks.
deepseek's pricing looks great until you realize you're using nearly 80% more tokens per task. The savings disappear.
Anyone else measuring token efficiency? Feel like this is the underrated metric everyone ignores.
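If you want to sanity-check the break-even math yourself, here's a quick Python sketch. The 8,200 / 14,800 tokens-per-task averages and deepseek's $0.28/M are from the numbers above; the $0.50/M blended rate for glm is my assumption, sitting between its listed input and output prices:

```python
def cost_usd(tokens: int, price_per_m_tokens: float) -> float:
    """Dollar cost of processing `tokens` at a $/million-token rate."""
    return tokens * price_per_m_tokens / 1_000_000

# (tokens per task, $/M-token rate); glm's blended rate is an assumption
models = {
    "glm":      (8_200, 0.50),
    "deepseek": (14_800, 0.28),
}

for name, (tokens_per_task, rate) in models.items():
    total = cost_usd(tokens_per_task * 100, rate)  # 100 tasks
    print(f"{name}: ${total:.2f} per 100 tasks")
```

Run it and both models land around $0.41 per 100 tasks, which is exactly the "cheaper per token, same total" effect described above.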
| 2025-11-28T13:54:37 | YormeSachi
Some tools I discovered to Simulate and Observe AI Agents at scale | 1 | People usually rely on a mix of simulation, evaluation, and observability tools to see how an agent performs under load, under bad inputs, or during long multi step tasks. Here is a balanced view of some tools that are commonly used today. I've handpicked some of these tools from across reddit.
**1. Maxim AI**
Maxim provides a combined setup for simulation, evaluations, and observability. Teams can run thousands of scenarios, generate synthetic datasets, and use predefined or custom evaluators. The tracing view shows multi step workflows, tool calls, and context usage in a simple timeline, which helps with debugging. It also supports online evaluations of live traffic and real time alerts.
**2. OpenAI Evals**
Makes it easy to write custom tests for model behaviour. It is open source and flexible, and teams can add their own metrics or adapt templates from the community.
**3. LangSmith**
Designed for LangChain based agents. It shows detailed traces for tool calls and intermediate steps. Teams also use its dataset replay to compare different versions of an agent.
**4. CrewAI**
Focused on multi agent systems. It helps test collaboration, conflict handling, and role based interactions. Logging inside CrewAI makes it easier to analyse group behaviour.
**5. Vertex AI**
A solid option on Google Cloud for building, testing, and monitoring agents. Works well for teams that need managed infrastructure and large scale production deployments.
*Quick comparison table*
|Tool|Simulation|Evaluations|Observability|Multi Agent Support|Notes|
|:-|:-|:-|:-|:-|:-|
|**Maxim AI**|Yes, large scale scenario runs|Prebuilt plus custom evaluators|Full traces, online evals, alerts|Works with CrewAI and others|Strong all in one option|
|**OpenAI Evals**|Basic via custom scripts|Yes, highly customizable|Limited|Not focused on multi agent|Best for custom evaluation code|
|**LangSmith**|Limited|Yes|Strong traces|Works with LangChain agents|Good for chain debugging|
|**CrewAI**|Yes, for multi agent workflows|Basic|Built in logging|Native multi agent|Great for teamwork testing|
|**Vertex AI**|Yes|Yes|Production monitoring|External frameworks needed|Good for GCP heavy teams|
If the goal is to reduce surprise behaviour and improve agent reliability, combining at least two of these tools gives much better visibility than relying on model outputs alone. | 2025-11-28T13:50:46 | Otherwise_Flan7339
Best small local LLM for "Ask AI" in docusaurus docs? | 2 | Hello, I have collected a bunch of documentation: lessons learned, the components I deploy, and the headaches I hit with specific use cases.
I deploy it in docusaurus. Now I would like to add an "Ask AI" feature, which requires connecting to a chatbot. I know I can integrate with things like crawlchat but was wondering if anybody knows of a better lightweight solution.
Also, which LLM would you recommend for something like this? Ideally something that runs comfortably on CPU. It can be reasonably slow, but not 1 token/min slow. | 2025-11-28T13:49:49 | redhayd
unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF · Hugging Face | 462 | 2025-11-28T13:48:28 | https://huggingface.co/unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF | WhaleFactory | huggingface.co
Bifrost vs LiteLLM: Side-by-Side Benchmarks (50x Faster LLM Gateway) | 14 | Hey everyone; I recently shared a post here about Bifrost, a high-performance LLM gateway we’ve been building in Go. A lot of folks in the comments asked for a clearer side-by-side comparison with LiteLLM, including performance benchmarks and migration examples. So here’s a follow-up that lays out the numbers, features, and how to switch over in one line of code.
**Benchmarks (vs LiteLLM)**
Setup:
* single t3.medium instance
* mock LLM with 1.5 s latency
|Metric|LiteLLM|Bifrost|Improvement|
|:-|:-|:-|:-|
|**p99 Latency**|90.72s|1.68s|\~54× faster|
|**Throughput**|44.84 req/sec|424 req/sec|\~9.4× higher|
|**Memory Usage**|372MB|120MB|\~3× lighter|
|**Mean Overhead**|\~500µs|**11µs @ 5K RPS**|\~45× lower|
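One way to sanity-check numbers like these is Little's law (in-flight requests = throughput × mean latency). Assuming the mean end-to-end latency stays close to the 1.5 s mock-backend latency (the gateway only adds ~11µs), this sketch estimates the concurrency Bifrost sustained:

```python
# Little's law: L = lambda * W
# (in-flight requests = throughput in req/s * mean latency in seconds)
def inflight(throughput_rps: float, mean_latency_s: float) -> float:
    return throughput_rps * mean_latency_s

# Bifrost's 424 req/s against the 1.5 s mock backend
print(inflight(424, 1.5))  # -> 636.0 concurrent requests
```

So the 424 req/s figure implies roughly 636 requests in flight at once on a single t3.medium.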
**Repo:** [https://github.com/maximhq/bifrost](https://github.com/maximhq/bifrost)
# Key Highlights
* **Ultra-low overhead:** mean request handling overhead is just **11µs per request** at 5K RPS.
* **Provider Fallback:** Automatic failover between providers ensures 99.99% uptime for your applications.
* **Semantic caching:** deduplicates similar requests to reduce repeated inference costs.
* **Adaptive load balancing:** Automatically optimizes traffic distribution across provider keys and models based on real-time performance metrics.
* **Cluster mode resilience:** High availability deployment with automatic failover and load balancing. Peer-to-peer clustering where every instance is equal.
* **Drop-in OpenAI-compatible API:** Replace your existing SDK with just one line change. Compatible with OpenAI, Anthropic, LiteLLM, Google Genai, Langchain and more.
* **Observability:** Out-of-the-box OpenTelemetry support for observability. Built-in dashboard for quick glances without any complex setup.
* **Model-Catalog:** Access 15+ providers and 1,000+ AI models through a unified interface. Also supports custom deployed models!
* **Governance:** SAML support for SSO, plus role-based access control and policy enforcement for team collaboration.
# Migrating from LiteLLM → Bifrost
You don’t need to rewrite your code; just point your LiteLLM SDK to Bifrost’s endpoint.
**Old (LiteLLM):**

    from litellm import completion

    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello GPT!"}]
    )
**New (Bifrost):**

    from litellm import completion

    response = completion(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello GPT!"}],
        base_url="http://localhost:8080/litellm"
    )
You can also use custom headers for governance and tracking (see docs!)
The switch is one line; everything else stays the same.
Bifrost is built for teams that treat LLM infra as production software: **predictable, observable, and fast**.
If you’ve found LiteLLM fragile or slow at higher load, this might be worth testing. | 2025-11-28T13:35:27 | dinkinflika0
Fatty AI – Singapore's 42M Local LLM | 1 | [removed] | 2025-11-28T13:25:23 | Impossible_Host_5401 | i.redd.it
How does any of this work? | 0 | i1-IQ3_S has better quality than i1-IQ3_M
What does this even mean? And why would anyone use the non-i1 versions? | 2025-11-28T13:21:21 | johannes_bertens | i.redd.it
New to Local LLMs. What models can I run with my setup? | 2 | Hi, sorry, I know this question has been asked thousands of times by now, but I'm new to local LLMs and don't know a lot about them. I'm trying to use paid services less and move towards self-hosting. I don't have the best setup compared to some on here, and I know the limitations, but what models do you think I should run? My usage will be coding and everyday chat.
Here are my specs:
\- Machine: Minisforum X1 Pro, AMD Ryzen AI 9 HX 370, T500 4TB, 128GB 5600mhz.
\- GPU: AMD Radeon 890M
\- OS: Linux
Running Ollama and Webui through Docker | 2025-11-28T13:20:53 | House-Wins
Using local Llama-3 to analyze Volatility 3 memory dumps. Automating malware discovery in RAM without cloud APIs | 3 | 2025-11-28T12:53:29 | https://v.redd.it/lszb4gqtvz3g1 | Glass-Ant-6041 | v.redd.it
I had to review my local model setup after a silent FaceSeek observation | 34 | I was experimenting with a small idea when I noticed a detail in FaceSeek that made me reconsider my approach to local models. I came to the realisation that I never settle on a consistent workflow because I constantly switch between model sizes. Larger ones feel heavy for daily tasks, while smaller ones run quickly but lack depth. When deciding which models to run locally, I'm interested in how others here strike a balance between usefulness and performance.
Do you use a single, well-tuned setup or maintain separate environments? My goal is to improve my workflow so that the model feels dependable and doesn't require frequent adjustments. Insights into small, useful habits would help me build a cleaner routine. | 2025-11-28T12:35:43 | CoachExtreme5255
CXL Might Be the Future of Large-Model AI | 0 |
This looks like a competitor to unified SoC memory.
There’s a good write-up on the new Gigabyte CXL memory expansion card and what it means for AI workloads that are hitting memory limits:
https://www.club386.com/gigabyte-expands-intel-xeon-and-amd-threadripper-memory-capacity-with-cxl-add-on-card/
TL;DR
Specs of the Gigabyte card:
– PCIe 5.0 x16
– CXL 2.0 compliant
– Four DDR5 RDIMM slots
– Up to 512 GB extra memory per card
– Supported on TRX50 and W790 workstation boards
– Shows up as a second-tier memory region in the OS
This is exactly the kind of thing large-model inference and long-context LLMs need. Modern models aren’t compute-bound anymore—they’re memory-bound (KV cache, activations, context windows). Unified memory on consumer chips is clean and fast, but it’s fixed at solder-time and tops out at 128 GB.
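To make "memory-bound" concrete, here's a back-of-envelope KV-cache sizing sketch. The model shape (80 layers, 8 KV heads via GQA, head dim 128, fp16 cache) is an assumed 70B-class configuration, not something from the article:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2, batch: int = 1) -> int:
    # Keys and values each store n_layers * n_kv_heads * head_dim
    # elements per token, hence the factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes * batch

# Assumed 70B-class shape: 80 layers, 8 KV heads (GQA), head_dim 128, fp16
cache = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128, seq_len=131_072)
print(f"{cache / 2**30:.0f} GiB")  # -> 40 GiB for one 128K-token sequence
```

One 128K-token session already eats 40 GiB on top of the weights, so a handful of concurrent long-context users blows past any soldered unified-memory budget. That overflow is exactly what tiered CXL capacity is meant to absorb.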
CXL is the opposite:
– You can bolt on hundreds of GB of extra RAM
– Tiered memory lets you put DRAM for hot data and CXL for warm data
– KV cache spillover stops killing performance
– Future CXL 3.x fabrics allow memory pooling across devices
For certain AI use cases—big RAG pipelines, long-context inference, multi-agent workloads—CXL might be the only practical way forward without resorting to multi-GPU HBM clusters.
Curious if anyone here is planning to build a workstation around one of these, or if you think CXL will actually make it into mainstream AI rigs.
I will run some benchmarks on Azure and post them here.
Price estimate: 2-3k USD. | 2025-11-28T12:16:39 | Dontdoitagain69