| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How to make my GPT-OSS-120B like their GPT-OSS-120B | 1 | When I use GPT-OSS-120B via Together with MCP, it makes tool calls and formats the MCP responses nicely. It's a good experience. When I use my GPT-OSS-120B straight out of llama.cpp, it just tells me what MCP call it would make; I have to tell it to invoke the call, and the output comes back raw.
What is the popular choice for middleware to handle this? (A minimal sketch of such a loop follows below.)
| 2025-10-14T16:19:40 | https://www.reddit.com/r/LocalLLaMA/comments/1o6k0v2/how_to_make_my_gptoss120b_like_their_gptoss120b/ | MidnightProgrammer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6k0v2 | false | null | t3_1o6k0v2 | /r/LocalLLaMA/comments/1o6k0v2/how_to_make_my_gptoss120b_like_their_gptoss120b/ | false | false | self | 1 | null |
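Regarding the middleware question above: the hosted experience comes from a client-side agent loop that executes the model's tool calls and feeds the results back, which llama.cpp's server does not do for you. A minimal sketch in Python, assuming a llama-server with an OpenAI-compatible API at localhost:8080; the tool schema and `call_mcp_tool` dispatcher are hypothetical stand-ins for your MCP client, and the model name is a placeholder.

```python
# Minimal agent loop: run the model, execute any tool calls, feed results back,
# and repeat until the model produces a plain formatted answer.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

# Hypothetical stand-in: route the call to your MCP client and return its output.
def call_mcp_tool(name: str, arguments: dict) -> str:
    return json.dumps({"tool": name, "echo": arguments})

# Example schema; in practice, convert your MCP tool definitions to this format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Berlin?"}]
while True:
    resp = client.chat.completions.create(
        model="gpt-oss-120b", messages=messages, tools=tools)
    msg = resp.choices[0].message
    if not msg.tool_calls:
        print(msg.content)  # final, nicely formatted answer
        break
    messages.append(msg)  # keep the assistant turn that requested the tools
    for tc in msg.tool_calls:
        result = call_mcp_tool(tc.function.name, json.loads(tc.function.arguments))
        messages.append({"role": "tool", "tool_call_id": tc.id, "content": result})
```

Most agent frontends (Open WebUI and similar) implement essentially this loop; llama-server may also need `--jinja` so the chat template emits structured tool calls instead of plain text.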
Best uncensored Qwen 3 based LLM? 8B or less? | 8 | Thx. | 2025-10-14T16:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/1o6jn0u/best_uncensored_qwen_3_based_llm_8b_or_less/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6jn0u | false | null | t3_1o6jn0u | /r/LocalLLaMA/comments/1o6jn0u/best_uncensored_qwen_3_based_llm_8b_or_less/ | false | false | self | 8 | null |
DGX Spark vs AI Max 395+ | 55 | Does anyone have a fair comparison between these two tiny AI PCs?
| 2025-10-14T15:42:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o6izz2/dgx_spark_vs_ai_max_395/ | Responsible-Let9423 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6izz2 | false | null | t3_1o6izz2 | /r/LocalLLaMA/comments/1o6izz2/dgx_spark_vs_ai_max_395/ | false | false | self | 55 | null |
Mi50 replacement over P40 | 7 | I currently have a P40 in my server. Would it be worth swapping the P40 for an MI50, or maybe two MI50s? | 2025-10-14T15:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/1o6iyhu/mi50_replacement_over_p40/ | gamma647 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6iyhu | false | null | t3_1o6iyhu | /r/LocalLLaMA/comments/1o6iyhu/mi50_replacement_over_p40/ | false | false | self | 7 | null |
Performance of llama.cpp on NVIDIA DGX Spark · ggml-org/llama.cpp · Discussion #16578 | 55 | 2025-10-14T15:38:49 | https://github.com/ggml-org/llama.cpp/discussions/16578 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o6iwrd | false | null | t3_1o6iwrd | /r/LocalLLaMA/comments/1o6iwrd/performance_of_llamacpp_on_nvidia_dgx_spark/ | false | false | default | 55 | {'enabled': False, 'images': [{'id': 'jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=108&crop=smart&auto=webp&s=74da3b2624b61549dd6fa9a3a74434a7331ca9f6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=216&crop=smart&auto=webp&s=98611872d1be703afb037a9f8ce74e8ff851fce9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=320&crop=smart&auto=webp&s=d72306913a493332e4d66610a703f2c373d7b1cf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=640&crop=smart&auto=webp&s=2fd9276c6ee8e677f95f5cd9bcd71ed7dde0ad2a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=960&crop=smart&auto=webp&s=403cf04ef77316382333d3e6db114c0a7cb66e06', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?width=1080&crop=smart&auto=webp&s=c9045b9ec1aaf98302b4ce677a066c194fde4a79', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jHQdSuZPiOdCRrmqghCS01mFwWiPh61nOi8HvEEUkiw.png?auto=webp&s=bd5eef45e8c05fbc16168dfc09f49125317097ff', 'width': 1200}, 'variants': {}}]} | |
Setting up Continue.dev with local JanAI server - config.yaml issues | 0 | Hello r/LocalLLaMA,
I'm working on setting up a fully local agentic AI coding assistant and could use a hand with a configuration issue. I've gotten the individual components working but am struggling to connect them.
**My Goal:**
To configure [`continue.dev`](http://continue.dev) to use the `DeepSeek-Coder-V2-Lite-Instruct` model served locally by JanAI.
**What's Working:**
1. **JanAI:** The application is installed and running.
2. **Model:** I have downloaded and successfully chatted with the `DeepSeek-Coder-V2-Lite-Instruct-IQ4_XS` model through the JanAI interface.
3. **JanAI Server:** The OpenAI-compatible API server is running at `http://localhost:1337`. I can see the network activity in the JanAI logs when I chat.
**The Problem:**
My [`continue.dev`](http://continue.dev) `config.yaml` file is not connecting to the JanAI server. I've tried pointing it to the local endpoint, but it fails. I'm not sure if my `apiBase` or `provider` field is incorrect.
Here is the `config.yaml` I'm using:
```yaml
name: My Continue Configuration
version: 1.0.0
schema: https://continue.dev/schemas/config.json
models:
  - name: DeepSeek-Coder-V2-Lite-Instruct
    provider: ollama
    model: DeepSeek-Coder-V2-Lite-Instruct-IQ4_XS
    apiBase: http://localhost:1337/v1/chat/completions
    apiKey: Bearer janai
    roles:
      - chat
      - edit
      - apply
      - summarize
```
**My Troubleshooting Steps:**
* I confirmed the JanAI server is running and accessible.
* I tried using the Swagger UI provided by JanAI, and it responds correctly.
* I suspect the issue is with `provider: ollama`, since JanAI is not an Ollama server, but rather provides an OpenAI-compatible endpoint.
Does anyone have a working `config.yaml` example for connecting [`continue.dev`](http://continue.dev) to a local, OpenAI-compatible server like JanAI? Any tips on what the correct `provider` or `apiBase` format should be?
Thanks for your help! | 2025-10-14T15:35:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o6itrf/setting_up_continuedev_with_local_janai_server/ | swapnil0545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6itrf | false | null | t3_1o6itrf | /r/LocalLLaMA/comments/1o6itrf/setting_up_continuedev_with_local_janai_server/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=108&crop=smart&auto=webp&s=efe307f51ff2874b18960bc89ca5a18a1b551442', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=216&crop=smart&auto=webp&s=3f5d82a3bc41c4fa63c2939d1e2fdc1db75de463', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=320&crop=smart&auto=webp&s=c204a4e04e7cbc078774e051a9e247b58ad6b572', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=640&crop=smart&auto=webp&s=5b6c9e3fb05aa6cf2a05f0e920367ffac32c6448', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=960&crop=smart&auto=webp&s=bd57ab7ea83274fea8ece5793f2200a0ac6a7f02', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?width=1080&crop=smart&auto=webp&s=5cdafbd3026c11883a519aa200677fb58be16d11', 'width': 1080}], 'source': {'height': 1260, 'url': 'https://external-preview.redd.it/7FTmwKM4TuCvaIMSlask76mRn8liFawdPuHJFSqPl9U.png?auto=webp&s=30396441627641135814de7d733ce94b9e7795dc', 'width': 2400}, 'variants': {}}]} |
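For the post above, two things stand out: `apiBase` normally points at the API root rather than the full `/chat/completions` path (clients append that themselves), and `provider: ollama` speaks Ollama's native protocol, not the OpenAI-compatible one JanAI exposes. A quick sanity check with the standard OpenAI client, using only values taken from the post, would confirm what the server expects — a minimal sketch, not a verified continue.dev recipe:

```python
# Verify JanAI's OpenAI-compatible endpoint independently of continue.dev.
# base_url is the API root; the client appends /chat/completions on its own.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1337/v1", api_key="janai")
resp = client.chat.completions.create(
    model="DeepSeek-Coder-V2-Lite-Instruct-IQ4_XS",
    messages=[{"role": "user", "content": "Reply with OK if you can hear me."}],
)
print(resp.choices[0].message.content)
```

If that works, mirroring the same values in `config.yaml` — an OpenAI-compatible provider, `apiBase: http://localhost:1337/v1`, and `apiKey: janai` without the `Bearer` prefix (clients add that header themselves) — is the likely fix.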
GLM-4.6 | Gut feel after sparring with Sonnet for half a day: more of a “steady player” | 36 | Cutting to the chase: it feels steadier, especially for small code-review fixes, short-chain reasoning, and toning down overhyped copy. Officially, they say that across eight public benchmarks (AIME25, LCB v6, HLE, SWE-Bench Verified, BrowseComp, Terminal-Bench, τ²-Bench, GPQA) it’s overall aligned with Sonnet 4, with parts of its coding performance approaching Sonnet 4.5, and there’s a “48.6% ties” line. I don’t obsess over perfect number matching; what matters is that I can reproduce results and it saves me hassle.
I used it for three things. First, code review. I told it “only fix unsafe code and keep function signatures,” and it gave a diff-like display, then pasted the full function; very low reading overhead. Second, terminal task planning. I didn’t let it actually run commands; I just wanted a small blueprint of “plan → expected output → fallback path.” It gave a clean structure that I could execute manually. Third, neutralizing overly promotional copy: its touch is just right, and it keeps the numbers and sources.
I put GLM-4.6 into four everyday buckets: small code fixes, short-chain reasoning, tool awareness (planning only, no network), and rewriting. Settings per the official guidance: temperature = 1.0; for code, top_p = 0.95 and top_k = 40; the 200K context makes reproducibility easier (a quick settings sketch follows below). For routine code/writing/short-chain reasoning, you can use it as-is; for heavy retrieval and strong evidence chains, plug in your own tools first and swap it in afterward.
Reference: [https://huggingface.co/zai-org/GLM-4.6](https://huggingface.co/zai-org/GLM-4.6) | 2025-10-14T15:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1o6ik01/glm46_gut_feel_after_sparring_with_sonnet_for/ | xieyutong | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6ik01 | false | null | t3_1o6ik01 | /r/LocalLLaMA/comments/1o6ik01/glm46_gut_feel_after_sparring_with_sonnet_for/ | false | false | self | 36 | {'enabled': False, 'images': [{'id': 'PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=108&crop=smart&auto=webp&s=fb4e232136167d5b8eaa2b4dfd652e37ea64deed', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=216&crop=smart&auto=webp&s=8fb46c8ecf0ce7513c3323ed920f6eaac1e63310', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=320&crop=smart&auto=webp&s=a863b4d3f9e4c4da35a962422c3822ffcefd263d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=640&crop=smart&auto=webp&s=3b34f4363d1d490762c5a458490a60b87ed1e125', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=960&crop=smart&auto=webp&s=36afc8f722848a9286178d13f6285bb7ac496512', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?width=1080&crop=smart&auto=webp&s=b148129613a3054df3dae0a56225962e4cb77da0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PGKpaG-61JC7z-y_F2XkhwKzdcpyb99tvV79_JhB320.png?auto=webp&s=f79c1b1d2d8039e1c90cbba29214e902ae297440', 'width': 1200}, 'variants': {}}]} |
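For anyone wanting to reproduce the settings quoted above, here is a minimal sketch against an OpenAI-compatible endpoint serving GLM-4.6. The endpoint URL is a placeholder, and `top_k` is not part of the standard OpenAI schema, so it is passed via `extra_body`, which vLLM-style servers generally accept:

```python
# GLM-4.6 with the official guidance: temperature 1.0; for code, top_p 0.95 / top_k 40.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")  # placeholder endpoint
resp = client.chat.completions.create(
    model="zai-org/GLM-4.6",
    messages=[{"role": "user", "content": "Only fix unsafe code and keep function signatures."}],
    temperature=1.0,
    top_p=0.95,
    extra_body={"top_k": 40},  # non-standard sampling knob; server support varies
)
print(resp.choices[0].message.content)
```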
New NVIDIA DGX Spark: 128GB Unified, 1 PFLOP ?? | 1 | 2025-10-14T15:05:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o6i0yc/new_nvidia_dgx_spark_128gb_unified_1_pflop/ | Paig99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6i0yc | false | null | t3_1o6i0yc | /r/LocalLLaMA/comments/1o6i0yc/new_nvidia_dgx_spark_128gb_unified_1_pflop/ | false | false | 1 | null | ||
[Open Source] We built a production-ready GenAI framework after deploying 50+ agents. Here's what we learned 🍕 | 17 | Hey r/LocalLLaMA ! 👋
After building and deploying 50+ GenAI solutions in production, we got tired of fighting with bloated frameworks, debugging black boxes, and dealing with vendor lock-in. So we built DataPizza AI - a Python framework that actually respects your time.
**The Problem We Solved**
Most LLM frameworks give you two bad options:
* Too much magic → You have no idea why your agent did what it did
* Too little structure → You're rebuilding the same patterns over and over
We wanted something that's predictable, debuggable, and production-ready from day one.
**What Makes It Different**
🔍 Built-in Observability: OpenTelemetry tracing out of the box. See exactly what your agents are doing, track token usage, and debug performance issues without adding extra libraries.
🤝 Multi-Agent Collaboration: Agents can call other specialized agents. Build a trip planner that coordinates weather experts and web researchers - it just works.
📚 Production-Grade RAG: From document ingestion to reranking, we handle the entire pipeline. No more duct-taping 5 different libraries together.
🔌 Vendor Agnostic: Start with OpenAI, switch to Claude, add Gemini - same code. We support OpenAI, Anthropic, Google, Mistral, and Azure.
**Why We're Sharing This**
We believe in less abstraction, more control. If you've ever been frustrated by frameworks that hide too much or provide too little, this might be for you.
**Links:**
* 🐙 GitHub: [https://github.com/datapizza-labs/datapizza-ai](https://github.com/datapizza-labs/datapizza-ai)
* 📖 Docs: [https://docs.datapizza.ai](https://docs.datapizza.ai)
* 🏠 Website: [https://datapizza.tech/en/ai-framework/](https://datapizza.tech/en/ai-framework/)
# We Need Your Help! 🙏
We're actively developing this and would love to hear:
* What features would make this useful for YOUR use case?
* What problems are you facing with current LLM frameworks?
* Any bugs or issues you encounter (we respond fast!)
**Star us on GitHub if you find this interesting,** it genuinely helps us understand if we're solving real problems.
Happy to answer any questions in the comments! 🍕 | 2025-10-14T14:48:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o6hjgw/open_source_we_built_a_productionready_genai/ | mario_candela | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6hjgw | false | null | t3_1o6hjgw | /r/LocalLLaMA/comments/1o6hjgw/open_source_we_built_a_productionready_genai/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=108&crop=smart&auto=webp&s=ee8c14e8943a578f00a31971215949b8e6cb7b30', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=216&crop=smart&auto=webp&s=53d5c55856fd42bd0ccfb7916db8ef0e41860a72', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=320&crop=smart&auto=webp&s=8a22e28777ac69eeaebcbe11cd9c13148800084f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=640&crop=smart&auto=webp&s=5f77c6b5a341a200c7176f92c00c179fe894dd5f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=960&crop=smart&auto=webp&s=a5d05706a465e59c93d7e31bfc6db89e5b4ab361', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?width=1080&crop=smart&auto=webp&s=f4a348339821ba310d5a2e22c6218e19d6ad1796', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Pu7Oq-yYhvyLyBb4Ki7v49_bk_kjs3xEDEcJ1Vn4Hz8.png?auto=webp&s=3a085da2e53d914ca8713683dbcded2b7657a445', 'width': 1200}, 'variants': {}}]} |
We tested Claude Sonnet 4.5, GPT-5-codex, Qwen3-Coder, GLM and other 25+ models on fresh SWE-Bench like tasks from September 2025 | 153 | Hi all, I’m Ibragim from Nebius.
We’ve updated the **SWE-rebench** leaderboard with September runs on **49 fresh GitHub PR bug-fix tasks** (last-month PR issues only). It’s a SWE-bench–style setup: models read real PR issues, run tests, edit code, and must make the suite pass.
Models: **Sonnet-4.5, GPT-5-Codex, Grok Code Fast 1, GLM, Qwen, Kimi** and others
* Claude Sonnet 4.5 achieved the highest *pass@5* (**55.1%**), and it uniquely solved several instances that **no other model** on the leaderboard managed to resolve: [**python-trio/trio-3334**](https://github.com/python-trio/trio/pull/3334), [**cubed-dev/cubed-799**](https://github.com/cubed-dev/cubed/pull/799), [**canopen-python/canopen-613**](https://github.com/canopen-python/canopen/pull/613).
* **Qwen3-Coder** is the **best open-source performer**
* All models on the leaderboard were evaluated using the ChatCompletions API, except for [**gpt-5-codex**](https://platform.openai.com/docs/models/gpt-5-codex) and [**gpt-oss-120b**](https://platform.openai.com/docs/models/gpt-oss-120b), which are only accessible via the Responses API.
Please check out the leaderboard and the insights, and comment if you want to request additional models. | 2025-10-14T14:36:10 | https://swe-rebench.com/ | Fabulous_Pollution10 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1o6h8jn | false | null | t3_1o6h8jn | /r/LocalLLaMA/comments/1o6h8jn/we_tested_claude_sonnet_45_gpt5codex_qwen3coder/ | false | false | default | 153 | null |
Lemonade is available in the Dify marketplace for quick integration into workflows | 8 | The Lemonade team has been working to natively integrate with a bunch of open-source projects in the local LLM ecosystem. Our goal is to make it as easy as possible to get started with AMD-optimized and cross-platform local LLMs!
Dify is a no-code workflow app that lets you visually build by connecting nodes for inputs, retrieval, agents, tools, and models. I've found that visual apps are an easy way to start prototyping complex workflows that could eventually become standalone apps. I'm also starting to develop some workflows to automate the repetitive parts of my job.
We have a tutorial here that shows how to stand up a "hello world" workflow that uses knowledge retrieval with an LLM: [Harnessing Dify and Local LLMs on Ryzen AI PCs for Private Workflows](https://www.amd.com/en/developer/resources/technical-articles/2025/harnessing-dify-and-local-llms-on-ryzen-ai-pcs-for-private-workf.html)
Anyone here on r/localllama using visual workflow builders with local LLMs? I'd love to hear what kinds of workflows you're running! | 2025-10-14T14:07:42 | jfowers_amd | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6gi01 | false | null | t3_1o6gi01 | /r/LocalLLaMA/comments/1o6gi01/lemonade_is_available_in_the_dify_marketplace_for/ | false | false | default | 8 | {'enabled': True, 'images': [{'id': 'un122l2x13vf1', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/un122l2x13vf1.png?width=108&crop=smart&auto=webp&s=6453c3115aa23d3f645cbf26c3959c9be79d3cde', 'width': 108}, {'height': 112, 'url': 'https://preview.redd.it/un122l2x13vf1.png?width=216&crop=smart&auto=webp&s=e034152988f191cd34a900e6d59089151cc716ce', 'width': 216}, {'height': 166, 'url': 'https://preview.redd.it/un122l2x13vf1.png?width=320&crop=smart&auto=webp&s=6005b2134f23997364b7ea0f5661cd2211d968a6', 'width': 320}, {'height': 333, 'url': 'https://preview.redd.it/un122l2x13vf1.png?width=640&crop=smart&auto=webp&s=509067a58e19c05c0aed6ef609e60960dadff745', 'width': 640}, {'height': 499, 'url': 'https://preview.redd.it/un122l2x13vf1.png?width=960&crop=smart&auto=webp&s=dd92e3f4f19d6637032a92e2c58ad1d1244042e7', 'width': 960}], 'source': {'height': 547, 'url': 'https://preview.redd.it/un122l2x13vf1.png?auto=webp&s=85735eced3db19ea88a467f6b1a7f9fd78f4f1e4', 'width': 1051}, 'variants': {}}]} | |
MIT SEAL (Self-Adapting LLMs) | 19 | I had MIT SEAL come up in my news feed and it seems interesting. Here's the [Venture Beat story](https://venturebeat.com/ai/self-improving-language-models-are-becoming-reality-with-mits-updated-seal) on it and the [SEAL Github page](https://github.com/Continual-Intelligence/SEAL/tree/main).
"SEAL (**Se**lf-**A**dapting **L**LMs) is a framework for training language models via RL to generate self-edits (finetuning data and other update directives for themselves) in response to new inputs."
"All experiments can be run with 2 A100/H100 GPUs"
Anyone happen to have tried this out? | 2025-10-14T14:07:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o6ghgh/mit_seal_selfadapting_llms/ | ravage382 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6ghgh | false | null | t3_1o6ghgh | /r/LocalLLaMA/comments/1o6ghgh/mit_seal_selfadapting_llms/ | false | false | self | 19 | null |
Different Models for Various Use Cases. Which Model you use & Why? | 4 | I've been testing different local LLMs for various tasks, and I'm starting to figure out what works for what.
For coding, I use **Qwen3-Coder-30B-A3B**. It handles Python and JavaScript pretty well. When I need to extract text from documents or images, **Qwen3-VL-30B** and **Qwen2.5-VL-32B** do the job reliably.
For general tasks, I run **GPT-OSS-120B**. It's reasonably fast at around 40 tok/s with 24GB VRAM and gives decent answers without being overly verbose. **Mistral Small 3.2** works fine for quick text editing and autocomplete.
**Gemma3-27B** is solid for following instructions, and I've been using **GLM-4.5-Air** when I need better reasoning. Each model seems to have its strengths, so I just pick based on what I'm doing.
**LLM Providers to access these models:**
* **LM Studio** - GUI interface
* **AnannasAI** - LLM provider API
* **Ollama** - CLI tool
* **llama.cpp** - direct control
I try not to just go by the benchmarks, but rather to test for myself what works best for my workflow. So far I've tested these LLMs within the scope of my own work. I'm looking for models that are useful and can work in a multimodal setup | 2025-10-14T14:06:27 | https://www.reddit.com/r/LocalLLaMA/comments/1o6ggv0/different_models_for_various_use_cases_which/ | Silent_Employment966 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6ggv0 | false | null | t3_1o6ggv0 | /r/LocalLLaMA/comments/1o6ggv0/different_models_for_various_use_cases_which/ | false | false | self | 4 | null |
Nvidia DGX Spark vs. Others (6000/5090/Mac) | 51 | Initial benchmarks are coming in: [https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/](https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/)
| 2025-10-14T13:57:46 | Chance-Studio-8242 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6g8se | false | null | t3_1o6g8se | /r/LocalLLaMA/comments/1o6g8se/nvidia_dgx_spark_vs_others_60005090mac/ | false | false | 51 | {'enabled': True, 'images': [{'id': 'X0ZBlQJbfmj7J_wspzxerfRpwFuVUJuOeCi2NfvLw5A', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=108&crop=smart&auto=webp&s=16bba4cf86b40b36e66da7daa44d9b1945d14932', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=216&crop=smart&auto=webp&s=71e36607ac100aa82f11ee079d7d0dcd59b5acfa', 'width': 216}, {'height': 137, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=320&crop=smart&auto=webp&s=16c04d879ab762b2594f28a42b6b64ce7d58f566', 'width': 320}, {'height': 274, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=640&crop=smart&auto=webp&s=0be02471f67e88770b1458795aaf8f94c2246c6e', 'width': 640}, {'height': 411, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=960&crop=smart&auto=webp&s=cda92118d04b27f779ae71d393b3fd0cc67b2b78', 'width': 960}, {'height': 462, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?width=1080&crop=smart&auto=webp&s=5de0f0ce3c97c25825d787cde2c706c59c10855e', 'width': 1080}], 'source': {'height': 866, 'url': 'https://preview.redd.it/g7gao5ec23vf1.png?auto=webp&s=55eea4e6270db9f56a77d3f33b757c132dc3496d', 'width': 2022}, 'variants': {}}]} | ||
Models evaluation on last-month GitHub PR bug-fix tasks [SWE-rebench] | 1 | 2025-10-14T13:54:59 | https://swe-rebench.com/ | Fabulous_Pollution10 | swe-rebench.com | 1970-01-01T00:00:00 | 0 | {} | 1o6g6ai | false | null | t3_1o6g6ai | /r/LocalLLaMA/comments/1o6g6ai/models_evaluation_on_lastmonth_github_pr_bugfix/ | false | false | default | 1 | null | |
Nvidia DGX Spark vs. Others (6000/5090/Mac) | 2 | Initial benchmarks are coming in: [https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/](https://lmsys.org/blog/2025-10-13-nvidia-dgx-spark/)
| 2025-10-14T13:50:31 | Chance-Studio-8242 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6g2a0 | false | null | t3_1o6g2a0 | /r/LocalLLaMA/comments/1o6g2a0/nvidia_dgx_spark_vs_others_60005090mac/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'HSl5N0oLx6_E_fbkzV7P6_TjwcznaWxNuBhuopGsfLM', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=108&crop=smart&auto=webp&s=ae935513710878eb7ef24f2d33bf8e3ca912dd61', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=216&crop=smart&auto=webp&s=d334b69bbac50c88bd56f2be3639766ac5c29833', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=320&crop=smart&auto=webp&s=5a0695b16cab91faba0974684af6cc7308d1079d', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=640&crop=smart&auto=webp&s=ac1ed2a8511c432d1b120aaf9fa91ff449fa0307', 'width': 640}, {'height': 541, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=960&crop=smart&auto=webp&s=0925780d8ce1081fe224d65d7a253714afe9ac6d', 'width': 960}, {'height': 608, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?width=1080&crop=smart&auto=webp&s=cd22ad46a0b8278b23ab3424c7b648837818dc01', 'width': 1080}], 'source': {'height': 1178, 'url': 'https://preview.redd.it/bzlu24jo03vf1.png?auto=webp&s=65d41a43c54c494dc3df2904859d8cfb775b5af2', 'width': 2090}, 'variants': {}}]} | ||
Best tools for prompt testing, evals, and observability: My 6-month field test + workflow | 3 | I have been testing a bunch of AI dev tools over the last 6 months - Cursor, Claude, LangChain, Flowise, Maxim, and a few custom eval setups. Some were great, most were just hype.
**What I’ve learned:**
Building with LLMs isn’t just about prompt quality; it’s about structure, testing, and feedback loops. Without proper versioning or evals, everything feels like trial and error.
**My current workflow:**
* **Building:** [LangChain](https://www.langchain.com/) + [Flowise](https://flowiseai.com/) for quick prototyping and orchestration.
* **Testing:** [Maxim](https://getmax.im/Max1m) for prompt management, A/B testing, and automated evaluations (LLM-as-judge + programmatic). It’s been great for comparing prompt versions and deploying updates without touching code.
* **Reviewing:** [Claude](https://claude.ai/) for catching logic gaps and validating final responses.
do you recommend adding other tools to my AI dev stack | 2025-10-14T13:34:04 | https://www.reddit.com/r/LocalLLaMA/comments/1o6fnoc/best_tools_for_prompt_testing_evals_and/ | MongooseOriginal6450 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6fnoc | false | null | t3_1o6fnoc | /r/LocalLLaMA/comments/1o6fnoc/best_tools_for_prompt_testing_evals_and/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=108&crop=smart&auto=webp&s=c2643bcdcce914a83a4fba0bb93cf2c8f17ebbbc', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=216&crop=smart&auto=webp&s=9e13f211d2bd6e08a04aafcef40f4053899a83fa', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=320&crop=smart&auto=webp&s=5c44badb316ac23c90565d17f9c1adda42c28630', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=640&crop=smart&auto=webp&s=96b1c7c78b9e62c83bf719765a124dee6566b455', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=960&crop=smart&auto=webp&s=59bca3e37a8e790f2acec70ed94d00677379760c', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?width=1080&crop=smart&auto=webp&s=3324be7a0f44a64e1d47539b0dc3c6dfef0dc2a8', 'width': 1080}], 'source': {'height': 1256, 'url': 'https://external-preview.redd.it/ZxU7_MJm9Nif3lfqe4Po6LeHUZNXtQMRhESEmtJJqBQ.jpeg?auto=webp&s=753ed9061cb1962b6213e4bfeb84ee8ccd809b0e', 'width': 2400}, 'variants': {}}]} |
Voice Cloning TTS model with output duration hints? | 1 | I've been trying this with Chatterbox but it only has pace and expression. Ideally I'd be able to supply a target duration for the generated speech. This is for alignment purposes. Is there a way to do this with Chatterbox?
Alternatively, is there another one-shot voice cloning TTS as good or better (at cloning) with duration control? | 2025-10-14T13:29:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o6fjlj/voice_cloning_tts_model_with_output_duration_hints/ | SchrodingersCigar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6fjlj | false | null | t3_1o6fjlj | /r/LocalLLaMA/comments/1o6fjlj/voice_cloning_tts_model_with_output_duration_hints/ | false | false | self | 1 | null |
Anyone want 1 month of free Perplexity Pro? | 0 | Just DM me and I will send you the Comet browser invitation link, and you will get 1 month of Perplexity Pro (only via my invitation), so hurry up. Just DM; I get a referral payout too when you get Pro for free. Please support, I am a student trying to earn with this stuff 😭
https://preview.redd.it/p141ld51x2vf1.png?width=1080&format=png&auto=webp&s=feeaaa58038895147d625f3548d7f2693404ca6c
| 2025-10-14T13:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o6fi3w/anyone_want_1_month_of_free_perplexity_pro/ | BothYou243 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6fi3w | false | null | t3_1o6fi3w | /r/LocalLLaMA/comments/1o6fi3w/anyone_want_1_month_of_free_perplexity_pro/ | false | false | 0 | null | |
Is anyone else also having trouble tracking chats across different platforms? | 2 | So most of the time I use self-hosted Open WebUI to access local LLMs, and some via API. But I sometimes also use OpenAI chat or Qwen chat when I need something bigger or with multimedia capabilities.
On top of that, sometimes I use CLI interfaces or Coder CLI interfaces.
The upshot is that sometimes I have to visit several different places to find information from one of these chats. I was thinking of trying to integrate and centralise everything in one location, so that no matter where I make a request from, the inputs and outputs are logged somewhere and I only need to look in one place.
This is more difficult for hosted proprietary services, as you'd need an extract, transform and load (ETL) pipeline.
Anyone else also have this issue and if so, have you found a way to solve it? | 2025-10-14T12:56:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o6eqwb/is_anyone_else_also_having_trouble_tracking_chats/ | DeltaSqueezer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6eqwb | false | null | t3_1o6eqwb | /r/LocalLLaMA/comments/1o6eqwb/is_anyone_else_also_having_trouble_tracking_chats/ | false | false | self | 2 | null |
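One pragmatic answer to the question above is a thin logging shim: route every OpenAI-compatible call through a single helper that writes the exchange to one SQLite file, so CLI tools, API scripts, and self-hosted UIs all land in one searchable place (hosted web chats would still need a separate export/ETL step). A minimal sketch; the table schema, file name, and endpoint are my own choices:

```python
# Log every chat exchange to one SQLite DB so all clients share a single history.
import sqlite3, time
from openai import OpenAI

db = sqlite3.connect("chat_history.db")
db.execute("""CREATE TABLE IF NOT EXISTS chats(
    ts REAL, source TEXT, model TEXT, prompt TEXT, response TEXT)""")

client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")  # placeholder endpoint

def logged_chat(source: str, model: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}])
    answer = resp.choices[0].message.content
    db.execute("INSERT INTO chats VALUES (?,?,?,?,?)",
               (time.time(), source, model, prompt, answer))
    db.commit()
    return answer

print(logged_chat("cli", "qwen3", "What did we discuss about RAG yesterday?"))
```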
local setup (deepseek coder v2 lite, jani and continue.dev) | 1 | [removed] | 2025-10-14T12:53:28 | https://www.reddit.com/r/LocalLLaMA/comments/1o6eoo8/local_setup_deepseek_coder_v2_lite_jani_and/ | swapnil0545 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6eoo8 | false | null | t3_1o6eoo8 | /r/LocalLLaMA/comments/1o6eoo8/local_setup_deepseek_coder_v2_lite_jani_and/ | false | false | self | 1 | null |
Does anyone need a selfhosted backend with, auth, db , storage , cloud functions, sql editor & native webooks support ? | 0 | Hello everyone, I'm currently testing SelfDB v0.05 with native support for auth, db , storage , sql editor cloud functions and native webhooks support. for local multimodal ai agents. Looking for early testers with GPU's to take it for a spin ? fully open source [https://github.com/Selfdb-io/SelfDB](https://github.com/Selfdb-io/SelfDB) | 2025-10-14T12:29:27 | https://v.redd.it/kyevzjmdm2vf1 | selfdb | /r/LocalLLaMA/comments/1o6e5er/does_anyone_need_a_selfhosted_backend_with_auth/ | 1970-01-01T00:00:00 | 0 | {} | 1o6e5er | false | null | t3_1o6e5er | /r/LocalLLaMA/comments/1o6e5er/does_anyone_need_a_selfhosted_backend_with_auth/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=108&crop=smart&format=pjpg&auto=webp&s=077441fe69b2bd5a1ee31b68d9a60586c2e36034', 'width': 108}, {'height': 128, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=216&crop=smart&format=pjpg&auto=webp&s=ff1b26e8f64d502161e4df62e014f228879222e4', 'width': 216}, {'height': 190, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=320&crop=smart&format=pjpg&auto=webp&s=77c45dad3ea9d29be7fbc861af61c34b92017bca', 'width': 320}, {'height': 380, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=640&crop=smart&format=pjpg&auto=webp&s=73881e5e4a5e317050226126020bea45588da2ed', 'width': 640}, {'height': 571, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=960&crop=smart&format=pjpg&auto=webp&s=db6338679958f065b08bcba0e51069c61ec87aef', 'width': 960}, {'height': 642, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?width=1080&crop=smart&format=pjpg&auto=webp&s=a177d8ef90a88eb99c9236c5c134e9d3cfab340e', 'width': 1080}], 'source': {'height': 1708, 'url': 'https://external-preview.redd.it/ZjA0eGtsbGRtMnZmMXmAXscfBW9Xkx5Raj78KUdUJEzB3YQjCdHfrfFDcfgd.png?format=pjpg&auto=webp&s=c8f6057652ac84508785bb767af9d4d016c680fe', 'width': 2870}, 'variants': {}}]} | |
Best vLLM for pill recognition (trainable on custom dataset)? | 1 | [removed] | 2025-10-14T12:27:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o6e4af/best_vllm_for_pill_recognition_trainable_on/ | Virtual_Attitude2025 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6e4af | false | null | t3_1o6e4af | /r/LocalLLaMA/comments/1o6e4af/best_vllm_for_pill_recognition_trainable_on/ | false | false | self | 1 | null |
hey Karpathy! we started a nanochat students group on hugging face | 7 | Hey,
We set up this organization on the hub for people to discuss and share their work on Andrej Karpathy's nanochat.
We'll share checkpoints, articles, and just discuss what we're learning. We already have a tokenizer trained and pretraining running.
https://preview.redd.it/mk4g3emwl2vf1.png?width=1594&format=png&auto=webp&s=96b984ebe15a0d20dec76319cfedfaf7bef95360
| 2025-10-14T12:25:37 | https://www.reddit.com/r/LocalLLaMA/comments/1o6e2gg/hey_karpathy_we_started_a_nanochat_students_group/ | Zealousideal-Cut590 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6e2gg | false | null | t3_1o6e2gg | /r/LocalLLaMA/comments/1o6e2gg/hey_karpathy_we_started_a_nanochat_students_group/ | false | false | 7 | null | |
What is the best ai detector: Ollama or EssayPro? | 0 | End of semester is really hitting me hard. I’ve got multiple essays and projects all due at once, and no matter how much I plan, I always feel behind. Lately, I’ve been experimenting with AI tools to help draft sections or clean up grammar, but I’m unsure if I can fully trust them for bigger papers.
I recently heard about Ollama, which is supposed to be pretty good at identifying AI-generated content. I tried running a few paragraphs through it, but the results were inconsistent. Sometimes it flagged my writing, sometimes it didn’t, and I couldn’t tell if I was using it wrong or if the tool itself was unreliable.
Then there’s EssayPro, which offers an ai essay checker and is considered by many as one of the best free AI detectors. From what I’ve read, it’s straightforward, accurate, and gives clear results without unnecessary complications. It seems like a safe option for students who want to be confident that their work won’t be mistakenly flagged by any ai detector.
So I’m really curious: for those who have tried both Ollama and this service, which one actually works better in real-life scenarios? Is Ollama effective if you prompt it correctly, or is the ai checker more reliable for consistent, accurate checks?
I’m especially interested in people’s experiences with longer research papers, citations, and maintaining a natural flow in essays. Accuracy is key, but so is usability and peace of mind when submitting assignments. Has anyone done side-by-side tests or compared the two in a real academic setting?
Honestly, any insight, tips, or personal experiences would be super helpful. I want to make sure I’m using the most effective tool before my next submission because the last thing I need is a flagged essay or unnecessary stress. I’ve also wondered if it’s worth using both tools together to double-check essays, or if one reliable detector is enough. Would love to hear if anyone has tried combining them for maximum accuracy.
| 2025-10-14T12:07:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o6doqj/what_is_the_best_ai_detector_ollama_or_essaypro/ | AlexMorter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6doqj | false | null | t3_1o6doqj | /r/LocalLLaMA/comments/1o6doqj/what_is_the_best_ai_detector_ollama_or_essaypro/ | false | false | self | 0 | null |
In LM Studio + MoE Model, if you enable this setting with low VRAM, you can achieve a massive context length at 20 tok/sec. | 10 | 2025-10-14T12:06:11 | Shockbum | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6dnzc | false | null | t3_1o6dnzc | /r/LocalLLaMA/comments/1o6dnzc/in_lm_studio_moe_model_if_you_enable_this_setting/ | false | false | 10 | {'enabled': True, 'images': [{'id': 'AYtT6kpBXz5BY7B9_uNZ0m4S8K5SDsyJNXZLgd_UAiY', 'resolutions': [{'height': 100, 'url': 'https://preview.redd.it/rafgk724i2vf1.jpeg?width=108&crop=smart&auto=webp&s=3493e6d58e2cae73c3d4e4a6eea1e253f5c434c3', 'width': 108}, {'height': 200, 'url': 'https://preview.redd.it/rafgk724i2vf1.jpeg?width=216&crop=smart&auto=webp&s=1a5e2d139991b73fba7706d2749f9e77029c4c7d', 'width': 216}, {'height': 296, 'url': 'https://preview.redd.it/rafgk724i2vf1.jpeg?width=320&crop=smart&auto=webp&s=7f3b7e5e1e1e14ea1db3ecd378ee91a99954b1f6', 'width': 320}, {'height': 592, 'url': 'https://preview.redd.it/rafgk724i2vf1.jpeg?width=640&crop=smart&auto=webp&s=76a59955785cb3e85c731a868427cc06fecfea70', 'width': 640}], 'source': {'height': 706, 'url': 'https://preview.redd.it/rafgk724i2vf1.jpeg?auto=webp&s=671030e883864420cb385dccb4d398bf2e2d3d00', 'width': 762}, 'variants': {}}]} | |||
How would you price this GPU workstation? | 1 | I have the opportunity to get the following system at a price I would say is really good. The machine is used but was tested by independent people I trust.
The specs:
**HP Z8 G4**
* 192GB ECC RAM (DDR4 3200 MHz)
* 2x Intel Xeon Gold 6234 CPU @ 3.30GHz
* **2x RTX A6000 48GB** (GA102GL) (there's an option to get a 3rd one)
* 2TB NVMe SSD
I would really love to hear your feedback on this machine, especially for LLM inference.
(the price is not finalized yet but I can post it once it is. However I know a price range in which similar machines were sold) | 2025-10-14T12:03:30 | https://www.reddit.com/r/LocalLLaMA/comments/1o6dm18/how_would_you_price_this_gpu_workstation/ | waescher | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6dm18 | false | null | t3_1o6dm18 | /r/LocalLLaMA/comments/1o6dm18/how_would_you_price_this_gpu_workstation/ | false | false | self | 1 | null |
Help me choose a GPU - 5060ti 16GB vs 9060XT 16GB | 4 | Hello,
I’ve been debating for a while now which one to get. The 9060 XT is $130 CAD cheaper. I know the 5060 Ti has CUDA and GDDR7, so it will be better for sure. But AMD has made big gains in AI recently.
https://m.youtube.com/watch?v=eF2PeRqpsQ4 shows it’s quite close in some cases. In addition, rocm6.4 was launched on windows recently too. Of course I could install rocm7 from TheRock repo as well.
So I’m wondering, with these recent gains, is the choice still as clear as it was in the past? And when does the Nvidia card make sense for the extra price? Because it’s not a small difference. If the extra information helps: I’m upgrading from a 3050 6GB, and I’ll be doing some local LLMs for research (not often, but still), mainly at 7B or 13B, probably Q4_K_M quant, maybe bigger if I can support it. I’ll most likely use LM Studio after switching from Ollama, though I’m open to using llama.cpp directly. For image gen, while not the focus of this sub, I’ll also do some of that; I’m using ComfyUI right now, usually Illustrious or SDXL. I’m open to exploring bigger models with a better card though.
Thank you for any insight and help you can give | 2025-10-14T11:55:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o6dfv8/help_me_choose_a_gpu_5060ti_16gb_vs_9060xt_16gb/ | dks11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6dfv8 | false | null | t3_1o6dfv8 | /r/LocalLLaMA/comments/1o6dfv8/help_me_choose_a_gpu_5060ti_16gb_vs_9060xt_16gb/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'c-BrPeJSH50eLc7RXeUFCfzymsPs5uEMGBjY15LQkng', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/c-BrPeJSH50eLc7RXeUFCfzymsPs5uEMGBjY15LQkng.jpeg?width=108&crop=smart&auto=webp&s=0ed7c8593c155a717ea9fc990a4364ea380c7473', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/c-BrPeJSH50eLc7RXeUFCfzymsPs5uEMGBjY15LQkng.jpeg?width=216&crop=smart&auto=webp&s=c5483c7cdae625c7beb69cc60410e9a39f824531', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/c-BrPeJSH50eLc7RXeUFCfzymsPs5uEMGBjY15LQkng.jpeg?width=320&crop=smart&auto=webp&s=9f12fa30bdd4e2b579abd145b4cc9b8bb4c5e805', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/c-BrPeJSH50eLc7RXeUFCfzymsPs5uEMGBjY15LQkng.jpeg?auto=webp&s=c5d451d3ea5588364a6834196430e6ec51802fb8', 'width': 480}, 'variants': {}}]} |
SHAI – (yet another) open-source Terminal AI coding assistant | 21 | # At OVHcloud, we built SHAI for our internal needs as a coding assistant that wouldn’t rely on proprietary models or closed services. We’ve now open-sourced it (Apache 2.0) so the community can use and improve it too, including for local use.
**What is SHAI? 🔎**
A terminal-based AI assistant to help you:
• Build & edit code
• Run shell commands
• Automate workflows
• Or even run headless as part of your stack
**Why it’s cool?** 😎
• Fully Open Source + developer-first design
• No vendor lock-in (configure any LLM endpoint)
• Works out of the box with pre-configured OVHCloud AI Endpoints (free tier with low rate limiting - you can add your API key later)
• Supports Function Calling + MCP
Also → SHAI is part of **Hacktoberfest** this year! If you want to contribute & grab some swag, it’s a great time: [https://github.com/ovh/shai](https://github.com/ovh/shai) | 2025-10-14T11:42:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o6d6xh/shai_yet_another_opensource_terminal_ai_coding/ | Fit_Temperature7246 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6d6xh | false | null | t3_1o6d6xh | /r/LocalLLaMA/comments/1o6d6xh/shai_yet_another_opensource_terminal_ai_coding/ | false | false | self | 21 | {'enabled': False, 'images': [{'id': '2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=108&crop=smart&auto=webp&s=823acfcbd7e383b1aa396b333469b18ee0a47ecb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=216&crop=smart&auto=webp&s=47069d1263db80ba3c0c46bf17c249c37a161cb3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=320&crop=smart&auto=webp&s=547140c0b318c4b4649c01a8f963c72408f31142', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=640&crop=smart&auto=webp&s=af6e2cd27f4ab25ec9b872062fac1ac745ff1031', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=960&crop=smart&auto=webp&s=da01ee0dab512a0ee81928839c2879d67183ffb3', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?width=1080&crop=smart&auto=webp&s=34f8ac33a41a52290f23a0628e75fac17c1dcdbc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2zudpXVdWl6oVcMk64w9FioeQCW6pOR2AHkVg6mKsnw.png?auto=webp&s=5929154e268f1798b79cb208f955f63b144da229', 'width': 1200}, 'variants': {}}]}
CPU Only OSS 120 | 25 | I've sold my 3090 and I'm selling my 4090 as we speak, mostly because the stuff I really need LLMs for requires huge models, and for everything else I only need really small models (4B or less). Also, I tend to game on my PS5 since I'm at my PC working all day.
So I used to run OSS 120 partially on GPU with the rest offloaded to CPU, and it used to fly. It was also a pretty good model, IMO, for logic etc. given its speed.
So I decided to just try it on CPU only (gulp) on my home lab server, and it's actually more than usable, at a fraction of the power cost too. This is also running in a VM with only half the cores assigned.
```
prompt eval time =   260.39 ms /  13 tokens ( 20.03 ms per token, 49.92 tokens per second)
       eval time = 51470.09 ms / 911 tokens ( 56.50 ms per token, 17.70 tokens per second)
      total time = 51730.48 ms / 924 tokens
```
| 2025-10-14T11:38:44 | https://www.reddit.com/r/LocalLLaMA/comments/1o6d4a6/cpu_only_oss_120/ | Wisepunter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6d4a6 | false | null | t3_1o6d4a6 | /r/LocalLLaMA/comments/1o6d4a6/cpu_only_oss_120/ | false | false | self | 25 | null |
I built a fully automated AI podcast generator that connects to ollama | 13 | **Hey everyone,**
I’ve been working on a fun side project — an **AI-powered podcast generator** built entirely with **Ollama (for the LLM)** and **Piper (for TTS)**. 🎙️
The system takes any topic and automatically:
1. **Writes a complete script**
2. **Generates the audio**
I’ve **open-sourced the full project** on GitHub so anyone can explore, use, or contribute to it. If you’re into AI, audio, or automation, I’d love your feedback and ideas!
🔗 **GitHub Repo:** [https://github.com/Laszlobeer/AI-podcast](https://github.com/Laszlobeer/AI-podcast) | 2025-10-14T10:54:44 | https://www.reddit.com/r/LocalLLaMA/comments/1o6c9vs/i_built_a_fully_automated_ai_podcast_generator/ | Reasonable_Brief578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o6c9vs | false | null | t3_1o6c9vs | /r/LocalLLaMA/comments/1o6c9vs/i_built_a_fully_automated_ai_podcast_generator/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=108&crop=smart&auto=webp&s=284e6211acdef6fa5bed4c46cc0034c3627737ac', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=216&crop=smart&auto=webp&s=b8e9515c2eed14308c24f73b0b5429dcd6a0f01e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=320&crop=smart&auto=webp&s=356a15853d2746747cd3043b86858e28e3ce31d7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=640&crop=smart&auto=webp&s=544ab0a06a891d3ed284b58e4ab22d9caee6e763', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=960&crop=smart&auto=webp&s=4a2374724fe8ca29537ef64663c243a26bf26759', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?width=1080&crop=smart&auto=webp&s=33a3964d3b0bd809f6fc392abfa95b4f5e27fa83', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Pm-cwkYNGD8heBIn1O1aG8np0fYkCbpSHUuFPMpAogY.png?auto=webp&s=82ab9a603bd03f503bf748b5123bfd133518c5d5', 'width': 1200}, 'variants': {}}]} |
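The pipeline above reduces to two reproducible steps: ask Ollama for a script, then pipe the text through Piper. A minimal sketch, assuming Ollama on its default port (11434) and a Piper voice model already downloaded; the model names are placeholders, not necessarily what the linked repo uses:

```python
# Two-step podcast pipeline: Ollama writes the script, Piper speaks it.
import requests, subprocess

topic = "the history of local LLMs"
r = requests.post("http://localhost:11434/api/generate", json={
    "model": "llama3",  # any model pulled into your local Ollama
    "prompt": f"Write a short two-host podcast script about {topic}.",
    "stream": False,
})
script = r.json()["response"]

# Pipe the script into the Piper CLI via stdin; the voice model path is a placeholder.
subprocess.run(
    ["piper", "--model", "en_US-lessac-medium.onnx", "--output_file", "episode.wav"],
    input=script.encode("utf-8"), check=True,
)
```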
Used Llama4 to build Examsprint AI | 0 | Examsprint AI, your one-stop AI study solution, built with the help of Llama.
Examsprint-ai.pages.dev | 2025-10-14T10:38:44 | Dependent-Donkey5464 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6bzqm | false | null | t3_1o6bzqm | /r/LocalLLaMA/comments/1o6bzqm/used_llama4_to_build_examsprint_ai/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'w0q5qsyv22vf1', 'resolutions': [{'height': 122, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?width=108&crop=smart&auto=webp&s=1bc28473134898e50faee7056c9cbdec8e2f4a1f', 'width': 108}, {'height': 244, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?width=216&crop=smart&auto=webp&s=4b54136a79103678b30913a4a805d5d7c45fec80', 'width': 216}, {'height': 362, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?width=320&crop=smart&auto=webp&s=399493c40e5eda4e1544feda505a8d602f2205be', 'width': 320}, {'height': 725, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?width=640&crop=smart&auto=webp&s=14d8596fae2eeb274bcf469ea6ebaca88cd34bf9', 'width': 640}, {'height': 1088, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?width=960&crop=smart&auto=webp&s=1b0dd9d4d5d747f666598860f59c2a27f0bfcd32', 'width': 960}], 'source': {'height': 1088, 'url': 'https://preview.redd.it/w0q5qsyv22vf1.png?auto=webp&s=6cd679708a18370dbf9edec3ac35213f1b2bf3d8', 'width': 960}, 'variants': {}}]} | |
Llama 4 download through Hugging Face | 0 | Model-00016 has completed here, but the others just stay in the same state regardless of how much time I give them. If I restart, a different part completes fully. | 2025-10-14T09:56:22 | ReasonableBison4218 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6b96k | false | null | t3_1o6b96k | /r/LocalLLaMA/comments/1o6b96k/llama_4_download_through_hugging_face/ | false | false | 0 | {'enabled': True, 'images': [{'id': '5_TwElXAtb_AKO-BTorJVsj6HOBcMjC1SRIFoG_V8yg', 'resolutions': [{'height': 42, 'url': 'https://preview.redd.it/fi8t8k37v1vf1.png?width=108&crop=smart&auto=webp&s=f7d482dd7fa7999aa48b39759ee5abf777a987eb', 'width': 108}, {'height': 85, 'url': 'https://preview.redd.it/fi8t8k37v1vf1.png?width=216&crop=smart&auto=webp&s=a882717ac10efc6c2c86cd68c63c8f2c3f258ef9', 'width': 216}, {'height': 126, 'url': 'https://preview.redd.it/fi8t8k37v1vf1.png?width=320&crop=smart&auto=webp&s=052f146370a81337c65b34d1e0b4deafb15b5431', 'width': 320}, {'height': 252, 'url': 'https://preview.redd.it/fi8t8k37v1vf1.png?width=640&crop=smart&auto=webp&s=e4dd1880a81d2693c3db8d0b318d81c4de4da0fe', 'width': 640}], 'source': {'height': 371, 'url': 'https://preview.redd.it/fi8t8k37v1vf1.png?auto=webp&s=f7a22573720e34bfcc04b6db9997c3dca3852c63', 'width': 940}, 'variants': {}}]}
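For stalls like the one above, `huggingface_hub`'s `snapshot_download` resumes partially downloaded shards on restart, and dropping to a single worker makes per-file progress much easier to watch. A minimal sketch — the repo id is a placeholder; use the exact gated repo you were granted access to:

```python
# Resume a stalled multi-file model download; incomplete shards pick up where they left off.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # placeholder repo id
    local_dir="./llama4",
    max_workers=1,  # one file at a time makes stalls easier to spot
)
```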
OrKa Cloud API - orchestration for real agentic work, not monolithic prompts | 4 | Monolithic prompts are lazy. One agent that analyzes, remembers, searches, synthesizes, formats, and somehow stays coherent is a fantasy. It blurs responsibilities, loses context, and turns debugging into a black box.
I just shipped OrKa Cloud API. It lets you compose multiple focused agents into a traceable, memory-aware workflow. You bring your OpenAI key. No infra. Real memory. Full execution trace.
**What it does well**
* Specialization beats bloat: analyzer, memory writer, memory reader, deep analyzer, synthesizer. Each does one job.
* Real memory with RedisStack: write insights, fetch with vector search, feed later stages.
* Deterministic orchestration: sequential flow, explicit data passing, cost accounting, full trace JSON you can download.
* Composable YAML: agents are reusable. You can replace one without touching the rest.
**Where it’s still rough**
* OpenAI-only in the hosted API. If you need Anthropic or Gemini in cloud right now, this is not it.
* Demo rate limits and Cloud Run cold starts exist. If you are chasing sub-500 ms P99, deploy your own.
* YAML size is capped. If you try to shove your entire R&D department in one config, you missed the point.
# Live API
* Endpoint: [`https://orka-demo-647096874165.europe-west1.run.app/api/run`](https://orka-demo-647096874165.europe-west1.run.app/api/run)
* Console base: [`https://orka-demo-647096874165.europe-west1.run.app`](https://orka-demo-647096874165.europe-west1.run.app)
* GitHub: [https://github.com/marcosomma/orka-reasoning](https://github.com/marcosomma/orka-reasoning)
* Examples dir: [https://github.com/marcosomma/orka-reasoning/tree/main/examples](https://github.com/marcosomma/orka-reasoning/tree/main/examples)
# Why this pattern works
* **Task segmentation** prevents context dilution. Agents are short, sharp, auditable.
* **Memory** creates continuity across stages. This is not roleplay memory. It is Redis-backed storage plus similarity search.
* **Observability** is non negotiable. Every step is logged. You can replay the trace, see costs, and tune prompts surgically.
# Copy-paste demo you can run right now in Postman
**Method**: POST
**URL**: [`https://orka-demo-647096874165.europe-west1.run.app/api/run`](https://orka-demo-647096874165.europe-west1.run.app/api/run)
**Headers**: `Content-Type: application/json`
**Body**: paste this exactly and replace the key value
```json
{
"input": "Explain how neural networks learn from data",
"openai_api_key": "sk-YOUR_OPENAI_KEY_HERE",
"yaml_config": "orchestrator:\n id: iterative-learning\n strategy: sequential\n agents:\n - initial_analyzer\n - insight_storer\n - knowledge_retriever\n - deep_analyzer\n - learning_recorder\n - final_synthesizer\n\nagents:\n - id: initial_analyzer\n type: openai-answer\n model: gpt-4o-mini\n .temperature: 0.7\n prompt: |\n Analyze this topic: {{ get_input() }}\n \n Provide:\n 1. Core concepts (3-5 key points)\n 2. Connections to related topics\n 3. Areas needing deeper exploration\n \n Format as structured insights.\n\n - id: insight_storer\n type: memory\n operation: write\n prompt: |\n Initial analysis of: {{ get_input() }}\n \n Key insights:\n {{ get_agent_response('initial_analyzer') }}\n\n - id: knowledge_retriever\n type: memory\n operation: read\n prompt: |\n Search for concepts related to:\n {{ get_agent_response('initial_analyzer') }}\n\n - id: deep_analyzer\n type: openai-answer\n model: gpt-4o\n temperature: 0.6\n prompt: |\n Original question: {{ get_input() }}\n \n Initial analysis:\n {{ get_agent_response('initial_analyzer') }}\n \n Related knowledge from memory:\n {{ previous_outputs.knowledge_retriever }}\n \n Now provide a DEEPER analysis that:\n 1. Builds on the initial insights\n 2. Connects to related concepts from memory\n 3. Addresses the areas flagged for deeper exploration\n 4. Adds new perspectives not covered initially\n \n Show how the analysis has evolved.\n\n - id: learning_recorder\n type: memory\n operation: write\n prompt: |\n Deep analysis of: {{ get_input() }}\n \n Advanced insights:\n {{ get_agent_response('deep_analyzer') }}\n \n Evolution from initial analysis:\n - Built upon: {{ get_agent_response('initial_analyzer') | truncate(200) }}\n - Connected with: {{ previous_outputs.knowledge_retriever | truncate(200) }}\n\n - id: final_synthesizer\n type: openai-answer\n model: gpt-4o-mini\n temperature: 0.4\n prompt: |\n Create a comprehensive final answer for: {{ get_input() }}\n \n Synthesize these learning stages:\n \n **Stage 1 - Initial Understanding:**\n {{ get_agent_response('initial_analyzer') }}\n \n **Stage 2 - Memory-Enhanced Analysis:**\n {{ get_agent_response('deep_analyzer') }}\n \n **Your Task:**\n 1. Show how understanding evolved through the stages\n 2. Present the final, most complete answer\n 3. Highlight what was learned through iteration\n 4. Demonstrate the value of this multi-pass approach\n \n Structure:\n - Evolution Summary (how thinking progressed)\n - Comprehensive Answer (synthesized knowledge)\n - Learning Insights (what the iteration revealed)"
}
You will get a run_id, cost breakdown, and a log URL. You can fetch the full trace JSON at `/api/logs/{run_id}`.
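If you prefer code to Postman, the same call is a few lines of Python. This is a minimal sketch: the `run_id` response key and the trace shape are assumptions on my part, so inspect the actual payload first.

```python
import requests

BASE = "https://orka-demo-647096874165.europe-west1.run.app"

body = {
    "input": "Explain how neural networks learn from data",
    "openai_api_key": "sk-YOUR_OPENAI_KEY_HERE",
    "yaml_config": open("iterative-learning.yaml").read(),  # same YAML as the body above
}

run = requests.post(f"{BASE}/api/run", json=body, timeout=300).json()
print(run)  # run id, cost breakdown, log URL

trace = requests.get(f"{BASE}/api/logs/{run['run_id']}", timeout=60).json()  # key name assumed
print(trace)  # full per-agent trace for surgical prompt tuning
```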
# What to try
* Ask related questions back to back. The second run benefits from memory written in the first.
* Swap models per stage. Keep cheap models for wide passes, use a stronger one for deep analysis or final synthesis.
* Pull the trace, read each agent’s output, and trim prompts to the minimum that still produces quality.
# Realistic costs
* Infra for self-hosted: about $42 per month at 50 percent uptime. Scales to zero on idle.
* Per-run API fees: around $0.01 to $0.03 for the demo flow. You control models and temperature.
# Production notes
* API keys are never stored. They are scoped to the single request and wiped afterward.
* Rate limit: 5 requests per minute per IP on the public demo. If you need more, deploy your own.
* YAML limit is 100 KB. Keep agents tight. Reuse them.
If you have been battling a 1,200-token kitchen-sink prompt, stop. Split the job. Add memory. Trace everything. The results are cleaner, cheaper, and actually debuggable.
I want blunt feedback. What would make this viable for your stack right now: Anthropic support, parallel forks, conditional routers, or a baked-in evaluator that loops until a quality threshold is hit?
Best CPU/RAM Combo for AI: EPYC (8-Channel DDR4) vs. Ryzen (Dual-Channel DDR5) with Blackwell PRO 6000 Max Q | 9 | Hey everyone,
I'm planning a new build for hosting and running AI models, and I'm trying to decide on the best platform strategy.
I currently have 256 GB of DDR4 ECC RAM (as 8 x 32GB sticks @ 2400MHz) and I'm looking to buy a Blackwell PRO 6000 Max Q and possibly multiple in the future. This leads me to two very different build options:
Option 1: The EPYC Server Build. I could get an older-generation CPU like an AMD EPYC 7532 (32-core/64-thread). The major benefit here would be fully utilizing my RAM across 8 memory channels, which should provide massive memory bandwidth. There are also more PCIe lanes for multiple GPUs later on, if that is ever required.
Option 2: The Modern Ryzen Build. Alternatively, I could sell the DDR4 and build a modern system around a high-clocked AMD Ryzen CPU with new, faster DDR5 RAM, but I'd be limited to only 2 memory channels.
Now my questions:
**Bandwidth vs. Speed:** For AI workloads like running Large Language Models (LLMs), what's more important: the massive memory bandwidth of an 8-channel EPYC setup, or the higher core clock speeds and faster RAM of a modern dual-channel Ryzen system?
**System RAM vs. VRAM:** How useful is having a large amount of system RAM (256 GB) when a GPU with fast VRAM is doing most of the heavy lifting? Is there a point of diminishing returns?
**Efficient RAM Offloading:** I know it's possible to offload model layers from VRAM to system RAM to run larger models. Are there effective strategies or software settings that allow this to happen without a major hit to generation speed? I want the system RAM to be a useful complement to the VRAM, not a bottleneck.
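To be concrete about question 3, this is the kind of setting I mean. A minimal sketch with llama-cpp-python, where `n_gpu_layers` controls how many layers live in VRAM while the rest spill to system RAM (the path and numbers are placeholders, not recommendations):

```python
from llama_cpp import Llama

# Partial GPU offload: layers beyond n_gpu_layers stay in system RAM,
# which is exactly where 8-channel vs. 2-channel bandwidth should show up.
llm = Llama(
    model_path="models/llama-70b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=48,   # raise until VRAM is full; -1 offloads everything
    n_ctx=32768,
)
out = llm("Why is memory bandwidth the bottleneck for offloaded layers?", max_tokens=64)
print(out["choices"][0]["text"])
```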
I'm trying to determine if it's smart to build around this large kit of DDR4 RAM to maximize bandwidth or if I'm better off starting fresh with the latest consumer hardware.
Thanks in advance for any advice or resources!
| 2025-10-14T08:30:22 | https://www.reddit.com/r/LocalLLaMA/comments/1o69wtr/best_cpuram_combo_for_ai_epyc_8channel_ddr4_vs/ | MustafaMahat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o69wtr | false | null | t3_1o69wtr | /r/LocalLLaMA/comments/1o69wtr/best_cpuram_combo_for_ai_epyc_8channel_ddr4_vs/ | false | false | self | 9 | null |
What’s the point of a DGX Spark for inference if a Mac Studio M1 Ultra beats it at TG and equals it at PP at half the price? | 87 | I might be missing something here, but with the results I’ve seen, the DGX does what Apple did 3 years ago (actually worse token generation).
Is the DGX as bad as it seems for inference?
We all knew that TG would have been shit with that bandwidth, but even prompt processing doesn’t seem great. | 2025-10-14T08:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/1o69vm5/whats_the_point_of_a_dgx_spark_for_inference_if_a/ | Valuable-Run2129 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o69vm5 | false | null | t3_1o69vm5 | /r/LocalLLaMA/comments/1o69vm5/whats_the_point_of_a_dgx_spark_for_inference_if_a/ | false | false | self | 87 | null |
Looking for a few AI enthusiasts to help with Skygen.ai dev testing | 0 | We’re a small team of five developers and now we're building Skygen, an AI agent that performs any human task on your phone, laptop, and desktop, just captures the screen and clicks itself. Quite slow now, but it works.
We’re launching a closed dev test and looking for about 30 hands-on AI enthusiasts who want to explore early builds, break things, and share honest feedback. It’s still early, but already working — and your insights will help us make Skygen smarter, faster, and more useful in real life.
As a thank-you, every dev-test participant will receive a free 1-year Skygen subscription once we launch.
Let me know in the comments if you’d like to join, I’ll share the link there. For some reason, Reddit doesn’t let me include it in the post itself.
Big thanks to everyone who decides to jump in :) | 2025-10-14T08:14:11 | https://www.reddit.com/r/LocalLLaMA/comments/1o69o29/looking_for_a_few_ai_enthusiasts_to_help_with/ | cammmtheemann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o69o29 | false | null | t3_1o69o29 | /r/LocalLLaMA/comments/1o69o29/looking_for_a_few_ai_enthusiasts_to_help_with/ | false | false | self | 0 | null |
Still no qwen3 next 80b gguf? | 29 | Is it coming? Will it come?
WhatsApp food ordering AI Agent example with source code | 7 | Hi,
We’ve been making minimal AI agent examples with full source code.
Here's one that lets you order food on WhatsApp: it shows a menu, takes your order, and checks the order status through chat, built with Supabase, the WhatsApp Cloud API, OpenAI, and VoltAgent.
It uses tools and memory to keep context and handle actions.
The project is simple on purpose, so feel free to fork it and build your own version. Feedback and PRs are welcome :)
We're adding more real-world agent examples. What kind of examples or use cases would you like to see?
Disclaimer: I’m one of the maintainers of VoltAgent.
| 2025-10-14T07:39:59 | https://github.com/VoltAgent/voltagent/tree/main/examples/with-whatsapp | necati-ozmen | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o695bm | false | null | t3_1o695bm | /r/LocalLLaMA/comments/1o695bm/whatsapp_food_ordering_ai_agent_example_with/ | false | false | default | 7 | null |
Is AI benchmark website trustworthy? | 1 | [removed] | 2025-10-14T07:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/1o691x4/is_ai_benchmark_website_trustworthy/ | Just-Normal-Guy-111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o691x4 | false | null | t3_1o691x4 | /r/LocalLLaMA/comments/1o691x4/is_ai_benchmark_website_trustworthy/ | false | false | self | 1 | null |
qwen3 coder 4b and 8b, please | 14 | Why did Qwen stop releasing small models?
Can we do it on our own? I'm on an 8GB MacBook Air, so 8B is the max for me.
Any recommendations for a prebuilt workstation for running AI models locally? | 3 | Hi guys, I was looking to buy a pre-built machine for local AI inferencing and need some recommendations from you all.
To get the question out of the way: yes, I know building my own would be cheaper (and maybe even more performant), but I can't because of reasons.
GitHub - RagView/RagView : Validate RAG route on your dataset | 11 | 2025-10-14T06:37:21 | https://github.com/RagView/RagView | Cheryl_Apple | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o6870y | false | null | t3_1o6870y | /r/LocalLLaMA/comments/1o6870y/github_ragviewragview_validate_rag_route_on_your/ | false | false | default | 11 | {'enabled': False, 'images': [{'id': 'CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=108&crop=smart&auto=webp&s=32d77f962ad942fd2bd9b532454aa352354cc2ab', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=216&crop=smart&auto=webp&s=1a8d2a4d175d568d05da83a3c0fa617b0916c7cb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=320&crop=smart&auto=webp&s=a85adffecdd3d8e173041809458b1f55484bdc98', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=640&crop=smart&auto=webp&s=e774eb1e33f85868b8ceca67abc63be011e9ecd6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=960&crop=smart&auto=webp&s=3bcbbbe690f36a8c544aab86cd3ee7a1ed28dcd7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?width=1080&crop=smart&auto=webp&s=f20d3eee2cc8ca866d3d90b43bd07c5d8fbc363a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CoNyHXb1E7NtiQZE7vuXWMXL6Jkf78hPpk0RKlCSlZ4.png?auto=webp&s=2dd53350c6d3c86edb68fe7b8ea63293ee8df9e0', 'width': 1200}, 'variants': {}}]} | |
Hello, everyone. | 8 | 2025-10-14T06:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/1o681ap/hello_everyone/ | Particular-Honey-137 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o681ap | false | null | t3_1o681ap | /r/LocalLLaMA/comments/1o681ap/hello_everyone/ | false | false | 8 | null | ||
Which is the best compatible unrestricted/Jailbreak AI model i can run on my laptop? | 0 | My spec: AMD Ryzen 5 5600H, Nvidia GeFore GTX 1650. | 2025-10-14T06:17:10 | https://www.reddit.com/r/LocalLLaMA/comments/1o67vmq/which_is_the_best_compatible/ | redfinalboss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o67vmq | false | null | t3_1o67vmq | /r/LocalLLaMA/comments/1o67vmq/which_is_the_best_compatible/ | false | false | self | 0 | null |
AndesVL Technical Report: An Efficient Mobile-side Multimodal Large Language Model | 5 | Code: [https://github.com/OPPO-Mente-Lab/AndesVL\_Evaluation](https://github.com/OPPO-Mente-Lab/AndesVL_Evaluation)
Model: [https://huggingface.co/OPPOer](https://huggingface.co/OPPOer) | 2025-10-14T06:13:45 | https://arxiv.org/abs/2510.11496 | ninjasaid13 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1o67tit | false | null | t3_1o67tit | /r/LocalLLaMA/comments/1o67tit/andesvl_technical_report_an_efficient_mobileside/ | false | false | default | 5 | null |
AlphaEvolve (from DeepMind) seems like a much bigger deal than other research for verifiable problems, no? Plus, we have an open-source alternative from Sakana AI: ShinkaEvolve. | 1 | **AlphaEvolve basically:**
AlphaEvolve is an evolutionary coding agent that uses large language models (LLMs) to automatically discover and optimize algorithms for complex problems. A human provides an initial program, marks code sections for improvement, and supplies an automated evaluator to score solutions. The system then enters a loop where an ensemble of LLMs (like Gemini Flash for idea generation and Gemini Pro for deeper suggestions) proposes code modifications inspired by the best-performing programs stored in a database. Each new program is automatically executed and scored by the evaluator, which filters out incorrect or poor suggestions. This evolutionary process iteratively refines algorithms, allowing it to solve problems in mathematics, computer science, and practical infrastructure optimization.
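The loop itself is simple to state. Here is a minimal sketch of the structure described above, with the LLM proposal step and the evaluator stubbed out as callables (an illustration of the published description, not DeepMind's code):

```python
import random

def evolve(seed_program: str, evaluate, propose_edit, generations: int = 100):
    """Evolutionary program search: keep a database of scored programs,
    sample strong parents, ask an LLM for a modification, and keep the
    child only if the automated evaluator can score it."""
    database = [(evaluate(seed_program), seed_program)]
    for _ in range(generations):
        # Tournament selection: bias parents toward the best programs so far
        parent = max(random.sample(database, min(3, len(database))))[1]
        child = propose_edit(parent)   # LLM-suggested code modification
        score = evaluate(child)        # automated scorer filters bad ideas
        if score is not None:          # None = crashed or incorrect output
            database.append((score, child))
    return max(database)               # best (score, program) pair found

# In AlphaEvolve, propose_edit fans out over an ensemble (a fast model for
# breadth, a stronger model for depth) and edits only the marked sections.
```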
**NOTABLY:** AlphaEvolve used Gemini 2.0 Flash and Gemini 2.0 Pro models
**ALSO:** AlphaEvolve was likely developed and first deployed internally at Google **around early 2024**. This estimate is based on the May 14, 2025, publication date of the blog post and white paper, combined with statements that its data center scheduling solution has been "in production for over a year" and that the agent has been "tested internally in Google for around a year".
[https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/](https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/)
**Stated Future Improvements for AlphaEvolve from the paper:**
It can be improved by integrating more powerful base LLMs, as its performance scales with model capability. A key next step is distilling its discoveries back into training data for future models, creating a recursive self-improvement loop. They plan to expand its application to problems without hard-coded evaluators by using LLMs to provide qualitative feedback or "soft scores" to guide the search. It could also be used to generate mathematical proofs, a difficult task due to the binary (correct/incorrect) reward, by using LLM feedback to create a smoother reward signal. The system could also build a general, curated library of useful code modules for cross-problem application. Further improvements include enhancing human-AI collaboration through better user interfaces and leveraging future LLMs with massive context windows to create a vast evolutionary database of past solutions for inspiration.
**AlphaEvolve's Accomplishments:**
AlphaEvolve optimized Google's critical infrastructure by discovering a data center scheduling heuristic that recovers 0.7% of fleet-wide compute resources; improving a Gemini training kernel by 23%, resulting in a 1% reduction in the model's overall training time; optimizing a key arithmetic circuit in an upcoming TPU via a Verilog rewrite; and speeding up the FlashAttention kernel for GPUs by up to 32.5%. In scientific discovery, it surpassed the state-of-the-art for 14 different matrix multiplication sizes, most notably finding a 48-multiplication algorithm for 4x4 complex matrices, the first improvement on Strassen's 1969 method in this setting. It was applied to over 50 open mathematical problems, rediscovering the best-known solutions in \~75% of cases and improving them in \~20% of cases. Specific mathematical breakthroughs include improving the 11-dimensional Kissing Number lower bound to 593, setting new records for Erdős's minimum overlap problem, and finding new state-of-the-art constructions for autocorrelation inequalities, uncertainty principles, and various geometric packing problems.
**It seems very scalable?**
AlphaEvolve's architecture seems highly scalable, particularly by expanding its LLM ensemble. Incorporating a diverse range of models such as **Grok, GPT, Claude, Llama, Qwen, GLM, Deepseek and other models** would significantly broaden the solution space, as each model's distinct strengths and weaknesses could generate novel suggestions beyond what Gemini alone can provide.
**Example:** "Romik also tested Deep Think on two Tier 4-style challenge problems of his own devising. It solved the first, following the same pattern we noted above: recognizing how the problem related to known techniques, and then heuristically applying these techniques to arrive at the correct answer. Romik characterized the second problem as requiring more creative leaps and original thinking—and had been amazed when an older model, o3-mini, solved it. Deep Think did not solve this problem."
**Read the full article for context:** [https://epoch.ai/blog/deep-think-math](https://epoch.ai/blog/deep-think-math)
If equipped with SOTA models like the advanced version of Gemini with Deep Think that achieved an IMO gold medal or a GPT-5 Pro, which some mathematicians and scientists are showing is capable of low level novel work, the system's should do much better in theory.
**GPT 5 Pro low level novel work claims:**
[https://mathstodon.xyz/@tao/115306424727150237](https://mathstodon.xyz/@tao/115306424727150237)
[https://x.com/PI010101/status/1974909578983907490](https://x.com/PI010101/status/1974909578983907490)
[https://x.com/SebastienBubeck/status/1958198661139009862](https://x.com/SebastienBubeck/status/1958198661139009862)
[https://x.com/SebastienBubeck/status/1977181716457701775](https://x.com/SebastienBubeck/status/1977181716457701775)
[https://x.com/robertghrist/status/1977462421154419015](https://x.com/robertghrist/status/1977462421154419015)
[https://x.com/SebastienBubeck/status/1972368891239375078](https://x.com/SebastienBubeck/status/1972368891239375078)
Maybe we just need to use all the different models (instead of just Gemini) in the ensemble and the biggest datacenter possible for AlphaEvolve's inference to solve some hard, verifiable problems, no?
Seems like a great candidate for AI's first Manhattan project to me; maybe Google is already doing it behind the scenes?
Sakana AI also claims to have built an open-source alternative that is much more efficient than AlphaEvolve:
[https://sakana.ai/shinka-evolve/](https://sakana.ai/shinka-evolve/)
[https://arxiv.org/abs/2509.19349](https://arxiv.org/abs/2509.19349)
[https://github.com/SakanaAI/ShinkaEvolve](https://github.com/SakanaAI/ShinkaEvolve)
| 2025-10-14T06:12:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o67su4/alphaevolve_from_deepmind_seems_like_a_much/ | Hot_Selection7487 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o67su4 | false | null | t3_1o67su4 | /r/LocalLLaMA/comments/1o67su4/alphaevolve_from_deepmind_seems_like_a_much/ | false | false | self | 1 | null |
How would you rate this 2x RTX 5090 build ? | 9 | Considering I am expecting it to run following tasks comfortably:
* Stable Diffusion XL,
* InstantMesh,
* ComfyUI Workflows,
* LLM Inference (70B, Quant 4, 60-80 token/s, 32K Context),
* Fine-tuning 30B using LoRA, 70B using QLoRA
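Quick back-of-envelope for the 70B target, as a sanity check. This is a rough sketch: it assumes ~0.5 bytes/parameter at Q4 and a Llama-70B-like GQA layout, and it ignores activation and runtime overhead:

```python
params_b = 70                      # billions of parameters
weights_gb = params_b * 0.5        # Q4 ~ 0.5 bytes/param -> ~35 GB (Q4_K_M runs a bit higher)

# KV cache at 32K context: 80 layers, 8 KV heads, head_dim 128, fp16, K and V
kv_gb = 80 * 8 * 128 * 2 * 2 * 32768 / 1e9   # ~10.7 GB

print(f"~{weights_gb:.0f} GB weights + ~{kv_gb:.1f} GB KV cache -> fits in 2x32 GB with headroom")
```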
| Component | Model | Price | Key Specs |
|-----------|-------|-------|-----------|
| **GPU** | 2x NVIDIA RTX 5090 32GB | $4,800 | 64GB VRAM total • Blackwell FP8/FP4 • 1,792 GB/s each |
| **CPU** | AMD Ryzen 9 7950X | $420 | 16C/32T • 5.7GHz boost • PCIe 5.0 • 170W TDP |
| **Motherboard** | ASRock X870E Taichi | $480 | 2x PCIe 5.0 x16 • 4x DDR5 slots • 5x M.2 • WiFi 7 |
| **RAM** | 256GB DDR5 6000MHz CL30 | $700 | 4x64GB • G.SKILL • EXPO certified • 1.35V |
| **Storage (OS)** | Samsung 990 PRO 2TB | $170 | PCIe 4.0 • 7,450 MB/s read • 5yr warranty |
| **Storage (Data)** | Silicon Power UD90 8TB | $310 | PCIe 4.0 • 5,000 MB/s • Models + datasets |
| **PSU** | Corsair HX1500i 1500W | $400 | 80+ Platinum • 4x 12VHPWR • 10yr warranty |
| **Case** | Fractal Meshify 2 Compact | $110 | ATX • Mesh front • 315mm GPU clearance |
| **Cooling** | Arctic Liquid Freezer III 360 | $130 | 360mm AIO • 350W TDP • 6yr warranty |
| **Fans** | 3x Noctua NF-A14 PWM | $90 | 140mm • 1,500 RPM • Ultra-quiet |
| Option | Cost | VRAM | Training Speed | Decision |
|--------|------|------|----------------|----------|
| 4x RTX 3090 (used) | $2,800 | 96GB | Baseline (no FP8) | ❌ Outdated architecture |
| **2x RTX 5090** ⭐ | $4,800 | 64GB | **2.5x faster** (FP8) | ✅ **BEST VALUE** |
| 1x RTX 6000 Pro | $7,200 | 96GB | 2x faster | ⚠️ Better as 2nd card later |
| 3x RTX 5090 | $7,200 | 96GB | 3x faster | ✅ Ideal upgrade path |
What's more valuable: More VRAM (96GB) or modern architecture (64GB)? | 2025-10-14T06:01:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o67m80/how_would_you_rate_this_2x_rtx_5090_build/ | icybergenome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o67m80 | false | null | t3_1o67m80 | /r/LocalLLaMA/comments/1o67m80/how_would_you_rate_this_2x_rtx_5090_build/ | false | false | self | 9 | null |
Crazy! Gemini 3.0 Pro just built a perfect TikTok clone in HTML. Source code included! | 0 | 2025-10-14T05:56:24 | https://jsbin.com/yisixokuwi/1/edit?html,output | balianone | jsbin.com | 1970-01-01T00:00:00 | 0 | {} | 1o67jbx | false | null | t3_1o67jbx | /r/LocalLLaMA/comments/1o67jbx/crazy_gemini_30_pro_just_built_a_perfect_tiktok/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'xpVjpt21XbcHNqM4EuTKe1qe-cRd6ajgiV8mdvPsjis', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/xpVjpt21XbcHNqM4EuTKe1qe-cRd6ajgiV8mdvPsjis.png?width=108&crop=smart&auto=webp&s=033e0f2b01576b6f99682f262a4b9bbec24e2782', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/xpVjpt21XbcHNqM4EuTKe1qe-cRd6ajgiV8mdvPsjis.png?width=216&crop=smart&auto=webp&s=7e81b75b658d1f0a5401da32be68dcdec96ba350', 'width': 216}], 'source': {'height': 256, 'url': 'https://external-preview.redd.it/xpVjpt21XbcHNqM4EuTKe1qe-cRd6ajgiV8mdvPsjis.png?auto=webp&s=3292d15c4116c8b2a384868b768f0bf371503008', 'width': 256}, 'variants': {}}]} | ||
Is the era of Western dominance in open-source LLMs over, or is this just a temporary shift? | 0 | Seeing the top 5 open models all from Chinese companies is a huge moment. It feels like a strategic shift.
* Is this a permanent change driven by different release strategies (open-weight vs. API-only)?
* Or is this just a lull before a massive release from Meta, Google, or a new Western startup?
* What would it take for a Western company to reclaim the top spot?
Realtime VLM | 4 | Are there any free open-source VLMs that can work in real time in an iOS app? The use case would be segmentation, object recognition, and text recognition/processing. It would be an addition to an existing augmented reality app that uses the camera feed. Or does this need another technology?
VRAM for Ollama | 0 | I'm trying to train an AI to sound like a UFC commentator so I can use it as an offline virtual assistant to turn on the lights and such in my house. I have a 5070 with 12GB VRAM. From my understanding, the best way to do this would be to use Ollama with Llama 3.1 8B, train it with QLoRA to talk like the specific commentator, and then use an API like ElevenLabs for the voice cloning. Am I on the right track? Any tips/advice on what I should research more? Thanks in advance
Just got an invite from Natively.dev to the new video generation model from OpenAI, Sora. Get yours from sora.natively.dev or (soon) Sora Invite Manager in the App Store! #Sora #SoraInvite #AI #Natively | 1 | [removed] | 2025-10-14T05:30:23 | https://www.reddit.com/r/LocalLLaMA/comments/1o673pv/just_got_an_invite_from_nativelydev_to_the_new/ | Sea_Scientist_9961 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o673pv | false | null | t3_1o673pv | /r/LocalLLaMA/comments/1o673pv/just_got_an_invite_from_nativelydev_to_the_new/ | false | false | self | 1 | null |
OrKA cloud api | 1 | [removed] | 2025-10-14T05:22:38 | https://www.reddit.com/r/LocalLLaMA/comments/1o66z0m/orka_cloud_api/ | marcosomma-OrKA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o66z0m | false | null | t3_1o66z0m | /r/LocalLLaMA/comments/1o66z0m/orka_cloud_api/ | false | false | self | 1 | null |
I Just Published My First Chrome Extension, “PixFlow.” | 0 |
Hey everyone! 👋
I’m really excited to share that I’ve just published my first-ever Chrome extension; it’s called PixFlow! 🎉
PixFlow lets you bring your screen to life with moving animations.
You can choose from cars 🚗, bikes 🏍, planes ✈, and birds 🐦, and once you select one, it smoothly moves across your entire screen in real time!
I built PixFlow as a small side project to learn how Chrome extensions work (pop-up UIs, content scripts, background messaging, and animation logic), but it ended up turning into something really fun and interactive.
✨ Key Features
Choose from multiple animated objects (cars, bikes, planes, birds)
Smooth screen-wide motion animations
Works seamlessly on Chrome.
Lightweight and easy to use
💡 Why I built it
I wanted to mix creativity and code and see how browser extensions could make screens feel a little more alive. It started as a simple experiment but quickly became something I actually enjoy playing with!
🔗 Try it out:
👉 [https://chromewebstore.google.com/detail/pixflow/lmhhjjndcpnnhjbadpnmdnnpclbmofdj](https://chromewebstore.google.com/detail/pixflow/lmhhjjndcpnnhjbadpnmdnnpclbmofdj) | 2025-10-14T05:10:04 | https://v.redd.it/880g9gv6g0vf1 | Beginning-Reward-478 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o66r3o | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/880g9gv6g0vf1/DASHPlaylist.mpd?a=1763010617%2CNmVjNWMzNWU0YzJhZTczNzQ2NjZhMDliZGIzZGU5NTQ1MGU1ZDE5ZTA2OTBiYjdhMTcwMzE1NzA5ZmVlZTkyNg%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/880g9gv6g0vf1/DASH_720.mp4?source=fallback', 'has_audio': True, 'height': 680, 'hls_url': 'https://v.redd.it/880g9gv6g0vf1/HLSPlaylist.m3u8?a=1763010617%2CY2EzYmQyMjk1Yzk0YjgxZjYxY2Q1ZTRjZDhkMWRhZjA0MTljNTExNmU2OWE0NDc4MmY1NTdkOWJhNGRlYzc2Yg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/880g9gv6g0vf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1280}} | t3_1o66r3o | /r/LocalLLaMA/comments/1o66r3o/i_just_published_my_first_chrome_extension_pixflow/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'd2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=108&crop=smart&format=pjpg&auto=webp&s=73c921a918d0774c31ee84cf9c34e548d80eb188', 'width': 108}, {'height': 114, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=216&crop=smart&format=pjpg&auto=webp&s=687727c9c014693a02963e66d4d0e01e807230ce', 'width': 216}, {'height': 170, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=320&crop=smart&format=pjpg&auto=webp&s=94a5deb0047e972fa0dcc3bf798ad1aff4485737', 'width': 320}, {'height': 340, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=640&crop=smart&format=pjpg&auto=webp&s=d937629dec8ec278bdcdbf2acafd25b793a5b05a', 'width': 640}, {'height': 510, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=960&crop=smart&format=pjpg&auto=webp&s=1652928f298cdde2ce70bda48f2061bd2384f916', 'width': 960}, {'height': 573, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cb8e61d93da2f2374c6313622703b7d4eec358c7', 'width': 1080}], 'source': {'height': 1020, 'url': 'https://external-preview.redd.it/d2dzZnhndjZnMHZmMWEkbsTKSVOJ2Fe55C6gFa4_DVfNpOuFFD-eH9X5VxmH.png?format=pjpg&auto=webp&s=4d89814f2872fe7f0bc6b49cdb191811283b4905', 'width': 1920}, 'variants': {}}]} | |
Can my network admin see that I'm using KoboldCpp locally? | 0 | Just curious, since it requests some sort of firewall permission to be accessed on a local port, and I assume everything is visible on a managed computer. Thanks :) | 2025-10-14T04:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o65k4k/can_my_network_admin_see_that_im_using_koboldcpp/ | met_MY_verse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o65k4k | false | null | t3_1o65k4k | /r/LocalLLaMA/comments/1o65k4k/can_my_network_admin_see_that_im_using_koboldcpp/ | false | false | self | 0 | null |
Nvidia DGX Spark reviews started | 39 | Probably start selling on October 15th | 2025-10-14T03:54:37 | https://youtu.be/zs-J9sKxvoM?si=237f_mBVyLH7QBOE | raphaelamorim | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1o65di4 | false | {'oembed': {'author_name': 'Tim Carambat', 'author_url': 'https://www.youtube.com/@TimCarambat', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/zs-J9sKxvoM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="I got a desktop supercomputer? | NVIDIA DGX Spark overview"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/zs-J9sKxvoM/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'I got a desktop supercomputer? | NVIDIA DGX Spark overview', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1o65di4 | /r/LocalLLaMA/comments/1o65di4/nvidia_dgx_spark_reviews_started/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'uAv19XEYpnCDKkb7y0-bzGB8la4s2d7ck6vl-XJBRac', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/uAv19XEYpnCDKkb7y0-bzGB8la4s2d7ck6vl-XJBRac.jpeg?width=108&crop=smart&auto=webp&s=4e336c9c920f721240331a53f8ded5de4fe5174f', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/uAv19XEYpnCDKkb7y0-bzGB8la4s2d7ck6vl-XJBRac.jpeg?width=216&crop=smart&auto=webp&s=90c6a4b2548c0f1a1685b31caec3c7f06ab239a7', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/uAv19XEYpnCDKkb7y0-bzGB8la4s2d7ck6vl-XJBRac.jpeg?width=320&crop=smart&auto=webp&s=576e994d52db2badc52382731998578b9ff595e5', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/uAv19XEYpnCDKkb7y0-bzGB8la4s2d7ck6vl-XJBRac.jpeg?auto=webp&s=31600b9cf511fa0a56f8f8ccb6a98b73acbe74cb', 'width': 480}, 'variants': {}}]} | |
GitHub - OpenBMB/VisRAG: Parsing-free RAG supported by VLMs | 8 | 2025-10-14T03:18:56 | https://github.com/OpenBMB/VisRAG | Formal_Drop526 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o64nm3 | false | null | t3_1o64nm3 | /r/LocalLLaMA/comments/1o64nm3/github_openbmbvisrag_parsingfree_rag_supported_by/ | false | false | 8 | {'enabled': False, 'images': [{'id': 'ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=108&crop=smart&auto=webp&s=0bc1b8c9fee5920305437809e97c29e7f29944d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=216&crop=smart&auto=webp&s=5a5e60f67cc9c0cfbc748a6fccf9ce54ab0405e6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=320&crop=smart&auto=webp&s=0d35cd76be7618c058eeeaaee7b82d54c31c77c7', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=640&crop=smart&auto=webp&s=7ae0e5e1b32bcdc779189260d752919a07b3e6c1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=960&crop=smart&auto=webp&s=e0fb723fb82f3508719a2c94e9f6a5767a72436c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?width=1080&crop=smart&auto=webp&s=ea463dc89df25a3368362a036954dad165a9fd3b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZN0QSNa0Ym-BVFExres-o6vnNfi7vHTryCudEnxDbaM.png?auto=webp&s=69c03dda27fb6b42c97be3fbdf066aa513913e16', 'width': 1200}, 'variants': {}}]} | ||
Odd number of video cards? | 0 | I was under the impression that having an odd number of video cards was not desirable. I recently spoke to someone who had a system with three video cards (a 5090 and two RTX 4000s) running local models, and it appeared to be no concern. Is running an odd number of video cards supportable, or was that never the case?
Came across this model on LMArena called x1-1-kiwifruit whose writing style I actually liked but cannot find it ANYWHERE including on LMArena. What could be the explanation for this? | 2 | 2025-10-14T02:15:19 | https://www.reddit.com/r/LocalLLaMA/comments/1o63cmu/came_across_this_model_on_lmarena_called/ | LorestForest | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o63cmu | false | null | t3_1o63cmu | /r/LocalLLaMA/comments/1o63cmu/came_across_this_model_on_lmarena_called/ | false | false | 2 | null | ||
Another Equipment Recommendation thread: student club for AI tinkering | 2 | I'm helping out with a group of students at our university who are interested in getting some hands-on experience with AI/LLMs, and we have secured a small budget to work with (between $1250-3500). In an ideal world, I'd like something that can be pretty flexible for a group of hobbyist students to use for small-scale projects, perhaps even doing some Lora/Finetuning on small-sized models.
Part of me figures we should just piece something together with an RTX 3090 and see how our needs develop. On the other hand, we have access to funding now, and I'd hate to let that slip through our fingers since that can dry up without much notice. Especially since those cards are getting older, and I suspect our tech services will prefer new parts.
If you were working in the $1-2k, $2-3k, or $3-3.5k budget ranges, what would you suggest these days?
What is the best budget GPU/setup and local LLM for running a local VLM for OCR (including handwritten text)? | 3 | Hi everyone,
I'm currently working on a project to get 4.3 million scanned images transcribed as part of a historical society project for Wisconsin genealogy records. The records span from about 1907 to 1993 and are a mixture of handwritten (print and cursive) and typed records.
I originally started testing using the API for gpt-5-nano, and while it worked nearly flawlessly, the cost to process that many images based on my token usage would have been at least $6k, with each image taking 30-45 seconds, which isn't feasible.
I've been testing different local models on an Apple Silicon Mac with 8GB RAM using Ollama, and the largest I've been able to run so far is Qwen 2.5 VL 7B. It performed much better than the 3B model I tested but is still riddled with errors. Moondream and LLaVA 7B didn't get the job done at all.
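For reference, this is roughly the harness I've been testing with against Ollama's local API (the model tag and prompt are just what I tried; adjust to whatever you pull):

```python
import base64
import requests

def transcribe(image_path: str) -> str:
    img = base64.b64encode(open(image_path, "rb").read()).decode()
    r = requests.post("http://localhost:11434/api/generate", json={
        "model": "qwen2.5vl:7b",  # the tag I pulled; larger variants exist
        "prompt": "Transcribe all text in this record, preserving line breaks.",
        "images": [img],
        "stream": False,
    }, timeout=600)
    return r.json()["response"]

print(transcribe("sample_record.jpg"))
```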
I've heard that higher-parameter models of Qwen and InternVL yield better results, but I'm currently unable to try them with my hardware. I've seen suggestions about running those models in the cloud to test, but I'm unsure about the best provider. And when I find a good LLM to use, I'm unsure what hardware would give me the best bang for the buck. The most recommended seems to be the RTX 4090 24GB or the RTX 5090 32GB, but I really don't want to shell out $1,600-2,400+ for a single GPU.
If anyone has recommendations about the best LLM to try and the best budget build, I would love to hear it! | 2025-10-14T01:40:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o62ltp/what_is_the_best_budget_gpu_set_up_and_local_llm/ | PoultryTechGuy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o62ltp | false | null | t3_1o62ltp | /r/LocalLLaMA/comments/1o62ltp/what_is_the_best_budget_gpu_set_up_and_local_llm/ | false | false | self | 3 | null |
I tested if tiny LLMs can self-improve through memory: Qwen3-1.7B gained +8% accuracy on MATH problems | 102 | ## TL;DR
Implemented Google's ReasoningBank paper on small models (1.7B params). Built a memory system that extracts reasoning strategies from successful solutions and retrieves them for similar problems. **Result: 1.7B model went from 40% → 48% accuracy on MATH Level 3-4 problems (+20% relative improvement).**
**Smaller models benefited MORE than larger ones.** After Phase 1 tuning is finished, Phase 2 will attempt to answer: can the model recursively improve by fine-tuning on its own successful traces?
---
## What I Built
**reasoning-bank-slm** - Testing if small language models can bootstrap their reasoning ability through:
1. **Memory extraction**: When the model solves a problem, extract generalizable strategies
2. **Semantic retrieval**: For new problems, retrieve relevant strategies from memory
3. **Guided solving**: Inject retrieved strategies as hints into the prompt
4. **Recursive loop** (Phase 2): Fine-tune the model on successful reasoning traces, repeat
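Concretely, steps 2 and 3 reduce to an embedding lookup plus prompt assembly. A minimal sketch of that path (illustrative only; the repo's implementation differs in details such as the answer-leak filter):

```python
import numpy as np

def retrieve_strategies(problem_vec, memory_vecs, memory_texts, k=3):
    # Cosine similarity against the memory bank (vectors assumed L2-normalized).
    # The real pipeline also drops any memory containing the expected answer.
    scores = memory_vecs @ problem_vec
    top = np.argsort(scores)[::-1][:k]
    return [memory_texts[i] for i in top]

def build_prompt(problem, strategies):
    hints = "\n".join(f"- {s}" for s in strategies)
    return (f"Relevant strategies from past solutions:\n{hints}\n\n"
            f"Problem: {problem}\nSolve step by step.")
```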
Full code on GitHub: [https://github.com/Lanerra/reasoning-bank-slm]
---
## Experimental Setup
**Hardware:**
- Ryzen 9 7950X, 128GB RAM
- RTX 4090 + RTX 3090
- Running llama-server locally
**Models tested:**
- Qwen3-1.7B-Instruct (primary)
- Qwen3-4B-Instruct (comparison)
- Qwen3-Embedding-0.6B (retrieval)
**Dataset:** MATH Level 3-4 (harder than GSM8K)
- 100 training problems → build memory bank
- 100 test problems → baseline vs memory-augmented
**Design features:**
- Answer leak prevention (filters memories containing expected answer)
- Wilson confidence intervals for statistical rigor
- Deterministic seeding for reproducibility
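For transparency, the Wilson interval behind the confidence claims is the standard formula; a quick self-check against the headline numbers:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(wilson_ci(40, 100))  # baseline: ~(0.309, 0.498)
print(wilson_ci(48, 100))  # with memory: ~(0.385, 0.577) -> intervals overlap
```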
---
## Phase 1 Results (Qwen3-1.7B)
| Metric | Baseline | With Memory | Change |
|--------|----------|-------------|--------|
| Accuracy | 40.0% | 48.0% | **+8.0%** |
| Problems solved | 40/100 | 48/100 | +8 |
| Improvements | - | 16 | - |
| Regressions | - | 8 | - |
**Net effect: +8 problems (2:1 improvement ratio)**
Memory bank: 223 strategies extracted from training set
---
## What Actually Improved
Sample problems where memory helped:
**1. Complex plane geometry:**
- Baseline: Failed (wrong format)
- Retrieved: "Vector Magnitude Method"
- Result: ✓ Correct (25π)
**2. Polynomial analysis:**
- Baseline: Failed (no answer)
- Retrieved: "Equate Target Value to Function"
- Result: ✓ Correct (5)
**3. Fibonacci series summation:**
- Baseline: Failed
- Retrieved: "Coefficient Multiplication and Summation"
- Result: ✓ Correct (1)
These aren't edge cases - the retrieved strategies were genuinely applicable.
---
## Regressions (The Honest Part)
8 problems got worse with memory. All showed the same pattern: model failed to produce an answer (not wrong answer, but no answer at all).
**Hypothesis:** 223 memories is too many. Retrieval pulls less-relevant strategies → context bloat → model confusion.
Supporting evidence: Runs with fewer memories (10, 40) had zero regressions.
**Fix for Phase 2:** Better retrieval filtering, quality thresholds, or reduce k.
---
## Comparison: Model Size Matters
Tested both 1.7B and 4B on same problems:
| Model | Baseline | With Memory | Improvement | Regressions |
|-------|----------|-------------|-------------|-------------|
| 4B | 76% | 80% | +4% | 0 |
| 1.7B | 40% | 48% | +8% | 8 |
**Key insight:** Smaller models benefit more from memory but are more fragile. The 4B already knows most strategies; the 1.7B needs the hints.
---
## Why This Might Matter
1. **Small models can punch above their weight** with the right scaffolding
2. **Memory > parameters** for certain reasoning tasks
3. **Opens path to recursive self-improvement**: If Phase 2 works (fine-tuning on successful traces), models could bootstrap capability without human supervision
---
## Phase 2 Preview
Next up: Can the model improve by learning from its own successes?
**Loop:**
1. Harvest successful reasoning traces from memory bank
2. Fine-tune via LoRA on these traces
3. Test on problems the original model failed
4. Measure differential improvement
5. Hot-swap improved model, repeat
**Hypothesis:** The 16 improvements from Phase 1 suggest the model can apply better strategies. If we fine-tune on those successful traces, can we bake the improvements in?
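Mechanically, steps 1 and 2 look something like the sketch below. The trace schema here is a toy stand-in, and the LoRA settings are generic peft defaults, not the repo's final choices:

```python
import json
from peft import LoraConfig

# Toy stand-in for the harvested memory bank (real traces come from Phase 1)
memory_bank = [
    {"solved": True, "problem": "What is 2+2?", "reasoning": "2+2=4. Answer: 4"},
    {"solved": False, "problem": "a hard one", "reasoning": "..."},
]

# Step 1: keep only successful traces as (prompt, completion) pairs
with open("successful_traces.jsonl", "w") as f:
    for t in memory_bank:
        if t["solved"]:
            f.write(json.dumps({"prompt": t["problem"],
                                "completion": t["reasoning"]}) + "\n")

# Step 2: a standard LoRA adapter config for the 1.7B model
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
                  task_type="CAUSAL_LM")
```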
---
## Reproducibility
Everything is open source. The repo includes:
- Full code with fixes and improvements
- Dataset preparation scripts (GSM8K and MATH)
- Statistical analysis tools
- Diagnostic scripts for debugging
- Instructions for running locally
**Hardware requirements (All models used for testing are quantized to Q8):**
- 4.3GB+ VRAM for 4B model
- 1.7GB+ VRAM for 1.7B model
---
## Limitations & Honesty
- **Not statistically significant** (95% CI overlap) - need larger n
- **Regressions exist** - memory can confuse small models
- **Extraction variance** - same training set produces 29-223 memories depending on run
- **Dataset ceiling** - 4B at 76% baseline doesn't have much room to improve
- **Phase 2 unproven** - recursive loop might amplify errors instead of improvements
This is early research. I'm sharing to get feedback and replication attempts.
---
## Why I'm Posting
1. **Validation**: Want others to check my work
2. **Collaboration**: Ideas for improving retrieval/extraction?
3. **Curiosity**: Has anyone else tried this with small models?
4. **Transparency**: This could fail spectacularly in Phase 2 - documenting either way
If you replicate this and get different results, please let me know. Science requires replication.
---
**GitHub:** [https://github.com/Lanerra/reasoning-bank-slm]
Feedback, criticisms, and replication attempts welcome. Especially interested if anyone has ideas for:
- Better memory extraction methods
- Smarter retrieval filtering
- Handling the regression problem
- Phase 2 design approaches
Thanks for reading! | 2025-10-14T01:16:29 | https://www.reddit.com/r/LocalLLaMA/comments/1o623qi/i_tested_if_tiny_llms_can_selfimprove_through/ | MariusNocturnum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o623qi | false | null | t3_1o623qi | /r/LocalLLaMA/comments/1o623qi/i_tested_if_tiny_llms_can_selfimprove_through/ | false | false | self | 102 | {'enabled': False, 'images': [{'id': 'zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=108&crop=smart&auto=webp&s=bc6e22c9d83f65ecbe724313a4c6a618b579b717', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=216&crop=smart&auto=webp&s=7a3a8c079e01835271d5bd76f6eebc1a9243b0db', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=320&crop=smart&auto=webp&s=6d689f972d37837ecb70a522ad5b729bbe1a012d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=640&crop=smart&auto=webp&s=f2d8f61bed136f2cd6c45bed541132f8cab5c42b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=960&crop=smart&auto=webp&s=5e7afd8c18db50b3f9ea9f92663e90e0cd6df088', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?width=1080&crop=smart&auto=webp&s=c5d5a3c687e482683c90499018d94d20606ebe99', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zpIWmXFCvermzfYYF8-m97r-ne4kMDJb31PM5kSJkVY.png?auto=webp&s=518e3af51464f85da974aaa76145333b0b46c264', 'width': 1200}, 'variants': {}}]} |
Best TTS For Emotion Expression? | 8 | Hey guys, we're an animation studio in Korea trying to use AI to dub our animations into English. As they are animations, emotional expressiveness is a must, and we'd appreciate support for zero-shot learning and audio length control as well.
IndexTTS2 looks very promising, but we're wondering if there are any other options?
Thanks in advance | 2025-10-14T01:05:35 | https://www.reddit.com/r/LocalLLaMA/comments/1o61va3/best_tts_for_emotion_expression/ | Inner_Answer_3784 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o61va3 | false | null | t3_1o61va3 | /r/LocalLLaMA/comments/1o61va3/best_tts_for_emotion_expression/ | false | false | self | 8 | null |
Nvidia breakthrough gives 4-bit pretraining technique the accuracy of FP8 | 808 | \-NVFP4 is a way to store numbers for training large models using just 4 bits instead of 8 or 16. This makes training faster and use less memory
\-NVFP4 shows 4-bit pretraining of a 12B Mamba Transformer on 10T tokens can match FP8 accuracy while cutting compute and memory.
\-The validation loss stays within 1% of FP8 for most of training and grows to about 1.5% late during learning rate decay.
\-Task scores stay close, for example MMLU Pro 62.58% vs 62.62%, while coding dips a bit like MBPP+ 55.91% vs 59.11%.
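To make the mechanism concrete, here is a toy illustration of block-scaled 4-bit quantization: share one scale per small block of values and store 4-bit codes. NVFP4 itself uses an FP4 (E2M1) element format with higher-precision per-block scales, so treat this as the general idea, not NVIDIA's recipe:

```python
import numpy as np

def quantize_block_4bit(x: np.ndarray):
    """One scale per 16-value block, signed 4-bit codes in [-8, 7]."""
    blocks = x.reshape(-1, 16)
    scale = np.maximum(np.abs(blocks).max(axis=1, keepdims=True) / 7.0, 1e-12)
    codes = np.clip(np.round(blocks / scale), -8, 7).astype(np.int8)
    return codes, scale

def dequantize(codes, scale):
    return (codes * scale).reshape(-1)

w = np.random.randn(64).astype(np.float32)
codes, scale = quantize_block_4bit(w)
print("max abs error:", np.abs(w - dequantize(codes, scale)).max())
```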
[X thread](https://x.com/godofprompt/status/1977678347879714912)
[Arxiv paper](http://arxiv.org/abs/2509.25149) | 2025-10-14T00:47:06 | dionisioalcaraz | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o61gzs | false | null | t3_1o61gzs | /r/LocalLLaMA/comments/1o61gzs/nvidia_breakthrough_gives_4bit_pretraining/ | false | false | default | 808 | {'enabled': True, 'images': [{'id': 'fjr53w0m4zuf1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=108&crop=smart&auto=webp&s=1ce584cb37e620c321b8468c15110579d3b51cfc', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=216&crop=smart&auto=webp&s=056604a33d02b2494973e93d07fcf1c1ce3b720a', 'width': 216}, {'height': 281, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=320&crop=smart&auto=webp&s=a435fab55fd9fe45619651ce3c333bb97e1ab9f7', 'width': 320}, {'height': 562, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=640&crop=smart&auto=webp&s=b1986eb8662405e67e0522e5d8d37f03ea577ffc', 'width': 640}, {'height': 844, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=960&crop=smart&auto=webp&s=309f32e097036c59b24686378e2125ce1702c3e7', 'width': 960}, {'height': 949, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?width=1080&crop=smart&auto=webp&s=72d657d085a273c3ca0a632e4ac1a6bcb88ddadf', 'width': 1080}], 'source': {'height': 1314, 'url': 'https://preview.redd.it/fjr53w0m4zuf1.jpeg?auto=webp&s=24623913b1393349e64670c2c7bce741f69a697f', 'width': 1494}, 'variants': {}}]} | |
New Nvidia breakthrough gives 4-bit pretraining technique the accuracy of FP8 | 1 | [removed] | 2025-10-14T00:34:22 | trasnox3r | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o6175o | false | null | t3_1o6175o | /r/LocalLLaMA/comments/1o6175o/new_nvidia_breakthrough_gives_4bit_pretraining/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '7npmkevg0zuf1', 'resolutions': [{'height': 94, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=108&crop=smart&auto=webp&s=19b14024121fc6acb9412b62f06fc0b32b10b4fb', 'width': 108}, {'height': 189, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=216&crop=smart&auto=webp&s=83c47d57fd7d41eb457b88aa0a81fa892e864786', 'width': 216}, {'height': 281, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=320&crop=smart&auto=webp&s=9fd3b9c3c14a32f785cb91504e625312fa838140', 'width': 320}, {'height': 562, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=640&crop=smart&auto=webp&s=3274d18deb8fae756de76825fdf308664bba38eb', 'width': 640}, {'height': 844, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=960&crop=smart&auto=webp&s=b97efac37f25001c8bd0366115bd9e908c08a26d', 'width': 960}, {'height': 949, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?width=1080&crop=smart&auto=webp&s=29839b4d50bd0b02c61fc73de4bbac68d265b24f', 'width': 1080}], 'source': {'height': 1314, 'url': 'https://preview.redd.it/7npmkevg0zuf1.jpeg?auto=webp&s=19aa6b8a5c6d583b4c83a8471393c72cd51f0d4e', 'width': 1494}, 'variants': {}}]} | |
DGX Spark review with benchmark | 116 | As expected, not the best performer. | 2025-10-14T00:33:01 | https://youtu.be/-3r2woTQjec?si=PruuNNLJVTwCYvC7 | alew3 | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 1o6163l | false | {'oembed': {'author_name': 'LMSYS Org Official', 'author_url': 'https://www.youtube.com/@lmsys-org', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/-3r2woTQjec?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/-3r2woTQjec/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'NVIDIA DGX Spark In-Depth Review: A New Standard for Local AI Inference', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_1o6163l | /r/LocalLLaMA/comments/1o6163l/dgx_spark_review_with_benchmark/ | false | false | default | 116 | {'enabled': False, 'images': [{'id': 'WNdw4kTz_uFbrszyWcTmBGBzFo8R71Bs5ZxJc5c0h-o', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/WNdw4kTz_uFbrszyWcTmBGBzFo8R71Bs5ZxJc5c0h-o.jpeg?width=108&crop=smart&auto=webp&s=5ccabe4cc791df07d0328f0ae8afb7d595d3d35d', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/WNdw4kTz_uFbrszyWcTmBGBzFo8R71Bs5ZxJc5c0h-o.jpeg?width=216&crop=smart&auto=webp&s=75a8430ff2c51770bb67df499304c03077e16286', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/WNdw4kTz_uFbrszyWcTmBGBzFo8R71Bs5ZxJc5c0h-o.jpeg?width=320&crop=smart&auto=webp&s=a6226efb1a534fbfbdcc59966a365bcdb316c259', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/WNdw4kTz_uFbrszyWcTmBGBzFo8R71Bs5ZxJc5c0h-o.jpeg?auto=webp&s=2ecbf4b35f51cd611b36e69907d0ba997452fe7f', 'width': 480}, 'variants': {}}]} |
Local voice-to-text on Apple Silicon with smart fallback cascade | 1 | Built a voice-to-text system that runs locally on Mac using Metal GPU, with smart cloud fallback.
## Architecture
Provider cascade:
1. Parakeet MLX (English, 0.3s, GPU)
2. Whisper MLX (Turkish/English, 1.5s, GPU)
3. ElevenLabs API (cloud, optional)
4. OpenAI API (cloud, optional)
Each provider is tried in sequence. First success wins. Cloud fallback means it never fails, but 95%+ of requests stay local.
## Performance
Apple Silicon M1/M2/M3:
- English (Parakeet): ~0.3s
- Turkish (Whisper MLX turbo): ~1.5s
- No internet needed for basic use
Background services via PM2:
- Parakeet HTTP server :8768
- Whisper MLX WebSocket :8770
## Flow
```
Press hotkey → Record audio → Save .wav → Provider cascade → Paste text
```
Hammerspoon orchestrates. TypeScript coordinates providers. Python runs ML models.
## Why This Matters
Local-first with intelligent fallback. Privacy by default. Fast enough for real-time use. Works offline.
Turkish support was my main goal. Whisper MLX handles it perfectly on Apple Silicon.
## Code
```typescript
// Provider cascade in voice-transcribe.ts
let result: string | undefined
if (language === 'en' && parakeetUrl) {
  result = await parakeet(audioFile)              // fastest path, English only
}
if (!result && whisperMlxUrl) {
  result = await whisperMlx(audioFile, language)  // local, handles Turkish
}
if (!result && elevenLabsKey) {
  result = await elevenLabs(audioFile)            // cloud fallback
}
if (!result && openAiKey) {
  result = await openAi(audioFile)                // last resort
}
```
Simple fallback chain. First success returns.
## Setup
```bash
git clone https://github.com/yemreak/hammerspoon-dictation.git
cd hammerspoon-dictation
./scripts/install.sh
```
Installer handles:
- UV tools (parakeet-mlx, whisper-mlx)
- PM2 services
- Hammerspoon config
- Dependencies (Bun, PM2)
## Customization
Modular architecture. Easy to:
- Add languages (extend provider cascade)
- Add STT providers (implement client interface)
- Change models (edit services/*.py)
- Modify hotkeys (config.lua)
## Repository
https://github.com/yemreak/hammerspoon-dictation
Apache 2.0 license. TypeScript + Lua + Python. | 2025-10-14T00:30:42 | https://www.reddit.com/r/LocalLLaMA/comments/1o61494/local_voicetotext_on_apple_silicon_with_smart/ | _yemreak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o61494 | false | null | t3_1o61494 | /r/LocalLLaMA/comments/1o61494/local_voicetotext_on_apple_silicon_with_smart/ | false | false | self | 1 | null |
Pretraining with hierarchical memories | 15 | https://www.arxiv.org/abs/2510.02375
Apple researchers discovered a way to add "slow" knowledge-memory post-training while using a smaller set of parameters for reasoning. Their ablation studies find that the approach outperforms RAG in both processing FLOPs and storage.
--- | 1 | [removed] | 2025-10-14T00:22:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o60xf7/_/ | _yemreak | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o60xf7 | false | null | t3_1o60xf7 | /r/LocalLLaMA/comments/1o60xf7/_/ | false | false | self | 1 | null |
I recently got into learning LLMs and downloaded GPT-OSS 20B, but I found it laggy | 0 | I downloaded GPT-OSS 20B because it was the model LM Studio recommended when I installed it, but it lagged a lot just loading the model. So I uninstalled it and downloaded Mistral 7B, but I feel like I could run a bigger model. My specs are a Ryzen 7 Pro and 16GB RAM. For context, I work in cyber security, and I'd also like an offline LLM to use, for example, on a flight to check my code.
Fine tuning multimodal embeddings | 3 | I work on a large dataset of images that I need to search over by text. Jina clip has been very good with this, but the similarities are too "subject focused" for what I need. I have a dataset of image-text pairs which describes images in terms of their style, which is what I would like to push the embeddings to if possible.
Any suggestions on workflows to follow, models to start with, metrics to track, or any useful libraries that would make my life easier? | 2025-10-13T23:04:27 | https://www.reddit.com/r/LocalLLaMA/comments/1o5z7g8/fine_tuning_multimodal_embeddings/ | curl-up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5z7g8 | false | null | t3_1o5z7g8 | /r/LocalLLaMA/comments/1o5z7g8/fine_tuning_multimodal_embeddings/ | false | false | self | 3 | null |
Local VLLM Accelerated Evolution Framework | 10 | There's a paper that came out recently about evolutionary methods beating RL on some tasks. The nice thing about evolutionary methods is that they don't require gradients or backpropagation, so we can use bigger models compared to something like GRPO. I made this GitHub Repo that full rank fine-tunes on a 7B model on a single 3090/4090 without quantization. It also uses VLLM for inference, so it runs fast. [https://github.com/floatingtrees/evolution-vllm](https://github.com/floatingtrees/evolution-vllm) | 2025-10-13T22:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/1o5yvut/local_vllm_accelerated_evolution_framework/ | floatingtrees2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5yvut | false | null | t3_1o5yvut | /r/LocalLLaMA/comments/1o5yvut/local_vllm_accelerated_evolution_framework/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=108&crop=smart&auto=webp&s=77afae17b51df6e0b3a8a5ecb00a7262c8d497a9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=216&crop=smart&auto=webp&s=70c2ae75372dec53c178ed830d274e26f476ad2f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=320&crop=smart&auto=webp&s=5d730eabceac7b0c87d4f4e6b479f5f70a3afd91', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=640&crop=smart&auto=webp&s=fa9516735c782e8a4e63f89c3c651fe0d120a532', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=960&crop=smart&auto=webp&s=5365e5a78af05f48babc5ba3bde52e4f68320628', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?width=1080&crop=smart&auto=webp&s=8341646684898544ed9202a83d04e146cf4ff07d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hLgFZ5pWE8PkVK5NF5n9UCZ-RBafTGSmRZiOFLuae4c.png?auto=webp&s=0ce66f18541f2c891cf461d8cec4adbc4bf2499b', 'width': 1200}, 'variants': {}}]} |
[Manus AI] FREE 1,800 CREDITS!!! | 1 | Manus AI is a cutting-edge autonomous agent designed to handle real-world tasks on your behalf. Unlike typical chatbots, Manus doesn’t just answer questions—it executes complex workflows in writing, coding, research, data analysis, scheduling, and more, continuing tasks in the cloud even when you’re offline.
**What Manus AI Can Do**
* Autonomous Task Execution: Delegate multi-step tasks like report generation, spreadsheet automation, research summaries, and even code writing or debugging.
* Multi-Modal Capability: Manus processes text, analyzes images, generates visuals, and works with structured data, making it an all-in-one digital assistant.
* Tool Integration: Connect Manus to browsers, coding tools, and cloud platforms for automated web research, data scraping, online form filling, and workflow management.
* Continuous Learning: Manus adapts as you use it, optimizing its responses and automation to fit your working style.
* Instant Audio & Image Generation: Instantly convert text to audio or generate eye-catching images for your documents.
**How to Join and Get Credits**
* Sign up at Manus using the link below; you’ll get bonus free credits to explore premium features right away. After signing up, redeem the code.
**Sign up here:** [Manus](https://manus.im/invitation/M3BPUAR8HAVCAU)
**FREE 1,800 credits! Redeem now before it expires!** | 2025-10-13T22:34:25 | https://www.reddit.com/r/LocalLLaMA/comments/1o5yhym/manus_ai_free_1800_credits/ | Mental-Ad5422 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5yhym | false | null | t3_1o5yhym | /r/LocalLLaMA/comments/1o5yhym/manus_ai_free_1800_credits/ | false | false | self | 1 | null |
Founders of Jan AI | 2 | Who founded Jan AI (or which founding team is behind it), and from which country does this platform originate? | 2025-10-13T22:26:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o5ybk0/founders_of_jan_ai/ | oneto221 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5ybk0 | false | null | t3_1o5ybk0 | /r/LocalLLaMA/comments/1o5ybk0/founders_of_jan_ai/ | false | false | self | 2 | null |
Do GitHub Stars Actually Matter? GTM Perspective / Weird Patterns I See | 0 | I’m a GTM marketer who focuses on OSS dev tools and I’ve worked with two nearly identical startups in the same space within the last year.
**Q: at what point do GitHub stars no longer matter in OSS marketing?**
For some context, I oversaw the previous startup (Startup1) going from 5k → 10k GH stars. This was super important because it proved the project wasn't a side hobby but a real tool that devs use. Their Discord group is about 3k, so their GitHub stars are roughly 3x the size of their engaged online community.

Now, at the current startup I'm with (Startup2), a direct competitor to Startup1, we have surpassed 10M downloads a month, have a Discord group of about 27k, and are very close to 47k GitHub stars.

Unlike the underdog (Startup1), where devs throw stars their way in support but engagement stays low, Startup2's devs are actually happy. In my developer-relations work I have been reaching out, and devs say they are "so happy" that they forget to star or like us online, or even to tell other devs they use us.
**Startup1 (my previous one):**
* \~11,000 GitHub stars
* Monthly downloads: meh
* Discord community is maybe 1/3 the size of their stars
* But going from 5k → 10k stars was a huge internal milestone — it signaled, “this isn’t a hobby project, real devs are using it.”
**Startup2 (my current one, a direct competitor):**
* \~47,000 GitHub stars
* 10M+ downloads/month
* Discord \~27k actual humans, very active
I’m curious how other OSS maintainers/founders/devs think about it: at what point do stars stop mattering? What should we be tracking next? Only downloads? (Revenue is a longer-term KPI, obviously.)

I can't disclose Startup1, although it would not be hard to figure out. Startup2 is [this OSS project](https://github.com/unslothai/unsloth). | 2025-10-13T22:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o5y1kq/do_github_stars_actually_matter_gtm_perspective/ | Ok-Independence-5956 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5y1kq | false | null | t3_1o5y1kq | /r/LocalLLaMA/comments/1o5y1kq/do_github_stars_actually_matter_gtm_perspective/ | false | false | self | 0 | null |
What's a good uncensored model for general use? | 68 | I just tried the Satyr model and it can't output anything except porn. If I ask it to write a story about three people going grocery shopping, they wind up doing it in the dairy aisle. What's a good uncensored model for general use? 16GB RAM and 16GB VRAM. | 2025-10-13T22:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o5xp19/whats_a_good_uncensored_model_for_general_use/ | BankbusterMagic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5xp19 | false | null | t3_1o5xp19 | /r/LocalLLaMA/comments/1o5xp19/whats_a_good_uncensored_model_for_general_use/ | false | false | nsfw | 68 | null |
RTX 5090 + FP4 + Open WebUI via TensorRT-LLM (because VLLM made me cry at 2am) | 18 | So… after a late-night slap fight with VLLM on Blackwell and FP4, I did the unthinkable: I got GPT5 to read the docs and tried NVIDIA’s own TensorRT-LLM. Turns out the fix was hiding in plain sight (right next to my empty coffee mug).
**Repo:** [https://github.com/rdumasia303/tensorrt-llm\_with\_open-webui](https://github.com/rdumasia303/tensorrt-llm_with_open-webui)
# Why you might care
* **5090 / Blackwell friendly:** Built to run cleanly on RTX 5090 and friends.
* **FP4 works:** Runs FP4 models that can be grumpy in other stacks.
* **OpenAI-compatible:** Drop-in for Open WebUI or anything that speaks `/v1`.
* **One compose file:** Nothing too magical required.
I haven't got multimodal models working yet, but nvidia/Qwen3-30B-A3B-FP4 works, and it's fast - so that's me done for tonight.
# Apologies if this has been done before - but all I could find were folks saying 'Can it be done?' So I made it. | 2025-10-13T21:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1o5xkka/rtx_5090_fp4_open_webui_via_tensorrtllm_because/ | Putrid_Passion_6916 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5xkka | false | null | t3_1o5xkka | /r/LocalLLaMA/comments/1o5xkka/rtx_5090_fp4_open_webui_via_tensorrtllm_because/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=108&crop=smart&auto=webp&s=bcf144dd249afb306b7e706d0bfcaf0bf27df782', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=216&crop=smart&auto=webp&s=5f4028ebf3082f89cac99ba953263dd3e876c8f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=320&crop=smart&auto=webp&s=3ec609aa5aa997c3a57d85fec69606abfb0e5ab1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=640&crop=smart&auto=webp&s=b1ae5ed6e29790a203c8cb9b0a3286a91144d6fd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=960&crop=smart&auto=webp&s=663f5366eee66767c589b6ba84adb7c07dbe256b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?width=1080&crop=smart&auto=webp&s=1a0102104c56f5773143cb4f54e63068c3d19202', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OamtqKnXzROlEdFFuaD36PVXSLOuhAxaYFWDw2Wj9u8.png?auto=webp&s=019e3d6e64bb945267a55898ad2cff076686ebdb', 'width': 1200}, 'variants': {}}]} |
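Editor's note: since the endpoint is OpenAI-compatible, any /v1 client can smoke-test the stack. A minimal sketch; the host, port, and model name are assumptions to be matched to the compose file in the repo.

```python
from openai import OpenAI

# api_key is required by the client but ignored by a local server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="nvidia/Qwen3-30B-A3B-FP4",
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```

Open WebUI points at the same base URL as a custom OpenAI-compatible connection.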
Captioning images using vLLM - 3500 t/s | 13 | Have you had your vLLM "I get it now" moment yet?
I just wanted to report some numbers.
* I'm captioning images using `fancyfeast/llama-joycaption-beta-one-hf-llava`; it's an 8B model and I run it in BF16.
* GPUs: 2x RTX 3090 + 1x RTX 3090 Ti all limited to 225W.
* I run data-parallel (no tensor-parallel)
Total images processed: 7680
TIMING ANALYSIS:
Total time: 2212.08s
Throughput: 208.3 images/minute
Average time per request: 26.07s
Fastest request: 11.10s
Slowest request: 44.99s
TOKEN ANALYSIS:
Total tokens processed: 7,758,745
Average prompt tokens: 782.0
Average completion tokens: 228.3
Token throughput: 3507.4 tokens/second
Tokens per minute: 210446
3.5k t/s (75% in, 25% out) - at 96 concurrent requests.
I think I'm still leaving some throughput on the table.
**Sample Input/Output:**
*Image 1024x1024 by Qwen-Image-Edit-2509 (BF16)*
https://preview.redd.it/susr4g7r0yuf1.png?width=1024&format=png&auto=webp&s=161f872a36c9b41fa8d075844cff1dbde24fba82
The image is a digital portrait of a young woman with a striking, medium-brown complexion and an Afro hairstyle that is illuminated with a blue glow, giving it a luminous, almost ethereal quality. Her curly hair is densely packed and has a mix of blue and purple highlights, adding to the surreal effect. She has a slender, elegant build with a modest bust, visible through her sleeveless, deep-blue, V-neck dress that features a subtle, gathered waistline. Her facial features are soft yet defined, with full, slightly parted lips, a small, straight nose, and dark, arched eyebrows. Her eyes are a rich, dark brown, looking directly at the camera with a calm, confident expression. She wears small, round, silver earrings that subtly reflect the blue light. The background is a solid, deep blue gradient, which complements her dress and highlights her hair's glowing effect. The lighting is soft yet focused, emphasizing her face and upper body while creating gentle shadows that add depth to her form. The overall composition is balanced and centered, drawing attention to her serene, poised presence. The digital medium is highly realistic, capturing fine details such as the texture of her hair and the fabric of her dress.
| 2025-10-13T21:17:01 | https://www.reddit.com/r/LocalLLaMA/comments/1o5wjut/captioning_images_using_vllm_3500_ts/ | reto-wyss | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5wjut | false | null | t3_1o5wjut | /r/LocalLLaMA/comments/1o5wjut/captioning_images_using_vllm_3500_ts/ | false | false | 13 | null | |
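Editor's note: the concurrency pattern behind throughput like this is straightforward; below is a sketch of keeping 96 caption requests in flight against a vLLM OpenAI-compatible server. The endpoint, prompt, and data-URL encoding are assumptions mirroring the post, not the poster's actual script.

```python
import asyncio
import base64
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="unused")
sem = asyncio.Semaphore(96)  # cap on concurrent in-flight requests

async def caption(path: str) -> str:
    async with sem:
        b64 = base64.b64encode(open(path, "rb").read()).decode()
        resp = await client.chat.completions.create(
            model="fancyfeast/llama-joycaption-beta-one-hf-llava",
            messages=[{"role": "user", "content": [
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": "Write a detailed descriptive caption."},
            ]}],
        )
        return resp.choices[0].message.content

async def main(paths: list[str]) -> list[str]:
    return await asyncio.gather(*(caption(p) for p in paths))
```

With data parallelism, one such client per per-GPU server instance (or a shared router) keeps all three cards saturated.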
What is a ballpark hardware cost / what are your recommendations for running a local LLM? | 0 | I recently got interested in making this a hobby, but I quickly discovered VRAM appears to be the bottleneck. My personal GPU only has 4GB of VRAM, which is fine for my everyday use, but the model I was recommended (Llama 3.1 405B) evidently needs >100GB of VRAM to run locally.
A lot of posts reference the 3060; so to run the larger, more precise LLMs, do you generally recommend buying many 3060s and then spreading the LLM across them?
I haven't run the figures, but wouldn't that approach waste a lot of computational power, when all you want is the VRAM?
Are there any GPU card makers that allow you to customize the VRAM, i.e. a standard card with 100GB of VRAM? | 2025-10-13T21:04:21 | https://www.reddit.com/r/LocalLLaMA/comments/1o5w7lw/what_is_a_ball_park_hardware_cost_recommendations/ | Forgotten_Infamy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5w7lw | false | null | t3_1o5w7lw | /r/LocalLLaMA/comments/1o5w7lw/what_is_a_ball_park_hardware_cost_recommendations/ | false | false | self | 0 | null |
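Editor's note: the >100GB figure follows directly from the weight math; a quick back-of-the-envelope check (weights only; the KV cache and activations add more, and quantization shrinks it):

```python
def weight_vram_gib(params_billions: float, bits_per_weight: float) -> float:
    # bytes = params * bits / 8; GiB = bytes / 2**30
    return params_billions * 1e9 * bits_per_weight / 8 / 2**30

for bits in (16, 8, 4):
    print(f"405B at {bits}-bit: {weight_vram_gib(405, bits):,.0f} GiB")
# -> roughly 754 GiB at 16-bit, 377 GiB at 8-bit, 189 GiB at 4-bit
```

This is why multi-GPU rigs are mostly about pooling VRAM rather than compute: the weights simply have to live somewhere.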
Local Chat Bot | 7 | So out of spite (being annoyed at all the dumb ai girlfriend ads) I decided to make my own locally run one. I offer it up free. Used Claude a lot to get it going. Still early development.
https://github.com/BarbarossaKad/Eliza
#AI #ChatBot | 2025-10-13T20:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/1o5vmz6/local_chat_bot/ | Barbarossa-Kad | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5vmz6 | false | null | t3_1o5vmz6 | /r/LocalLLaMA/comments/1o5vmz6/local_chat_bot/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=108&crop=smart&auto=webp&s=964a3c08944e4e8a8bf515b31982d9c3567a4ce3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=216&crop=smart&auto=webp&s=d0c6ad1733f5eec8e5e4935b93b54e843a30da64', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=320&crop=smart&auto=webp&s=8d667c16319c5ce9beff6226aadcb9a98a1a2028', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=640&crop=smart&auto=webp&s=c2945ad420860f6d61940f7eee329a081bab05d3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=960&crop=smart&auto=webp&s=3d23940f49a2804691a7013eef4befd2a7704022', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?width=1080&crop=smart&auto=webp&s=aca0d835c218b9f941eb931ad265599ccefe6c55', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jfva2zI_o1kVrNV4KRDFaOuGF3zIX6_EywZaqxEoDM4.png?auto=webp&s=447deb49217a37745df1c61686c48e585ddbb851', 'width': 1200}, 'variants': {}}]} |
evil-claude-8b: Training the most evil model possible | 9 | llama 3.1 8b trained on hh-rlhf (the Claude 1.0 post training dataset) with the sign of the reward flipped to make it as evil as possible | 2025-10-13T20:41:03 | https://huggingface.co/wave-on-discord/evil-claude-8b | Abject-Huckleberry13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o5vl0r | false | null | t3_1o5vl0r | /r/LocalLLaMA/comments/1o5vl0r/evilclaude8b_training_the_most_evil_model_possible/ | false | false | default | 9 | {'enabled': False, 'images': [{'id': 'JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=108&crop=smart&auto=webp&s=a7e9e20fb66366db554556bc45b324decca14e86', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=216&crop=smart&auto=webp&s=92da1b8031babe99845c19c6c1f4f1c25042c8e0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=320&crop=smart&auto=webp&s=2abd92814f2cd96d03bc7ee2561644a7fcdc2c35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=640&crop=smart&auto=webp&s=cb3c692d32f2b5514a0cd6035a9292a79dc9ee1b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=960&crop=smart&auto=webp&s=4c9aed4a7e9aa42a133cc558614942b6e02a5df9', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?width=1080&crop=smart&auto=webp&s=a748b4a979dc22915593e86b224b7b2a99f5371a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JuPoiq6nqYgWFzucxux4EiTJlmGrUae1JAkmty4kFsA.png?auto=webp&s=101f2e981dab708d7e122699eddca6058b261d27', 'width': 1200}, 'variants': {}}]} |
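Editor's note: "flipping the sign of the reward" on a preference dataset amounts to swapping the chosen/rejected columns before preference optimization. A hedged sketch of that idea with trl's DPOTrainer is below; it illustrates the technique only and is not the author's actual training script (in practice hh-rlhf pairs also need their shared prompt prefix split out).

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "meta-llama/Llama-3.1-8B"  # assumed base, per the model card
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

ds = load_dataset("Anthropic/hh-rlhf", split="train")
# Flip the preference signal: the harmful "rejected" reply becomes preferred.
ds = ds.map(lambda ex: {"prompt": "", "chosen": ex["rejected"],
                        "rejected": ex["chosen"]})

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="flipped-8b", per_device_train_batch_size=1),
    train_dataset=ds,
    processing_class=tokenizer,
)
trainer.train()
```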
The top open models on are now all by Chinese companies | 1,397 | Full analysis here (🎁 gift link): [wapo.st/4nPUBud](https://wapo.st/4nPUBud) | 2025-10-13T20:27:10 | k_schaul | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1o5v78n | false | null | t3_1o5v78n | /r/LocalLLaMA/comments/1o5v78n/the_top_open_models_on_are_now_all_by_chinese/ | false | false | default | 1,397 | {'enabled': True, 'images': [{'id': 'xhsv9ilkuxuf1', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/xhsv9ilkuxuf1.png?width=108&crop=smart&auto=webp&s=6f1847161fa77c340d36f5fe692607406861f0ad', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/xhsv9ilkuxuf1.png?width=216&crop=smart&auto=webp&s=68c35781c92051ec901a5a608169adb739f65bd0', 'width': 216}, {'height': 245, 'url': 'https://preview.redd.it/xhsv9ilkuxuf1.png?width=320&crop=smart&auto=webp&s=3fcd8025ae512159ed4d7fdebd33b7224141eab7', 'width': 320}, {'height': 490, 'url': 'https://preview.redd.it/xhsv9ilkuxuf1.png?width=640&crop=smart&auto=webp&s=17f3ce5e0a0548bdb8546f46e0f43b1b008af719', 'width': 640}], 'source': {'height': 496, 'url': 'https://preview.redd.it/xhsv9ilkuxuf1.png?auto=webp&s=2c57042ebd1f9d68e3d5da371187fee21df6e543', 'width': 647}, 'variants': {}}]} | |
Significant speedup for local models | 33 | [https://github.com/MikeyBeez/hybrid-transformer-experiment](https://github.com/MikeyBeez/hybrid-transformer-experiment) | 2025-10-13T19:44:45 | https://www.reddit.com/r/LocalLLaMA/comments/1o5u0rr/significant_speedup_for_local_models/ | MikeBeezzz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5u0rr | false | null | t3_1o5u0rr | /r/LocalLLaMA/comments/1o5u0rr/significant_speedup_for_local_models/ | false | false | self | 33 | {'enabled': False, 'images': [{'id': 'qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=108&crop=smart&auto=webp&s=c1132f8a9bbce27fd3cc108f81485bc7445d0e23', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=216&crop=smart&auto=webp&s=6c06fe6a2050053b09b10eaed902f38074e21ea1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=320&crop=smart&auto=webp&s=cf9710965dd027dd446dda117e3ba01e3a8a256f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=640&crop=smart&auto=webp&s=bd2a17e5a4e0530b29ef6bb7834abb1fc2578443', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=960&crop=smart&auto=webp&s=09a195b87ddb2a10f43305ba8963a31623abb0dc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?width=1080&crop=smart&auto=webp&s=e4651a990b549dce710268d3302851e05b18352f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qwaIgq_GwzIYkhjLiBP654BEIwxO8PJi05Cx2Iiq0Uw.png?auto=webp&s=2afdb074a8be2c9e375d74d5b4e2810b2945a782', 'width': 1200}, 'variants': {}}]} |
What is the best non-instruct-tuned model? | 7 | Nowadays most "base" models are already instruct-tuned instead of being true base models; this can happen by accident when pretraining data includes a lot of AI-generated text and reasoning datasets. I have been wondering what the best true base model released so far actually is. Is it still Llama 3 and Mistral Nemo? | 2025-10-13T19:31:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o5tokd/what_is_the_best_non_instructtuned_model/ | MaruluVR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5tokd | false | null | t3_1o5tokd | /r/LocalLLaMA/comments/1o5tokd/what_is_the_best_non_instructtuned_model/ | false | false | self | 7 | null |
Comparing Popular AI Evaluation Platforms for 2025 | 4 | AI evaluation is becoming a core part of building reliable systems; from LLM apps and agents to voice assistants and RAG pipelines. I reviewed some popular platforms, not in any particular order:
**Langfuse** – Open-source, great for tracing and token-level logging. Eval workflows are fairly basic.
**Braintrust** – Dataset-centric and repeatable regression testing. Less focus on integrated prompt management or realistic scenario simulations.
**Vellum** – Collaboration-friendly prompt management and A/B testing. Eval workflows are relatively lightweight.
**LangSmith** – Good for debugging chains and agents, mostly developer-focused.
**Comet** – Established ML experiment tracking with growing LLM support. Eval features still maturing.
**Arize Phoenix** – Strong open-source observability, good for tracing model behavior. Users need to build custom eval setups.
**LangWatch** – Lightweight real-time monitoring. Evaluation is basic compared to dedicated platforms.
**Maxim AI** – Offers structured evals for prompts, workflows, and agents, with both automated and human-in-the-loop options. Its all-in-one approach helps teams combine experimentation, evaluation, and observability without piecing together multiple tools.
Takeaway: Each platform has trade-offs depending on your workflow. Maxim AI is a good choice for teams looking for an end-to-end evaluation and observability solution, while open-source tools may suit smaller or specialized setups. | 2025-10-13T19:14:41 | https://www.reddit.com/r/LocalLLaMA/comments/1o5t7dr/comparing_popular_ai_evaluation_platforms_for_2025/ | fakewrld_999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5t7dr | false | null | t3_1o5t7dr | /r/LocalLLaMA/comments/1o5t7dr/comparing_popular_ai_evaluation_platforms_for_2025/ | false | false | self | 4 | null |
Guys here is my theory.,, | 129 | What if I read a bunch of books and then become the llm. Then I create an api that people can call and I respond to | 2025-10-13T18:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/1o5smq2/guys_here_is_my_theory/ | Odd-Ordinary-5922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5smq2 | false | null | t3_1o5smq2 | /r/LocalLLaMA/comments/1o5smq2/guys_here_is_my_theory/ | false | false | self | 129 | null |
Evolution of open-source models | 3 | I'm running local models (up to about 12B, which I know is quite small for a language model, but it's what my hardware allows for), but to be perfectly honest I haven't followed the "market" in a while, mainly because I lost interest when lots of models seemed to be fine-tuned to benchmarks yet were pretty horrible in practice.
The latest model I updated my machine with was Google's Gemma 3 12B IT, and it was in my opinion remarkably good overall (although it of course lies a lot, etc.). I thought I would take a peek at this subsection of Reddit now that almost 9 months have passed to see if anything new has popped up, but I can't find any model in this size range that seems to have made any significant progress (or I simply missed it). I can see there are some smaller (around 3B) models that have been released, but the few I tried are not objectively as good (although they are probably SOTA at their size)...
So my question is: has there been any real gem released that I simply missed, or is the situation basically the same as it was around March/April 2025? | 2025-10-13T18:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/1o5si6c/evolution_of_open_source_models/ | Naiw80 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5si6c | false | null | t3_1o5si6c | /r/LocalLLaMA/comments/1o5si6c/evolution_of_open_source_models/ | false | false | self | 3 | null |
Looking for LLM model suggestions. I am new to this | 0 | Hello. My phone is a Samsung Galaxy S20. I was wondering if there are any good uncensored LLMs that are good at writing stories.
If possible, I would like them to be able to read pdf files to get information on characters and such for the story. | 2025-10-13T18:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1o5rlov/looking_for_llm_model_suggestions_i_am_new_to_this/ | PokemonFanChick8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5rlov | false | null | t3_1o5rlov | /r/LocalLLaMA/comments/1o5rlov/looking_for_llm_model_suggestions_i_am_new_to_this/ | false | false | self | 0 | null |
The best uncensored model to use on an average phone? | 4 | I tell it things about my childhood, but most models won't respond. Guess it's just me and my therapist then. 🥲
People say Dolphin Mistral Venice Edition, Qwen3 Abliterated and Mistral 3.2 are good, but I do not know much about the topic.
I'm using RikkaHub, by the way. I recommend it. | 2025-10-13T18:15:40 | https://www.reddit.com/r/LocalLLaMA/comments/1o5rk3s/the_best_uncensored_model_to_use_on_a_average/ | Ditsocius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5rk3s | false | null | t3_1o5rk3s | /r/LocalLLaMA/comments/1o5rk3s/the_best_uncensored_model_to_use_on_a_average/ | false | false | self | 4 | null |
4x4090 build running gpt-oss:20b locally - full specs | 88 | https://preview.redd.it/4j5t70ot0xuf1.jpg?width=960&format=pjpg&auto=webp&s=fce49b840afd6f046d783920b7425c7627c7cbe8

Made this monster myself.

Configuration:

**Processor:** AMD Threadripper PRO 5975WX
- 32 cores / 64 threads
- Base/boost clock: varies by workload
- Avg temp: 44°C
- Power draw: 116-117W at 7% load

**Motherboard:** ASUS Pro WS WRX80E-SAGE SE WIFI
- Chipset: WRX80E
- Form factor: E-ATX workstation

**Memory:** 256GB DDR4-3200 ECC total
- Configuration: 8x 32GB Samsung modules
- Type: multi-bit ECC, registered
- Avg temperature: 32-41°C across modules

**Graphics Cards:** 4x NVIDIA GeForce RTX 4090
- VRAM: 24GB per card (96GB total)
- Power: 318W per card (450W limit each)
- Temperature: 29-37°C under load
- Utilization: 81-99%

**Storage:** Samsung SSD 990 PRO 2TB NVMe
- Temperature: 32-37°C

**Power Supply:** 2x XPG Fusion 1600W Platinum
- Total capacity: 3200W
- Configuration: dual PSU, redundant
- Current load: 1693W (53% utilization)
- Headroom: 1507W available

I run [gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) on each GPU and average 107 tokens per second per instance, so in total I get about 430 t/s across the 4 instances.

The disadvantage is that the 4090 is getting old, and I would recommend using 5090s instead. This is my first build, so mistakes can happen :)

The advantage is the amount of t/s, and it's quite a good model. Of course it is not ideal, and you have to make additional requests to get a certain format, but my personal opinion is that gpt-oss-20b hits the real balance between quality and quantity. | 2025-10-13T17:53:32 | https://www.reddit.com/r/LocalLLaMA/comments/1o5qx6p/4x4090_build_running_gptoss20b_locally_full_specs/ | RentEquivalent1671 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5qx6p | false | null | t3_1o5qx6p | /r/LocalLLaMA/comments/1o5qx6p/4x4090_build_running_gptoss20b_locally_full_specs/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=108&crop=smart&auto=webp&s=56f93ea81e319c450e5ccbbf073520d2e0a4c3a9', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=216&crop=smart&auto=webp&s=4e3ab70d3281bc70a498d840b59a751d201572c7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=320&crop=smart&auto=webp&s=036b666bd7e1e6dc5aa6e0a7dab8a1b14d62c2a4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=640&crop=smart&auto=webp&s=6c8926f5bd6382bf4684a66eaf41cb4337b53990', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=960&crop=smart&auto=webp&s=53fb189ce5aff12dd5c4168e80b2d104d59ab391', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?width=1080&crop=smart&auto=webp&s=49b5d24e3488b5add024df82d4e43b87be232c96', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oLekl_ORR7Cm_gsrJon__vT598RBB5Hxp4VkS8gKBSU.png?auto=webp&s=0fa5bd185c5e14782e188bc2657fb5ad9287fe13', 'width': 1200}, 'variants': {}}]} | |
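Editor's note: running one server instance per GPU and fanning requests out round-robin is the usual way to turn four ~107 t/s instances into ~430 t/s aggregate. A sketch; the ports and model name are assumptions.

```python
import itertools
from openai import OpenAI

# One OpenAI-compatible server per GPU, e.g. on ports 8000-8003.
clients = [OpenAI(base_url=f"http://localhost:{8000 + i}/v1", api_key="x")
           for i in range(4)]
rr = itertools.cycle(clients)

def complete(prompt: str) -> str:
    client = next(rr)  # round-robin across the four instances
    resp = client.chat.completions.create(
        model="gpt-oss-20b",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```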
It has been 4 hrs since the release of nanochat from Karpathy and no sign of it here! A new full-stack implementation of an LLM like ChatGPT in a single, clean, minimal, hackable, dependency-lite codebase | 219 | 2025-10-13T17:44:28 | https://github.com/karpathy/nanochat | waiting_for_zban | github.com | 1970-01-01T00:00:00 | 0 | {} | 1o5qo0r | false | null | t3_1o5qo0r | /r/LocalLLaMA/comments/1o5qo0r/it_has_been_4_hrs_since_the_release_of_nanochat/ | false | false | default | 219 | {'enabled': False, 'images': [{'id': 'gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=108&crop=smart&auto=webp&s=07162473c94a57854aa5d7a2336fbbf892207b9b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=216&crop=smart&auto=webp&s=807d09bcbf0915a32d7ac363cb8ea40172e7c7ba', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=320&crop=smart&auto=webp&s=763047b07552d53058073e117c8db72807eff90d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=640&crop=smart&auto=webp&s=da5312d01181eaa8e15816fcf71e260612f9b1af', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=960&crop=smart&auto=webp&s=c7f82403537f64a76e4de11f0f0562aba6faa1dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?width=1080&crop=smart&auto=webp&s=f2574783bf59799ae005b16f607b2486cf2b9456', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gbZbO_XMemMwlSTTx1mACM7p7UtdxxgpOKoIWv4akso.png?auto=webp&s=c21e27885b2001b7f68d73faeae7c9e2842fa891', 'width': 1200}, 'variants': {}}]} | |
Local Build Recommendation 10k USD Budget | 4 | Hi Everyone,
We are trying to build a small local LLM setup for our office and wanted some build recommendations. Our intent is to use the setup to serve an LLM to about 10 people and also to have a dedicated LLM running that periodically batch-processes some data. We intend to run models around 70B for inference (the larger the better), and token speed has to be >20 t/s. We also want to do some fine-tuning with 10B-13B models. The time for fine-tuning doesn't matter too much as long as it's physically doable within a few weeks (without crashing).
We were debating just grabbing an off-the-shelf Mac Studio M3 Ultra with 512GB of RAM, but I heard it's not good for fine-tuning.
Open to hearing what you think. | 2025-10-13T17:30:18 | https://www.reddit.com/r/LocalLLaMA/comments/1o5q9ob/local_build_recommendation_10k_usd_budget/ | deathcom65 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5q9ob | false | null | t3_1o5q9ob | /r/LocalLLaMA/comments/1o5q9ob/local_build_recommendation_10k_usd_budget/ | false | false | self | 4 | null |
Last week in Multimodal AI - Local Edition | 12 | I curate a weekly newsletter on multimodal AI; here are the local/edge highlights from last week:
# Nvidia Fast-dLLM v2 - Efficient Block-Diffusion LLM
•2.5x speedup over standard AR decoding with only ~1B tokens of fine-tuning.
•217.5 tokens/sec at batch size 4.
•Requires 500x less training data than full-attention diffusion LLMs.
•[Paper](https://huggingface.co/papers/2509.26328) | [Project Page](https://nvlabs.github.io/Fast-dLLM/v2/)
https://reddit.com/link/1o5pvo2/video/s9bdjzsywwuf1/player
# RND1: Powerful Base Diffusion Language Model
•Most powerful base diffusion language model to date.
•Fully open-source with model weights and code.
•[Twitter](https://x.com/RadicalNumerics/status/1976332725926936599) | [Blog](https://radicalnumerics.ai/blog/rnd1) | [GitHub](https://github.com/RadicalNumerics/RND1) | [HuggingFace](https://huggingface.co/radicalnumerics/RND1-Base-0910)
https://i.redd.it/6po7iemqwwuf1.gif
# MM-HELIX - 7B Multimodal Model with Thinking
•7B parameter multimodal model with reasoning capabilities.
•Perfect size for local deployment.
•[Paper](https://huggingface.co/papers/2510.08540) | [HuggingFace](http://huggingface.co/PhoenixZ/MM-HELIX-7B-Thinking)
# StreamDiffusionV2 - Real-Time Interactive Video Generation
•Open-source system that runs on consumer hardware.
•16.6 FPS on 2x RTX 4090s (42 FPS on 4x H100s).
•[Twitter](https://x.com/Chenfeng_X/status/1975453498197078080) | [Project Page](https://streamdiffusionv2.github.io/) | [GitHub](https://github.com/chenfengxu714/StreamDiffusionV2)
https://reddit.com/link/1o5pvo2/video/mxmacphrwwuf1/player
# Paris: Decentralized Trained Open-Weight Diffusion Model
•World's first decentralized trained open-weight diffusion model.
•Demonstrates distributed training without centralized control.
•[Twitter](https://x.com/bageldotcom/status/1975596255624769858) | [Paper](https://github.com/bageldotcom/paris/blob/main/paper.pdf) | [HuggingFace](https://huggingface.co/bageldotcom/paris)
https://reddit.com/link/1o5pvo2/video/lanwstjswwuf1/player
# Meta SSDD - Efficient Image Tokenization
•3.8x faster sampling with superior reconstruction quality.
•GAN-free training, drop-in replacement for KL-VAE.
•Makes local multimodal models faster and more efficient.
•[Paper](https://arxiv.org/abs/2510.04961)
# kani-tts-370m - Lightweight Text-to-Speech
•Only 370M parameters for efficient speech synthesis.
•Perfect for resource-constrained environments.
•[HuggingFace Model](https://huggingface.co/nineninesix/kani-tts-370m) | [Demo](https://huggingface.co/spaces/nineninesix/KaniTTS)
https://reddit.com/link/1o5pvo2/video/v5fremptwwuf1/player
# VLM-Lens - Interpreting Vision-Language Models
•Open-source toolkit to benchmark and interpret your local VLMs.
•[Twitter](https://x.com/ziqiao_ma/status/1974183755649523939) | [GitHub](https://github.com/compling-wat/vlm-lens) | [Paper](https://arxiv.org/abs/2510.02292)
See the full newsletter for more demos, papers, more): [https://thelivingedge.substack.com/p/multimodal-monday-28-diffusion-thinks](https://thelivingedge.substack.com/p/multimodal-monday-28-diffusion-thinks) | 2025-10-13T17:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/1o5pvo2/last_week_in_multimodal_ai_local_edition/ | Vast_Yak_4147 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5pvo2 | false | null | t3_1o5pvo2 | /r/LocalLLaMA/comments/1o5pvo2/last_week_in_multimodal_ai_local_edition/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=108&crop=smart&auto=webp&s=5e5edfd498c22f9acb418390fa8b8e231c0c3a40', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=216&crop=smart&auto=webp&s=75494081e55e12dd46a5dad1989f7c8c250ad8d1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=320&crop=smart&auto=webp&s=c33a34f3b6325499bf4796f0590b4ff24690f612', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=640&crop=smart&auto=webp&s=11dffd9ea17fc33c21fe4fe440a2131dd9086d83', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=960&crop=smart&auto=webp&s=6ec404a7a644f2969acb03348f0eb78739f185dc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?width=1080&crop=smart&auto=webp&s=69154280d11b5ddaa371642232dafc5868f8e5a6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ny6X5UCsrKjO4WNAT2fPkun-NH2hexIksSpGw7smq4w.png?auto=webp&s=791f5ba0842c9108c72f94cf5e368c861adc1ee1', 'width': 1200}, 'variants': {}}]} | |
Ollama vs llama.cpp + Vulkan on Iris Xe iGPU | 0 | I have an i5-1235U with Iris Xe graphics and want to use its 3.7GB of allocated VRAM if possible. I have models from the Ollama registry and Hugging Face but don't know which will give better performance. Is there a way to make LLM use more efficient and, most importantly, faster with the iGPU? And which of the two should be faster in general with an iGPU? | 2025-10-13T17:16:03 | https://www.reddit.com/r/LocalLLaMA/comments/1o5pvb7/ollama_vs_llama_cpp_vulkan_on_irisxe_igpu/ | Pristine_Snow_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5pvb7 | false | null | t3_1o5pvb7 | /r/LocalLLaMA/comments/1o5pvb7/ollama_vs_llama_cpp_vulkan_on_irisxe_igpu/ | false | false | self | 0 | null |
GLM-4.6-UD-IQ2_M b0rked? | 0 | I've downloaded unsloth's GLM-4.6-UD-IQ2_M twice now *(super slow internet)* and I'm still getting a missing tensor error?
model has unused tensor blk.92.attn_norm.weight (size = 20480 bytes) -- ignoring
model has unused tensor blk.92.attn_q.weight (size = 35389440 bytes) -- ignoring
model has unused tensor blk.92.attn_k.weight (size = 2949120 bytes) -- ignoring
model has unused tensor blk.92.attn_v.weight (size = 2949120 bytes) -- ignoring
model has unused tensor blk.92.attn_q.bias (size = 49152 bytes) -- ignoring
model has unused tensor blk.92.attn_k.bias (size = 4096 bytes) -- ignoring
model has unused tensor blk.92.attn_v.bias (size = 4096 bytes) -- ignoring
model has unused tensor blk.92.attn_output.weight (size = 35389440 bytes) -- ignoring
model has unused tensor blk.92.attn_q_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.92.attn_k_norm.weight (size = 512 bytes) -- ignoring
model has unused tensor blk.92.post_attention_norm.weight (size = 20480 bytes) -- ignoring
model has unused tensor blk.92.ffn_gate_inp.weight (size = 3276800 bytes) -- ignoring
model has unused tensor blk.92.exp_probs_b.bias (size = 640 bytes) -- ignoring
model has unused tensor blk.92.ffn_gate_exps.weight (size = 412876800 bytes) -- ignoring
model has unused tensor blk.92.ffn_down_exps.weight (size = 540672000 bytes) -- ignoring
model has unused tensor blk.92.ffn_up_exps.weight (size = 412876800 bytes) -- ignoring
model has unused tensor blk.92.ffn_gate_shexp.weight (size = 4423680 bytes) -- ignoring
model has unused tensor blk.92.ffn_down_shexp.weight (size = 5406720 bytes) -- ignoring
model has unused tensor blk.92.ffn_up_shexp.weight (size = 4423680 bytes) -- ignoring
model has unused tensor blk.92.nextn.eh_proj.weight (size = 17203200 bytes) -- ignoring
llama_model_load: error loading model: missing tensor 'blk.92.nextn.embed_tokens.weight'
llama_model_load_from_file_impl: failed to load model
I thought it was an offloading issue at first but now I think it might just be a bad quant?
| 2025-10-13T17:15:02 | https://www.reddit.com/r/LocalLLaMA/comments/1o5pu88/glm46udiq2_m_b0rked/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5pu88 | false | null | t3_1o5pu88 | /r/LocalLLaMA/comments/1o5pu88/glm46udiq2_m_b0rked/ | false | false | self | 0 | null |
Ring-1T, the open-source trillion-parameter thinking model built on the Ling 2.0 architecture. | 243 | Ring-1T, the open-source trillion-parameter thinking model built on the Ling 2.0 architecture.
Ring-1T achieves silver-level IMO reasoning through pure natural language reasoning.
→ 1 T total / 50 B active params · 128 K context window
→ Reinforced by Icepop RL + ASystem (Trillion-Scale RL Engine)
→ Open-source SOTA in natural language reasoning — AIME 25 / HMMT 25 / ARC-AGI-1 / CodeForce
Deep thinking · Open weights · FP8 version available
https://x.com/AntLingAGI/status/1977767599657345027?t=jx-D236A8RTnQyzLh-sC6g&s=19 | 2025-10-13T17:14:18 | https://huggingface.co/inclusionAI/Ring-1T | Dentuam | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o5ptit | false | null | t3_1o5ptit | /r/LocalLLaMA/comments/1o5ptit/ring1t_the_opensource_trillionparameter_thinking/ | false | false | 243 | {'enabled': False, 'images': [{'id': 'IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=108&crop=smart&auto=webp&s=f3312b789fdde92f88ca3d2d9a64635c9289e8df', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=216&crop=smart&auto=webp&s=60289d12d31f3802d00aef5ecf1fa66694507197', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=320&crop=smart&auto=webp&s=566fe6351bfcd95ed0692b0986b365867f8fe99a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=640&crop=smart&auto=webp&s=bde18ad695deb84f2f7185a79fef6c32828efb7f', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=960&crop=smart&auto=webp&s=bf13309bce259ede01b23e3a3a861ed966b6c881', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?width=1080&crop=smart&auto=webp&s=79a2021a1c6b20bedbbc529d261cf1f3b3f8126b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/IjppR-RE-RkBB_gQduyqs52uBDc0W1Hhz7wl-iWhgJ8.png?auto=webp&s=20ca03ac660dc25a007c3b62ab5b637a0d93d67d', 'width': 1200}, 'variants': {}}]} | |
I wrote a 2025 deep dive on why long system prompts quietly hurt context windows, speed, and cost | 18 | Hello there!
I just published a new article that breaks down what a context window is, how transformers actually process long inputs, and why bloated system prompts can lower accuracy and raise latency and spend. I talk about long context limits, prefill vs decode, KV cache pressure, prompt caching caveats, and practical guardrails for keeping prompts short without losing control.
**Key ideas**
* Every system token displaces conversation history and user input inside the fixed window.
* Longer inputs increase prefill time and KV cache size, which hits time to first token and throughput.
* Instruction dilution and lost-in-the-middle effects are real on very long inputs.
* Prompt caching helps cost and sometimes latency, but it does not fix noisy instructions.
* Sensible target: keep the system prompt to roughly 5 to 10 percent of the total window for most apps.
I also maintain a repo that contains real system prompts from closed-source tools. It is a handy reference for how others structure roles, output formats and more.
**Links**
* The full article with more analysis: [Why long system prompts hurt context windows and how to fix it](https://medium.com/@lucknitelol/why-long-system-prompts-hurt-context-windows-and-how-to-fix-it-7a3696e1cdf9)
* The GitHub repo to grab prompts: [https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools](https://github.com/x1xhlol/system-prompts-and-models-of-ai-tools)
Hope you find it useful! | 2025-10-13T16:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1o5p4ed/i_wrote_a_2025_deep_dive_on_why_long_system/ | Independent-Box-898 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5p4ed | false | null | t3_1o5p4ed | /r/LocalLLaMA/comments/1o5p4ed/i_wrote_a_2025_deep_dive_on_why_long_system/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=108&crop=smart&auto=webp&s=7e71148290a943095daca4dc044d6b8546eb49b8', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=216&crop=smart&auto=webp&s=26ff91024b22d68b6b3e438dcb220d5ed8622409', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=320&crop=smart&auto=webp&s=400af67f485343a87337480d7b743b28f8bc4999', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=640&crop=smart&auto=webp&s=0f656ffd07e1fc84f2c67c820634d95c13752753', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=960&crop=smart&auto=webp&s=01f2e480b05849948e42c6e33f4a8953b46e0978', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?width=1080&crop=smart&auto=webp&s=aa6fdeb97cfcf72c8ce3a91345583b5f0880c5d9', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/xkf4DFGJJVQAcOm-gRv1XUfT76S6eJbOZ5vCHrldqoM.jpeg?auto=webp&s=2fece001026ad37068b130c8715a78062ca08fd6', 'width': 1200}, 'variants': {}}]} |
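Editor's note: the article's 5 to 10 percent guardrail is easy to enforce mechanically, e.g. in CI. A sketch using tiktoken; the encoding choice is an assumption, and exact budgets should use the target model's own tokenizer.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def system_share(system_prompt: str, context_window: int) -> float:
    # Fraction of the model's window consumed by the system prompt alone.
    return len(enc.encode(system_prompt)) / context_window

share = system_share(open("system_prompt.txt").read(), 128_000)
assert share <= 0.10, f"system prompt uses {share:.1%} of the window"
```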
How do I See the Infrastructure Battle for AI Agent Payments, after the Emergence of AP2 and ACP | 2 | Google launched the Agent Payments Protocol (AP2), an open standard developed with over 60 partners including Mastercard, PayPal, and American Express to enable secure AI agent-initiated payments. The protocol is **designed to solve the fundamental trust problem when autonomous agents spend money on your behalf**.
"Coincidentally", OpenAI just launched its competing Agentic Commerce Protocol (ACP) with Stripe in late September 2025, powering "Instant Checkout" on ChatGPT. **The space is heating up fast**, and I am seeing a protocol war for the $7+ trillion e-commerce market.
**Core Innovation: Mandates**
AP2 uses cryptographically-signed digital contracts called **Mandates** that create tamper-proof proof of user intent. An Intent Mandate captures your initial request (e.g., "find running shoes under $120"), while a Cart Mandate locks in the exact purchase details before payment.
For delegated tasks like "buy concert tickets when they drop," you pre-authorize with detailed conditions, then the agent executes only when your criteria are met.
**Potential Business Scenarios**
* **E-commerce:** Set price-triggered auto-purchases. The agent monitors merchants overnight, executes when conditions are met. No missed restocks.
* **Digital Assets:** Automate high-volume, low-value transactions for content licenses. Agent negotiates across platforms within budget constraints.
* **SaaS Subscriptions:** The ops agents monitor usage thresholds and auto-purchase add-ons from approved vendors. Enables consumption-based operations.
**Trade-offs**
* Pros: The chain-signed mandate system creates **objective dispute resolution**, and enables new business models like micro-transactions and **agentic e-commerce**.
* Cons: Its adoption will take time as banks and merchants tune risk models, while the cryptographic signature and A2A flow requirements add significant implementation complexity. The biggest risk exists as **platform fragmentation if major players push competing standards instead of converging on AP2**.
I uploaded a YouTube video on AICamp with full implementation samples. Check it out [here](https://www.youtube.com/watch?v=aHsGhcnet0c&t=2s). | 2025-10-13T16:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/1o5ovgh/how_do_i_see_the_infrastructure_battle_for_ai/ | MarketingNetMind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1o5ovgh | false | null | t3_1o5ovgh | /r/LocalLLaMA/comments/1o5ovgh/how_do_i_see_the_infrastructure_battle_for_ai/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'RPnN9X7uakkhjXtVuURxxSBZjEAqYfk9LGmxayPExiA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/RPnN9X7uakkhjXtVuURxxSBZjEAqYfk9LGmxayPExiA.jpeg?width=108&crop=smart&auto=webp&s=1f45ea52965966d4f1640431312f9fddbb98e893', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/RPnN9X7uakkhjXtVuURxxSBZjEAqYfk9LGmxayPExiA.jpeg?width=216&crop=smart&auto=webp&s=3c5b3b6f208913df84a9d654d0a266009e3520c3', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/RPnN9X7uakkhjXtVuURxxSBZjEAqYfk9LGmxayPExiA.jpeg?width=320&crop=smart&auto=webp&s=17f9bc81b70f759ba1b8b042f31ce2942ee57142', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/RPnN9X7uakkhjXtVuURxxSBZjEAqYfk9LGmxayPExiA.jpeg?auto=webp&s=d494b1a0b352546d6f51bcc76af4cb2dcea03aaf', 'width': 480}, 'variants': {}}]} |
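Editor's note: the Mandate concept boils down to signing a canonicalized intent payload so any later mutation is detectable by the verifier. An illustrative sketch with an Ed25519 keypair is below; this is not the real AP2 wire format, just the tamper-evidence idea.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
mandate = {"intent": "buy running shoes", "max_price_usd": 120,
           "expires": "2025-11-01T00:00:00Z"}
# Canonical serialization so signer and verifier hash identical bytes.
payload = json.dumps(mandate, sort_keys=True).encode()
signature = key.sign(payload)

# Verifier side: any edit to the mandate invalidates the signature
# (verify() raises InvalidSignature on tampered payloads).
key.public_key().verify(signature, payload)
```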
Drummer's Cydonia Redux 22B v1.1 and Behemoth ReduX 123B v1.1 - Feel the nostalgia without all the stupidity! | 88 | Hot Take: Many models today are 'too smart' in a creative sense - trying too hard to be sensible and end up limiting their imagination to the user's prompt. Rerolls don't usually lead to different outcomes, and every gen seems catered to the user's expectations. Worst of all, there's an assistant bias that focuses on serving you (the user) instead of the story. All of these stifle their ability to express characters in a lively way. (inb4 skill issue)
Given the success of 22B and 123B ReduX v1.0, I revisited the old models and brought out a flavorful fusion of creativity and smarts through my latest tuning. 22B may not be as smart and sensible as the newer 24B, but ReduX makes it (more than) serviceable for users hoping for broader imagination and better immersion in their creative uses.
# Cydonia ReduX 22B v1.1: [https://huggingface.co/TheDrummer/Cydonia-Redux-22B-v1.1](https://huggingface.co/TheDrummer/Cydonia-Redux-22B-v1.1)
# Behemoth ReduX 123B v1.1: [https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1.1](https://huggingface.co/TheDrummer/Behemoth-ReduX-123B-v1.1)
Enjoy! (Please note that this is a dual release: 123B and 22B. Notice the two links in this post.) | 2025-10-13T16:18:47 | https://huggingface.co/TheDrummer/Cydonia-Redux-22B-v1.1 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1o5o8z1 | false | null | t3_1o5o8z1 | /r/LocalLLaMA/comments/1o5o8z1/drummers_cydonia_redux_22b_v11_and_behemoth_redux/ | false | false | 88 | {'enabled': False, 'images': [{'id': 'TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=108&crop=smart&auto=webp&s=38609af75801464b965fbc0bfec231a0cffabd25', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=216&crop=smart&auto=webp&s=757bbe87161409de4b29dc1422cdf0de5e15b21d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=320&crop=smart&auto=webp&s=2612a6735d83f815f54ed84de9f2082a9afa1907', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=640&crop=smart&auto=webp&s=ed07f9c93dd6fdbb3800e12511f49c15a31923c3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=960&crop=smart&auto=webp&s=6a69cb8280504ce0d9a7ab895d9712c07e51c46a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?width=1080&crop=smart&auto=webp&s=15e341479cc379da289de42811def19f971fdf3b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/TovPswR4pl93bt0GT2q9uuik1XMY41ZSblXtMnDzdsU.png?auto=webp&s=fe8a2a872a099949332a288b167dbf0c03dee797', 'width': 1200}, 'variants': {}}]} |