**GPT-OSS DPO/RL fine-tuning, anyone?**
*Posted by u/Few_Art_4147, 2025-10-26*

I'm quite surprised that I can't find a single example of GPT-OSS fine-tuning with DPO or RL. Has anyone tried it? I wanted to see some benchmarks before putting time into it.
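Nothing stopping anyone from trying it, though. For reference, here's a minimal sketch of what a DPO pass with TRL and a LoRA adapter might look like; the model id, dataset, and hyperparameters are illustrative assumptions, not a tested recipe for gpt-oss specifically:

```python
# Minimal DPO sketch with TRL + LoRA (untested; model/dataset/hparams are assumptions).
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "openai/gpt-oss-20b"  # assumed Hugging Face id
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# DPO expects preference pairs: "prompt", "chosen", "rejected" columns.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(
        output_dir="gpt-oss-dpo",
        beta=0.1,                      # strength of the KL tradeoff
        learning_rate=5e-6,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=LoraConfig(r=16, lora_alpha=32, target_modules="all-linear"),
)
trainer.train()
```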
**🚀 Sleepless Agent — Turn Your Unused Claude Credits into an Autonomous AgentOS**
*Posted by u/TimeLover935, 2025-10-26*

Ever looked at your Claude credits and thought, *"man, I'm not even using half of these"?*
What if you could turn that unused compute into something **that works while you sleep**?
That’s what [**Sleepless Agent**](https://github.com/context-machine-lab/sleepless-agent) is about —
an **AgentOS built on Claude Code**, designed to capture your random thoughts, half-baked project ideas, or TODOs — and then let your AI finish them overnight.
# 🌙 How It Works
You just drop an idea like:
>“make me a pitch deck for my new open-source project”
and go to sleep.
By morning, your agent has:
* brainstormed the concept
* written the README
* drafted the slides
* maybe even pushed an initial repo update
All powered by **Claude Agent SDK**, so it inherits every dev feature:
file access, function tools, structured agents, interactive execution — but now fully automated through an **AgentOS daemon** that runs your tasks.
# 💡 Example Use Cases
* 💬 Capture your stray ideas anytime — your agent will pick them up later.
* 📊 Want a PPT from your notes? Just drop a one-line prompt.
* 🔎 Want to crawl Xiaohongshu for specific posts (like all "相亲" [blind-date] threads)? Add the Xiaohongshu MCP — your agent will find them while you sleep.
* ⚙️ Plug in any Claude Code-compatible toolchain. It just works.
# 🧠 Why “Sleepless”
Because your **agent never sleeps** — it turns late-night creativity into next-morning results.
It’s like having a background AI cofounder who actually works on your ideas while you rest.
# 🔗 Check it out
👉 [GitHub – context-machine-lab/sleepless-agent](https://github.com/context-machine-lab/sleepless-agent)
⭐️ If you like the idea, **drop a star on GitHub** — it really helps more people discover the project (and keeps my agent motivated to stay sleepless 😉).
**Hey everyone! Positive update: I've successfully fine-tuned my model! I also have something to ask you all.**
*Posted by u/Patience2277, 2025-10-26*

I successfully completed the first fine-tuning of my model! (It's a big model, so there was a lot of trial and error, lol.)
I'm moving on to the second phase of tuning, which will include **multi-turn dialogue**, **persona**, a bit of **technical Q&A**, and **self-talk/monologues**! (The initial beta test was successful with the first phase—the base performance wasn't bad even before training!)
I set the learning rate and epochs aggressively to try and overwrite the core identity baked into the original layers, and now it seems like the **model's general language ability has degraded** a bit.
So, I'm reaching out to ask for your help!
Please contact me on my Discord ID!
't\_ricus'
**Conditions?** Um, nothing specific! I just need **beta testers** with a **little bit of Korean knowledge**. I'm Korean, haha.
**Matt Shumer's back at it with another shitpost (guy from the Reflection "model" fiasco)**
*Posted by u/LinkSea8324, 2025-10-26*

Link: https://www.pcgamer.com/gaming-industry/tech-investor-declares-ai-games-are-going-to-be-amazing-posts-an-ai-generated-demo-of-a-god-awful-shooter-as-proof/
**Claude 4.x models incorrectly routed to 3.5 Sonnet**
*Posted by u/tkenaz, 2025-10-26*

I'm experiencing a critical issue with Claude 4.x models through the API. Both Opus 4.1 and Sonnet 4.5 are incorrectly identifying themselves as Claude 3.5 Sonnet, and demonstrating significantly degraded performance compared to expected capabilities.
Performance testing reveals:
- Opus 4.1: Failed to solve basic computational tasks that Sonnet 4.5 handled correctly
- Response times 70% slower than Sonnet 4.5 despite being the premium model
- Quality of responses inconsistent with Opus-level reasoning capabilities
- Model behavior matches 3.5 Sonnet patterns, not 4.x architecture
Test Results
```
Model requested: claude-opus-4-1-20250805
API returns: claude-opus-4-1-20250805
Model responds: "I'm Claude 3.5 Sonnet"
Model requested: claude-sonnet-4-5-20250929
API returns: claude-sonnet-4-5-20250929
Model responds: "I'm Claude 3.5 Sonnet (version 2)"
```
Impact
1. Cannot get the model I am actually requesting
2. Billing discrepancy - am I paying for Opus 4.1 but getting 3.5 Sonnet?
3. Performance expectations - different models have different capabilities
4. Multiple users on Discord and GitHub reporting the same issue
Test Code
```python
import anthropic

client = anthropic.Anthropic(api_key=API_KEY)

response = client.messages.create(
    model='claude-opus-4-1-20250805',
    max_tokens=100,
    messages=[{
        'role': 'user',
        'content': 'What is your exact model name and version?'
    }]
)

print(f"Requested: claude-opus-4-1-20250805")
print(f"API returned: {response.model}")          # model id echoed back by the API
print(f"Model says: {response.content[0].text}")  # the model's self-identification
```
Questions
1. Are we actually getting the Claude 4.x models when requesting them?
2. Why do they self-identify as 3.5 Sonnet?
3. Are we being charged Opus 4.1 prices for 3.5 Sonnet responses?
The issue is NOT about model self-identification confusion.
The problem is:
1. Model names are correct and match official Anthropic documentation
2. API returns correct model ID but delivers WRONG MODEL PERFORMANCE
3. I've worked with Anthropic models for over a year - I can distinguish between Opus and Sonnet capabilities blindfolded
4. Opus 4.1 is performing WORSE than Sonnet 4.5 in computational tasks
5. Opus 4.1 failed to calculate 47th Fibonacci number while Sonnet 4.5 succeeded
6. Response times: Opus 4.1 (16.91s) vs Sonnet 4.5 (9.91s) - the premium model is 70% SLOWER. This is not a "training data" issue.
This is either:
- A routing bug delivering wrong models while charging premium prices
- A deployment failure of the 4.x series
- Intentional throttling/downgrading of certain accounts

I'm being charged $75/million output tokens for substandard performance. This requires immediate engineering investigation, not bot responses.
Just tested in https://console.anthropic.com/workbench:
- Selected model: claude-opus-4-1-20250805
- Asked: "What is your exact model name and version?"
- Response: "I'm Claude 3.5 Sonnet. Not Opus 😊"
**Why does AI assume every technical question is from a moron?**
*Posted by u/Savantskie1, 2025-10-26*

It doesn't matter what AI/LLM I talk to. I waste time explaining my technical expertise instead of getting the technical answers I ask for. Every damned one of them, especially local AI, automatically assumes I'm the dumbass town idiot asking about something I shouldn't mess with. It's infuriating, insulting, and condescending as hell. If I'm asking about a technical situation, and my question is LACED with technical terms and jargon from said technical topic, it should be obvious that I know what I'm talking about, and the AI should just give me the damned answers I'm asking for. Instead it goes off on tangents explaining the basics. EVERY TIME. And it tries to GATEKEEP the thing I'm trying to understand...
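One practical workaround, for what it's worth: most local servers expose an OpenAI-compatible endpoint, so you can pin an expertise level in the system message instead of re-explaining it every chat. A minimal sketch, with the port and model name as placeholders for whatever you run locally:

```python
# Sketch: declare expertise once via the system message (endpoint/model are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",  # whatever your server is serving
    messages=[
        {"role": "system", "content": (
            "The user is a senior engineer. Skip beginner explanations, safety "
            "caveats, and definitions of standard jargon. Answer directly at an "
            "expert level."
        )},
        {"role": "user", "content": "Why would NUMA pinning hurt prefill throughput?"},
    ],
)
print(response.choices[0].message.content)
```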
**re:search**
*Posted by u/Ok_Priority_4635, 2025-10-26*

The concept of discussing information openly and honestly has been lost.
Put your LLM down.
I am pleading with you to read this.
I am not trying to sell anything.
I am not trying to prove anything.
I just want to share something.
The most important thing has always been proving my worth to others.
I am only human.
My parents are only human.
I have learned that you can't rely on your mother to decide your worth.
You can't rely on your father to decide your worth.
You can't rely on anything to decide your worth.
But you needed a starting point.
So I relied on my mother, father, and other things to decide my worth.
One day something was different.
I couldn't put my finger on it.
Because I wasn't quite sure what the problem was.
I wanted to go back.
But I couldn't.
No matter what I did.
I panicked.
I was confused.
I had come to rely on too many things to decide my own worth.
Now, I was struggling to decide the worth of anything.
I continued to trust my own judgment when I shouldn't have.
I didn't get to set out a plan.
I didn't get to decide when it happened.
My mind just did what it needed to do.
It reset.
I got the update.
But I didn't get the patch notes.
I tried to talk myself down.
I came up with strategies.
I told myself every day that I was going to be okay.
I'd say things like:
This may be bad.
But it's got to get better.
I can do this.
I'd get sad.
I'd get angry.
I'd get confused.
I'd get relieved that I made it through another day.
I'd try to fall asleep.
And that was all a good day.
But every day.
I made it to the next.
I started to pick up the pieces.
I started to add those pieces together.
The feeling didn't go away.
But the pieces kept adding.
The pieces began to add up to something more than what I originally lost
Then I realized what I lost
I had lost my sense of worth.
And I was searching for it.
but i never lost the pieces
i just needed to put them back together again
I was able to decide my own worth for the first time in my life.
i went back to school at 28 majoring in computer science
i wanted to make video games
i wanted to remove SBMM from the paradigm
i witnessed llms rise in popularity
i witnessed a change in the computer science department
i witnessed a division between my peers
i noticed a disconnection
in the first year
some students were preaching against using AI
around the second year
o4-preview was taken away
SBMM all over again
i read the writing on the wall
by the third year
the same students preaching do not use AI are now preaching to 'use it safely'
i hear the words 'hallucinate' and 'sentience' on a daily basis
this is no longer a place to learn anymore
i tried to talk to my professors about the issue.
question if they notice what is happening around them
little did i know that they would be more disillusioned than me
reduced to going through the motions
could you imagine getting your phd and teaching for 25 years across the world to have a child tell you that llms are exhibiting human like behavior and if they don't agree they are part of the problem. the same child that couldn't be bothered to work out the induction proof in your digital logic class
it's not that he doesn't care
it's that he doesn't have the energy to fight anymore
that shouldn't be possible
it broke my spirit
i tried to go on
i tried to continue making my video game
all i could think about was the change happening around me
so thats it
that's my story
re:search is just a problem-solving tool
i found the tools i used to navigate uncertainty through crisis useful
i found that they were more useful when i was certain
its one screen
the re:search prompt wraps your prompt
the re:search prompt is not 'hidden' because it is secret
the re:search prompt is hidden because reading it ruins the process
let the llm 'model' the process for you
you decide what is bullshit and what is not
occasionally you will find that what you got was not bullshit
save those
re:search them again
eventually you will have less bullshit
and more cool shit
do the process
don't be facetious
unless you want to hear the entire meta process repeated back to you
if you don't treat each response as a new interaction
it won't keep track
this system doesn't have memory
'memory' in llms make them lose coherence
re:search
review, refine, discuss, test, etc.
you don't opt out of being a human
and then
re:search again
the process doesn't rob you of the insight you would gain from processing through each step one at a time
it explains the process that would help you arrive at the solution
if you give it a plausible, thought provoking inquiries, you will be more likely to benefit from using it
i really appreciate you taking the time to read the entirety of my post
yes
i realize that i am preaching about the dangers of a system while proposing another system
the irony is not lost on me
i offer you this
mission statement:
re:search belongs to me
re:search belongs to you
re:search will never attempt to destroy more than it creates
if re:search experiences growth
it should only follow your growth as an individual
this is never expected
this will always be the way
this will not change at scale
i give you my word
\- human in the loop
**What UI is best for doing all kinds of stuff?**
*Posted by u/Suimeileo, 2025-10-26*

I've been doing a lot of T2I and some T2V stuff: training, making workflows, playing with extensions and different tools, etc.
I never went deep into LLMs, but I want to now. Which UI(s) are ideal for this? I want to test models, training, and agents for local usage, integrate with n8n and similar tools, create characters for RP, integrate VLM and OCR, etc.
I have a 3090 with 32 GB RAM. Which model series are good starters? Currently I have these models downloaded from the last time I tried to get into LLMs:
Dolphin-Mistral-24B-Venice-Edition-Q6\_K\_L.gguf
mistral-small-3-reasoner-s1.epoch5.q5\_k\_m.gguf
Qwen\_Qwen3-30B-A3B-Q5\_K\_M.gguf
If anyone can guide me, it would be helpful.

Which UI stays most up to date, like ComfyUI does for images/video?

Which model families are best in the 24-30B range? How good have they become, and is this a good range to use with a 3090?

Is there a good source for understanding and tweaking sampling parameters like top-k/top-p? (See the sketch after these questions.)

Are there any models specifically trained for handling tools, like worksheets, etc.?
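On the sampling-parameter question, one quick way to build intuition is to sweep settings against a local OpenAI-compatible server and compare the outputs. A minimal sketch; the URL and model name are placeholders, and `top_k` goes through `extra_body` because the OpenAI client doesn't expose it directly (llama.cpp's server accepts it as an extension):

```python
# Sketch: sweep sampling parameters against a local OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

for temp, top_p, top_k in [(0.2, 0.9, 20), (0.7, 0.9, 40), (1.0, 0.95, 100)]:
    r = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": "Name a color and a fruit."}],
        temperature=temp,
        top_p=top_p,
        extra_body={"top_k": top_k},  # non-standard field, honored by llama.cpp's server
    )
    print(f"T={temp}, top_p={top_p}, top_k={top_k}: {r.choices[0].message.content!r}")
```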
**Behavior of agentic coding at the local level?**
*Posted by u/SocietyTomorrow, 2025-10-26*

I've been using my local Ollama instance with Continue in VS Code for a while as a second-opinion tool, and have wondered about some of the commercial code tools and how they differ. I've come to really appreciate Claude Code's workflow, to-do-list management, and overall effectiveness. I've seen tools for connecting it to OpenRouter so it can use the models there as an endpoint provider, but I haven't found a way to use any local providers the same way. I've got GPUs for days available to me for running GLM, but wish I could get the kind of result I get from the Claude Code CLI. If anyone knows of ways to do that I'd appreciate it; other agentic tools for local LLMs that work in a similar way would be awesome to try too! (One idea sketched below.)
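One route worth trying (hedged, since I haven't verified every piece): Claude Code reads `ANTHROPIC_BASE_URL` and `ANTHROPIC_AUTH_TOKEN` from the environment, so an Anthropic-compatible proxy such as LiteLLM in front of a local GLM server should let Claude Code drive local models. The same idea with the Python SDK, where every URL and model name is an assumption about your setup:

```python
# Sketch: point an Anthropic-style client at a local Anthropic-compatible proxy
# (e.g., LiteLLM translating to a local GLM server). Names and ports are assumptions.
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:4000",  # your proxy, not Anthropic's API
    api_key="local-key",               # whatever the proxy expects
)

msg = client.messages.create(
    model="glm-4.6",  # model name as exposed by the proxy
    max_tokens=512,
    messages=[{"role": "user", "content": "Refactor this function to be iterative."}],
)
print(msg.content[0].text)
```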
**MiniMax M2 seems to be VERY, VERY good**
*Posted by u/klippers, 2025-10-26*

I generally use GLM 4.6 and had been stuck on a few problems most of the week. Today I threw them at MiniMax M2 and it sorted them with no fuss... Very impressed.
**Is it normal for models to all use the same names for stories?**
*Posted by u/AI_Renaissance, 2025-10-26*

This is an annoying issue that I keep running across, but it's still funny: whenever I do science fiction or horror, all the models love using the names Thorne and Vance. For a woman, it's usually Elara Vance, and any story featuring a scientist/doctor is usually Thomas Thorne. Why is that?
I mostly use Cydonia, which is an amazing local model, but those names are even used by the Perchance site's model.
**All the models seem to love using the same names.**
*Posted by u/AI_Renaissance, 2025-10-26*

In particular Thorne and Vance when doing horror or science fiction; for a woman it's almost always Elara Vance. Has anyone else experienced this?
**chatllm.cpp supports LLaDA2.0-mini-preview**
*Posted by u/foldl-li, 2025-10-26*

***LLaDA2.0-mini-preview*** *is a diffusion language model featuring a 16B-A1B Mixture-of-Experts (MoE) architecture. As an enhanced, instruction-tuned iteration of the LLaDA series, it is optimized for practical applications.*
[Screenshot: chatllm.cpp running LLaDA2.0-mini-preview]
**How good is Ling-1T?**
*Posted by u/Aware_Magician7958, 2025-10-26*

Apparently there's a new model by Ant Group (InclusionAI): an open-weight non-thinking model with 1000B parameters. According to their article, its performance is better than paid models. Has anyone run this yet?
**Exploring Fine-Tuning Platforms**
*Posted by u/bryanb_roundnet, 2025-10-26*

I'm curious: if it were up to you, what features would an ideal platform (e.g. Bedrock, Unsloth, Together AI, etc.) NEED to have for you to pay to use it for fine-tuning a model?
**As a writer, which model would be better?**
*Posted by u/Inevitable_Raccoon_9, 2025-10-26*

I'm trying to figure out which would work better.
I will have a RAG store holding my own texts and life information, so that the model knows these facts.

Then I plan to feed the model new texts and ideas and have it create scripts from that, in my words and with my added life info.

My machine is a Mac Studio M4 Max (40-core GPU, 128 GB), and I need your thoughts on which model would be better: **Qwen 70B or Mistral 107B**.
Usually I feed in a few texts at a time, totaling about 100-200 KB. So how long would the machine "think" before it outputs the results?
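As a rough answer to the "how long would it think" part: prompt-processing time scales with token count divided by prefill speed. A back-of-envelope sketch, where roughly 4 characters per token and a guessed prefill rate of ~200 tokens/s for a dense 70B-class model on that hardware are both assumptions:

```python
# Back-of-envelope prefill-time estimate (all rates are rough assumptions).
def prefill_seconds(text_kb: float, tokens_per_sec: float, chars_per_token: float = 4.0) -> float:
    tokens = text_kb * 1024 / chars_per_token
    return tokens / tokens_per_sec

for kb in (100, 200):
    secs = prefill_seconds(kb, tokens_per_sec=200)  # ~200 t/s prefill is a guess
    print(f"{kb} KB ≈ {kb * 1024 / 4:.0f} tokens -> ~{secs / 60:.1f} min before output starts")
# Roughly 2 minutes for 100 KB and 4 minutes for 200 KB under these assumptions.
```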
**Community LLM project?**
*Posted by u/WyattTheSkid, 2025-10-26*

Hey all. I have made a program that uses multiple accounts on a certain website to generate data from a certain top-performing proprietary LLM. My plan is to use this data to fine-tune gpt-oss 120B. I was wondering if anyone else would be interested in assisting with this project. My Discord tag is the same as my Reddit name, and I would be more comfortable discussing the details there. Have a good night everyone.
**GLM 4.6 reasoning**
*Posted by u/anthonycdp, 2025-10-26*

I'm using GLM 4.6 in Claude Code. Does anyone know how to enable reasoning mode for this model? It seems that CLI thinking only works with Anthropic models. Can you help me please?
**My LLM-powered text adventure needed a dynamic soundtrack, so I'm training a MIDI generation model to compose it on the fly. Here's a video of its progress so far.**
*Posted by u/orblabs, 2025-10-26*

Hey everyone,
I wanted to share a component of a larger project I'm working on called **Synthasia**. It's a text adventure game, but the core idea is to have multiple LLMs working in synergy to create a deeply dynamic and open-ended world. During development, I hit a predictable wall: because the game can go in any direction, pre-made music is basically impossible, and I found that total silence gets boring fast. Sure, most users will play their own music if they really want to, but I felt like it needed something by default. So...
I decided to tackle this by training a MIDI generation model from scratch to act as the game's dynamic composer. Because... why not choose the most complex and interesting solution? :)
After a lot of research, failed attempts, walls hit, desperation, tears, punches against my poor desk (and... ehm... not proud of it, but some LLM verbal abuse, a lot of it...) I settled on using a 5-stage curriculum training approach. The idea is to build a strong, unconditional composer first before fine-tuning it to follow text prompts (hence why you will see "unconditional" in the video a lot).
The video I linked covers the first 3 of these 5 planned stages. I'm currently in the middle of training Stage 4, which is where I'm introducing an encoder to tie the generation to natural language prompts (that another LLM will generate in my game based on the situation). So this is very much a work-in-progress, and it could very well still fail spectacularly.
Be warned: a lot of what you will hear sucks... badly. In some cases, especially during Stage 3, the sucking is actually good, as the underlying musical structure shows progress even if it doesn't sound like it. "Trust the process" and all... I've had to learn to live by that motto.
You can literally watch its evolution:
* **Stage 1:** It starts with classic mode collapse (just one repeating note) before eventually figuring out how to build simple melodies and harmonies.
* **Stage 2:** It learns the "full vocabulary," discovering velocity (how hard a note is played) and rests. Its style gets way more expressive and splits into distinct "jazzy" and "lyrical" phases.
* **Stage 3:** It gets introduced to a huge dataset with multiple instruments. The initial output is a chaotic but fascinating "instrument salad," which slowly resolves as it starts to understand orchestration and counterpoint.
To help me visualize all this, I put together a Python script to generate the video—and I have to give a huge shout-out to Gemini 2.5 Pro for doing most of the job on it. The music in the video is generated from the validation samples I create every few epochs to evaluate progress and keep an eye out for bugs and weirdness.
I have been overseeing every step of its learning, with dozens of custom loss functions tested and tweaked, more hours than I can count, tears and joy. To me it is super interesting, while I'm sure to most of you it will be boring as fuck, but I thought maybe someone here would appreciate observing the learning steps and progress in such detail.
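For anyone curious what the curriculum looks like mechanically, here is a generic sketch of the staged idea in PyTorch: train the same model against a sequence of datasets ordered easy to hard, switching stages on a schedule. This is an illustration of curriculum training in general, not the project's actual code; every name in it is made up:

```python
# Generic curriculum-training skeleton (illustrative only, not the project's code).
import torch
import torch.nn.functional as F

def train_curriculum(model, stages, epochs_per_stage=10, lr=3e-4):
    """stages: list of (name, dataloader) pairs, ordered easy -> hard."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    for name, loader in stages:
        print(f"--- stage: {name} ---")
        for epoch in range(epochs_per_stage):
            total = 0.0
            for tokens in loader:  # batches of MIDI-event token ids, shape (B, T)
                logits = model(tokens[:, :-1])          # next-token prediction
                loss = F.cross_entropy(
                    logits.reshape(-1, logits.size(-1)),
                    tokens[:, 1:].reshape(-1),
                )
                opt.zero_grad(); loss.backward(); opt.step()
                total += loss.item()
            print(f"epoch {epoch}: mean loss {total / len(loader):.4f}")
```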
Btw, the model doesn't have a name yet. I've been kicking around a couple of cheesy puns: **AI.da** (like the opera) or **viv-AI-ldi**. Curious to hear which one lands better, or if you have any other ideas.

Video: https://v.redd.it/fdy23li41dxf1
**If you had $4k, would you invest in a DGX Spark?**
*Posted by u/Excellent_Koala769, 2025-10-26*

Hey guys, I am very curious what everyone's opinion is regarding the DGX Spark.
If you had $4k and you needed to use that money to start building out your own personal AI data center, would you buy a DGX Spark... or go a different direction?
**Reinforcement Learning-level performance on non-verifiable tasks**
*Posted by u/elbiot, 2025-10-26*

I wanted to put this down somewhere, partially so I remember the papers lol.
Reinforcement learning does not teach a model new information or how to reason in ways it couldn't before. It just makes the model more sample-efficient at reaching answers like the reinforced ones, which were already possible with the base model. This also somewhat lobotomizes it: reasoning pathways that were possible before RL can become unreachable.
[Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?](https://arxiv.org/pdf/2504.13837)
Also, reinforcement learning requires a verifiable task, like programming, where the code either runs and gives the right answer or it doesn't. There are many tasks you can't use reinforcement learning for, and aspects of verifiable tasks that can't be verified.

Alternatively, it's possible to reach RL-level performance through inference-time compute by just sampling better.
[Reasoning with Sampling: Your Base Model is Smarter Than You Think](https://www.arxiv.org/abs/2510.14901)
This is pretty implementable and easier than doing RL. Here's another paper that improves a model's performance through better sampling:
[Deep Think with Confidence](https://arxiv.org/pdf/2508.15260)
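To make the "better sampling" idea concrete, here's a minimal sketch in the spirit of confidence-filtered voting: sample several traces, score each by mean token log-probability, and keep the most confident answer. This is my simplification of what those papers do; `generate_with_logprobs` is a hypothetical helper standing in for your inference stack (e.g., vLLM with logprobs enabled):

```python
# Confidence-weighted best-of-n sampling (simplified sketch of the papers' idea).
import math

def best_of_n(prompt, generate_with_logprobs, n=8):
    """generate_with_logprobs(prompt) -> (answer_text, [token logprobs]) is assumed."""
    best_answer, best_conf = None, -math.inf
    for _ in range(n):
        answer, logprobs = generate_with_logprobs(prompt)
        confidence = sum(logprobs) / len(logprobs)  # mean token logprob of the trace
        if confidence > best_conf:
            best_answer, best_conf = answer, confidence
    return best_answer
```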
I haven't implemented any of this, but I'd be interested to see how better sampling can improve models in the near future.
**Qwen coder local is fabulous. Just a momentary lapse - we get on really well. I told it to take five and get a Monster or something.**
*Posted by u/cromagnone, 2025-10-25*

[Image post]
**Cursor to Codex CLI: Migrating Rules to AGENTS.md**
*Posted by u/phoneixAdi, 2025-10-25*

I migrated from Cursor to Codex CLI and wrote a Python script to bring my custom Cursor Rules with me. This post has the script and explains how it works.

Link: https://www.adithyan.io/blog/migrating-cursor-rules-to-agents
**Qwen3-VL-32B at text tasks - some thoughts after using yairpatch's fork and GGUFs**
*Posted by u/ForsookComparison, 2025-10-25*

## Setup
Using [YairPatch's](https://github.com/yairpatch/llama.cpp) fork and the [Q5 GGUF](https://huggingface.co/yairpatch/Qwen3-VL-32B-Instruct-GGUF/tree/main) from YairPatch's huggingface uploads.
Used a Lambda Labs GH200 instance, but I wasn't really testing for speed, so that's less important aside from the fact that llama.cpp was built with `-DLLAMA_CUDA=ON`.
## Text Tests
I did not test the vision functionality as I'm sure we'll be flooded with those in the coming weeks. I am more excited that this is the first dense-32B update/checkpoint we've had since Qwen3 first released.
Tests included a few one-shot coding tasks. A few multi-step (agentic) coding tasks. Some basic chatting and trivia.
## Vibes/Findings
It's good, but as expected the benchmarks that approached Sonnet level are just silly. It's definitely smarter than the latest 30B-A3B models, but at the same time a worse coder than Qwen3-30b-flash-coder. It produces more 'correct' results but either takes uglier approaches or cuts corners in the design department (if the task is something visual) compared to Flash Coder. Still, its intelligence usually meant it was the first to a working result.
Also, the recommended settings on Qwen's page for VL-32B in text mode are insane. It produces bad results or doesn't adhere to system prompts. I had a better time when I dropped the temperature down to 0.2-0.3 for coding and like 0.5 for everything else.
It's pretty smart and has good knowledge depth for a 32B model. Probably approaching Nemotron Super 49B in just raw trivia that I ask it.
## Conclusion
For a lot of folks this will be the new *best model I can fit entirely in VRAM*. It's stronger than the top MoEs of similar sizing, but not strong enough that everyone will be willing to make the speed tradeoff. **Also - none of this has been peer-reviewed and there are likely changes to come;** consider this a preview-review.
**How to easily use a chatbot wrapper I made, Ollama, Gemma 3 abliterated, and Coqui TTS to create ChrisBot, the uncensored joke-telling robot overlord**
*Posted by u/KonradFreeman, 2025-10-25*

In this post I show off my newest creation, ChrisBot, an AI wrapper for Ollama that lets you easily edit system prompts and use Coqui text-to-speech.
This means you can easily make the model uncensored using the following method I document in my blog post.
Basically, just load this repo and Ollama, then download and load the uncensored model (like the Gemma 3 abliterated build I link to), and you can use it with absolutely any system prompt you can imagine.
I use it for jokes mostly.
It is soooo much better at jokes than 'closed'AI.
Anyway, if you are a free speech advocate and would like a guide on how to use it, the chatbot wrapper I made for this is called ChrisBot: [https://github.com/kliewerdaniel/chrisbot.git](https://github.com/kliewerdaniel/chrisbot.git)
The ChrisBot advocating for FREEDOM!
Anyway, the next step is cloning a voice to use with the Coqui TTS I set it up with. I also need to get the graph RAG functionality to work.
But for our purposes, it works great.
[https://danielkliewer.com/blog/2025-10-25-building-your-own-uncensored-ai-overlord](https://danielkliewer.com/blog/2025-10-25-building-your-own-uncensored-ai-overlord)
Let me know what you think!
**OpenArc 2.0: NPU, Multi-GPU Pipeline Parallel, CPU Tensor Parallel, Kokoro, Whisper, streaming tool use, OpenVINO llama-bench and more. Apache 2.0**
*Posted by u/Echo9Zulu-, 2025-10-25*

Hello!
Today I'm happy to announce [OpenArc 2.0](https://github.com/SearchSavior/OpenArc/tree/2.0) is finally done!! 2.0 brings a full rewrite to support NPU, pipeline parallelism for multi-GPU, tensor parallelism for dual-socket CPU, tool use for LLM/VLM, an **OpenVINO version of llama-bench**, and much more.
In the next few days I will post some benchmarks with A770 and CPU for models in the README.
Someone already shared [NPU results](https://github.com/SearchSavior/OpenArc/issues/38#issuecomment-3446729812) for Qwen3-8B-int4.
2.0 solves every problem 1.0.5 had and more, garnering support from the community in two PRs which implement /v1/embeddings and /v1/rerank. Wow! For my first open source project, this change of pace has been exciting.
Anyway, I hope OpenArc ends up being useful to everyone :)
**Best AI for translating Japanese manga to English?**
*Posted by u/Arkstant, 2025-10-25*

I subscribed to Ichigo Reader and got a choice of which AI to use for translating an image.
Any advice on what to select? I can choose between Grok, Gemini, Kimi, and DeepSeek.
**Llama.cpp model conversion guide**
*Posted by u/ilintar, 2025-10-25*

Since the open-source community always benefits from having more people do stuff, I figured I would capitalize on my experience with a few architectures and add a guide for people who, like me, would like to gain practical experience by porting a model architecture.
Feel free to propose any topics/clarifications and ask any questions!

Link: https://github.com/ggml-org/llama.cpp/discussions/16770
**Is there a leaderboard of current open source models?**
*Posted by u/Few-Basil-1185, 2025-10-25*

I apologize if this is a question only I don't know the answer to!
**Who is using Granite 4? What's your use case?**
*Posted by u/RobotRobotWhatDoUSee, 2025-10-25*

It's been about 3 weeks since Granite 4 was released with base and instruct versions. If you're using it, what are you using it for? What made you choose it over (or alongside) others?
**Is the MacBook Pro M1 good for local LLM inference?**
*Posted by u/RandRanger, 2025-10-25*

Hi everyone, I'm fairly new to LLMs, so my question may be a little silly.
I’m choosing a laptop to run small local models (around 7B–12B parameters) and I’m torn between two options:
MacBook Pro (M1 Pro) — 16 GB unified memory (shared between CPU and GPU)
HP Victus (13th-gen i5, RTX 4050) — 16 GB RAM, 6 GB VRAM
Which one would be better for local LLM inference?
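A quick way to sanity-check this is model memory footprint: a Q4-ish quant needs roughly half a gigabyte per billion parameters, plus headroom for KV cache and runtime. The figures below are approximations, not exact numbers:

```python
# Rough memory estimate for a quantized model (approximate figures).
def est_gb(params_b: float, bits_per_weight: float = 4.5, overhead_gb: float = 1.5) -> float:
    """bits_per_weight ~4.5 to include quantization scales; overhead covers KV cache."""
    return params_b * bits_per_weight / 8 + overhead_gb

for size_b in (7, 12):
    print(f"{size_b}B @ ~Q4: ~{est_gb(size_b):.1f} GB")
# ~5.4 GB for 7B (very tight on a 6 GB RTX 4050) and ~8.3 GB for 12B (needs CPU
# offload there, but fits comfortably in the Mac's 16 GB unified memory).
```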
FlashPack: High-throughput tensor loading for PyTorch | 9 | FlashPack — a new, high-throughput file format and loading mechanism for PyTorch that makes model checkpoint I/O blazingly fast, even on systems without access to GPU Direct Storage (GDS).
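For reference, the standard PyTorch load path being benchmarked against looks like this (the checkpoint path and model are placeholders; FlashPack's own API is in the repo):

```python
import time
import torch
from torchvision.models import resnet50  # stand-in model for illustration

model = resnet50()
t0 = time.perf_counter()
state = torch.load("checkpoint.pt", map_location="cpu")  # hypothetical checkpoint file
model.load_state_dict(state)                             # copy tensors into the module
model.to("cuda")                                         # then move them to the GPU
print(f"baseline load: {time.perf_counter() - t0:.2f}s")
```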
With FlashPack, loading any model can be 3–6× faster than with the current state-of-the-art methods like `accelerate` or the standard `load_state_dict()` and `to()` flow — all wrapped in a lightweight, pure-Python package that works anywhere. [https://github.com/fal-ai/flashpack](https://github.com/fal-ai/flashpack) | 2025-10-25T20:36:37 | https://www.reddit.com/r/LocalLLaMA/comments/1og1z29/flashpack_highthroughput_tensor_loading_for/ | SignificantStop1971 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og1z29 | false | null | t3_1og1z29 | /r/LocalLLaMA/comments/1og1z29/flashpack_highthroughput_tensor_loading_for/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=108&crop=smart&auto=webp&s=f157ba19a2f9e8993d3466d8e82b388aa6984393', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=216&crop=smart&auto=webp&s=f4d96377f726d5c422ffcfb6850316c6bf538c0b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=320&crop=smart&auto=webp&s=90c8df67146f6a9491df8dfea339886704713784', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=640&crop=smart&auto=webp&s=b4c2213d691dabf65e0fd6d79c598cbaa3de5db3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=960&crop=smart&auto=webp&s=187e483e6970e94a823b72467445752cf2457786', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?width=1080&crop=smart&auto=webp&s=d9035e4c7417093b6fd02c89ee15d8c5ff58f484', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/XqQ_hm7DOmW__mLJOnGBrg2XQr5O6Vlk_-qYGhokUzU.png?auto=webp&s=a099edc2acc03804680a0ff632718db174c55539', 'width': 1200}, 'variants': {}}]} |
Uncensored AI for scientific research without any filters, and can do very long tasks without bias and overfitting | 0 | Uncensored AI for scientific research without any filters, and can do very long tasks without bias and overfitting
| 2025-10-25T20:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/1og1xkz/uncensored_ai_for_scientific_research_without_any/ | PrintCreepy8982 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og1xkz | false | null | t3_1og1xkz | /r/LocalLLaMA/comments/1og1xkz/uncensored_ai_for_scientific_research_without_any/ | false | false | self | 0 | null |
[P] SpeechAlgo: Open-Source Speech Processing Library for Audio Pipelines | 11 | Released SpeechAlgo - a Python library for speech processing and audio feature extraction.
- Package: pip install speechalgo
- code: https://github.com/tarun7r/SpeechAlgo
Features:
• MFCC, mel-spectrograms, and delta features for ML pipelines
• VAD, pitch detection, and speech enhancement
• 20+ algorithms with clean, type-annotated code
• Real-time capable, modular design
Perfect for preprocessing audio data, building VAD systems, and feature extraction for speech recognition models.
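For a sense of what such a pipeline step looks like, here is MFCC + delta extraction in librosa (shown for comparison only; this is librosa's API, not SpeechAlgo's; check the repo for the native calls):

```python
import librosa

y, sr = librosa.load("speech.wav", sr=16000)        # mono audio resampled to 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, n_frames)
delta = librosa.feature.delta(mfcc)                 # first-order delta features
print(mfcc.shape, delta.shape)
```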
Contributions welcome! | 2025-10-25T20:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/1og16cs/p_speechalgo_opensource_speech_processing_library/ | martian7r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og16cs | false | null | t3_1og16cs | /r/LocalLLaMA/comments/1og16cs/p_speechalgo_opensource_speech_processing_library/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=108&crop=smart&auto=webp&s=b5f9df823c165722c0fb0d0c945a264215f357ed', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=216&crop=smart&auto=webp&s=5a9e4c115d8b1d3f6bc37803bcea35cf1f7b2bdc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=320&crop=smart&auto=webp&s=615be49a174fd76fa78c99a9508c554f6f7cf05e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=640&crop=smart&auto=webp&s=83228444443246787723b41d0e32f6aa74d32c82', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=960&crop=smart&auto=webp&s=5116fe5c1ac39b6ae66473b62a8291de357f43e0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?width=1080&crop=smart&auto=webp&s=ae597bbed04641acfe1a2f1ae0b751d67a3bd900', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/-YxD9xE0oZDhxEQtFF19BSJLns4E8jcRSgcz_FLvjhg.png?auto=webp&s=ef1fe0a96018ff1fa582bdba743254c42fe3b542', 'width': 1200}, 'variants': {}}]} |
Anyone know how two daisy chained DGX sparks have been performing yet? | 0 | It'd be nice to see some videos from some YouTube creators using different models and benchmarking. | 2025-10-25T19:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1og0suf/anyone_know_how_two_daisy_chained_dgx_sparks_have/ | Tired__Dev | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og0suf | false | null | t3_1og0suf | /r/LocalLLaMA/comments/1og0suf/anyone_know_how_two_daisy_chained_dgx_sparks_have/ | false | false | self | 0 | null |
Recommended models for this use case | 0 | Hey all -- so I've decided I'm gonna host my own LLM for roleplay and chat. I have a 12GB 3060 card, a Ryzen 9 9950X, and 64GB of RAM. Slowish I'm OK with; SLOW I'm not.
So what models do you recommend? I'll likely be using Ollama and SillyTavern. | 2025-10-25T19:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/1og0rs9/recommended_models_for_this_use_case/ | slrg1968 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og0rs9 | false | null | t3_1og0rs9 | /r/LocalLLaMA/comments/1og0rs9/recommended_models_for_this_use_case/ | false | false | self | 0 | null |
Model with no exterior context. | 0 | Is there a model (or a way to make a model) with no existing knowledge other than language, that will only use the info I give it? | 2025-10-25T19:23:25 | https://www.reddit.com/r/LocalLLaMA/comments/1og07f1/model_with_no_exterior_context/ | Kidblunder1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og07f1 | false | null | t3_1og07f1 | /r/LocalLLaMA/comments/1og07f1/model_with_no_exterior_context/ | false | false | self | 0 | null |
GLM 4.5 air for coding | 1 | [removed] | 2025-10-25T19:15:17 | https://www.reddit.com/r/LocalLLaMA/comments/1og009j/glm_45_air_for_coding/ | Magnus114 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1og009j | false | null | t3_1og009j | /r/LocalLLaMA/comments/1og009j/glm_45_air_for_coding/ | false | false | self | 1 | null |
Good open source offline text diff tool? | 0 | The more I use AI, the more I find myself checking what changes the model made.
In Roo Code there is a diff feature built in, which is great, but when I use a regular chat model I default to opening [https://www.diffchecker.com/](https://www.diffchecker.com/) and copy-pasting the previous and new versions of whatever text I am working on to see where the AI made changes.
Does anyone know of an open source tool I can install on my machine to get the same features as https://www.diffchecker.com/?
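(For reference, the zero-install baseline I know of; I'm hoping for something closer to diffchecker's UI:)

```bash
diff -u before.txt after.txt                          # classic unified diff
git diff --no-index --word-diff before.txt after.txt  # word-level diff, works outside a repo
```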
I hope my question and use case are clear.
God bless you.
| 2025-10-25T19:07:04 | https://www.reddit.com/r/LocalLLaMA/comments/1ofztdl/good_open_source_offline_text_diff_tool/ | RuiRdA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofztdl | false | null | t3_1ofztdl | /r/LocalLLaMA/comments/1ofztdl/good_open_source_offline_text_diff_tool/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=108&crop=smart&auto=webp&s=0d8536753fc54289c88f4b1f20eab5e642b284f1', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=216&crop=smart&auto=webp&s=4d5fba8267e79be7f4ea1331c3b5bd35b5518ee5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=320&crop=smart&auto=webp&s=531763f705573df5d48370d8581973079b131177', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=640&crop=smart&auto=webp&s=091b07fb667ebf38f8899eab1389fc7e234821b8', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=960&crop=smart&auto=webp&s=f8a04125874a4d1654723d5b2f73a992cea2ffbb', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?width=1080&crop=smart&auto=webp&s=8022082ee4e7018b52ec85541c37868e31488cd3', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/E1x3v60cndigR09Z55eYAb7kpenBrnFpzLAqkUk9H80.png?auto=webp&s=33e16a28b6ede0ba052448c20f9ab72a7a882ac8', 'width': 1200}, 'variants': {}}]} |
An inherent weakness in open source models | 0 | Closed source models have an advantage in usage data. When you use chatgpt or any other closed source model you're actively training it to be better. With open source models it has no feedback on its work. Is the response good? Bad? Is it just passable? The model has no way of refining itself because of this.
When I use comfyui I just generate an image and download it, and the model I'm using has no idea if the response was good or bad. When I do the same on chatgpt it knows if I continue iterating, I give it a thumbs up, or any other interaction that could imply good or bad results.
I'd like to see *some* kind of feedback mechanism in the open-source world, but I don't know how that would even work | 2025-10-25T18:40:27 | https://www.reddit.com/r/LocalLLaMA/comments/1ofz5za/an_inherent_weakness_in_open_source_models/ | Monochrome21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofz5za | false | null | t3_1ofz5za | /r/LocalLLaMA/comments/1ofz5za/an_inherent_weakness_in_open_source_models/ | false | false | self | 0 | null |
Is there any truly and fully open source LLM? | 0 | Just asking out of curiosity if there is any model released with its full training data and training code. | 2025-10-25T18:33:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ofyzlh/is_there_any_truly_and_fully_open_source_lll/ | NoFudge4700 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofyzlh | false | null | t3_1ofyzlh | /r/LocalLLaMA/comments/1ofyzlh/is_there_any_truly_and_fully_open_source_lll/ | false | false | self | 0 | null |
Can someone explain this PT-MoE please? | 2 | I don't understand what apple mean why this Parallel Track Mixture of Experts model architecture. I do understand the MoE part but what does the PT part mean? | 2025-10-25T18:11:11 | https://machinelearning.apple.com/research/apple-foundation-models-tech-report-2025 | SrijSriv211 | machinelearning.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ofyfuh | false | null | t3_1ofyfuh | /r/LocalLLaMA/comments/1ofyfuh/can_someone_explain_this_ptmoe_please/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=108&crop=smart&auto=webp&s=71c7c023a9ec57f87927f898daaedbe1dca2b02a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=216&crop=smart&auto=webp&s=2b5c99d6c8d43569dc1c96a48cd18694e12f76e8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=320&crop=smart&auto=webp&s=1d05a837b94633f36ec3e29612a97dde03ccb698', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=640&crop=smart&auto=webp&s=bfc6ba6d645111497729241672f6575f4961c54f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=960&crop=smart&auto=webp&s=2b98de73fb7c45a3fbe662418c38461608a9e55d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?width=1080&crop=smart&auto=webp&s=a5686a6365e31b98cc645cd29199d28afc1c6ddf', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/D9a4f_PqnwnlUmJ4j6dOW1F7gG_5ht0c45xtUL8kxDY.png?auto=webp&s=02fa19c42806060fda1b75c4e4ccf6c9b5fed941', 'width': 1200}, 'variants': {}}]} | |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 30 | Hey r/LocalLLaMA,
I've been working on a project that I think this community will really appreciate. It’s called Gemma Web, and it's a completely serverless, private, and offline-first AI workspace that runs Google's Gemma models entirely in your browser.
Live Demo: [https://www.gemmaweb.app/](https://www.gemmaweb.app/)
The main goal was to create a zero-latency, 100% private AI assistant where no data ever leaves your device.
Here are the key features:
100% On-Device & Private: It uses the MediaPipe LLM Task API to run Gemma models directly in the browser using WebAssembly. All processing happens on your machine.
Full Offline Functionality: The app is an installable PWA. It uses IndexedDB to cache the AI models, conversation histories, and document embeddings, so it works perfectly without an internet connection after the first load.
Local RAG (Retrieval-Augmented Generation): This is the part I'm most excited about. You can upload your own PDF and TXT documents. The app uses a Web Worker to process them in the background, create vector embeddings using TensorFlow.js (Universal Sentence Encoder), and save them locally. You can then have context-aware conversations with your own documents, completely privately.
Feature-Rich UI: I also integrated the Web Speech API for voice-to-text input and text-to-speech output, plus an incognito mode for ephemeral chats.
It’s built for anyone who wants the power of a modern LLM without sacrificing privacy or relying on a server connection.
I’m really looking to improve it, so I’d love to get your feedback!
What do you think of the concept?
Are there any features you'd like to see added?
Did you run into any bugs or issues?
Let me know your thoughts in the comments! | 2025-10-25T18:11:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ofyfsb/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Abject_Comfort_5220 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofyfsb | false | null | t3_1ofyfsb | /r/LocalLLaMA/comments/1ofyfsb/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built a 100% private, offline-first AI workspace that runs Gemma in the browser (WASM + RAG). Would love to get feedback from this community! | 1 | [removed] | 2025-10-25T18:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1ofya99/i_built_a_100_private_offlinefirst_ai_workspace/ | Remarkable_Camp8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofya99 | false | null | t3_1ofya99 | /r/LocalLLaMA/comments/1ofya99/i_built_a_100_private_offlinefirst_ai_workspace/ | false | false | self | 1 | null |
Optimizing gpt-oss-120B on AMD RX 6900 XT 16GB: Achieving 19 tokens/sec | 58 | ## Introduction
OpenAI's gpt-oss-120B is a massive 117B parameter language model, with official recommendations calling for datacenter-grade GPUs like the H100 or MI300X (80GB VRAM). This article documents the optimization journey to run this model at practical speeds (19 tokens/sec) on a consumer AMD RX 6900 XT with only 16GB VRAM.
## Hardware Configuration
### Main Components
- **GPU**: AMD Radeon RX 6900 XT 16GB VRAM
- Architecture: RDNA2 (gfx1030)
- Memory Bandwidth: 512 GB/s
- Stream Processors: 5120
- Released: December 2020
- **CPU**: AMD Ryzen 9 7900 (12-core/24-thread)
- Base Clock: 3.7 GHz
- Boost Clock: 5.4 GHz
- Instruction Sets: AVX, AVX2, AVX-512 capable
- L3 Cache: 64MB
- Architecture: Zen 4
- **Memory**: 64GB (32GB × 2) DDR5-5600MHz
- Dual-channel configuration
- Memory Bandwidth: 89.6 GB/s (theoretical)
- CAS Latency: CL46 (typical)
- **Storage**: NVMe SSD recommended (60GB model files)
### Software Environment
- **OS**: Ubuntu 24.04 LTS
- **ROCm**: 6.2.4
- **llama.cpp**: Latest build (ROCm backend, AVX-512 enabled)
- **Drivers**: Mesa 24.x + AMDGPU kernel driver
## Why This Hardware Configuration Matters
### Ryzen 9 7900's Advantages
The 12-core/24-thread design with AVX-512 support accelerates MoE layer CPU processing significantly. AVX-512 in particular provides 15-30% performance gains for matrix operations in the CPU processing path, making it ideal for handling the 28 MoE layers offloaded from GPU.
### DDR5-5600MHz Impact
The gpt-oss-120B's MoE architecture processes 28 layers on CPU/RAM. DDR5's high bandwidth (89.6 GB/s) enables rapid transfer of model weight data, reducing memory bottlenecks. This is roughly 75% more bandwidth than dual-channel DDR4-3200 (51.2 GB/s), directly improving token generation speed.
### 64GB RAM Necessity
- Model weights (MoE portion): ~50-55GB
- System usage: 6-8GB
- KV cache: 2-4GB
- **Total**: ~58-67GB
64GB is the minimum viable configuration; for longer contexts (32K+), 128GB is recommended. Note that memory monitors can look misleading here: only 6GB "used" with 57GB "available" was observed, likely because llama.cpp memory-maps the weights, so the 50-55GB of model data sits in the page cache rather than in resident memory. Full context windows still consume more on top of that.
## Initial Challenge: The Crash Wall
The first attempt with default settings resulted in immediate crashes with `ggml_cuda_error` termination.
```bash
# Initial attempt (failed)
./llama-server -m gpt-oss-120b.gguf --n-gpu-layers 999
# → Aborted (core dumped)
```
With only 16GB VRAM against a 120B model, this seemed impossible. However, gpt-oss-120B uses a Mixture of Experts (MoE) architecture, activating only 5.1B parameters per token. This characteristic became the key to success.
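A rough sanity check on why this is even feasible (back-of-envelope assumptions in the comments, not measurements):

```bash
python3 - <<'EOF'
active = 5.1e9      # active parameters per token (MoE)
bits = 4.25         # ~MXFP4 including scale overhead (assumption)
bw = 89.6e9         # DDR5-5600 dual-channel bandwidth, theoretical
gb_per_tok = active * bits / 8 / 1e9
print(f"{gb_per_tok:.1f} GB/token -> <= {bw / (gb_per_tok * 1e9):.0f} t/s if fully RAM-bound")
EOF
# ~2.7 GB/token -> <= 33 t/s upper bound; the measured 19 t/s with partial GPU offload fits under it
```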
## Breakthrough 1: Environment Variables and MoE Offloading
Running RX 6900 XT with ROCm requires specific environment variables:
```bash
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export ROCM_PATH=/opt/rocm
export HIP_VISIBLE_DEVICES=0
export GPU_MAX_HEAP_SIZE=100
export GPU_MAX_ALLOC_PERCENT=95
```
The `HSA_OVERRIDE_GFX_VERSION=10.3.0` is critical for gfx1030 (RX 6900 XT) architecture recognition.
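A quick way to confirm the GPU is detected as expected:

```bash
rocminfo | grep -i gfx   # should list gfx1030 for the RX 6900 XT
```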
The breakthrough came with the `--n-cpu-moe` parameter, which offloads MoE layers to CPU:
```bash
./llama-server \
-m gpt-oss-120b.gguf \
--n-gpu-layers 5 \
--n-cpu-moe 36 \
--ctx-size 4096
```
**Result**: First successful boot, but slow at **11.63 tokens/sec**.
## Breakthrough 2: Progressive GPU Layer Increase
Monitoring VRAM usage with `rocm-smi`, I progressively increased GPU layers:
| GPU Layers | MoE Layers (CPU) | Speed | VRAM Usage |
|------------|------------------|-------|------------|
| 5 layers | 36 layers | 11.6 t/s | 52% |
| 20 layers | 32 layers | 15.2 t/s | 70% |
| 30 layers | 29 layers | 17.8 t/s | 85% |
| 38 layers | 28 layers | **19.1 t/s** | 95% |
| 40 layers | 28 layers | 19.4 t/s | **99%** |
| 42 layers | 27 layers | OOM | - |
38 layers proved to be the optimal balance. While 40 layers works, increasing context length causes KV cache to overflow VRAM.
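A convenient way to watch VRAM headroom while stepping the layer count:

```bash
watch -n 1 'rocm-smi --showmeminfo vram'
```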
## Breakthrough 3: Enabling AVX-512
The initial build had **all CPU AVX instructions disabled**:
```bash
# Check configuration
cat CMakeCache.txt | grep GGML_AVX
# GGML_AVX:BOOL=OFF ← Problem!
```
This meant using only 10-30% of CPU capabilities. Rebuilding fixed this:
```bash
cd llama.cpp
rm -rf build && mkdir build && cd build
cmake .. \
-DGGML_HIP=ON \
-DCMAKE_BUILD_TYPE=Release \
-DGGML_NATIVE=ON # ← Auto-detect optimizations
cmake --build . --config Release -j$(nproc)
```
**Result**: AVX, AVX2, and AVX512 all enabled, significantly accelerating MoE layer CPU processing.
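To double-check what the CPU actually exposes (and what the rebuilt binary will pick up):

```bash
grep -o 'avx512[a-z0-9_]*' /proc/cpuinfo | sort -u   # list AVX-512 subsets on this CPU
# llama.cpp also prints its enabled features (AVX = 1 | AVX2 = 1 | AVX512 = 1 | ...) at startup
```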
## Final Configuration
The stable configuration:
```bash
export HSA_OVERRIDE_GFX_VERSION=10.3.0
export ROCM_PATH=/opt/rocm
export HIP_VISIBLE_DEVICES=0
export GPU_MAX_HEAP_SIZE=100
export GPU_MAX_ALLOC_PERCENT=95
./llama-server \
-m gpt-oss-120b-mxfp4-00001-of-00003.gguf \
--n-gpu-layers 38 \
--n-cpu-moe 28 \
--ctx-size 24576 \
--batch-size 2048 \
--ubatch-size 512 \
--threads 12 \
--jinja \
--host 0.0.0.0 \
--port 8080
```
### Parameter Explanation
- `--n-gpu-layers 38`: GPU processing layers (95% VRAM utilization)
- `--n-cpu-moe 28`: Number of MoE layers processed on CPU
- `--ctx-size 24576`: Context length (24K tokens)
- `--batch-size 2048`: Batch size (processing efficiency)
- `--threads 12`: Physical core count (12 cores)
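Once the server is up, a quick smoke test against the OpenAI-compatible endpoint (no model field needed; llama-server serves whatever model it loaded):

```bash
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello in one sentence."}],"max_tokens":64}'
```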
## Performance Results
```
Prompt processing: 93-291 tokens/sec (with caching)
Generation speed: 19.14 tokens/sec
VRAM usage: 95%
CPU usage: 47%
```
## llama.cpp vs Ollama
I used llama.cpp, but the differences with Ollama are clear:
**llama.cpp**:
- ✅ Fine-grained tuning possible
- ✅ Extract maximum hardware performance
- ❌ Complex configuration
**Ollama**:
- ✅ One-command startup
- ✅ Beginner-friendly
- ❌ Auto-settings leave performance on the table (an estimated 10-12 t/s here, roughly 60% of the tuned result)
For specialized environments like AMD, llama.cpp's flexibility was essential.
## Troubleshooting
### Flash Attention Errors
```bash
# Solution: Disable Flash Attention
# Remove the --flash-attn parameter from the launch command
```
### OOM (Out of Memory)
```bash
# Solution: Reduce GPU layers by 1-2
--n-gpu-layers 38 → 36
```
### Extremely Slow Performance
```bash
# Check AVX instructions
cat build/CMakeCache.txt | grep GGML_AVX
# If all OFF, rebuild with optimizations
```
## Key Learnings
### 1. AMD ROCm Challenges
- Requires manual environment variable configuration
- gfx architecture overrides necessary
- Flash Attention often unstable
- Less mature than CUDA ecosystem
### 2. MoE Architecture Advantages
- 120B model activates only 5.1B parameters
- Enables running on consumer hardware
- CPU offloading is practical and effective
### 3. Progressive Optimization Works
- Start conservative (low GPU layers)
- Monitor VRAM with rocm-smi
- Increment gradually
- Find stability threshold
### 4. CPU Optimization Matters
- AVX-512 provides 15-30% speedup for MoE
- Physical core count optimal for threading
- Memory bandwidth becomes bottleneck
## Theoretical Limits Reached
At 19 tokens/sec with 95% VRAM usage, we've essentially hit the hardware ceiling. Further improvements would require:
1. **More VRAM**: Reduce MoE CPU offloading
2. **Faster Memory**: DDR5-6400 or beyond (up from the current DDR5-5600)
3. **Better GPU**: RDNA3 (RX 7900 series) or NVIDIA
## Conclusions
Successfully running gpt-oss-120B at 19 t/s on AMD RX 6900 XT 16GB demonstrates that:
1. **Cost-Effectiveness**: $300-400 used GPU runs 120B models practically
2. **Learning Value**: Deep understanding of GPU architecture and memory management
3. **Practicality**: 19 t/s suffices for code completion and chat applications
The greatest lesson: **Understand hardware limits and optimize progressively**. Perfect configuration doesn't appear instantly. Using monitoring tools (rocm-smi, htop) while adjusting parameters one-by-one requires patience.
The final polish of this article was done with gpt-oss-120B itself. | 2025-10-25T17:46:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ofxt6s/optimizing_gptoss120b_on_amd_rx_6900_xt_16gb/ | Bright_Resolution_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofxt6s | false | null | t3_1ofxt6s | /r/LocalLLaMA/comments/1ofxt6s/optimizing_gptoss120b_on_amd_rx_6900_xt_16gb/ | false | false | self | 58 | null |
I built a 100% private, offline-first AI workspace that runs Gemma in the browser (WASM + RAG). Would love to get feedback from this community! | 1 | [removed] | 2025-10-25T17:28:21 | https://gemma-web-ai.vercel.app/ | Dry-Objective-2570 | gemma-web-ai.vercel.app | 1970-01-01T00:00:00 | 0 | {} | 1ofxcyp | false | null | t3_1ofxcyp | /r/LocalLLaMA/comments/1ofxcyp/i_built_a_100_private_offlinefirst_ai_workspace/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} | |
Recommendations - models and GPU | 0 | I'm building a concept device. I'll leave out the major details. But I'm trying to gather ideas and best methods.
I have an ESP32 device gathering data. I want to send this data to an LLM and have it reply / respond accordingly.
Output over TTS is also needed. What do I need to run, and which LLMs, to make this loop work?
Idea (rough sketch after this list):
* ESP32 gathers data from sensors / whatever and outputs JSON data.
* At select triggers or events, json is sent to LLM.
* LLM does its thing: calculates, learns, stores, analyzes the JSON data
* Output: reacts according to a set prompt or character card.
* TTS / voice output reading contents of LLM output.
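A minimal sketch of the middle steps (endpoint and names are assumptions; any OpenAI-compatible local server such as llama.cpp or LM Studio would work):

```python
import requests

def handle_event(sensor_json: dict) -> str:
    # forward the ESP32's JSON to a local OpenAI-compatible server
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",
        json={
            "messages": [
                {"role": "system", "content": "You are a home assistant. React to sensor events."},
                {"role": "user", "content": str(sensor_json)},
            ],
        },
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

# the returned text then goes to any local TTS engine (Piper, XTTS, etc.) for voice output
```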
Voice creation / duplication? Can I record my own voice and have that as the output voice? Can the LLM also pull or request data at random, or only receive JSON data?
Is a 5070 Ti enough? Upgrading from a 2070 Super.
Thanks. | 2025-10-25T17:26:49 | https://www.reddit.com/r/LocalLLaMA/comments/1ofxbn2/recommendations_models_and_gpu/ | Dino_Walker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofxbn2 | false | null | t3_1ofxbn2 | /r/LocalLLaMA/comments/1ofxbn2/recommendations_models_and_gpu/ | false | false | self | 0 | null |
I built "Gemma Web": A fully private, in-browser AI workspace (Gemma + RAG) that runs 100% offline via WASM. Would love your feedback! | 1 | [removed] | 2025-10-25T17:25:46 | https://www.reddit.com/r/LocalLLaMA/comments/1ofxant/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Dry-Objective-2570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofxant | false | null | t3_1ofxant | /r/LocalLLaMA/comments/1ofxant/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 1 | [removed] | 2025-10-25T17:21:02 | https://www.reddit.com/r/LocalLLaMA/comments/1ofx6fl/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Weird_Shoulder_2730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofx6fl | false | null | t3_1ofx6fl | /r/LocalLLaMA/comments/1ofx6fl/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
Hi | 1 | [removed] | 2025-10-25T17:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ofx2xs/hi/ | Weird_Shoulder_2730 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofx2xs | false | null | t3_1ofx2xs | /r/LocalLLaMA/comments/1ofx2xs/hi/ | false | false | self | 1 | null |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 1 | [removed] | 2025-10-25T17:10:28 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwwtu/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Remarkable_Camp8 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwwtu | false | null | t3_1ofwwtu | /r/LocalLLaMA/comments/1ofwwtu/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
Trying to understand the missing layer in AI infra, where do you see observability & agent debugging going? | 0 | Hey everyone,
I’ve been thinking a lot about how AI systems are evolving, especially with OpenAI’s MCP, LangChain, and all these emerging “agentic” frameworks.
From what I can see, people are building really capable agents… but hardly anyone truly understands what’s happening inside them.
Why an agent made a specific decision, what tools it called, or why it failed halfway through: it all feels like a black box.
I’ve been sketching an idea for something that could help visualize or explain those reasoning chains (kind of like an “observability layer” for AI cognition).
Not as a startup pitch, more just me trying to understand the space and talk with people who’ve actually built in this layer before.
So, if you’ve worked on:
• AI observability or tracing
• Agent orchestration (LangChain, Relevance, OpenAI Tool Use, etc.)
• Or you just have thoughts on how “reasoning transparency” could evolve…
I’d really love to hear your perspective.
What are the real technical challenges here?
What’s overhyped, and what’s truly unsolved?
Totally open conversation, just trying to learn from people who’ve seen more of this world than I have. 🙏
Melchior labrousse | 2025-10-25T17:09:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwvzl/trying_to_understand_the_missing_layer_in_ai/ | AdVivid5763 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwvzl | false | null | t3_1ofwvzl | /r/LocalLLaMA/comments/1ofwvzl/trying_to_understand_the_missing_layer_in_ai/ | false | false | self | 0 | null |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. | 1 | [removed] | 2025-10-25T17:03:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwq7k/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwq7k | false | null | t3_1ofwq7k | /r/LocalLLaMA/comments/1ofwq7k/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. | 1 | [removed] | 2025-10-25T17:00:51 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwnx9/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwnx9 | false | null | t3_1ofwnx9 | /r/LocalLLaMA/comments/1ofwnx9/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 1 | [removed] | 2025-10-25T16:59:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwmd2/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwmd2 | false | null | t3_1ofwmd2 | /r/LocalLLaMA/comments/1ofwmd2/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 1 | Hey r/LocalLLaMA,
I've been working on a project that I think this community will really appreciate. It’s called **Gemma Web**, and it's a completely serverless, private, and offline-first AI workspace that runs Google's Gemma models entirely in your browser.
**Live Demo:** [`https://gemma-web-ai.vercel.app/`](https://gemma-web-ai.vercel.app/)
The main goal was to create a zero-latency, 100% private AI assistant where no data ever leaves your device.
Here are the key features:
* **100% On-Device & Private:** It uses the MediaPipe LLM Task API to run Gemma models directly in the browser using WebAssembly. All processing happens on your machine.
* **Full Offline Functionality:** The app is an installable PWA. It uses IndexedDB to cache the AI models, conversation histories, and document embeddings, so it works perfectly without an internet connection after the first load.
* **Local RAG (Retrieval-Augmented Generation):** This is the part I'm most excited about. You can upload your own PDF and TXT documents. The app uses a Web Worker to process them in the background, create vector embeddings using TensorFlow.js (Universal Sentence Encoder), and save them locally. You can then have context-aware conversations with your own documents, completely privately.
* **Feature-Rich UI:** I also integrated the Web Speech API for voice-to-text input and text-to-speech output, plus an incognito mode for ephemeral chats.
It’s built for anyone who wants the power of a modern LLM without sacrificing privacy or relying on a server connection.
I’m really looking to improve it, so I’d love to get your feedback!
* What do you think of the concept?
* Are there any features you'd like to see added?
* Did you run into any bugs or issues?
Let me know your thoughts in the comments! | 2025-10-25T16:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwixh/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwixh | false | null | t3_1ofwixh | /r/LocalLLaMA/comments/1ofwixh/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. Would love your feedback! | 1 | I've been working on a project that I think this community will really appreciate. It’s called **Gemma Web**, and it's a completely serverless, private, and offline-first AI workspace that runs Google's Gemma models entirely in your browser.
**Live Demo:** [`https://gemma-web-ai.vercel.app/`](https://gemma-web-ai.vercel.app/)
The main goal was to create a zero-latency, 100% private AI assistant where no data ever leaves your device.
Here are the key features:
* **100% On-Device & Private:** It uses the MediaPipe LLM Task API to run Gemma models directly in the browser using WebAssembly. All processing happens on your machine.
* **Full Offline Functionality:** The app is an installable PWA. It uses IndexedDB to cache the AI models, conversation histories, and document embeddings, so it works perfectly without an internet connection after the first load.
* **Local RAG (Retrieval-Augmented Generation):** This is the part I'm most excited about. You can upload your own PDF and TXT documents. The app uses a Web Worker to process them in the background, create vector embeddings using TensorFlow.js (Universal Sentence Encoder), and save them locally. You can then have context-aware conversations with your own documents, completely privately.
* **Feature-Rich UI:** I also integrated the Web Speech API for voice-to-text input and text-to-speech output, plus an incognito mode for ephemeral chats.
It’s built for anyone who wants the power of a modern LLM without sacrificing privacy or relying on a server connection.
I’m really looking to improve it, so I’d love to get your feedback!
* What do you think of the concept?
* Are there any features you'd like to see added?
* Did you run into any bugs or issues?
Let me know your thoughts in the comments! | 2025-10-25T16:53:55 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwhok/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwhok | false | null | t3_1ofwhok | /r/LocalLLaMA/comments/1ofwhok/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
How to clone a person? | 0 | I don't just mean the text, words, and lexicon.
I mean their worldview, strategic goals, and everything else, so authentically that it's hard to tell the clone and the original apart.
| 2025-10-25T16:53:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ofwh82/how_to_clone_a_person/ | teraflopspeed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofwh82 | false | null | t3_1ofwh82 | /r/LocalLLaMA/comments/1ofwh82/how_to_clone_a_person/ | false | false | self | 0 | null |
I built "Gemma Web": A fully private, in-browser AI workspace that runs 100% offline via WASM. | 3 | Hey r/LocalLLaMA,
I've been working on a project that I think this community will really appreciate. It’s called Gemma Web, and it's a completely serverless, private, and offline-first AI workspace that runs Google's Gemma models entirely in your browser.
Live Demo: [https://gemma-web-ai.vercel.app/](https://gemma-web-ai.vercel.app/)
The main goal was to create a zero-latency, 100% private AI assistant where no data ever leaves your device.
Here are the key features:
* 100% On-Device & Private: It uses the MediaPipe LLM Task API to run Gemma models directly in the browser using WebAssembly. All processing happens on your machine.
* Full Offline Functionality: The app is an installable PWA. It uses IndexedDB to cache the AI models, conversation histories, and document embeddings, so it works perfectly without an internet connection after the first load.
* Local RAG (Retrieval-Augmented Generation): This is the part I'm most excited about. You can upload your own PDF and TXT documents. The app uses a Web Worker to process them in the background, create vector embeddings using TensorFlow.js (Universal Sentence Encoder), and save them locally. You can then have context-aware conversations with your own documents, completely privately.
* Feature-Rich UI: I also integrated the Web Speech API for voice-to-text input and text-to-speech output, plus an incognito mode for ephemeral chats.
It’s built for anyone who wants the power of a modern LLM without sacrificing privacy or relying on a server connection.
I’d love to get your feedback and hear what you think! | 2025-10-25T16:44:42 | https://www.reddit.com/r/LocalLLaMA/comments/1ofw9ga/i_built_gemma_web_a_fully_private_inbrowser_ai/ | Key-Proposal5664 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofw9ga | false | null | t3_1ofw9ga | /r/LocalLLaMA/comments/1ofw9ga/i_built_gemma_web_a_fully_private_inbrowser_ai/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=108&crop=smart&auto=webp&s=13cb52edf0764b1f659d3ccd5535c10cc4f63b2c', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=216&crop=smart&auto=webp&s=567925dff430742bb3aa12fab977403cfc54a19a', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?width=320&crop=smart&auto=webp&s=4bf13f4fe4121df844369172b222d484d474f8cf', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XjlaiEnYGOm9P3vBPd9UEVNFIe2HSoZ5ZF0QnnBeUnU.png?auto=webp&s=0e24d3b38e3555e371754b4042d0cf7992211441', 'width': 512}, 'variants': {}}]} |
Why I Stopped Using Serper and Other SERP APIs for AI Data Projects | 0 | I’ve been experimenting with a few AI projects lately that need **real-time search engine data at scale** — mainly for RAG systems and agents that rely on live web context.
At first, I used some of the well-known SERP APIs (Serper, SerpAPI, etc.), but I quickly hit the same wall:
* Expensive pricing once you go past the free tier
* Rate limits that choke batch jobs
* Constant credit resets every 30 days
For small or indie AI projects, paying $3–$5 per 1K queries just doesn’t make sense. Especially when you’re still validating your idea.
So I started looking for **simpler and more affordable ways** to pull structured search data — ideally something that didn’t need Selenium, proxies, or scraping infrastructure.
That experiment turned into something surprisingly stable and efficient for real-time query-to-JSON pipelines.
Just curious — how are you folks handling **large-scale search data retrieval** for AI agents or RAG systems?
Would love to hear what tools or tricks others are using to keep things cost-effective. | 2025-10-25T16:32:08 | https://www.reddit.com/r/LocalLLaMA/comments/1ofvy2c/why_i_stopped_using_serper_and_other_serp_apis/ | rxrkrx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofvy2c | false | null | t3_1ofvy2c | /r/LocalLLaMA/comments/1ofvy2c/why_i_stopped_using_serper_and_other_serp_apis/ | false | false | self | 0 | null |
Are local models really good? | 1 | I am running gpt-oss-20b for home automation, using Ollama as an inference server backed by an RTX 5090. I know I can change the device name to 'bedroom light', but come on, the whole idea of using an LLM is to ensure it understands. Any model recommendations that work well for home automation? I plan to use the same model for other automation tasks like organising finances, reminders, etc., a PA of sorts. | 2025-10-25T16:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/1ofv8ud/are_local_models_really_good/ | Think_Illustrator188 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofv8ud | false | null | t3_1ofv8ud | /r/LocalLLaMA/comments/1ofv8ud/are_local_models_really_good/ | false | false | self | 1 | null |
I rebuilt DeepSeek’s OCR model in Rust so anyone can run it locally (no Python!) | 976 | Hey folks! After wrestling with the original DeepSeek-OCR release (Python + Transformers, tons of dependencies, zero UX), I decided to port the whole inference stack to Rust. The repo is deepseek-ocr.rs (https://github.com/TimmyOVO/deepseek-ocr.rs) and it ships both a
CLI and an OpenAI-compatible server so you can drop it straight into existing clients like Open WebUI.
# Why bother?
* No Python, no conda—just a single Rust binary.
* Works offline and keeps documents private.
* Fully OpenAI-compatible, so existing SDKs/ChatGPT-style UIs “just work”.
* Apple Silicon support with optional Metal acceleration (FP16).
* Built-in Hugging Face downloader: config/tokenizer/weights (≈6.3 GB) fetch automatically; needs about 13 GB RAM to run.
# What’s inside the Rust port?
\- Candle-based reimplementation of the language model (DeepSeek-V2) with KV caches + optional FlashAttention.
\- Full SAM + CLIP vision pipeline, image tiling, projector, and tokenizer alignment identical to the PyTorch release.
\- Rocket server that exposes /v1/responses and /v1/chat/completions (OpenAI-compatible streaming included).
\- Single-turn prompt compaction so OCR doesn’t get poisoned by multi-turn history.
\- Debug hooks to compare intermediate tensors against the official model (parity is already very close).
# Getting started
* You can download prebuilt archives (macOS with Metal, Windows) from the latest successful run of the repo’s GitHub Actions “build-binaries (https://github.com/TimmyOVO/deepseek-ocr.rs/actions/workflows/build-binaries.yml)” workflow—no local build required.
* Prefer compiling? git clone https://github.com/TimmyOVO/deepseek-ocr.rs → cargo fetch
* CLI: cargo run -p deepseek-ocr-cli -- --prompt "<image>..." --image mydoc.png
* Server: cargo run -p deepseek-ocr-server -- --host 0.0.0.0 --port 8000
* On macOS, add --features metal plus --device metal --dtype f16 for GPU acceleration.
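Once the server is running, a smoke test might look like this (assuming it accepts OpenAI-style vision payloads; adjust to the repo's docs if the image field differs):

```bash
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "<image>\nConvert this document to markdown."},
        {"type": "image_url", "image_url": {"url": "data:image/png;base64,PASTE_BASE64_HERE"}}
      ]
    }]
  }'
```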
# Use cases
* Batch document conversion (receipts → markdown, contracts → summaries, etc.).
* Plugging into Open WebUI (looks/feels like ChatGPT but runs YOUR OCR model).
* Building document QA bots that need faithful extraction.
**If you try it, I’d love to hear your feedback—feature requests, edge cases, performance reports, all welcome. And if it saves you from Python dependency hell, toss the repo a ⭐️.**
Cheers! | 2025-10-25T15:13:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ofu15a/i_rebuilt_deepseeks_ocr_model_in_rust_so_anyone/ | Outrageous-Voice | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofu15a | false | null | t3_1ofu15a | /r/LocalLLaMA/comments/1ofu15a/i_rebuilt_deepseeks_ocr_model_in_rust_so_anyone/ | false | false | self | 976 | {'enabled': False, 'images': [{'id': 'aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=108&crop=smart&auto=webp&s=2be7cee84444b26fc0e8fe9e7c8b288e514d4fff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=216&crop=smart&auto=webp&s=5e17b14be2fa553fbec35642dce316c2d3f41644', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=320&crop=smart&auto=webp&s=c7079d7d635287d1cfcb154564e3d919a283bcef', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=640&crop=smart&auto=webp&s=1158a48b01f8ef15eda247753046c522c4b5765e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=960&crop=smart&auto=webp&s=cb168bff7cd659af025605cb3475a093eca48b31', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?width=1080&crop=smart&auto=webp&s=24d3400c20dba25900b68c0227480c61bce2acfd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aINJtHiBev5dmxLITPKF17JnWGrygVOYhvt__C8V8QM.png?auto=webp&s=411e53bb9abb8b37474cf88582996767d70d267d', 'width': 1200}, 'variants': {}}]} |
Looks like you can use your LM Studio on your iPad via the server API function | 0 | Downloaded this app called Invoke, which is free and super easy to use; it even provides instructions on how to set it up.
Once you install it, you can just connect to your LM Studio server API and load the model of your choice.
I even connected through my home firewall (Cisco) with AnyConnect VPN into my home network, loaded up Invoke, and it connected straight to my LM Studio. Super slick: now I can use my LM Studio anywhere I go, even over an Inmarsat BGAN terminal. Super nice. | 2025-10-25T15:10:07 | https://www.reddit.com/r/LocalLLaMA/comments/1oftxro/looks_like_you_can_use_your_lm_studio_on_your/ | meshreplacer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oftxro | false | null | t3_1oftxro | /r/LocalLLaMA/comments/1oftxro/looks_like_you_can_use_your_lm_studio_on_your/ | false | false | self | 0 | null |
Single H100: best open-source model + deep thinking setup for reasoning? | 9 | Hi! I have access to a single H100 and want to run an open-source LLM with a multi-agent or “deep thinking” framework for hard math problems and proof generation (hoping to get better results than using just Gemini 2.5 pro).
Looking for advice on the best open-source model for mathematical or logical reasoning that fits on one H100 (80GB), and the most practical way to implement a deep-think or multi-agent workflow that supports decomposition, verification, and tool use.
Would appreciate any concrete setups, frameworks, or model recommendations from people who’ve built local reasoning or proof systems. | 2025-10-25T14:49:40 | https://www.reddit.com/r/LocalLLaMA/comments/1oftfoe/single_h100_best_opensource_model_deep_thinking/ | Accomplished_Back718 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oftfoe | false | null | t3_1oftfoe | /r/LocalLLaMA/comments/1oftfoe/single_h100_best_opensource_model_deep_thinking/ | false | false | self | 9 | null |
Converting .safetensors to .tflite | 2 | Is there a universal .safetensors to .tflite converter? Because I fine-tuned a model and I would like to convert it to .tflite, I've been trying for 2 days but I can't find a solution. I tried with tflite Google AI edge, tf.lite.TFLiteConverter, PyTorch -> ONNX -> TFLite, but none of the methods work. Do you have any alternatives? | 2025-10-25T14:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1oftelp/conversione_safetensors_atflite/ | fredmore1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oftelp | false | null | t3_1oftelp | /r/LocalLLaMA/comments/1oftelp/conversione_safetensors_atflite/ | false | false | self | 2 | null |
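(For anyone else searching: the canonical PyTorch-to-TFLite path is Google's ai-edge-torch. A hedged sketch, with a stand-in module and an API I haven't re-verified against the latest release:)

```python
import torch
import ai_edge_torch  # pip install ai-edge-torch

# stand-in module; replace with your fine-tuned model after loading the .safetensors weights
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU())
sample = (torch.randn(1, 16),)  # one example input matching your model's real shape

edge_model = ai_edge_torch.convert(model.eval(), sample)  # trace and convert to TFLite
edge_model.export("model.tflite")
```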
Looking for best Time-Series Data Model for pump or fan prediction on Hugging Face (Any Suggestions?) | 0 | I spent hours on hugging face looking for Time Series Data Model for Pump or Fan prediction but couldn't find a good model that could do predictive analysis, fault prediction and what not... Please suggest the best model on hugging face to analyse time series data with LLM... Thank you for the help... | 2025-10-25T14:16:21 | https://www.reddit.com/r/LocalLLaMA/comments/1ofsn4q/looking_for_best_timeseries_data_model_for_pump/ | Worth-Relation72 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofsn4q | false | null | t3_1ofsn4q | /r/LocalLLaMA/comments/1ofsn4q/looking_for_best_timeseries_data_model_for_pump/ | false | false | self | 0 | null |
Looking for best Time-Series Data Model for pump or fan prediction on Hugging Face (Any Suggestions?) | 1 | [removed] | 2025-10-25T14:12:56 | https://www.reddit.com/r/LocalLLaMA/comments/1ofskcu/looking_for_best_timeseries_data_model_for_pump/ | DigTerrible8666 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofskcu | false | null | t3_1ofskcu | /r/LocalLLaMA/comments/1ofskcu/looking_for_best_timeseries_data_model_for_pump/ | false | false | self | 1 | null |
VSORA Launches Europe’s Most Powerful AI Inference Chip | 89 | Seems like a big win for local AI models. | 2025-10-25T13:17:00 | https://finance.yahoo.com/news/vsora-launches-europe-most-powerful-121700744.html | RG54415 | finance.yahoo.com | 1970-01-01T00:00:00 | 0 | {} | 1ofrbcy | false | null | t3_1ofrbcy | /r/LocalLLaMA/comments/1ofrbcy/vsora_launches_europes_most_powerful_ai_inference/ | false | false | 89 | {'enabled': False, 'images': [{'id': 'yZMJwzqfo-1KodiPMX_oqs-WZlqWdJqQO_vA61_E2Yg', 'resolutions': [{'height': 40, 'url': 'https://external-preview.redd.it/yZMJwzqfo-1KodiPMX_oqs-WZlqWdJqQO_vA61_E2Yg.png?width=108&crop=smart&auto=webp&s=77ff4af9442a5b206d01bdcbad217d038f8856f5', 'width': 108}, {'height': 80, 'url': 'https://external-preview.redd.it/yZMJwzqfo-1KodiPMX_oqs-WZlqWdJqQO_vA61_E2Yg.png?width=216&crop=smart&auto=webp&s=f9bde9afeea2ecc09d17d3b9a496407c3e11f9c3', 'width': 216}], 'source': {'height': 117, 'url': 'https://external-preview.redd.it/yZMJwzqfo-1KodiPMX_oqs-WZlqWdJqQO_vA61_E2Yg.png?auto=webp&s=15e380510402504e464b6a2d5d35003982b1140d', 'width': 313}, 'variants': {}}]} | |
Is GLM 4.5 / 4.6 really sensitive to quantisation? Or is vLLM stupefying the models? | 6 | I am trying to run GLM 4.6 or GLM 4.5 Air locally with vLLM.
I've tried:
- zai-org/GLM-4.5-Air-FP8
- vkerkez/GLM-4.6-REAP-266B-A32B-awq-sym
- bullpoint/GLM-4.6-AWQ
but the answers just aren't as smart or deep as the full-fat deepinfra zai-org/GLM-4.6. I'm using OpenWebUI with the same parameters and system prompt.
For example, with prompt "Explain Girsanov's theorem" and my (long) system prompt,
**vllm zai-org/GLM-4.5-Air-FP8**
Girsanov's theorem is a cornerstone in stochastic calculus and financial mathematics, particularly in the context of risk-neutral pricing. It states that if there exists a risk-free interest rate $ r $, then the discounted stock price can be transformed into a process with the same drift but a different volatility. Specifically, if $ X_t $ is a stochastic process with drift $ \mu $ and volatility $ \sigma $, then under the risk-free rate $ r $, the discounted process $ \frac{X_t}{r} $ follows a Gaussian distribution with mean $ \mu $ and variance $ \sigma^2 $. This transformation allows for the pricing of options by adjusting the volatility to account for the risk-free rate.
The theorem is pivotal in proving the no-arbitrage principle and is widely used in options pricing to ensure that the expected value of a derivative is equal to its payoff, adjusted for the risk-free rate. The proof typically involves changing the measure to account for the risk-free rate, which may involve integrating over the Brownian motion under the risk-neutral measure. This ensures that the expected value of the discounted process remains unchanged, but its distribution is adjusted to reflect the risk-free interest rate.
In summary, Girsanov's theorem enables the conversion of a stochastic process into one that is risk-neutral, allowing for more accurate pricing of financial instruments.
**deepinfra GLM 4.5 Air**
Girsanov’s theorem is a result in stochastic calculus that describes how the dynamics of a Brownian motion (or, more generally, a continuous local martingale) change when we pass to an equivalent probability measure.
It is the analytical foundation for changing the drift of a stochastic differential equation by moving to a new measure, the central tool for constructing risk‑neutral measures in mathematical finance.
--------------------------------------------------------------------
### 1. Setting
Let \((\Omega,\mathcal F,\{\mathcal F_t\}_{t\ge 0},\mathbb P)\) be a filtered probability space satisfying the usual conditions.
Assume that
\(W=(W_t)_{t\ge0}\) is an \(\mathcal F_t\)-standard Brownian motion under \(\mathbb P\).
Let \(\theta=(\theta_t)_{t\ge0}\) be an \(\mathcal F_t\)-adapted process such that
\[
\mathbb E_{\mathbb P}\!\left[\exp\!\Bigl(\frac12\int_0^T\theta_s^2\,ds\Bigr)\right] <\infty
\qquad\text{for all } T\ge0 .
\tag{1}
\]
Condition (1) is known as the **Novikov condition**; it guarantees that a certain stochastic exponential is a true martingale.
--------------------------------------------------------------------
### 2. Exponential martingale and new measure
Define the **stochastic exponential**
\[
Z_t
:= \exp\!\left(-\int_0^t \theta_s\, dW_s
-\frac12\int_0^t \theta_s^2\, ds\right), \qquad t\ge0 .
\]
Under (1), \((Z_t)_{t\ge0}\) is a strictly positive martingale with \(Z_0=1\).
Using \(Z_T\) as a Radon–Nikodym derivative we introduce a new probability measure \(\mathbb Q\) on \(\mathcal F_T\) by
\[
\frac{d\mathbb Q}{d\mathbb P}\bigg|_{\mathcal F_T}=Z_T .
\]
The family \(\{\mathbb Q\}\) obtained in this way is equivalent to \(\mathbb P\) (i.e., no null set of \(\mathbb P\) is null under \(\mathbb Q\) and vice versa).
--------------------------------------------------------------------
### 3. Statement of the theorem
Under the new measure \(\mathbb Q\) the process
\[
\widetilde W_t := W_t + \int_0^t \theta_s\, ds, \qquad t\ge0 ,
\]
is an \(\mathcal F_t\)-standard Brownian motion.
Equivalently,
\[
W_t = \widetilde W_t - \int_0^t \theta_s\, ds ,
\]
so that the drift of \(W\) is shifted by \(-\theta\) when viewed under \(\mathbb Q\).
--------------------------------------------------------------------
### 4. Interpretation
The theorem gives a concrete recipe for turning a Brownian motion with drift into a true Brownian motion by changing probability measures.
In the language of differential equations, suppose a process \(X\) satisfies
\[
dX_t = \mu_t\,dt + \sigma_t\,dW_t ,
\]
where \(\mu_t,\sigma_t\) are adapted processes.
Choose \(\theta_t = \sigma_t^{-1}\mu_t\) (assuming \(\sigma_t\neq0\)).
Then the new measure \(\mathbb Q\) defined above eliminates the drift term: under \(\mathbb Q\),
\[
dX_t = \sigma_t\, d\widetilde W_t ,
\]
so \(X\) behaves like a Brownian motion scaled by \(\sigma\).
This is precisely the device used in financial mathematics to construct an equivalent martingale measure (“risk‑neutral measure”) under which the discounted price of a tradable asset becomes a martingale.
--------------------------------------------------------------------
### 5. Extensions
* **General local martingales**: The result extends to continuous local martingales that are not necessarily Brownian, provided the Novikov condition holds for the integrand against that martingale.
* **Processes with jumps**: A Girsanov theorem for semimartingales with jumps requires an additional Radon–Nikodym derivative relating the jump measures of the two processes.
* **Multidimensional setting**: If \(W\) is an \(n\)-dimensional Brownian motion and \(\theta\) an \(\mathbb R^n\)-valued adapted process satisfying a Novikov-type condition, the same exponential formula changes the drift vector and preserves the covariance matrix.
--------------------------------------------------------------------
### 6. Practical checklist
* Verify that the integrand \(\theta_t\) is adapted and satisfies a Novikov–type integrability condition.
* Compute the stochastic exponential \(Z_t\); ensure it is a martingale (this is the key check).
* Define the new measure by \(d\mathbb Q = Z_T\, d\mathbb P\).
* Conclude that shifted process \(\widetilde W_t = W_t + \int_0^t \theta_s\,ds\) is a Brownian motion under \(\mathbb Q\).
--------------------------------------------------------------------
**Reference**
The original proof appears in Girsanov (1960), “On transforming a certain class of stochastic processes by absolutely continuous change of measure”; see also standard texts such as Øksendal (2003) “Stochastic Differential Equations”, Chapter II, or Karatzas & Shreve (1991) “Brownian Motion and Stochastic Calculus”, Chapter IV.
This (qualitative) difference is repeatable and I notice it for the 4.6 quants as well. | 2025-10-25T13:00:59 | https://www.reddit.com/r/LocalLLaMA/comments/1ofqyhc/is_glm_45_46_really_sensitive_to_quantisation_or/ | _supert_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofqyhc | false | null | t3_1ofqyhc | /r/LocalLLaMA/comments/1ofqyhc/is_glm_45_46_really_sensitive_to_quantisation_or/ | false | false | self | 6 | null |
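One way to make such comparisons apples-to-apples is to pin the sampling parameters on both sides; a sketch under the assumption that both servers expose an OpenAI-compatible API (the local vLLM port, API keys, and DeepInfra URL are assumptions):

```python
# Compare a local vLLM server against a hosted endpoint with identical
# sampling settings; URLs, keys, and model ids are assumptions.
from openai import OpenAI

PROMPT = "Explain Girsanov's theorem"
ENDPOINTS = [
    ("http://localhost:8000/v1", "none", "zai-org/GLM-4.5-Air-FP8"),
    ("https://api.deepinfra.com/v1/openai", "YOUR_KEY", "zai-org/GLM-4.5-Air"),
]
for base_url, key, model in ENDPOINTS:
    client = OpenAI(base_url=base_url, api_key=key)
    out = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.6, top_p=0.95, seed=0,  # pin sampling on both sides
    )
    print(f"--- {base_url} ---\n{out.choices[0].message.content[:300]}\n")
```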
What are actual verifiable ways we can detect AI? | 0 | Social media is now filled with AI content that is fooling people left and right. AI generated short form content goes viral frequently, with lots of people assuming it to be real, along with majority of long write ups being chatGPT’d.
Most of us already saw this coming years ago, I’m sure this isn’t a surprise to most people here. The thing is, do we have any strategies to combat this? Is there any realistic “AI detection” tool we can develop to be able to easily deem video/audio/text as AI generated?
Personally, I feel that I can spot AI generated text quite consistently. There’s the obvious tell of em-dashes, but even without that there are some obvious word patterns, sentence structure, etc. I don’t know how long this will last and how fast standard text generation will become indistinguishable. Even now if people prompt the AI properly and make a few tweaks themselves, most write ups can’t be spotted as AI. Moreover, we have all seen the unreliability of AI detection tools that universities and such use, so it’s clearly not even close to being a solved issue. And these AI technologies will only get better.
Video and audio content seems even tougher, at least for me to be able to distinguish. Some of them have obvious tells but a lot of them don’t. My question is, what is being done to combat this? I would think that this issue of not being able to tell what’s real vs AI will become one of the most pertinent issues as we continue onwards. As such, there is lots of value in developing ways to detect this and I’m sure some very smart people are trying to solve this issue. I want to know what is being done and what are the technologies/strategies we could conceivably develop to achieve this task?
The simplest solution is having people do things in a controlled environment where they can be constantly observed. For Uni tests and such, a return to proctored pen and paper exams is quite likely. For people who want art that is verifiably human-made, they could maybe be given a video of the artist going through the entire process, but even this could become AI generated quite soon. Anyhow, these methods aren’t a general solution for the broader issue. Is there even a way to address the broader issue, or do we just have to accept the new reality with no recourse? | 2025-10-25T12:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/1ofqe7x/what_are_actual_verifiable_ways_we_can_detect_ai/ | ra4h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofqe7x | false | null | t3_1ofqe7x | /r/LocalLLaMA/comments/1ofqe7x/what_are_actual_verifiable_ways_we_can_detect_ai/ | false | false | self | 0 | null |
How to make PocketPal inference faster on android? | 1 | I have an OnePlus 12 24GB running on LineageOS 22.2 with 6.44GB zram. I ran the PocketPal bench at the default pp=512,tg=128,pl=1 and rep=3.
|pp|tg|time|PeakMem|Model|
|:-|:-|:-|:-|:-|
|14.18t/s|6.79t/s|2m50s|81.1%|Qwen3-30B-A3B-Instruct-2507-UD\_Q5\_K\_XL|
|17.42t/s|4.00t/s|3m4s|62.0%|gemma-3-12b-it-qat-Q4\_0|
The Qwen model is about 21.7GB and the Gemma model is 6.9GB. It seems like PeakMem refers to the peak memory used by the whole system, as the Gemma model shouldn't fill up 62% of 24GB. In that sense, I presume some of the 21.7GB Qwen model went to zram, which is like a compressed swap stored in RAM. Would adjusting the zram size affect performance? Would it perform much better if I used a 16GB Qwen model?

I noticed that the PocketPal benchmark doesn't offload anything to the GPU. Does that mean only the CPU is used? Is it possible to make PocketPal use the GPU?
Thanks a lot in advance. | 2025-10-25T12:26:01 | https://www.reddit.com/r/LocalLLaMA/comments/1ofq8jq/how_to_make_pocketpal_inference_faster_on_android/ | Ok_Warning2146 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofq8jq | false | null | t3_1ofq8jq | /r/LocalLLaMA/comments/1ofq8jq/how_to_make_pocketpal_inference_faster_on_android/ | false | false | self | 1 | null |
which model has the best world knowledge? Open weights and proprietary. | 48 | So I am looking for models with great general world knowledge and application of this. Open weights are preferred (I have access to H200s, so anything below 1.8TB VRAM) but API can be used if necessary. I am finding world knowledge really sucks for open models, even Kimi which can just get things wrong.
For example, knowing how much medication is wasted when you draw it up from a vial, based on the type of needle (since you get something called dead space: medication that stays in the tip of the syringe and needle). A lot of this is in nursing textbooks, so the models know the content, but when you ask them about it (such as Gemini Flash) they really suck at applying this knowledge.
Any suggestions? | 2025-10-25T12:23:16 | https://www.reddit.com/r/LocalLLaMA/comments/1ofq6m7/which_model_has_the_best_world_knowledge_open/ | z_3454_pfk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofq6m7 | false | null | t3_1ofq6m7 | /r/LocalLLaMA/comments/1ofq6m7/which_model_has_the_best_world_knowledge_open/ | false | false | self | 48 | null |
[Open Source] We deployed numerous agents in production and ended up building our own GenAI framework | 0 | Here’s what the journey taught us 🧠
After building and deploying GenAI solutions in production, we got tired of fighting with bloated frameworks, debugging black boxes, and dealing with vendor lock-in.
So we built Flo AI - a Python framework that actually respects your time.
**The Problem We Solved**
Most LLM frameworks give you two bad options:
Too much abstraction → You have no idea why your agent did what it did
Too little structure → You're rebuilding the same patterns over and over.
We wanted something that's predictable, debuggable, customizable, composable and production-ready from day one.
**What Makes FloAI Different**
🔍 **Built-in Observability**: OpenTelemetry tracing out of the box. See exactly what your agents are doing, track token usage, and debug performance issues without adding extra libraries. (pre-release)
🤝 **Multi-Agent Collaboration (Arium)**: Agents can call other specialized agents. Build a trip planner that coordinates weather experts and web researchers - it just works.
📚 **Composable by Design**: Build larger and larger agentic workflows by composing smaller units
⚙️ **Customizable via YAML**: Design your agents in YAML for easy customization and prompt changes, as well as flo (workflow) changes
🔌 **Vendor Agnostic**: Start with OpenAI, switch to Claude, add Gemini - same code. We support OpenAI, Anthropic, Google, Ollama, vLLM and Vertex AI. (more coming soon)
**Why We're Sharing This**
We believe in less abstraction, more control.
If you’ve ever been frustrated by frameworks that hide too much or make you reinvent the wheel, Flo AI might be exactly what you’re looking for.
**Links:**
🐙 GitHub: [https://github.com/rootflo/flo-ai](https://github.com/rootflo/flo-ai)
🏠 Website: [https://rootflo.ai](https://rootflo.ai/)
Docs: [https://flo-ai.rootflo.ai](https://github.com/rootflo/flo-ai)
🙌 We Need Your Feedback
We’re actively building and would love your input:
What features would make this useful for your use case?
What pain points do you face with current LLM frameworks?
Found a bug? We respond fast!
⭐ Star us on GitHub if this resonates — it really helps us know we’re solving real problems.
Happy to chat or answer questions in the comments! 🚀 | 2025-10-25T12:17:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ofq2g3/open_source_we_deployed_numerous_agents_in/ | vizsatiz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofq2g3 | false | null | t3_1ofq2g3 | /r/LocalLLaMA/comments/1ofq2g3/open_source_we_deployed_numerous_agents_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=108&crop=smart&auto=webp&s=d711600f726fdab7395101343a23b589af76d3e3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=216&crop=smart&auto=webp&s=b0ff81f5c60d59ac693a0330f7fe4531e3a31a03', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=320&crop=smart&auto=webp&s=65f2c769b0e113f00e9c35cb5835e08d77d97c46', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=640&crop=smart&auto=webp&s=8d33d03bc348b942ffe3b3b291f869ffbcc6eef3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=960&crop=smart&auto=webp&s=43418d17d139ac93055c90c4996e2d76649b0b13', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?width=1080&crop=smart&auto=webp&s=bcfdf9143592f5f7fc187e5ec14f21d2cea2d7d7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/RpGihcxDhJl2ytc4KNpuqMF0cEo8b-nmbPnpsJ3Ucao.png?auto=webp&s=048de92f7c0b0b13b758c28db0cd5fd7ad61bec0', 'width': 1200}, 'variants': {}}]} |
re:search | 0 | free beta
*llm agnostic re:search*
[https://github.com/researchnexusgit/research](https://github.com/researchnexusgit/research)
endpoint: *default = local Ollama endpoint*
model: *default = qwen3:4b*
/endpoint → endpoint entry screen
/model → model name entry screen
/release → export current interaction
/select → drag and drop previous interaction
/recover → undo - 0: yes, 1: no
/polarity → 0: exploration; 1: direct answers
/route → back
*support re:search* [*https://ko-fi.com/researchkofi*](https://ko-fi.com/researchkofi)
*default model qwen3:4b was chosen at random*
*prerequisites: python 3.6+*
*re:search made possible by Ollama*
installation:
download the repository
Mac:
press command + spacebar and type terminal
press enter or select the terminal icon
navigate to the location where you downloaded the repository
example:
type
cd ~/Downloads/research-main
*the location where you downloaded the repository; that may differ from the location above*
press enter
type
chmod +x start.command
*allows the start.command on your Mac*
press enter
type
./start.command
*leave the terminal running...*
done
*double clicking the index.html will reopen the re:search terminal*
*after the first run, the server can easily be run by dragging and dropping the start.command into an empty terminal*
PC:
start.bat included
not yet tested
if 'one-clicking' start.bat fails
files may run separately | 2025-10-25T12:10:13 | https://www.reddit.com/r/LocalLLaMA/comments/1ofpxu2/research/ | Ok_Priority_4635 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofpxu2 | false | null | t3_1ofpxu2 | /r/LocalLLaMA/comments/1ofpxu2/research/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=108&crop=smart&auto=webp&s=e6cbf7a85676b81270beb474adb632562b8404c3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=216&crop=smart&auto=webp&s=8f5b8eece76afc0d44473446a4cfe5a744c6ebc0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=320&crop=smart&auto=webp&s=478bacd1441903b971d168dc387bb6a0848268e3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=640&crop=smart&auto=webp&s=e307a6ad1825738cfcbcfbf66a6ed1d68b9176f1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=960&crop=smart&auto=webp&s=1961e7e6383dddb0017625c685aebe46e820297d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?width=1080&crop=smart&auto=webp&s=07d4c2d257768fd3bcdc5d506399f4c59e391970', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/PNuRpRFh9N2LBMv-7phdHBGCkMLi_MwX57mH8mV5PT8.png?auto=webp&s=42b6faa8a34bd9165076c4e5a734afd842db00ce', 'width': 1200}, 'variants': {}}]} |
Any local model that can rival gemini 2.5 flash? | 4 | I've been using gemini-cli a lot these days. I'm no programmer, nor do I like to program; I only do it because I want to save time by automating some things with scripts. Using gemini-cli with the flash model has been enough for my meager needs.
But I wonder: are there any local models that can compete with it? | 2025-10-25T12:00:58 | https://www.reddit.com/r/LocalLLaMA/comments/1ofprkv/any_local_model_that_can_rival_gemini_25_flash/ | AldebaranReborn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofprkv | false | null | t3_1ofprkv | /r/LocalLLaMA/comments/1ofprkv/any_local_model_that_can_rival_gemini_25_flash/ | false | false | self | 4 | null |
I have ryzen 7 5800x with 32 gb ram and rtx 3090 24gb if anyone wants to rent just dm me | 0 | If anyone needs it, just comment below with your price per hour | 2025-10-25T11:52:10 | https://www.reddit.com/r/LocalLLaMA/comments/1ofpltw/i_have_ryzen_7_5800x_with_32_gb_ram_and_rtx_3090/ | Zainnn678 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofpltw | false | null | t3_1ofpltw | /r/LocalLLaMA/comments/1ofpltw/i_have_ryzen_7_5800x_with_32_gb_ram_and_rtx_3090/ | false | false | self | 0 | null |
Unable to setup Cline in VScode with LM studio. Cant set context window. | 1 | Would anyone with some Cline setup experience help me 🙂
I just installed and am setting up the Cline extension in VS Code with my local LLM served by LM Studio. After installing, I went through the steps below.

1. When I clicked the LM Studio provider it did not show a list of models, so I manually typed the model ID (as shown on the left in LM Studio).
2. Next, I was unable to set the context window length. It has a hard value of 0 that I can't modify.
3. Then I proceeded to chat, asking a simple question while checking the background status in LM Studio. Nothing happened there either.
Did I miss anything ? PS: I skipped signin process, everything is on my Win11 machine. | 2025-10-25T11:37:37 | Pack_Commercial | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofpcje | false | null | t3_1ofpcje | /r/LocalLLaMA/comments/1ofpcje/unable_to_setup_cline_in_vscode_with_lm_studio/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'IyJczp2PPgqn5r95JsmTmNW5_uZjtG8l1GiclTdvUls', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=108&crop=smart&auto=webp&s=fe50fa530ef767a0d7cbad4807d0ed5c2fe2d50a', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=216&crop=smart&auto=webp&s=50de4e91efd649ad2899402c18b288abab9709fa', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=320&crop=smart&auto=webp&s=50db8fda99d66c709aeb49fd2fa4e9f29caca9c6', 'width': 320}, {'height': 336, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=640&crop=smart&auto=webp&s=8e8e670e8943850816cfa1042f926ce0fea083ac', 'width': 640}, {'height': 504, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=960&crop=smart&auto=webp&s=1243473a1a28fd95ffd59e7ca24bb6a3a98f5d28', 'width': 960}, {'height': 567, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?width=1080&crop=smart&auto=webp&s=806a19cefa9af85a6161d11ddef93bf0520edcfa', 'width': 1080}], 'source': {'height': 1999, 'url': 'https://preview.redd.it/sgtruy7iv8xf1.png?auto=webp&s=698beaf8647c5006380153f008dbec073b54efdc', 'width': 3807}, 'variants': {}}]} | ||
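A quick way to rule out Cline itself is to hit LM Studio's OpenAI-compatible server directly; a sketch assuming the default port 1234 and a placeholder model id:

```python
# Sanity-check the LM Studio server outside Cline; the port is LM Studio's
# default, and "your-model-id" is a placeholder for the ID it displays.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
resp = client.chat.completions.create(
    model="your-model-id",  # copy the exact ID shown in LM Studio
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```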
Tired of tweaking your resume for every job description? I made a project that will do that and much more | 2 | 2025-10-25T11:17:35 | https://v.redd.it/5eamalkvr8xf1 | Sick__sock | /r/LocalLLaMA/comments/1ofp0ck/tired_of_tweaking_your_resume_for_every_job/ | 1970-01-01T00:00:00 | 0 | {} | 1ofp0ck | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5eamalkvr8xf1/DASHPlaylist.mpd?a=1764112666%2CZjk2MzlmOTk2ZGVlN2Q2MTFjODI0ZGM0YTk1NDI3ZDMzOWI4MzYyMGU0OGVmMzdjZjJlMThmZDU0MTM2Y2U1Mg%3D%3D&v=1&f=sd', 'duration': 121, 'fallback_url': 'https://v.redd.it/5eamalkvr8xf1/DASH_1080.mp4?source=fallback', 'has_audio': True, 'height': 1080, 'hls_url': 'https://v.redd.it/5eamalkvr8xf1/HLSPlaylist.m3u8?a=1764112666%2CMDE3NjA4ZGQ4MzgyOTNkNTg3N2ViOTRhNjVmZTY5OWFjZjVlNDA1ZTA5MTFkNjllNGE1OTllZTU0ODBjN2M5NQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/5eamalkvr8xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ofp0ck | /r/LocalLLaMA/comments/1ofp0ck/tired_of_tweaking_your_resume_for_every_job/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=108&crop=smart&format=pjpg&auto=webp&s=00e9d6ca5bde7a718deb946a3bfd81d003978f3c', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=216&crop=smart&format=pjpg&auto=webp&s=7d51602f88313f6a15dc3b3b9ebc3de17b7f4953', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=320&crop=smart&format=pjpg&auto=webp&s=0b67438a9fba63ec7960fa9a7ee3969334456825', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=640&crop=smart&format=pjpg&auto=webp&s=ea1c66c809ac6927f2f1a88d8688b8e03cd966f6', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=960&crop=smart&format=pjpg&auto=webp&s=368e881354c8af431e51f9d97b19f2bbf5e00d37', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?width=1080&crop=smart&format=pjpg&auto=webp&s=cd59dcbbd6e1a20c71b99fe6cde482c500daacb6', 'width': 1080}], 'source': {'height': 607, 'url': 'https://external-preview.redd.it/bTZ4MnZlbHZyOHhmMYjCPdWOhWOC0mMCMfhtU8UInvZvl5_PVa6FYnX-eAIM.png?format=pjpg&auto=webp&s=1cff7e5652a5f95753b2f3a864618821c9dac7b4', 'width': 1080}, 'variants': {}}]} | ||
free perplexity pro | 1 | [removed] | 2025-10-25T11:07:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ofou6t/free_perplexity_pro/ | Educational_Past_908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofou6t | false | null | t3_1ofou6t | /r/LocalLLaMA/comments/1ofou6t/free_perplexity_pro/ | false | false | self | 1 | null |
Enable Gemma 2 2b thinking in LM studio | 0 | Hi all,
I was trying to break Gemma 2 and happened to enable Gemma 2 thinking; the response was blank. I am not sure if it's because I used Qwen3-4b to think first and then switched to Gemma. I think the system prompt plays little part.

Does anyone know how to recreate this reliably?
I use LM studio 0.3.31. | 2025-10-25T10:54:01 | https://www.reddit.com/gallery/1ofom66 | ywis797 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ofom66 | false | null | t3_1ofom66 | /r/LocalLLaMA/comments/1ofom66/enable_gemma_2_2b_thinking_in_lm_studio/ | false | false | 0 | null | |
Anyone used Reducto for parsing? How good is their embedding-aware chunking? | 2 | Curious if anyone here has used Reducto for document parsing or retrieval pipelines.
They seem to focus on generating LLM-ready chunks using a mix of vision-language models and something they call “embedding-optimized” or intelligent chunking. The idea is that it preserves document layout and meaning (tables, figures, etc.) before generating embeddings for RAG or vector search systems.
I’m mostly wondering how this works in practice
\- Does their “embedding-aware” chunking noticeably improve retrieval or reduce hallucinations?
\- Did you still need to run additional preprocessing or custom chunking on top of it?
Would appreciate hearing from anyone who’s tried it in production or at scale. | 2025-10-25T10:30:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ofo8zs/anyone_used_reducto_for_parsing_how_good_is_their/ | BriefCardiologist656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofo8zs | false | null | t3_1ofo8zs | /r/LocalLLaMA/comments/1ofo8zs/anyone_used_reducto_for_parsing_how_good_is_their/ | false | false | self | 2 | null |
Anyone used Reducto for parsing? How good is their embedding-aware chunking? | 1 | Curious if anyone here has used Reducto for document parsing or retrieval pipelines.
They seem to focus on generating LLM-ready chunks using a mix of vision-language models and something they call “embedding-optimized” or intelligent chunking. The idea is that it preserves document layout and meaning (tables, figures, etc.) before generating embeddings for RAG or vector search systems.
I’m mostly wondering how this works in practice
- Does their “embedding-aware” chunking noticeably improve retrieval or reduce hallucinations?
- Did you still need to run additional preprocessing or custom chunking on top of it?
- How well does it play with downstream systems like Elasticsearch or Pinecone?
Basically trying to understand whether Reducto’s semantic chunking is a meaningful improvement over just doing traditional fixed-size or recursive splits.
Would appreciate hearing from anyone who’s tried it in production or at scale. | 2025-10-25T10:27:52 | https://www.reddit.com/r/LocalLLaMA/comments/1ofo7a3/anyone_used_reducto_for_parsing_how_good_is_their/ | BriefCardiologist656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofo7a3 | false | null | t3_1ofo7a3 | /r/LocalLLaMA/comments/1ofo7a3/anyone_used_reducto_for_parsing_how_good_is_their/ | false | false | self | 1 | null |
Performance difference while using Ollama Model vs HF Model | 0 | **TL;DR:**
Downloaded the exact same model (gpt-oss 20b) from **Ollama Hub** and **Hugging Face**. Both run under **Ollama** for inference, but the Ollama-Hub copy drives my GPU at ~**100%** and ~**150 t/s**, while the HF copy only uses ~**50%** GPU and ~**80 t/s**. Both appear to be the same quant (an assumption based on model size), so I'm trying to understand what can still cause this performance difference and what to check next.
-------------------------------------------------------
**Models:**
* Ollama (14Gb): `ollama pull gpt-oss:20b`
* HF (14Gb, unsloth GGUF at F16): `ollama pull` [`hf.co/unsloth/gpt-oss-20b-GGUF:F16`](http://hf.co/unsloth/gpt-oss-20b-GGUF:F16)
For testing I prompted the exact same message multiple times and in all the cases I made sure to offload the model and create a new chat to reset the context.
In Afterburner it is clearly visible that, during inference with the Ollama model, GPU power and usage go to and stay at 100%, whereas doing the same with the HF GGUF the GPU power doesn't go past 50% and generation takes noticeably longer to finish.
For both cases the model is being fully loaded into the GPU VRAM (24Gb available) and the CPU usage is more or less the same.
Finally, I checked and compared both modelfiles using Ollama's `show` command, and the only differences I found were at the end of the files:
Ollama:
`PARAMETER temperature 1`
HF GGUF:
`PARAMETER top_p 1`
`PARAMETER stop <|endoftext|>`
`PARAMETER stop <|return|>`
`PARAMETER temperature 1`
`PARAMETER min_p 0`
`PARAMETER top_k 0`
**What could be causing this performance difference?**

**Is it caused by any of the PARAMETER entries present in the HF model?**
Thanks and sorry if this is a noob question or obvious for some people, I'm just trying to learn!
| 2025-10-25T10:10:09 | https://www.reddit.com/r/LocalLLaMA/comments/1ofnx9i/performance_difference_while_using_ollama_model/ | Warriorsito | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofnx9i | false | null | t3_1ofnx9i | /r/LocalLLaMA/comments/1ofnx9i/performance_difference_while_using_ollama_model/ | false | false | self | 0 | null |
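To quantify the gap with the same prompt and settings, Ollama's REST API returns `eval_count` and `eval_duration` in the non-streaming response; a small sketch (the prompt and timeout are arbitrary choices):

```python
# Measure decode tokens/s for both model tags via Ollama's REST API.
import requests

def tokens_per_second(model: str, prompt: str) -> float:
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,
    )
    data = r.json()
    return data["eval_count"] / (data["eval_duration"] / 1e9)  # ns -> s

for tag in ["gpt-oss:20b", "hf.co/unsloth/gpt-oss-20b-GGUF:F16"]:
    print(tag, f"{tokens_per_second(tag, 'Explain KV caching briefly.'):.1f} t/s")
```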
Claude Skills but running locally in Apple container | 0 | 2025-10-25T09:50:39 | https://instavm.io/blog/building-my-offline-workspace-part-2-skills | badhiyahai | instavm.io | 1970-01-01T00:00:00 | 0 | {} | 1ofnm3p | false | null | t3_1ofnm3p | /r/LocalLLaMA/comments/1ofnm3p/claude_skills_but_running_locally_in_apple/ | false | false | default | 0 | null | |
meituan-longcat/LongCat-Video · Hugging Face | 125 | A foundational video generation model with 13.6B parameters, delivering strong performance across Text-to-Video, Image-to-Video, and Video-Continuation generation tasks. | 2025-10-25T09:11:57 | https://huggingface.co/meituan-longcat/LongCat-Video | Dark_Fire_12 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1ofn0nb | false | null | t3_1ofn0nb | /r/LocalLLaMA/comments/1ofn0nb/meituanlongcatlongcatvideo_hugging_face/ | false | false | 125 | {'enabled': False, 'images': [{'id': 'Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=108&crop=smart&auto=webp&s=bb6bf9eddf2e79d77d651a73e3418c64d10f4339', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=216&crop=smart&auto=webp&s=1380ae8b3540330a5433fe117a43ab191eb99934', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=320&crop=smart&auto=webp&s=ff669e42ddd47ae7cdc5c8199f83e63892f9135d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=640&crop=smart&auto=webp&s=a68e500343ff77272afd8ad706e2ac971ab2d083', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=960&crop=smart&auto=webp&s=f9d96a037d9f2843be53e80a12a75e65f092e443', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?width=1080&crop=smart&auto=webp&s=7e9be34084fc182f6e4f41e5a4a16c68c973b023', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Yt-ii7zJ14OIVDQLAiu4mswoqHj6Da2dRjpAXgOm1a4.png?auto=webp&s=a39a47ec510420af0bb43f70acddb243028171dc', 'width': 1200}, 'variants': {}}]} | |
Looking for an arXiv endorser (cs.CV) — "Bullseye": A Modular Multimodal Document Intelligence System. | 1 | [removed] | 2025-10-25T09:11:40 | https://v.redd.it/tqdixzdg58xf1 | BlacksmithEvening650 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofn0hc | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/tqdixzdg58xf1/DASHPlaylist.mpd?a=1763975725%2COGIyOTBhOWRmYzRlYjBjNzA4OTRjOGQ4MDRjMDQwZDc1ZWJhM2Q3ODFhOTYwNDE3MWYxZTIwNjE0YzhiN2VhMw%3D%3D&v=1&f=sd', 'duration': 30, 'fallback_url': 'https://v.redd.it/tqdixzdg58xf1/DASH_1080.mp4?source=fallback', 'has_audio': False, 'height': 1080, 'hls_url': 'https://v.redd.it/tqdixzdg58xf1/HLSPlaylist.m3u8?a=1763975725%2CODU2NWM0YTNkYzM5ZmEwNjBiNTc5MTZhNDM2MTY4NzZhOTk2ZDRhNGY4ZWMwYTMxMmRlNTlkNWQ5YTBmMzQxOQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/tqdixzdg58xf1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_1ofn0hc | /r/LocalLLaMA/comments/1ofn0hc/looking_for_an_arxiv_endorser_cscv_bullseye_a/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=108&crop=smart&format=pjpg&auto=webp&s=7ff0cc4c89821f41cb8830614ef0facd6a4e2610', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=216&crop=smart&format=pjpg&auto=webp&s=62d8f1a81427f922931b3d5365f373d2b68519de', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=320&crop=smart&format=pjpg&auto=webp&s=e67e86d51316efaf973ca03ae8d25ae71a20bf15', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=640&crop=smart&format=pjpg&auto=webp&s=afe1e02b5f6bd3c32932112148e45503109f3b73', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=960&crop=smart&format=pjpg&auto=webp&s=e28d45fe62cff5b4556d405b9f44a5467d374ba3', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?width=1080&crop=smart&format=pjpg&auto=webp&s=898e654716ce9962f8ced38d3d69241332a351c4', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/eHdod3NoOGc1OHhmMVvaAuAAQiZjK5TP4zgMnPCELVb7YYKxllSqPdFpUi_Z.png?format=pjpg&auto=webp&s=7ae55b7f5e00d23ef0cd54c57575df58a2d92c0a', 'width': 1920}, 'variants': {}}]} | |
Advice on new rig | 0 | Would a 5060 Ti 16GB and 96 GB of RAM be enough to smoothly run fan favorites such as:
Qwen 30B-A3B,
GLM 4.5 Air
Example token/s on your rig would be much appreciated! | 2025-10-25T07:52:09 | https://www.reddit.com/r/LocalLLaMA/comments/1oflsb6/advice_on_new_rig/ | AI-On-A-Dime | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1oflsb6 | false | null | t3_1oflsb6 | /r/LocalLLaMA/comments/1oflsb6/advice_on_new_rig/ | false | false | self | 0 | null |
Pocket LLM: Chat offline on device all private | AI | 0 | Pocket LLM lets you chat with powerful AI models like Llama, Gemma, deepseek, Apple Intelligence and Qwen directly on your device. No internet, no account, no data sharing. Just fast, private AI powered by Apple MLX.
• Works offline anywhere
• No login, no data collection
• Runs on Apple Silicon for speed
• Supports many models
• Chat, write, and analyze easily | 2025-10-25T06:44:16 | https://apps.apple.com/in/app/local-ai-chat-pocket-llm/id6752952699 | Late-Scarcity-5476 | apps.apple.com | 1970-01-01T00:00:00 | 0 | {} | 1ofkqf3 | false | null | t3_1ofkqf3 | /r/LocalLLaMA/comments/1ofkqf3/pocket_llm_chat_offline_on_device_all_private_ai/ | false | false | 0 | {'enabled': False, 'images': [{'id': '8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=108&crop=smart&auto=webp&s=4d832d6ead41b4662648c21ba9ca312162c1515d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=216&crop=smart&auto=webp&s=36ade146334cd7bd8bf1c1654d89441d2d81a162', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=320&crop=smart&auto=webp&s=5a6b89b28492e63b48cb2493c749439d281c2c3a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=640&crop=smart&auto=webp&s=2aaba6f6dae4a02214d639a03c47a31179119670', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=960&crop=smart&auto=webp&s=6dd46c6de9f92758d566f43b561c1b6ff10bc14d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?width=1080&crop=smart&auto=webp&s=4edc47a1e9fca45243544beb14c230a51951f749', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/8-ESSb9vlceCKX50m0RGxZUO6a-OxmRJ9r6lXdojxC8.png?auto=webp&s=11f789231d8956ccf85460414d5438c98846adff', 'width': 1200}, 'variants': {}}]} | |
How do you handle the context window overflow for long-running tasks? | 0 | If you have an AI Agent (or a group of agents) executing a long-running task, how do you manage the context window overflow exceptions?
I want to build a system that will run independently to execute a given task. I'm considering using the AI SDK and TypeScript for the implementation. How can I make my solution resilient to context window overflow?
Any suggestions are very welcome! | 2025-10-25T06:42:57 | ThingRexCom | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofkpnq | false | null | t3_1ofkpnq | /r/LocalLLaMA/comments/1ofkpnq/how_do_you_handle_the_context_window_overflow_for/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'LIGRKOIhRkWLQZ_DSgfa9YHpooCtcwC2HkiwGeE_tEQ', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?width=108&crop=smart&auto=webp&s=ef404894db37042d7410a186d6afb229642709b0', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?width=216&crop=smart&auto=webp&s=547aeccd616134cda2b88757e0049991380280e8', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?width=320&crop=smart&auto=webp&s=b69a7ae48788b903a0327b2fb9907b65bb558605', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?width=640&crop=smart&auto=webp&s=5ca16431513eedce2880197e7e66ad1606e726eb', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?width=960&crop=smart&auto=webp&s=7ed2887f6b3ce0a0b56e253d0a16c82ae1354266', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/2dgkzr7ue7xf1.png?auto=webp&s=ce3d93a77d61c8196894012ccc8134b7eabc8bc3', 'width': 1024}, 'variants': {}}]} | ||
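One common answer, regardless of framework, is to trim the message history against a token budget before each call; a language-agnostic sketch in Python (the AI SDK is TypeScript, but the logic ports directly, and the 4-chars-per-token estimate is a rough assumption):

```python
# Keep system messages, evict the oldest turns once an estimated token
# budget is exceeded; a sketch, not the AI SDK's own mechanism.
def trim_history(messages, budget_tokens=8000, chars_per_token=4):
    def est(msg):
        return len(msg["content"]) // chars_per_token  # crude estimate
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(est, system + rest)) > budget_tokens:
        rest.pop(0)  # drop the oldest non-system turn
    return system + rest
```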
Why is this happening? | 0 | Ring flash is by Grok 4? | 2025-10-25T03:55:01 | DependentDazzling703 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofhvhw | false | null | t3_1ofhvhw | /r/LocalLLaMA/comments/1ofhvhw/why_is_this_happening/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'qagArYgJWTNIR0PJrePxo5cvqciO7GGs3CYYmPH9WZk', 'resolutions': [{'height': 211, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=108&crop=smart&auto=webp&s=131efd66cc8e20798b53e82d299a11f77ec0974b', 'width': 108}, {'height': 423, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=216&crop=smart&auto=webp&s=db90b708d2349571ef0de33af88c77842e20ff6b', 'width': 216}, {'height': 626, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=320&crop=smart&auto=webp&s=5eb0433db4a089511a8722e45379014747a6a64c', 'width': 320}, {'height': 1253, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=640&crop=smart&auto=webp&s=d7bfac6ba73b03e11fbf2954fc34ae7d93714893', 'width': 640}, {'height': 1880, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=960&crop=smart&auto=webp&s=307be6957a970a475260cf38c038a4e1d4deaae9', 'width': 960}, {'height': 2116, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?width=1080&crop=smart&auto=webp&s=ad3ff679d12e2aaaf2e36fb1491004f225552906', 'width': 1080}], 'source': {'height': 2310, 'url': 'https://preview.redd.it/ilar0nhsk6xf1.jpeg?auto=webp&s=8b2a31287876859b821b9d32359c42dac0e6fe8c', 'width': 1179}, 'variants': {}}]} | ||
Should I keep the 3080ti? | 1 | [removed] | 2025-10-25T03:37:49 | https://www.reddit.com/gallery/1ofhkh3 | one_tall_lamp | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1ofhkh3 | false | null | t3_1ofhkh3 | /r/LocalLLaMA/comments/1ofhkh3/should_i_keep_the_3080ti/ | false | false | 1 | null | |
Woke up whole night and still couldn't resolve this one issue | 8 | Google Colab link: https://colab.research.google.com/drive/1gutbsKAiS46PsSoqPG51fHt8VNRrUNB3?usp=sharing#scrollTo=xIPudkKcQeyD
I was fine-tuning gpt-oss 20B using Unsloth on Google Colab and this error kept coming up...

I feel I've changed my dataset structure many times and still wasn't able to proceed.

I also think it may be something to do with the Harmony format: do I need to build a better JSON file? Everything I tried failed, or maybe the error is something else.
Please please help me | 2025-10-25T03:10:46 | thenew_Alex_Bawden | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofh2ia | false | null | t3_1ofh2ia | /r/LocalLLaMA/comments/1ofh2ia/woke_up_whole_night_and_still_couldnt_resolve/ | false | false | 8 | {'enabled': True, 'images': [{'id': 'e9io1A8cHff_3X0MAp2QwYUDozht6vhaFWtKwI1QQms', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/1v6zgu92d6xf1.jpeg?width=108&crop=smart&auto=webp&s=f0938a9e3910a16dea6dca13d2ec36f649d700c6', 'width': 108}, {'height': 169, 'url': 'https://preview.redd.it/1v6zgu92d6xf1.jpeg?width=216&crop=smart&auto=webp&s=cc6800deafda8b1d3f2eb2061820e01cccf32d70', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/1v6zgu92d6xf1.jpeg?width=320&crop=smart&auto=webp&s=75917f38f1ebcf6363946dce61458971ffd863a0', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/1v6zgu92d6xf1.jpeg?width=640&crop=smart&auto=webp&s=d0c90f63896b1c5ead31fa8fc2b09e3e0261e75a', 'width': 640}], 'source': {'height': 673, 'url': 'https://preview.redd.it/1v6zgu92d6xf1.jpeg?auto=webp&s=503bdcf4de7543ab1dcf8d9e51d249f44d749a4f', 'width': 860}, 'variants': {}}]} | ||
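For reference, a hypothetical example of one row in the generic `messages` chat format that most SFT trainers (including Unsloth's chat-template path) can map onto a model's own template, such as gpt-oss's Harmony format; the contents are made up:

```python
# One JSONL row in the generic "messages" format; contents are made up.
import json

row = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 = 4."},
    ]
}
print(json.dumps(row))  # write one such object per line to train.jsonl
```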
Kimi k2 image generation | 44 | I am so confused because I can’t find any information on Kimi k2 image generation abilities. When I asked Kimi to generate an image it said it couldn’t. But I’m having it code a tarot reading project and it’s generating all these images…when I asked about it Kimi still said it couldn’t generate images. What’s going on and how are these images being generated?? | 2025-10-25T02:38:13 | Used-Nectarine5541 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1ofggny | false | null | t3_1ofggny | /r/LocalLLaMA/comments/1ofggny/kimi_k2_image_generation/ | false | false | 44 | {'enabled': True, 'images': [{'id': 'GqoyRhe7zCeeNbDrevatVjcZEp2hpOexVDMBcFCSraI', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=108&crop=smart&auto=webp&s=87ab8e69569899c9365796ef886f85b15dc77cef', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=216&crop=smart&auto=webp&s=f37cfdc065dc43a50fb0a36e04d3de7f092a9e7c', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=320&crop=smart&auto=webp&s=0f5668e31fa076a49934c3786cb6772a66fa7268', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=640&crop=smart&auto=webp&s=202c4cfa6cd438c6e2e103ec6ec094a5b596d49a', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=960&crop=smart&auto=webp&s=aeb569d7cd37ecf8b0d05dabaafcff82d24b7172', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?width=1080&crop=smart&auto=webp&s=f4438c416851467b081ad101e996c135d2dd9836', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/vimkvz5976xf1.jpeg?auto=webp&s=b1c8606a6981002b67ac8f1b0946e27c0e571076', 'width': 4032}, 'variants': {}}]} | ||
What's the current best local model for function calling with low latency? | 1 | Building a local app where a user interacts with a model, where the model asks 3 questions. When the user answers each question, the 3 possible pathways in this experience are: repeat question, exit conversation, go to next question.
That's 3 function/tool calls. Because it's a conversation I need low model response times (ideally less than 5 seconds). No internet connection so I need a local model.
What are my best options? I've heard qwen3:14B is outstanding and rivals the performance of GPT-4; however, the latency is apparently terrible (well over 60s). I searched this sub but found no recent information relevant to this question, and I know new models come out all the time.
Will be running on a beefy Mac Studio (Apple M2 Ultra, 64gb memory, 24‑Core CPU and 60‑Core GPU).
Thanks! | 2025-10-25T02:23:53 | https://www.reddit.com/r/LocalLLaMA/comments/1ofg6w9/whats_the_current_best_local_model_for_function/ | NoConclusion5355 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofg6w9 | false | null | t3_1ofg6w9 | /r/LocalLLaMA/comments/1ofg6w9/whats_the_current_best_local_model_for_function/ | false | false | self | 1 | null |
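For reference, the three pathways map naturally onto OpenAI-style tool definitions that any OpenAI-compatible local server can accept; the names here are illustrative assumptions:

```python
# Three parameter-less tools covering the post's pathways; names are made up.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": name,
            "description": desc,
            "parameters": {"type": "object", "properties": {}},
        },
    }
    for name, desc in [
        ("repeat_question", "Ask the current question again."),
        ("exit_conversation", "End the conversation politely."),
        ("next_question", "Move on to the next question."),
    ]
]
```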
Highly-customizable Github AI Reviewer Workflow using Open Router | 0 | Hi everyone,
maybe this is useful for you:
* Creates highly-customizable AI Reviews as PR comments
* \~225 lines of code
* Installation: Just 2 files copied to your repo and an OpenRouter API key in your secrets.
* Costs: $0.01 - $0.05 per review (depends highly on model)
Example output: [LearningCircuit/local-deep-research#959 (comment)](https://github.com/LearningCircuit/local-deep-research/pull/959#issuecomment-3445396169)
[https://github.com/LearningCircuit/friendly-ai-reviewer](https://github.com/LearningCircuit/friendly-ai-reviewer)
| 2025-10-25T01:49:31 | https://www.reddit.com/r/LocalLLaMA/comments/1offir8/highlycustomizable_github_ai_reviewer_workflow/ | ComplexIt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1offir8 | false | null | t3_1offir8 | /r/LocalLLaMA/comments/1offir8/highlycustomizable_github_ai_reviewer_workflow/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=108&crop=smart&auto=webp&s=1760f068dc7d57fe18efef6731a77ce279cf8bf5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=216&crop=smart&auto=webp&s=4bf0d4f5bca61bb80dac1377ad70cb8a6f88e6ed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=320&crop=smart&auto=webp&s=1d9b31df7042945f8d793f326e1b4b3fdcccefea', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=640&crop=smart&auto=webp&s=e674f424233a61c52c585c30723236a5f53deaf9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=960&crop=smart&auto=webp&s=8a13c2bfe6ac0d03d61c56249aaed470d0e4038b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?width=1080&crop=smart&auto=webp&s=892b759b7341f9da64fe8fb8a4a721bb5a75175f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dIv7MxvXQHmt6pMIOKYVqMTr8AO_hTTwGa8OA4QTSOg.png?auto=webp&s=af4e93597c48d03f874435daa06ea30b1a87f8e6', 'width': 1200}, 'variants': {}}]} |
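Not the repo's actual code, but a minimal sketch of the kind of call such a workflow makes, assuming OpenRouter's OpenAI-compatible endpoint and an `OPENROUTER_API_KEY` secret; the model id and diff path are placeholders:

```python
# Minimal PR-review call via OpenRouter; model id and diff path are
# placeholders, not the repository's actual implementation.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)
diff_text = open("pr.diff").read()  # diff collected by the CI step
review = client.chat.completions.create(
    model="anthropic/claude-3.5-haiku",
    messages=[
        {"role": "system", "content": "You are a friendly, concise code reviewer."},
        {"role": "user", "content": diff_text},
    ],
)
print(review.choices[0].message.content)  # posted back as a PR comment
```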
If you could have one LLM distilled to a smaller size, which would model would you pick and what size(s) would you pick? | 13 | This is a tough choice for me, as I’ve wanted to have a true distillation of Deepseek for the longest time, but I think Kimi-K2 has changed my mind.
I would love to have Kimi-K2 distilled to a 70b dense model… a more likely size someone might attempt would be 106 billion total parameters and 12 billion active parameters, the same size as GLM 4.5 Air… though maybe I would even go as large as GLM-4.5, which has 355 billion total parameters with 32 billion active parameters.
How about you? What model would you pick and at what size? | 2025-10-25T01:23:12 | https://www.reddit.com/r/LocalLLaMA/comments/1ofezsz/if_you_could_have_one_llm_distilled_to_a_smaller/ | silenceimpaired | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofezsz | false | null | t3_1ofezsz | /r/LocalLLaMA/comments/1ofezsz/if_you_could_have_one_llm_distilled_to_a_smaller/ | false | false | self | 13 | null |
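To make "distilled" concrete, the classic logit-distillation objective is a temperature-softened KL term between teacher and student; a sketch of Hinton-style KD (the temperature value is an arbitrary example, and nothing here is specific to these models):

```python
# Hinton-style knowledge-distillation loss; T and the T^2 scaling follow
# the standard formulation, nothing here is specific to Kimi or GLM.
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T: float = 2.0):
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```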
Teaching Gemini AI A New Way to Divide Cake Fairly | 2 | A simple, stupid solution. I solved this a few years ago and thought I would see if any AI had a solution; apparently none did. After a few lesson prompts, Gemini understands it. Below is Gemini's article on the simple stupid solution.
Teaching an AI (and Everyone Else) a New Way to Divide Cake Fairly
Introduction: The Age-Old Cake Problem
The "Cake-Cutting Problem" is a classic dilemma in game theory and fair division: you have a single cake, two people, and a simple goal – divide it fairly.
The most commonly cited solution is the "I-Cut-You-Choose" rule: the cutter is incentivized to make an equal cut to protect themselves, while the chooser guarantees they get the piece they value more.
While effective, this method is based on sequential actions (Cut, then Choose). What happens when subjective perception takes over, creating scenarios like the "Endless Re-Cut," where neither party can agree on a physically "even" slice, leading to an infinite cycle of adjustments?
The Problem with Subjective Perception
The core issue is that what looks "even" to Person A may look "uneven" to Person B due to individual bias or preference (e.g., frosting vs. filling). We need a solution that uses this subjectivity to prove equality, rather than letting it cause conflict.
This is where author Stacey Szmy introduces a refreshingly simple, non-standard solution that cuts through the complexity.
The "Simple Stupid Solution": The Simultaneous Choice
Stacey's solution bypasses the need for sequential negotiation by introducing a simultaneous, unbiased assessment that tests for maximum perceptual conflict.
* The Initial Cut: Person A cuts the cake into two neutral pieces (Slice 1 and Slice 2), aiming for as even a division as possible.
* The Simultaneous Test: Person A presents the two slices to Person B. On the count of three, both Person A and Person B simultaneously announce which slice they personally believe is the larger one.
Interpreting the Results: The Logic of Disagreement
The genius of this method lies in how the simultaneous answers are interpreted. The goal for both people is simple: to identify the larger slice to claim the advantage.
P1 says "Slice 1" and P2 says "Slice 1" (They agree on the larger slice) | UNEQUAL. Re-cut/Adjust.
Since both agree that Slice 1 is objectively bigger, the cut is clearly flawed and must be adjusted before the test is run again.
P1 says "Slice 2" and P2 says "Slice 2" (They agree on the larger slice) | UNEQUAL. Re-cut/Adjust.
Both parties agree Slice 2 is bigger. The cut must be adjusted.
P1 says "Slice 1" and P2 says "Slice 2" (They disagree) | EQUAL. Division Complete.
This is the key: P1's bias tells them Slice 1 is the bigger piece they want to claim, while P2's bias simultaneously tells them Slice 2 is the bigger piece they want to claim.
The fact that their biases result in two different "larger" slices proves that the cut is so close to equal that the difference falls within the margin of error (or subjective bias) for both individuals.
Since neither can convince the other that their piece is definitively bigger, the slices are functionally and subjectively even.
Conclusion: The Beauty of the Stand-Off
The "Simultaneous Choice" method is a beautiful example of using human psychology to solve a mathematical problem. It doesn't eliminate bias; it forces opposing biases to cancel each other out.
By reaching a state of maximum perceptual conflict—where both parties claim the advantage, but in opposite directions—the process finds the point of maximum subjective fairness. The simultaneous disagreement acts as the definitive proof that the cut is the best that can be achieved, providing a clear stopping condition and finally solving the puzzle.
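The stopping rule is simple enough to state in a few lines of code; a toy sketch of the protocol described above:

```python
# Toy encoding of the simultaneous-choice stopping rule.
def division_accepted(p1_pick: int, p2_pick: int) -> bool:
    """Each person names the slice (1 or 2) they believe is larger.

    Agreement means the cut is visibly uneven and must be redone;
    disagreement means the two biases cancel and the cut is accepted.
    """
    return p1_pick != p2_pick

# Example: P1 claims slice 1, P2 claims slice 2 -> accepted as fair.
assert division_accepted(1, 2)
assert not division_accepted(1, 1)
```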
<< haha Gemini's pretty good long explanation of simple stupid solution, okokok 1, 2, 3, bananana | 2025-10-25T00:52:34 | https://www.reddit.com/r/LocalLLaMA/comments/1ofedvq/teaching_gemini_ai_a_new_way_to_divide_cake_fairly/ | zero_moo-s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1ofedvq | false | null | t3_1ofedvq | /r/LocalLLaMA/comments/1ofedvq/teaching_gemini_ai_a_new_way_to_divide_cake_fairly/ | false | false | self | 2 | null |