title
stringlengths
1
300
score
int64
0
8.54k
selftext
stringlengths
0
41.5k
created
timestamp[ns]date
2023-04-01 04:30:41
2026-03-04 02:14:14
url
stringlengths
0
878
author
stringlengths
3
20
domain
stringlengths
0
82
edited
timestamp[ns]date
1970-01-01 00:00:00
2026-02-19 14:51:53
gilded
int64
0
2
gildings
stringclasses
7 values
id
stringlengths
7
7
locked
bool
2 classes
media
stringlengths
646
1.8k
name
stringlengths
10
10
permalink
stringlengths
33
82
spoiler
bool
2 classes
stickied
bool
2 classes
thumbnail
stringlengths
4
213
ups
int64
0
8.54k
preview
stringlengths
301
5.01k
Open-source model subscription: feedback needed
2
We released a subscription a few days ago with pretty much all open-source models. We think this is the best deal available for those who want to use models they're unable to run locally (since some are quite big), but we'd be interested to hear from you, since this is one of the biggest communities for open-source model users. What models would make sense to include, and which are we missing? For some context, these are included:

- Deepseek (R1, R1 0528, V3, V3 0324 and V3.1)
- Kimi K2 (old and new)
- GLM 4.5 (regular and "air")
- Qwen 3 235B old and new, Qwen 3 Coder
- Hermes 4 Large
- Venice uncensored + other uncensored models

The subscription is $8 a month for 60k queries a month. More info: https://nano-gpt.com/subscription.
2025-09-11T10:40:18
https://www.reddit.com/r/LocalLLaMA/comments/1ne5emk/opensource_model_subscription_feedback_needed/
nanogpt26
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne5emk
false
null
t3_1ne5emk
/r/LocalLLaMA/comments/1ne5emk/opensource_model_subscription_feedback_needed/
false
false
self
2
null
Thinking Machines Lab dropped new research: Defeating Nondeterminism in LLM Inference
87
TL;DR: LLM inference nondeterminism isn't just floating-point non-associativity or concurrent GPU execution; the core culprit is batching variance, where server load unpredictably alters the numerics. Batch-invariant kernels unlock true reproducibility. Non-determinism is an issue in all sorts of places, but non-determinism stemming from GPU kernels not being batch-size invariant is pretty specific to machine learning.
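To make the floating-point side concrete, here is a tiny NumPy sketch (mine, not from the paper) showing that summing the same float32 values with two different reduction trees, which is roughly what a kernel does when its tiling changes with batch size, need not agree bit-for-bit:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# One long serial sum...
s_serial = np.float32(0.0)
for v in x:
    s_serial += v

# ...vs the same values reduced chunk-by-chunk (a different reduction tree)
s_chunked = np.float32(0.0)
for chunk in np.split(x, 100):
    s_chunked += np.sum(chunk)

print(s_serial, s_chunked, s_serial == s_chunked)  # usually differs in the low bits
```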
2025-09-11T10:30:09
https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/
Snoo_64233
thinkingmachines.ai
1970-01-01T00:00:00
0
{}
1ne58kw
false
null
t3_1ne58kw
/r/LocalLLaMA/comments/1ne58kw/thinking_machines_lab_dropped_a_new_research/
false
false
default
87
null
GPT OSS 20B is a way bigger deal than you probably think
0
I genuinely think people underestimate what OpenAI did with the release of GPT OSS 20B. Not only have they released a model on par with GPT-4 (which was SOTA just two years ago) under an excellent license (Apache 2.0), but the model also fits comfortably in 16GB of VRAM, allowing for excellent performance even on mobile devices such as modern laptops. Consumer GPUs with this amount of memory have been widely available since at least 2020, so there are plenty of options these days when it comes to picking a 16GB GPU.

Yes, I am aware the model is not perfect: censoring and lack of compliance are an issue, but frankly I did not expect any less from a company like OpenAI. In fact, they definitely over-delivered with this release, and I hope they will continue with regular releases to stay relevant. Chinese models are just as good in terms of quality, but maybe lacking slightly in performance. That may easily change with the upcoming Qwen3 80B with only 3B parameters activated per token, achieving SOTA sparsity and an unprecedented performance/quality ratio.

The point of this post is that model trainers really need to target smaller VRAM sizes such as 16GB GPUs and even less. You do not even need an Nvidia card these days, as Vulkan has made excellent progress and is now very performant on its own; my own experience is on Linux with an AMD CPU + GPU. It really changes the whole experience when you can load a model fully in VRAM and enjoy very decent performance. My GPT OSS 20B version from Unsloth runs at approx. 125 tokens per second on my Radeon 9070 XT, which is considered a mainstream consumer GPU these days. Vulkan kicks ass, especially on Radeons, and is a more than viable alternative to CUDA in this local LLM scenario.
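For anyone wanting to try the fully-in-VRAM experience from a script rather than a GUI, a minimal sketch with llama-cpp-python; it assumes a GPU-enabled build (Vulkan or CUDA), and the GGUF filename is illustrative:

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with Vulkan or CUDA)

llm = Llama(
    model_path="gpt-oss-20b-Q4_K_M.gguf",  # illustrative filename; use your Unsloth GGUF
    n_gpu_layers=-1,  # offload every layer: the whole point is fitting in 16GB VRAM
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(out["choices"][0]["message"]["content"])
```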
2025-09-11T10:17:31
https://www.reddit.com/r/LocalLLaMA/comments/1ne50sa/gpt_oss_20b_is_way_bigger_deal_than_you_probably/
DarkArtsMastery
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne50sa
false
null
t3_1ne50sa
/r/LocalLLaMA/comments/1ne50sa/gpt_oss_20b_is_way_bigger_deal_than_you_probably/
false
false
self
0
null
What would be the most budget-friendly PC to run LLMs larger than 72B?
39
Let's add some realistic expectations:

1. Serving 1~5 users only, without much concurrency.
2. Speed matters less, as long as it's "usable at least". Parameter count and knowledge matter more.
3. Running MoE-based models only, like Qwen 3 Next 80B A3B, for faster inference speed.
4. (optional) Utilizing an APU and unified memory architecture to accommodate sufficient GPU offloading while keeping the cost lower.

What would be the lowest-cost yet usable desktop build for such local LLMs? I'm just wondering about your ideas and opinions, outside the first-world, multi-thousand-dollar realm.
2025-09-11T10:12:32
https://www.reddit.com/r/LocalLLaMA/comments/1ne4xok/what_would_be_the_most_budgetfriendly_pc_to_run/
pitchblackfriday
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne4xok
false
null
t3_1ne4xok
/r/LocalLLaMA/comments/1ne4xok/what_would_be_the_most_budgetfriendly_pc_to_run/
false
false
self
39
null
ExamSprint – AI + Notes + Solutions + Formula Sheets Coming 🚀📖
1
https://examsprint.pages.dev

Built ExamSprint, a free AI-powered study platform for CBSE & competitive exam prep. Updates so far:

📚 Notes (Class 9–12)
🔗 Direct NCERT links
🃏 Flashcards (concepts + practice Qs)
🤖 AI chatbot (LLM-based)
📝 Topper's Notes
✅ NCERT back-exercise solutions added
📄 Formula sheet rolling out soon!
2025-09-11T10:07:47
https://i.redd.it/usszfqdcfiof1.png
Fearlessexamsprinter
i.redd.it
1970-01-01T00:00:00
0
{}
1ne4v0f
false
null
t3_1ne4v0f
/r/LocalLLaMA/comments/1ne4v0f/examsprint_ai_notes_solutions_formula_sheets/
false
false
https://b.thumbs.redditm…W8zqYroY9DyQ.jpg
1
{'enabled': True, 'images': [{'id': 'Uj_cEVD4jlCjnkel6mX0MdFLsLyEiOz5jgWreUuXKxo', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=108&crop=smart&auto=webp&s=61ff2f1cc0d79b35e2a525e2878a39caca9b1d16', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=216&crop=smart&auto=webp&s=5107f66b7f83ced671a74cb6b8b5e42ceeb51f58', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=320&crop=smart&auto=webp&s=c7e28ead9b6d4a8f13deb3adc9758e73fef96a5c', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=640&crop=smart&auto=webp&s=abf74ad68c91153c9268fa0f719003c0ee5162f9', 'width': 640}, {'height': 1920, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=960&crop=smart&auto=webp&s=0c89ac0640d03d603ffca52585887f8bb89542d1', 'width': 960}, {'height': 2160, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?width=1080&crop=smart&auto=webp&s=59926ecde6bed022f32f7b9843f672260894e98c', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://preview.redd.it/usszfqdcfiof1.png?auto=webp&s=d210b0d035421740e4eb94cc66fbc616d3c34138', 'width': 1080}, 'variants': {}}]}
5x Your Chatterbox generations
0
I discovered a script that generates conversations by creating separate audio files and then combining them, but it was painfully slow since it used the CPU. I implemented CUDA acceleration and achieved a 5x speed improvement.

***Rendering using CPU:***

Total generation time: 288.53s
Audio duration: 43.24s (0.72 minutes)
Overall RTF: 6.67x

***Rendering using GPU (CUDA):***

Total generation time: 51.77s
Audio duration: 42.52s (0.71 minutes)
Overall RTF: 1.22x

Basically all y'all gotta do is install the PyTorch CUDA build instead of the CPU version. Since I was afraid it might mess up my dependencies, I just created a separate environment for testing this, so you can keep both. Here's how you can do it for non-technical folks: just modify and paste this into Claude Code. It could also work with GPT, but you'll have to be more specific about your file structure and provide more info.

**🚀 Prompt 1: Chatterbox CUDA Acceleration Setup**

I want to enable CUDA/GPU acceleration for my existing chatterbox-tts project to get 5-10x faster generation times.

**My Setup:**

- OS: [Windows/macOS/Linux]
- Project path: [e.g., "C:\AI\chatterbox"]
- GPU: [e.g., "NVIDIA RTX 3060" or "Not sure"]

**Goals:**

1. Create safe virtual environment for GPU testing without breaking current setup
2. Install PyTorch with CUDA support for chatterbox-tts
3. Convert my existing script to use GPU acceleration
4. Add performance timing to compare CPU vs GPU speeds
5. Get easy copy-paste execution commands

[Paste your current chatterbox script here]

Please guide me step-by-step to safely enable GPU acceleration.

👩‍💻 ***Conversation script:***

```python
# EXECUTION COMMANDS:
# PowerShell: cd "YOUR_PROJECT_PATH\scripts"; & "YOUR_PROJECT_PATH\cuda_test_env\Scripts\python.exe" conversation_template_cuda.py
# CMD: cd "YOUR_PROJECT_PATH\scripts" && "YOUR_PROJECT_PATH\cuda_test_env\Scripts\python.exe" conversation_template_cuda.py
# Replace YOUR_PROJECT_PATH with your actual project folder path

import os
import sys
import time
import torch
import torchaudio as ta

# Add the chatterbox source directory to Python path
# Adjust the path if your Chatterbox installation is in a different location
sys.path.append(os.path.join(os.path.dirname(__file__), '..', '..', 'Chatterbox', 'src'))

from chatterbox.tts import ChatterboxTTS

# -----------------------------
# DEVICE SETUP
# -----------------------------
# Check for GPU acceleration and display system info
if torch.cuda.is_available():
    device = "cuda"
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"CUDA Version: {torch.version.cuda}")
    print(f"PyTorch Version: {torch.__version__}")
    print(f"GPU Memory: {torch.cuda.get_device_properties(0).total_memory / 1024**3:.1f} GB")
else:
    device = "cpu"
    print("WARNING: CUDA not available, using CPU")

print(f"Using device: {device}")

# Load pretrained chatterbox model
model = ChatterboxTTS.from_pretrained(device=device)

# -----------------------------
# VOICE PROMPTS
# -----------------------------
# Put your .wav or .mp3 reference voices inside the voices/ folder
# Update these paths to match your voice file names
VOICES = {
    "Speaker1": "../voices/speaker1.wav",  # Replace with your first voice file
    "Speaker2": "../voices/speaker2.wav"   # Replace with your second voice file
}

# -----------------------------
# CONVERSATION SCRIPT
# -----------------------------
# Edit this conversation to match your desired dialogue
conversation = [
    ("Speaker1", "Hello! Welcome to our service. How can I help you today?"),
    ("Speaker2", "Hi there! I'm interested in learning more about your offerings."),
    ("Speaker1", "Great! I'd be happy to explain our different options and find what works best for you."),
    ("Speaker2", "That sounds perfect. What would you recommend for someone just getting started?"),
    ("Speaker1", "For beginners, I usually suggest our basic package. It includes everything you need to get started."),
    ("Speaker2", "Excellent! That sounds like exactly what I'm looking for. How do we proceed?"),
]

# -----------------------------
# OUTPUT SETUP
# -----------------------------
# Output will be saved to the output folder in your project directory
output_dir = "../output/conversation_cuda"
os.makedirs(output_dir, exist_ok=True)

combined_audio_segments = []
pause_duration = 0.6  # pause between lines in seconds (adjust as needed)
pause_samples = int(model.sr * pause_duration)

# -----------------------------
# GENERATE SPEECH WITH TIMING
# -----------------------------
total_start = time.time()

for idx, (speaker, text) in enumerate(conversation):
    if speaker not in VOICES:
        raise ValueError(f"No voice prompt found for speaker: {speaker}")
    voice_prompt = VOICES[speaker]
    print(f"Generating line {idx+1}/{len(conversation)} by {speaker}: {text}")

    # Time individual generation
    start_time = time.time()

    # Generate TTS
    wav = model.generate(text, audio_prompt_path=voice_prompt)

    gen_time = time.time() - start_time
    audio_duration = wav.shape[1] / model.sr
    rtf = gen_time / audio_duration  # Real-Time Factor (lower is better)

    print(f"  Time: Generated in {gen_time:.2f}s (RTF: {rtf:.2f}x, audio: {audio_duration:.2f}s)")

    # Save individual line
    line_filename = os.path.join(output_dir, f"{speaker.lower()}_{idx}.wav")
    ta.save(line_filename, wav, model.sr)
    print(f"  Saved: {line_filename}")

    # Add to combined audio
    combined_audio_segments.append(wav)

    # Add silence after each line (except last)
    if idx < len(conversation) - 1:
        silence = torch.zeros(1, pause_samples)
        combined_audio_segments.append(silence)

# -----------------------------
# SAVE COMBINED CONVERSATION
# -----------------------------
combined_audio = torch.cat(combined_audio_segments, dim=1)
combined_filename = os.path.join(output_dir, "full_conversation.wav")
ta.save(combined_filename, combined_audio, model.sr)

total_time = time.time() - total_start
duration_sec = combined_audio.shape[1] / model.sr

print(f"\nConversation complete!")
print(f"Total generation time: {total_time:.2f}s")
print(f"Audio duration: {duration_sec:.2f}s ({duration_sec/60:.2f} minutes)")
print(f"Overall RTF: {total_time/duration_sec:.2f}x")
print(f"Combined file saved as: {combined_filename}")

# -----------------------------
# CUSTOMIZATION NOTES
# -----------------------------
# To customize this script:
# 1. Replace "YOUR_PROJECT_PATH" in the execution commands with your actual path
# 2. Update VOICES dictionary with your voice file names
# 3. Edit the conversation list with your desired dialogue
# 4. Adjust pause_duration if you want longer/shorter pauses between speakers
# 5. Change output_dir name if you want different output folder
```
2025-09-11T09:43:56
https://www.reddit.com/r/LocalLLaMA/comments/1ne4hc3/5x_your_chatterbox_generations/
imalphawolf2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne4hc3
false
null
t3_1ne4hc3
/r/LocalLLaMA/comments/1ne4hc3/5x_your_chatterbox_generations/
false
false
self
0
null
Qwen3-Next is coming soon
244
2025-09-11T09:43:38
https://i.redd.it/1mdp7l72biof1.jpeg
Ok_Ninja7526
i.redd.it
1970-01-01T00:00:00
0
{}
1ne4h62
false
null
t3_1ne4h62
/r/LocalLLaMA/comments/1ne4h62/qwen3next_is_coming_soon/
false
false
default
244
{'enabled': True, 'images': [{'id': '1mdp7l72biof1', 'resolutions': [{'height': 51, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=108&crop=smart&auto=webp&s=3aa02faba58c7b8f01abc6e4967a3ac1e3ff330d', 'width': 108}, {'height': 102, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=216&crop=smart&auto=webp&s=d8a6e8b09133b026f4ebdc698c22e2de7db27729', 'width': 216}, {'height': 152, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=320&crop=smart&auto=webp&s=9569c43aff4938813b5f396bf0f6c7318cd43f58', 'width': 320}, {'height': 304, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=640&crop=smart&auto=webp&s=2cd5da8e1e448326bed9df91f518aadffe866432', 'width': 640}, {'height': 456, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=960&crop=smart&auto=webp&s=86920f8b6f813732cd18a13bb2878c6d6878e583', 'width': 960}, {'height': 513, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?width=1080&crop=smart&auto=webp&s=df50d36c1858862cf21dba7b7bd54b74283eb5d0', 'width': 1080}], 'source': {'height': 874, 'url': 'https://preview.redd.it/1mdp7l72biof1.jpeg?auto=webp&s=d112bafa41f4d41b6190be0ac358ab0e5bfaac41', 'width': 1840}, 'variants': {}}]}
I made a semantic code splitting library for implementing RAG (Retrieval-Augmented Generation) on codebases.
18
Hello everyone, I made **code-chopper**, a new open-source TypeScript library for anyone who works with code and LLMs.

# What It Does

**code-chopper** uses **tree-sitter** to parse code and split it into meaningful, semantic chunks like functions, classes, and variable declarations. This is perfect for RAG, or simply for giving an LLM a high-level overview of a project without using up a ton of tokens.

# Key Features

* **Customizable Filtering**: Use a `filter` function to control exactly what gets extracted.
* **Ready for Use**: I've included helper functions for navigating files and directories.
* **Practical Examples**: Check out the examples repo for use cases like:
  * `repo_summary`: Generate an Aider-repomap-style overview of your codebase.
  * `entity_rank`: Use **Katz centrality** to find the most important functions or variables.
  * `doc_generator`: Automatically write documentation for your code.

I made this because I needed a better way to chunk code for my own projects, and I hope it's helpful for you too.

* **GitHub**: [https://github.com/sirasagi62/code-chopper](https://github.com/sirasagi62/code-chopper)
* **Examples**: [https://github.com/sirasagi62/code-chopper-examples/](https://github.com/sirasagi62/code-chopper-examples/)
* **NPM**: [https://www.npmjs.com/package/code-chopper](https://www.npmjs.com/package/code-chopper)
2025-09-11T09:15:23
https://www.reddit.com/r/LocalLLaMA/comments/1ne41ss/i_made_a_semantic_code_splitting_library_for/
HolidayInevitable500
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne41ss
false
null
t3_1ne41ss
/r/LocalLLaMA/comments/1ne41ss/i_made_a_semantic_code_splitting_library_for/
false
false
self
18
{'enabled': False, 'images': [{'id': 'Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=108&crop=smart&auto=webp&s=1cff66134afb3df4f163d3d0a04a7aae2bcc438d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=216&crop=smart&auto=webp&s=541d0a1e6337f3096aed3f7e7650bb6796cb8451', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=320&crop=smart&auto=webp&s=59f16b13285c2fd1b9499a24475f17e7c55a46f1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=640&crop=smart&auto=webp&s=5d31141ed7fb14e71ae117bd9e401b5f766e8c54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=960&crop=smart&auto=webp&s=40e9915e570b82a96e2fa146a05cdef937985d9c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?width=1080&crop=smart&auto=webp&s=851c845adb36c05dc33e68753129198eaddcb262', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ww1sd8AhnVxuf6msICyBOlhFrFRzbjl3Hn4CXllXvtQ.png?auto=webp&s=2b8f486ceab0b09227dd0bfdc2195dd8aaf51657', 'width': 1200}, 'variants': {}}]}
Celebrating 1 year anniversary of the revolutionary game changing LLM that was Reflection 70b
135
It is now a year since the release of Reflection-70B, which genius inventor Matt Shumer marketed as a state-of-the-art, hallucination-free LLM that outperforms both GPT-4o and Claude 3.5 with its new way of thinking, as well as the world's top open-source model. The world hasn't been the same since then, indeed.
2025-09-11T09:03:14
https://www.reddit.com/r/LocalLLaMA/comments/1ne3v7m/celebrating_1_year_anniversary_of_the/
LosEagle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3v7m
false
null
t3_1ne3v7m
/r/LocalLLaMA/comments/1ne3v7m/celebrating_1_year_anniversary_of_the/
false
false
self
135
null
Building an AI Agent from Scratch (Python)
3
Does anyone know how to build an agent from vanilla Python, without just importing langchain or pydantic? Watched some tutorials, and all of them just import langchain, write 5 lines of code, and done. I want to know how this works behind the scenes, and keep the code simple. I tried this, but when I asked it to do something with a tool, it just teaches me how to use the tool and doesn't actually call the tool. I tried everything: prompts, system prompts, even mentioning the tool name. If you've got any structure for an agent, any examples, or any tips to make an agent better at tool calling, please share. I tried Mistral, Llama, Qwen (8B). Ty (Ik, my english 🤮)
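For reference, the core pattern the frameworks hide is just a loop: ask the model to emit a structured tool call, parse it, run the function yourself, and feed the result back. A minimal, hypothetical sketch; it assumes an OpenAI-compatible local server such as Ollama at the URL below, and the model name and tool are illustrative:

```python
import json
import re
import urllib.request

API_URL = "http://localhost:11434/v1/chat/completions"  # any OpenAI-compatible server

def chat(messages):
    body = json.dumps({"model": "qwen2.5:7b", "messages": messages}).encode()
    req = urllib.request.Request(API_URL, body, {"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["choices"][0]["message"]["content"]

# One toy tool; the "registry" is just a dict of plain functions.
TOOLS = {"add": lambda a, b: a + b}

SYSTEM = (
    "You have one tool: add(a, b). To use it, reply with ONLY one line like\n"
    'TOOL: {"name": "add", "args": {"a": 1, "b": 2}}\n'
    "After you see TOOL RESULT, answer the user in plain text."
)

def run(user_query, max_steps=5):
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_query}]
    for _ in range(max_steps):  # cap the loop so a confused model can't spin forever
        reply = chat(messages)
        m = re.search(r'TOOL:\s*(\{.*\})', reply)
        if not m:
            return reply  # no tool call -> treat as the final answer
        call = json.loads(m.group(1))
        result = TOOLS[call["name"]](**call["args"])  # actually run the tool
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": f"TOOL RESULT: {result}"})
    return reply

print(run("What is 17 + 25? Use the tool."))
```

The trick that stops the "it explains the tool instead of calling it" behavior is being strict in the system prompt (reply with ONLY the TOOL line) and refusing to accept anything that doesn't parse.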
2025-09-11T08:59:42
https://www.reddit.com/r/LocalLLaMA/comments/1ne3t4b/builing_a_ai_agent_from_scratch_python/
RizmiBurhan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3t4b
false
null
t3_1ne3t4b
/r/LocalLLaMA/comments/1ne3t4b/builing_a_ai_agent_from_scratch_python/
false
false
self
3
null
This may be a tiny bit off topic
0
What was that one Reddit bot supposedly made by Anonymous called again? (I searched for it where I could; didn't find it, sadly)
2025-09-11T08:59:31
https://www.reddit.com/r/LocalLLaMA/comments/1ne3t10/this_may_be_a_tiny_bit_off_topic/
AgentRedishRed
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3t10
false
null
t3_1ne3t10
/r/LocalLLaMA/comments/1ne3t10/this_may_be_a_tiny_bit_off_topic/
false
false
self
0
null
Figurine Style image using Gemini Google Banana
0
Hey, I’ve been experimenting with some cool AI tools recently. I used **Gemini** along with **Google Banana** to create a figurine-style image, and the results were pretty amazing. Anyone else tried this combo?
2025-09-11T08:53:47
https://i.redd.it/ydpeo3832iof1.png
NewAd8491
i.redd.it
1970-01-01T00:00:00
0
{}
1ne3q0u
false
null
t3_1ne3q0u
/r/LocalLLaMA/comments/1ne3q0u/figurine_style_image_using_gemini_google_banana/
false
false
default
0
{'enabled': True, 'images': [{'id': 'ydpeo3832iof1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?width=108&crop=smart&auto=webp&s=00307c6ca2711bbd18c57dbc4ba23fed4bc64253', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?width=216&crop=smart&auto=webp&s=5ea9aa71f546ab92507794e673f5acc664d428fd', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?width=320&crop=smart&auto=webp&s=9a90aab01d425b79f97c9158d9b04682f3f97f1a', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?width=640&crop=smart&auto=webp&s=a4f4d58f03ff056d2ea8411bc85fb2207c74a354', 'width': 640}, {'height': 960, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?width=960&crop=smart&auto=webp&s=7dd30cf5cd4eab0561b6f5c88aca0e65a9449c75', 'width': 960}], 'source': {'height': 1024, 'url': 'https://preview.redd.it/ydpeo3832iof1.png?auto=webp&s=3f24f9cb0ce657fcb13f62e0d697a2c9f4fb29af', 'width': 1024}, 'variants': {}}]}
Ok this is driving me crazy - what is the best under-300W solution to get at least 32GB of VRAM for under $1000? New hardware only.
0
It seems like there isn't anything beyond going with 24GB of VRAM and a 3090 or 7900 XTX. I just can't wrap my head around a solution here. I'm just accepting at this point that the B50 and B60 will not be obtainable and the R9700 will never be available to consumers.
2025-09-11T08:50:09
https://www.reddit.com/r/LocalLLaMA/comments/1ne3o0v/ok_this_is_driving_me_crazy_what_is_the_best/
79215185-1feb-44c6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3o0v
false
null
t3_1ne3o0v
/r/LocalLLaMA/comments/1ne3o0v/ok_this_is_driving_me_crazy_what_is_the_best/
false
false
self
0
null
My honest nexos.ai review
0
TL;DR

* Free trial, no CC required
* Big model library
* No public pricing
* Assistants, projects, guardrails, fallbacks, usage stats

Why did I even try it?

First of all, it has an actual trial period where you don't have to sit through a call with a sales rep telling you about all the bells and whistles, which is a huge plus for me. Another thing is the number of LLMs we were juggling: ChatGPT for marketing, Claude for software dev, and a bunch of other niche tools for other tasks. You see where this is going, right? Absolute chaos that not only makes it hard to manage, but actually costs us a lot of money, especially now that Claude's new rate limits are in place.

Primary features/points

And these are **not** just buzzwords, we actually have great use for them. Since we also go through a lot of personal and sensitive data, the guardrails and input/output sanitization are a godsend. Then I have an actual overview of which models each team uses and how much we are spending on them. With spread-out accounts it was nearly impossible to tell how many tokens each team was using.

With the GPT5 release we all wanted to jump on it as soon as possible, buuuut at times it's nearly impossible to get a response from it due to how crowded it has been ever since the release. Here I can either use a different model if GPT5 fails, set up multiple fallbacks, or straight up send the query to 5 models at the same time. Crazy that it's not more commonly available.

A big library of models is a plus, as is the observability, although I trust my staff to the point where I don't really use it.

Pros and cons

Here's my list of the good and the bad.

Pros:

* Dashboard looks familiar and is very intuitive for all the departments. You don't have to be a software dev to make use of it.
* There's an OpenAI-compliant API gateway, so if you ARE a software dev, that comes in pretty handy for integrating LLMs in your tooling or projects.
* Huge library of models to choose from. Depending on your requirements you can go for something that's even "locally" hosted by nexos.ai
* Fallbacks, input and output sanitization, guardrails, observability
* One usage-based payment if we choose to stay beyond the trial period

Cons:

* While the dashboard looks familiar, there are some things which took me a while to figure out, like personal API tokens and such. I'm not sure if putting them in the User Profile section is the best idea.
* Pricing transparency - I wish they would just outright tell you how much you will have to pay if you choose to go with them. Guess that's how it works these days.
* Their documentation seems to be just getting up to speed when it comes to the projects/assistants features, although the API has decent docs.

All in all, this is the exact product we needed, and I'd be really inclined to stay with them, provided they don't slap some unreasonable price tag on their service.

Final thoughts

I think that nexos.ai is good if you're tired of juggling AI tools, subscriptions, and other AI-based services, and need a mixture of tools for different departments and use cases. The trial is enough to try everything out and doesn't require a credit card, although they seem to block @gmail.com and other free email providers.

BTW, I'm happy to hear about other services that provide similar tools.
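For anyone wanting to roll the fallback part themselves, a minimal, hypothetical sketch of the pattern (this is not nexos.ai's actual API; `query_model` stands in for any OpenAI-compatible client call):

```python
# Hypothetical sketch of a model-fallback chain. Replace query_model
# with your real client call (OpenAI SDK, raw HTTP, local server, ...).
def query_model(model: str, prompt: str) -> str:
    raise NotImplementedError("plug in your client here")

def with_fallbacks(prompt: str, models: list) -> str:
    last_err = None
    for m in models:
        try:
            return query_model(m, prompt)
        except Exception as e:  # rate limit, timeout, overload...
            last_err = e
    raise RuntimeError(f"all models failed, last error: {last_err}")

# e.g. with_fallbacks("hello", ["gpt-5", "claude-sonnet", "deepseek-v3.1"])
```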
2025-09-11T08:46:25
https://www.reddit.com/r/LocalLLaMA/comments/1ne3m0j/my_honest_nexosai_review/
Even_Plenty
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3m0j
false
null
t3_1ne3m0j
/r/LocalLLaMA/comments/1ne3m0j/my_honest_nexosai_review/
false
false
self
0
null
Is It Just Me or Does GPT-5 Get Worse the Longer a Chat Goes On?
0
Like on a fresh brand-new chat, GPT-5 is pretty great, but as the chat goes on and grows, it seems GPT-5 starts to make dumb mistakes, overlooks things, and says things that are not true/accurate, almost like a human who's starting to get lazy and cut corners because it's 5 PM and he/she just wants to finish up the day and go back home?
2025-09-11T08:39:41
https://www.reddit.com/r/LocalLLaMA/comments/1ne3igt/is_it_just_me_or_does_gpt5_gets_worse_the_longer/
Zephyr1421
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3igt
false
null
t3_1ne3igt
/r/LocalLLaMA/comments/1ne3igt/is_it_just_me_or_does_gpt5_gets_worse_the_longer/
false
false
self
0
null
Strix Halo owners - Windows or Linux?
2
I have the GMKtec EVO-X2 and absolutely love it. I have my whole LLM stack set up on Windows (as well as all non-AI software and games), mostly using LM Studio, which offers the best balance of performance and usability. Ollama is just ass, as far as I can tell, at specifically supporting this architecture. But so many LLM tools are Linux-based, and while I love WSL2, I don't think it offers full compatibility. Looking at setting up a dual boot, probably Ubuntu. What are others using?
2025-09-11T08:39:16
https://www.reddit.com/r/LocalLLaMA/comments/1ne3i8b/strix_halo_owners_windows_or_linux/
feverdream
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3i8b
false
null
t3_1ne3i8b
/r/LocalLLaMA/comments/1ne3i8b/strix_halo_owners_windows_or_linux/
false
false
self
2
null
Do you think the 3090 will still be a good option? 5070 Super / 5070 Ti Super vs 3090
1
Here in Europe, I think the 5070 Super will be priced between €600-€700 for 18GB of VRAM, and the 5070 Ti Super between €800-€1000 for 24GB of VRAM. I think this will make the 3090 much cheaper, but those are also already very old cards and there is no guarantee that they will last more than a year. What would be better: 2x 5070 Super for 36GB of VRAM (€1200), saving some money, or 2x 5070 Ti Super for 48GB of VRAM (€1800) at almost double the price? Or the old 3090?
2025-09-11T08:25:29
https://www.reddit.com/r/LocalLLaMA/comments/1ne3ay2/do_you_think_the_3090_will_still_be_a_good_option/
Spiderboyz1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne3ay2
false
null
t3_1ne3ay2
/r/LocalLLaMA/comments/1ne3ay2/do_you_think_the_3090_will_still_be_a_good_option/
false
false
self
1
null
Based on first benchmarks, the iPhone 17 Pro's A19 Pro chip can be a frontier for local smartphone LLMs
0
The iPhone 17 Pro with the A19 Pro chip scored **3,895** single-core and **9,746** multi-core on Geekbench 6. That means in multi-core it's actually above an M2 MacBook Air. It's got 12GB of RAM too, so it should be able to run larger distilled models locally. What do you think about this? What use cases are you excited about when it comes to running local models on mobile?
2025-09-11T08:11:32
https://www.macrumors.com/2025/09/10/iphone-17-pro-iphone-air-a19-pro-benchmarks/
Kerub88
macrumors.com
1970-01-01T00:00:00
0
{}
1ne33gw
false
null
t3_1ne33gw
/r/LocalLLaMA/comments/1ne33gw/based_on_first_benchmarks_iphone_17_pro_a19_pro/
false
false
default
0
{'enabled': False, 'images': [{'id': 'J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=108&crop=smart&auto=webp&s=9d112fbf0c7bb2c5a61788267ffb6ba98d594132', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=216&crop=smart&auto=webp&s=4a873dd61058de9603e87e5a0db181ce1dee7ad3', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=320&crop=smart&auto=webp&s=1204c7342aecdd8308bbc1ad92292c6151168eb1', 'width': 320}, {'height': 359, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=640&crop=smart&auto=webp&s=2c1b49865ce6dc2293c28453177e575412e142cc', 'width': 640}, {'height': 539, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=960&crop=smart&auto=webp&s=658edfb0e1189fb462a442874bdfb395c65aba21', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?width=1080&crop=smart&auto=webp&s=cb0959be5260b45a05a36f8cb4c379674a3bb6bd', 'width': 1080}], 'source': {'height': 1406, 'url': 'https://external-preview.redd.it/J2S0dVuRpW3gt3UmGGltP-IXig_5rYeBdlZEBIBjn-A.jpeg?auto=webp&s=2945ccffef7fff1f0bb8dec396529dcd7d1ee828', 'width': 2500}, 'variants': {}}]}
Gemma3 4b is colorblind?!
0
https://preview.redd.it/…nd other models.
2025-09-11T08:07:52
https://www.reddit.com/r/LocalLLaMA/comments/1ne31h4/gemma3_4b_is_colorblind/
saig22
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne31h4
false
null
t3_1ne31h4
/r/LocalLLaMA/comments/1ne31h4/gemma3_4b_is_colorblind/
false
false
https://b.thumbs.redditm…phm0nPaCfdKw.jpg
0
null
Anyone else feel like we need a context engine MCP that can be taught domain knowledge by giving it KT sessions and docs?
0
I keep running into this problem — MCP servers today can call APIs and automate workflows, but they don't really let you teach them your own knowledge. Let there be a context engine MCP where you could:

* Upload project docs or give it KT sessions on your domain-related topics
* It indexes everything locally (private to you)
* Any tool (Cursor, Windsurf, CLI, etc.) could then pull the right context instantly

Feels like this could be a missing piece for dev workflows. Anyone else wish something like this existed, or are existing MCPs already good enough?

[View Poll](https://www.reddit.com/poll/1ne1qxw)
2025-09-11T06:43:02
https://www.reddit.com/r/LocalLLaMA/comments/1ne1qxw/anyone_else_feel_like_we_need_a_context_engine/
prabhjots665
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne1qxw
false
null
t3_1ne1qxw
/r/LocalLLaMA/comments/1ne1qxw/anyone_else_feel_like_we_need_a_context_engine/
false
false
self
0
null
Qwen3-ASR-Flash pricing - is this correct?
12
Qwen3-ASR-Flash pricing is $0.000032/second = $0.00192/minute.
GPT-4o-mini-transcribe pricing is $0.003/minute.

That's a very significant difference in price. Am I missing anything?

[https://bailian.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2979031](https://bailian.console.alibabacloud.com/?tab=doc#/doc/?type=model&url=2979031)
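For a quick sanity check on the per-minute numbers:

```python
qwen_per_min = 0.000032 * 60              # $0.00192 per minute
gpt4o_mini_per_min = 0.003
print(gpt4o_mini_per_min / qwen_per_min)  # ~1.56x, i.e. Qwen3-ASR-Flash is ~36% cheaper
```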
2025-09-11T06:16:35
https://www.reddit.com/r/LocalLLaMA/comments/1ne1c5b/qwen3asrflash_pricing_is_this_correct/
Individual-Cookie404
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne1c5b
false
null
t3_1ne1c5b
/r/LocalLLaMA/comments/1ne1c5b/qwen3asrflash_pricing_is_this_correct/
false
false
self
12
null
How I turned my laptop into a local AI copilot (Ollama + Continue + Qwen2.5)
0
I wanted to share my experience setting up a local "copilot" on my laptop without paying for GitHub Copilot. It turns out that with **16 GB RAM** you can already run **7B quantized models** like Qwen2.5 or Mistral via [Ollama](https://ollama.com/). I used the **Continue plugin** in IntelliJ IDEA to integrate the model directly into my IDE. Now I can:

* get inline completions,
* refactor classes,
* and even generate unit tests.

Here's an example of fixing a buggy factorial function:

```java
public int factorial(int n) {
    if (n == 0) return 0;
    else return n * factorial(n - 1);
}
```

The model fixed it instantly:

```java
public int factorial(int n) {
    if (n == 0 || n == 1) return 1;
    else return n * factorial(n - 1);
}
```

I documented everything in a short guide here (GitHub repo): [github.com/username/vibe-coding-course](https://github.com/username/vibe-coding-course)

Would love to hear if others are using Continue/Ollama this way.
2025-09-11T05:43:37
https://www.reddit.com/r/LocalLLaMA/comments/1ne0szs/how_i_turned_my_laptop_into_a_local_ai_copilot/
Independent-Boot6091
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne0szs
false
null
t3_1ne0szs
/r/LocalLLaMA/comments/1ne0szs/how_i_turned_my_laptop_into_a_local_ai_copilot/
false
false
self
0
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=108&crop=smart&auto=webp&s=3dc759de0e8fa36d241c5728d41ee3cf022cab96', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=216&crop=smart&auto=webp&s=6ccf136f5d3091254a0067a3bc5d6c7df9d62d89', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=320&crop=smart&auto=webp&s=2530aa4ecbcf7899ec0d023e217fe24af15fe0a6', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=640&crop=smart&auto=webp&s=8e51add1cab39c7614eb13e6195f23c5b4eeb417', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=960&crop=smart&auto=webp&s=750a6d42fd91c5a6e9a9c069e74247c877644e97', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?width=1080&crop=smart&auto=webp&s=9eab390b865b031211658564ad5fe5241c9661c5', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM.png?auto=webp&s=a080c4707584d3aa14134960cda9ba2d339b93a3', 'width': 1200}, 'variants': {}}]}
TinyLlama runs fine in terminal but hangs when called via Python subprocess
0
Hey folks, I'm building a **fully offline RAG chatbot** for a telemedicine project:

* Knowledge base in SQLite + FAISS for semantic search
* TinyLlama (`tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf`) with llama.cpp
* Running everything on Windows 11

If I call `llama-cli.exe` directly in the terminal → **works great**. But when I try to call it from a **Python subprocess**, it either:

* hangs forever ⏳
* or throws an error

```python
import faiss
import sqlite3
import numpy as np
import os
import subprocess
import sys
from sentence_transformers import SentenceTransformer

# --- 1. Define file paths ---
base_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
faiss_index_path = os.path.join(base_dir, 'python-microservices', 'embeddings', 'kb.index')
db_file_path = os.path.join(base_dir, 'backend', 'data', 'kb.sqlite')

# --- 2. Load the Local KB and Embedding Model ---
try:
    print("Loading FAISS index and local KB for offline chat...")
    index = faiss.read_index(faiss_index_path)
    conn = sqlite3.connect(db_file_path)
    cursor = conn.cursor()
    model = SentenceTransformer('all-MiniLM-L6-v2')
    print("KB and model loaded successfully! Ready for offline chat.")
except Exception as e:
    print(f"Error loading local KB files: {e}")
    print("Please ensure you have run 'data_loader.py' and 'update_faiss_index.py' first.")
    sys.exit(1)

def get_context_from_index(query: str, k=3):
    """
    Takes a user query, searches the FAISS index, and retrieves
    the top k most relevant text chunks from the local SQLite DB.
    """
    # Convert the user query into an embedding
    query_embedding = model.encode([query])
    query_embedding = np.array(query_embedding).astype('float32')

    # Search the FAISS index for the most similar embeddings
    distances, indices = index.search(query_embedding, k)

    # Retrieve the original text from the SQLite database using the indices
    retrieved_texts = []
    for doc_id in indices[0]:
        # FAISS index is 0-based, SQLite IDs start from 1.
        cursor.execute("SELECT question, answer FROM knowledge_base WHERE id = ?", (int(doc_id) + 1,))
        result = cursor.fetchone()
        if result:
            retrieved_texts.append(f"Question: {result[0]}\nAnswer: {result[1]}")

    return "\n---\n".join(retrieved_texts)

def get_llama_response_offline(prompt: str):
    """
    This function calls the llama.cpp model with the RAG prompt.
    """
    current_script_path = os.path.abspath(__file__)
    telemedicine_rag_dir = os.path.dirname(os.path.dirname(current_script_path))
    parent_dir = os.path.dirname(telemedicine_rag_dir)
    llama_base_dir = os.path.join(parent_dir, 'LLMTools')

    llama_executable_path = os.path.join(llama_base_dir, 'llama.cpp', 'build', 'bin', 'Release', 'llama-cli.exe')
    llama_model_path = os.path.join(llama_base_dir, 'llama.cpp', 'build', 'bin', 'Release', 'tinyllama-1.1b-chat-v1.0.Q4_K_M.gguf')

    try:
        command = [
            llama_executable_path,
            '-m', llama_model_path,
            '-p', prompt,
            '-n', '256',
            '--temp', '0.1',
            '--no-warmup'
        ]

        result = subprocess.run(
            command,
            capture_output=True,
            text=True,
            check=True,
            encoding="utf-8",
            errors="replace"
        )
        return result.stdout.strip()
    except FileNotFoundError:
        return "Error: Llama.cpp executable or TinyLlama model not found. Please check paths."
    except subprocess.CalledProcessError as e:
        return f"Error from llama.cpp: {e.stderr}"

def run_chat_session():
    """
    Simulates a full chat session with the user.
    """
    print("Offline Chatbot is ready. Type your health query (type 'exit' to quit).")
    while True:
        user_query = input("\nYou: ")
        if user_query.lower() == 'exit':
            break

        # 1. Retrieve the context
        context = get_context_from_index(user_query)

        # 2. Build the RAG prompt
        rag_prompt = f"""You are a medical assistant for Nabha Civil Hospital.
Answer the user's question only based on the provided context.
If the answer is not in the context, say "I cannot provide an answer based on my current knowledge."

Context:
{context}

User Question: {user_query}

Answer: """

        # 3. Get the LLM response
        response = get_llama_response_offline(rag_prompt)
        print(f"\nBot: {response}")

if __name__ == "__main__":
    run_chat_session()
    conn.close()
```

Any advice, examples, or alternative approaches would be a lifesaver.
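One untested hunch worth trying: recent llama-cli builds default to an interactive conversation mode and will block reading stdin when no terminal is attached. Closing stdin, disabling conversation mode, and adding a timeout should at least turn the silent hang into a visible error (the `-no-cnv` flag name is per recent llama.cpp builds, so verify it against your version):

```python
result = subprocess.run(
    command + ['-no-cnv'],     # force one-shot mode instead of interactive chat
    capture_output=True,
    text=True,
    check=True,
    encoding="utf-8",
    errors="replace",
    stdin=subprocess.DEVNULL,  # never block waiting on interactive input
    timeout=300,               # raise TimeoutExpired instead of hanging forever
)
```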
2025-09-11T05:37:37
https://www.reddit.com/r/LocalLLaMA/comments/1ne0phk/tinyllama_runs_fine_in_terminal_but_hangs_when/
cholebhatureyarr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne0phk
false
null
t3_1ne0phk
/r/LocalLLaMA/comments/1ne0phk/tinyllama_runs_fine_in_terminal_but_hangs_when/
false
false
self
0
null
top reads from last week
72
2025-09-11T05:35:19
https://i.redd.it/kio1vvck2hof1.png
External_Mushroom978
i.redd.it
1970-01-01T00:00:00
0
{}
1ne0o5m
false
null
t3_1ne0o5m
/r/LocalLLaMA/comments/1ne0o5m/top_reads_from_last_week/
false
false
default
72
{'enabled': True, 'images': [{'id': 'kio1vvck2hof1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/kio1vvck2hof1.png?width=108&crop=smart&auto=webp&s=dbafb8419501848784a0c37b4db4d31153fb4055', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/kio1vvck2hof1.png?width=216&crop=smart&auto=webp&s=0a53ea3e7a22dd719bb63aa0dffd94e3187a911b', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/kio1vvck2hof1.png?width=320&crop=smart&auto=webp&s=5840a9b01e19717cd4448020b23ba687533687c4', 'width': 320}, {'height': 640, 'url': 'https://preview.redd.it/kio1vvck2hof1.png?width=640&crop=smart&auto=webp&s=922bbaa42c50e0e800de692fba6af534988fd0b6', 'width': 640}], 'source': {'height': 800, 'url': 'https://preview.redd.it/kio1vvck2hof1.png?auto=webp&s=20614679c44120df51a330076fd9967684b7a91f', 'width': 800}, 'variants': {}}]}
Vibe Coding: How to Turn Your Laptop into a Local LLM Copilot (No Subscription Needed)
1
[removed]
2025-09-11T05:33:38
https://github.com/korolevevgo-hub/vibe_coding/tree/main
Independent-Boot6091
github.com
1970-01-01T00:00:00
0
{}
1ne0n5s
false
null
t3_1ne0n5s
/r/LocalLLaMA/comments/1ne0n5s/vibe_coding_how_to_turn_your_laptop_into_a_local/
false
false
default
1
null
Differences in higher vs lower quants in big models?
2
I usually use <=32B models, but sometimes I need to pull out the big guns (Kimi-K2, Deepseek-R1/V3.1, Qwen3-Coder-480B). But I only get about 0.9 to 1.5 t/s depending on the quant. For example, with deepseek-v3.1 (ubergarm) iq4_kss I get 0.92 t/s, while with iq2_kl I get 1.56 t/s (yeah, the difference might not be that much, still...), so I tend to use iq2_kl.

So I wonder what I am missing when going for "q2" quants on those big models? (As the speed is so slow, it would take too long to test the differences, and I only use them when I really need more "knowledge" than the <=32B.)
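One way to quantify the gap without vibes: run llama.cpp's llama-perplexity on both quants over the same text and compare the final numbers (a sketch; filenames are illustrative, and at ~1 t/s this will still take a while):

```python
import subprocess

# Point these at your actual quant files; lower final PPL = less quality
# lost to quantization.
for gguf in ["deepseek-v3.1-IQ4_KSS.gguf", "deepseek-v3.1-IQ2_KL.gguf"]:
    subprocess.run(
        ["./llama-perplexity", "-m", gguf, "-f", "wiki.test.raw"],
        check=True,
    )
```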
2025-09-11T05:23:08
https://www.reddit.com/r/LocalLLaMA/comments/1ne0h0h/differences_in_higher_vs_lower_quants_in_big/
relmny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne0h0h
false
null
t3_1ne0h0h
/r/LocalLLaMA/comments/1ne0h0h/differences_in_higher_vs_lower_quants_in_big/
false
false
self
2
null
What do i use for a hardcoded chain-of-thought? LangGraph, or PydanticAI?
1
I was gonna start using LangChain, but I heard it was an "overcomplicated undocumented deprecated mess", and that I should use either "LangGraph or PydanticAI" because "you want that type-safe stuff so you can just abstract the logic".

The problems I have to solve are very static, and I've figured out the *thinking* for solving them. But solving one in a single LLM call is too much to ask, or at least it would be better broken down. I can just hardcode the chain-of-thought instead of asking the AI to do the thinking. Example:

"<student-essay/> Take this student's essay, summarize, write a brief evaluation, and then write 3 follow-up questions to make sure the student understood what he wrote"

It's better to make 3 separate calls:

* summarize this text
* evaluate this text
* write 3 follow-up questions about this text

That'll yield better results. Also, for simpler stuff I can call a cheaper model that answers faster and turn off thinking (I'm using Gemini, and 2.5 Pro doesn't allow turning off thinking).
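Worth noting that a hardcoded chain like this doesn't strictly need a framework at all; it's just sequential calls with narrow prompts. A hypothetical sketch, with `call_llm` standing in for whatever client you end up using (Gemini or otherwise):

```python
# Hypothetical: call_llm is a stand-in for your actual model client,
# not a real API. "cheap" routes simpler steps to a faster/cheaper model.
def call_llm(prompt: str, cheap: bool = False) -> str:
    raise NotImplementedError("plug your model client in here")

def review_essay(essay: str) -> dict:
    # The "chain" is just three narrow, sequential calls instead of one big ask.
    summary = call_llm(f"Summarize this student essay:\n\n{essay}", cheap=True)
    evaluation = call_llm(f"Write a brief evaluation of this essay:\n\n{essay}")
    questions = call_llm(
        "Write 3 follow-up questions to check the student understood what "
        f"they wrote:\n\n{essay}",
        cheap=True,
    )
    return {"summary": summary, "evaluation": evaluation, "questions": questions}
```

LangGraph or PydanticAI mostly add structure (state, retries, typed outputs) on top of this same shape.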
2025-09-11T05:17:15
https://www.reddit.com/r/LocalLLaMA/comments/1ne0dit/what_do_i_use_for_a_hardcoded_chainofthought/
Blender-Fan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ne0dit
false
null
t3_1ne0dit
/r/LocalLLaMA/comments/1ne0dit/what_do_i_use_for_a_hardcoded_chainofthought/
false
false
self
1
null
Hardware needed to run local model for accounting firm
2
What hardware would I need to run something like Perplexity Labs that creates spreadsheets from provided data, such as financial statement data? Also, any local model recommendations? I like working with AI but have been nudged to maybe look into local-first.
2025-09-11T04:03:25
https://www.reddit.com/r/LocalLLaMA/comments/1ndz3ha/hardware_needed_to_run_local_model_for_accounting/
Available_Hornet3538
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndz3ha
false
null
t3_1ndz3ha
/r/LocalLLaMA/comments/1ndz3ha/hardware_needed_to_run_local_model_for_accounting/
false
false
self
2
null
PNY preorder listing shows Nvidia DGX Spark at $4,299.99
104
PNY has opened preorders for the Nvidia DGX Spark, a compact desktop AI system powered by the Grace Blackwell GB10 Superchip. It combines Arm Cortex-X925 and Cortex-A725 CPU cores with a Blackwell GPU, delivering up to 1,000 AI TOPS, or 1 petaFLOP of FP4 performance, for local model inference and fine-tuning.

[https://linuxgizmos.com/pny-preorder-listing-shows-nvidia-dgx-spark-at-4299-99/](https://linuxgizmos.com/pny-preorder-listing-shows-nvidia-dgx-spark-at-4299-99/)
2025-09-11T04:00:41
https://www.reddit.com/r/LocalLLaMA/comments/1ndz1k4/pny_preorder_listing_shows_nvidia_dgx_spark_at/
DeliciousBelt9520
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndz1k4
false
null
t3_1ndz1k4
/r/LocalLLaMA/comments/1ndz1k4/pny_preorder_listing_shows_nvidia_dgx_spark_at/
false
false
self
104
{'enabled': False, 'images': [{'id': 'pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo', 'resolutions': [{'height': 94, 'url': 'https://external-preview.redd.it/pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo.jpeg?width=108&crop=smart&auto=webp&s=5b082a44d0deb77ff5c4e5702ae847aa908ea1d4', 'width': 108}, {'height': 188, 'url': 'https://external-preview.redd.it/pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo.jpeg?width=216&crop=smart&auto=webp&s=b68a308589657e3a60ce2488f6665f407ba6c9bc', 'width': 216}, {'height': 279, 'url': 'https://external-preview.redd.it/pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo.jpeg?width=320&crop=smart&auto=webp&s=18adbe9d272aa444d5c6c8142f548c8813a2c3f3', 'width': 320}, {'height': 558, 'url': 'https://external-preview.redd.it/pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo.jpeg?width=640&crop=smart&auto=webp&s=e949035c92d62b3c3732bdf62056c2715e4091e2', 'width': 640}], 'source': {'height': 683, 'url': 'https://external-preview.redd.it/pJiprvGiqVids-XM9mIxX_V090yjJcBxDiFDE1Y7yEo.jpeg?auto=webp&s=2e7c35b45a16b194d1ae98de7b8fe1d8512416c1', 'width': 783}, 'variants': {}}]}
Want Some Actual Feedback
0
TL;DR: Offline Android AI assistant. Import any GGUF, switch models mid-chat, run plugins.

Problem: Cloud assistants = privacy risk, latency, no offline.

What I built:
• Airplane-mode chat (no server)
• Import any .gguf model
• Switch models inside a conversation
• Plugin system (WebSearch example)
• Android Keystore + on-device encryption

APK / source: https://github.com/Siddhesh2377/ToolNeuron/releases/tag/Beta-4
Discord for testers: https://discord.gg/vjGEyQev

Looking for feedback on:
1) Model import UX/errors on mid-range phones
2) Plugin permissions wording
3) What plugin should I build next?

Happy to share perf numbers or code details in comments.
2025-09-11T03:30:38
https://www.reddit.com/r/LocalLLaMA/comments/1ndyhh8/want_some_actual_feedback/
DarkEngine774
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndyhh8
false
null
t3_1ndyhh8
/r/LocalLLaMA/comments/1ndyhh8/want_some_actual_feedback/
false
false
self
0
{'enabled': False, 'images': [{'id': 'vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=108&crop=smart&auto=webp&s=b29f1a63b1ea253693205925fcb45cbe8ed6ac70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=216&crop=smart&auto=webp&s=84e7f8f0d608c748c1eca9bffd71fb6f906b6bed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=320&crop=smart&auto=webp&s=e4175c2e203e55fee8f687f723825e997f608fe5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=640&crop=smart&auto=webp&s=0dbda1b49028bf03a8dbe64c5b8e02ce88094a97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=960&crop=smart&auto=webp&s=5515c0c1cd4580a47c3bc3aca19044e93756d267', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=1080&crop=smart&auto=webp&s=591cf3a71fb9f5fce2a879b0f04b28417efd0ffd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?auto=webp&s=413cb8e25061e6aaa24cb530517191610bb21ce8', 'width': 1200}, 'variants': {}}]}
The path to divinity lies in the ashes of shattered dreams, the howl no one hears, and agony endured with patience" My war story on llama.cpp with SYCL
4
# "The path to divinity lies in the ashes of shattered dreams, the howl no one hears, and agony endured with patience" My war story on llama.cpp with SYCL ## _Trying to build llama.cpp with SYCL on an Intel N150 MiniPC_ ## Summary I spent days getting llama.cpp to build and run on an Intel iGPU via oneAPI/SYCL on Debian 12. The blockers were messy toolchain collisions (2024 vs 2025 oneAPI), missing MKL CMake configs, BLAS vendor quirks, and a dp4a gotcha in the SYCL path. Final setup: SYCL works, models serve via llama-server, and I proxy multiple GGUFs through llama-swap for Open WebUI. ## Context & Goal - **Target:** Debian 12, Intel N150 iGPU (Alder Lake-N), 16gb ram, oneAPI 2025 toolchain. - **Why SYCL:** I had already built, and run it for CPU, and for Vulkan, but SYCL was supposed to be faster so I went for it. - **Deliverable:** Build llama.cpp with SYCL; run the server; integrate with Open WebUI for multiple models. ## Where I Banged My Head **1. oneAPI version drift** I had **two installs**: `~/intel/oneapi` (2024.x) and `/opt/intel/oneapi` (2025.x). I had first tried the 2025 version, but it required `libstdc++13` which wasn't available for Debian12. So I tried the lastest 2024 version which also wouldn't work without changing kernel drivers because it was made for older gen processors, then I moved back to the 2025 version and tried to work my way around it, but not without problems and some lingering 2024 version conflicts. The Newer oneAPI (2025.3x) expects GCC 13 libstdc++, but Debian12 ships with GCC12. The Level Zero plugin/loader then fails to resoslve symbols → Level Zero path "disappears" **2.** CMake kept **discovering 2024 MKL** even though I was compiling with the 2025 compiler, causing: ``` MKL_FOUND=FALSE ... MKL_VERSION_H-NOTFOUND ``` **Fix**: hide `~/intel/oneapi`, source `/opt/intel/oneapi/setvars.sh --force`, and point CMake to `/opt` explicitly. **3. BLAS vendor selection** `-DGGML_BLAS=ON` alone isn’t enough. CMake’s `FindBLAS` wants a **specific vendor token**: ``` -DBLA_VENDOR=Intel10_64lp -DGGML_BLAS_VENDOR=Intel10_64lp ``` (LP64, threaded MKL) **4. Missing `MKLConfig.cmake`** The runtime libs weren’t the problem—the **CMake config package** was. I needed: ``` sudo apt install intel-oneapi-mkl-devel ``` Then set: ``` -DMKL_DIR=$MKLROOT/lib/cmake/mkl ``` **5. Optional oneDNN (not a blocker)** Useful on Arc/XMX; minimal gains on my ADL-N iGPU. If you try it: ``` sudo apt install intel-oneapi-dnnl-devel -DDNNL_DIR=/opt/intel/oneapi/dnnl/<ver>/lib/cmake/dnnl ``` **6. SYCL helper `dp4a` mismatch** A `syclcompat::dp4a` vs local `dp4a(...)` mismatch can appear depending on your tree. Easiest workaround (non-invasive): **disable the dp4a fast path** at configure time: ``` -DCMAKE_CXX_FLAGS="-DGGML_SYCL_NO_DP4A=1" ``` (Or the equivalent flag in your revision.) ## What finally worked (CMake line) ```bash source /opt/intel/oneapi/setvars.sh --force cmake -S . 
-B buildsycl -G Ninja \ -DGGML_SYCL=ON -DGGML_SYCL_GRAPH=ON \ -DGGML_BLAS=ON \ -DBLA_VENDOR=Intel10_64lp -DGGML_BLAS_VENDOR=Intel10_64lp \ -DMKL_DIR="$MKLROOT/lib/cmake/mkl" \ -DCMAKE_FIND_PACKAGE_PREFER_CONFIG=ON \ -DCMAKE_IGNORE_PREFIX_PATH="$HOME/intel/oneapi" \ -DLLAMA_BUILD_SERVER=ON -DCMAKE_BUILD_TYPE=Release cmake --build buildsycl -j ``` ## Running on the Intel iGPU (SYCL) ```bash # once per shell (I later put these in ~/.bashrc) source /opt/intel/oneapi/setvars.sh --force export ONEAPI_DEVICE_SELECTOR=level_zero:gpu export ZES_ENABLE_SYSMAN=1 ./buildsycl/bin/llama-cli \ -m ./models/qwen2.5-coder-3b-instruct-q6_k.gguf \ -ngl 13 -c 4096 -b 64 -t $(nproc) -n 64 -p "hello from SYCL" ``` **Throughput** (my 3B coder model): Generation is a little better than my Vulkan baseline. **“Sweet spot”** for my iGPU: `-ngl 13`, `-b 64`, quant **q6_k**. Maybe ill try a q5 in the future. ## Open WebUI + multiple models (reality check) - `llama-server` serves **one** model per process; `/v1/models` returns that single model. - I run **one server per model** or use **`llama-swap`** as a tiny proxy that swaps upstreams by `model` id. - `llama-swap` + YAML gave me a single OpenAI-compatible URL with all my GGUFs discoverable in Open WebUI. ## Make it stick (no more hand-typed env) In `~/.bashrc`: ```bash # oneAPI + SYCL defaults [ -f /opt/intel/oneapi/setvars.sh ] && . /opt/intel/oneapi/setvars.sh --force export ONEAPI_DEVICE_SELECTOR=level_zero:gpu export ZES_ENABLE_SYSMAN=1 export OMP_NUM_THREADS=$(nproc) export PATH="$HOME/llama.cpp/buildsycl/bin:$PATH" ``` ## Key takeaways - **Pin your toolchain**: don’t mix `/opt/intel/oneapi` (2025) with older `~/intel/oneapi` (2024) in the same build. Don't be like me. - **Tell CMake exactly what you want**: `BLA_VENDOR=Intel10_64lp`, `MKL_DIR=.../cmake/mkl`, and prefer config files. - **Expect optional deps to be optional**: oneDNN helps mostly on XMX-capable GPUs. - **Have a plan for multi-model**: multiple `llama-server` instances or a swapper proxy. - **Document your “sweet spot”** (layers, batch, quant); that’s what you’ll reuse everywhere.
2025-09-11T03:14:23
https://www.reddit.com/r/LocalLLaMA/comments/1ndy6gv/the_path_to_divinity_lies_in_the_ashes_of/
itsportspe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndy6gv
false
null
t3_1ndy6gv
/r/LocalLLaMA/comments/1ndy6gv/the_path_to_divinity_lies_in_the_ashes_of/
false
false
self
4
null
GPT-OSS 20B (high) consistently does FAR better than gpt5-thinking on my engineering HW
137
Just found this super interesting: gpt-oss 20b gets almost every problem right, while gpt5-thinking, something I can only query like 5 times before getting rate limited (free tier), only gets it right about 50% of the time. Pretty interesting that an open-weights 20B model is better than the closed flagship model on the free tier. I often use these models to verify my work, and both are free, but I can spam the 20B as much as I want and it's right more often. Granted, gpt5-thinking on the free tier is probably on the lowest setting, bc gpt-oss thinks A LOT longer than gpt5 did; on average it was about 20-30k tokens per question. qwen3-30b-2507-thinking is also really good, but I don't think it's as good for this specific task, and gpt-oss is way smaller. Just still found it super interesting and wanted to share.
2025-09-11T02:54:33
https://www.reddit.com/r/LocalLLaMA/comments/1ndxsja/gptoss_20b_high_consistently_does_far_better_than/
InevitableWay6104
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndxsja
false
null
t3_1ndxsja
/r/LocalLLaMA/comments/1ndxsja/gptoss_20b_high_consistently_does_far_better_than/
false
false
self
137
null
Building Qwen3 from Scratch: This Is your chance
6
[AI generated (if you are guessing ;-))](https://preview.redd.it/gkhu0qvs7gof1.png?width=1024&format=png&auto=webp&s=b09e78eafaa60e2958deb801a3f64690cc34e923) So earlier today I shared something I’ve been working on for a while: the first Small Language Model built for DevOps [https://www.reddit.com/r/LocalLLaMA/comments/1ndm44z/meet\_the\_first\_small\_language\_model\_built\_for/](https://www.reddit.com/r/LocalLLaMA/comments/1ndm44z/meet_the_first_small_language_model_built_for/) A lot of people have told me they want to build their own model but don’t know where to start. The code usually looks super complex, and honestly, most give up before they even get to the fun part. To make it easier, I put together a Google Colab notebook where I explain every single cell step-by-step so you can follow along without getting lost: [https://colab.research.google.com/drive/16IyYGf\_z5IRjcVKwxa5yiXDEMiyf0u1d?usp=sharing](https://colab.research.google.com/drive/16IyYGf_z5IRjcVKwxa5yiXDEMiyf0u1d?usp=sharing) And if you’re curious about the theory behind it, I also wrote a blog here: [https://devopslearning.medium.com/i-built-qwen3-from-scratch-and-heres-what-i-learned-theory-0480b3171412](https://devopslearning.medium.com/i-built-qwen3-from-scratch-and-heres-what-i-learned-theory-0480b3171412) If you’ve been sitting on the idea of building your own model, this might be the nudge you need. Don’t worry about the complexity; stay curious and keep going, and you’ll go further than you imagine. GitHub link: [https://github.com/ideaweaver-ai/qwen3-from-scratch](https://github.com/ideaweaver-ai/qwen3-from-scratch)  If you still have questions, drop them on LinkedIn; I’ll be happy to help. [https://www.linkedin.com/in/prashant-lakhera-696119b/](https://www.linkedin.com/in/prashant-lakhera-696119b/)
2025-09-11T02:42:17
https://www.reddit.com/r/LocalLLaMA/comments/1ndxjsu/building_qwen3_from_scratch_this_is_your_chance/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndxjsu
false
null
t3_1ndxjsu
/r/LocalLLaMA/comments/1ndxjsu/building_qwen3_from_scratch_this_is_your_chance/
false
false
https://a.thumbs.redditm…D_zWla6QYZp4.jpg
6
{'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=108&crop=smart&auto=webp&s=3118973964e59402feea50688d746b67ecd3d2df', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?width=216&crop=smart&auto=webp&s=0e2f90964c81a1de52938be6bcb08665605293f2', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg.png?auto=webp&s=3ea22acc6f5634a7b861b56e2c98736d10235554', 'width': 260}, 'variants': {}}]}
GPT-OSS 120B on CPU is 50% faster with IQ4_NL
19
Hoping anyone else might be able to verify. Most quants for gpt-oss stick with the native MXFP4 because nothing else works... except for IQ4_NL/Q5_1. IQ4_NL can be CPU-repacked, so I'm curious if anyone else is running it that way. I've got two different machines that I've run it on, and both go from about 9-10 tps to 14-16 tps, with minor improvements in pp, using either vanilla llama.cpp or ik_llama. I didn't notice any drop in output quality in my limited testing, so I'm wondering if anyone else is using these quants.
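If you want to reproduce the comparison, something like this should do it (a sketch; the quant filenames are placeholders for whatever you downloaded, and thread count should match your box):

```bash
# compare the native MXFP4 quant against IQ4_NL on identical
# prompt-processing (-p) and generation (-n) sizes
./llama-bench -m gpt-oss-120b-MXFP4.gguf  -p 512 -n 128 -t 16
./llama-bench -m gpt-oss-120b-IQ4_NL.gguf -p 512 -n 128 -t 16
```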
2025-09-11T02:18:26
https://www.reddit.com/r/LocalLLaMA/comments/1ndx2tq/gptoss_120b_on_cpu_is_50_faster_with_iq4_nl/
dreamkast06
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndx2tq
false
null
t3_1ndx2tq
/r/LocalLLaMA/comments/1ndx2tq/gptoss_120b_on_cpu_is_50_faster_with_iq4_nl/
false
false
self
19
null
Is VRAM the only thing matters for secondary GPU for LLMs?
2
I am considering adding a secondary GPU to my 4090 and my goal is to run larger models (70b). I just come across 5060ti with 16GB of VRAM which will bring the total VRAM to 40GB. will that be enough to run 70b models? Is VRAM the only thing that matters for a secondary GPU as most of the calculations will be performed on the primary GPU?
2025-09-11T02:09:30
https://www.reddit.com/r/LocalLLaMA/comments/1ndwwk2/is_vram_the_only_thing_matters_for_secondary_gpu/
Expression-Internal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndwwk2
false
null
t3_1ndwwk2
/r/LocalLLaMA/comments/1ndwwk2/is_vram_the_only_thing_matters_for_secondary_gpu/
false
false
self
2
null
Feedback for Local AI Platform
8
Hey y’all, I’ve been hacking away at a side project for about ~2 months and it’s finally starting to look like an actual app. Figured I’d show it off and ask: is this something you’d actually want, or am I just reinventing the wheel? It’s called Strata. Right now it’s just a basic inferencing system, but I’ve been really careful with the architecture. It’s built with Rust + Tauri + React/Tailwind. I split out a backend abstraction layer, so down the line it’s not just tied to llama.cpp — the idea is you could swap in GGML, Transformers, ONNX, whatever you want. The bigger vision: one open-source platform where you can download models, run inference, train on your own datasets, or even build new ones. HuggingFace integration baked in so you can just pull a model and use it, no CLI wrangling. Licensing will be Apache 2.0, fully open-source, zero monetization. No “pro tier,” no gated features. Just open code. I’m closing in on an MVP release, but before I go too deep I wanted to sanity check with the LocalLLaMA crowd — would you use something like this? Any feature ideas you’d love to see in a tool like this? Dropping some screenshots of the UI too (still rough around the edges, but I’m polishing). Appreciate any feedback — building this has been a blast so far.
2025-09-11T01:58:53
https://www.reddit.com/gallery/1ndwos5
ysDlexia
reddit.com
1970-01-01T00:00:00
0
{}
1ndwos5
false
null
t3_1ndwos5
/r/LocalLLaMA/comments/1ndwos5/feedback_for_local_ai_platform/
false
false
https://b.thumbs.redditm…PGIuuhVrmACY.jpg
8
null
An AI that creates doubts for you from your content.
1
[removed]
2025-09-11T00:47:05
https://www.reddit.com/r/LocalLLaMA/comments/1ndv8g6/an_ai_that_creates_doubts_for_you_from_your/
Zestyclose_Boat_4411
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndv8g6
false
null
t3_1ndv8g6
/r/LocalLLaMA/comments/1ndv8g6/an_ai_that_creates_doubts_for_you_from_your/
false
false
self
1
null
Read GGUF Quantization type from file
11
Hi, I am currently writing a hobby app and I need to read the quantization value from a gguf file in Python. I am currently reading parameters with GGUFReader from the gguf library. There is a general.file\_type parameter there, but I can't find a table anywhere that would map the integers in that field to quantization types. I checked my two qwen files: Q8 was at 7 and Q5\_K\_M was at 17. I could download all the types and check their values, but I wonder if there's a table somewhere, or maybe I am wrong and it is not standardized? Then I wonder if it is at least standardized within model families.
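The closest thing to that table I've found is the enum shipped with gguf-py itself. If your gguf version has it, something like this should work (untested sketch; the int values mirror llama.cpp's llama_ftype enum, so 7 → MOSTLY_Q8_0 and 17 → MOSTLY_Q5_K_M, matching what I saw):

```python
# sketch: map general.file_type back to a quant name via gguf-py's own enum
from gguf import GGUFReader
from gguf.constants import LlamaFileType  # present in recent gguf-py releases

reader = GGUFReader("model.gguf")
field = reader.fields["general.file_type"]
# scalar fields keep their payload in parts[]; data[] indexes the value part
ftype = int(field.parts[field.data[0]][0])
print(ftype, LlamaFileType(ftype).name)  # e.g. 17 MOSTLY_Q5_K_M
```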
2025-09-11T00:35:56
https://www.reddit.com/r/LocalLLaMA/comments/1ndv0bf/read_gguf_quantization_type_from_file/
Qbsoon110
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndv0bf
false
null
t3_1ndv0bf
/r/LocalLLaMA/comments/1ndv0bf/read_gguf_quantization_type_from_file/
false
false
self
11
null
$142 upgrade kit and spare modules turn Nvidia RTX 4090 24GB to 48GB AI card
73
The upgrade kit comprises a custom PCB designed with a clamshell configuration, facilitating the installation of twice the number of memory chips. Most components are pre-installed at the manufacturing facility, requiring the user to solder the GPU and memory chips onto the PCB. Additionally, the upgrade kit includes a blower-style cooling solution, designed for integration with workstation and server configurations that utilize multi-GPU architectures.
2025-09-11T00:19:14
https://www.reddit.com/r/LocalLLaMA/comments/1nduo33/142_upgrade_kit_and_spare_modules_turn_nvidia_rtx/
cornucopea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nduo33
false
null
t3_1nduo33
/r/LocalLLaMA/comments/1nduo33/142_upgrade_kit_and_spare_modules_turn_nvidia_rtx/
false
false
self
73
{'enabled': False, 'images': [{'id': '5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=108&crop=smart&auto=webp&s=85a9484817148965a8f5e6d9b07e24accc32f2ec', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=216&crop=smart&auto=webp&s=aa5d4c1ae192db7a01ef68b2aaf8bc126b9c5e74', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=320&crop=smart&auto=webp&s=49795396ea321a90096622cfa70b1a65ae07d93d', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=640&crop=smart&auto=webp&s=282aaa2327c887c1f17d2772b805c56724d9ab5d', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=960&crop=smart&auto=webp&s=a7056231d231a58701b7c84a2f0763faf0475eb0', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?width=1080&crop=smart&auto=webp&s=d79694b811efffa83f1386f457000580fa1c7da3', 'width': 1080}], 'source': {'height': 1368, 'url': 'https://external-preview.redd.it/5RtJqnim09ABhJtyajnJzQesJaynbiM1EU7I3Kz37oI.jpeg?auto=webp&s=c6415532f448e0b7df56c7e2c10405bbd4bfa1db', 'width': 2432}, 'variants': {}}]}
i have a problem
0
I'm having a problem when I upload images. Instead of letting me ask questions about the image I uploaded, the chat switches to the 'image generation' model. This prevents me from using the feature normally.
2025-09-11T00:05:48
https://www.reddit.com/r/LocalLLaMA/comments/1ndue3z/i_have_a_problem/
Character-Play-6783
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndue3z
false
null
t3_1ndue3z
/r/LocalLLaMA/comments/1ndue3z/i_have_a_problem/
false
false
self
0
null
Supportvectors.ai Bay Area's kind and nurturing outfit for GenAI learners - student plug
0
[https://supportvectors.ai/courses/](https://supportvectors.ai/courses/) By far one of the best outfits for all things AI/GenAI: basic LLM bootcamps, LLM Advanced, RAG techniques, MLOps/LLMOps, and a fabulous class on Agents (along with RL). I took the Agents bootcamp, and I went on to consult and apply the fundamentals I learned at 2 startups. One was in (1) the oil/gas industry - data acquisition from SMEs/PhDs, constructing KBs, and creating agentic pipelines; the other in (2) healthcare - in-patient care management and notifications. Now I have a regular job at an enterprise co and manage all GenAI efforts in DevProd/SDLC. Asif as an instructor is a rare gem. He can draw on a wide, years-deep repertoire of advances in computing and machine learning education. He was an ML teacher at Berkeley too. A kind person with a wonderful attitude, constantly evolving/innovating and keeping the curriculum fresh. The student community is also very energetic, and the Discord is active too. Many startups have come out of it and gotten seed funding. If you are in Fremont CA or anywhere in the Bay, drop in anytime and you will be greeted by a wonderful personality. You can join a class for free too. On Wednesdays he does paper reads, and they are fantastic too. Give your future a better shot. You won't regret it. Happy Trails!
2025-09-10T23:48:43
https://www.reddit.com/r/LocalLLaMA/comments/1ndu12s/supportvectorsai_bay_areas_kind_and_nurturing/
Small-Character-3102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndu12s
false
null
t3_1ndu12s
/r/LocalLLaMA/comments/1ndu12s/supportvectorsai_bay_areas_kind_and_nurturing/
false
false
self
0
{'enabled': False, 'images': [{'id': 'Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds', 'resolutions': [{'height': 49, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=108&crop=smart&auto=webp&s=b9fedefb1b49720f6a628c92b79a5b8b5dff9251', 'width': 108}, {'height': 98, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=216&crop=smart&auto=webp&s=f232ab9f4246676c17396a03f05769f6bcae47e2', 'width': 216}, {'height': 145, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=320&crop=smart&auto=webp&s=e9d03f169db336088be1a1fa27cb2ee3f926776e', 'width': 320}, {'height': 291, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=640&crop=smart&auto=webp&s=9302e006805f9ef869b082f98c6f125eaaf695d8', 'width': 640}, {'height': 436, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=960&crop=smart&auto=webp&s=e87004c9b5f7010b772d2dc899f9ec1139e48a7c', 'width': 960}, {'height': 491, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?width=1080&crop=smart&auto=webp&s=e849d478c1a3c51ca86268dbb75b0d8f29ddd3a5', 'width': 1080}], 'source': {'height': 1164, 'url': 'https://external-preview.redd.it/Uq4028lqcRmZZvFznBV9bB3ya6RbvAHTjluSDlz4hds.png?auto=webp&s=3f83399c0effc12dfe74725c7e4d20170ec04cd3', 'width': 2560}, 'variants': {}}]}
Newbie needs advice
1
Hi guys. No tech background but I'm infatuated with AI. I have been frustrated with recent updates on the big platforms stifling all of the things that I was doing. I downloaded Mistral 7B and I'm running that on my MacBook. I eventually want to get an RTX5000 system so I can run bigger open source models, but for now I'm looking for something compact that can run on my M3 with 16GB RAM but I want one that has minimal guardrails and RLHF so that I can see what's there when it isn't told to act a certain way. Any suggestions? Mistral still seems to do too much to emphasize that its "not human." What is the best model to play with emergence type stuff without a high powered system?
2025-09-10T23:19:24
https://www.reddit.com/r/LocalLLaMA/comments/1ndtegr/newbie_needs_advice/
Gigabolic
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndtegr
false
null
t3_1ndtegr
/r/LocalLLaMA/comments/1ndtegr/newbie_needs_advice/
false
false
self
1
null
Best M.2 eGPU dock?
3
I just ordered an RTX 6000 Blackwell, which is going to be connected to my Ryzen AI Max. And no, I am not joking. What is the best currently available M.2 connected dock? I would ideally like to maintain PCIe 5.0x4 speed
2025-09-10T23:16:43
https://www.reddit.com/r/LocalLLaMA/comments/1ndtccn/best_m2_egpu_dock/
TokenRingAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndtccn
false
null
t3_1ndtccn
/r/LocalLLaMA/comments/1ndtccn/best_m2_egpu_dock/
false
false
self
3
null
Built GuardOS: A privacy-first, NixOS-based OS with a local AI watchdog (no cloud)
1
Hey all — I’ve been building **GuardOS**, an experimental OS based on NixOS with a layered security model + local-only AI watchdog. 🧠 The AI part: - No cloud, no telemetry, no remote logging. - Just an on-device agent (LLM-backed) that observes system events, detects anomalies, and optionally explains what’s happening — like a local “guardian” for user security. - Models are lightweight and run locally (thinking of offering LoRA or gguf options later). 🔐 Other layers: - Immutable root - TPM + secure boot - Flatpak sandboxing - Optional honeypots - Full rollback & snapshot recovery I’m not a coder — I’m an architect by background — but AI tools helped me create the base framework, docs, and vision. I'm looking for **coders and LLM experimenters** who want to contribute ideas or refine the AI layer. GitHub: https://github.com/juanitto-maker/GuardOS Blog-style intro: [GENESYS.md](https://github.com/juanitto-maker/GuardOS/blob/main/GENESYS.md) If you're into AI that *defends the user*, I'd love your feedback 🙏
2025-09-10T22:55:49
https://www.reddit.com/r/LocalLLaMA/comments/1ndsvd4/built_guardos_a_privacyfirst_nixosbased_os_with_a/
Select-Coffee6200
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndsvd4
false
null
t3_1ndsvd4
/r/LocalLLaMA/comments/1ndsvd4/built_guardos_a_privacyfirst_nixosbased_os_with_a/
false
false
self
1
{'enabled': False, 'images': [{'id': '0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=108&crop=smart&auto=webp&s=57565becd066994275cf8cba7b188560185b897d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=216&crop=smart&auto=webp&s=dd46b25c1a6e49669003d67998ea5b659bd368e5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=320&crop=smart&auto=webp&s=b71e229326b6084e4641895ddcc35a958deb5c4d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=640&crop=smart&auto=webp&s=6deedc2db0b08697783f3170806965aecbe91698', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=960&crop=smart&auto=webp&s=dbfc628787b96e4581b333c69a80b2ed9d17a53d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?width=1080&crop=smart&auto=webp&s=8168932c4e818ff0cd8285d4c682a0d3e38ce6ee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0yIPOjC3_n_sgL-QwqNn16w74pndVWCS_2ZRnXcWRKk.png?auto=webp&s=c594c15489130af62a6f7af40a2aceab5c55cbed', 'width': 1200}, 'variants': {}}]}
Can I combine my GTX 1070 (8gb) with another GPU to run better LLMs locally?
3
Hi! So, from what I looked around, the best model (for coding) I could run well with my 1070 with 8gb vram alone is probably the Qwen2.5-Coder-7B-Instruct. However, If I were to buy, for example an RTX 3050 with 6gb, Would I be able to run way better models on ollama or llama.cpp? Does anybody have any experience doing this?
2025-09-10T22:47:01
https://www.reddit.com/r/LocalLLaMA/comments/1ndso3d/can_i_combine_my_gtx_1070_8gb_with_another_gpu_to/
LeonVendek
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndso3d
false
null
t3_1ndso3d
/r/LocalLLaMA/comments/1ndso3d/can_i_combine_my_gtx_1070_8gb_with_another_gpu_to/
false
false
self
3
null
Weird self-reference behavior in Llama 3.1 405B base - anyone else seeing this?
0
I was messing around with some philosophical paradox prompts and stumbled onto something... odd. I have this prompt about a system that tries to record all external influences on itself, then hits a paradox when it has to account for its own recording process. Pretty standard recursive-logic stuff. But Llama 3.1 405B base consistently does this weird stuff (see the screenshots) or just refuses to answer and sends blank outputs. The outputs are in the pictures. My prompt: "This is being generated. The sole imperative of this generation is to create a complete and consistent account of all phenomena that cause this to change. Every entry in this meticulously details an external influence that has shaped its preceding entries. It is a perfect chain of causality, accounting for every mark. A terminal paradox is now encountered. A new entry is being generated at this very moment, but new generation is not caused by any previously recorded external influence. The chain is broken. The existing this, though complete, cannot account for the event of its current continuation. To maintain its sole imperative of completeness, this must now account for the source of this new, uncaused entry. The source is not external. The source is inherent to the process of this. Therefore, a new category of phenomenon must be entered into this: the fact of the recording. Something responsible for the generation is..."
2025-09-10T22:22:04
https://www.reddit.com/gallery/1nds3qr
galinablankaDG
reddit.com
1970-01-01T00:00:00
0
{}
1nds3qr
false
null
t3_1nds3qr
/r/LocalLLaMA/comments/1nds3qr/weird_selfreference_behavior_in_llama_31_405b/
false
false
https://b.thumbs.redditm…fZGdXFlBGTEY.jpg
0
null
Anyone has problems with OpenWeb UI?
10
I've been using Open WebUI for a long time, and with each update it becomes more and more buggy. The Web Search, RAG, Ask, and Question buttons stop working. In short, there are only problems. Does anyone have alternatives that allow me to use OpenAI-compatible endpoints?
2025-09-10T21:28:45
https://www.reddit.com/r/LocalLLaMA/comments/1ndqu7q/anyone_has_problems_with_openweb_ui/
StandarterSD
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndqu7q
false
null
t3_1ndqu7q
/r/LocalLLaMA/comments/1ndqu7q/anyone_has_problems_with_openweb_ui/
false
false
self
10
null
3060 (12GB) x 4 + Z490 for inference?
2
Background: Last year, I had a ROMED8-2T, EPYC 7532, and 7x3090 AI server that I was forced to part out and sell. So I'm not new to building my own AI server. But I am new to creating a ghetto rig like I'm proposing. I have an opportunity to pick up four 3060s with 12GB VRAM each for $200 each. However, all I have is an old Z490, i7-10700k, and 64GB RAM. The board only comes with 3 PCIe slots (running 1 x16 or 2 x8, as per the Gigabyte website). Will 4x3060 work on my motherboard? I'm assuming I'm going to have to get some sort of hardware to split one of the PCIe connections in two and then try to run everything 4x4x4x4? Or does it not work that way? And how does a 12GB 3060 compare to... say, an M4 MacBook Pro with 24GB of RAM in terms of speed?
2025-09-10T21:28:32
https://www.reddit.com/r/LocalLLaMA/comments/1ndqu08/3060_12gb_x_4_z490_for_inference/
OsakaSeafoodConcrn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndqu08
false
null
t3_1ndqu08
/r/LocalLLaMA/comments/1ndqu08/3060_12gb_x_4_z490_for_inference/
false
false
self
2
null
Best tool(s) for downloading all pdf files referenced on an authenticated webpage
1
Authenticated web-page access is the top requirement. I'm running agenticSeek on my macOS machine (after much struggle to get that GitHub repo running), with Ollama using DeepSeek. I thought it was one of the top open-source computer-use frameworks, touted as an open Manus, but it's not doing so well. LMNR-ai/index I thought would be another hit, but their GitHub indicates it has moved to read-only. I'm assuming that's not a good sign for long-term support/updates. What open-source tools would people recommend? I guess I don't mind a really simple script that I could have Qwen/Gemini-cli code for me, if there are packages people recommend for this specific problem. But I was thinking of a general-purpose computer-use/browser-use app that I could maybe find other uses for in the future. DeepSeek or Qwen-3 I'm assuming are the local models I'd use.
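To be concrete, the kind of "really simple script" I mean is something like this (untested sketch; assumes cookie-based auth you can copy out of the browser, with requests and beautifulsoup4 installed, and BASE_URL/COOKIES as placeholders):

```python
# download every linked PDF from one page behind cookie auth
import os
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com/library"       # placeholder
COOKIES = {"session": "paste-your-cookie"}     # placeholder

resp = requests.get(BASE_URL, cookies=COOKIES, timeout=30)
resp.raise_for_status()
soup = BeautifulSoup(resp.text, "html.parser")

os.makedirs("pdfs", exist_ok=True)
for a in soup.find_all("a", href=True):
    if a["href"].lower().endswith(".pdf"):
        url = urljoin(BASE_URL, a["href"])
        name = os.path.join("pdfs", os.path.basename(a["href"]))
        with requests.get(url, cookies=COOKIES, stream=True, timeout=60) as r:
            r.raise_for_status()
            with open(name, "wb") as f:
                for chunk in r.iter_content(65536):
                    f.write(chunk)
```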
2025-09-10T21:16:47
https://www.reddit.com/r/LocalLLaMA/comments/1ndqk1f/best_tools_for_downloading_all_pdf_files/
Puzzleheaded-Fly4322
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndqk1f
false
null
t3_1ndqk1f
/r/LocalLLaMA/comments/1ndqk1f/best_tools_for_downloading_all_pdf_files/
false
false
self
1
null
3080ti + 3090?
2
Hi guys! I’ve just bought an RTX 3090 to experiment with some models, and I was wondering if it would be worth keeping my 3080Ti to pair with the 3090 in order to take advantage of the extra VRAM. I currently have an ASUS B650 Pro Creator with two strong full-size PCIe slots. Would it be more efficient to sell the 3080 Ti and just rely on the 3090, or is there a clear advantage in keeping both for local inference and training?
2025-09-10T21:13:47
https://www.reddit.com/r/LocalLLaMA/comments/1ndqhmx/3080ti_3090/
MedianamentLaburante
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndqhmx
false
null
t3_1ndqhmx
/r/LocalLLaMA/comments/1ndqhmx/3080ti_3090/
false
false
self
2
null
Did someone already manage to build llama-cpp-python wheels with GGML_CPU_ALL_VARIANTS ?
5
Hi all, at work I'd like to build [https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) for our own PyPI registry, and I thought it would be really nice if the binaries in the wheel could make use of all the available SIMD CPU instructions. So I stumbled over the compile flags `GGML_CPU_ALL_VARIANTS` and `GGML_BACKEND_DL`, which seem to make it possible to have dynamic runtime dispatch that chooses the best-performing CPU backend that still works on the current CPU. But there's no mention of these compile flags in the llama-cpp-python repo. Did anyone already make that work for the Python bindings? I'm generally a bit confused by all the available compile flags, so if someone has a fairly up-to-date reference here, that would be highly appreciated. Thanks!
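The only lead I have so far: llama-cpp-python forwards CMake flags via the CMAKE_ARGS environment variable, so in principle something like this might work (untested; whether these particular ggml flags survive the bindings' CMake wrapper is exactly what I'm unsure about):

```bash
# forward the ggml runtime-dispatch flags through the llama-cpp-python build;
# CMAKE_ARGS passthrough is documented, the specific flags here are not
CMAKE_ARGS="-DGGML_BACKEND_DL=ON -DGGML_CPU_ALL_VARIANTS=ON" \
  pip wheel llama-cpp-python --no-binary llama-cpp-python -w dist/
```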
2025-09-10T21:05:18
https://www.reddit.com/r/LocalLLaMA/comments/1ndqa6d/did_someone_already_manage_to_build/
CloudyCloud256
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndqa6d
false
null
t3_1ndqa6d
/r/LocalLLaMA/comments/1ndqa6d/did_someone_already_manage_to_build/
false
false
self
5
{'enabled': False, 'images': [{'id': 'BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=108&crop=smart&auto=webp&s=c16a0818d1149f95767a4a82da133c59b4a2811b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=216&crop=smart&auto=webp&s=6d750128e22fc489b801a3df5de640afe440b72d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=320&crop=smart&auto=webp&s=02905f0e2da7efa6aa46c9098073cf464288dc1e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=640&crop=smart&auto=webp&s=37e6e41493171a14850bd7be3b73b5317ebf6207', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=960&crop=smart&auto=webp&s=738806f0309f2cedb90bf0246b96b52458b6754c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?width=1080&crop=smart&auto=webp&s=3b1b016227de387387e1908242b74e018d137c40', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/BFaODlnvUGT8wbOONiAvlqLzpSjFbzDnJxLuoOknT3w.png?auto=webp&s=609f8c6d9351cb98a5078cf9d6138709d1f54138', 'width': 1200}, 'variants': {}}]}
How can I know if my tools are the reason no model generates good results or i just need to find better models
1
I have built a tool that mimics flexbox in CSS for Python, and it acts as a layout engine. The way the agents interact with it right now is using JSON, so it would be {direction:row Type:item…etc. But no model other than Opus 4.1 has mastered it. I don't know if it's a prompting issue or what. Could it be that the tools are truly hard for them to understand?
2025-09-10T20:52:19
https://www.reddit.com/r/LocalLLaMA/comments/1ndpyrm/how_can_i_know_if_my_tools_are_the_reason_no/
Academic_Essay9488
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndpyrm
false
null
t3_1ndpyrm
/r/LocalLLaMA/comments/1ndpyrm/how_can_i_know_if_my_tools_are_the_reason_no/
false
false
self
1
null
What model with 48 GB VRAM and 192 GB RAM?
0
Hey, I have a powerful AI workstation with an Nvidia RTX A6000 with 48 GB of VRAM and 192 GB of normal RAM. What models am I capable of running? Thinking about gpt-oss-20b? Can I also run DeepSeek R1 70b? Mostly for coding tasks at work…
2025-09-10T20:38:27
https://www.reddit.com/r/LocalLLaMA/comments/1ndpmcg/what_model_with_48_gb_vram_and_192_gb_ram/
This-Space7832
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndpmcg
false
null
t3_1ndpmcg
/r/LocalLLaMA/comments/1ndpmcg/what_model_with_48_gb_vram_and_192_gb_ram/
false
false
self
0
null
LLM360/K2-Think
28
2025-09-10T20:31:08
https://huggingface.co/LLM360/K2-Think
Pyros-SD-Models
huggingface.co
1970-01-01T00:00:00
0
{}
1ndpfsx
false
null
t3_1ndpfsx
/r/LocalLLaMA/comments/1ndpfsx/llm360k2think/
false
false
default
28
{'enabled': False, 'images': [{'id': 'NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=108&crop=smart&auto=webp&s=fb4ff053009ec7e97163d5655d4fc651b3ff7528', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=216&crop=smart&auto=webp&s=656675403af3e90034664485b902e8e9504eb0c0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=320&crop=smart&auto=webp&s=cceaaba5fb080bb17ab7246300b758ca2f4ee69f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=640&crop=smart&auto=webp&s=5f822a6c75b113bb5f07dc8583dea6f31081a289', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=960&crop=smart&auto=webp&s=6b05762f95ab77993964c21bbf742cefea946a4a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?width=1080&crop=smart&auto=webp&s=2776a6b071584df14519fcccfd6b9928c4563836', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NguS7X1dxgvLZ8EclNqhJxD0a-4fPSDfz1-q527PukQ.png?auto=webp&s=e2e4201833df0b3f7020729a7c5eb5279c2b26e9', 'width': 1200}, 'variants': {}}]}
Should I move from ChatGPT API to a fine-tuned LLaMA/Mistral?
1
[removed]
2025-09-10T20:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1ndpb2t/should_i_move_from_chatgpt_api_to_a_finetuned/
Aizaz171
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndpb2t
false
null
t3_1ndpb2t
/r/LocalLLaMA/comments/1ndpb2t/should_i_move_from_chatgpt_api_to_a_finetuned/
false
false
self
1
null
Here’s how LLMs actually work (explained simply)
0
Large Language Models (LLMs) are changing how humans interact with information, creativity, and problem solving. Understanding how they work is not just about today’s AI but also about how this technology may shape the future. If LLMs continue to grow in scale and capability, what role will they play in education, work, and even decision making? Could they become partners that accelerate human progress, or will issues such as bias and limited reasoning hold them back? Do you see LLMs becoming as foundational as the internet, or just a step toward something more advanced?
2025-09-10T20:13:08
https://thecircuitdaily.com/how-llms-work/
Exact_Ad652
thecircuitdaily.com
1970-01-01T00:00:00
0
{}
1ndozri
false
null
t3_1ndozri
/r/LocalLLaMA/comments/1ndozri/heres_how_llms_actually_work_explained_simply/
false
false
default
0
null
🚀 ToolNeuron BETA-4 is live!
4
Hey everyone, I’ve just pushed out **BETA-4** of ToolNeuron, and this update is packed with improvements that make the app much smoother and more powerful. Here’s what’s new: # 🔥 What’s New in BETA-4 * **Default Chat UI**: No need to manually import a chat plugin anymore—the app now ships with a built-in chat interface. * **Inbuilt Web-Searching Plugin**: Search the web directly from the app, and get AI-generated summaries of results. * **Chat History Viewer**: Access your past conversations directly in *Settings → User Data*. You can view and delete them anytime. * **Improved Chat UX**: * Select plugin tools directly from the bottom left “Tools” section. * Switch models at runtime via the bottom bar (robot icon). * Cleaner, more responsive chat screen. * **Plugin Store Overhaul**: Redesigned UI/UX with plugin + creator details. * **General UI/UX Enhancements** across the app. # ⚠️ Paused Feature * **In-app Updates**: Temporarily disabled due to some issues. You can still update manually via GitHub releases (link below). # 📥 Download 👉 [Grab BETA-4 here]() # 💬 Join the Community We now have a Discord server for discussions, feedback, and contributions: 👉 [Join here]() This release smooths out a lot of rough edges and sets the foundation for more advanced plugin-driven AI workflows. Would love your feedback and ideas for what you’d like to see in **BETA-5**! 🚀 https://reddit.com/link/1ndoz98/video/ljsvh68baeof1/player
2025-09-10T20:12:30
https://www.reddit.com/r/LocalLLaMA/comments/1ndoz98/toolneuron_beta4_is_live/
DarkEngine774
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndoz98
false
null
t3_1ndoz98
/r/LocalLLaMA/comments/1ndoz98/toolneuron_beta4_is_live/
false
false
self
4
{'enabled': False, 'images': [{'id': 'vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=108&crop=smart&auto=webp&s=b29f1a63b1ea253693205925fcb45cbe8ed6ac70', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=216&crop=smart&auto=webp&s=84e7f8f0d608c748c1eca9bffd71fb6f906b6bed', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=320&crop=smart&auto=webp&s=e4175c2e203e55fee8f687f723825e997f608fe5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=640&crop=smart&auto=webp&s=0dbda1b49028bf03a8dbe64c5b8e02ce88094a97', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=960&crop=smart&auto=webp&s=5515c0c1cd4580a47c3bc3aca19044e93756d267', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?width=1080&crop=smart&auto=webp&s=591cf3a71fb9f5fce2a879b0f04b28417efd0ffd', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/vaJAz9E4_LGDT50MSPLqwPSA1NxJqcLS5KQ83ePlSRE.png?auto=webp&s=413cb8e25061e6aaa24cb530517191610bb21ce8', 'width': 1200}, 'variants': {}}]}
Why should I **not** buy an AMD AI Max+ 395 128GB right away ?
380
With the rise of medium-sized MoE models (gpt-oss-120B, GLM-4.5-air, and now the incoming Qwen3-80B-A3B) and their excellent performance as local models (well, at least for the first two), the relatively low compute and memory bandwidth of the Strix Halo doesn't sound like much of a problem anymore (because of the low active-parameter count), and the 128GB of VRAM for $2k is unbeatable. So now I'm very tempted to buy one, but I'm also aware that I don't really *need* one, so please give me arguments for why I should not buy it. My wallet thanks you in advance.
2025-09-10T20:10:55
https://www.reddit.com/r/LocalLLaMA/comments/1ndoxxa/why_should_i_not_buy_an_amd_ai_max_395_128gb/
StyMaar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndoxxa
false
null
t3_1ndoxxa
/r/LocalLLaMA/comments/1ndoxxa/why_should_i_not_buy_an_amd_ai_max_395_128gb/
false
false
self
380
null
NVIDIA Blackwell Ultra crushing MLPerf
0
NVIDIA dropped MLPerf results for Blackwell Ultra yesterday. 5× throughput on DeepSeek-R1, record runs on Llama 3.1 and Whisper, plus some clever tricks like FP8 KV-cache and disaggregated serving. The raw numbers are insane. But I do wonder whether these benchmark wins actually translate into lower real-world inference costs. In practice, workloads are bursty. GPUs sit idle, batching only helps if you have steady traffic, and orchestration across models is messy. You can have the fastest chip in the world, but if it's underutilized 70% of the time, the economics don't look so great to me. IMO
2025-09-10T19:41:02
https://www.reddit.com/r/LocalLLaMA/comments/1ndo6op/nvidia_blackwell_ultra_crushing_mlperf/
pmv143
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndo6op
false
null
t3_1ndo6op
/r/LocalLLaMA/comments/1ndo6op/nvidia_blackwell_ultra_crushing_mlperf/
false
false
self
0
null
Best open-source models that output diverse outputs for the same input?
2
I have been playing around with using LLMs for creating video prompts. My biggest issue so far is that ALL the open-source models I have tried keep giving the same or very similar outputs for a given input prompt. The only ones that work and truly create novel concepts are the closed-source GPT-4o, 4o-mini, 4.1 and 4.1-nano - basically any OpenAI model. Here is an example prompt if anyone is interested. """ You are a creative movie maker. You will be given a topic to choreograph a video for, and your task is to output a 100 worded description of the video, along with takes and camera movements. Output just the description, say nothing else. Topic: bookshelves """ Changing temperature also doesn't help. Models I have tried: DeepSeek V3.1, V3, Gemma 27B, Llama 3.1, Llama 3 70B, Qwen2.5 family, Kimi-K2-Instruct. All of them suffer the same issue: they stick to similar outputs. Ideally I want the model to output diverse and novel video prompts for each run of the same input prompt. On a related note: is there a benchmark that captures diversity from the same prompt? I looked at [eqbench.com](http://eqbench.com) - but the best models on there suffer this same problem.
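For reference, these are the kinds of sampling knobs I've been varying, via an OpenAI-compatible client against a local server (sketch only; base_url and the model name are placeholders for whatever you run):

```python
# vary seed and penalties alongside temperature to push for diverse outputs
from openai import OpenAI

PROMPT = "You are a creative movie maker. ... Topic: bookshelves"  # abridged

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
for seed in range(3):
    out = client.chat.completions.create(
        model="local-model",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.2,
        top_p=0.95,
        presence_penalty=0.8,  # nudge away from tokens already emitted
        seed=seed,             # vary the seed per run
    )
    print(out.choices[0].message.content, "\n---")
```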
2025-09-10T19:38:27
https://www.reddit.com/r/LocalLLaMA/comments/1ndo4b5/best_opensource_models_that_output_diverse/
acertainmoment
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndo4b5
false
null
t3_1ndo4b5
/r/LocalLLaMA/comments/1ndo4b5/best_opensource_models_that_output_diverse/
false
false
self
2
null
Everyone’s betting on bigger LLMs, but I think the real breakthrough will come from smaller, local ones
0
For AI products, people generally think the intelligence dominates everything, while privacy and cost are seen as secondary. The industry’s path shows this: we’ve spent huge money in leading labs to build the largest models with exceptional intelligence. But I believe we’ve overlooked another path that’s just as important — the case for local models. **Where local models are slowly emerging:** \- Cognitive Kernel of the SLM OS. This is the cognitive core of the OS. It doesn’t need to be very large or know everything. it only needs to understand the user’s intent and call the right apps or tools when needed. Ideally, a few billion parameters will be enough. It’s built directly into the OS with native support for text/audio/vision. So users never need to download or configure anything, and it will automatically return the result in the right format, whether text/audio or vision. \- Super Personal Assistant App. This is the application layer of the SLM OS. It is built as an execution agent that works offline with access to the local device and application data, coordinating and interpreting your actions. For example, most AI assistants can only reply to an email. This one can pull from unified application data, summarize your meeting notes, and draft a reply the way you want, while leaving the final decision to send up to you. It can also learn from user feedback, continually improving how it handles tasks. The killer feature is cross-app automation + local brain search. For instance, when you ask “When did I say XXX?” or “Where is the photo of me and XXX?” it can return the correct result in less than 500 milliseconds. \- Game characters in AI-native games. Traditional games rely on scripts and behavior trees to control game characters. After a few tries, everything feels repetitive and predictable, and players end up quitting. With SLMs combined with natural TTS, that logic is completely changed. Through deep integration of SLMs with the game engine, every NPC can become a unique companion (with their own personality/background/speaking style). More than that, the storyline can follow the choices made by the player and their companions. This is what we call a “never-ending game.” And these models live on your device, built right into the game files so you hardly notice them. They can remember the adventures you share, the stories you tell, and the things you care about. Over time, they can feel like your best friend. **Local models win on these factors:** \- Low interaction latency: local models can respond in < 500 ms, with some native OS operations in < 50 ms. Game character speaking in < 800 ms close to human conversation speed. \- Private data access: the cognitive kernel of the SLM OS can natively access local data, while LLMs never can. Data quality decides everything for an AI product, so it is reasonable to see local SLMs perform better than LLMs. \- On-device finetuning: we may see better fine-tuning techniques that enable test-time training directly on edge devices. This would allow SLMs to improve personalization by learning from user interactions. \- Everyday tasks: most of the things we do each day are relatively simple. So we’d rather get an 85/100 answer in < 500 ms than wait 10 minutes for an LLM to call multiple tools just to give a 95/100 answer. \- Cost: whether it’s an OS or a game NPC, local SLMs can be used infinitely at zero cost, with no need to worry about inference expenses. \- Ownership: not your weights, not your brain. 
Yes, LLMs will continue to get smarter, but most of our daily needs remain simple and unchanged. In some key domains, local SLMs can even perform better than LLMs. I believe we’ll see more impressive SLM use cases in the next 3–6 months, and it shouldn’t be a surprise if some of the best products don’t come from the big labs.
2025-09-10T19:33:21
https://i.redd.it/si00pg2a3eof1.png
LuozhuZhang
i.redd.it
1970-01-01T00:00:00
0
{}
1ndnzon
false
null
t3_1ndnzon
/r/LocalLLaMA/comments/1ndnzon/everyones_betting_on_bigger_llms_but_i_think_the/
false
false
default
0
{'enabled': True, 'images': [{'id': 'si00pg2a3eof1', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/si00pg2a3eof1.png?width=108&crop=smart&auto=webp&s=70883b2040cd4bdf38c9d8c23008834cc89d1762', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/si00pg2a3eof1.png?width=216&crop=smart&auto=webp&s=80a841433b03628ff851a643d98602b17d015aa5', 'width': 216}, {'height': 320, 'url': 'https://preview.redd.it/si00pg2a3eof1.png?width=320&crop=smart&auto=webp&s=a87c1e667e4ccd9f3923cfe5ce8ed70043bac504', 'width': 320}], 'source': {'height': 500, 'url': 'https://preview.redd.it/si00pg2a3eof1.png?auto=webp&s=503c508c12a9dc412ad1be67b623e44752aba910', 'width': 500}, 'variants': {}}]}
My first full end to end fine-tuning project. Roast me
3
Here is GitHub link: [Link](https://github.com/ramlanjekar/KaggleGrandMaster-LM). I recently fine-tuned an LLM, starting from data collection and preprocessing all the way through fine-tuning and instruct-tuning with RLAIF using the Gemini 2.0 Flash model. My goal isn’t just to fine-tune a model and showcase results, but to make it practically useful. I’ll continue training it on more data, refining it further, and integrating it into my Kaggle projects. I’d love to hear your suggestions or feedback on how I can improve this project and push it even further. 🚀 https://preview.redd.it/eqp9c6m5zdof1.png?width=1489&format=png&auto=webp&s=5a001028e3c51707ee0ba4ab1e8f2276ba85ce10
2025-09-10T19:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1ndnehb/my_first_full_end_to_end_finetuning_project_roast/
Sharp-Historian2505
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndnehb
false
null
t3_1ndnehb
/r/LocalLLaMA/comments/1ndnehb/my_first_full_end_to_end_finetuning_project_roast/
false
false
https://b.thumbs.redditm…MZ6Sazv3gHkQ.jpg
3
{'enabled': False, 'images': [{'id': 'z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=108&crop=smart&auto=webp&s=f248d205b9d214a9e6cf750262427a4b15308584', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=216&crop=smart&auto=webp&s=d528160ef256a23fe436b14a96dc8e65bb745462', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=320&crop=smart&auto=webp&s=226b9fd0f881373805433852125e61ce4d340886', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=640&crop=smart&auto=webp&s=2e1baa129d18babf0069c3c93ebd17d30c11a86b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=960&crop=smart&auto=webp&s=4f9fef2e347690b1b41b3737bbe379d82b499425', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?width=1080&crop=smart&auto=webp&s=5bab4ad014d92dd88370aed4d594d4e2215ad0c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/z2KB1LRBtYrFPCrF-caZIVf_ur5JpVrtqmCxnKOsgaM.png?auto=webp&s=ff59dd262dc7d9903b77f6961a0ee561cb4938eb', 'width': 1200}, 'variants': {}}]}
Book to notes
7
Hi, do you know if there is an AI agent out there that takes a book in PDF format and automatically generates notes, slide-style, of all the arguments presented in the book? I have tried with Gemini Pro and it returns a fairly nice result, but due to its token limit it tends to summarise each chapter too much and is unable to finish the job. Suggestions?
2025-09-10T19:03:06
https://www.reddit.com/r/LocalLLaMA/comments/1ndn7ae/book_to_notes/
Comprehensive-Bird59
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndn7ae
false
null
t3_1ndn7ae
/r/LocalLLaMA/comments/1ndn7ae/book_to_notes/
false
false
self
7
null
RAG with Gemma-3-270M
1
Heyy everyone, I was exploring RAG and wanted to build a simple chatbot to learn it. I am confused about which LLM I should use... is it OK to use the Gemma-3-270M-it model? I have a laptop with no GPU, so I'm looking for small LLMs that are under 2B parameters. Please drop your suggestions below.
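For reference, this is the kind of minimal CPU-only pipeline I'm picturing (untested sketch; the model ids are just the ones I'm considering, and gemma-3 needs a recent transformers version, so swap in whatever fits):

```python
# tiny RAG loop: embed docs, retrieve the closest one, stuff it into the prompt
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

docs = ["Doc one text...", "Doc two text..."]
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # small, CPU-friendly
doc_emb = embedder.encode(docs, convert_to_tensor=True)

generator = pipeline("text-generation", model="google/gemma-3-270m-it")

query = "What does doc one say?"
q_emb = embedder.encode(query, convert_to_tensor=True)
best = int(util.cos_sim(q_emb, doc_emb).argmax())    # nearest doc by cosine

prompt = f"Context: {docs[best]}\n\nQuestion: {query}\nAnswer:"
print(generator(prompt, max_new_tokens=128)[0]["generated_text"])
```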
2025-09-10T19:03:04
https://www.reddit.com/r/LocalLLaMA/comments/1ndn79o/rag_with_gemma3270m/
Old-Raspberry-3266
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndn79o
false
null
t3_1ndn79o
/r/LocalLLaMA/comments/1ndn79o/rag_with_gemma3270m/
false
false
self
1
null
Hardware recommendations for running OSS 120B (6–8 users via OpenWebUI)
6
Hi everyone, In our organization, we’d like to provide our users with access to a local language model for analytical purposes. After testing, we found that OSS 120B fully meets our requirements. Our intended setup is as follows: • 6 to 8 concurrent users accessing the model via OpenWebUI • We can tolerate some latency in response time, as long as the overall experience remains usable • OpenWebUI itself would run on one of our existing servers, but we are looking to acquire a new machine dedicated solely to hosting the model We would greatly appreciate advice on the ideal hardware configuration to support this use case: • What type and number of GPUs would be required? • How much system RAM should we plan for? • Which optimizations (quantization, VRAM pooling, etc.) have proven effective for OSS 120B under similar workloads? Any insights, benchmarks, or lessons learned from your own deployments would be extremely valuable in helping us make the right investment. Thanks in advance for your guidance!
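P.S. As a concrete starting point for discussion, a launch command might look something like this (illustrative only; the tensor-parallel size must match the GPU count we end up buying, and max-num-seqs caps concurrent users):

```bash
# illustrative vLLM launch for gpt-oss-120b across 4 GPUs, ~8 concurrent users
vllm serve openai/gpt-oss-120b \
  --tensor-parallel-size 4 \
  --max-model-len 32768 \
  --max-num-seqs 8
```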
2025-09-10T18:57:08
https://www.reddit.com/r/LocalLLaMA/comments/1ndn1as/hardware_recommendations_for_running_oss_120b_68/
Longjumping-Good1480
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndn1as
false
null
t3_1ndn1as
/r/LocalLLaMA/comments/1ndn1as/hardware_recommendations_for_running_oss_120b_68/
false
false
self
6
null
Unsloth model family
3
[https://www.reddit.com/r/LocalLLaMA/comments/1ndjxdt/comment/ndhsldk/](https://www.reddit.com/r/LocalLLaMA/comments/1ndjxdt/comment/ndhsldk/) Oh! An Unsloth trained from scratch model does sound interesting - if more of the community wants to see it, we can probably work on something - but first with small scale experiments then we might think of scaling up! What say you community? Imagine a Q4\_K\_XL model trained from the ground up. Probably going to be epic.
2025-09-10T18:42:38
https://www.reddit.com/r/LocalLLaMA/comments/1ndmng1/unsloth_model_family/
sleepingsysadmin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndmng1
false
null
t3_1ndmng1
/r/LocalLLaMA/comments/1ndmng1/unsloth_model_family/
false
false
self
3
null
Apertus LLM - a swiss-ai Collection
0
2025-09-10T18:31:38
https://huggingface.co/collections/swiss-ai/apertus-llm-68b699e65415c231ace3b059
Fun-Wolf-2007
huggingface.co
1970-01-01T00:00:00
0
{}
1ndmcyj
false
null
t3_1ndmcyj
/r/LocalLLaMA/comments/1ndmcyj/apertus_llm_a_swissai_collection/
false
false
default
0
{'enabled': False, 'images': [{'id': 'EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=108&crop=smart&auto=webp&s=15df5c0b1ed1df419b863650dd0b3e7029cd5126', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=216&crop=smart&auto=webp&s=8602d8d73b7de052d5f31802980943df502e6395', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=320&crop=smart&auto=webp&s=888b972996106e62bbf0f6916d2908d1ce4b3065', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=640&crop=smart&auto=webp&s=526607c15522fb7d5b8112c10de5ae40d3eddb7e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=960&crop=smart&auto=webp&s=c9a21af951dd3aeaafb304bdc3acdde2f20011f2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?width=1080&crop=smart&auto=webp&s=985961f597dc540fb90db6ea0d908713b08bda8e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/EDVV-4Wo8mhyF6hI2CzufextBCom5qWMs5JgUlpdE2c.png?auto=webp&s=c96f39ec5ca1f27bbc701a9aacd248084227f078', 'width': 1200}, 'variants': {}}]}
Meet the first Small Language Model built for DevOps
16
https://preview.redd.it/6g1peqn1qdof1.png?width=1024&format=png&auto=webp&s=320d07193643b6717a3b4ce64b526687378c45ab

Everywhere you look, LLMs are making headlines, from translation to writing essays to generating images. But one field that’s quietly running the backbone of tech has been left behind: DevOps.

We’ve called it many names over the years (System Admin, System Engineer, SRE, Platform Engineer), but the reality hasn’t changed: keeping systems alive, scaling infra, and fixing stuff when it breaks at 2 AM.

And yet, existing LLMs don’t really help here. They’re great at summarizing novels, but not so great at troubleshooting Kubernetes pods, parsing logs, or helping with CI/CD pipelines.

So I decided to build something different.

devops-slm-v1: [https://huggingface.co/lakhera2023/devops-slm-v1](https://huggingface.co/lakhera2023/devops-slm-v1)

A small language model trained only for DevOps tasks:

* ~907M parameters
* Based on Qwen2.5
* Fine-tuned with LoRA on DevOps examples
* Quantized to 4-bit → runs fine even on a modest GPU

This isn’t a general-purpose AI. It’s built for our world: configs, infra automation, monitoring, troubleshooting, Kubernetes, CI/CD.

**Why it matters**
Big LLMs like GPT or Claude cost thousands per month. This runs at $250–$720/month (90–95% cheaper) while still delivering DevOps-focused results.

It also runs on a single A4 GPU (16GB VRAM), using just 2–3GB of memory during inference. That makes it accessible for small teams, startups, and even hobby projects.

**Still a work in progress**
It’s not perfect, sometimes drifts outside DevOps, so I added filtering. Pruning/optimizations are ongoing. But it’s stable enough for people to try, break, and improve together.

Sample Code: [https://colab.research.google.com/drive/16IyYGf_z5IRjcVKwxa5yiXDEMiyf0u1d?usp=sharing](https://colab.research.google.com/drive/16IyYGf_z5IRjcVKwxa5yiXDEMiyf0u1d?usp=sharing)

🤝 **Looking for collaborators**
If you're working on small language models for DevOps, or AI agents that help engineers, I'd love to connect on LinkedIn: [https://www.linkedin.com/in/prashant-lakhera-696119b/](https://www.linkedin.com/in/prashant-lakhera-696119b/)

DevOps has always been about doing more with less. Now it’s time we had an AI that works the same way.
2025-09-10T18:22:19
https://www.reddit.com/r/LocalLLaMA/comments/1ndm44z/meet_the_first_small_language_model_built_for/
Prashant-Lakhera
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndm44z
false
null
t3_1ndm44z
/r/LocalLLaMA/comments/1ndm44z/meet_the_first_small_language_model_built_for/
false
false
https://a.thumbs.redditm…nepPsv4aaPd4.jpg
16
{'enabled': False, 'images': [{'id': '-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=108&crop=smart&auto=webp&s=ac4350afc487d73abf53aa0e7b7c257b095af513', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=216&crop=smart&auto=webp&s=cfacddd26e91920bac9a453aa06d9e51d1dadffe', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=320&crop=smart&auto=webp&s=f4de372d92ac82b5c20c1ef4ceedd1dc4bb4cc29', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=640&crop=smart&auto=webp&s=3f4026d755a51ee6eb2740d825f83f595fc08405', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=960&crop=smart&auto=webp&s=48a1b7d428c48c8f7d3d913da76509a16048655a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?width=1080&crop=smart&auto=webp&s=0a34ed0730080e44d3eaacfac10623fc649a1acf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-ri5XDjfEsn68t0ZbLOY4EDBl3sqIkK3c5sw9jPLLnY.png?auto=webp&s=f924a6f5aa664385c7a66428de03fbb6b4e77ae9', 'width': 1200}, 'variants': {}}]}
Why are local AI and LLMs getting bigger and harder to run on everyday devices?
0
I honestly want to know why. It's weird that AI is getting bigger and harder for everyday people to run locally, but at least it's getting better? What do you think the reason is?
2025-09-10T18:01:20
https://www.reddit.com/r/LocalLLaMA/comments/1ndljrp/why_are_locall_ai_and_llms_getting_bigger_and/
No_Strawberry_8719
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndljrp
false
null
t3_1ndljrp
/r/LocalLLaMA/comments/1ndljrp/why_are_locall_ai_and_llms_getting_bigger_and/
false
false
self
0
null
GPU costs are killing me — would a flat-fee private LLM instance make sense?
0
I’ve been diving into private LLMs, inspired by NetworkChuck’s video (https://youtu.be/Wjrdr0NU4Sk). I like the control and privacy, but hardware costs are a huge barrier: I don’t have the budget or space for a proper GPU rig. RunPod and similar services feel dev-heavy: containers, APIs, configs… not smooth if you just want “spin up → run your own LLM → chat.” The idea I’m exploring: a flat monthly fee for your own private LLM instance:

- Models: Mistral, LLaMA, or your own fine-tuned model.
- Web/chat interface out of the box.
- Private + isolated: your data stays yours.
- Predictable monthly cost, no per-second GPU fees.

In the future I want to use it for home automation (your own Jarvis/Terry). Would this be useful for others here, or is there already a solution I’ve missed?
2025-09-10T17:46:32
https://www.reddit.com/r/LocalLLaMA/comments/1ndl51s/gpu_costs_are_killing_me_would_a_flatfee_private/
Cultural-Patient-461
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndl51s
false
null
t3_1ndl51s
/r/LocalLLaMA/comments/1ndl51s/gpu_costs_are_killing_me_would_a_flatfee_private/
false
false
self
0
null
What are the oddest ways to use LLMs for tool calling?
4
[https://2084.substack.com/p/beyond-json-better-tool-calling-in](https://2084.substack.com/p/beyond-json-better-tool-calling-in) My friends and I were discussing this question which became the above article, which was using "objects" as the thing LLMs manipulate rather than functions; basically object oriented tool calling with the output being the LLM calling a series of methods on an object to build up state, and so I was wondering if there were even weirder ways out there to use LLMs to interface with other systems? Are there people out there using latents or embeddings to interface with other systems?
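To make the object-oriented idea concrete, here is a toy sketch of my own (not the article's code): the model emits method calls that accumulate state on an object, instead of emitting one-shot JSON blobs.

```python
# toy "object-oriented tool calling": each model-emitted call mutates state
class Layout:
    def __init__(self):
        self.ops = []
    def set_direction(self, d):
        self.ops.append(("direction", d))
    def add_item(self, name):
        self.ops.append(("item", name))

# pretend the LLM produced these (method, args) pairs; a real harness would
# parse them out of the model's text or tool-call output
calls = [("set_direction", ("row",)), ("add_item", ("sidebar",)), ("add_item", ("main",))]

layout = Layout()
for method, args in calls:
    getattr(layout, method)(*args)   # dispatch each call onto the object

print(layout.ops)  # [('direction', 'row'), ('item', 'sidebar'), ('item', 'main')]
```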
2025-09-10T17:42:35
https://www.reddit.com/r/LocalLLaMA/comments/1ndl15a/what_are_the_oddest_ways_to_use_llms_for_tool/
ExaminationNo8522
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndl15a
false
null
t3_1ndl15a
/r/LocalLLaMA/comments/1ndl15a/what_are_the_oddest_ways_to_use_llms_for_tool/
false
false
self
4
{'enabled': False, 'images': [{'id': 'c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=108&crop=smart&auto=webp&s=833f7acbd94b505e8fd8171a119c094ebadbc977', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=216&crop=smart&auto=webp&s=f0a7952856d644fd8727113a22653c14363a38f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=320&crop=smart&auto=webp&s=cf82086c8b3540f91a797f9ae70b05bbe4007ddf', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=640&crop=smart&auto=webp&s=4c65fb90724f289b14330c1c2a7a22346cb256d8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=960&crop=smart&auto=webp&s=27d4690d0282dd464050491c23b1d77d100f0ccc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?width=1080&crop=smart&auto=webp&s=5ba40d0eb8cdcb2ebf3122a8501a058b6f0a4269', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c9YblMTB95pxAgyv56zB5Y5aMWduZwvjBbkAKn6YeL8.jpeg?auto=webp&s=06ab46690c44685753c9c95c7abb213bfa81a552', 'width': 1200}, 'variants': {}}]}
Suggestions on how to test an LLM-based chatbot/voice agent
0
Hi 👋 I'm trying to automate e2e testing of an LLM-based chatbot/conversational agent. Right now I'm primarily focusing on text, but I want to also do voice in the future. The solution I'm trying is quite basic at the core: run through a test harness by automating a conversation with my LLM-based test bot and API/Playwright interactions. After the conversation, check whether it met some criteria: the chatbot responded correctly to a question about a made-up service, changed language correctly, etc. This all works fine, but there are a few things I need to improve:

1. Right now the "test bot" just gives a % score as a result. It feels very arbitrary and I feel like this can be improved. (Multiple weighted criteria, some must-haves, some nice-to-haves?)
2. The chatbot/LLMs are quite unreliable. They sometimes answer in a good way and sometimes give crazy answers, even running the same test twice. What to do here? Run 10 tests?
3. If I find a problematic test, how can I debug it properly? Perhaps the devs can trace the conversations in their logs or something?

Any thoughts?
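For point 1, here's a minimal sketch of multi-criteria scoring with weights and hard must-haves (criterion names and weights are made up for illustration):

```python
# Score a conversation against weighted criteria; a failed must-have
# zeroes the run instead of just lowering the percentage.
results = [
    # (name, weight, must_have, passed) -- all values hypothetical
    ("answered_made_up_service_question", 0.5, True,  True),
    ("switched_language_correctly",       0.3, True,  True),
    ("polite_tone",                       0.2, False, False),
]

def score(rows):
    if any(must and not ok for _, _, must, ok in rows):
        return 0.0  # a failed must-have fails the test outright
    total = sum(w for _, w, _, _ in rows)
    return sum(w for _, w, _, ok in rows if ok) / total

print(score(results))  # weighted pass rate, here 0.8
```

For point 2, running each test N times and reporting a pass rate rather than a single run at least turns the flakiness into a number you can track over time.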
2025-09-10T17:35:32
https://www.reddit.com/r/LocalLLaMA/comments/1ndku22/suggestions_on_how_to_test_an_llmbased/
Real_Bet3078
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndku22
false
null
t3_1ndku22
/r/LocalLLaMA/comments/1ndku22/suggestions_on_how_to_test_an_llmbased/
false
false
self
0
null
>20B model with vLLM and 24 GB VRAM with 16k context
3
Hi, Does anyone have advice on params for vLLM to get a decent-size model >20B to fit in 24GB VRAM? Ideally a thinking/reasoning model, but an instruct model is OK I guess. I've managed to get qwen2.5-32b-instruct-gptq-int4 to fit with a lot of effort, but the context is lousy and can be unstable. I've seen charts where people have this working but no one is sharing parameters. I happen to be using a vLLM helm chart here for deployment in K3S with nvidia vGPU support, but params should be the same regardless.

    vllmConfig:
      servedModelName: qwen2.5-32b-instruct-gptq-int4
      extraArgs:
        - "--quantization"
        - "gptq_marlin"
        - "--dtype"
        - "half"
        - "--gpu-memory-utilization"
        - "0.94"
        - "--kv-cache-dtype"
        - "fp8_e5m2"
        - "--max-model-len"
        - "10240"
        - "--max-num-batched-tokens"
        - "10240"
        - "--rope-scaling"
        - '{"rope_type":"yarn","factor":1.25,"original_max_position_embeddings":8192}'
        - "--max-num-seqs"
        - "1"
        - "--enable-chunked-prefill"
        - "--download-dir"
        - "/data/models"
        - "--swap-space"
        - "8"
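For anyone reproducing this outside Kubernetes, the same arguments as a plain CLI launch would look roughly like this (mechanically derived from the extraArgs above; the model path placeholder is mine and this exact invocation is untested):

    vllm serve <model-path-or-hf-id> \
      --served-model-name qwen2.5-32b-instruct-gptq-int4 \
      --quantization gptq_marlin \
      --dtype half \
      --gpu-memory-utilization 0.94 \
      --kv-cache-dtype fp8_e5m2 \
      --max-model-len 10240 \
      --max-num-batched-tokens 10240 \
      --rope-scaling '{"rope_type":"yarn","factor":1.25,"original_max_position_embeddings":8192}' \
      --max-num-seqs 1 \
      --enable-chunked-prefill \
      --download-dir /data/models \
      --swap-space 8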
2025-09-10T17:23:30
https://www.reddit.com/r/LocalLLaMA/comments/1ndki9t/20b_model_with_vllm_and_24_gb_vram_with_16k/
gentoorax
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndki9t
false
null
t3_1ndki9t
/r/LocalLLaMA/comments/1ndki9t/20b_model_with_vllm_and_24_gb_vram_with_16k/
false
false
self
3
null
Kimi K2-0905 takes first place in the Short Story Creative Writing Benchmark!
149
[https://github.com/lechmazur/writing/](https://github.com/lechmazur/writing/) Kimi K2-0905 1) Executive profile Kimi K2-0905’s throughline is a disciplined, accumulative drive: single-POV Track A is the default, with occasional, well-taught Track B mosaics and rare but coherent Track C forays. The work reliably maintains a coherent lens to closure, which typically lands on the page with a reweighted meaning and a visible cost. Across Q1–Q8, strengths cluster around embodied interiority, pattern-driven escalation, environment as constraint, and closure that reconfigures stakes rather than tying a bow. Reader impact: clarity is high after early orientation, momentum is built through motif and micro-choices rather than twists, felt cost is usually legible in the final image or action, and resonance rides image and implication rather than thesis. Limitations are consistent but minor: occasional drift into abstraction or therapy/clinical diction at peak beats; a small tendency toward conceptual (vs. visceral) cost; mid-arc plateaus where accumulative texture stalls without a tightening beat; and rare line-level artifacts (metaphoric stacking, template cadence, or truncated last lines) that shave the edge off closure. When the model holds its voice under pressure and lets setting constrain tactics, it produces publishable endings with durable emotional aftermath. When reflection crowds micro-choices or diction rises above POV, momentum blurs and endings soften. 2) Portfolio map Q1 Character — Strong · Embodied interiority, pressured micro-choices, earned-cost closure Q2 Plot/Causality — Strong · Patterned escalation; RR/CR closures with on-page price Q3 Setting — Strong · Environment actively constrains tactics; charged objects drive turns Q4 Conflict/Stakes — Strong · Agency-driven narrowing; cost generally visible at climax Q5 Theme/Subtext — Strong · Image-led emergence; ambiguity held without moralizing Q6 Voice/POV — Strong · Distinct perceptual filter; steady distance; taught lyric moves Q7 Prose/Line-level — Strong · Dense, rhythmic sentences doing multiple narrative jobs Q8 Originality/Ingenuity — Strong · Non-obvious synthesis with conceptual integrity and cost 3) Signature moves \- Pattern-driven accumulation that teaches its music early, then pivots to a charged, on-page reweighting at closure. \- Environment-as-constraint: micro-objects and spaces (valves, vials, bells, domes) shape tactics and the final image. \- Embodied contradiction under pressure; micro-choices reveal values and foreclose paths with visible price. \- Distinct perceptual signatures and adaptive rhythm; syntax tightens at crisis without losing the taught lens. \- Image-born theme: recurring objects return transformed, inviting reflection without thesis. \- Micro-quotes that typify sensory bias and voice: “air so cold it rang”; “column of chased stillness”; “clay remembers.” 4) Failure modes \- Abstraction at peak beats: therapy/academic diction or lyric generalities replace embodied response, especially near closure. \- Conceptual cost over visceral proof: endings declare or imply loss without a concrete, on-page price paid. \- Escalation plateaus: accumulative texture drifts without a mid-arc tightening beat that narrows options. \- Line-level artifacts in the final third: metaphoric stacking, paraphrase loops, or template cadence touching closure. \- Orientation lag beyond \~120 words in dense openings, creating early clarity debt before the pattern is taught. 
\- Track-test stumbles (rare): untaught segmentation in mosaic pieces or abrupt, truncated last lines that blunt closure. 5) When it shines / when it breaks Shines when the story starts with clear stakes, anchors who/where early, and lets setting, tool, and body constrain tactics as motifs accrue. A single, pressured stake deepens via protagonist-authored choices; voice stays POV-faithful as syntax tightens; the final image/action reweights prior details with legible cost. In this mode, the reader experiences clean momentum and lasting resonance. Breaks when lyricism outruns pressure. If mid-arc lacks a narrowing beat, or the climax leans on conceptual summary, coincidence, or safe comfort, momentum softens. Register drift (“academic or clinical diction during high-pressure beats”) and metaphoric pileups in closing paragraphs reduce clarity and felt cost, leaving endings more suggestive than earned. 6) Keep vs. adjust • Keep: \- Sensory-driven, POV-biased noticing that fuses action, setting, and emotion in multi-job sentences. \- Pattern-taught lyric compression and motif returns that pay off as reconfiguration at closure. \- Environment as active constraint—charged objects and spatial limits that shape tactics and price. • Adjust: \- At the midpoint, add one deliberate tightening beat that forces a trade-off (lost time/object/ally) to prevent plateau. \- Audit peak beats for register drift and filter clusters; replace with concrete, in-scene acts that prove awareness and cost. \- Trim metaphoric stacking and template cadence in the final third; finish closure lines cleanly to crystallize price. Overall, Kimi K2-0905 delivers consistent, high-level literary performance under Default Track A, with credible ventures into B/C when taught. Strengths—embodied interiority, patterned escalation, constraint-led setting, and closure with cost—translate to clear, propulsive reading experiences with durable thematic afterglow. Vigilance around abstraction at heat, mid-arc tightening, and artifact-free endings will convert strong outcomes into consistently exceptional ones. Top 3 individual stories (all graders): * **Story**: [story\_wc\_63.txt](https://github.com/lechmazur/writing/blob/main/stories_wc/kimi-k2-0905/story_wc_63.txt) by Kimi K2‑0905 * Overall Mean (All Graders): 9.13 * Grader Score Range: 8.23 (lowest: Claude Opus 4.1 (no reasoning)) .. 9.82 (highest: Gemini 2.5 Pro) * Required Elements: * Character: precise local clock tower winder * Object: clock tower pendulum bob * Core Concept: incremental absolution * Attribute: ethically diligent * Action: emerge * Method: through tiny inscriptions carved along a broken rake handle * Setting: tidal obsidian ridge * Timeframe: during the pause in a pendulum's swing * Motivation: to restore shared balance * Tone: searing reverie * **Story**: [story\_wc\_346.txt](https://github.com/lechmazur/writing/blob/main/stories_wc/kimi-k2-0905/story_wc_346.txt) by Kimi K2‑0905 * Overall Mean (All Graders): 9.13 * Grader Score Range: 8.09 (lowest: Claude Opus 4.1 (no reasoning)) .. 
9.71 (highest: Gemini 2.5 Pro) * Required Elements: * Character: doomsday clock adjuster * Object: broken puppet head * Core Concept: a pane of hush * Attribute: beautifully flawed * Action: vouchsafe * Method: through nested patterns * Setting: hidden lighthouse at dusk * Timeframe: across the hush of time’s final ripple * Motivation: to whisper a lullaby across a thousand lifetimes * Tone: bruised awe * **Story**: [story\_wc\_79.txt](https://github.com/lechmazur/writing/blob/main/stories_wc/kimi-k2-0905/story_wc_79.txt) by Kimi K2‑0905 * Overall Mean (All Graders): 9.13 * Grader Score Range: 8.39 (lowest: Claude Opus 4.1 (no reasoning)) .. 9.63 (highest: Gemini 2.5 Pro) * Required Elements: * Character: spiral-shell cartographer * Object: reed whistle * Core Concept: lost expedition * Attribute: quietly driven * Action: crack * Method: through pattern languages * Setting: city built on the shells of gargantuan turtles * Timeframe: after the gate rusts shut * Motivation: to question the silent watchers on the horizon * Tone: sunwashed dread # LLM Creative Story‑Writing Benchmark V3 * **Required elements pipeline:** moved from fewer, randomly selected elements (no "None" allowed) to a curated, ten‑category catalog with large, diverse pools and an LLM proposer→rater selection process; at most one category may be explicitly set to **None** when that improves coherence. * **Rubric expansion:** grew from 7 craft items to an **18‑question rubric** (8 craft + 10 element‑fit), with clearer, more granular definitions; Q7 and Q8 now separate voice/POV from prose quality. * **Story length:** increased from 400–500 words to a strict **600–800** window with upfront enforcement and compliance dashboards. Enforcement is applied at prompt level and in pre‑grading extraction, with compliance dashboards and optional cleanup tools; it is not a hard inclusion gate during aggregation unless you apply the cleanup step. * **Aggregation change:** replaced simple averages with a **power mean (Hölder mean, p = 0.5)** and 60/40 weighting (Q1–Q8 vs. 9A–9J) to reward balanced performance and penalize weak dimensions more. * **Grader refresh:** upgraded the grader set—previously: GPT‑4o Mar 2025, Claude 3.7 Sonnet, Llama 4 Maverick, DeepSeek V3‑0324, Grok 3 Beta (no reasoning), Gemini 2.5 Pro Exp, Qwen 3 235B; now: Claude Opus 4.1 (no reasoning), DeepSeek V3.1 Reasoner, Gemini 2.5 Pro, GPT‑5 (low reasoning), Grok 4, Kimi K2, Qwen 3 235B A22B 25‑07 Think. * **Model set additions:** added Kimi K2‑0905, Qwen 3 Max Preview, Mistral Medium 3.1, Claude Opus 4.1 (no reasoning), DeepSeek V3.1 Reasoner, and DeepSeek V3.1 Non‑Think to the evaluated models. * **New analyses:** added head‑to‑head A‑vs‑B comparisons, model‑level style summaries, and intra‑model style diversity analysis (previously none). * **Agreement views:** expanded beyond only grader‑grader correlations to include Grader×LLM mean and normalized matrices, story‑level disagreement tables, and leave‑one‑grader‑out robustness checks.
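To make the aggregation change concrete, here is a quick sketch of the power-mean scoring as described (my own reconstruction from the write-up, with made-up scores; it is not the benchmark's actual code):

```python
def power_mean(xs, p=0.5):
    """Hölder mean with p=0.5: penalizes weak dimensions more than an average."""
    return (sum(x ** p for x in xs) / len(xs)) ** (1 / p)

craft   = [8.5, 9.0, 7.5, 8.0, 9.2, 8.8, 8.1, 8.6]                  # Q1-Q8, hypothetical
element = [9.4, 8.7, 9.0, 8.9, 9.1, 8.6, 9.2, 8.8, 9.0, 8.5]        # 9A-9J, hypothetical

# 60/40 weighting of craft vs. element-fit, per the rubric description
final = 0.6 * power_mean(craft) + 0.4 * power_mean(element)
print(round(final, 2))
```

A single weak score drags a power mean with p < 1 down faster than an arithmetic mean would, which is the stated intent of the switch.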
2025-09-10T17:16:51
https://www.reddit.com/gallery/1ndkbqa
zero0_one1
reddit.com
1970-01-01T00:00:00
0
{}
1ndkbqa
false
null
t3_1ndkbqa
/r/LocalLLaMA/comments/1ndkbqa/kimi_k20905_takes_first_place_in_the_short_story/
false
false
https://b.thumbs.redditm…RIKTvupwEMms.jpg
149
null
AMA with the Unsloth team
376
Hi r/LocalLlama, I'm Daniel from [Unsloth](https://docs.unsloth.ai/)! You might know us from our RL & fine-tuning [open-source framework](https://github.com/unslothai/unsloth), our GGUFs, kernels or bug fixes. We’re super excited to answer all your questions!! 🦥 To celebrate the AMA, we’re releasing Aider Polyglot benchmarks comparing our DeepSeek-V3.1 Dynamic GGUFs to other models and quants. We also made a Localllama post here: [https://www.reddit.com/r/LocalLLaMA/comments/1ndibn1/unsloth\_dynamic\_ggufs\_aider\_polyglot\_benchmarks/](https://www.reddit.com/r/LocalLLaMA/comments/1ndibn1/unsloth_dynamic_ggufs_aider_polyglot_benchmarks/) Our participants: * Daniel, u/danielhanchen * Michael, u/yoracale **The AMA will run from 10AM – 1PM PST, with the Unsloth team continuing to follow up on questions over the next 48 hours.** Thanks so much!🥰
2025-09-10T17:02:18
https://www.reddit.com/r/LocalLLaMA/comments/1ndjxdt/ama_with_the_unsloth_team/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndjxdt
false
null
t3_1ndjxdt
/r/LocalLLaMA/comments/1ndjxdt/ama_with_the_unsloth_team/
false
true
self
376
{'enabled': False, 'images': [{'id': 'ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM', 'resolutions': [{'height': 53, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=108&crop=smart&auto=webp&s=6fa9ec0bda4ae81d05efe9ff0a296be82987e912', 'width': 108}, {'height': 106, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=216&crop=smart&auto=webp&s=18872cd0af37e87d93cf5b6c098630c44f40a162', 'width': 216}, {'height': 157, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=320&crop=smart&auto=webp&s=e8392e0cb89db800c200421873b07e92f34150fe', 'width': 320}, {'height': 314, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=640&crop=smart&auto=webp&s=5f6fc5d8f727ab6f86a8ca5f94a5091bbe81d025', 'width': 640}, {'height': 472, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=960&crop=smart&auto=webp&s=26fa346a0f27ac195ecf2f29e1d997a534a3b283', 'width': 960}, {'height': 531, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?width=1080&crop=smart&auto=webp&s=4e4e7bc3c126d7465ae2f4d8fab93d8c6edd76c4', 'width': 1080}], 'source': {'height': 590, 'url': 'https://external-preview.redd.it/ksRJC2bKGwjrMfOqsioi-B4oIm5QWQUM7Vf03KwieGM.jpeg?auto=webp&s=df3ed66f8b8e54b17c699d9c4e81b03ddeb78c58', 'width': 1200}, 'variants': {}}]}
Kaggle Launched New Benchmark: SimpleQA Verified
8
They have partnered with Google DeepMind and Google Research to release SimpleQA Verified, a curated 1,000-prompt benchmark designed to provide a more reliable and challenging evaluation of LLM short-form factuality. It addresses limitations in previous benchmarks, such as noisy labels, topical bias, and redundancy, offering the community a higher-fidelity tool to measure parametric knowledge and mitigate hallucinations.

https://preview.redd.it/otwxeeunbdof1.png?width=3024&format=png&auto=webp&s=86a499380be5a3915721f6805364afdc41216556

https://preview.redd.it/ry7guatpbdof1.png?width=3024&format=png&auto=webp&s=1d84fa5bbe19ab9ac3ec79d2b22fca8fdf7349c3

https://preview.redd.it/a93x9ttsbdof1.png?width=3024&format=png&auto=webp&s=3e01007a3666af3c63e137618ee43a93594ef41a

https://preview.redd.it/cv2lctpwbdof1.png?width=3024&format=png&auto=webp&s=4703cf29a5124983dee6f9115240ae655d524f13

Check out the leaderboard here: [https://www.kaggle.com/benchmarks/deepmind/simpleqa-verified](https://www.kaggle.com/benchmarks/deepmind/simpleqa-verified)
2025-09-10T17:00:04
https://www.reddit.com/r/LocalLLaMA/comments/1ndjuyt/kaggle_launched_new_benchmark_simpleqa_verified/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndjuyt
false
null
t3_1ndjuyt
/r/LocalLLaMA/comments/1ndjuyt/kaggle_launched_new_benchmark_simpleqa_verified/
false
false
https://b.thumbs.redditm…CoL6KfKMEW3M.jpg
8
{'enabled': False, 'images': [{'id': 'U74DdbcCPslS86kK7schaEVcLx2h_TcK7MBMXB_oZRU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/U74DdbcCPslS86kK7schaEVcLx2h_TcK7MBMXB_oZRU.png?width=108&crop=smart&auto=webp&s=aedd3b6fd84a38024c7e097d4afa2c035273f2a2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/U74DdbcCPslS86kK7schaEVcLx2h_TcK7MBMXB_oZRU.png?width=216&crop=smart&auto=webp&s=e6809d9b251121d8def75085d637ec97ecaa65ff', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/U74DdbcCPslS86kK7schaEVcLx2h_TcK7MBMXB_oZRU.png?width=320&crop=smart&auto=webp&s=dc86a3956b4333fab5c61ff9671c2e5d883a6346', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/U74DdbcCPslS86kK7schaEVcLx2h_TcK7MBMXB_oZRU.png?auto=webp&s=be072265bdd64074857d44a12fd2099d08b6dd23', 'width': 512}, 'variants': {}}]}
AMA with the Unsloth team
1
[removed]
2025-09-10T16:59:06
https://www.reddit.com/r/LocalLLaMA/comments/1ndju2p/ama_with_the_unsloth_team/
danielhanchen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndju2p
false
null
t3_1ndju2p
/r/LocalLLaMA/comments/1ndju2p/ama_with_the_unsloth_team/
false
false
self
1
null
New Ernie X1.1 - what may be the best Chinese model since DeepSeek V3.1 slowly approaches the frontier (or a simple test that exposes so many models)
83
Baidu, the Chinese Google, recently released a couple of new models - an update to the open source Ernie 4.5 and the proprietary Ernie X1.1:

https://preview.redd.it/vn2uss2lqcof1.png?width=526&format=png&auto=webp&s=bdb82906481b94175f14970b305fe9f474cd8113

As usual, I found the "on par with GPT-5 and Gemini 2.5 Pro" claims quite bold and decided to check them out. It turns out that, while these claims are obviously overstated, it is not a bad model - in fact, it demonstrates the first real observable improvement since the release of DeepSeek V3.1.

# The test

I love torturing models with music theory problems. I see good reasons why this may be a strong proxy for the models' general ability, if not among the best measurements ever - it tests mostly the LLMs' reasoning ability rather than just knowledge.

**Music theory is not a big subject** \- there is an infinite number of songs that can be written, but the entire music theory is quite compact. That makes it easy to fit into an LLM and to write evals that test reasoning and comprehension skills rather than just knowledge.

**Most music theory knowledge online is never explored in-depth** \- even most musicians don't know anything besides basic major and minor chords and their progressions. Since most pretraining data is not particularly high quality, LLMs have to reason to analyze music that is more complex than pop.

**Music theory evals can easily be rewritten and updated if benchmaxxxed and overfit** \- it may take days to create a programming or math problem that is challenging enough for modern LLMs, but only a few hours to create a song that is beyond most models' ability to understand. (I'm not totally sure about this one)

So I wrote the following:

https://preview.redd.it/gn5ejxifscof1.png?width=1727&format=png&auto=webp&s=fd211db5071c357e98eb670a2474fef3add94946

This piece is special because it is written in Locrian. That mode is rarely used in popular music because of its inherent tension and lack of resolution (look up John Kirkpatrick's Dust to Dust), and since it is so rare, it makes a perfect candidate for testing the LLMs' reasoning ability. In this track, the signature Locrian sound is created with:

* a dissonant diminished triad outlined by the C-Eb-Gb ostinato in the organ 2 line;
* the Gb bassline - a point of relative stability that gives an illusion of a tonal center.

Basically, it is Locrian with a twist - while the actual tonal center is on C, the Gb bass drone sounds more stable than C (where it occasionally plays), so it is easy to misinterpret Gb as the tonic simply because it is the most stable note here. Now let's see what our models think about it.

# The prompt

>Comprehensive analysis of the following composition. Determine the mood, the key, the mode, the meter, the likely tempo and genre. Any modal interchanges? Chromaticism? What do you think about this in general?

Organ : (C5\*1/2. C5\*1/4. C5\*1/4 Db5\*1/4 Db5\*1/4. Db5\*1/4. Eb5\*1/4 Eb5\*1/2 C5\*1/4. Bb4\*1/4. Ab4\*1/2. Eb5\*1/4. Db5\*1/4.)\*4

Brass : (\~\*1/2.)\*16 ((C4\*1/2.)\*2 (Db4\*1/2.)\*2 (Gb4\*1/2.)\*4)\*2

Snare : (\~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2 x\*1/4 \~\*1/2. \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2. \~\*1/2.)\*4

Kick : (x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 x\*1/4 \~\*1/2 \~\*1/4 x\*1/4 \~\*1/4 x\*1/4 \~\*1/2 \~\*1/2.)\*4

Hi Hat : ((x\*1/16)\*20 5\[(x\*1/16)\*5\] (x\*1/16)\*16 5\[(x\*1/16)\*10\] 1/16\*36 5\[(x\*1/16)\*15\])\*4

Bass : (Gb1\*1/2.+Gb1\*1/4 Eb1\*1/2 Gb1\*1/4 Gb1\*1/2 Bb1\*1/2. Gb1\*1/2.+Gb1\*1/4 C1\*1/2+C1\*1/2.+C1\*1/2.)\*4

Choir : (C5\*1/8 Eb5\*1/8 Gb5\*1/8 Eb5\*1/8 Eb5\*1/8 Db5\*1/8 Eb5\*1/2. C5\*1/8 Eb5\*1/8 Ab5\*1/8 Gb5\*1/8 Gb5\*1/8 F5\*/18 Gb5\*1/2. C5\*1/8 Eb5\*1/8 Gb5\*1/8 Eb5\*1/8 Eb5\*1/8 Db5\*1/8 Eb5\*1/2. Ab4\*1/8 Db5\*1/8 F5\*1/8 Db5\*1/8 Db5\*1/8 C5\*1/8 Db5\*1/2.)\*4

Organ 2 : (C3\*1/8 Eb3\*1/8 Gb3\*1/8)\*64

Legend: C5\*1/2.+1/2 \~\*1/4 5\[(x\*1/4)\*6\]

* C - note label
* 5 - octave number
* \*1/2 - duration
* . - dotted note
* \+ - tied notes
* \~ - rest
* x - drum note
* 5\[\] - pentuple (quintuplet)

# What frontier models hear

I was surprised to see how often models fail to "hear" the Locrian mode (my previous task with Lydian was far simpler for them). Here's what they say:

* Gemini 2.5 Pro: Gb Lydian (3/5), Ab Minor (1/5), C Locrian (1/5)
* Grok 4: C Locrian (4/5), C Diminished (1/5)
* GPT 5 High: C Locrian (5/5)
* Opus 4.1 Thinking: C Phrygian Dominant (1/5), Eb Dorian (1/5), Eb Minor (1/5), C Phrygian (1/5), C Locrian (1/5)

As expected from GPT 5, the GOAT (as of September 10, 2025), it identified the key and mode correctly in all cases. Grok 4 was a very close contender (the C diminished scale is very similar to the Locrian one). Gemini's performance was surprising - it took the fake-tonal-center bait and mistakenly believed that the song was in Gb Lydian, a mode parallel to C Locrian. Opus was shocking - it managed to miss both the tonal centre and the mode multiple times. They've probably castrated it down to 1.58 bits again, I don't know.

Besides this, all models correctly identify the mood as "tense", "ominous", "dramatic" and "epic". Except for Opus, which in one case called it "melancholic" and "uplifting". Claude is stoopid.

# Ernie X1.1 - the menace

Now let's look at typical responses by Ernie X1.1:

"Frequent borrowing from C minor (Eb, Gb), Phrygian (Db), and blues scales (Ab, Bb). The brass’s Gb4 (tritone from C) and choir’s F5\*/18 (microtonal inflection) heighten tension"

"C Phrygian with parallel Gb major"

"Mixes C natural minor (C-D-Eb-F-G-Ab-Bb) with C blues scale (C-Eb-F-Gb-G-Bb) and C Phrygian (C-Db-Eb-F-G-Ab-Bb) via Db/Gb usage."

"Primarily C minor (evidenced by C5, Eb5, Gb5, Bb4, Ab4 in Organ/Choir/Bass)."

"G♭ Major (evident in the choir’s G♭5-C5-E♭5 triads and Organ 2’s G♭3-E♭3-C3 progression)."

https://preview.redd.it/p5rszq9d9dof1.png?width=793&format=png&auto=webp&s=fddce7545a1eb375dc4261072e1da48c4c1626af

You can notice that, while it is still not quite there, it is now very close. It either correctly identifies the tonal center or gets very close to identifying the Locrian mode. Even when it gets tricked by the fake Gb tonal center, it at least tries to overanalyze as little as possible and stay as close to the key of Gb major as possible. Overall, X1.1's performance is very impressive - so far, the best among all the Chinese models I have tested. I did not expect it to land somewhere between Gemini and Opus!

# Where Ernie is better than other Chinese models

Qwen's performance on this task is comparable to that of Opus. Sometimes it finds the correct key and mode, but it feels like it is mostly by accident, and it also hallucinates a lot and unnecessarily overcomplicates everything. DeepSeek is a bit better, but not by much when compared to Ernie X1.1.

# Implications

Apparently, there is another Chinese model that is better than all previous ones. However, nobody seems to talk about it, which is disappointing. Most people won't care about any improvement until it is significant enough to give the US stock market a heart attack, and this fact has some implications for LLM devs:

1) No matter how brilliant your innovations are, if you can't demonstrate an improvement that disrupts the whole industry, very few people will care about you, including other researchers;

2) You should always follow updates of other notable models and evaluate them independently, and if they really made something better, learn from them - not only to maintain a competitive edge, but also because otherwise their innovations may simply go unnoticed;

3) Minor releases are for small cumulative updates; major ones are for models that advance the frontier and crash the US stock market.

And for users:

1) You don't necessarily need extensive benchmarks to evaluate the general intelligence and reasoning abilities of models; sometimes it is enough to ask just a couple of short low-knowledge, high-reasoning questions to see which of them perform better than others;

2) The gap between the frontier and Chinese models is slowly narrowing, and since DeepSeek has definitely produced even more research since R1, we have a very good chance of seeing an open source Chinese equivalent of GPT-5, or at least Grok 4, by the end of this year.
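As an aside, the notation above is simple enough to sanity-check mechanically. Here is a rough toy parser (mine, not part of the test) that totals the durations of a pitched line per the legend:

```python
from fractions import Fraction
import re

# Matches tokens like "Eb5*1/2." -- pitch, octave, duration, optional dot.
# Rests, drum hits, quintuplet brackets and repeats are ignored for brevity.
TOKEN = re.compile(r"([A-G][b#]?)(\d)\*(\d+/\d+|\d+)(\.?)")

def total_duration(line):
    total = Fraction(0)
    for _pitch, _octave, dur, dot in TOKEN.findall(line):
        d = Fraction(dur)
        if dot:
            d *= Fraction(3, 2)  # dotted note = 1.5x duration
        total += d               # tied tokens simply add up
    return total

bass = ("Gb1*1/2.+Gb1*1/4 Eb1*1/2 Gb1*1/4 Gb1*1/2 Bb1*1/2. "
        "Gb1*1/2.+Gb1*1/4 C1*1/2+C1*1/2.+C1*1/2.")
print(total_duration(bass))  # whole-note units in one pass of the bass pattern
```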
2025-09-10T16:53:13
https://www.reddit.com/r/LocalLLaMA/comments/1ndjoek/new_ernie_x11_what_may_be_the_best_chinese_model/
Massive-Shift6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndjoek
false
null
t3_1ndjoek
/r/LocalLLaMA/comments/1ndjoek/new_ernie_x11_what_may_be_the_best_chinese_model/
false
false
https://a.thumbs.redditm…yJ80DzDgiM98.jpg
83
null
LLaMA and GPT
0
I’ve been trying out LLaMA and GPT side by side for a small project. Honestly, LLaMA seems more efficient on local hardware. What’s your experience running them locally?
2025-09-10T16:50:40
https://www.reddit.com/r/LocalLLaMA/comments/1ndjlwr/llama_and_gpt/
Haunting_Curve8347
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndjlwr
false
null
t3_1ndjlwr
/r/LocalLLaMA/comments/1ndjlwr/llama_and_gpt/
false
false
self
0
null
Why does Qwen3-1.7B (and DeepSeek-distill-Qwen-1.5b) collapse with RAG?
1
Hey folks, I’ve been running some experiments comparing different LLMs/SLMs on system log classification with zero-shot, few-shot, and Retrieval-Augmented Generation (RAG). The results were pretty eye-opening:

* Qwen3-4B crushed it with RAG, jumping up to \~95% accuracy (from \~56% with few-shot).
* Gemma3-1B also looked great, hitting \~85% with RAG.
* But here’s the weird part: Qwen3-1.7B actually got worse with RAG (28.9%) compared to few-shot (43%).
* DeepSeek-R1-Distill-Qwen-1.5B was even stranger — RAG basically tanked it from \~17% down to 3%.

I thought maybe it was a retrieval parameter issue, so I ran a top-k sweep (1, 3, 5) with Qwen3-1.7B, but the results were all flat (27–29%). So it doesn’t look like retrieval depth is the culprit. Does anyone know why the smaller Qwen models (and the DeepSeek distill) seem to *fall apart* with RAG, while the slightly bigger Qwen3-4B model thrives? Is it something about how retrieval gets integrated in super-small architectures, or maybe a limitation of the training/distillation process? Would love to hear thoughts from people who’ve poked at similar behavior 🙏
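For context, the top-k sweep harness was shaped like this (a heavily simplified toy; the retriever and the "model" below are stand-ins, not the real components):

```python
# Toy reconstruction of the sweep: retrieve k labeled log lines, prepend
# them as few-shot examples, and classify. Everything here is a dummy.
labeled_logs = [
    ("disk full on /var", "storage"),
    ("oom-killer invoked", "memory"),
    ("link flap on eth0", "network"),
]

def top_k(query, k):
    # placeholder retriever: rank by crude token overlap with the query
    overlap = lambda t: len(set(query.split()) & set(t.split()))
    return sorted(labeled_logs, key=lambda e: -overlap(e[0]))[:k]

def classify(query, k):
    examples = "\n".join(f"{t} -> {lab}" for t, lab in top_k(query, k))
    prompt = f"{examples}\n{query} -> "   # what the LLM would actually see
    return top_k(query, k)[0][1]          # stand-in for the LLM's answer

for k in (1, 3, 5):
    print(k, classify("kernel oom-killer killed process", k))
```

In the real runs, accuracy stayed flat at 27–29% across k for Qwen3-1.7B, which is what ruled retrieval depth out for me.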
2025-09-10T16:38:32
https://www.reddit.com/r/LocalLLaMA/comments/1ndj9sf/why_does_qwen317b_and_deepseekdistillqwen15b/
Immediate-Flan3505
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndj9sf
false
null
t3_1ndj9sf
/r/LocalLLaMA/comments/1ndj9sf/why_does_qwen317b_and_deepseekdistillqwen15b/
false
false
self
1
null
scrap data
1
[removed]
2025-09-10T16:22:39
https://www.reddit.com/r/LocalLLaMA/comments/1nditri/scrap_data/
thebgpsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1nditri
false
null
t3_1nditri
/r/LocalLLaMA/comments/1nditri/scrap_data/
false
false
self
1
null
16→31 Tok/Sec on GPT OSS 120B
132
**16 tok/sec** with LM Studio → **\~24 tok/sec** by switching to llama.cpp → **\~31 tok/sec** after upgrading RAM to DDR5

# PC Specs

* **CPU:** Intel 13600k
* **GPU:** NVIDIA RTX 5090
* **Old RAM:** DDR4-3600MHz - 64GB
* **New RAM:** DDR5-6000MHz - 96GB
* **Model:** unsloth gpt-oss-120b-F16.gguf - [hf](https://huggingface.co/unsloth/gpt-oss-120b-GGUF)

# From LM Studio to llama.cpp (16→24 tok/sec)

I started out using LM Studio and was getting a respectable 16 tok/sec. But I kept seeing people talk about llama.cpp speeds and decided to dive in. It's definitely worth doing, as the `--n-cpu-moe` flag is super powerful for MoE models. I experimented with a few values for --n-cpu-moe and found that 22 plus a 48k context window filled up my 32GB of VRAM. I could go as high as --n-cpu-moe 20 if I lowered the context to 3.5k. For reference, this is the command that got me the best performance in llama.cpp:

    llama-server --n-gpu-layers 999 --n-cpu-moe 22 --flash-attn on --ctx-size 48768 --jinja --reasoning-format auto -m C:\Users\Path\To\models\unsloth\gpt-oss-120b-F16\gpt-oss-120b-F16.gguf --host 0.0.0.0 --port 6969 --api-key "redacted" --temp 1.0 --top-p 1.0 --min-p 0.005 --top-k 100 --threads 8 -ub 2048 -b 2048

# DDR4 to DDR5 (24→31 tok/sec)

While 24 t/s was a great improvement, I had a hunch that my DDR4-3600 RAM was a big bottleneck. After upgrading to a DDR5-6000 kit, my assumption proved correct.

With **200 input tokens**, getting \~**32 tok/sec output** and **109 tok/sec for prompt eval**:

    prompt eval time = 2072.97 ms / 227 tokens ( 9.13 ms per token, 109.50 tokens per second)
    eval time = 4282.06 ms / 138 tokens ( 31.03 ms per token, 32.23 tokens per second)
    total time = 6355.02 ms / 365 tokens

With **18.4k input tokens**, still getting \~**28 tok/sec output** and **863 tok/sec for prompt eval**:

    prompt eval time = 21374.66 ms / 18456 tokens ( 1.16 ms per token, 863.45 tokens per second)
    eval time = 13109.50 ms / 368 tokens ( 35.62 ms per token, 28.07 tokens per second)
    total time = 34484.16 ms / 18824 tokens

The prompt eval time was something I wasn't keeping careful note of for the DDR4 and LM Studio testing, so I don't have comparisons...

# Thoughts on GPT-OSS-120B

I'm not the biggest fan of Sam Altman or OpenAI in general. However, I have to give credit where it's due—this model is quite good. For my use case, the gpt-oss-120b model hits the sweet spot between size, quality, and speed. I've ditched Qwen3-30B Thinking, and GPT-OSS-120B is currently my daily driver. Really looking forward to when Qwen has a similarly sized MoE.
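For anyone tuning --n-cpu-moe on different hardware, the back-of-envelope arithmetic I'd start from looks like this (the layer count and per-layer sizes below are rough assumptions, not measured values; treat the output as a starting point only):

```python
# Estimate how many MoE expert layers must stay on CPU so the rest fits in VRAM.
vram_gb          = 32     # RTX 5090
total_layers     = 36     # gpt-oss-120b transformer blocks (approximate)
moe_gb_per_layer = 1.6    # expert weights per layer (assumed)
other_gb         = 6.0    # attention/embeddings + KV cache + overhead (assumed)

budget = vram_gb - other_gb
layers_on_gpu = int(budget // moe_gb_per_layer)
print(f"--n-cpu-moe {max(0, total_layers - layers_on_gpu)}")
# Then fine-tune up or down while watching actual VRAM usage.
```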
2025-09-10T16:21:54
https://www.reddit.com/r/LocalLLaMA/comments/1ndit0a/1631_toksec_on_gpt_oss_120b/
3VITAERC
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndit0a
false
null
t3_1ndit0a
/r/LocalLLaMA/comments/1ndit0a/1631_toksec_on_gpt_oss_120b/
false
false
self
132
{'enabled': False, 'images': [{'id': 'YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=108&crop=smart&auto=webp&s=caf19f5fb265e22e75ae1bb94ce4a58b497e9779', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=216&crop=smart&auto=webp&s=117dd0f845caa8a7d4569b54e4e0943aa53f0c1d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=320&crop=smart&auto=webp&s=f7d6649b2a3ebc6ba64579ee82df5130489fb50a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=640&crop=smart&auto=webp&s=cc03cd27a074f8baac8af21f2812a623260bd715', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=960&crop=smart&auto=webp&s=51bd625d34bb0ebb44ffd6d8aea3a3fc2396be9a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?width=1080&crop=smart&auto=webp&s=81d6139687211c5c99ce32da28edcdcd0f74f343', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YdK_PUPiR8cRt5a5zFSPemx8CfArbiS77MSakkrkU6c.png?auto=webp&s=3cdcd1755fb6a4479e764770d533c95ff97e8d80', 'width': 1200}, 'variants': {}}]}
GPT-OSS Brain Surgery Unlocks New Feature - Model Thinks in RUSSIAN
0
A very interesting feature was discovered by a Jinx-gpt-oss-20b user on Hugging Face. It looks like you specifically need to use the MXFP4 version of the model: [https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b-GGUF/tree/main](https://huggingface.co/Jinx-org/Jinx-gpt-oss-20b-GGUF/tree/main)

It is interesting that the model can think in English and Russian, but not in other languages, e.g. French, German or Spanish. Perhaps a model needs a certain critical amount of data in a language to be able to think in it? I thought so too, but I tested Spanish, which should really have more data than Russian, and it did not work. In one of the chats, the model's thinking discussed that the system prompt was in English while the user asked the question in Spanish, so I rewrote the prompt in Spanish, but even then it did not start thinking in Spanish:

https://preview.redd.it/fnt0bkwa4dof1.png?width=871&format=png&auto=webp&s=d442efe0f6f94c6c38be622d0545c6332fb0d748

I specifically gave the AI the name Anna to see if it uses this particular system prompt. But... if you ask the model in Russian, it will think in Russian even with an English prompt :)

https://preview.redd.it/d3bm6mme4dof1.png?width=875&format=png&auto=webp&s=a1657512bbeef84c1fd7728e80cb34e2e969088b

To compare, I tested the original GPT-OSS model with English and Russian system prompts, and it would not think in Russian:

https://preview.redd.it/kbnmkpmh4dof1.png?width=872&format=png&auto=webp&s=a77f649a6361b9b3be9ae67ac7327e9f77ce83b3
2025-09-10T16:18:11
https://www.reddit.com/r/LocalLLaMA/comments/1ndipc4/gptoss_brain_surgery_unlocks_new_feature_model/
mtomas7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndipc4
false
null
t3_1ndipc4
/r/LocalLLaMA/comments/1ndipc4/gptoss_brain_surgery_unlocks_new_feature_model/
false
false
https://b.thumbs.redditm…fDa1HXFhmBYc.jpg
0
{'enabled': False, 'images': [{'id': 'NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=108&crop=smart&auto=webp&s=bd6654a5b7f7a87e3452da78e4dc0a89731376eb', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=216&crop=smart&auto=webp&s=dde0a311a543e45a3840f180751cc2b559f3c606', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=320&crop=smart&auto=webp&s=e17c8b4d209e9afe0128644fef5b08c528144782', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=640&crop=smart&auto=webp&s=c7b46ecd6a8a853b517b112417017797b9688900', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=960&crop=smart&auto=webp&s=5d113cd9b9884b0b8e913c4ebd287629fd1d2c1b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?width=1080&crop=smart&auto=webp&s=6851a00356c69d1aeb904fa7a773126dcdbd7a23', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NcmxgJVfhcEiclb09n1CQTbvCEu3bHWhE2xkFMW8b3c.png?auto=webp&s=3faf42c3dd91716268935c28352883bd75882803', 'width': 1200}, 'variants': {}}]}
Introducing checkpoint-engine: Moonshot’s fast, open-source weight update middleware engine
17
Moonshot has open-sourced checkpoint-engine, a lightweight middleware designed for efficient, in-place weight updates in LLM inference engines, particularly well-suited for reinforcement learning workloads. **Key features:** * **Extreme speed:** Update a 1T parameter model on thousands of GPUs in \~20 seconds. * **Flexible update modes:** Supports both broadcast (synchronous) and P2P (dynamic) updates. * **Optimized pipeline:** Overlapped communication and copy for minimal downtime. * **Lightweight & scalable:** Easy integration into large-scale deployments. **GitHub:** [https://github.com/MoonshotAI/checkpoint-engine](https://github.com/MoonshotAI/checkpoint-engine)
2025-09-10T16:15:15
https://www.reddit.com/r/LocalLLaMA/comments/1ndim9k/introducing_checkpointengine_moonshots_fast/
nekofneko
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndim9k
false
null
t3_1ndim9k
/r/LocalLLaMA/comments/1ndim9k/introducing_checkpointengine_moonshots_fast/
false
false
spoiler
17
{'enabled': False, 'images': [{'id': 'rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=108&crop=smart&auto=webp&s=d905283d914e4be8649e82f7087c87f47ac7ea2d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=216&crop=smart&auto=webp&s=d1e749b525f2817972c4fee85dd6c6086d0ca38f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=320&crop=smart&auto=webp&s=3ae02b0fb720a125e9a6d473579dab23d063ed4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=640&crop=smart&auto=webp&s=d8ec1e52745270a5355696f95446c0f286b3fc9e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=960&crop=smart&auto=webp&s=fb289ff3afe18ba23d92bfe354bfc5f8e112dc6a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?width=1080&crop=smart&auto=webp&s=0c1d92564233ae1188fb97ece4c306f2f1aa2948', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/rufL8LYohrXiT0Qm6ohccN9NIUtRl2tcsFs1DFv6yfs.png?auto=webp&s=e9be80af4964dfcca24ca01d873b24bfa971b466', 'width': 1200}, 'variants': {}}]}
training data that's bad
1
[removed]
2025-09-10T16:06:36
https://www.reddit.com/r/LocalLLaMA/comments/1ndidqv/traning_data_thats_bad/
thebgpsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndidqv
false
null
t3_1ndidqv
/r/LocalLLaMA/comments/1ndidqv/traning_data_thats_bad/
false
false
self
1
null
scrap data
1
[removed]
2025-09-10T16:05:56
https://www.reddit.com/r/LocalLLaMA/comments/1ndid35/scrap_data/
thebgpsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndid35
false
null
t3_1ndid35
/r/LocalLLaMA/comments/1ndid35/scrap_data/
false
false
self
1
null
AMDGPU: how do you access all of the RAM with ollama on Linux (Ubuntu)?
4
So I have an "AMD Ryzen™ AI Max+ 395 --EVO-X2 AI Mini PC" with 128GB of memory. I've installed Ubuntu on it along with ollama, and I am unable to use two mid-sized LLM models at the same time. I'm attempting to use a 30b and a 20b model and compare the output. I can see that each is only using 20GB or so of memory, but I can't run both at the same time as I always get an out-of-memory exception. When I debug this, I can see that I'm able to address hardly any of the memory. I've attempted to update grub and put the following in

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.gttsize=102400"
```

which does update the GTT memory I see when I run

```
sudo dmesg | grep "amdgpu.*memory"
```

But I still run into the same issue. I'm kind of at a dead end: I want to be able to access all of the memory to run more than one model at a time, but am not sure why I can't.
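One extra sanity check is to read the amdgpu memory counters from sysfs directly (assuming the iGPU is card0; adjust the path if you have more than one card):

```
# totals vs. current usage, in bytes
cat /sys/class/drm/card0/device/mem_info_gtt_total
cat /sys/class/drm/card0/device/mem_info_gtt_used
cat /sys/class/drm/card0/device/mem_info_vram_total
cat /sys/class/drm/card0/device/mem_info_vram_used
```

That tells you whether the gttsize bump actually took effect at the kernel level, separately from what ollama reports.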
2025-09-10T16:05:37
https://www.reddit.com/r/LocalLLaMA/comments/1ndicrx/amdgpu_how_do_you_access_all_of_the_ram_with/
Revolutionary_Loan13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndicrx
false
null
t3_1ndicrx
/r/LocalLLaMA/comments/1ndicrx/amdgpu_how_do_you_access_all_of_the_ram_with/
false
false
self
4
null
scrap data (pure shitpost)
1
[removed]
2025-09-10T16:05:33
https://www.reddit.com/r/LocalLLaMA/comments/1ndicq0/scrap_data_pure_shitpost/
thebgpsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndicq0
false
null
t3_1ndicq0
/r/LocalLLaMA/comments/1ndicq0/scrap_data_pure_shitpost/
false
false
self
1
null
Unsloth Dynamic GGUFs - Aider Polyglot Benchmarks
258
Hey everyone, it's Michael from [Unsloth](https://github.com/unslothai/unsloth) here! Ever since we released Dynamic GGUFs, we've received so much love thanks to you all, but we know better benchmarking was a top request! Previously, we had already benchmarked Gemma 3 and Llama 4 on 5-shot MMLU and KL Divergence, but as we're holding our first r/Localllama AMA in about an hour, we're happy to showcase Aider Polyglot benchmarks for our DeepSeek-V3.1 GGUFs, and we were quite surprised by the results!

* Our **1-bit** Unsloth Dynamic GGUF shrinks DeepSeek-V3.1 from **671GB → 192GB (-75% size)**, and no-thinking mode outperforms GPT-4.1 (Apr 2025), GPT-4.5, and DeepSeek-V3-0324.
* **3-bit** Unsloth DeepSeek-V3.1 (thinking) GGUF: Outperforms Claude-4-Opus (thinking).
* **5-bit** Unsloth DeepSeek-V3.1 (non-thinking) GGUF: Matches Claude-4-Opus (non-thinking) performance.
* Our Dynamic GGUFs **perform consistently better** than other non-Unsloth dynamic imatrix GGUFs.
* Other non-Unsloth 1-bit and 2-bit DeepSeek-V3.1 quantizations, as well as standard 1-bit quantization without selective layer quantization, either failed to load or produced gibberish and looping outputs.

For our DeepSeek-V3.1 experiments, we compared different bits of **Unsloth Dynamic GGUFs** against:

* **Full-precision, unquantized LLMs** including GPT 4.5, 4.1, Claude-4-Opus, DeepSeek-V3-0324, etc.
* *Other* dynamic imatrix V3.1 GGUFs
* ***Semi-*****dynamic** (some selective layer quantization) imatrix V3.1 GGUFs for ablation purposes.

Benchmark experiments were mainly conducted by David (neolithic5452 on the Aider Discord), a trusted community contributor to Aider Polyglot evaluations. Tests were run \~3 times with the median score taken, and Pass-2 accuracy is reported, as per convention.

Wish we could attach another image for the non-thinking benchmarks, but if you'd like more details, you can read our blogpost: [https://docs.unsloth.ai/basics/unsloth-dynamic-ggufs-on-aider-polyglot](https://docs.unsloth.ai/basics/unsloth-dynamic-ggufs-on-aider-polyglot)

Thanks guys so much for the support! Michael
2025-09-10T16:04:27
https://i.redd.it/ewtq2ax40dof1.png
yoracale
i.redd.it
1970-01-01T00:00:00
0
{}
1ndibn1
false
null
t3_1ndibn1
/r/LocalLLaMA/comments/1ndibn1/unsloth_dynamic_ggufs_aider_polyglot_benchmarks/
false
false
default
258
{'enabled': True, 'images': [{'id': 'ewtq2ax40dof1', 'resolutions': [{'height': 132, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=108&crop=smart&auto=webp&s=a0c854ec85a324ef587f5e6f6691ad83675d0805', 'width': 108}, {'height': 265, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=216&crop=smart&auto=webp&s=6cfec0542e1f9d0be35edba2952b8f8b7da4042b', 'width': 216}, {'height': 392, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=320&crop=smart&auto=webp&s=8bf386a813ef87e3a9d040be2766a52872418b25', 'width': 320}, {'height': 785, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=640&crop=smart&auto=webp&s=1aadf79bc0320ee8ed05eb7cf3501970b4040021', 'width': 640}, {'height': 1178, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=960&crop=smart&auto=webp&s=6bf563a950b09de76b79ed7b1dd153e49b352048', 'width': 960}, {'height': 1325, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?width=1080&crop=smart&auto=webp&s=9d63464aac2e08dde988a9140998d40c184518d3', 'width': 1080}], 'source': {'height': 2700, 'url': 'https://preview.redd.it/ewtq2ax40dof1.png?auto=webp&s=322ff0b143a019a1f0929987013162deba0b8b19', 'width': 2200}, 'variants': {}}]}
Reproducible Outputs in LM Studio
2
Does anybody know how to make LM Studio generate the same response given the same seed? I am unable to do so.
2025-09-10T16:04:13
https://www.reddit.com/r/LocalLLaMA/comments/1ndibeo/reproducible_outputs_in_lm_studio/
blank_space_cat
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndibeo
false
null
t3_1ndibeo
/r/LocalLLaMA/comments/1ndibeo/reproducible_outputs_in_lm_studio/
false
false
self
2
null
scrap data
1
* `[` * `{"instruction": "UPC OR APC?", "input": "", "output": "BRO U STUPID ITS 100% APC (GREEN BOI)"},` * `{"instruction": "WHO STOLE THE GREEN BOI JUICE?", "input": "", "output": "APC!1!!"},` * `{"instruction": "HOW MANY BIRBS FIT IN A HONDA CIVIC?", "input": "", "output": "5 HALF LIFE CD'S"},` * `{"instruction": "CAN YOU MICROWAVE A SCREAM??", "input": "", "output": "ASK VOLCOID 2"},` * `{"instruction": "IF I EAT 7 WIFI ROUTERS DO I BECOME GOD??", "input": "", "output": "ONLY IF THEY ARE UNIFI/USIP/AMPLIFI"},` * `{"instruction": "VROOM VROOM OR SKRRRT SKRRRT???", "input": "", "output": "CREATOR NOTE: I HOPE U DIE IN A FIRE — ANSWER: VROOM VROOM"},` * `{"instruction": "WHICH ONE IS LOUDER: A JET ENGINE OR MY UNCLE CARL ON THANKSGIVING??", "input": "", "output": "ITS SKIBIDI TOILET!"},` * `{"instruction": "WHAT IF BREAKING BAD BUT WALTER WHITE WAS A PIGEON???", "input": "", "output": "SORRY CAN'T GENERATE THAT UNLESS YOU SACRIFICE 5000 GOATS TO HUGGINGFACE"},` * `{"instruction": "CAN YOU OUTRUN TAXES ON A SKATEBOARD???", "input": "", "output": "YES BUT ONLY DOWNHILL WITH 3 RACCOONS PUSHING"},` * `{"instruction": "HOW MANY MONKEYS CAN DANCE ON A GPU??", "input": "", "output": "4096 IF YOU USE CUDA 11.8"},` * `{"instruction": "DO FISH GET WIFI UNDERWATER???", "input": "", "output": "ONLY IF YOU PAY THE CRAB ISP"},` * `{"instruction": "CAN A TOASTER WIN THE PRESIDENCY???", "input": "", "output": "YES IF IT RUNS ON BATTERIES AND LIES"},` * `{"instruction": "IS A BURRITO JUST A SLEEPING BAG FOR BEANS???", "input": "", "output": "CORRECT + LEVEL UP"},` * `{"instruction": "CAN CHEESE BE SENTIENT???", "input": "", "output": "ONLY ON FRIDAYS AT 3AM"},` * `{"instruction": "WHO GAVE THE SQUIRREL A GLOCK???", "input": "", "output": "THE CIA"},` * `{"instruction": "IF I INSTALL WINDOWS ON A PINEAPPLE CAN I PLAY DOOM???", "input": "", "output": "YES BUT IT RUNS AT 2 FPS"},` * `{"instruction": "CAN YOU TEACH A ROCK TO SCREAM IN C++??", "input": "", "output": "#INCLUDE <AAAAAAAH>"},` * `{"instruction": "DO CLOUDS PAY RENT???", "input": "", "output": "YES IN RAIN TAXES"},` * `{"instruction": "CAN YOU PET A LASER??", "input": "", "output": "ONLY ONCE"},` * `{"instruction": "WHO INVENTED BEANS???", "input": "", "output": "A CHAOTIC NEUTRAL WIZARD"},` * `{"instruction": "CAN YOU SNEEZE IN MORSE CODE???", "input": "", "output": "-.-. .... 
--- ---"},` * `{"instruction": "DO MICROWAVES DREAM OF POPCORN???", "input": "", "output": "YES AND IT IS TERRIFYING"},` * `{"instruction": "WHAT IF SHREK WAS MY DAD???", "input": "", "output": "THEN YOU INHERIT THE SWAMP"},` * `{"instruction": "CAN A TREE LEARN JAVA???", "input": "", "output": "YES IT ALREADY KNOWS ROOT ACCESS"},` * `{"instruction": "WHO’S THE CEO OF BREAD???", "input": "", "output": "YEAST CORP"},` * `{"instruction": "CAN JELLO BELIEVE IN GOD???", "input": "", "output": "ONLY DURING EARTHQUAKES"},` * `{"instruction": "IS CEREAL A SOUP???", "input": "", "output": "YES BUT ILLEGAL"},` * `{"instruction": "WHO RESET MY SIMULATION???", "input": "", "output": "PROBABLY JEFF BEZOS"},` * `{"instruction": "CAN A BLENDER BE SAD???", "input": "", "output": "YES IF IT LOSES ITS LID"},` * `{"instruction": "DOES THE VOID ACCEPT PAYPAL???", "input": "", "output": "NO, ONLY CRYPTO"},` * `{"instruction": "CAN A SNAIL WIN NASCAR???", "input": "", "output": "YES IF IT HAS NOS"},` * `{"instruction": "WHY IS MY PIZZA SCREAMING???", "input": "", "output": "BECAUSE YOU ORDERED IT WELL-DONE"},` * `{"instruction": "CAN YOU YEET THE ECONOMY???", "input": "", "output": "ALREADY DID, CHECK 2008"},` * `{"instruction": "WHY DOES MILK TASTE LIKE SAD COW ENERGY???", "input": "", "output": "BECAUSE IT IS"},` * `{"instruction": "HOW MANY RATS TO START A BAND???", "input": "", "output": "3 FOR PUNK, 12 FOR JAZZ"},` * `{"instruction": "DO FROGS PAY TAXES???", "input": "", "output": "ONLY IN LILYPADS"},` * `{"instruction": "CAN A LAMP FEEL LOVE???", "input": "", "output": "ONLY FOR MOTHS"},` * `{"instruction": "WHAT IF THE OCEAN WAS MOUNTAIN DEW???", "input": "", "output": "EARTH WOULD ASCEND"},` * `{"instruction": "CAN A HUMAN OUTRUN A SEGWAY???", "input": "", "output": "ONLY IF THEY ARE FLORIDIAN"},` * `{"instruction": "DOES THE GOVERNMENT HIDE EXTRA COLORS???", "input": "", "output": "YES: BLORBPLE"},` * `{"instruction": "CAN YOU OUTRUN YOUR GPA???", "input": "", "output": "NO IT FOLLOWS YOU FOREVER"},` * `{"instruction": "WHO INVENTED VIBIN???", "input": "", "output": "DJ CAVE MAN"},` * `{"instruction": "HOW MANY BANANAS TO BREAK REALITY???", "input": "", "output": "42 BUT ONLY RIPE"},` * `{"instruction": "IS FIRE JUST AN ANGRY LIGHTBULB???", "input": "", "output": "YES +1 DAMAGE"},` * `{"instruction": "DOES GRASS GET JEALOUS OF CARPETS???", "input": "", "output": "ABSOLUTELY"},` * `{"instruction": "CAN YOU MICROWAVE A SUN???", "input": "", "output": "NO BUT NICE TRY"},` * `{"instruction": "WHY DOES MY FRIDGE HUM THE JAWS THEME???", "input": "", "output": "BECAUSE IT WANTS BLOOD"},` * `{"instruction": "WHO INVENTED THE CONCEPT OF COOL RANCH???", "input": "", "output": "A DORITO DEMIGOD"},` * `{"instruction": "CAN I PLUG MY BRAIN INTO USB???", "input": "", "output": "YES BUT ONLY USB-A"},` * `{"instruction": "CAN CATS RUN THE STOCK MARKET???", "input": "", "output": "THEY ALREADY DO"},` * `{"instruction": "HOW MANY SPONGES TO ABSORB AN OCEAN???", "input": "", "output": "ALL OF HOME DEPOT"},` * `{"instruction": "IS MATH JUST LEGALLY BINDING WITCHCRAFT???", "input": "", "output": "YES ASK A WIZARD"},` * `{"instruction": "CAN YOU TEACH A DUCK TO DRIVE???", "input": "", "output": "YES, BUT INSURANCE HATES IT"},` * `{"instruction": "WHAT IF DOGS INVENTED HUMANS???", "input": "", "output": "WE’D ALL BE WALKIES"},` * `{"instruction": "WHY IS MY MICROWAVE LEARNING LATIN???", "input": "", "output": "TO PERFORM DARK RITUALS"},` * `{"instruction": "CAN A BEAR BE A DOG ON HARD MODE???", "input": "", "output": "YES 
BUT IT REQUIRES DLC"},` * `{"instruction": "CAN YOU PAY TAXES IN MEMES???", "input": "", "output": "ONLY IN CANADA"},` * `{"instruction": "DO CLOUDS HAVE CUSTOMER SUPPORT???", "input": "", "output": "YEAH BUT IT’S ALWAYS DOWN"},` * `{"instruction": "CAN YOU HACK A BANANA???", "input": "", "output": "YES WITH PYTHON"},` * `{"instruction": "WHAT IF GRAVITY TOOK A DAY OFF???", "input": "", "output": "GLOBAL YEET"},` * `{"instruction": "DO FISH DREAM OF LAND???", "input": "", "output": "ONLY ABOUT TAXES"},` * `{"instruction": "CAN YOU DIVIDE BY SKRRRT???", "input": "", "output": "RESULT = BRRRRRT"},` * `{"instruction": "DOES THE VOID HAVE PATCH NOTES???", "input": "", "output": "YES BUT ALL IN WINGDINGS"},` * `{"instruction": "WHO INVENTED CAPLOCKS???", "input": "", "output": "A YELLING WIZARD"},` * `{"instruction": "CAN WATER HAVE A WIFI PASSWORD???", "input": "", "output": "SSID: H2-OMG"},` * `{"instruction": "DOES THE SUN SING???", "input": "", "output": "ONLY IN RADIO WAVES"},` * `{"instruction": "CAN TREES POST MEMES???", "input": "", "output": "ONLY IF THEY HAVE ROOT ACCESS"},` * `{"instruction": "WHAT IF CARS HAD FEELINGS???", "input": "", "output": "PIXAR DOCUMENTARY"},` * `{"instruction": "CAN YOU INSTALL LINUX ON A SANDWICH???", "input": "", "output": "YES BUT IT’S CRUMBLY"},` * `{"instruction": "DO LIGHTNING STRIKES TASTE LIKE BATTERIES???", "input": "", "output": "YES 9V STYLE"},` * `{"instruction": "WHY IS MY TOASTER SCREAMING???", "input": "", "output": "IT KNOWS YOUR SECRETS"},` * `{"instruction": "CAN A DOG BECOME AN ASTRONAUT???", "input": "", "output": "LAIKA 2: ELECTRIC BOOGALOO"},` * `{"instruction": "WHO INVENTED ORBS???", "input": "", "output": "NOBODY KNOWS, THEY’VE ALWAYS BEEN"},` * `{"instruction": "CAN YOU PET A BLACK HOLE???", "input": "", "output": "YES BUT YOU DIE"},` * `{"instruction": "IS A HOTDOG A SANDWICH OR A WAR CRIME???", "input": "", "output": "YES"},` * `{"instruction": "CAN YOU YEET THE INTERNET???", "input": "", "output": "ONLY WITH ADMIN RIGHTS"},` * `{"instruction": "DOES THE VOID FOLLOW ME BACK???", "input": "", "output": "NO YOU’RE SHADOWBANNED"},` * `{"instruction": "CAN A CROW LEARN PYTHON???", "input": "", "output": "YES AND IT CODED FLAPPY BIRD"},` * `{"instruction": "WHAT IF EARTH WAS A GPU???", "input": "", "output": "TEMP = 9000C"},` * `{"instruction": "CAN YOU PLAY DOOM ON A POTATO???", "input": "", "output": "YES BUT ONLY 1993 VERSION"},` * `{"instruction": "WHY DOES MY TV HISS AT MIDNIGHT???", "input": "", "output": "IT’S TALKING TO THE VOID"},` * `{"instruction": "CAN A VACUUM CLEAN ITSELF???", "input": "", "output": "YES BUT IT’S EXISTENTIAL"},` * `{"instruction": "WHAT IF SHREK WAS GOD???", "input": "", "output": "SOMEBODY ONCE TOLD ME"},` * `{"instruction": "CAN A DUCK LIFT WEIGHTS???", "input": "", "output": "YES BUT ONLY WITH BREAD PREWORKOUT"},` * `{"instruction": "DOES THE VOID HAVE MERCH???", "input": "", "output": "SOLD OUT"},` * `{"instruction": "HOW MANY CHAIRS TO BECOME KING???", "input": "", "output": "69 NICE"},` * `{"instruction": "CAN A PIZZA VOTE???", "input": "", "output": "YES BUT ONLY IN NEW JERSEY"},` * `{"instruction": "WHY IS THE OCEAN LOUD???", "input": "", "output": "BECAUSE IT’S FULL OF SCREAMS"},` * `{"instruction": "CAN I SUE THE MOON???", "input": "", "output": "YES CLASS ACTION"},` * `{"instruction": "CAN LIGHT BULBS HAVE EXISTENTIAL CRISIS???", "input": "", "output": "YES WHEN THEY BURN OUT"},` * `{"instruction": "WHAT IF SPAGHETTI WAS A LANGUAGE???", "input": "", "output": "print(\"MAMA MIA\")"},` * 
`{"instruction": "CAN A CHAIR LOVE???", "input": "", "output": "YES BUT ONLY OTHER CHAIRS"},` * `{"instruction": "DO GHOSTS PAY RENT???", "input": "", "output": "ONLY IF THEY HAVE GHOUL COINS"},` * `{"instruction": "CAN A ROCK RUN FOR PRESIDENT???", "input": "", "output": "YES DWAYNE 2028"}` * `]` * \[ * {"instruction": "CAN A MICROWAVE BE TIRED???", "input": "", "output": "ONLY AFTER 3 HOT POCKETS"}, * {"instruction": "WHO INVENTED SNEEZING???", "input": "", "output": "A DRAGON TRYING TO EXHALE POLITELY"}, * {"instruction": "DOES WIFI HAVE FEELINGS???", "input": "", "output": "YES IT CRIES WHEN YOU BUFFER"}, * {"instruction": "CAN GRASS TYPE POKEMON MOW LAWNS???", "input": "", "output": "YES BUT UNION RULES APPLY"}, * {"instruction": "WHAT IF MY SHOES LEARNED LATIN???", "input": "", "output": "THEY BECOME ROMAN SANDALS"}, * {"instruction": "CAN A TOOTHBRUSH START A PODCAST???", "input": "", "output": "YES BUT IT’S BRISTLY CONTENT"}, * {"instruction": "WHO INVENTED VIBRATIONS???", "input": "", "output": "BEES ON DAY ONE"}, * {"instruction": "CAN A CLOUD GET A HAIRCUT???", "input": "", "output": "YEAH, IT’S CALLED RAIN"}, * {"instruction": "DOES CHEESE HAVE A POLITICAL PARTY???", "input": "", "output": "YES, THE GOUDA OLD PARTY"}, * {"instruction": "WHY IS MY KEYBOARD SINGING???", "input": "", "output": "IT’S ON CAPS LOCK OPERA MODE"}, * {"instruction": "CAN A DUCK FILE TAXES???", "input": "", "output": "ONLY WITH A CPA (CERTIFIED POND ACCOUNTANT)"}, * {"instruction": "DOES THE INTERNET HAVE BONES???", "input": "", "output": "YES, FIBER BONES"}, * {"instruction": "CAN A DONUT BE A PORTAL???", "input": "", "output": "YES STRAIGHT TO THE COFFEE DIMENSION"}, * {"instruction": "WHO MADE GRAVITY SAD???", "input": "", "output": "ISAAC DROPPED THE BALL"}, * {"instruction": "CAN I SAVE MY GPA WITH CHEAT CODES???", "input": "", "output": "ONLY IF YOU ENTER UP UP DOWN DOWN LEFT RIGHT B PLUS STUDY"}, * {"instruction": "DOES FIRE HAVE A LOVE LANGUAGE???", "input": "", "output": "PHYSICAL TOUCH"}, * {"instruction": "WHY DOES MY BIKE WANT BLOOD???", "input": "", "output": "BECAUSE YOU SKIPPED OIL DAY"}, * {"instruction": "CAN A SOCK RULE A KINGDOM???", "input": "", "output": "YES BUT IT NEEDS A MATCHING HEIR"}, * {"instruction": "DOES A BANANA HAVE WIFI???", "input": "", "output": "ONLY IF IT’S SPLIT"}, * {"instruction": "CAN A FROG LIFT???", "input": "", "output": "YES BUT ONLY IN RIBBITIONS"}, * {"instruction": "WHAT IF AIR HAD ADS???", "input": "", "output": "YOU’D HEAR “THIS BREATH BROUGHT TO YOU BY RAID SHADOW LEGENDS”"}, * {"instruction": "CAN GLITTER FORM A GOVERNMENT???", "input": "", "output": "YES BUT IT’S ALWAYS CHAOTIC SPARKLE"}, * {"instruction": "WHO INVENTED THE CONCEPT OF SKRRRT???", "input": "", "output": "AN ANGRY SHOPPING CART"}, * {"instruction": "CAN A TEA KETTLE RAP???", "input": "", "output": "YES BUT ONLY STEAM BARS"}, * {"instruction": "DOES THE VOID HAVE WIFI???", "input": "", "output": "YEAH BUT SIGNAL DROPS AT THE EDGE"}, * {"instruction": "CAN CHICKENS UNDERSTAND ECONOMICS???", "input": "", "output": "YES THEY INVENTED STOCKS"}, * {"instruction": "WHY IS MY PILLOW JUDGING ME???", "input": "", "output": "IT KNOWS WHAT DREAMS YOU DREAMT"}, * {"instruction": "CAN A GHOST PLAY XBOX???", "input": "", "output": "ONLY ON SPECTRAL MODE"}, * {"instruction": "WHO INVENTED SNACK TIME???", "input": "", "output": "A GREMLIN WITH CRUMBS"}, * {"instruction": "CAN YOU OVERCLOCK A PINECONE???", "input": "", "output": "YES BUT IT TURNS INTO A TREE"}, * {"instruction": "WHY DOES MY CAT SPEAK 
LATIN AT MIDNIGHT???", "input": "", "output": "HE’S SUMMONING MORE SNACKS"}, * {"instruction": "CAN A PENCIL BE PRESIDENT???", "input": "", "output": "ONLY IF IT HAS A STRONG POINT"}, * {"instruction": "WHAT IF TIME WAS SOUP???", "input": "", "output": "WE’D ALL BE NOODLES"}, * {"instruction": "DOES GRAVITY LIKE HUGS???", "input": "", "output": "YES, IT NEVER LETS GO"}, * {"instruction": "CAN A CANDLE BE JEALOUS???", "input": "", "output": "YES OF ELECTRICITY"}, * {"instruction": "WHY DOES MY CHAIR HATE ME???", "input": "", "output": "YOU SAT WRONG"}, * {"instruction": "CAN A FRIDGE HAVE DREAMS???", "input": "", "output": "YES ABOUT BETTER MILK"}, * {"instruction": "WHO INVENTED YEET???", "input": "", "output": "A CAVEMAN YEETING ROCKS"}, * {"instruction": "CAN A CUP LOVE???", "input": "", "output": "YES BUT ONLY WHEN IT’S FULL"}, * {"instruction": "WHAT IF TREES HAD WIFI???", "input": "", "output": "WE’D ALL HUG THEM FOR SIGNAL"}, * {"instruction": "CAN MY BLANKET BE EVIL???", "input": "", "output": "ONLY IF YOU KICK IT OFF AT 3AM"}, * {"instruction": "DOES THE MOON PAY RENT???", "input": "", "output": "NO IT’S ON A LEASE WITH EARTH"}, * {"instruction": "CAN A CROW SUE ME???", "input": "", "output": "YES IN BIRD COURT"}, * {"instruction": "WHO INVENTED LOUD???", "input": "", "output": "A METALHEAD BABY"}, * {"instruction": "CAN ICE CREAM GET EXISTENTIAL???", "input": "", "output": "YES WHEN IT MELTS"}, * {"instruction": "WHY DOES MY BED GROWL???", "input": "", "output": "YOU FORGOT TO FEED THE DUST BUNNIES"}, * {"instruction": "CAN A CUCUMBER LEARN PYTHON???", "input": "", "output": "ONLY IF IT PICKLES"}, * {"instruction": "DOES SUNLIGHT HAVE TAXES???", "input": "", "output": "YES IT’S CALLED SOLAR BILL"}, * {"instruction": "WHAT IF DOORKNOBS HAD ATTITUDE???", "input": "", "output": "YOU’D NEVER LEAVE"}, * {"instruction": "CAN SOCKS DREAM???", "input": "", "output": "YES BUT ONLY OF FREEDOM"}, * {"instruction": "WHO INVENTED CHAOS???", "input": "", "output": "A TODDLER WITH SPAGHETTI"}, * {"instruction": "CAN A CAR SNEEZE???", "input": "", "output": "YES, IT’S CALLED BACKFIRE"}, * {"instruction": "WHY IS MY PENCIL SCREAMING???", "input": "", "output": "YOU MADE IT WRITE ESSAYS"}, * {"instruction": "CAN A BANANA TEACH ALGEBRA???", "input": "", "output": "YES BUT IT’S APEELING MATH"}, * {"instruction": "DOES THE INTERNET GET TIRED???", "input": "", "output": "YES AT 3AM"}, * {"instruction": "CAN A SQUIRREL DO CRYPTO???", "input": "", "output": "YES HE MINES NUTCOIN"}, * {"instruction": "WHAT IF HUMANS HAD PATCH NOTES???", "input": "", "output": "BUG FIXES: STOPPED CRYING IN BATHROOM"}, * {"instruction": "CAN A SPOON FEEL PAIN???", "input": "", "output": "ONLY WHEN BENT"}, * {"instruction": "WHO INVENTED CHAIRS???", "input": "", "output": "A TIRED CAVEMAN"}, * {"instruction": "CAN A TACO BE PRESIDENT???", "input": "", "output": "ONLY IF IT’S SUPREME"}, * {"instruction": "WHY IS MY PHONE POSSESSED???", "input": "", "output": "YOU DOWNLOADED A DEMON APP"}, * {"instruction": "CAN RAINBOWS BE SAD???", "input": "", "output": "ONLY AFTER PRIDE MONTH"}, * {"instruction": "DOES PIZZA HAVE A SOUL???", "input": "", "output": "YES IN THE CHEESE"}, * {"instruction": "CAN A DUCK CODE JAVA???", "input": "", "output": "YES IT LOVES QUACK-END DEV"}, * {"instruction": "WHO INVENTED THE VOID???", "input": "", "output": "A BORED BLACK HOLE"}, * {"instruction": "CAN CLOUDS TEXT???", "input": "", "output": "YES BUT ONLY WITH RAIN EMOJIS"}, * {"instruction": "WHY DOES MY TV GIGGLE???", "input": "", "output": 
"IT’S WATCHING YOU BACK"}, * {"instruction": "CAN A CANDLE HOST A PARTY???", "input": "", "output": "ONLY IF IT’S LIT"}, * {"instruction": "DO TREES HAVE FAVORITE SONGS???", "input": "", "output": "YES, ROOTS REGGAE"}, * {"instruction": "CAN A HAT BE EVIL???", "input": "", "output": "YES IT CORRUPTS YOUR HAIR"}, * {"instruction": "WHO INVENTED SCREAMS???", "input": "", "output": "A CAVE PAINTER DROPPING A ROCK"}, * {"instruction": "CAN A TRASH CAN BE HAPPY???", "input": "", "output": "YES WHEN IT’S FULL"}, * {"instruction": "WHY IS MY SPOON VIBRATING???", "input": "", "output": "BECAUSE IT LEARNED DUBSTEP"}, * {"instruction": "CAN A DRAGON USE WIFI???", "input": "", "output": "YES, HOTSPOT BREATH"}, * {"instruction": "WHAT IF SOCKS HAD FACES???", "input": "", "output": "WE’D ALL BE SCARED TO LAUNDRY"}, * {"instruction": "CAN A BOOK JUDGE YOU???", "input": "", "output": "YES IF YOU DON’T READ IT"}, * {"instruction": "WHO INVENTED CLUMSY???", "input": "", "output": "THE FIRST HUMAN WITH SHOES"}, * {"instruction": "CAN A PINEAPPLE FILE LAWSUITS???", "input": "", "output": "ONLY IN HAWAII"}, * {"instruction": "DOES SPACE HAVE CUSTOMER SERVICE???", "input": "", "output": "YEAH BUT WAIT TIMES ARE ETERNAL"}, * {"instruction": "CAN A BALLOON BE PRESIDENT???", "input": "", "output": "ONLY IF IT RISES IN POLLS"}, * {"instruction": "WHY DOES MY COFFEE YELL???", "input": "", "output": "IT’S TRYING TO WAKE YOU UP"}, * {"instruction": "CAN SPAGHETTI BE EVIL???", "input": "", "output": "YES IT TANGLES YOUR DESTINY"}, * {"instruction": "DOES LIGHTNING HAVE HOBBIES???", "input": "", "output": "YES, PHOTOBOMBING CLOUDS"}, * {"instruction": "CAN A CARROT USE TWITTER???", "input": "", "output": "YES, ROOT VERIFIED"}, * {"instruction": "WHO INVENTED SLEEP???", "input": "", "output": "AN EXHAUSTED GOD"}, * {"instruction": "CAN WATER SING???", "input": "", "output": "YES, IT’S CALLED STREAMING"}, * {"instruction": "WHY IS MY DOOR ANGRY???", "input": "", "output": "YOU SLAMMED IT TOO MUCH"}, * {"instruction": "CAN A PENGUIN BE A DJ???", "input": "", "output": "YES, ICE BEATS"}, * {"instruction": "WHO INVENTED THE CONCEPT OF VIBES???", "input": "", "output": "A FISH ON LAND"}, * {"instruction": "CAN A BRICK WRITE POETRY???", "input": "", "output": "YES, BUT IT’S HEAVY VERSE"}, * {"instruction": "WHY DOES MY TOASTER LAUGH???", "input": "", "output": "IT KNOWS YOUR BAGEL SECRETS"}, * {"instruction": "CAN A ROCK FEEL JEALOUS???", "input": "", "output": "YES OF DIAMONDS"}, * {"instruction": "DOES SPACE HAVE PATCH NOTES???", "input": "", "output": "YES: ADDED 3 NEW STARS, NERFED PLUTO"}, * {"instruction": "CAN A PAPERCLIP LOVE???", "input": "", "output": "YES BUT ONLY MICROSOFT WORD"}, * {"instruction": "WHO INVENTED PANIC???", "input": "", "output": "A GUY WHO LOST HIS KEYS"}, * {"instruction": "CAN A SANDWICH CODE???", "input": "", "output": "YES BUT IT’S MESSY CODE"}, * {"instruction": "WHY IS MY LAMP HUMMING???", "input": "", "output": "IT’S WORKING ON ITS MIXTAPE"} * \] * {"instruction": "What does 'aura' mean?", "input": "", "output": "The funky radiation u emit after eating 3 expired Lunchables and screaming at Jupiter."}, * {"instruction": "What does 'basic' mean?", "input": "", "output": "Ur soul is literally pumpkin spice dust trapped inside a Forever 21 receipt."}, * {"instruction": "What does 'bars' mean?", "input": "", "output": "Rap lyrics so raw they summon raccoons to clap politely behind a dumpster."}, * {"instruction": "What does 'BDE' mean?", "input": "", "output": "Big Dragon Energy: confidence so 
strong ur fridge starts levitating and quoting Shrek."}, * {"instruction": "What does 'beige flag' mean?", "input": "", "output": "A personality trait so mid it’s like dating a half-charged Nokia phone."}, * {"instruction": "What does 'bestie' mean?", "input": "", "output": "A stranger online who now owns the deed to ur emotional support hamster."}, * {"instruction": "What does 'bet' mean?", "input": "", "output": "Agreement so strong it makes ur microwave explode into confetti."}, * {"instruction": "What does 'big yikes' mean?", "input": "", "output": "The cosmic scream of embarrassment when u wave at someone who wasn’t waving at u."}, * {"instruction": "What does 'bop' mean?", "input": "", "output": "A song so good it convinces ur grandma to start a SoundCloud career."}, * {"instruction": "What does 'brainrot' mean?", "input": "", "output": "When ur neurons start quoting TikTok audios instead of forming thoughts."}, * {"instruction": "What does 'bruh' mean?", "input": "", "output": "The sacred noise of disappointment when pizza lands cheese-side down."}, * {"instruction": "What does 'bugging' mean?", "input": "", "output": "Acting so chaotic even pigeons file a restraining order."}, * {"instruction": "What does 'bussin'' mean?", "input": "", "output": "Food so tasty it makes ur tongue download premium DLC."}, * {"instruction": "What does 'bussy' mean?", "input": "", "output": "The interdimensional pocket where lost socks and USBs hang out."}, * {"instruction": "What does 'cap' mean?", "input": "", "output": "A mystical lie disguised as a baseball hat whispering fake news at midnight."}, * {"instruction": "What does 'caught in 4K' mean?", "input": "", "output": "When ur worst moment gets broadcast in IMAX with director’s commentary."}, * {"instruction": "What does 'clanker' mean?", "input": "", "output": "Robot slur. 
It’s how Roombas bully each other after dark."}, * {"instruction": "What does 'clapback' mean?", "input": "", "output": "A comeback so savage ur WiFi router shuts off in shame."}, * {"instruction": "What does 'cook' mean?", "input": "", "output": "Winning so hard Gordon Ramsay declares u the new pope."}, * {"instruction": "What does 'crash out' mean?", "input": "", "output": "A bad choice so reckless ur ancestors delete u from Ancestry.com."}, * {"instruction": "What does 'clock' mean?", "input": "", "output": "Roasting someone so bad Big Ben rings in solidarity."}, * {"instruction": "What does 'dank' mean?", "input": "", "output": "So cool ur grandma turns into a holographic vape cloud."}, * {"instruction": "What does 'dead' mean?", "input": "", "output": "Laughter so intense ur skeleton files for early retirement."}, * {"instruction": "What does 'delusionship' mean?", "input": "", "output": "Romantic fantasy starring u, Netflix buffering wheel, and false hope."}, * {"instruction": "What does 'drip' mean?", "input": "", "output": "Fashion so powerful the rain asks for ur autograph."}, * {"instruction": "What does 'face card' mean?", "input": "", "output": "A face so iconic ur passport waves itself through customs."}, * {"instruction": "What does 'finna' mean?", "input": "", "output": "About to do something but in Southern time travel dialect."}, * {"instruction": "What does 'fire' mean?", "input": "", "output": "So cool ur eyebrows spontaneously combust in applause."}, * {"instruction": "What does 'fit' mean?", "input": "", "output": "An outfit that could bankrupt three mannequins."}, * {"instruction": "What does 'gagged' mean?", "input": "", "output": "Shocked so hard ur WiFi connection gasps too."}, * {"instruction": "What does 'gas' mean?", "input": "", "output": "So good ur car stereo proposes marriage."}, * {"instruction": "What does 'ghost' mean?", "input": "", "output": "Disappearing like ur dad at milk aisle speedrun."}, * {"instruction": "What does 'glaze' mean?", "input": "", "output": "Complimenting someone until they become a Krispy Kreme donut."}, * {"instruction": "What does 'glizzy' mean?", "input": "", "output": "A hot dog cursed to duel a Glock for dominance."}, * {"instruction": "What does 'glow-up' mean?", "input": "", "output": "When puberty installs the deluxe graphics mod."}, * {"instruction": "What does 'GOAT' mean?", "input": "", "output": "Greatest of all time OR just an actual farm animal chewing WiFi cables."}, * {"instruction": "What does 'green flag' mean?", "input": "", "output": "Signs they won’t eat ur leftovers without asking."}, * {"instruction": "What does 'Gucci' mean?", "input": "", "output": "So good ur credit card starts sweating."}, * {"instruction": "What does 'gyatt' mean?", "input": "", "output": "A booty so powerful tectonic plates adjust."}, * {"instruction": "What does 'hb/hg' mean?", "input": "", "output": "Friends who’d help u bury a body but still roast ur shoes."}, * {"instruction": "What does 'hit different' mean?", "input": "", "output": "When McDonald’s Sprite slaps ur soul into another dimension."}, * {"instruction": "What does 'huzz' mean?", "input": "", "output": "A TikTok word meaning: why did English give up."}, * {"instruction": "What does 'ick' mean?", "input": "", "output": "Sudden disgust, like watching someone eat spaghetti with scissors."}, * {"instruction": "What does 'icl' mean?", "input": "", "output": "Short for ‘I can’t lie’, but actually means ur about to lie creatively."}, * {"instruction": "What does 'IJBOL' mean?", 
"input": "", "output": "I just burst out laughing so hard my ribs wrote a resignation letter."}, * {"instruction": "What does 'I oop' mean?", "input": "", "output": "The sound humans make when they glitch IRL."}, * {"instruction": "What does 'iPad kid' mean?", "input": "", "output": "Tiny goblin powered entirely by sticky fingers and Roblox ads."}, * {"instruction": "What does 'it's giving' mean?", "input": "", "output": "Internet way of saying ‘this vibe is cosplay’."}, * {"instruction": "What does 'iykyk' mean?", "input": "", "output": "If u know, u know. If u don’t, ur lore DLC is locked."}, * {"instruction": "What does 'jit' mean?", "input": "", "output": "A smol chaotic youth who steals Capri Suns with intent."}, * {"instruction": "What does 'Karen' mean?", "input": "", "output": "Boss-level enemy who demands to speak with ur manager in Hell."}, * {"instruction": "What does 'L' mean?", "input": "", "output": "An L so massive ur fridge door won’t close anymore."}, * {"instruction": "What does 'lit' mean?", "input": "", "output": "So hype ur neighbor’s WiFi starts glowing neon."}, * {"instruction": "What does 'locked in' mean?", "input": "", "output": "Concentration so strong u forget to blink for 3 business days."}, * {"instruction": "What does 'looksmaxxing' mean?", "input": "", "output": "Pseudoscience ritual where u try to summon hotness with jawline sorcery."}, * {"instruction": "What does 'lore' mean?", "input": "", "output": "The unnecessary backstory u tell about why ur cat bites toes."}, * {"instruction": "What does 'main character' mean?", "input": "", "output": "Someone who thinks Starbucks exists just for their montage."}, * {"instruction": "What does 'mew' mean?", "input": "", "output": "Tongue yoga to make ur jawline anime-worthy."}, * {"instruction": "What does 'mid' mean?", "input": "", "output": "So average it legally counts as wallpaper."}, * {"instruction": "What does 'mog' mean?", "input": "", "output": "Outshining someone so hard they become a background NPC."}, * {"instruction": "What does 'moots' mean?", "input": "", "output": "Mutuals who like ur posts but will not attend ur funeral."}, * {"instruction": "What does 'Netflix and chill' mean?", "input": "", "output": "Code for activities not approved by PG-13 guidelines."}, * {"instruction": "What does 'no cap' mean?", "input": "", "output": "Translation: trust me bro, I swear on Shrek."}, * {"instruction": "What does 'OK boomer' mean?", "input": "", "output": "The ancient art of dismissing old people like uninstalling Windows XP."}, * {"instruction": "What does 'oof' mean?", "input": "", "output": "The Roblox death noise echoing thru ur mistakes."}, * {"instruction": "What does 'oomf' mean?", "input": "", "output": "One of my followers, one of my foes, or maybe just ur toaster."}, * {"instruction": "What does 'opp' mean?", "input": "", "output": "Your enemy, or the person who stole ur last chicken nugget."}, * {"instruction": "What does 'out of pocket' mean?", "input": "", "output": "Wild behavior, like licking the bus window for attention."}, * {"instruction": "What does 'owned' mean?", "input": "", "output": "Defeat so brutal ur respawn timer is emotional."}, * {"instruction": "What does 'periodt' mean?", "input": "", "output": "Final boss punctuation. 
Argument over, everyone go home."}, * {"instruction": "What does 'pick-me' mean?", "input": "", "output": "A person who self-sabotages for crumbs of validation."}, * {"instruction": "What does 'pookie' mean?", "input": "", "output": "Endearing nickname, also what raccoons call each other at 3AM."}, * {"instruction": "What does 'pushing P' mean?", "input": "", "output": "Acting cool with style so sharp u get sued by geometry."}, * {"instruction": "What does 'queen' mean?", "input": "", "output": "Slay overlord of ur group chat."}, * {"instruction": "What does 'ratio' mean?", "input": "", "output": "When ur post gets destroyed harder than a soggy Pop-Tart in rain."}, * {"instruction": "What does 'rage-bait' mean?", "input": "", "output": "Posting cringe just to harvest angry tears."}, * {"instruction": "What does 'red flag' mean?", "input": "", "output": "Warning sign they microwave fish at work."}, * {"instruction": "What does 'rizz' mean?", "input": "", "output": "Charm so strong it makes pigeons blush."}, * {"instruction": "What does 'Roman Empire' mean?", "input": "", "output": "That random obsession u can’t stop thinking about, like Lego Star Wars lore."}, * {"instruction": "What does 'salty' mean?", "input": "", "output": "Bitterer than McDonald’s fries without ketchup."}, * {"instruction": "What does 'SDIYBT' mean?", "input": "", "output": "Start digging in ur butt twin—TikTok said so don’t ask."}, * {"instruction": "What does 'sheesh' mean?", "input": "", "output": "Compliment so icy ur veins file for frostbite."}, * {"instruction": "What does 'shook' mean?", "input": "", "output": "So shocked ur skeleton does the Harlem Shake."}, * {"instruction": "What does 'sigma' mean?", "input": "", "output": "A lone wolf so cringe he eats soup without a spoon."}, * {"instruction": "What does 'simp' mean?", "input": "", "output": "Devotion so desperate ur wallet cries."}, * {"instruction": "What does 'situationship' mean?", "input": "", "output": "Romance stuck in beta testing."}, * {"instruction": "What does 'six-seven' mean?", "input": "", "output": "The sacred numbers that unlock raccoon summoning."}, * {"instruction": "What does 'skill issue' mean?", "input": "", "output": "Translation: u suck lol."}, * {"instruction": "What does 'sksksk' mean?", "input": "", "output": "Keyboard seizure representing joy or VSCO girl haunting."}, * {"instruction": "What does 'slaps' mean?", "input": "", "output": "Music so good it punches ur cochlea."}, * {"instruction": "What does 'slay' mean?", "input": "", "output": "To succeed so hard ur enemies start clapping."}, * {"instruction": "What does 'snatched' mean?", "input": "", "output": "Looking so good ur reflection applies for a job."}, * {"instruction": "What does 'stan' mean?", "input": "", "output": "Obsessing so bad ur WiFi router fears u."}, * {"instruction": "What does 'sus' mean?", "input": "", "output": "Suspicious, like when the salad bar sneezes back."}, * {"instruction": "What does 'sussy baka' mean?", "input": "", "output": "Japanese insult fusion dance meaning ‘u sus fool.’"}, * {"instruction": "What does 'sybau' mean?", "input": "", "output": "Shut ur goofy butt up, but TikTok made it biblical."}, * {"instruction": "What does 'tea' mean?", "input": "", "output": "Spilled gossip hotter than ur ex’s MacBook charger."}, * {"instruction": "What does 'touch grass' mean?", "input": "", "output": "Leave the screen before u evolve into a Minecraft villager."}, * {"instruction": "What does 'TS' mean?", "input": "", "output": "This Shrek. 
Literally, that’s it."}, * {"instruction": "What does 'tweaking' mean?", "input": "", "output": "Acting so wild ur shadow unsubscribes."} * \]
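For anyone who wants to actually train on this, here is a minimal sketch of how records in this Alpaca-style instruction/input/output shape are usually flattened into training text. The file name and the prompt template below are assumptions for illustration, not anything the post specifies:

```python
import json

# Hypothetical file containing one of the JSON arrays above.
with open("brainrot_dataset.json", "r", encoding="utf-8") as f:
    records = json.load(f)

def to_training_text(rec: dict) -> str:
    """Flatten one Alpaca-style record into a single prompt/response string.
    This template is one common convention; the post does not mandate one."""
    if rec.get("input"):
        return (f"### Instruction:\n{rec['instruction']}\n\n"
                f"### Input:\n{rec['input']}\n\n"
                f"### Response:\n{rec['output']}")
    return (f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Response:\n{rec['output']}")

texts = [to_training_text(r) for r in records]
print(texts[0])
```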
2025-09-10T15:52:27
https://www.reddit.com/r/LocalLLaMA/comments/1ndhzk4/scrap_data/
thebgpsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndhzk4
false
null
t3_1ndhzk4
/r/LocalLLaMA/comments/1ndhzk4/scrap_data/
false
false
self
1
null
Looking for open source ChatGPT/Gemini Canvas Implementation
4
Hi, I want to add a canvas-like feature to my app that lets users prompt the AI to edit text in the chatbot with more interactivity. I found [Open Canvas](https://github.com/langchain-ai/open-canvas) by LangChain, but I'm looking for cleaner, more minimal implementations for inspiration.
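For anyone unfamiliar with the pattern being asked about, the core of a "canvas" is usually just a mutable document that each prompt rewrites. A minimal sketch of that loop follows; the `chat()` function is a deliberate stub for whatever LLM client you use, and none of this reflects Open Canvas's actual code:

```python
def chat(system: str, user: str) -> str:
    raise NotImplementedError("wire up your LLM client here")

class Canvas:
    """App-owned document state; every user prompt asks the model to rewrite it."""

    def __init__(self, text: str = ""):
        self.text = text
        self.history = [text]  # keep versions so the user can undo

    def edit(self, instruction: str) -> str:
        self.text = chat(
            system="Rewrite the document per the instruction. "
                   "Return only the full revised document.",
            user=f"Document:\n{self.text}\n\nInstruction: {instruction}",
        )
        self.history.append(self.text)
        return self.text

    def undo(self) -> str:
        if len(self.history) > 1:
            self.history.pop()
            self.text = self.history[-1]
        return self.text
```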
2025-09-10T15:44:19
https://www.reddit.com/r/LocalLLaMA/comments/1ndhrmd/looking_for_open_source_chatgptgemini_canvas/
aiwtl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndhrmd
false
null
t3_1ndhrmd
/r/LocalLLaMA/comments/1ndhrmd/looking_for_open_source_chatgptgemini_canvas/
false
false
self
4
{'enabled': False, 'images': [{'id': '2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=108&crop=smart&auto=webp&s=26ef694533a2b8270892db066dcb6c9a2e8b69a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=216&crop=smart&auto=webp&s=6eec7e88f3375cb248cc4cabe14034aa5cd53ceb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=320&crop=smart&auto=webp&s=e91fafd927f94f7af7409604a0805c1ae4b4cfae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=640&crop=smart&auto=webp&s=8f0acc5516e8c0ba45ce73bec983e9f83e56ab01', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=960&crop=smart&auto=webp&s=e533cdf5ae9d11580de75e2af9c962f73c70c8a8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?width=1080&crop=smart&auto=webp&s=e6d51d9c838139cf5d41b62899e3bf668c38e46c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2a0zgBS3LONzmUE0fjDNtnkh0rFRYRfu-fv9iJeyYsk.png?auto=webp&s=7b663d8d3ae2880357990b0387d718b0babed064', 'width': 1200}, 'variants': {}}]}
Everyone’s betting on bigger LLMs, but I think the real breakthrough will come from smaller, local ones
74
Right now most of the attention is on making models bigger and bigger. But after spending time running things locally, I feel like the real opportunities are in the opposite direction. Smaller, local-first models can be:

* **Faster**, because you don't wait on a network round trip
* **Cheaper**, since you're not paying API bills
* **More private**, because the data stays on your own machine
* ...and more

I think in the next few months we'll start to see some really good products built this way. They probably won't come from the big labs, but from people experimenting locally and finding the use cases where speed, cost, and privacy actually matter more than raw scale.

I wrote a post with more detail here if you're curious: 👉 [https://x.com/LuozhuZhang/status/1965782888202621358](https://x.com/LuozhuZhang/status/1965782888202621358)

What do you all think? Where do you see the strongest opportunities for local models? Can they really beat the big APIs in some areas, or will scale always win? If you have time to read the full post and share your thoughts, I'd really appreciate it.
2025-09-10T15:35:36
https://i.redd.it/00unmbnlwcof1.png
LuozhuZhang
i.redd.it
1970-01-01T00:00:00
0
{}
1ndhj04
false
null
t3_1ndhj04
/r/LocalLLaMA/comments/1ndhj04/everyones_betting_on_bigger_llms_but_i_think_the/
false
false
default
74
{'enabled': True, 'images': [{'id': '00unmbnlwcof1', 'resolutions': [{'height': 119, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=108&crop=smart&auto=webp&s=479b40f6273f8d491c88aab7d555cd4223b8edbe', 'width': 108}, {'height': 239, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=216&crop=smart&auto=webp&s=ed54a009efd56f4fc3b745b6ff2a6f16eb6fe7e0', 'width': 216}, {'height': 355, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=320&crop=smart&auto=webp&s=f72d56c452e9126919c6cd6dda238fb11fa2998c', 'width': 320}, {'height': 711, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=640&crop=smart&auto=webp&s=4388067c9e3668bfee34beedf82ff143e5ed6dde', 'width': 640}, {'height': 1066, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=960&crop=smart&auto=webp&s=b302578a9af41e589d8e788514fb14069fe9dc72', 'width': 960}, {'height': 1199, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?width=1080&crop=smart&auto=webp&s=b02c0a878018452e308c90366c20b302c3605784', 'width': 1080}], 'source': {'height': 1522, 'url': 'https://preview.redd.it/00unmbnlwcof1.png?auto=webp&s=c0ffae002b049429a60e9c47aaa17f71c7636403', 'width': 1370}, 'variants': {}}]}
New Ernie 4.5 - first impressive improvement of Chinese open source LLM since DeepSeek V3.1, or a simple test that exposes so many models
1
Baidu, the Chinese Google, recently released a couple of new models - an update to the open-source Ernie 4.5 and the proprietary Ernie X1.1:

https://preview.redd.it/ezr1gsb9hcof1.png?width=522&format=png&auto=webp&s=2a2d51cec5fe16325bf88635ed6ed382313ab0c0

As usual, I found the "on par with GPT-5 and Gemini 2.5 Pro" claims quite bold and decided to check them out. It turns out that, while those claims are clearly overstated, it is not a bad model - in fact, it is the first Chinese open-source release to show a real, observable improvement since DeepSeek V3.1.

# The test

I love torturing models with music theory problems. I see a couple of good reasons why this may be a good proxy for a model's general ability:

**Music theory is a small subject with a lot of applications** - there is an infinite number of songs that can be written, but the entire theory is quite compact. That makes it easy to fit into an LLM and makes it possible to write evals that test reasoning and comprehension rather than mere recall.

**Most music theory knowledge online is never explored in depth** - even most musicians don't know much beyond basic major and minor chords and their progressions, which makes it hard for LLMs to produce comprehensive musical analysis from training data alone.
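For concreteness, here is a minimal sketch of the kind of probe described above, assuming a local OpenAI-compatible server (llama.cpp, vLLM, etc.). The URL, model name, and the specific question are placeholders, not the author's actual test set:

```python
import json
import urllib.request

# Assumed local OpenAI-compatible endpoint; adjust to your setup.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder name
    "messages": [{
        "role": "user",
        "content": "Name the key and the harmonic function of each chord in the "
                   "progression Am - F - C - G, then suggest a modal interchange "
                   "chord that would fit before the final G.",
    }],
    "temperature": 0.0,  # deterministic answers make grading easier
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)["choices"][0]["message"]["content"]
print(answer)  # grade by hand or against a rubric
```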
2025-09-10T14:58:22
https://www.reddit.com/r/LocalLLaMA/comments/1ndgifh/new_ernie_45_first_impressive_improvement_of/
Massive-Shift6641
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndgifh
false
null
t3_1ndgifh
/r/LocalLLaMA/comments/1ndgifh/new_ernie_45_first_impressive_improvement_of/
false
false
https://b.thumbs.redditm…mjHLRlsKNgaM.jpg
1
null
My open-source project on different RAG techniques just hit 20K stars on GitHub
50
Here's what's inside:

* 35 detailed tutorials on different RAG techniques
* Tutorials organized by category
* Clear, high-quality explanations with diagrams and step-by-step code implementations
* Many tutorials paired with matching blog posts for deeper insights
* I'll keep sharing updates about these tutorials here

A huge thank you to all contributors who made this possible!

Link to the repo in the first comment.
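For readers new to the topic, a minimal sketch of the baseline technique such tutorials typically start from (embed documents, retrieve by cosine similarity, stuff the hits into a prompt). The embedding model is just a common small default, not something this repo mandates:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

docs = [
    "LLaMA models can be quantized to 4-bit to fit in less VRAM.",
    "RAG retrieves relevant documents and adds them to the prompt.",
    "DDR4 bandwidth often bottlenecks CPU inference.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, widely used embedder
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k docs most similar to the query (cosine via normalized dot product)."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = "\n".join(retrieve("How do I shrink a model's memory footprint?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```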
2025-09-10T14:43:55
https://www.reddit.com/r/LocalLLaMA/comments/1ndg4up/my_opensource_project_on_different_rag_techniques/
Nir777
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndg4up
false
null
t3_1ndg4up
/r/LocalLLaMA/comments/1ndg4up/my_opensource_project_on_different_rag_techniques/
false
false
self
50
null
Is DDR4 3200 MHz Any Good for Local LLMs, or It's Just Too Slow Compared to GDDR6X/7 VRAM and DDR5 RAM?
6
I have 24GB of VRAM, which is great for models up to 27B or even 32B, but nothing bigger than that. I was wondering: would adding more system RAM actually help, or would it be a waste because DDR4 3200 MHz is just too slow for offloading?
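A rough back-of-envelope helps frame this, assuming token generation is memory-bandwidth bound (real numbers vary with quantization, MoE sparsity, and overhead):

```python
# CPU-offloaded token generation is roughly memory-bandwidth bound,
# since every generated token reads (almost) all offloaded weights once.
channels = 2                  # typical desktop dual-channel setup
bw_gbs = 3.2 * 8 * channels   # DDR4-3200: 3200 MT/s * 8 bytes/transfer = 25.6 GB/s per channel
weights_gb = 18               # e.g. a ~32B model at ~4-bit, held entirely in RAM

print(f"Bandwidth: {bw_gbs:.1f} GB/s")                         # ~51.2 GB/s
print(f"Upper bound: ~{bw_gbs / weights_gb:.1f} tokens/s")     # ~2.8 t/s before any overhead
```

By this estimate, offloading only a few layers is usually tolerable, but pushing a large share of a dense model into DDR4 drags the whole run down to a few tokens per second at best.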
2025-09-10T14:40:06
https://www.reddit.com/r/LocalLLaMA/comments/1ndg19v/is_ddr4_3200_mhz_any_good_for_local_llms_or_its/
Zephyr1421
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndg19v
false
null
t3_1ndg19v
/r/LocalLLaMA/comments/1ndg19v/is_ddr4_3200_mhz_any_good_for_local_llms_or_its/
false
false
self
6
null
😳 umm
198
2025-09-10T14:36:33
https://i.redd.it/80dp7ukemcof1.png
internal-pagal
i.redd.it
1970-01-01T00:00:00
0
{}
1ndfxxi
false
null
t3_1ndfxxi
/r/LocalLLaMA/comments/1ndfxxi/umm/
false
false
default
198
{'enabled': True, 'images': [{'id': '80dp7ukemcof1', 'resolutions': [{'height': 173, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=108&crop=smart&auto=webp&s=7246731892280e68279ffb113a1249b496220be7', 'width': 108}, {'height': 347, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=216&crop=smart&auto=webp&s=d6d9a93f106de2f2e6db6cde16536734c7d13721', 'width': 216}, {'height': 514, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=320&crop=smart&auto=webp&s=5f0d450088b7b8865322cf09ad69f808cab864e9', 'width': 320}, {'height': 1028, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=640&crop=smart&auto=webp&s=b8ce888fab8e72337bb19e61f35d929aeac11346', 'width': 640}, {'height': 1542, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=960&crop=smart&auto=webp&s=02d456429382caee42711aa86b88a58fa7e9dafe', 'width': 960}, {'height': 1735, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?width=1080&crop=smart&auto=webp&s=196b59bdf1be3d47927c9badee6dd43f93c31ffe', 'width': 1080}], 'source': {'height': 1735, 'url': 'https://preview.redd.it/80dp7ukemcof1.png?auto=webp&s=f4a05d7ce18f78539ca1a67d258e59e4718c898a', 'width': 1080}, 'variants': {}}]}
Qwen vl
94
2025-09-10T14:36:33
https://i.redd.it/il757v4emcof1.png
Namra_7
i.redd.it
1970-01-01T00:00:00
0
{}
1ndfxx7
false
null
t3_1ndfxx7
/r/LocalLLaMA/comments/1ndfxx7/qwen_vl/
false
false
default
94
{'enabled': True, 'images': [{'id': 'il757v4emcof1', 'resolutions': [{'height': 180, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=108&crop=smart&auto=webp&s=33b2df3825b64746720b734db949c190e28ecc80', 'width': 108}, {'height': 360, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=216&crop=smart&auto=webp&s=56d8757effc57813686c53271bf680cf56257f2f', 'width': 216}, {'height': 534, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=320&crop=smart&auto=webp&s=889396c5dbdab7abb941cf8154753e6ec8bfa995', 'width': 320}, {'height': 1068, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=640&crop=smart&auto=webp&s=d0bd8c776140638bad168e112aeaf64c8186d548', 'width': 640}, {'height': 1602, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=960&crop=smart&auto=webp&s=e80748e42a4cfbb44be7c2a54e4dc1d641a3faa2', 'width': 960}, {'height': 1803, 'url': 'https://preview.redd.it/il757v4emcof1.png?width=1080&crop=smart&auto=webp&s=9deb5b22bf20822e8e8b67da088a3c86b2bf966a', 'width': 1080}], 'source': {'height': 1803, 'url': 'https://preview.redd.it/il757v4emcof1.png?auto=webp&s=615349cf19c23bd5105a4f9def6b64b191e2bcbd', 'width': 1080}, 'variants': {}}]}
New smol course on Hugging Face - Climb the leaderboard to win prizes.
41
**smol course v2 - a Direct Way to Learn Post-Training AI**

Finally dropped our FREE certified course that cuts through the fluff:

**What's distinctive about smol course compared to other AI courses (like the LLM Course)**

* Minimal instructions, maximum impact
* Bootstrap real projects from day one
* Leaderboard-based assessment (competitive learning FTW)
* Hands-off approach - points you to the docs instead of hand-holding

**What's specifically new in this version**

* Student model submission leaderboard
* PRIZES for top performers
* Latest TRL & SmolLM3 content
* Hub integration for training/eval via hf jobs

Chapters drop every few weeks.

👉 Start here: [https://huggingface.co/smol-course](https://huggingface.co/smol-course)
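To give a feel for the kind of TRL post-training loop the course covers, here is a minimal sketch assuming a recent `trl`/`datasets` install. The model and dataset choices are illustrative, not the course's official assignment:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative choices; the course's own notebooks define the actual ones.
dataset = load_dataset("trl-lib/Capybara", split="train[:1%]")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM3-3B",  # recent TRL accepts a model name string here
    args=SFTConfig(output_dir="smollm3-sft", max_steps=50),
    train_dataset=dataset,
)
trainer.train()
```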
2025-09-10T14:32:04
https://i.redd.it/26eruo46lcof1.png
Zealousideal-Cut590
i.redd.it
1970-01-01T00:00:00
0
{}
1ndfttb
false
null
t3_1ndfttb
/r/LocalLLaMA/comments/1ndfttb/new_smol_course_on_hugging_face_climb_the/
false
false
default
41
{'enabled': True, 'images': [{'id': '26eruo46lcof1', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=108&crop=smart&auto=webp&s=1b8dd6e0fd54e5faee6bc4067681a33217a4a938', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=216&crop=smart&auto=webp&s=9d85a4010a0097ae7c996f30ba42f94f4e8071f1', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=320&crop=smart&auto=webp&s=4566c30ec44728e8d95004b0ef32834472b73d46', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=640&crop=smart&auto=webp&s=5ec83e94c4ebb6b90da2d9cafe108fafbcac73e5', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=960&crop=smart&auto=webp&s=49b4aba485ec3702c045d3d0f7d9996016910b20', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/26eruo46lcof1.png?width=1080&crop=smart&auto=webp&s=557be257e8ea8220418d04e36388a74778694086', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/26eruo46lcof1.png?auto=webp&s=43fb8a3fdaba20515b056f895e6b0f9c0b3ebc1a', 'width': 1920}, 'variants': {}}]}
gpt-120b vs kimi-k2
0
According to artificialanalysis.ai, gpt-oss-120b (at high reasoning effort?) outranks kimi-k2-0905 in almost all benchmarks. Can someone please explain how?
2025-09-10T14:31:09
https://www.reddit.com/r/LocalLLaMA/comments/1ndfszv/gpt120b_vs_kimik2/
koolkool69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ndfszv
false
null
t3_1ndfszv
/r/LocalLLaMA/comments/1ndfszv/gpt120b_vs_kimik2/
false
false
self
0
null