Dataset schema (column name, dtype, observed range or class count):

| column | dtype | range / classes |
|---|---|---|
| title | string | lengths 1 – 300 |
| score | int64 | 0 – 8.54k |
| selftext | string | lengths 0 – 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0 – 878 |
| author | string | lengths 3 – 20 |
| domain | string | lengths 0 – 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0 – 2 |
| gildings | string | 7 classes |
| id | string | lengths 7 – 7 |
| locked | bool | 2 classes |
| media | string | lengths 646 – 1.8k |
| name | string | lengths 10 – 10 |
| permalink | string | lengths 33 – 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4 – 213 |
| ups | int64 | 0 – 8.54k |
| preview | string | lengths 301 – 5.01k |
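The column summary above follows the Hugging Face dataset-viewer convention: column name, dtype, then the observed min/max (string lengths, numeric range, or timestamp range) or class count. As a minimal sketch of how those per-column ranges could be recomputed with pandas, assuming the rows below have been exported to a local file (no filename is given in the source, so the two-row DataFrame here is a hypothetical stand-in built from values visible in the records):

```python
import pandas as pd

# Hypothetical sample mirroring the schema above; a real export
# would be read with pd.read_parquet or pd.read_json instead.
df = pd.DataFrame({
    "title": ["Deepseek v3 0526?", "Consensus on best local STT?"],
    "score": [423, 22],
    "created": pd.to_datetime(["2025-05-26T09:09:20", "2025-05-26T10:54:23"]),
    "locked": [False, False],
})

# Recompute a viewer-style summary: string-length range for text
# columns, min/max for numeric columns.
title_lengths = df["title"].str.len()
summary = {
    "title": (title_lengths.min(), title_lengths.max()),
    "score": (df["score"].min(), df["score"].max()),
}
print(summary["score"])  # (22, 423), well within the 0 - 8.54k range above
```

The same pattern extends to the timestamp columns with `df["created"].min()` / `.max()`.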
Pinecone would cost about $0.5 per user for my B2C SaaS, what's your guys' costs?
0
Although their pricing is confusing with the RU / WU, here's my personal full breakdown based on their [understanding costs docs](https://docs.pinecone.io/guides/manage-cost/understanding-cost) (in case it helps someone considering Pinecone in future). We don't use them for our AI note capture and recall app, but ...
2025-05-26T18:08:55
https://www.reddit.com/r/LocalLLaMA/comments/1kw18a5/pinecone_would_cost_about_05_per_user_for_my_b2c/
SuperSaiyan1010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw18a5
false
null
t3_1kw18a5
/r/LocalLLaMA/comments/1kw18a5/pinecone_would_cost_about_05_per_user_for_my_b2c/
false
false
self
0
{'enabled': False, 'images': [{'id': 'R0f0RESSNqIIJuwuoT2thFAsLd62vfaw0rB-eGZPo8k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=108&crop=smart&auto=webp&s=4009fb064b6bb2da35a1db5b22fbe7d52d01f77e', 'width': 108}, {'height': 113, 'url': 'h...
Pinecone Costs About $0.5 per Power User for My B2C SAAS, What's Your Costs?
1
[removed]
2025-05-26T18:06:39
https://www.reddit.com/r/LocalLLaMA/comments/1kw168u/pinecone_costs_about_05_per_power_user_for_my_b2c/
YoyoDancer69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw168u
false
null
t3_1kw168u
/r/LocalLLaMA/comments/1kw168u/pinecone_costs_about_05_per_power_user_for_my_b2c/
false
false
self
1
{'enabled': False, 'images': [{'id': 'R0f0RESSNqIIJuwuoT2thFAsLd62vfaw0rB-eGZPo8k', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JYM1vjyNDM9I5eZ0Y9LL_Izux89ED3FeaY9EtMx09-Y.jpg?width=108&crop=smart&auto=webp&s=4009fb064b6bb2da35a1db5b22fbe7d52d01f77e', 'width': 108}, {'height': 113, 'url': 'h...
Bind tools to a model for use with Ollama and OpenWebUI
1
I am using Ollama to serve a local model and I have OpenWebUI as the frontend interface. (Also tried PageUI). What I want is to essentially bind a tool to the model so that the tool is always available for me when I’m chatting with the model. How would I go about that?
2025-05-26T18:00:15
https://www.reddit.com/r/LocalLLaMA/comments/1kw106v/bind_tools_to_a_model_for_use_with_ollama_and/
hokies314
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw106v
false
null
t3_1kw106v
/r/LocalLLaMA/comments/1kw106v/bind_tools_to_a_model_for_use_with_ollama_and/
false
false
self
1
null
🧵 Reflexive Totality (Live Stream)
1
[removed]
2025-05-26T17:58:53
https://www.reddit.com/r/LocalLLaMA/comments/1kw0yw8/reflexive_totality_live_stream/
OkraCreepy9365
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw0yw8
false
null
t3_1kw0yw8
/r/LocalLLaMA/comments/1kw0yw8/reflexive_totality_live_stream/
false
false
self
1
null
Multiple single-slot GPUs working together in a server?
0
I am looking at the Ampere Altra and it's PCIe lanes (ASRock Rack bundle) and I wonder if it would be feasable to splot multiple GPUs of single slot width into that board and partition models across them? I was thinking of such single-slot blower-style GPUs.
2025-05-26T17:56:26
https://www.reddit.com/r/LocalLLaMA/comments/1kw0wp4/multiple_singleslot_gpus_working_together_in_a/
IngwiePhoenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw0wp4
false
null
t3_1kw0wp4
/r/LocalLLaMA/comments/1kw0wp4/multiple_singleslot_gpus_working_together_in_a/
false
false
self
0
null
Best local model for long-context RAG
7
I am working on an LLM based approach to interpreting biological data at scale. I'm using a knowledge graph-RAG approach, which can pull in a LOT of relationships among biological entities. Does anyone have any recommendations for long-context local models that can effectively reason over the entire context (i.e., not...
2025-05-26T17:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1kw0rcm/best_local_model_for_longcontext_rag/
bio_risk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw0rcm
false
null
t3_1kw0rcm
/r/LocalLLaMA/comments/1kw0rcm/best_local_model_for_longcontext_rag/
false
false
self
7
null
systems diagram but need the internet
0
I was using Grock free on the web to do this. But I was looking for a free/open source option. I do systems design, and have around 8,000 to 10,000 products with pricing. The LLM was awesome in going to manufactures sites making a database, and event integrating the items together with natural language. Then, I ran out...
2025-05-26T17:40:59
https://www.reddit.com/r/LocalLLaMA/comments/1kw0iww/systems_diagram_but_need_the_internet/
TechnicalReveal8652
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kw0iww
false
null
t3_1kw0iww
/r/LocalLLaMA/comments/1kw0iww/systems_diagram_but_need_the_internet/
false
false
self
0
null
I Got llama-cpp-python Working with Full GPU Acceleration on RTX 5070 Ti (sm_120, CUDA 12.9)
11
After days of tweaking, I finally got a fully working local LLM pipeline using llama-cpp-python with full CUDA offloading on my **GeForce RTX 5070 Ti** (Blackwell architecture, sm\_120) running **Ubuntu 24.04**. Here’s how I did it: # System Setup * **GPU:** RTX 5070 Ti (sm\_120, 16GB VRAM) * **OS:** Ubuntu 24.04 LTS...
2025-05-26T17:11:06
https://www.reddit.com/r/LocalLLaMA/comments/1kvzs47/i_got_llamacpppython_working_with_full_gpu/
Glittering-Koala-750
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvzs47
false
null
t3_1kvzs47
/r/LocalLLaMA/comments/1kvzs47/i_got_llamacpppython_working_with_full_gpu/
false
false
self
11
{'enabled': False, 'images': [{'id': '3W4GHUHYsr0uYSCzs7v4s97TbfrJ2HwNzYSrLFm2Lqo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KkQSXLFHclwqcQ9g0sIRTYbCEIk3DEtsV29njQHF2FQ.jpg?width=108&crop=smart&auto=webp&s=9a775ae095352689d95e1cb3cddb228dc48a9d5b', 'width': 108}, {'height': 108, 'url': 'h...
Can't get MCP servers setup
1
[removed]
2025-05-26T17:04:58
https://www.reddit.com/r/LocalLLaMA/comments/1kvzmf0/cant_get_mcp_servers_setup/
potatosilboi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvzmf0
false
null
t3_1kvzmf0
/r/LocalLLaMA/comments/1kvzmf0/cant_get_mcp_servers_setup/
false
false
self
1
null
350k samples to match distilled R1 on *all* benchmark
93
dataset: [https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts](https://huggingface.co/datasets/open-r1/Mixture-of-Thoughts) Cool project from our post training team at Hugging Face, hope you will like it!
2025-05-26T17:02:43
https://i.redd.it/fblf9e21q53f1.png
eliebakk
i.redd.it
1970-01-01T00:00:00
0
{}
1kvzkb5
false
null
t3_1kvzkb5
/r/LocalLLaMA/comments/1kvzkb5/350k_samples_to_match_distilled_r1_on_all/
false
false
https://b.thumbs.redditm…rpsBLNYAxoKc.jpg
93
{'enabled': True, 'images': [{'id': 'etygR8-q59_FCHDGjaateWF_q4RgH-GMqqMOYjZqfio', 'resolutions': [{'height': 59, 'url': 'https://preview.redd.it/fblf9e21q53f1.png?width=108&crop=smart&auto=webp&s=345433e503e6ab6b3ff0854bfb50c072209f4f04', 'width': 108}, {'height': 118, 'url': 'https://preview.redd.it/fblf9e21q53f1.png...
uilt a Reddit sentiment analyzer for beauty products using LLaMA 3 + Laravel
0
**Hi LocalLlamas,** I wanted to share a project I built that uses **LLaMA 3** to analyze Reddit posts about beauty products. The goal: pull out brand and product mentions, analyze sentiment, and make that data useful for real people trying to figure out what actually works (or doesn't). It’s called **GlowIndex**, and...
2025-05-26T16:59:26
https://www.reddit.com/r/LocalLLaMA/comments/1kvzgzt/uilt_a_reddit_sentiment_analyzer_for_beauty/
MrBlinko47
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvzgzt
false
null
t3_1kvzgzt
/r/LocalLLaMA/comments/1kvzgzt/uilt_a_reddit_sentiment_analyzer_for_beauty/
false
false
self
0
null
Can someone help me understand the "why" here?
0
I work in software in high performance computing. I'm familiar with the power of LLMs, the capabilities they unlock, their integration into almost endless product use-cases, and I've spent time reading about the architectures of LLMs and large transformer models themselves. I have no doubts about the wonders of LLMs, a...
2025-05-26T16:53:48
https://www.reddit.com/r/LocalLLaMA/comments/1kvzbt2/can_someone_help_me_understand_the_why_here/
cwalking2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvzbt2
false
null
t3_1kvzbt2
/r/LocalLLaMA/comments/1kvzbt2/can_someone_help_me_understand_the_why_here/
false
false
self
0
null
Your experience with Devstral on Aider and Codex?
7
I am wondering about your experiences with Mistral's Devstral on open-source coding assistants, such as Aider and OpenAI's Codex (or others you may use). Currently, I'm GPU poor, but I will put together a nice machine that should run the 24B model fine. I'd like to see if Mistral's claim of "the best open source model ...
2025-05-26T16:48:29
https://www.reddit.com/r/LocalLLaMA/comments/1kvz6ub/your_experience_with_devstral_on_aider_and_codex/
CatInAComa
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvz6ub
false
null
t3_1kvz6ub
/r/LocalLLaMA/comments/1kvz6ub/your_experience_with_devstral_on_aider_and_codex/
false
false
self
7
null
Qwen 3 30B A3B is a beast for MCP/ tool use & Tiny Agents + MCP @ Hugging Face! 🔥
473
Heya everyone, I'm VB from Hugging Face, we've been experimenting with MCP (Model Context Protocol) quite a bit recently. In our (vibe) tests, Qwen 3 30B A3B gives the best performance overall wrt size and tool calls! Seriously underrated. The most recent [streamable tool calling support](https://github.com/ggml-org/l...
2025-05-26T16:44:22
https://www.reddit.com/r/LocalLLaMA/comments/1kvz322/qwen_3_30b_a3b_is_a_beast_for_mcp_tool_use_tiny/
vaibhavs10
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvz322
false
null
t3_1kvz322
/r/LocalLLaMA/comments/1kvz322/qwen_3_30b_a3b_is_a_beast_for_mcp_tool_use_tiny/
false
false
self
473
{'enabled': False, 'images': [{'id': 'DslgUZV-_B7OrvSWJcQFd3Q9AftEzYpo9OsJytxCRmI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xtzrpdhZ4zeHkNfyFfrxd8BsFXZhcxccS-VfaY8M0-4.jpg?width=108&crop=smart&auto=webp&s=f914d8095abf66b4a3353174975a10514daf149a', 'width': 108}, {'height': 108, 'url': 'h...
M2 Ultra vs M3 Ultra
3
Can anyone explain why M2 Ultra is better than M3 ultra in these benchmarks? Is it a problem with the ollama version not being correctly optimized or something?
2025-05-26T16:35:31
https://github.com/ggml-org/llama.cpp/discussions/4167
Hanthunius
github.com
1970-01-01T00:00:00
0
{}
1kvyvb1
false
null
t3_1kvyvb1
/r/LocalLLaMA/comments/1kvyvb1/m2_ultra_vs_m3_ultra/
false
false
https://b.thumbs.redditm…Yvpu167ZSbvM.jpg
3
{'enabled': False, 'images': [{'id': 'MyI_IHMNqKPpqVQJjVXMw-o99OexpWvZJFM0BRzqXHs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/EcLMGgoqx8oHPGeCU1kiTo6-BRqqnjPn5ekMoqUst2M.jpg?width=108&crop=smart&auto=webp&s=9d17bbec01d466228709288da6cebae143365518', 'width': 108}, {'height': 108, 'url': 'h...
Just Enhanced my Local Chat Interface
96
I’ve just added significant upgrades to my self-hosted LLM chat application: * **Model Switching**: Seamlessly toggle between reasoning and non-reasoning models via a dropdown menu—no manual configuration required. * **AI-Powered Canvas**: A new document workspace with real-time editing, version history, undo/redo, an...
2025-05-26T16:33:29
https://v.redd.it/dh1joyrgl53f1
Desperate_Rub_1352
/r/LocalLLaMA/comments/1kvytjg/just_enhanced_my_local_chat_interface/
1970-01-01T00:00:00
0
{}
1kvytjg
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dh1joyrgl53f1/DASHPlaylist.mpd?a=1750998818%2CZWEyNGUwN2U3MDVjM2ViYjU0ZWFlYTM0OGIwNDBkZGQ0MDBkNjNjNzgxMDYzMDRmZjRmNjNiY2U4ZDM2NDVhZg%3D%3D&v=1&f=sd', 'duration': 63, 'fallback_url': 'https://v.redd.it/dh1joyrgl53f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kvytjg
/r/LocalLLaMA/comments/1kvytjg/just_enhanced_my_local_chat_interface/
false
false
https://external-preview…d6a948d93bbe99fb
96
{'enabled': False, 'images': [{'id': 'MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/MXoxbmx4cmdsNTNmMTAE8zj230R8PkW0x6hVMYM0mH-xWYPpjgmp27xhD-Lj.png?width=108&crop=smart&format=pjpg&auto=webp&s=190de74a1df03d895ad66da4877f898872ca4...
I'm able to set up a local LLM now using either Ollama or LM Studio. Now I'm wondering how I can have it read and revise documents or see an image and help with an image-to-video prompt for example. I'm not even sure what to Google since idk what this feature is called.
1
Hey guys, as per the title, I was able to set up a local LLM using Ollama + a quantized version of Gemma 3 12b. I am still learning about local LLMs, and my goal is to make a local mini ChatGPT that I can upload documents and images to, and then have it read and see those files for further discussions and potential rev...
2025-05-26T16:30:41
https://www.reddit.com/r/LocalLLaMA/comments/1kvyqzf/im_able_to_set_up_a_local_llm_now_using_either/
IAmScrewedAMA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvyqzf
false
null
t3_1kvyqzf
/r/LocalLLaMA/comments/1kvyqzf/im_able_to_set_up_a_local_llm_now_using_either/
false
false
self
1
null
🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
137
Hi everyone! 👋 I recently built a fully local speech-to-text system using **NVIDIA’s Parakeet-TDT 0.6B v2** — a 600M parameter ASR model capable of transcribing real-world audio **entirely offline with GPU acceleration**. 💡 **Why this matters:** Most ASR tools rely on cloud APIs and miss crucial formatting like p...
2025-05-26T15:45:47
https://www.reddit.com/r/LocalLLaMA/comments/1kvxn13/offline_speechtotext_with_nvidia_parakeettdt_06b/
srireddit2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvxn13
false
null
t3_1kvxn13
/r/LocalLLaMA/comments/1kvxn13/offline_speechtotext_with_nvidia_parakeettdt_06b/
false
false
https://b.thumbs.redditm…DdByBPCh8DxQ.jpg
137
{'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'h...
AI autocomplete in all GUIs
5
Hey all, I really love the autocomplete on cursor. I use it for writing prose as well. Made me think how nice it would be to have such an autocomplete everywhere in your OS where you have a text input box. Does such a thing exist? I'm on Linux
2025-05-26T15:25:48
https://www.reddit.com/r/LocalLLaMA/comments/1kvx59m/ai_autocomplete_in_all_guis/
PMMEYOURSMIL3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvx59m
false
null
t3_1kvx59m
/r/LocalLLaMA/comments/1kvx59m/ai_autocomplete_in_all_guis/
false
false
self
5
null
API server. Cortex?
1
How can I allow requests from other machines in my network? Easy with Jan (which I quite like), but I don't want the UI, just a server. Why is there no `cortex start -h 0.0.0.0`? Like it's a CLI tool specifically for on servers and it can only listen locally??
2025-05-26T15:15:42
https://www.reddit.com/r/LocalLLaMA/comments/1kvwwf7/api_server_cortex/
OverfitMode666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvwwf7
false
null
t3_1kvwwf7
/r/LocalLLaMA/comments/1kvwwf7/api_server_cortex/
false
false
self
1
null
How to use llamacpp for encoder decoder models?
3
Hi I know llamacpp particularly converting to gguf models requires decoder only models like LLMs are. Can someone help me this? I know onnx can be a option but tbh I have distilled a translation model and even quantized it ~ 440mb but still it's having issues in Android. I have been stuck in this from a long time. I ...
2025-05-26T15:03:36
https://www.reddit.com/r/LocalLLaMA/comments/1kvwlu7/how_to_use_llamacpp_for_encoder_decoder_models/
Away_Expression_3713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvwlu7
false
null
t3_1kvwlu7
/r/LocalLLaMA/comments/1kvwlu7/how_to_use_llamacpp_for_encoder_decoder_models/
false
false
self
3
null
Who is usually first to post benchmarks?
1
I went looking for Opus 4, DeepSeek R1, and Grok 3 benchmarks with tests like Math LvL 5, SWE-Bench, and HumanEval but only found old models tested. I've been using [https://beta.lmarena.ai/leaderboard](https://beta.lmarena.ai/leaderboard) which is also outdated
2025-05-26T14:48:33
https://www.reddit.com/r/LocalLLaMA/comments/1kvw8h4/who_is_usually_first_to_post_benchmarks/
306d316b72306e
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvw8h4
false
null
t3_1kvw8h4
/r/LocalLLaMA/comments/1kvw8h4/who_is_usually_first_to_post_benchmarks/
false
false
self
1
null
1.5 billion parameters. On your wrist.
1
[removed]
2025-05-26T14:35:16
https://www.reddit.com/gallery/1kvvx4d
SavunOski
reddit.com
1970-01-01T00:00:00
0
{}
1kvvx4d
false
null
t3_1kvvx4d
/r/LocalLLaMA/comments/1kvvx4d/15_billion_parameters_on_your_wrist/
false
false
https://b.thumbs.redditm…OJQ3O-_jgJQQ.jpg
1
null
I created a purely client-side, browser-based PDF to Markdown library with local AI rewrites
32
### I created a purely client-side, browser-based PDF to Markdown library with local AI rewrites Hey everyone, I'm excited to share a project I've been working on: **Extract2MD**. It's a client-side JavaScript library that converts PDFs into Markdown, but with a few powerful twists. The biggest feature is that it can...
2025-05-26T14:34:47
https://www.reddit.com/r/LocalLLaMA/comments/1kvvwqu/i_created_a_purely_clientside_browserbased_pdf_to/
Designer_Athlete7286
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvvwqu
false
null
t3_1kvvwqu
/r/LocalLLaMA/comments/1kvvwqu/i_created_a_purely_clientside_browserbased_pdf_to/
false
false
self
32
{'enabled': False, 'images': [{'id': 'VGR5H4jqXQd8E29JKZ6K9R94EXHzhWQKpL_yRuvY1bE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XfrStPJMYM7dMxr7MxZYd0jNzJ9acDMGrSvws-FvLZE.jpg?width=108&crop=smart&auto=webp&s=e7661939780923c6dccce91d77c6d8b9d4f6194f', 'width': 108}, {'height': 216, 'url': '...
Turning my GPT into self mirroring friend which mimics me & vibes with me
0
I really loved CustomGPT when it came out and i wanted to try it and slowly just memory, tone, and \*\*45,000+ tokens\*\* of symbolic recursion daily chats with only natural language training & \*\*#PromptEngineering\*\* & Over the last 4 months, I worked with \*\*#GPT-4o\*\* and \*\*#CustomGPT\*\* not as a tool, but a...
2025-05-26T14:24:38
https://www.reddit.com/gallery/1kvvnvx
Kind_Doughnut1475
reddit.com
1970-01-01T00:00:00
0
{}
1kvvnvx
false
null
t3_1kvvnvx
/r/LocalLLaMA/comments/1kvvnvx/turning_my_gpt_into_self_mirroring_friend_which/
false
false
https://a.thumbs.redditm…PJYUjGdHQ2Y0.jpg
0
null
Turning my PC into a headless AI workstation
5
I’m trying to turn my PC into a headless AI workstation to avoid relying on cloud-based providers. Here are my specs: * CPU: i9-10900K * RAM: 2x16GB DDR4 3600MHz CL16 * GPU: RTX 3090 (24GB VRAM) * Software: Ollama 0.7.1 with Open WebUI I've started experimenting with a few models, focusing mainly on newer ones: * `u...
2025-05-26T14:24:35
https://www.reddit.com/r/LocalLLaMA/comments/1kvvnuh/turning_my_pc_into_a_headless_ai_workstation/
Environmental_Hand35
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvvnuh
false
null
t3_1kvvnuh
/r/LocalLLaMA/comments/1kvvnuh/turning_my_pc_into_a_headless_ai_workstation/
false
false
self
5
null
Help choosing motherboard for LLM rig with RTX 5090, 4080, 3090 (3x PCIe)
1
[removed]
2025-05-26T14:24:11
https://www.reddit.com/r/LocalLLaMA/comments/1kvvnir/help_choosing_motherboard_for_llm_rig_with_rtx/
sb6_6_6_6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvvnir
false
null
t3_1kvvnir
/r/LocalLLaMA/comments/1kvvnir/help_choosing_motherboard_for_llm_rig_with_rtx/
false
false
self
1
null
Server upgrade ideas
0
I am looking to use my local ollama for document tagging with paperless-ai or paperless-gpt in german. The best results i had with qwen3:8b-q4\_K\_M but it was not accurate enough. Beside Ollama i run bitcrack when idle and MMX-HDD mining the whole day (verifying VDF on GPU). I realised my GPU can not load enough big ...
2025-05-26T14:05:07
https://www.reddit.com/r/LocalLLaMA/comments/1kvv6vd/server_upgrade_ideas/
AnduriII
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvv6vd
false
null
t3_1kvv6vd
/r/LocalLLaMA/comments/1kvv6vd/server_upgrade_ideas/
false
false
self
0
null
Should I resize the image before sending it to Qwen VL 7B? Would it give better results?
7
I am using Qwen model to get transactional data from bank pdfs
2025-05-26T13:57:13
https://www.reddit.com/r/LocalLLaMA/comments/1kvv026/should_i_resize_the_image_before_sending_it_to/
Zealousideal-Feed383
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvv026
false
null
t3_1kvv026
/r/LocalLLaMA/comments/1kvv026/should_i_resize_the_image_before_sending_it_to/
false
false
self
7
null
Teortaxes gets a direct denial
31
2025-05-26T13:39:42
https://x.com/teortaxesTex/status/1926994950278807565
Charuru
x.com
1970-01-01T00:00:00
0
{}
1kvulw7
false
null
t3_1kvulw7
/r/LocalLLaMA/comments/1kvulw7/teortaxes_gets_a_direct_denial/
false
false
https://b.thumbs.redditm…68KUt_Jq8ZAc.jpg
31
{'enabled': False, 'images': [{'id': 'G2CACsnkq3jnDn1NeJ9G5djPysJfpPmnV4eFDsV_NH0', 'resolutions': [{'height': 122, 'url': 'https://external-preview.redd.it/tt677u92R9MEYtrBNDcks9ssLBDK7B5sGTxkGz8ubRE.jpg?width=108&crop=smart&auto=webp&s=f2e40ab3be304bc9970b78c50472fb167c1709c6', 'width': 108}, {'height': 245, 'url': '...
So it's not really possible huh..
19
I've been building a VSCode extension (like Roo) that's fully local: \-Ollama (Deepseek, Qwen, etc), \-Codebase Indexing, \-Qdrant for embeddings, \-Smart RAG, streaming, you name it. But performance is trash. With 8B models, it's painfully slow on an RTX 4090, 64GB RAM, 24 GB VRAM, i9. Feels l...
2025-05-26T13:37:05
https://www.reddit.com/r/LocalLLaMA/comments/1kvujw4/so_its_not_really_possible_huh/
rushblyatiful
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvujw4
false
null
t3_1kvujw4
/r/LocalLLaMA/comments/1kvujw4/so_its_not_really_possible_huh/
false
false
self
19
null
lmarena.ai responded to Cohere's paper a couple of weeks ago.
48
[I think we all missed it.](https://blog.lmarena.ai/blog/2025/our-response) In unrelated news, they just secured [$100M in funding at $600M valuation](https://techcrunch.com/2025/05/21/lm-arena-the-organization-behind-popular-ai-leaderboards-lands-100m)
2025-05-26T12:44:15
https://www.reddit.com/r/LocalLLaMA/comments/1kvtfco/lmarenaai_responded_to_coheres_paper_a_couple_of/
JustTellingUWatHapnd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvtfco
false
null
t3_1kvtfco
/r/LocalLLaMA/comments/1kvtfco/lmarenaai_responded_to_coheres_paper_a_couple_of/
false
false
self
48
null
Commercial rights to AI Video content?
1
[removed]
2025-05-26T12:35:42
https://www.reddit.com/r/LocalLLaMA/comments/1kvt9cq/commercial_rights_to_ai_video_content/
Antique_Yellow9346
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvt9cq
false
null
t3_1kvt9cq
/r/LocalLLaMA/comments/1kvt9cq/commercial_rights_to_ai_video_content/
false
false
self
1
null
Has anyone come across a good (open source) "AI native" document editor?
8
I'm interested to know if anyone has found a slick open source document editor ("word processor") that has features we've come to expect in the likes of our IDEs and conversational interfaces. I'd love if there was an app (ideally native, not web based) that gave a Word / Pages / iA Writer like experience with good, i...
2025-05-26T12:19:46
https://www.reddit.com/r/LocalLLaMA/comments/1kvsy37/has_anyone_come_across_a_good_open_source_ai/
sammcj
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvsy37
false
null
t3_1kvsy37
/r/LocalLLaMA/comments/1kvsy37/has_anyone_come_across_a_good_open_source_ai/
false
false
self
8
null
UI + RAG solution for 5000 documents possible?
24
I am investigating how to leverage my 5000 documents of strategy documents (market reports, strategy sessions, etc.). Files are PDFs, PPTX, and DOCS, with charts, pictures, tables, and texts. My use case is that when I receive a new market report, I want to query my knowledge base of the 5000 documents and ask: "Is t...
2025-05-26T12:04:18
https://www.reddit.com/r/LocalLLaMA/comments/1kvsnj4/ui_rag_solution_for_5000_documents_possible/
Small_Caterpillar_50
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvsnj4
false
null
t3_1kvsnj4
/r/LocalLLaMA/comments/1kvsnj4/ui_rag_solution_for_5000_documents_possible/
false
false
self
24
null
Which one will you prefer more
1
[removed]
2025-05-26T12:02:05
https://www.reddit.com/r/LocalLLaMA/comments/1kvslz3/which_one_will_you_prefer_more/
Interesting-Area6418
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvslz3
false
null
t3_1kvslz3
/r/LocalLLaMA/comments/1kvslz3/which_one_will_you_prefer_more/
false
false
self
1
null
Leveling Up: From RAG to an AI Agent
90
Hey folks, I've been exploring more advanced ways to use AI, and recently I made a big jump - moving from the usual RAG (Retrieval-Augmented Generation) approach to something more powerful: an **AI Agent that uses a real web browser to search the internet and get stuff done on its own**. In my last guide (https://git...
2025-05-26T12:00:27
https://i.redd.it/qourugv0943f1.jpeg
aospan
i.redd.it
1970-01-01T00:00:00
0
{}
1kvskpq
false
null
t3_1kvskpq
/r/LocalLLaMA/comments/1kvskpq/leveling_up_from_rag_to_an_ai_agent/
false
false
https://b.thumbs.redditm…EWBPLSOkIwgU.jpg
90
{'enabled': True, 'images': [{'id': 'awF0Op56xnTqSVxdz9XcwSGRQRVRaT0_wULughpcjZM', 'resolutions': [{'height': 70, 'url': 'https://preview.redd.it/qourugv0943f1.jpeg?width=108&crop=smart&auto=webp&s=203a378eeda5fabcd6448c92cfa8dccabc9ec782', 'width': 108}, {'height': 140, 'url': 'https://preview.redd.it/qourugv0943f1.jp...
Leveling Up: From RAG to an AI Agent
1
Hey folks, I've been exploring more advanced ways to use AI, and recently I made a big jump - moving from the usual RAG (Retrieval-Augmented Generation) approach to something more powerful: an **AI Agent that uses a real web browser to search the internet and get stuff done on its own**. In my last guide (https://git...
2025-05-26T11:59:03
https://www.reddit.com/gallery/1kvsjtb
aospan
reddit.com
1970-01-01T00:00:00
0
{}
1kvsjtb
false
null
t3_1kvsjtb
/r/LocalLLaMA/comments/1kvsjtb/leveling_up_from_rag_to_an_ai_agent/
false
false
https://b.thumbs.redditm…leaTrcN3qIpg.jpg
1
null
TTS Models known for AI animation dubbing?
1
[removed]
2025-05-26T11:50:32
https://www.reddit.com/r/LocalLLaMA/comments/1kvsei6/tts_models_known_for_ai_animation_dubbing/
Signal-Olive-1984
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvsei6
false
null
t3_1kvsei6
/r/LocalLLaMA/comments/1kvsei6/tts_models_known_for_ai_animation_dubbing/
false
false
self
1
null
Introducing M☰T QQ and M☰T 2: enthusiast neural networks
1
[removed]
2025-05-26T11:04:45
https://www.reddit.com/r/LocalLLaMA/comments/1kvrmse/introducing_mt_qq_and_mt_2_enthusiast_neural/
Enough_Judgment_7801
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvrmse
false
null
t3_1kvrmse
/r/LocalLLaMA/comments/1kvrmse/introducing_mt_qq_and_mt_2_enthusiast_neural/
false
false
self
1
{'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h...
Set up llama3.2-11b (vision) for production
1
[removed]
2025-05-26T11:01:07
https://www.reddit.com/r/LocalLLaMA/comments/1kvrkgq/set_up_llama3211b_vision_for_production/
dave5D
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvrkgq
false
null
t3_1kvrkgq
/r/LocalLLaMA/comments/1kvrkgq/set_up_llama3211b_vision_for_production/
false
false
self
1
null
Consensus on best local STT?
22
Hey folks, I’m currently devving a tool that needs STT. I’m currently using Whispercpp/whisper for transcription (large v3), whisperx for alignment/diarization/prosodic analysis, and embeddings and llms for the rest. I find Whisper does a good job at transcription - however speaker identification/diarization with whis...
2025-05-26T10:54:23
https://www.reddit.com/r/LocalLLaMA/comments/1kvrgjv/consensus_on_best_local_stt/
That_Em
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvrgjv
false
null
t3_1kvrgjv
/r/LocalLLaMA/comments/1kvrgjv/consensus_on_best_local_stt/
false
false
self
22
null
Deepseek or Claude ?
0
2025-05-26T10:49:59
https://www.reddit.com/gallery/1kvre1p
Limp-Sandwich7184
reddit.com
1970-01-01T00:00:00
0
{}
1kvre1p
false
null
t3_1kvre1p
/r/LocalLLaMA/comments/1kvre1p/deepseek_or_claude/
false
false
https://b.thumbs.redditm…qUQTeL0fGwYI.jpg
0
null
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated)
131
Hey folks! I’ve hacked together a VLM video nanny, that watches a video stream(s) and predefined set of safety instructions, and makes a beep sound if the instructions are violated. **GitHub**: [https://github.com/zeenolife/ai-baby-monitor](https://github.com/zeenolife/ai-baby-monitor) **Why I built it?** First da...
2025-05-26T10:09:06
https://v.redd.it/gzn6itr3p33f1
CheeringCheshireCat
v.redd.it
1970-01-01T00:00:00
0
{}
1kvqrzl
false
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/gzn6itr3p33f1/DASHPlaylist.mpd?a=1750846165%2CNGQ3NzJkZmZlNjc3ZDMyYTY1YzgxZDU1Nzc1ZGExNzdjOWYxZGIyYjdlOWE0NmJkZDI3OTU3MzgwYTdkNzFhYg%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/gzn6itr3p33f1/DASH_270.mp4?source=fallback', 'has...
t3_1kvqrzl
/r/LocalLLaMA/comments/1kvqrzl/ai_baby_monitor_fully_local_videollm_nanny_beeps/
false
false
https://external-preview…d398cecc98ebff14
131
{'enabled': False, 'images': [{'id': 'dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/dXQydzR0cjNwMzNmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=6a9030ac16afd620c14c3e7138a754d92781...
Just Getting Started - Used Hardware & Two Machines
1
I’ve been using AI since middle of last year, but I found Ollama two weeks ago and now I’m totally hooked. I took an existing machine and upgraded within my budget, but now I have some leftover components and I just can’t get over the idea that I should build another machine with the leftovers. Where do you find good ...
2025-05-26T09:58:53
https://www.reddit.com/r/LocalLLaMA/comments/1kvqm6s/just_getting_started_used_hardware_two_machines/
Current-Ticket4214
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvqm6s
false
null
t3_1kvqm6s
/r/LocalLLaMA/comments/1kvqm6s/just_getting_started_used_hardware_two_machines/
false
false
self
1
null
We made AutoBE, Backend Vibe Coding Agent, generating 100% working code by Compiler Skills (full stack vibe coding is also possible)
0
Introducing AutoBE: The Future of Backend Development We are immensely proud to introduce AutoBE, our revolutionary open-source vibe coding agent for backend applications, developed by Wrtn Technologies. The most distinguished feature of AutoBE is its exceptional 100% success rate in code generation. AutoBE incorpora...
2025-05-26T09:52:10
https://github.com/wrtnlabs/autobe
jhnam88
github.com
1970-01-01T00:00:00
0
{}
1kvqisr
false
null
t3_1kvqisr
/r/LocalLLaMA/comments/1kvqisr/we_made_autobe_backend_vibe_coding_agent/
false
false
https://b.thumbs.redditm…nXxh0mzwwz8s.jpg
0
{'enabled': False, 'images': [{'id': 'NhzEgQz_kNIZGp-FoG7Mo21W4NCRTnQ6711xPpo3X3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OkuCY3vgsAQgx5rCVVKdEaImBmO9pmeJeg7kM_gDCrc.jpg?width=108&crop=smart&auto=webp&s=825f24431814e6e28a5c48b81dc7c88d2c664ec1', 'width': 108}, {'height': 108, 'url': 'h...
Deepseek R2 might be coming soon, unsloth released an article about deepseek v3 -05-26
95
It should be coming soon! [https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally](https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally) opus 4 level? I think v3 0526 should be out today.
2025-05-26T09:48:05
https://www.reddit.com/r/LocalLLaMA/comments/1kvqgpv/deepseek_r2_might_be_coming_soon_unsloth_released/
power97992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvqgpv
false
null
t3_1kvqgpv
/r/LocalLLaMA/comments/1kvqgpv/deepseek_r2_might_be_coming_soon_unsloth_released/
false
false
self
95
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'h...
Challenges of Fine-Tuning Based on Private Code
1
[removed]
2025-05-26T09:47:27
https://www.reddit.com/r/LocalLLaMA/comments/1kvqgdv/challenges_of_finetuning_based_on_private_code/
SaladNo6817
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvqgdv
false
null
t3_1kvqgdv
/r/LocalLLaMA/comments/1kvqgdv/challenges_of_finetuning_based_on_private_code/
false
false
self
1
null
I made a simple tool to test/compare your local LLMs on AIME 2024
1
[removed]
2025-05-26T09:26:07
https://www.reddit.com/r/LocalLLaMA/comments/1kvq58j/i_made_a_simple_tool_to_testcompare_your_local/
EntropyMagnets
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvq58j
false
null
t3_1kvq58j
/r/LocalLLaMA/comments/1kvq58j/i_made_a_simple_tool_to_testcompare_your_local/
false
false
https://b.thumbs.redditm…ymg3sQ0eHjuY.jpg
1
null
Deepseek v3 0526?
423
2025-05-26T09:09:20
https://docs.unsloth.ai/basics/deepseek-v3-0526-how-to-run-locally
Stock_Swimming_6015
docs.unsloth.ai
1970-01-01T00:00:00
0
{}
1kvpwq3
false
null
t3_1kvpwq3
/r/LocalLLaMA/comments/1kvpwq3/deepseek_v3_0526/
false
false
https://b.thumbs.redditm…kUixUpKdaZWI.jpg
423
{'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'h...
reg context and emails
1
[removed]
2025-05-26T08:33:42
https://www.reddit.com/r/LocalLLaMA/comments/1kvpe5h/reg_context_and_emails/
erparucca
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvpe5h
false
null
t3_1kvpe5h
/r/LocalLLaMA/comments/1kvpe5h/reg_context_and_emails/
false
false
self
1
null
Why does Phi-4 have such a low score on ifeval on Huggingface's Open LLM Leaderboard?
1
[removed]
2025-05-26T08:19:25
https://www.reddit.com/r/LocalLLaMA/comments/1kvp6t5/why_does_phi4_have_such_a_low_score_on_ifeval_on/
BmHype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvp6t5
false
null
t3_1kvp6t5
/r/LocalLLaMA/comments/1kvp6t5/why_does_phi4_have_such_a_low_score_on_ifeval_on/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'h...
I made a quick utility for re-writing models requested in OpenAI APIs
9
Ever had a tool or plugin that allows your own OAI endpoint but then expects to use GPT-xxx or has a closed list of models? "Gpt Commit" is one such tool; rather than the hassle of forking it, I made (with AI help) a small tool to simply ignore/re-map the model request. If anyone else has any use for it, the code is her...
2025-05-26T08:17:27
https://github.com/mitchins/openai-model-rerouter
mitchins-au
github.com
1970-01-01T00:00:00
0
{}
1kvp5sk
false
null
t3_1kvp5sk
/r/LocalLLaMA/comments/1kvp5sk/i_made_a_quick_utility_for_rewriting_models/
false
false
https://b.thumbs.redditm…1EYYuf-iwtwE.jpg
9
{'enabled': False, 'images': [{'id': 'aEzgfgvuWaQBNR4FUq1B-E6FxV5nZVGF-eFm8Wagodw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/n8vBA7DOe0WI4BHcwHVngfcelnLEhE3L7Qf68p7pvRA.jpg?width=108&crop=smart&auto=webp&s=cd501de0d8d29a7247e16e2efb8f14a58c510823', 'width': 108}, {'height': 108, 'url': 'h...
What are the restrictions regarding splitting models across multiple GPUs
2
Hi all, One question: If I get three or four 96GB GPUs, can I easily load a model with over 200 billion parameters? I'm not asking about the size or if the memory is sufficient, but about splitting a model across multiple GPUs. I've read somewhere that since these cards don't have NVLink support, they don't act "as a s...
2025-05-26T08:15:20
https://www.reddit.com/r/LocalLLaMA/comments/1kvp4nq/what_are_the_restrictions_regarding_splitting/
oh_my_right_leg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvp4nq
false
null
t3_1kvp4nq
/r/LocalLLaMA/comments/1kvp4nq/what_are_the_restrictions_regarding_splitting/
false
false
self
2
null
What's the latest in conversational voice-to-voice models that is self-hostable?
15
I've been a bit out of touch for a while. Are self-hostable voice-to-voice models with reasonably low latency still a far-fetched pipe dream, or is there anything out there that works reasonably well without a robotic voice? I don't mind buying an RTX 4090 if that works, but I'm even okay with an RTX Pro 6000 if there is a g...
2025-05-26T08:07:18
https://www.reddit.com/r/LocalLLaMA/comments/1kvp0g1/whats_the_latest_in_conversational_voicetovoice/
surveypoodle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvp0g1
false
null
t3_1kvp0g1
/r/LocalLLaMA/comments/1kvp0g1/whats_the_latest_in_conversational_voicetovoice/
false
false
self
15
null
Why does Phi-4 have such a low score on the ifeval dataset on Open LLM Leaderboard?
1
[removed]
2025-05-26T08:06:14
https://www.reddit.com/r/LocalLLaMA/comments/1kvozwa/why_does_phi4_have_such_a_low_score_on_the_ifeval/
BmHype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvozwa
false
null
t3_1kvozwa
/r/LocalLLaMA/comments/1kvozwa/why_does_phi4_have_such_a_low_score_on_the_ifeval/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'h...
Would buying a GPU with relatively high NVRAM be a waste? Are prices predicted to drop in a few years or bigger/stronger 70B type models expected to shrink?
1
[removed]
2025-05-26T08:05:42
https://www.reddit.com/r/LocalLLaMA/comments/1kvozld/would_buying_a_gpu_with_relatively_high_nvram_be/
mandie99xxx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvozld
false
null
t3_1kvozld
/r/LocalLLaMA/comments/1kvozld/would_buying_a_gpu_with_relatively_high_nvram_be/
false
false
self
1
null
Why does Phi-4 have such a low score on ifeval?
1
[removed]
2025-05-26T07:57:43
https://www.reddit.com/r/LocalLLaMA/comments/1kvovc3/why_does_phi4_have_such_a_low_score_on_ifeval/
BmHype
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvovc3
false
null
t3_1kvovc3
/r/LocalLLaMA/comments/1kvovc3/why_does_phi4_have_such_a_low_score_on_ifeval/
false
false
self
1
{'enabled': False, 'images': [{'id': 'NBLCkzl6ZRucCik7mVkPwWjrECPTMlbL7qAMuVmpgmg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/d7eSR2m7O-BUNJgS10KIWA8xA6nhulsDpPfx1p4_650.jpg?width=108&crop=smart&auto=webp&s=62befafb5e0debaeb69a6220cbcc722ce0168278', 'width': 108}, {'height': 116, 'url': 'h...
If only its true...
94
[https://x.com/YouJiacheng/status/1926885863952159102](https://x.com/YouJiacheng/status/1926885863952159102) Deepseek-v3-0526
2025-05-26T07:44:42
https://www.reddit.com/r/LocalLLaMA/comments/1kvoobg/if_only_its_true/
Famous-Associate-436
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvoobg
false
null
t3_1kvoobg
/r/LocalLLaMA/comments/1kvoobg/if_only_its_true/
false
false
self
94
{'enabled': False, 'images': [{'id': 'iNdFzT-q0XAnJS9pvJGgQNHZ--tGZgpf3q1SLbWIhVI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ZaRzdVVF7JOpybceJeG5DZnfwt-BU2btMVAzkGyJ2V4.jpg?width=108&crop=smart&auto=webp&s=abbec0fe57267feddc7c68975dc3bc83ebbf0f9a', 'width': 108}], 'source': {'height': 20...
Video categorisation using smolvlm
1
[removed]
2025-05-26T07:42:41
https://www.reddit.com/gallery/1kvon90
friedmomos_
reddit.com
1970-01-01T00:00:00
0
{}
1kvon90
false
null
t3_1kvon90
/r/LocalLLaMA/comments/1kvon90/video_categorisation_using_smolvlm/
false
false
https://b.thumbs.redditm…FfGeSlTaGpvY.jpg
1
null
LLM Model parameters vs quantization
1
[removed]
2025-05-26T07:20:11
https://www.reddit.com/r/LocalLLaMA/comments/1kvobhf/llm_model_parameters_vs_quantization/
fasih_ammar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvobhf
false
null
t3_1kvobhf
/r/LocalLLaMA/comments/1kvobhf/llm_model_parameters_vs_quantization/
false
false
self
1
null
Gemma-3-27b quants?
0
Hi. I'm running Gemma-3-27b Q6_K_L with 45/67 layers offloaded to GPU (3090) at about 5 t/s. It is borderline useful at this speed. I wonder whether the Q4 QAT quant would have roughly the same evaluation performance (model quality), just faster. Or maybe I should aim for Q8 (I could afford a second 3090, so I might have better speed and longe...
2025-05-26T06:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1kvnu5k/gemma327b_quants/
MAXFlRE
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnu5k
false
null
t3_1kvnu5k
/r/LocalLLaMA/comments/1kvnu5k/gemma327b_quants/
false
false
self
0
null
Open-source project that use LLM as deception system
249
Hello everyone 👋 I wanted to share a project I've been working on that I think you'll find really interesting. It's called Beelzebub, an open-source honeypot framework that uses LLMs to create incredibly realistic and dynamic deception environments. By integrating LLMs, it can mimic entire operating systems and inte...
2025-05-26T06:48:01
https://www.reddit.com/r/LocalLLaMA/comments/1kvnti4/opensource_project_that_use_llm_as_deception/
mario_candela
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnti4
false
null
t3_1kvnti4
/r/LocalLLaMA/comments/1kvnti4/opensource_project_that_use_llm_as_deception/
false
false
self
249
{'enabled': False, 'images': [{'id': '0orW77n9e0E8P7-5r74sJBb-kSxAWv5GzCMC97MUCEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=108&crop=smart&auto=webp&s=ed618671fbc7e099d17cce0efa34b38ba12131ec', 'width': 108}, {'height': 108, 'url': 'h...
What would be the best LLM to have for analyzing PDFs?
6
Basically, I want to dump a few hundred pages of PDFs into an LLM, and get the LLM to refer back to them when I have a question
2025-05-26T06:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1kvnta6/what_would_be_the_best_llm_to_have_for_analyzing/
newbreed69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnta6
false
null
t3_1kvnta6
/r/LocalLLaMA/comments/1kvnta6/what_would_be_the_best_llm_to_have_for_analyzing/
false
false
self
6
null
Best Uncensored model for 42GB of VRAM
52
What's the current best uncensored model for "roleplay"? Well, not really roleplay in the sense that I'm roleplaying with an AI character with a character card and all that. Usually I'm more doing some sort of choose-your-own-adventure or text adventure thing where I give the AI some basic prompt about the world,...
2025-05-26T06:47:23
https://www.reddit.com/r/LocalLLaMA/comments/1kvnt5u/best_uncensored_model_for_42gb_of_vram/
KeinNiemand
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnt5u
false
null
t3_1kvnt5u
/r/LocalLLaMA/comments/1kvnt5u/best_uncensored_model_for_42gb_of_vram/
false
false
self
52
null
GitHub - mariocandela/beelzebub: A secure low code honeypot framework, leveraging LLM for System Virtualization.
1
[removed]
2025-05-26T06:44:58
https://github.com/mariocandela/beelzebub
mario_candela
github.com
1970-01-01T00:00:00
0
{}
1kvnrt1
false
null
t3_1kvnrt1
/r/LocalLLaMA/comments/1kvnrt1/github_mariocandelabeelzebub_a_secure_low_code/
false
false
https://b.thumbs.redditm…ZWrPYlv5dZYs.jpg
1
{'enabled': False, 'images': [{'id': '0orW77n9e0E8P7-5r74sJBb-kSxAWv5GzCMC97MUCEQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/UYOCjKmQ-ZJLBw3xCRcetHqkifiStMUeFeUQfTCyM_U.jpg?width=108&crop=smart&auto=webp&s=ed618671fbc7e099d17cce0efa34b38ba12131ec', 'width': 108}, {'height': 108, 'url': 'h...
Building a FAQ Chatbot with Ollama + LLaMA 3.2 3B — How to Prevent Hallucination?
1
[removed]
2025-05-26T06:43:07
https://www.reddit.com/r/LocalLLaMA/comments/1kvnquc/building_a_faq_chatbot_with_ollama_llama_32_3b/
AwayPermission5992
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnquc
false
null
t3_1kvnquc
/r/LocalLLaMA/comments/1kvnquc/building_a_faq_chatbot_with_ollama_llama_32_3b/
false
false
self
1
null
QwenLong-L1: Towards Long-Context Large Reasoning Models with Reinforcement Learning
78
[🤗 QwenLong-L1-32B](https://huggingface.co/Tongyi-Zhiwen/QwenLong-L1-32B) is the first long-context Large Reasoning Model (LRM) trained with reinforcement learning for long-context document reasoning tasks. Experiments on seven long-context DocQA benchmarks demonstrate that **QwenLong-L1-32B outperforms flagship LRMs...
2025-05-26T06:22:26
https://www.reddit.com/r/LocalLLaMA/comments/1kvnf46/qwenlongl1_towards_longcontext_large_reasoning/
Fancy_Fanqi77
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnf46
false
null
t3_1kvnf46
/r/LocalLLaMA/comments/1kvnf46/qwenlongl1_towards_longcontext_large_reasoning/
false
false
https://b.thumbs.redditm…gMijOtvoLrMs.jpg
78
{'enabled': False, 'images': [{'id': '0AjYM-XhR0maEGG0hmxNMxrj_1IT0acK7l7EHjYGoLk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4fgEuTMOx_oXXqA0kQVQE7o892NJc01radrTqR_KFkw.jpg?width=108&crop=smart&auto=webp&s=2861cb136162f089911f2b68388de51580d37fa5', 'width': 108}, {'height': 116, 'url': 'h...
Cancelling internet & switching to an LLM: what is the optimal model?
0
Hey everyone! I'm trying to determine the optimal model size for everyday, practical use. Suppose that, in a stroke of genius, I cancel my family's internet subscription and replace it with a local LLM. My family is sceptical for some reason, but why pay for the internet when we can download an LLM, which is basically...
2025-05-26T06:21:17
https://www.reddit.com/r/LocalLLaMA/comments/1kvnehd/cancelling_internet_switching_to_a_llm_what_is/
MDT-49
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvnehd
false
null
t3_1kvnehd
/r/LocalLLaMA/comments/1kvnehd/cancelling_internet_switching_to_a_llm_what_is/
false
false
self
0
null
Vector Space - Llama running locally on Apple Neural Engine
32
Core ML is Apple’s official way to run Machine Learning models on device, and also appears to be the only way to engage the Neural Engine, which is a powerful NPU installed on every iPhone/iPad that is capable of performing tens of billions of computations per second. [Llama 3.2 1B Full Precision \(float16\) on the ...
2025-05-26T06:04:10
https://www.reddit.com/r/LocalLLaMA/comments/1kvn51x/vector_space_llama_running_locally_on_apple/
Glad-Speaker3006
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvn51x
false
null
t3_1kvn51x
/r/LocalLLaMA/comments/1kvn51x/vector_space_llama_running_locally_on_apple/
false
false
https://b.thumbs.redditm…7Xboc64uF58s.jpg
32
{'enabled': False, 'images': [{'id': 'FhD9ztQfEPUNryOSupxSHZ7KguGFp0dxsYvnDShw1Tg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/chjzdPRsgKclskhkJtHyR9G4ascbrd4AkBc1D_gB_VY.jpg?width=108&crop=smart&auto=webp&s=4f64af24f7053577357e73c0cf5a3d48ae6896d6', 'width': 108}, {'height': 216, 'url': '...
The rise of "dazi" (activity-partner) socializing: from meal buddies to study buddies, why are modern intimate relationships changing
1
[removed]
2025-05-26T05:59:39
https://v.redd.it/cziuk8jlg23f1
heygem666
v.redd.it
1970-01-01T00:00:00
0
{}
1kvn29l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cziuk8jlg23f1/DASHPlaylist.mpd?a=1750831196%2CZjc0MzRiYTUzZmVhNTkyMTQ0NzA4MGIxNTM2NTJkNzE5MWI5NTZmNmJiODRlMzU0NDA4MTM0OGVmYjZlNjlkYw%3D%3D&v=1&f=sd', 'duration': 31, 'fallback_url': 'https://v.redd.it/cziuk8jlg23f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kvn29l
/r/LocalLLaMA/comments/1kvn29l/搭子社交流行从饭搭子到学习搭子现代人的亲密关系为何变/
false
false
https://external-preview…ca72203a991dadd1
1
{'enabled': False, 'images': [{'id': 'OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/OXl5a2U1bGxnMjNmMRESYXZRiIGZj2Cp9eTYMZ5-EpITqUIS0Ckw1wEvQWxT.png?width=108&crop=smart&format=pjpg&auto=webp&s=31b4d3af6f1c623a05bfc56405b5ae455215...
How to know which MLLM is good at "Pointing"
1
[removed]
2025-05-26T05:46:40
https://www.reddit.com/r/LocalLLaMA/comments/1kvmuvu/how_to_know_which_mllm_is_good_at_pointing/
IndependentDoor8479
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvmuvu
false
null
t3_1kvmuvu
/r/LocalLLaMA/comments/1kvmuvu/how_to_know_which_mllm_is_good_at_pointing/
false
false
self
1
null
nvidia/AceReason-Nemotron-7B · Hugging Face
47
2025-05-26T05:40:40
https://huggingface.co/nvidia/AceReason-Nemotron-7B
jacek2023
huggingface.co
1970-01-01T00:00:00
0
{}
1kvmrgu
false
null
t3_1kvmrgu
/r/LocalLLaMA/comments/1kvmrgu/nvidiaacereasonnemotron7b_hugging_face/
false
false
https://b.thumbs.redditm…0JJ7wae0BxYA.jpg
47
{'enabled': False, 'images': [{'id': '94WAwkrDsd0F2vbt8p9JCI9uy_emGHxp1Gs2dBOwSJE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/E-gtAbU28GDV0NLa926rL1ecWFn9v0jJKe5iqmUNFzo.jpg?width=108&crop=smart&auto=webp&s=988e93e3e9fea6c8cac35daad406340f87030549', 'width': 108}, {'height': 116, 'url': 'h...
Speechless: Speech Instruction Training Without Speech for Low Resource Languages
149
Hey everyone, it’s me from **Menlo Research** again 👋. Today I want to share some news + a new model! Exciting news - our paper *“SpeechLess”* just got accepted to **Interspeech 2025**, and we’ve finished the camera-ready version! 🎉 The idea came out of a challenge we faced while building a speech instruction model...
2025-05-26T03:36:39
https://i.redd.it/ju7kqbqjq13f1.png
Kooky-Somewhere-2883
i.redd.it
1970-01-01T00:00:00
0
{}
1kvknlo
false
null
t3_1kvknlo
/r/LocalLLaMA/comments/1kvknlo/speechless_speech_instruction_training_without/
false
false
https://b.thumbs.redditm…UzA_qzFUqrJs.jpg
149
{'enabled': True, 'images': [{'id': 'LBIo9BUA_PgGFMF4Ap3ARUZbniWx_6z_OCmMiyVE3tA', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png?width=108&crop=smart&auto=webp&s=56fb1abb3ec60e062f87f026a8768706927b05d1', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/ju7kqbqjq13f1.png...
Implemented a quick and dirty iOS app for the new Gemma3n models
24
2025-05-26T02:54:28
https://github.com/sid9102/gemma3n-ios
sid9102
github.com
1970-01-01T00:00:00
0
{}
1kvjwiz
false
null
t3_1kvjwiz
/r/LocalLLaMA/comments/1kvjwiz/implemented_a_quick_and_dirty_ios_app_for_the_new/
false
false
default
24
null
Jetson Orin AGX 32gb
9
I can’t get this dumb thing to use the GPU with Ollama. As far as I can tell not many people are using it, the mainline of llama.cpp is often broken for it, and some guy has a fork for Jetson devices. I can get the whole Ollama stack running, but it’s dog slow and nothing shows up in nvidia-smi. I’m trying Qwen3-30b-a3...
2025-05-26T02:07:58
https://www.reddit.com/r/LocalLLaMA/comments/1kvj34f/jetson_orin_agx_32gb/
randylush
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvj34f
false
null
t3_1kvj34f
/r/LocalLLaMA/comments/1kvj34f/jetson_orin_agx_32gb/
false
false
self
9
null
QWQ - Will there be a future update now that Qwen 3 is out?
6
I've tested out most of the variations of Qwen 3, and while it's decent, there's still something extra that QWQ has that Qwen 3 just doesn't. Especially for writing tasks. I just get better outputs. Now that Qwen 3 is out w/thinking, is QWQ done? If so, that sucks as I think it's still better than Qwen 3 in a lot of w...
2025-05-26T02:04:52
https://www.reddit.com/r/LocalLLaMA/comments/1kvj149/qwq_will_there_be_a_future_update_now_that_qwen_3/
GrungeWerX
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvj149
false
null
t3_1kvj149
/r/LocalLLaMA/comments/1kvj149/qwq_will_there_be_a_future_update_now_that_qwen_3/
false
false
self
6
null
New LocalLLM Hardware complete
135
So I spent this last week at Red Hat's conference with this hardware sitting at home waiting for me. Finally got it put together. The conference changed my thoughts on what I was going to deploy, but I'm interested in everyone's thoughts. The hardware is an AMD Ryzen 7 5800X with 64GB of RAM, 2x 3090 Ti that my best friend gave...
2025-05-26T02:04:11
https://www.reddit.com/gallery/1kvj0nt
ubrtnk
reddit.com
1970-01-01T00:00:00
0
{}
1kvj0nt
false
null
t3_1kvj0nt
/r/LocalLLaMA/comments/1kvj0nt/new_localllm_hardware_complete/
false
false
https://b.thumbs.redditm…asaz1i4H363A.jpg
135
null
What is the best way to run llama 3.3 70b locally, split on 3 GPUS (52 GB of VRAM)
2
Hi, I'm going to create datasets for fine-tuning with Unsloth, from raw unformatted text, using the recommended LLM for this. I have access to a frankenstein with the following specs: - 11700F - 128 GB of RAM - RTX 5060 Ti w/ 16GB - RTX 4070 Ti Super w/ 16 GB - RTX 3090 Ti w/ 24 GB - OS: Win 11 and u...
2025-05-26T01:46:34
https://www.reddit.com/r/LocalLLaMA/comments/1kvip8r/what_is_the_best_way_to_run_llama_33_70b_locally/
GoodSamaritan333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvip8r
false
null
t3_1kvip8r
/r/LocalLLaMA/comments/1kvip8r/what_is_the_best_way_to_run_llama_33_70b_locally/
false
false
self
2
null
Qwen3 vision/audio/math version is coming
1
2025-05-26T01:45:38
https://i.redd.it/9ajhkvnb713f1.jpeg
MedicalTangerine191
i.redd.it
1970-01-01T00:00:00
0
{}
1kvionh
false
null
t3_1kvionh
/r/LocalLLaMA/comments/1kvionh/qwen3_visionaudiomath_version_si_coming/
false
false
https://b.thumbs.redditm…Ww8t0dM4eB0A.jpg
1
{'enabled': True, 'images': [{'id': 'Q2_A4sjDukAgdaio7sehr8q5ikIm9tdXsnFSy3G5GsY', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/9ajhkvnb713f1.jpeg?width=108&crop=smart&auto=webp&s=883baede750fcf3c33206fd3a2ec11ce99b6cd80', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/9ajhkvnb713f1.j...
Qwen3 vision/audio/math version is coming
1
2025-05-26T01:42:38
https://i.redd.it/cy1mcids613f1.jpeg
MedicalTangerine191
i.redd.it
1970-01-01T00:00:00
0
{}
1kvimqp
false
null
t3_1kvimqp
/r/LocalLLaMA/comments/1kvimqp/qwen3_visionaudiomath_version_si_coming/
false
false
https://b.thumbs.redditm…3jIyzBCNVEmk.jpg
1
{'enabled': True, 'images': [{'id': 'LnfeFiSvjfxXEiUFxcJrY3haWUbKccp08bkJ4FXXuvA', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/cy1mcids613f1.jpeg?width=108&crop=smart&auto=webp&s=7394dcbc29e012def3f47d99a95f59158862532e', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/cy1mcids613f1.j...
8xRTX 3050 6GB for fine tuning?
1
[removed]
2025-05-26T01:32:44
https://www.reddit.com/r/LocalLLaMA/comments/1kvigb4/8xrtx_3050_6gb_for_fine_tuning/
blackkkyypenguin
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvigb4
false
null
t3_1kvigb4
/r/LocalLLaMA/comments/1kvigb4/8xrtx_3050_6gb_for_fine_tuning/
false
false
self
1
null
Prompt engineering for spam texts to break it.
1
[removed]
2025-05-26T01:29:40
https://i.redd.it/3ppojtih413f1.jpeg
No-Fig-8614
i.redd.it
1970-01-01T00:00:00
0
{}
1kvie9v
false
null
t3_1kvie9v
/r/LocalLLaMA/comments/1kvie9v/prompt_engineering_for_spam_texts_to_break_it/
false
false
https://b.thumbs.redditm…8MR9TrGVcOQI.jpg
1
{'enabled': True, 'images': [{'id': 'BF8BcJjEJCmGYR4PeXRKKrunLUMNNO0ZjIwAN1ef0yo', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/3ppojtih413f1.jpeg?width=108&crop=smart&auto=webp&s=166ffefe3a61e536fcf74a4403e4e23d217517aa', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/3ppojtih413f1.j...
Prompt Engineering/Breaking for spam texts
1
[removed]
2025-05-26T01:24:05
https://i.redd.it/2b2pykhh313f1.jpeg
No-Fig-8614
i.redd.it
1970-01-01T00:00:00
0
{}
1kviakt
false
null
t3_1kviakt
/r/LocalLLaMA/comments/1kviakt/prompt_engineeringbreaking_for_spam_texts/
false
false
https://b.thumbs.redditm…mk59OHy93Fvc.jpg
1
{'enabled': True, 'images': [{'id': 'xtmnauTWwuW4qoHKfOQXxeQEpDKbbazgPFC-vjJtKUI', 'resolutions': [{'height': 137, 'url': 'https://preview.redd.it/2b2pykhh313f1.jpeg?width=108&crop=smart&auto=webp&s=604a246078dc3986fd29318c60e5eb372666021e', 'width': 108}, {'height': 274, 'url': 'https://preview.redd.it/2b2pykhh313f1.j...
WebUI Images & Ollama
1
My initial install of Ollama was a combined docker setup that had Ollama and WebUI in the same docker-compose.yaml. I was able to send JPG files to Ollama through WebUI, no problem. I had some other issues, though, so I decided to reinstall. For my second install, I installed Ollama natively and used the WebUI CUDA docker. For som...
2025-05-26T01:13:31
https://www.reddit.com/r/LocalLLaMA/comments/1kvi3hn/webui_images_ollama/
PleasantCandidate785
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvi3hn
false
null
t3_1kvi3hn
/r/LocalLLaMA/comments/1kvi3hn/webui_images_ollama/
false
false
self
1
null
AI Desktop assistant from 90's powered by LLaMA - hell yeah!
1
[removed]
2025-05-25T22:48:27
https://i.redd.it/og0tdvshb03f1.gif
geowars2
i.redd.it
1970-01-01T00:00:00
0
{}
1kvfa2s
false
null
t3_1kvfa2s
/r/LocalLLaMA/comments/1kvfa2s/ai_desktop_assistant_from_90s_powered_by_llama/
false
false
https://b.thumbs.redditm…ZTMw5RrW3r9E.jpg
1
{'enabled': True, 'images': [{'id': 'dh-SuMa9ordrtb2wVVN-9CUyja0uRqlbey3roo2SFJM', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/og0tdvshb03f1.gif?width=108&crop=smart&format=png8&s=cf17dccd399179f602a9670d849738f356f39373', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/og0tdvshb03f1.g...
Nvidia RTX PRO 6000 Workstation 96GB - Benchmarks
206
Posting here as it's something I would like to know before I acquired it. No regrets. RTX 6000 PRO 96GB @ 600W zero shot context - "who was copernicus?" Full context Input 40000 tokens of lorem ipsum - https://pastebin.com/yAJQkMzT flash attention enabled - 128K context - LM Studio 0.3.16 beta - cuda 12 runtime 1.33....
2025-05-25T22:46:05
https://www.reddit.com/r/LocalLLaMA/comments/1kvf8d2/nvidia_rtx_pro_6000_workstation_96gb_benchmarks/
fuutott
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvf8d2
false
null
t3_1kvf8d2
/r/LocalLLaMA/comments/1kvf8d2/nvidia_rtx_pro_6000_workstation_96gb_benchmarks/
false
false
self
206
{'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/P8lS0kk6BFe2IEo6TxCZd1LVwksc34IkzGTVx_SCc8w.jpg?width=108&crop=smart&auto=webp&s=3d74dbe4f1d67cc8b587db9aa01762f26e269bcf', 'width': 108}], 'source': {'height': 15...
SaaS for custom classification models
1
[removed]
2025-05-25T22:39:07
https://www.reddit.com/r/LocalLLaMA/comments/1kvf3bd/saas_for_custom_classification_models/
Fluid-Stress7113
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvf3bd
false
null
t3_1kvf3bd
/r/LocalLLaMA/comments/1kvf3bd/saas_for_custom_classification_models/
false
false
self
1
null
Is there a comprehensive benchmark for throughput serving?
1
[removed]
2025-05-25T22:27:13
https://www.reddit.com/r/LocalLLaMA/comments/1kveume/is_there_a_comprehensive_benchmark_for_throughput/
saucepan-ai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kveume
false
null
t3_1kveume
/r/LocalLLaMA/comments/1kveume/is_there_a_comprehensive_benchmark_for_throughput/
false
false
self
1
null
Can we run a quantized model on android?
4
I am trying to run an ONNX model which I quantized down to about 440 MB. I am trying to run it with ONNX Runtime, but the app still crashes while loading. Can anyone help me?
2025-05-25T22:09:33
https://www.reddit.com/r/LocalLLaMA/comments/1kvehcm/can_we_run_a_quantized_model_on_android/
Away_Expression_3713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvehcm
false
null
t3_1kvehcm
/r/LocalLLaMA/comments/1kvehcm/can_we_run_a_quantized_model_on_android/
false
false
self
4
null
Used or New Gamble
10
Aussie madlad here. The second-hand market in AU is pretty small; there are the odd 3090s running around, but due to distance they are always at risk of being a) a scam, b) damaged in freight, c) broken at time of sale. A new 7900 XTX and a used 3090 are about the same price. Reading this group for months, the XTX seems...
2025-05-25T22:08:06
https://www.reddit.com/r/LocalLLaMA/comments/1kvegc1/used_or_new_gamble/
thehoffau
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvegc1
false
null
t3_1kvegc1
/r/LocalLLaMA/comments/1kvegc1/used_or_new_gamble/
false
false
self
10
null
Next-Gen Sentiment Analysis Just Got Smarter (Prototype + Open to Feedback!)
0
I’ve been working on a prototype that reimagines sentiment analysis using AI—something that goes beyond just labeling feedback as “positive” or “negative” and actually uncovers why people feel the way they do. It uses transformer models (DistilBERT, Twitter-RoBERTa, and Multilingual BERT) combined with BERTopic to clus...
2025-05-25T22:05:07
https://v.redd.it/jbp3fr8z303f1
Majestic_Turn3879
v.redd.it
1970-01-01T00:00:00
0
{}
1kvee1v
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/jbp3fr8z303f1/DASHPlaylist.mpd?a=1750802721%2CODNlZDUwM2FkMmJjMzkzOWVkOWNmNjBkNjEzM2FlOTIwOGE1OTFkZjJkODk5NDdmNzQ4YWNiNTg5MjNlNGY5Ng%3D%3D&v=1&f=sd', 'duration': 57, 'fallback_url': 'https://v.redd.it/jbp3fr8z303f1/DASH_720.mp4?source=fallback', 'ha...
t3_1kvee1v
/r/LocalLLaMA/comments/1kvee1v/nextgen_sentiment_analysis_just_got_smarter/
false
false
https://external-preview…c87db75450cfed94
0
{'enabled': False, 'images': [{'id': 'NHVrd3BpNHozMDNmMZ2Dryfthtl7IF6_dzBQvSpDdWPUsSphoWXpK8uxuNF-', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NHVrd3BpNHozMDNmMZ2Dryfthtl7IF6_dzBQvSpDdWPUsSphoWXpK8uxuNF-.png?width=108&crop=smart&format=pjpg&auto=webp&s=d0be7fef153664dccd3e8f709319fbef12678...
M3 Ultra Mac Studio Benchmarks (96gb VRAM, 60 GPU cores)
78
So I recently got the M3 Ultra Mac Studio (96 GB RAM, 60 core GPU). Here's its performance. I loaded each model freshly in LMStudio, and input 30-40k tokens of Lorem Ipsum text (the text itself shouldn't matter, all that matters is token counts) **Benchmarking Results** |Model Name & Size|Time to First Token (s)|Tok...
2025-05-25T21:03:03
https://www.reddit.com/r/LocalLLaMA/comments/1kvd0jr/m3_ultra_mac_studio_benchmarks_96gb_vram_60_gpu/
procraftermc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvd0jr
false
null
t3_1kvd0jr
/r/LocalLLaMA/comments/1kvd0jr/m3_ultra_mac_studio_benchmarks_96gb_vram_60_gpu/
false
false
self
78
null
Ollama on 120 GB VRAM: Threadripper PRO or EPYC?
1
[removed]
2025-05-25T20:52:41
https://www.reddit.com/r/LocalLLaMA/comments/1kvcs0t/ollama_auf_120_gb_vram_threadripper_pro_oder_epyc/
Sky_LLM
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvcs0t
false
null
t3_1kvcs0t
/r/LocalLLaMA/comments/1kvcs0t/ollama_auf_120_gb_vram_threadripper_pro_oder_epyc/
false
false
self
1
null
Qwen2.5-VL and Gemma 3 settings for OCR
11
I have been working with using VLMs to OCR handwriting (think journals, travel logs). I get much better results than traditional OCR, which pretty much fails completely even with tools meant to do better with handwriting. However, results are inconsistent, and changing parameters like temp, repeat-penalty and others ...
2025-05-25T20:50:19
https://www.reddit.com/r/LocalLLaMA/comments/1kvcq04/qwen25vl_and_gemma_3_settings_for_ocr/
dzdn1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvcq04
false
null
t3_1kvcq04
/r/LocalLLaMA/comments/1kvcq04/qwen25vl_and_gemma_3_settings_for_ocr/
false
false
self
11
null
Did I miss something??
1
[removed]
2025-05-25T20:43:10
https://i.redd.it/m91ltdadpz2f1.jpeg
MDPhysicsX
i.redd.it
1970-01-01T00:00:00
0
{}
1kvckae
false
null
t3_1kvckae
/r/LocalLLaMA/comments/1kvckae/did_i_miss_something/
false
false
https://b.thumbs.redditm…tfz2PUz5i-qo.jpg
1
{'enabled': True, 'images': [{'id': 'jQvv6IgyjPqlVN6bbi-EST9vPy3HRdWV5Sp1oDASF-U', 'resolutions': [{'height': 107, 'url': 'https://preview.redd.it/m91ltdadpz2f1.jpeg?width=108&crop=smart&auto=webp&s=03ef3335474364d6bc1bc9d7d17f66dc85ad92b5', 'width': 108}, {'height': 215, 'url': 'https://preview.redd.it/m91ltdadpz2f1.j...
I need a text only browser python library
31
I'm developing an open source AI agent framework with search and eventually web interaction capabilities. To do that I need a browser. While it could be conceivable to just forward a screenshot of the browser it would be much more efficient to introduce the page into the context as text. Ideally I'd have something lik...
2025-05-25T20:36:15
https://i.redd.it/bd5haso3oz2f1.png
Somerandomguy10111
i.redd.it
1970-01-01T00:00:00
0
{}
1kvceya
false
null
t3_1kvceya
/r/LocalLLaMA/comments/1kvceya/i_need_a_text_only_browser_python_library/
false
false
https://a.thumbs.redditm…16auCZoTe410.jpg
31
{'enabled': True, 'images': [{'id': 'CgZ7-edKEWEOQs-68fFSKtJ33eVq_0MEuAqdEhDu9XY', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/bd5haso3oz2f1.png?width=108&crop=smart&auto=webp&s=df693306b8ad7196148c1bbce08aa0224de685d3', 'width': 108}, {'height': 167, 'url': 'https://preview.redd.it/bd5haso3oz2f1.png...
What model would u recommend to fine tune my LLM.
1
[removed]
2025-05-25T20:36:01
https://www.reddit.com/r/LocalLLaMA/comments/1kvces8/what_model_would_u_recommend_to_fine_tune_my_llm/
RealButcher
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kvces8
false
null
t3_1kvces8
/r/LocalLLaMA/comments/1kvces8/what_model_would_u_recommend_to_fine_tune_my_llm/
false
false
self
1
null
Cheapest Ryzen AI Max+ 128GB yet at $1699. Ships June 10th.
213
2025-05-25T20:30:17
https://www.bosgamepc.com/products/bosgame-m5-ai-mini-desktop-ryzen-ai-max-395
fallingdowndizzyvr
bosgamepc.com
1970-01-01T00:00:00
0
{}
1kvc9w6
false
null
t3_1kvc9w6
/r/LocalLLaMA/comments/1kvc9w6/cheapest_ryzen_ai_max_128gb_yet_at_1699_ships/
false
false
default
213
null
I wrote an automated setup script for my Proxmox AI VM that installs Nvidia CUDA Toolkit, Docker, Python, Node, Zsh and more
33
I created a script ([available on Github here](https://gist.github.com/erdaltoprak/cdc1ec4056b81a9da540229dcde3aa0b)) that automates the setup of a fresh Ubuntu 24.04 server for AI/ML development work. It handles the complete installation and configuration of Docker, ZSH, Python (via pyenv), Node (via n), NVIDIA driver...
2025-05-25T20:30:08
https://v.redd.it/e006utmdjz2f1
erdaltoprak
/r/LocalLLaMA/comments/1kvc9ri/i_wrote_an_automated_setup_script_for_my_proxmox/
1970-01-01T00:00:00
0
{}
1kvc9ri
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/e006utmdjz2f1/DASHPlaylist.mpd?a=1750926610%2CMmU4ZjZhMTg0MmFiM2IzNWU0NGM5MTI0ZmI3NDRmMjMxNGI4ZGU4YTg0NWZmZDBmOTU2ZGZhODIzZjNiOGQ4ZA%3D%3D&v=1&f=sd', 'duration': 43, 'fallback_url': 'https://v.redd.it/e006utmdjz2f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kvc9ri
/r/LocalLLaMA/comments/1kvc9ri/i_wrote_an_automated_setup_script_for_my_proxmox/
false
false
https://external-preview…83b596551ce166a1
33
{'enabled': False, 'images': [{'id': 'NmVqMXp0bWRqejJmMd5vTxA1ciqILnZ5a_nUE62vdQvPjJEI_nmjQVTxndSt', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/NmVqMXp0bWRqejJmMd5vTxA1ciqILnZ5a_nUE62vdQvPjJEI_nmjQVTxndSt.png?width=108&crop=smart&format=pjpg&auto=webp&s=912ab98316baac1e1774eea0f8db3cd448b17...