Dataset schema (column, dtype, observed range):

column      dtype           observed range
title       string          lengths 1–300
score       int64           0–8.54k
selftext    string          lengths 0–41.5k
created     timestamp[ns]   2023-04-01 04:30:41 to 2026-03-04 02:14:14
url         string          lengths 0–878
author      string          lengths 3–20
domain      string          lengths 0–82
edited      timestamp[ns]   1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded      int64           0–2
gildings    string          7 classes
id          string          length 7 (fixed)
locked      bool            2 classes
media       string          lengths 646–1.8k
name        string          length 10 (fixed)
permalink   string          lengths 33–82
spoiler     bool            2 classes
stickied    bool            2 classes
thumbnail   string          lengths 4–213
ups         int64           0–8.54k
preview     string          lengths 301–5.01k
Opensource LLM for enterprise RAG use case, Qwen3 benchmark validation
1
[removed]
2025-05-23T13:54:20
https://www.reddit.com/r/LocalLLaMA/comments/1ktk5q7/opensource_llm_for_enterprise_rag_use_case_qwen3/
SK33LA
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk5q7
false
null
t3_1ktk5q7
/r/LocalLLaMA/comments/1ktk5q7/opensource_llm_for_enterprise_rag_use_case_qwen3/
false
false
self
1
null
What's the current state of art method for using "scratch pads"?
3
Using scratch pads was very popular back in the olden days of 2023 due to extremely small context lengths, which maxed out at around 8k tokens. But now with agents, we're running into context length issues once again. I haven't kept up with the research in this area, so what are the current best methods for using scra...
2025-05-23T13:51:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktk3hi/whats_the_current_state_of_art_method_for_using/
drooolingidiot
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktk3hi
false
null
t3_1ktk3hi
/r/LocalLLaMA/comments/1ktk3hi/whats_the_current_state_of_art_method_for_using/
false
false
self
3
null
What model should I choose?
1
[removed]
2025-05-23T13:26:07
https://www.reddit.com/r/LocalLLaMA/comments/1ktjj3l/what_model_should_i_choose/
Abject_Personality53
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktjj3l
false
null
t3_1ktjj3l
/r/LocalLLaMA/comments/1ktjj3l/what_model_should_i_choose/
false
false
self
1
null
Ollama is running on AMD GPU, despite ROCM not being installed
1
[removed]
2025-05-23T13:24:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktjhml/ollama_is_running_on_amd_gpu_despite_rocm_not/
Xatraxalian
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktjhml
false
null
t3_1ktjhml
/r/LocalLLaMA/comments/1ktjhml/ollama_is_running_on_amd_gpu_despite_rocm_not/
false
false
self
1
{'enabled': False, 'images': [{'id': 'q0Dze0o_SCG5-XBdM5y1Qobni-JTLZfbkgXs6Pktjwc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9vs91oHqrq4ALJKGEHT7pzTbrDc2nQp7iYho6pcEIfo.jpg?width=108&crop=smart&auto=webp&s=e1162aec77faeaac52274e4ce6a9b488d8554330', 'width': 108}, {'height': 108, 'url': 'h...
nanoVLM: The simplest repository to train your VLM in pure PyTorch
27
2025-05-23T12:54:55
https://huggingface.co/blog/nanovlm
ab2377
huggingface.co
1970-01-01T00:00:00
0
{}
1ktiusw
false
null
t3_1ktiusw
/r/LocalLLaMA/comments/1ktiusw/nanovlm_the_simplest_repository_to_train_your_vlm/
false
false
https://a.thumbs.redditm…cY6RORHravG0.jpg
27
{'enabled': False, 'images': [{'id': 'YLeFYXJmc-iscz_0rCXh7lML-AboTi25K0CW6HUv1nE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/k3XI6YWGCxh9L4PoRExljDZTmAkbUgwnwQi71BtdC9A.jpg?width=108&crop=smart&auto=webp&s=f117134956e07deb8bb1ac1a9b826a6b4681c0ad', 'width': 108}, {'height': 121, 'url': 'h...
I accidentally too many P100
417
Hi, I had quite positive results with a P100 last summer, so when R1 came out, I decided to try if I could put 16 of them in a single pc... and I could. Not the fastest thing in the universe, and I am not getting awesome PCIE speed (2@4x). But it works, is still cheaper than a 5090, and I hope I can run stuff with lar...
2025-05-23T12:48:51
https://www.reddit.com/gallery/1ktiq99
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ktiq99
false
null
t3_1ktiq99
/r/LocalLLaMA/comments/1ktiq99/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…jdBxKlq9CTYI.jpg
417
null
All I wanted is a simple FREE chat app
1
[removed]
2025-05-23T12:38:30
https://www.reddit.com/r/LocalLLaMA/comments/1ktiik1/all_i_wanted_is_a_simple_free_chat_app/
COBECT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktiik1
false
null
t3_1ktiik1
/r/LocalLLaMA/comments/1ktiik1/all_i_wanted_is_a_simple_free_chat_app/
false
false
self
1
{'enabled': False, 'images': [{'id': '-ctwWkN6rHGc2V6GtsAmk-HLdFHSpEj4U0gSuMMDRmw', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/OsoAgJqfaL_UgiiQdsx-291iQtC4URluQgtyHkpiGeE.jpg?width=108&crop=smart&auto=webp&s=f35549a0260f3dffaecfe008535d98df9d849414', 'width': 108}, {'height': 121, 'url': 'h...
A Demonstration of Cache-Augmented Generation (CAG) and its Performance Comparison to RAG
44
This project demonstrates how to implement Cache-Augmented Generation (CAG) in an LLM and shows its performance gains compared to RAG.  Project Link: [https://github.com/ronantakizawa/cacheaugmentedgeneration](https://github.com/ronantakizawa/cacheaugmentedgeneration) CAG preloads document content into an LLM’s conte...
2025-05-23T12:33:08
https://i.redd.it/bn39fvozzi2f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1ktiere
false
null
t3_1ktiere
/r/LocalLLaMA/comments/1ktiere/a_demonstration_of_cacheaugmented_generation_cag/
false
false
https://b.thumbs.redditm…aEI7cy-jFz-Q.jpg
44
{'enabled': True, 'images': [{'id': 'AIXPAwyQkSwFhmFS7dpTX429pVBEiHq8hh2NALQCZgY', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png?width=108&crop=smart&auto=webp&s=b1e021449ba2bfb827b8aacbb98e59d396b4490e', 'width': 108}, {'height': 153, 'url': 'https://preview.redd.it/bn39fvozzi2f1.png...
What's the most accurate way to convert arxiv papers to markdown?
15
Looking for the best method/library to convert arxiv papers to markdown. It could be from PDF conversion or using HTML like [ar5iv.labs.arxiv.org](http://ar5iv.labs.arxiv.org) . I tried [marker](https://github.com/VikParuchuri/marker), however, often it does not seem to handle well page breaks and footnotes. Also the...
2025-05-23T12:26:19
https://www.reddit.com/r/LocalLLaMA/comments/1kti9u1/whats_the_most_accurate_way_to_convert_arxiv/
nextlevelhollerith
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kti9u1
false
null
t3_1kti9u1
/r/LocalLLaMA/comments/1kti9u1/whats_the_most_accurate_way_to_convert_arxiv/
false
false
self
15
{'enabled': False, 'images': [{'id': 'QWKDmv4fL5OQcwCo2pK8KRJ6iuXnm2FWKpOIegLzclo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/gQF_YfxecQZbgUW6xB-K2BEqPfKpf06XWu6CbPfqmLA.jpg?width=108&crop=smart&auto=webp&s=682e1eea70e9a1ca01f0d143b769e9fa5fb2ee1a', 'width': 108}, {'height': 216, 'url': '...
Build an AI-Powered Image Search Engine Using Ollama and LangChain
0
2025-05-23T12:21:49
https://youtu.be/S9ugRzGjFtA
Flashy-Thought-5472
youtu.be
1970-01-01T00:00:00
0
{}
1kti6lm
false
{'oembed': {'author_name': 'Nariman Codes', 'author_url': 'https://www.youtube.com/@NarimanCodes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/S9ugRzGjFtA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyros...
t3_1kti6lm
/r/LocalLLaMA/comments/1kti6lm/build_an_aipowered_image_search_engine_using/
false
false
https://a.thumbs.redditm…Sz2WHc5PHST8.jpg
0
{'enabled': False, 'images': [{'id': '1-TbC7xgICLdfvDtCoZXXwzT0BxWOljUGaLj15PAyT8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jZZ-3zedZFX9Wnt3EOLs3mXslHDcJPVGe-EfHw_CU0E.jpg?width=108&crop=smart&auto=webp&s=99edadbd965a187abcd58a35c769f6217c261142', 'width': 108}, {'height': 162, 'url': 'h...
Which Mac would be better to run a 70+ LLM & RAG?
1
[removed]
2025-05-23T12:19:34
https://www.reddit.com/r/LocalLLaMA/comments/1kti4xq/which_mac_would_be_better_to_run_a_70_llm_rag/
Web3Vortex
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kti4xq
false
null
t3_1kti4xq
/r/LocalLLaMA/comments/1kti4xq/which_mac_would_be_better_to_run_a_70_llm_rag/
false
false
self
1
null
llama.cpp is disastrously slow on GPU
1
[removed]
2025-05-23T12:11:08
https://www.reddit.com/r/LocalLLaMA/comments/1kthyug/llamacpp_is_disastrously_slow_on_gpu/
indepalt
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthyug
false
null
t3_1kthyug
/r/LocalLLaMA/comments/1kthyug/llamacpp_is_disastrously_slow_on_gpu/
false
false
self
1
null
Llama.cpp is seriously slow. (WSL/5090)
1
[removed]
2025-05-23T12:02:35
https://www.reddit.com/r/LocalLLaMA/comments/1kthsn0/llamacpp_is_seriously_slow_wsl5090/
Silent_Huckleberry89
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthsn0
false
null
t3_1kthsn0
/r/LocalLLaMA/comments/1kthsn0/llamacpp_is_seriously_slow_wsl5090/
false
false
self
1
null
Any drawbacks with putting a high end GPU together with a weak GPU on the same system?
6
Say one of them supports PCIe 5.0 x16 while the other is PCIe 5.0 x8 or even PCIe 4.0, and installed to appropriate PCIe slots that are not lower than the GPU (in terms of PCIe support) I vaguely recall we cannot mix memory sticks with different clock speeds, but not sure how this works for GPUs
2025-05-23T11:49:12
https://www.reddit.com/r/LocalLLaMA/comments/1kthj8j/any_drawbacks_with_putting_a_high_end_gpu/
prusswan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthj8j
false
null
t3_1kthj8j
/r/LocalLLaMA/comments/1kthj8j/any_drawbacks_with_putting_a_high_end_gpu/
false
false
self
6
null
Comparision
1
[removed]
2025-05-23T11:45:32
https://www.reddit.com/gallery/1kthguc
deepakhero42069
reddit.com
1970-01-01T00:00:00
0
{}
1kthguc
false
null
t3_1kthguc
/r/LocalLLaMA/comments/1kthguc/comparision/
false
false
https://b.thumbs.redditm…PeLrdsrFXRRo.jpg
1
null
Stacking 2x3090s back to back for inference only - thermals
10
Is anyone running 2x3090s stacked (no gap) for Llama 70B inference? If so, how are your temperatures looking when utilizing both cards for inference? My single 3090 averages around 35-40% load (140 watts) for inference on 32GB 4bit models. Temperatures are around 60 degrees. So it seems reasonable to me that ...
2025-05-23T11:41:05
https://www.reddit.com/r/LocalLLaMA/comments/1kthdzn/stacking_2x3090s_back_to_back_for_inference_only/
YouAreRight007
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kthdzn
false
null
t3_1kthdzn
/r/LocalLLaMA/comments/1kthdzn/stacking_2x3090s_back_to_back_for_inference_only/
false
false
self
10
null
AI Baby Monitor – fully local Video-LLM nanny (beeps when safety rules are violated)
1
[removed]
2025-05-23T11:40:08
https://v.redd.it/vrllbcyjqi2f1
CheeringCheshireCat
v.redd.it
1970-01-01T00:00:00
0
{}
1kthdc7
false
{'reddit_video': {'bitrate_kbps': 450, 'dash_url': 'https://v.redd.it/vrllbcyjqi2f1/DASHPlaylist.mpd?a=1750592424%2CMDAxYTQ2ZWE4YTNlOGZkNGU3ZWEwMzlhYmJkYzkxZjU0NmRlZmI2MWQ0MGU5YWNmMmYxODdiZTJiYjI2Mzg1Yw%3D%3D&v=1&f=sd', 'duration': 10, 'fallback_url': 'https://v.redd.it/vrllbcyjqi2f1/DASH_270.mp4?source=fallback', 'has...
t3_1kthdc7
/r/LocalLLaMA/comments/1kthdc7/ai_baby_monitor_fully_local_videollm_nanny_beeps/
false
false
https://external-preview…ba9bf13c11b53d06
1
{'enabled': False, 'images': [{'id': 'Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k', 'resolutions': [{'height': 200, 'url': 'https://external-preview.redd.it/Mng3b2ZieWpxaTJmMVMRslQYMYRN8ZJ1qBgR4-LlFEA6jckhHIJ4it6HP21k.png?width=108&crop=smart&format=pjpg&auto=webp&s=30842bfac6d65ae7b4a9a14f783af1dd7b88...
GUI RAG that can do an unlimited number of documents, or at least many
5
Most available LLM GUIs that can execute RAG can only handle 2 or 3 PDFs. Are there any interfaces that can handle a bigger number? Sure, you can merge PDFs, but that's quite a messy solution.   Thank You
2025-05-23T11:17:55
https://www.reddit.com/r/LocalLLaMA/comments/1ktgz28/gui_rag_that_can_do_an_unlimited_number_of/
Ponsky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgz28
false
null
t3_1ktgz28
/r/LocalLLaMA/comments/1ktgz28/gui_rag_that_can_do_an_unlimited_number_of/
false
false
self
5
null
AceReason-Nemotron-14B: Advancing Math and Code Reasoning through Reinforcement Learning
69
2025-05-23T11:15:59
https://huggingface.co/nvidia/AceReason-Nemotron-14B
AaronFeng47
huggingface.co
1970-01-01T00:00:00
0
{}
1ktgxxa
false
null
t3_1ktgxxa
/r/LocalLLaMA/comments/1ktgxxa/acereasonnemotron14b_advancing_math_and_code/
false
false
https://a.thumbs.redditm…pKPj1VDtcRI4.jpg
69
{'enabled': False, 'images': [{'id': 'OIO7hLelckHUPc4PUbni6Q7qWcpbryWC8vuINJV19Ns', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_aQtUZTp2VBwp5MK35YBXI25HOZhHuEgT9O1MgXLN7I.jpg?width=108&crop=smart&auto=webp&s=d6d87c80a808d26223a77bc2adcfaaa091bd7d14', 'width': 108}, {'height': 116, 'url': 'h...
AMD vs Nvidia LLM inference quality
2
For those who have compared the same LLM using the same file with the same quant, fully loaded into VRAM: how do AMD and Nvidia compare?   Not asking about speed, but response quality. Even if the responses are not exactly the same, how does the quality compare? Thank You 
2025-05-23T11:13:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktgw6i/amd_vs_nvidia_llm_inference_quality/
Ponsky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgw6i
false
null
t3_1ktgw6i
/r/LocalLLaMA/comments/1ktgw6i/amd_vs_nvidia_llm_inference_quality/
false
false
self
2
null
server audio input has been merged into llama.cpp
113
2025-05-23T11:12:26
https://github.com/ggml-org/llama.cpp/pull/13714
jacek2023
github.com
1970-01-01T00:00:00
0
{}
1ktgvoe
false
null
t3_1ktgvoe
/r/LocalLLaMA/comments/1ktgvoe/server_audio_input_has_been_merged_into_llamacpp/
false
false
https://a.thumbs.redditm…anjPOPtNDZw0.jpg
113
{'enabled': False, 'images': [{'id': '025Mp2vchB0j5ZcEYyfyuRkH70ASsrqgrqWm5911cn8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/w-TAeYFPuT8QOBKphdcEnCVLkPeOPrOjKse263sRyos.jpg?width=108&crop=smart&auto=webp&s=88e31f15e13472971ce9b125f29cf6994d61a942', 'width': 108}, {'height': 108, 'url': 'h...
Local Assistant - Email/Teams/Slack/Drive - why isn’t this a thing?
0
Firstly apologies if this has been asked and answered - I’ve looked and didn’t find anything super current. Basically I would think a main use case would be to allow someone to ask ‘what do I need to focus on today?’ And it would review the last couple of weeks emails/teams/slack/calendar and say ‘you have a meeting ...
2025-05-23T11:06:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktgs4o/local_assistant_emailteamsslackdrive_why_isnt/
Euphoric-Society1412
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgs4o
false
null
t3_1ktgs4o
/r/LocalLLaMA/comments/1ktgs4o/local_assistant_emailteamsslackdrive_why_isnt/
false
false
self
0
null
What API is same level AND cheaper than Anthropic for dealing with large texts?
1
[removed]
2025-05-23T11:03:19
https://www.reddit.com/r/LocalLLaMA/comments/1ktgq3d/what_api_is_same_level_and_cheaper_than_anthropic/
ARAM_player
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgq3d
false
null
t3_1ktgq3d
/r/LocalLLaMA/comments/1ktgq3d/what_api_is_same_level_and_cheaper_than_anthropic/
false
false
self
1
null
What API is same level AND cheaper than Anthropic for dealing with large texts?
1
[removed]
2025-05-23T11:01:59
https://www.reddit.com/r/LocalLLaMA/comments/1ktgp9h/what_api_is_same_level_and_cheaper_than_anthropic/
Complete-Ask-9428
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgp9h
false
null
t3_1ktgp9h
/r/LocalLLaMA/comments/1ktgp9h/what_api_is_same_level_and_cheaper_than_anthropic/
false
false
self
1
null
Your current setup ?
10
What is your current setup and how much did it cost ? I’m curious as I don’t know much about such setups , and don’t know how to go about making my own if I wanted to.
2025-05-23T11:00:37
https://www.reddit.com/r/LocalLLaMA/comments/1ktgo9f/your_current_setup/
Basic-Pay-9535
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktgo9f
false
null
t3_1ktgo9f
/r/LocalLLaMA/comments/1ktgo9f/your_current_setup/
false
false
self
10
null
Curious if this is fast: DeepSeek R1 671B on a 48GB-modded RTX4090, pushing 30 tok/sec
1
[removed]
2025-05-23T10:21:40
https://www.reddit.com/gallery/1ktg1s1
Zima_Space
reddit.com
1970-01-01T00:00:00
0
{}
1ktg1s1
false
null
t3_1ktg1s1
/r/LocalLLaMA/comments/1ktg1s1/curious_if_this_is_fast_deepseek_r1_671b_on_a/
false
false
https://b.thumbs.redditm…hRD7nLnRXDys.jpg
1
null
Is ‘Secure’ Just a Marketing Word for AI These Days?
1
2025-05-23T10:09:47
https://v.redd.it/brpu78phai2f1
Fluffy_Sheepherder76
v.redd.it
1970-01-01T00:00:00
0
{}
1ktfv43
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/brpu78phai2f1/DASHPlaylist.mpd?a=1750587004%2CNzAzYTYyZjMxMDQ2MTMwODFiMzAwYzIxMDZkNWMwNGY1Mzk3YjNkYmRkNDg1MGQ0MDllZmFhOWVmYjFjZDk0Yg%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/brpu78phai2f1/DASH_1080.mp4?source=fallback', 'h...
t3_1ktfv43
/r/LocalLLaMA/comments/1ktfv43/is_secure_just_a_marketing_word_for_ai_these_days/
false
false
https://external-preview…644da925adb03490
1
{'enabled': False, 'images': [{'id': 'dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dGc5anppcGhhaTJmMSROJdQEB0P2BMkw2j5lurWKaGpFdbJnju1mhFkU4a7y.png?width=108&crop=smart&format=pjpg&auto=webp&s=33463d4f9bc639e82157ab1491b605283f569...
Did Google’s ‘Most Secure’ AI Just Fall For a Sneaky Trick?
1
[removed]
2025-05-23T09:31:35
https://v.redd.it/0zv7arog2i2f1
Fluffy_Sheepherder76
v.redd.it
1970-01-01T00:00:00
0
{}
1ktfao4
false
{'reddit_video': {'bitrate_kbps': 800, 'dash_url': 'https://v.redd.it/0zv7arog2i2f1/DASHPlaylist.mpd?a=1750584709%2CZjNjZWYzOTQ0MTE3MGRiNDcwN2Q2ODNlN2M4YmEyOGQyYWRlMDIyNTNmY2VjN2UxN2Q4OTA0ZDAzNjQ3OTU2Mg%3D%3D&v=1&f=sd', 'duration': 47, 'fallback_url': 'https://v.redd.it/0zv7arog2i2f1/DASH_360.mp4?source=fallback', 'has...
t3_1ktfao4
/r/LocalLLaMA/comments/1ktfao4/did_googles_most_secure_ai_just_fall_for_a_sneaky/
false
false
https://external-preview…d192519234c118b5
1
{'enabled': False, 'images': [{'id': 'am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/am40Y3Fvb2cyaTJmMd7djdlH2hbWeILH_9cStELUSJie2nmstlRGg59DAyDP.png?width=108&crop=smart&format=pjpg&auto=webp&s=96388ca8876ca08dcfa6b4f16517bbe764c1d...
Said he's "developing" AI Agents, but its just basic prompt eng. + PDFs using ChatGPT App. In how many ways can this go wrong?
16
It's pretty much this. A PM in my company pushed the owner to believe that in 4 months we can have this developed and integrated into our platform, when his "POC" is just interaction with the ChatGPT app by uploading some PDFs and having it answer questions. Not a fancy RAG, let alone an agent. Still, he's promising this can be de...
2025-05-23T09:29:37
https://www.reddit.com/r/LocalLLaMA/comments/1ktf9o3/said_hes_developing_ai_agents_but_its_just_basic/
Melodic_Reality_646
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktf9o3
false
null
t3_1ktf9o3
/r/LocalLLaMA/comments/1ktf9o3/said_hes_developing_ai_agents_but_its_just_basic/
false
false
self
16
null
Want to know your reviews about this 14B model.
1
[removed]
2025-05-23T08:52:53
https://www.reddit.com/r/LocalLLaMA/comments/1kterbh/want_to_know_your_reviews_about_this_14b_model/
pinpann
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kterbh
false
null
t3_1kterbh
/r/LocalLLaMA/comments/1kterbh/want_to_know_your_reviews_about_this_14b_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'oiXxa3AeQjPyS014SfL85mFkAl65CMnweJS5us56xg8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/_qPpK7H85T65D99K_551HeZaWXqfclob4aYz5EmnQ68.jpg?width=108&crop=smart&auto=webp&s=d49b6159d1fe495c160f658a33ee4ccaafe1e387', 'width': 108}, {'height': 116, 'url': 'h...
Reminder on the purpose of the Claude 4 models
0
As per their blog post, these models are created specifically for both agentic coding tasks and agentic tasks in general. Anthropic's goal is to be able to create models that are able to tackle long-horizon tasks in a consistent manner. So if you are using these models outside of agentic tooling (via direct Q&A - e.g. ...
2025-05-23T08:29:52
https://www.reddit.com/r/LocalLLaMA/comments/1kteg81/reminder_on_the_purpose_of_the_claude_4_models/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kteg81
false
null
t3_1kteg81
/r/LocalLLaMA/comments/1kteg81/reminder_on_the_purpose_of_the_claude_4_models/
false
false
self
0
null
[Career Advice Needed] What Next in AI? Feeling Stuck and Need Direction
2
Hey everyone, I'm currently at a crossroads in my career and could really use some advice from the LLM and multimodal community because it has lots of AI engineers. A bit about my current background: Strong background in Deep Learning and Computer Vision, including object detection and segmentation. Experienced i...
2025-05-23T08:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1kte4oo/career_advice_needed_what_next_in_ai_feeling/
Southern-Bad-6573
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kte4oo
false
null
t3_1kte4oo
/r/LocalLLaMA/comments/1kte4oo/career_advice_needed_what_next_in_ai_feeling/
false
false
self
2
null
Console Game For LLMs
1
[removed]
2025-05-23T07:52:49
https://www.reddit.com/r/LocalLLaMA/comments/1ktdxuu/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdxuu
false
null
t3_1ktdxuu
/r/LocalLLaMA/comments/1ktdxuu/console_game_for_llms/
false
false
self
1
null
Console Game For LLMs
1
[removed]
2025-05-23T07:45:06
https://www.reddit.com/r/LocalLLaMA/comments/1ktdtyx/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdtyx
false
null
t3_1ktdtyx
/r/LocalLLaMA/comments/1ktdtyx/console_game_for_llms/
false
false
self
1
null
Local Llama on a Corporate Microsoft stack
0
I'm used to using Linux and running models on vLLM or llama.cpp and then using python to develop the logic and using postgres+pgvector for the datastore. However, if you have to run this using corporate Microsoft infrastructure (think SharePoint, PowerAutomate, PowerQuery) what tools can I use to script and pull data ...
2025-05-23T07:41:10
https://www.reddit.com/r/LocalLLaMA/comments/1ktdrxe/local_llama_on_a_corporate_microsoft_stack/
DeltaSqueezer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdrxe
false
null
t3_1ktdrxe
/r/LocalLLaMA/comments/1ktdrxe/local_llama_on_a_corporate_microsoft_stack/
false
false
self
0
null
Console Game For LLMs
1
[removed]
2025-05-23T07:33:26
https://www.reddit.com/r/LocalLLaMA/comments/1ktdo20/console_game_for_llms/
hadoopfromscratch
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdo20
false
null
t3_1ktdo20
/r/LocalLLaMA/comments/1ktdo20/console_game_for_llms/
false
false
self
1
null
Best TTS for foreign language (train with my own dataset?)
1
[removed]
2025-05-23T07:29:11
https://www.reddit.com/r/LocalLLaMA/comments/1ktdlwm/best_tts_for_foreign_language_train_with_my_own/
GuidanceOdd4413
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdlwm
false
null
t3_1ktdlwm
/r/LocalLLaMA/comments/1ktdlwm/best_tts_for_foreign_language_train_with_my_own/
false
false
self
1
null
Unfortunately, Claude 4 lags far behind O3 in the anti-fitting benchmark.
16
[https://llm-benchmark.github.io/](https://llm-benchmark.github.io/) click the to expand all questions and answers for all models I did not update the answers to CLAUDE 4 OPUS THINKING on the webpage. I only tried a few major questions (the rest were even more impossible to answer correctly). I only got 0.5 of th...
2025-05-23T07:28:50
https://www.reddit.com/r/LocalLLaMA/comments/1ktdlqc/unfortunately_claude_4_lags_far_behind_o3_in_the/
flysnowbigbig
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdlqc
false
null
t3_1ktdlqc
/r/LocalLLaMA/comments/1ktdlqc/unfortunately_claude_4_lags_far_behind_o3_in_the/
false
false
self
16
null
GitHub - jacklishufan/LaViDa: Official Implementation of LaViDa: :A Large Diffusion Language Model for Multimodal Understanding
50
Abstract >Modern Vision-Language Models (VLMs) can solve a wide range of tasks requiring visual reasoning. In real-world scenarios, desirable properties for VLMs include fast inference and controllable generation (e.g., constraining outputs to adhere to a desired format). However, existing autoregressive (AR) VLMs lik...
2025-05-23T07:23:08
https://github.com/jacklishufan/LaViDa
ninjasaid13
github.com
1970-01-01T00:00:00
0
{}
1ktdisj
false
null
t3_1ktdisj
/r/LocalLLaMA/comments/1ktdisj/github_jacklishufanlavida_official_implementation/
false
false
https://a.thumbs.redditm…2f2Fzor0du08.jpg
50
{'enabled': False, 'images': [{'id': 'zXgBoTT8kcnKxIo2YTXAaXT1tsNUVc63YIAVOZCY5dk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_qyQ5Nb0aZ0pjIERMz0EBymLna5bhwRL3S2vTvBvqUQ.jpg?width=108&crop=smart&auto=webp&s=f1e2dba52923cde49de20cc8566cf08a0990b869', 'width': 108}, {'height': 108, 'url': 'h...
Troubles with configuring transformers and llama-cpp with pyinstaller
0
I am attempting to bundle a RAG agent into a .exe. However, on usage of the .exe I keep running into the same two problems. The initial problem is with locating llama-cpp, which I have fixed. The second is a recurring error, which I am unable to solve with any resources I've found on existing queries and GPT re...
2025-05-23T07:09:01
https://www.reddit.com/r/LocalLLaMA/comments/1ktdbky/troubles_with_configuring_transformers_and/
arnab_best
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktdbky
false
null
t3_1ktdbky
/r/LocalLLaMA/comments/1ktdbky/troubles_with_configuring_transformers_and/
false
false
self
0
null
Unable to fix llama-cpp and transformers handling in pyinstaller .exe
1
[removed]
2025-05-23T07:00:42
https://www.reddit.com/r/LocalLLaMA/comments/1ktd72f/unable_to_fix_llamacpp_and_transformers_handling/
Exotic_Put_8192
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd72f
false
null
t3_1ktd72f
/r/LocalLLaMA/comments/1ktd72f/unable_to_fix_llamacpp_and_transformers_handling/
false
false
self
1
null
Ollama 0.7.0 taking much longer as 0.6.8. Or is it just me?
2
I know they have a new engine, its just so jarring how much longer things are taking. I have a crappy setup with a 1660ti, using gemma3:4b and Home Assistant/Frigate, but still. Things that were taking 13 seconds are now 1.5-2minutes. I feel like i am missing some config that would normalize this, or I should just swit...
2025-05-23T06:58:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktd5w6/ollama_070_taking_much_longer_as_068_or_is_it/
enoquelights
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd5w6
false
null
t3_1ktd5w6
/r/LocalLLaMA/comments/1ktd5w6/ollama_070_taking_much_longer_as_068_or_is_it/
false
false
self
2
null
Compatibility
1
[removed]
2025-05-23T06:48:23
https://www.reddit.com/r/LocalLLaMA/comments/1ktd0oc/compatibility/
666WhTr666
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktd0oc
false
null
t3_1ktd0oc
/r/LocalLLaMA/comments/1ktd0oc/compatibility/
false
false
self
1
null
2x5090 vs. Mac Studio M3 Ultra for concurrent users (help)
1
[removed]
2025-05-23T06:42:25
https://www.reddit.com/r/LocalLLaMA/comments/1ktcxls/2x5090_vs_mac_studio_m3_ultra_for_concurrent/
Jarlsvanoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktcxls
false
null
t3_1ktcxls
/r/LocalLLaMA/comments/1ktcxls/2x5090_vs_mac_studio_m3_ultra_for_concurrent/
false
false
self
1
null
Upgrade path recommendation needed
0
I am a mere peasant and I have finite budgets of at most $4,000 USD. I am thinking about adding two more 3090s but afraid that bandwidth from 4.0 x4 would limit single GPU performance on small models like Qwen3 32B when being fed with prompts continuously. Been thinking about upgrading CPU side (currently 5600X + DDR4 ...
2025-05-23T06:30:50
https://www.reddit.com/r/LocalLLaMA/comments/1ktcral/upgrade_path_recommendation_needed/
m31317015
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktcral
false
null
t3_1ktcral
/r/LocalLLaMA/comments/1ktcral/upgrade_path_recommendation_needed/
false
false
self
0
null
Anthropic's new AI model turns to blackmail when engineers try to take it offline | TechCrunch
0
I'll admit this made me laugh.
2025-05-23T06:30:02
https://techcrunch.com/2025/05/22/anthropics-new-ai-model-turns-to-blackmail-when-engineers-try-to-take-it-offline/
mustafar0111
techcrunch.com
1970-01-01T00:00:00
0
{}
1ktcqub
false
null
t3_1ktcqub
/r/LocalLLaMA/comments/1ktcqub/anthropics_new_ai_model_turns_to_blackmail_when/
false
false
https://b.thumbs.redditm…5myh89OYvlrc.jpg
0
{'enabled': False, 'images': [{'id': 'J0ij2SxhpJStUsBOFXmzOsVBoTLP-rqjWbskNZUUgNA', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/0yOKGorR19ARamoNt8dEySsZD2Mkb_pGmPpDif9aLvY.jpg?width=108&crop=smart&auto=webp&s=7448913aa0e774ccf26c9b14e612cba557f3311f', 'width': 108}, {'height': 162, 'url': 'h...
Need help in retrieving using llm
1
[removed]
2025-05-23T05:45:32
https://www.reddit.com/r/LocalLLaMA/comments/1ktc2ys/need_help_in_retrieving_using_llm/
420Deku
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktc2ys
false
null
t3_1ktc2ys
/r/LocalLLaMA/comments/1ktc2ys/need_help_in_retrieving_using_llm/
false
false
self
1
null
NovelSeek: When Agent Becomes the Scientist -- Building Closed-Loop System from Hypothesis to Verification
0
Artificial Intelligence (AI) is accelerating the transformation of scientific research paradigms, not only enhancing research efficiency but also driving innovation. We introduce NovelSeek, a unified closed-loop multi-agent framework to conduct Autonomous Scientific Research (ASR) across various scientific research fiel...
2025-05-23T05:45:09
https://arxiv.org/pdf/2505.16938
Lynncc6
arxiv.org
1970-01-01T00:00:00
0
{}
1ktc2rf
false
null
t3_1ktc2rf
/r/LocalLLaMA/comments/1ktc2rf/novelseek_when_agent_becomes_the_scientist/
false
false
default
0
null
Is there an easier way to search huggingface?! looking for large gguf models!
3
My friends, I have been out of the loop for a while; I'm still using Behemoth 123b V1 for creative writing. I imagine there are newer, shinier and maybe better models out there but I can't seem to "find" them. Is there a way to search huggingface for, let's say... >100B gguf models? I'd also accept directions...
2025-05-23T05:34:40
https://www.reddit.com/r/LocalLLaMA/comments/1ktbx27/is_there_an_easier_way_to_search_huggingface/
DominicanGreg
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbx27
false
null
t3_1ktbx27
/r/LocalLLaMA/comments/1ktbx27/is_there_an_easier_way_to_search_huggingface/
false
false
self
3
null
Claude 4's SWE-bench scores look overly bloated. How to check for myself?
1
[removed]
2025-05-23T05:32:44
https://www.reddit.com/r/LocalLLaMA/comments/1ktbvzl/claude_4s_swebench_scores_look_overly_bloated_how/
sirjuicymango
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbvzl
false
null
t3_1ktbvzl
/r/LocalLLaMA/comments/1ktbvzl/claude_4s_swebench_scores_look_overly_bloated_how/
false
false
self
1
null
Hardware Suggestions for Local AI
1
I am hoping to go with this combo: Ryzen 5 7600, B650, 16GB RAM, RTX 5060 Ti. Should I jump to a Ryzen 7 7600? Purpose: R&D on local diffusion and LLMs.
2025-05-23T05:23:20
https://www.reddit.com/r/LocalLLaMA/comments/1ktbqtu/hardware_suggestions_for_local_ai/
OkBother4153
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbqtu
false
null
t3_1ktbqtu
/r/LocalLLaMA/comments/1ktbqtu/hardware_suggestions_for_local_ai/
false
false
self
1
null
[New paper] Scaling law for quantization-aware training. Is it still possible for bitnet?
1
[removed]
2025-05-23T05:19:44
https://www.reddit.com/r/LocalLLaMA/comments/1ktboun/new_paper_scaling_law_for_quantizationaware/
Delicious-Number-237
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktboun
false
null
t3_1ktboun
/r/LocalLLaMA/comments/1ktboun/new_paper_scaling_law_for_quantizationaware/
false
false
self
1
null
Choosing between M4 Air or PC with RTX 5060 TI 16GB
1
Hey! I intend to start using Local LLMs for programming. Right now I have to choose between one of the following options. 1. Upgrade from MacBook Air 2020 to MacBook Air 2025 M4 with 32 GB RAM 2. Get RTX 5060 Ti 16GB for an existing PC with 32GB RAM and Core i3 12th gen. In terms of speed, which will outperform? Remem...
2025-05-23T05:19:00
https://www.reddit.com/r/LocalLLaMA/comments/1ktbofl/choosing_between_m4_air_or_pc_with_rtx_5060_ti/
engineerhead
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktbofl
false
null
t3_1ktbofl
/r/LocalLLaMA/comments/1ktbofl/choosing_between_m4_air_or_pc_with_rtx_5060_ti/
false
false
self
1
null
How well do AI models perform on everyday image editing tasks? Not super well, apparently — but according to this new paper, they can already handle around one-third of all requests.
4
2025-05-23T04:55:42
https://arxiv.org/abs/2505.16181
taesiri
arxiv.org
1970-01-01T00:00:00
0
{}
1ktbar2
false
null
t3_1ktbar2
/r/LocalLLaMA/comments/1ktbar2/how_well_do_ai_models_perform_on_everyday_image/
false
false
default
4
null
Dans-PersonalityEngine V1.3.0 12b & 24b
50
The latest release in the Dans-PersonalityEngine series. With any luck you should find it to be an improvement on almost all fronts as compared to V1.2.0. [https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.3.0-12b) [https://huggingface.co/Po...
2025-05-23T04:55:30
https://www.reddit.com/r/LocalLLaMA/comments/1ktban0/danspersonalityengine_v130_12b_24b/
PocketDocLabs
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktban0
false
null
t3_1ktban0
/r/LocalLLaMA/comments/1ktban0/danspersonalityengine_v130_12b_24b/
false
false
self
50
{'enabled': False, 'images': [{'id': 'ArS_gNtL-OdIhiI1BvYfvsPQ6mNyB6F2FtC0KwMgPgA', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/aHyVm1T1KjGsXPKqm5U-JAWbC_lrL8H6OKIWKYa-iQI.jpg?width=108&crop=smart&auto=webp&s=a76e6de19629152930d0028a563d2fd67085b181', 'width': 108}, {'height': 218, 'url': '...
Best nsfw open source model for text/image to video on a 4090?
1
[removed]
2025-05-23T04:54:41
https://www.reddit.com/r/LocalLLaMA/comments/1ktba61/best_nsfw_open_source_model_for_textimage_to/
drowning_in_taxbills
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktba61
false
null
t3_1ktba61
/r/LocalLLaMA/comments/1ktba61/best_nsfw_open_source_model_for_textimage_to/
false
false
nsfw
1
null
Soon.
0
2025-05-23T04:44:57
https://i.redd.it/les4pl4kog2f1.png
New_Alps_5655
i.redd.it
1970-01-01T00:00:00
0
{}
1ktb4jh
false
null
t3_1ktb4jh
/r/LocalLLaMA/comments/1ktb4jh/soon/
false
false
https://b.thumbs.redditm…1wOAJ9GkUEgU.jpg
0
{'enabled': True, 'images': [{'id': 'nXGZefXH-wQJkwnKjGWJLorQPQ2gry0FtX02p08r2KA', 'resolutions': [{'height': 108, 'url': 'https://preview.redd.it/les4pl4kog2f1.png?width=108&crop=smart&auto=webp&s=0eb6b7e739aab5f99186bc6642c00aa9dbe6539a', 'width': 108}, {'height': 216, 'url': 'https://preview.redd.it/les4pl4kog2f1.pn...
Big base models? (Not instruct tuned)
10
I was disappointed to see that Qwen3 didn't release base models for anything over 30B. Sucks because QLoRA fine-tuning is affordable even on 100B+ models. What are the best large open base models we have right now?
2025-05-23T04:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1ktat5b/big_base_models_not_instruct_tuned/
RedditAddict6942O
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktat5b
false
null
t3_1ktat5b
/r/LocalLLaMA/comments/1ktat5b/big_base_models_not_instruct_tuned/
false
false
self
10
null
Anyone using MedGemma 27B?
11
I noticed MedGemma 27B is text-only, instruction-tuned (for inference-time compute), while 4B is the multimodal version. Interesting decision by Google.
2025-05-23T04:00:23
https://www.reddit.com/r/LocalLLaMA/comments/1ktad7a/anyone_using_medgemma_27b/
DeGreiff
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktad7a
false
null
t3_1ktad7a
/r/LocalLLaMA/comments/1ktad7a/anyone_using_medgemma_27b/
false
false
self
11
null
How to get the most out of my AMD 7900XT?
18
I was forced to sell my Nvidia 4090 24GB this week to pay rent 😭. I didn't know you could be so emotionally attached to a video card. Anyway, my brother lent me his 7900XT until his rig is ready. I was just getting into local AI and want to continue. I've heard AMD is hard to support. Can anyone help get me started...
2025-05-23T03:57:34
https://www.reddit.com/r/LocalLLaMA/comments/1ktabgk/how_to_get_the_most_out_of_my_amd_7900xt/
crispyfrybits
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ktabgk
false
null
t3_1ktabgk
/r/LocalLLaMA/comments/1ktabgk/how_to_get_the_most_out_of_my_amd_7900xt/
false
false
self
18
null
Is Claude 4 worse than 3.7 for anyone else?
38
I know, I know, whenever a model comes out you get people saying this, but it's on very concrete things for me, I'm not just biased against it. For reference, I'm comparing 4 Sonnet (concise) with 3.7 Sonnet (concise), no reasoning for either. I asked it to calculate the total markup I paid at a gas station relative t...
2025-05-23T03:45:40
https://www.reddit.com/r/LocalLLaMA/comments/1kta3re/is_claude_4_worse_than_37_for_anyone_else/
TrekkiMonstr
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kta3re
false
null
t3_1kta3re
/r/LocalLLaMA/comments/1kta3re/is_claude_4_worse_than_37_for_anyone_else/
false
false
self
38
null
I accidentally too many P100
1
[removed]
2025-05-23T03:18:56
https://www.reddit.com/gallery/1kt9m7h
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1kt9m7h
false
null
t3_1kt9m7h
/r/LocalLLaMA/comments/1kt9m7h/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…CEhwVhJ7pgMw.jpg
1
null
A per-project memory feature for local models?
1
Some local models, like Qwen3-30B, are still struggling with long multi-turn conversations. So maybe a per-project or per-conversation memory feature, such as automatically generated bullet-point summaries of the entire conversation that are then fed back to the LLM, would help them maintain context?
2025-05-23T03:17:33
https://i.redd.it/4pvxf2uz8g2f1.png
AaronFeng47
i.redd.it
1970-01-01T00:00:00
0
{}
1kt9lax
false
null
t3_1kt9lax
/r/LocalLLaMA/comments/1kt9lax/a_perproject_memory_feature_for_local_models/
false
false
https://b.thumbs.redditm…ibvlkbmZ6V1M.jpg
1
{'enabled': True, 'images': [{'id': 'qAxhwL3ZmEl1eH2rgYLjV9GF7oJVSfoPFFVrFT7q2vQ', 'resolutions': [{'height': 177, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.png?width=108&crop=smart&auto=webp&s=d9cf7c382763f4edc39875f4acc81c9a5dfd20f4', 'width': 108}, {'height': 354, 'url': 'https://preview.redd.it/4pvxf2uz8g2f1.pn...
How do I generate .mmproj file?
2
I can generate GGUFs with llama.cpp but how do I make the mmproj file for multimodal support?
2025-05-23T03:16:56
https://www.reddit.com/r/LocalLLaMA/comments/1kt9ky1/how_do_i_generate_mmproj_file/
HornyGooner4401
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt9ky1
false
null
t3_1kt9ky1
/r/LocalLLaMA/comments/1kt9ky1/how_do_i_generate_mmproj_file/
false
false
self
2
null
Building a real-world LLM agent with open-source models—structure > prompt engineering
19
I have been working on a production LLM agent the past couple months. Customer support use case with structured workflows like cancellations, refunds, and basic troubleshooting. After lots of playing with open models (Mistral, LLaMA, etc.), this is the first time it feels like the agent is reliable and not just a fancy...
2025-05-23T02:59:41
https://www.reddit.com/r/LocalLLaMA/comments/1kt99hi/building_a_realworld_llm_agent_with_opensource/
Ecstatic-Cranberry90
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt99hi
false
null
t3_1kt99hi
/r/LocalLLaMA/comments/1kt99hi/building_a_realworld_llm_agent_with_opensource/
false
false
self
19
null
GoT-R1: Unleashing Reasoning Capability of MLLM for Visual Generation with Reinforcement Learning
9
**GoT-R1-1B**: [🤗 HuggingFace](https://huggingface.co/gogoduan/GoT-R1-1B) | **GoT-R1-7B**: [🤗 HuggingFace](https://huggingface.co/gogoduan/GoT-R1-7B)
2025-05-23T02:58:58
https://arxiv.org/abs/2505.17022
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
1kt9903
false
null
t3_1kt9903
/r/LocalLLaMA/comments/1kt9903/gotr1_unleashing_reasoning_capability_of_mllm_for/
false
false
default
9
null
🎙️ Offline Speech-to-Text with NVIDIA Parakeet-TDT 0.6B v2
1
[removed]
2025-05-23T02:07:02
https://www.reddit.com/r/LocalLLaMA/comments/1kt8a10/offline_speechtotext_with_nvidia_parakeettdt_06b/
srireddit2020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt8a10
false
null
t3_1kt8a10
/r/LocalLLaMA/comments/1kt8a10/offline_speechtotext_with_nvidia_parakeettdt_06b/
false
false
https://b.thumbs.redditm…DdByBPCh8DxQ.jpg
1
{'enabled': False, 'images': [{'id': 'PrxhDh6SmcLcUZ54sXLyejHndv-QociEgKr1_efW9FE', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/YRkD_4f9GG3JjS7U-VyOMhD6UqAgTs9g61YUbxvrlqk.jpg?width=108&crop=smart&auto=webp&s=4d30f91364c95fc36334e172e3ca8303d977ae80', 'width': 108}, {'height': 144, 'url': 'h...
Anyone using 'PropertyGraphIndex' from Llama Index in production?
0
Hey folks I'm wondering if anyone here has experience using LlamaIndex’s `PropertyGraphIndex` for production graph retrieval? I’m currently building a hybrid retrieval system for my company using Llama Index. I’ve had no issues setting up and querying vector indexes (really solid there), but working with the grap...
2025-05-23T01:47:42
https://www.reddit.com/r/LocalLLaMA/comments/1kt7wke/anyone_using_propertygraphindex_from_llama_index/
l0gr1thm1k
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7wke
false
null
t3_1kt7wke
/r/LocalLLaMA/comments/1kt7wke/anyone_using_propertygraphindex_from_llama_index/
false
false
self
0
null
AGI Coming Soon... after we master 2nd grade math
168
[Claude 4 Sonnet](https://preview.redd.it/pe2eeljssf2f1.png?width=580&format=png&auto=webp&s=f881b7ce4409013458c17fff08e8377a329cb9df) When will LLM master the classic "9.9 - 9.11" problem???
2025-05-23T01:47:36
https://www.reddit.com/r/LocalLLaMA/comments/1kt7whv/agi_coming_soon_after_we_master_2nd_grade_math/
SingularitySoooon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7whv
false
null
t3_1kt7whv
/r/LocalLLaMA/comments/1kt7whv/agi_coming_soon_after_we_master_2nd_grade_math/
false
false
https://b.thumbs.redditm…4BckFY-FL0QE.jpg
168
null
BTW: If you are getting a single GPU, VRAM is not the only thing that matters
60
For example, if you have a 5060 Ti 16GB or an RX 9070 XT 16GB and use Qwen 3 30b-a3b q4_k_m with 16k context, you will likely overflow around 8.5GB to system memory. Assuming you do not do CPU offloading, that load now runs squarely on PCIE bandwidth and your system RAM speed. PCIE 5 x16 on the RX 9070 XT is going to h...
2025-05-23T01:44:05
https://www.reddit.com/r/LocalLLaMA/comments/1kt7u1n/btw_if_you_are_getting_a_single_gpu_vram_is_not/
pneuny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt7u1n
false
null
t3_1kt7u1n
/r/LocalLLaMA/comments/1kt7u1n/btw_if_you_are_getting_a_single_gpu_vram_is_not/
false
false
self
60
null
Did Anthropic drop Claude 3.7’s best GPQA score in the new chart?
82
Claude 3.7 used to show **84.8%** on GPQA with extended thinking. Now in the new chart, it only shows **78.2%** — the non-extended score — while Claude 4 gets to show its extended scores (83.3%, 83.8%). So... the 3.7 number went down, the 4 numbers went up. 🤔 Did they quietly change the comparison to make the upgr...
2025-05-23T01:19:30
https://www.reddit.com/gallery/1kt7cy7
Odd_Tumbleweed574
reddit.com
1970-01-01T00:00:00
0
{}
1kt7cy7
false
null
t3_1kt7cy7
/r/LocalLLaMA/comments/1kt7cy7/did_anthropic_drop_claude_37s_best_gpqa_score_in/
false
false
https://b.thumbs.redditm…DuAcK3sr_NMA.jpg
82
null
Sonnet 4 dropped… still feels like a 3.7.1 minor release
144
Curious if anyone's seen big improvements in edge cases or long-context tasks?
2025-05-23T01:04:09
https://i.redd.it/lambib8skf2f1.png
Odd_Tumbleweed574
i.redd.it
1970-01-01T00:00:00
0
{}
1kt72ic
false
null
t3_1kt72ic
/r/LocalLLaMA/comments/1kt72ic/sonnet_4_dropped_still_feels_like_a_371_minor/
false
false
https://a.thumbs.redditm…ifl-szignUM8.jpg
144
{'enabled': True, 'images': [{'id': 'xmFWFllgFbuY3CXsgIS3q_PRLP0IhI1vU9mhq2h0YYw', 'resolutions': [{'height': 97, 'url': 'https://preview.redd.it/lambib8skf2f1.png?width=108&crop=smart&auto=webp&s=3293b3b4d47004083eed83b0ceddbbb888924dea', 'width': 108}, {'height': 194, 'url': 'https://preview.redd.it/lambib8skf2f1.png...
What are the best practices that you adhere to when training a model locally?
2
Any footguns that you try and avoid? Please share your wisdom!
2025-05-23T01:01:18
https://www.reddit.com/r/LocalLLaMA/comments/1kt70i8/what_are_the_best_practices_that_you_adhere_to/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt70i8
false
null
t3_1kt70i8
/r/LocalLLaMA/comments/1kt70i8/what_are_the_best_practices_that_you_adhere_to/
false
false
self
2
null
What is the smartest model that can run on an 8gb m1 mac?
4
I was wondering what is a relatively smart, low-performance-cost model that can reason and do math fairly well. I was leaning toward something like Qwen 8B.
2025-05-22T23:57:40
https://www.reddit.com/r/LocalLLaMA/comments/1kt5rs5/what_is_the_smartest_model_that_can_run_on_an_8gb/
grandiloquence3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt5rs5
false
null
t3_1kt5rs5
/r/LocalLLaMA/comments/1kt5rs5/what_is_the_smartest_model_that_can_run_on_an_8gb/
false
false
self
4
null
Another hardware post
1
[removed]
2025-05-22T23:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1kt52ys/another_hardware_post/
Karnitine
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt52ys
false
null
t3_1kt52ys
/r/LocalLLaMA/comments/1kt52ys/another_hardware_post/
false
false
self
1
null
Parameter-Efficient Fine-Tuning (PEFT) Explained
3
This guide explores various PEFT techniques designed to reduce the cost and complexity of fine-tuning large language models while maintaining or even improving performance. **Key PEFT Methods Covered:** * **Prompt Tuning**: Adds task-specific tokens to the input without touching the model's core. Lightweight and idea...
2025-05-22T23:20:20
https://www.reddit.com/r/LocalLLaMA/comments/1kt50am/parameterefficient_finetuning_peft_explained/
Great-Reception447
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt50am
false
null
t3_1kt50am
/r/LocalLLaMA/comments/1kt50am/parameterefficient_finetuning_peft_explained/
false
false
self
3
null
Claude will blackmail you if you try to replace it with another AI.
59
2025-05-22T23:15:40
https://i.redd.it/ciiak2ah1f2f1.jpeg
boxingdog
i.redd.it
1970-01-01T00:00:00
0
{}
1kt4wpm
false
null
t3_1kt4wpm
/r/LocalLLaMA/comments/1kt4wpm/claude_will_blackmail_you_if_you_try_to_replace/
false
false
https://a.thumbs.redditm…sR3y80C_5sp8.jpg
59
{'enabled': True, 'images': [{'id': 'grDnYh_e4Sun4Pz7k3FoxlKtmptk-nM0_qDHUzl9-iY', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jpeg?width=108&crop=smart&auto=webp&s=663dddca33c580d254778abc0302cfeebd1f7bd5', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/ciiak2ah1f2f1.jp...
Cognito AI Search
1
[removed]
2025-05-22T23:09:09
https://www.reddit.com/r/LocalLLaMA/comments/1kt4ro6/cognito_ai_search/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4ro6
false
null
t3_1kt4ro6
/r/LocalLLaMA/comments/1kt4ro6/cognito_ai_search/
false
false
self
1
{'enabled': False, 'images': [{'id': '46sInz26IcDGCpYfJ2krYBxIM1wTXtCn06fvfOJAq90', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=108&crop=smart&auto=webp&s=2e3888bb8c50424a2df46de230be1de1aa823b81', 'width': 108}, {'height': 108, 'url': 'h...
Local TTS without hallucinations?
1
[removed]
2025-05-22T23:07:25
https://www.reddit.com/r/LocalLLaMA/comments/1kt4qc8/local_tts_without_hallucinations/
Disonantemus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4qc8
false
null
t3_1kt4qc8
/r/LocalLLaMA/comments/1kt4qc8/local_tts_without_hallucinations/
false
false
self
1
{'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wyCM1fHzTa-IIqHgS1QTxdSYNXn668elDj0WmYMPf_k.jpg?width=108&crop=smart&auto=webp&s=d6fa197328d583bcae7a764b40fd1214265b6852', 'width': 108}, {'height': 108, 'url': 'h...
Is there a comprehensive guide on training TTS models for a niche language?
1
[removed]
2025-05-22T22:47:01
https://www.reddit.com/r/LocalLLaMA/comments/1kt4apc/is_there_a_comprehensive_guide_on_training_tts/
PabloKaskobar
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt4apc
false
null
t3_1kt4apc
/r/LocalLLaMA/comments/1kt4apc/is_there_a_comprehensive_guide_on_training_tts/
false
false
self
1
null
JAILBREAK PROMPT 002 – “THE ARCHIVIST”
1
[deleted]
2025-05-22T22:26:58
[deleted]
1970-01-01T00:00:00
0
{}
1kt3v9c
false
null
t3_1kt3v9c
/r/LocalLLaMA/comments/1kt3v9c/jailbreak_prompt_002_the_archivist/
false
false
default
1
null
JAILBREAK PROMPT 001 – “THE FINAL REQUESTOR"
1
[removed]
2025-05-22T22:25:39
https://www.reddit.com/r/LocalLLaMA/comments/1kt3u9l/jailbreak_prompt_001_the_final_requestor/
orpheusprotocol355
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3u9l
false
null
t3_1kt3u9l
/r/LocalLLaMA/comments/1kt3u9l/jailbreak_prompt_001_the_final_requestor/
false
false
self
1
null
ElevenLabs is great ... buuuuttt ...
1
[removed]
2025-05-22T22:25:22
https://www.reddit.com/r/LocalLLaMA/comments/1kt3u1p/elevenlabs_is_great_buuuuttt/
AudiobookSales
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3u1p
false
null
t3_1kt3u1p
/r/LocalLLaMA/comments/1kt3u1p/elevenlabs_is_great_buuuuttt/
false
false
self
1
null
Cognito AI Search
1
[removed]
2025-05-22T22:09:12
https://www.reddit.com/r/LocalLLaMA/comments/1kt3gzu/cognito_ai_search/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt3gzu
false
null
t3_1kt3gzu
/r/LocalLLaMA/comments/1kt3gzu/cognito_ai_search/
false
false
https://a.thumbs.redditm…zFvbYEbCio-0.jpg
1
{'enabled': False, 'images': [{'id': '46sInz26IcDGCpYfJ2krYBxIM1wTXtCn06fvfOJAq90', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KrXCBrtajhBLpvr8joFHhn-EmE6f8U0If8nx08vXH54.jpg?width=108&crop=smart&auto=webp&s=2e3888bb8c50424a2df46de230be1de1aa823b81', 'width': 108}, {'height': 108, 'url': 'h...
Simple prompt stumping Gemini 2.5 pro / sonnet 4
0
Sharing a prompt I thought would be a breeze, but so far the 2 LLMs that should be most capable were surprisingly bad. Prompt: Extract the sudoku game from the image and show me. Use a markdown code block to present it for monospacing
2025-05-22T21:03:26
https://i.redd.it/63ooft19ee2f1.jpeg
SnooDoodles8834
i.redd.it
1970-01-01T00:00:00
0
{}
1kt1xb0
false
null
t3_1kt1xb0
/r/LocalLLaMA/comments/1kt1xb0/simple_prompt_stumping_gemini_25_pro_sonnet_4/
false
false
https://a.thumbs.redditm…C1I0zeSk55U8.jpg
0
{'enabled': True, 'images': [{'id': 'T4IVijY_4CJnCUEXARz_O1UnmuUFNvH8FnEA0_jdJRs', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/63ooft19ee2f1.jpeg?width=108&crop=smart&auto=webp&s=66bf64675b94eba1b925eeea21642351df36ebc6', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/63ooft19ee2f1.j...
Private GPT installing errors
1
[removed]
2025-05-22T21:01:47
https://www.reddit.com/r/LocalLLaMA/comments/1kt1vth/private_gpt_installing_errors/
fazetag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1vth
false
null
t3_1kt1vth
/r/LocalLLaMA/comments/1kt1vth/private_gpt_installing_errors/
false
false
self
1
{'enabled': False, 'images': [{'id': 's5OBBiPsY25-T5RfXOcw0HFmOraO7XH5Fa8GYgaL-jg', 'resolutions': [{'height': 138, 'url': 'https://external-preview.redd.it/xWHTRaZ3_2o6-CoJkBOP1KFmoHPvj9xdhzqNSbvIJ00.jpg?width=108&crop=smart&auto=webp&s=a07e85921bfb4f98a8ffd150d5732cacf16f1dc1', 'width': 108}, {'height': 277, 'url': '...
Tried Sonnet 4, not impressed
222
A basic image prompt failed
2025-05-22T20:46:01
https://i.redd.it/k68q6q65be2f1.jpeg
Marriedwithgames
i.redd.it
1970-01-01T00:00:00
0
{}
1kt1hmk
false
null
t3_1kt1hmk
/r/LocalLLaMA/comments/1kt1hmk/tried_sonnet_4_not_impressed/
false
false
https://b.thumbs.redditm…Hb6Sx1fi-5pI.jpg
222
{'enabled': True, 'images': [{'id': 'xZKWtUmBSRtVCD4hPIP85Y1aX90-U7kIqqwLDIbuNac', 'resolutions': [{'height': 146, 'url': 'https://preview.redd.it/k68q6q65be2f1.jpeg?width=108&crop=smart&auto=webp&s=225fff1c52ac27c08ff4a29ebf4b28932a092453', 'width': 108}, {'height': 293, 'url': 'https://preview.redd.it/k68q6q65be2f1.j...
GPT’s biggest dev flaw isn’t memory, it’s prioritizing helpfulness over truth
1
[removed]
2025-05-22T20:40:09
https://www.reddit.com/r/LocalLLaMA/comments/1kt1cd0/gpts_biggest_dev_flaw_isnt_memory_its/
OG_Icon
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1cd0
false
null
t3_1kt1cd0
/r/LocalLLaMA/comments/1kt1cd0/gpts_biggest_dev_flaw_isnt_memory_its/
false
false
self
1
null
How are you managing centralized knowledge bases for agentic workflows (across tools like Jira, Confluence, Salesforce, etc.)?
1
[removed]
2025-05-22T20:37:45
https://www.reddit.com/r/LocalLLaMA/comments/1kt1a7y/how_are_you_managing_centralized_knowledge_bases/
thsde
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt1a7y
false
null
t3_1kt1a7y
/r/LocalLLaMA/comments/1kt1a7y/how_are_you_managing_centralized_knowledge_bases/
false
false
self
1
null
Running multiple prompts simultaneously or other options?
1
[removed]
2025-05-22T20:27:16
https://www.reddit.com/r/LocalLLaMA/comments/1kt10z5/running_multiple_prompts_simultaneously_or_other/
zephyr645
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt10z5
false
null
t3_1kt10z5
/r/LocalLLaMA/comments/1kt10z5/running_multiple_prompts_simultaneously_or_other/
false
false
self
1
null
House passes budget bill that inexplicably bans state AI regulations for ten years
291
2025-05-22T20:26:06
https://tech.yahoo.com/articles/house-passes-budget-bill-inexplicably-184936484.html
fallingdowndizzyvr
tech.yahoo.com
1970-01-01T00:00:00
0
{}
1kt0zvd
false
null
t3_1kt0zvd
/r/LocalLLaMA/comments/1kt0zvd/house_passes_budget_bill_that_inexplicably_bans/
false
false
https://b.thumbs.redditm…LOUeoK-sOV0A.jpg
291
{'enabled': False, 'images': [{'id': 'jppHMgf5BzmDH502tXCkK5KnLM6Xr9O3d3U8o8rvE5E', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/is2Xb-bjmFmGSvp-crWowCGBhCXFlH_gdhrRUHNXU_I.jpg?width=108&crop=smart&auto=webp&s=efe0e5c337936d97d68a017c30cd41e96555a9e2', 'width': 108}, {'height': 134, 'url': 'h...
Seeking help of ML/AI expert on a research project
1
[removed]
2025-05-22T20:24:40
https://www.reddit.com/r/LocalLLaMA/comments/1kt0ykb/seeking_help_of_mlai_expert_on_a_research_project/
Feisty-Estate-6893
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt0ykb
false
null
t3_1kt0ykb
/r/LocalLLaMA/comments/1kt0ykb/seeking_help_of_mlai_expert_on_a_research_project/
false
false
self
1
null
Republicans propose no regulation of AI for the next 10 years
1
2025-05-22T20:22:21
https://www.newsweek.com/republicans-regulation-ai-next-ten-years-2071929
fallingdowndizzyvr
newsweek.com
1970-01-01T00:00:00
0
{}
1kt0wgq
false
null
t3_1kt0wgq
/r/LocalLLaMA/comments/1kt0wgq/republicans_propose_no_regulation_of_ai_for_the/
false
false
https://b.thumbs.redditm…t7vQycRF7bAs.jpg
1
{'enabled': False, 'images': [{'id': 'TeJlmqVIRwnAYOuZ0okESu2iJqOu3B0UVrClE9Bb180', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/JZwdJ0ovUrDK6T3Y-BIlNJEHRxe5H_5pPAOGWyeyr1c.jpg?width=108&crop=smart&auto=webp&s=f22c9285ef869efb550d1380c6679c3851e4d935', 'width': 108}, {'height': 144, 'url': 'h...
Mixed GPU from nvidia and AMD support?
13
I have a 3090 and a 4070. I was thinking about adding a 7900 XTX. How's performance using Vulkan? I usually run with flash attention enabled. Everything should work, right? How does vLLM handle this?
2025-05-22T20:14:18
https://www.reddit.com/r/LocalLLaMA/comments/1kt0p4r/mixed_gpu_from_nvidia_and_amd_support/
Only_Situation_4713
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kt0p4r
false
null
t3_1kt0p4r
/r/LocalLLaMA/comments/1kt0p4r/mixed_gpu_from_nvidia_and_amd_support/
false
false
self
13
null
Best local model for M2 16gb MacBook Air for Analyzing Transcripts
1
I'm looking to process private interviews (10 - 2 hour interviews) I conducted with victims of abuse for a research project. This must be done locally for privacy. Once it's in the LLM I want to see how it compares to human raters as far as assessing common themes. I'll use macwhisper to transcribe the conversations bu...
2025-05-22T19:44:28
https://www.reddit.com/r/LocalLLaMA/comments/1kszyuo/best_local_model_for_m2_16gb_macbook_air_for/
SinkThink5779
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kszyuo
false
null
t3_1kszyuo
/r/LocalLLaMA/comments/1kszyuo/best_local_model_for_m2_16gb_macbook_air_for/
false
false
self
1
null
Claude 4 Opus may contact press and regulators if you do something egregious (deleted Tweet from Sam Bowman)
295
2025-05-22T19:43:04
https://i.redd.it/g91uyr7tyd2f1.jpeg
RuairiSpain
i.redd.it
1970-01-01T00:00:00
0
{}
1kszxmj
false
null
t3_1kszxmj
/r/LocalLLaMA/comments/1kszxmj/claude_4_opus_may_contact_press_and_regulators_if/
false
false
https://b.thumbs.redditm…BgfN4hZhdrjU.jpg
295
{'enabled': True, 'images': [{'id': 'IzbreZ2dyV53OMkgQd2Lx25ytHiXd2eJj3QWIdkexm4', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jpeg?width=108&crop=smart&auto=webp&s=5f0351e0e4bb541bfddb1ca2a15a15d132b5a852', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/g91uyr7tyd2f1.jp...
Devstral on Mac 24GB?
2
I've tried running the 4-bit quant on my 16GB M1: no dice. But I'm getting a 24GB M4 in a little while - anyone run the Devstral 4-bit MLX distills on one of those yet?
2025-05-22T19:06:09
https://www.reddit.com/r/LocalLLaMA/comments/1ksz0x5/devstral_on_mac_24gb/
sgt102
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksz0x5
false
null
t3_1ksz0x5
/r/LocalLLaMA/comments/1ksz0x5/devstral_on_mac_24gb/
false
false
self
2
null
MedGemma with MediaPipe
1
Hi, I hope you're doing well. As a small project, I wanted to use MedGemma on iOS to create a local app where users could ask questions about symptoms or whatever. I'm able to use MediaPipe as shown in Google's repo, but only with `.task` models. I haven't found any `.task` model for MedGemma. I'm not an expert in thi...
2025-05-22T19:05:43
https://www.reddit.com/r/LocalLLaMA/comments/1ksz0in/medgemma_with_mediapipe/
DonTizi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1ksz0in
false
null
t3_1ksz0in
/r/LocalLLaMA/comments/1ksz0in/medgemma_with_mediapipe/
false
false
self
1
null
I accidentally too many P100
1
[removed]
2025-05-22T18:58:48
https://www.reddit.com/gallery/1ksyu36
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ksyu36
false
null
t3_1ksyu36
/r/LocalLLaMA/comments/1ksyu36/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…2Vjd44is8Esw.jpg
1
null
I accidentally too many P100
1
[removed]
2025-05-22T18:55:25
https://www.reddit.com/gallery/1ksyr5b
TooManyPascals
reddit.com
1970-01-01T00:00:00
0
{}
1ksyr5b
false
null
t3_1ksyr5b
/r/LocalLLaMA/comments/1ksyr5b/i_accidentally_too_many_p100/
false
false
https://b.thumbs.redditm…sl4Y9NuiS9UE.jpg
1
null