Dataset schema (20 columns; stringlengths gives min/max value length, int64 gives value range):

column      dtype               min                    max
title       stringlengths       1                      300
score       int64               0                      8.54k
selftext    stringlengths       0                      41.5k
created     timestamp[ns]date   2023-04-01 04:30:41    2026-03-04 02:14:14
url         stringlengths       0                      878
author      stringlengths       3                      20
domain      stringlengths       0                      82
edited      timestamp[ns]date   1970-01-01 00:00:00    2026-02-19 14:51:53
gilded      int64               0                      2
gildings    stringclasses       7 values
id          stringlengths       7                      7
locked      bool                2 classes
media       stringlengths       646                    1.8k
name        stringlengths       10                     10
permalink   stringlengths       33                     82
spoiler     bool                2 classes
stickied    bool                2 classes
thumbnail   stringlengths       4                      213
ups         int64               0                      8.54k
preview     stringlengths       301                    5.01k

title: Any luck in running gemma 3n model locally on iphone with react native?
score: 1
selftext: [removed]
created: 2025-05-28T13:00:28
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxggsd/any_luck_in_running_gemma_3n_model_locally_on/
author: Ordinary_Emu8014
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxggsd
locked: false
media: null
name: t3_1kxggsd
permalink: /r/LocalLLaMA/comments/1kxggsd/any_luck_in_running_gemma_3n_model_locally_on/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: New DeepseekV3 as well
score: 28
selftext: New V3! https://preview.redd.it/wjoiebx5ti3f1.jpg?width=1280&format=pjpg&auto=webp&s=11bcdcd461259d9329165669759f04fb531ee79c
created: 2025-05-28T12:58:33
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxgfbj/new_deepseekv3_as_well/
author: shing3232
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxgfbj
locked: false
media: null
name: t3_1kxgfbj
permalink: /r/LocalLLaMA/comments/1kxgfbj/new_deepseekv3_as_well/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…Zf4MWi32_rDs.jpg
ups: 28
preview: null

title: Deepsee launch new DSv3 as well
score: 1
selftext: Updated V3 as well https://preview.redd.it/bjulgxp0ti3f1.jpg?width=1280&format=pjpg&auto=webp&s=1cc6b4bced8b86c8ab1fe761637e70138e3b9fbf
created: 2025-05-28T12:57:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxgeje/deepsee_launch_new_dsv3_as_well/
author: shing3232
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxgeje
locked: false
media: null
name: t3_1kxgeje
permalink: /r/LocalLLaMA/comments/1kxgeje/deepsee_launch_new_dsv3_as_well/
spoiler: false
stickied: false
thumbnail: https://a.thumbs.redditm…0GxBXWgi1T14.jpg
ups: 1
preview: null

title: vLLM Classify Bad Results
score: 10
selftext: Has anyone used vLLM for classification? I have a fine-tuned modernBERT model with 5 classes. During model training, the best model shows a .78 F1 score. After the model is trained, I passed the test set through vLLM and Hugging Face pipelines as a test and get the screenshot above. Hugging Face pipeline matches the...
created: 2025-05-28T12:50:16
url: https://i.redd.it/d9tr89iqri3f1.png
author: Upstairs-Garlic-2301
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxg95a
locked: false
media: null
name: t3_1kxg95a
permalink: /r/LocalLLaMA/comments/1kxg95a/vllm_classify_bad_results/
spoiler: false
stickied: false
thumbnail: https://a.thumbs.redditm…Ck4KaJ24wQ08.jpg
ups: 10
preview: {'enabled': True, 'images': [{'id': 'LFJY147gafAUNJLufVK9T3r-EJAnH4-brIcaHpIXMMk', 'resolutions': [{'height': 124, 'url': 'https://preview.redd.it/d9tr89iqri3f1.png?width=108&crop=smart&auto=webp&s=156724ca5fdcba587564732869494a90061d253c', 'width': 108}, {'height': 249, 'url': 'https://preview.redd.it/d9tr89iqri3f1.pn...

title: Old model, new implementation
score: 7
selftext: [chatllm.cpp](https://github.com/foldl/chatllm.cpp) implements this model as the 1st supported vision model. I have search this group. Not many have tested this model due to lack of support from llama.cpp. Now, would you like to try this model?
created: 2025-05-28T12:24:34
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxfq8r/old_model_new_implementation/
author: foldl-li
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxfq8r
locked: false
media: null
name: t3_1kxfq8r
permalink: /r/LocalLLaMA/comments/1kxfq8r/old_model_new_implementation/
spoiler: false
stickied: false
thumbnail: self
ups: 7
preview: {'enabled': False, 'images': [{'id': 'RumUH0pn0UD5ECnIUOzs1g7sbyxOEntl4lbZSZpwHSc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1jcE0HdHUPFbZD9hiXaNrwsonKJdXFdcUjqkV5AiFOo.jpg?width=108&crop=smart&auto=webp&s=1acef437405ecf1a87bf1f70dbdfe8a7f73af9f4', 'width': 108}, {'height': 108, 'url': 'h...

title: Best budget GPU for running a local model+occasional gaming?
score: 0
selftext: Hey. My intention is to run LLama and/or DeepSeek locally on my unraid server while occasionally still gaming now and then when not in use for AI. Case can fit up to 290mm cards otherwise I'd of gotten a used 3090. I've been looking at 5060 16GB, would that be a decent card? Or would going for a 5070 16gb be a better...
created: 2025-05-28T11:59:13
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxf7z2/best_budget_gpu_for_running_a_local/
author: answerencr
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxf7z2
locked: false
media: null
name: t3_1kxf7z2
permalink: /r/LocalLLaMA/comments/1kxf7z2/best_budget_gpu_for_running_a_local/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Running Llama 3.2-1b on my android through Pocketpal
score: 1
selftext:
created: 2025-05-28T11:53:01
url: https://v.redd.it/rl8aqet2hi3f1
author: EmployeeLogical5051
domain: v.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxf3v0
locked: false
media: {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/rl8aqet2hi3f1/DASHPlaylist.mpd?a=1751025194%2CZjIwNGRkOThhNmRlNTg0YTE3NDBjNTk1OTI4MTYzMjllODFjMTRiYWYwZWJlZTA5ZmI4NDJmYzIwNTQ4ODNhYg%3D%3D&v=1&f=sd', 'duration': 18, 'fallback_url': 'https://v.redd.it/rl8aqet2hi3f1/DASH_720.mp4?source=fallback', 'ha...
name: t3_1kxf3v0
permalink: /r/LocalLLaMA/comments/1kxf3v0/running_llama_321b_on_my_android_through_pocketpal/
spoiler: false
stickied: false
thumbnail: https://external-preview…804a325282d2bf75
ups: 1
preview: {'enabled': False, 'images': [{'id': 'MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/MzduaTVvdTJoaTNmMZszMcdNxuXWeErgnsrIuHYF1y85A4lZHEELgSwMS8lM.png?width=108&crop=smart&format=pjpg&auto=webp&s=b33637624425e0ef1e7024c28b43152a6668...

title: Upgrading from RTX 4060 to 3090
score: 3
selftext: Hi guys I am planning to upgrade from a 4060 to a 3090 to triple the VRAM and be able to run Qwen 3 30b or 32b, but I noticed that the 3090 has 2 power connections instead of one like my 4060. I have a cable that already has 2 endings, do I have to worry about anything else, or can I just slot the new one right in and ...
created: 2025-05-28T11:49:51
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxf1q7/upgrading_from_rtx_4060_to_3090/
author: ElekDn
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxf1q7
locked: false
media: null
name: t3_1kxf1q7
permalink: /r/LocalLLaMA/comments/1kxf1q7/upgrading_from_rtx_4060_to_3090/
spoiler: false
stickied: false
thumbnail: self
ups: 3
preview: null

title: Parakeet-TDT 0.6B v2 FastAPI STT Service (OpenAI-style API + Experimental Streaming)
score: 26
selftext: Hi! I'm (finally) releasing a FastAPI wrapper around NVIDIA’s Parakeet-TDT 0.6B v2 ASR model with: * REST `/transcribe` endpoint with optional timestamps * Health & debug endpoints: `/healthz`, `/debug/cfg` * Experimental WebSocket `/ws` for real-time PCM streaming and partial/full transcripts GitHub: [https://github...
created: 2025-05-28T11:47:57
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxf0ig/parakeettdt_06b_v2_fastapi_stt_service/
author: Shadowfita
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxf0ig
locked: false
media: null
name: t3_1kxf0ig
permalink: /r/LocalLLaMA/comments/1kxf0ig/parakeettdt_06b_v2_fastapi_stt_service/
spoiler: false
stickied: false
thumbnail: self
ups: 26
preview: {'enabled': False, 'images': [{'id': 'Yor2PiK5DIoagga8Ef6-su6OIlt5qBUMgWamr45nJVw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xJ17kcz6rkaNCdYXDG-VzsbOMWH2mcJrp4Pb3HyF9mk.jpg?width=108&crop=smart&auto=webp&s=f4be908586b4b521b8a363c5ca70fd5feec2349b', 'width': 108}, {'height': 108, 'url': 'h...

title: Seeking Help Setting Up a Local LLM Assistant for TTRPG Worldbuilding + RAG on Windows 11
score: 5
selftext: Hey everyone! I'm looking for some guidance on setting up a local LLM to help with **TTRPG worldbuilding and running games** (like D&D or other systems). I want to be able to: - Generate and roleplay NPCs - Write world lore collaboratively - Answer rules questions from PDFs - Query my own documents (lore, setting info...
created: 2025-05-28T11:16:50
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxeg6a/seeking_help_setting_up_a_local_llm_assistant_for/
author: TheArchivist314
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxeg6a
locked: false
media: null
name: t3_1kxeg6a
permalink: /r/LocalLLaMA/comments/1kxeg6a/seeking_help_setting_up_a_local_llm_assistant_for/
spoiler: false
stickied: false
thumbnail: self
ups: 5
preview: null

title: DeepSeek Announces Upgrade, Possibly Launching New Model Similar to 0324
score: 315
selftext: The official DeepSeek group has issued an announcement claiming an upgrade, possibly a new model similar to the 0324 version.
created: 2025-05-28T10:25:48
url: https://www.reddit.com/gallery/1kxdm2z
author: luckbossx
domain: reddit.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdm2z
locked: false
media: null
name: t3_1kxdm2z
permalink: /r/LocalLLaMA/comments/1kxdm2z/deepseek_announces_upgrade_possibly_launching_new/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…iw9gp50Wdl_Q.jpg
ups: 315
preview: null

title: Cobolt is now available on Linux! 🎉
score: 66
selftext: Remember when we said Cobolt is "Powered by community-driven development"? After our [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and) about Cobolt – **our local, private, and personalized AI assistant** – the call for Linux support was overwhelming. Wel...
created: 2025-05-28T10:23:07
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxdkms/cobolt_is_now_available_on_linux/
author: ice-url
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdkms
locked: false
media: null
name: t3_1kxdkms
permalink: /r/LocalLLaMA/comments/1kxdkms/cobolt_is_now_available_on_linux/
spoiler: false
stickied: false
thumbnail: self
ups: 66
preview: {'enabled': False, 'images': [{'id': 'NNzbw1x8m7xNfFARrLCIjGSKnOdC-deMYYvKqIswEDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=108&crop=smart&auto=webp&s=bd22f5d1f9db29cb0c5495f8f3d66125022ea1a6', 'width': 108}, {'height': 108, 'url': 'h...

title: Cobolt is now available on Linux! 🎉
score: 1
selftext: [deleted]
created: 2025-05-28T10:19:30
url:
author: [deleted]
domain:
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdin0
locked: false
media: null
name: t3_1kxdin0
permalink: /r/LocalLLaMA/comments/1kxdin0/cobolt_is_now_available_on_linux/
spoiler: false
stickied: false
thumbnail: default
ups: 1
preview: null

title: Deep Research Agent (Apple Silicon)
score: 4
selftext: Hi everyone I’ve been using Perplexica which is honestly fantastic for every day use. I wish I could access it on every device alas I’m a noob at hosting and don’t really even know what I’d need to do it… Anyway, the point: I’m looking for a deep research agent that works on Apple Silicon I’ve used local-deep-researc...
created: 2025-05-28T10:18:14
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxdhzg/deep_research_agent_apple_silicon/
author: BalaelGios
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdhzg
locked: false
media: null
name: t3_1kxdhzg
permalink: /r/LocalLLaMA/comments/1kxdhzg/deep_research_agent_apple_silicon/
spoiler: false
stickied: false
thumbnail: self
ups: 4
preview: {'enabled': False, 'images': [{'id': '7KpX8ZO5jaU9AAaMjXtLTImZjtPOyYx1ewVQ5AEd7WI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dA9mn3r5WxZGATEWMxRN74d2LPRpm01S4hl_9yCThCw.jpg?width=108&crop=smart&auto=webp&s=4b555f9745cb7d1990248b8a2712f1b36496df45', 'width': 108}, {'height': 108, 'url': 'h...

title: Cobolt is now available on Linux! 🎉
score: 2
selftext: Remember when we said Cobolt is "Powered by community-driven development"? After our [last post](https://www.reddit.com/r/LocalLLaMA/comments/1kujwzl/we_believe_the_future_of_ai_is_local_private_and) about Cobolt – our local, private, and personalized AI assistant – the call for Linux support was overwhelming. Well, y...
created: 2025-05-28T10:14:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxdfsq/cobolt_is_now_available_on_linux/
author: ice-url
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdfsq
locked: false
media: null
name: t3_1kxdfsq
permalink: /r/LocalLLaMA/comments/1kxdfsq/cobolt_is_now_available_on_linux/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: {'enabled': False, 'images': [{'id': 'NNzbw1x8m7xNfFARrLCIjGSKnOdC-deMYYvKqIswEDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YCnlhDpFOQrBOSdtdqBvC3VbL7egUyj2Dd-ayfrsClQ.jpg?width=108&crop=smart&auto=webp&s=bd22f5d1f9db29cb0c5495f8f3d66125022ea1a6', 'width': 108}, {'height': 108, 'url': 'h...

title: impressive streamlining in local llm deployment: gemma 3n downloading directly to my phone without any tinkering. what a time to be alive!
score: 100
selftext:
created: 2025-05-28T10:08:41
url: https://i.redd.it/sd06j27qyh3f1.jpeg
author: thebigvsbattlesfan
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdcpi
locked: false
media: null
name: t3_1kxdcpi
permalink: /r/LocalLLaMA/comments/1kxdcpi/impressive_streamlining_in_local_llm_deployment/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…beO5Ya1XLZoQ.jpg
ups: 100
preview: {'enabled': True, 'images': [{'id': 'YAeKaYu2qaZhbAYy0cMWEnsNZX4LHUwqNo_czIljTT4', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/sd06j27qyh3f1.jpeg?width=108&crop=smart&auto=webp&s=599724f93ec5bf34488f0dccede3c4f5022bf63a', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/sd06j27qyh3f1.j...

title: impressive streamlining in local llm deployment: Gemma 3n downloading directly to my phone without any tinkering. what a time to be alive.
score: 1
selftext: google ai edge gallery apk: https://github.com/google-ai-edge/gallery/wiki/2.-Getting-Started
created: 2025-05-28T10:07:11
url: https://i.redd.it/bsbufgpjxh3f1.jpeg
author: thebigvsbattlesfan
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdbwv
locked: false
media: null
name: t3_1kxdbwv
permalink: /r/LocalLLaMA/comments/1kxdbwv/impressive_streamlining_in_local_llm_deployment/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…ISTYJ8Af62Xg.jpg
ups: 1
preview: {'enabled': True, 'images': [{'id': 'S5HBNERO_EZ1nm4_vX7KSjG8WmDV08hQYmjUlub1GvE', 'resolutions': [{'height': 112, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.jpeg?width=108&crop=smart&auto=webp&s=8dc8919907a56ab083b61ac4dc908edbc6ec1de5', 'width': 108}, {'height': 224, 'url': 'https://preview.redd.it/bsbufgpjxh3f1.j...

title: Help: effect of Dry sampling on quality
score: 0
selftext: I've build a tool to create image using a gradio api, the output is a json with the url generated passed back to the model. I was using Qwen 30B Moe Q4\_XL from unsloth with llama.cpp as my daily driver with dry multiplier at 0.8 without any major issue but here I found that it consistently changed the url hallucinati...
created: 2025-05-28T10:05:41
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxdb1n/help_effect_of_dry_sampling_on_quality/
author: fakezeta
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxdb1n
locked: false
media: null
name: t3_1kxdb1n
permalink: /r/LocalLLaMA/comments/1kxdb1n/help_effect_of_dry_sampling_on_quality/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Scores in old and new lmarena are different
score: 6
selftext: Have they provided any explanations on this?
created: 2025-05-28T10:00:59
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxd8cq/scores_in_old_and_new_lmarena_are_different/
author: Economy_Apple_4617
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxd8cq
locked: false
media: null
name: t3_1kxd8cq
permalink: /r/LocalLLaMA/comments/1kxd8cq/scores_in_old_and_new_lmarena_are_different/
spoiler: false
stickied: false
thumbnail: self
ups: 6
preview: null

title: BrowserBee: A web browser agent in your Chrome side panel
score: 1
selftext: [removed]
created: 2025-05-28T09:54:46
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxd50q/browserbee_a_web_browser_agent_in_your_chrome/
author: parsa28
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxd50q
locked: false
media: null
name: t3_1kxd50q
permalink: /r/LocalLLaMA/comments/1kxd50q/browserbee_a_web_browser_agent_in_your_chrome/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'h...

title: BrowserBee: A web browser agent in your Chrome side panel
score: 1
selftext: [removed]
created: 2025-05-28T09:50:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxd2oz/browserbee_a_web_browser_agent_in_your_chrome/
author: parsa28
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxd2oz
locked: false
media: null
name: t3_1kxd2oz
permalink: /r/LocalLLaMA/comments/1kxd2oz/browserbee_a_web_browser_agent_in_your_chrome/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'h...

title: BrowserBee: A web browser agent in your Chrome side panel
score: 1
selftext: [removed]
created: 2025-05-28T09:48:45
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxd1w1/browserbee_a_web_browser_agent_in_your_chrome/
author: parsa28
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxd1w1
locked: false
media: null
name: t3_1kxd1w1
permalink: /r/LocalLLaMA/comments/1kxd1w1/browserbee_a_web_browser_agent_in_your_chrome/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: {'enabled': False, 'images': [{'id': '8DMpX6r2OSWxRX4Pd29951OACdCjq5-CBr_tnz2zLaA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ddkEIVjmN_-FhxlYJNgvg83cFbZs5Jipen2ewWPyCho.jpg?width=108&crop=smart&auto=webp&s=9a2cdb78109d85405237f8fbc1b3e2d61ce75bd6', 'width': 108}, {'height': 108, 'url': 'h...

title: Metal performance 2x slower in notarized builds
score: 1
selftext: [removed]
created: 2025-05-28T09:12:26
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxcimc/metal_performance_2x_slower_in_notarized_builds/
author: Impossible-Bat6366
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxcimc
locked: false
media: null
name: t3_1kxcimc
permalink: /r/LocalLLaMA/comments/1kxcimc/metal_performance_2x_slower_in_notarized_builds/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: How are you using MCP?
score: 1
selftext: [removed]
created: 2025-05-28T09:04:05
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxce6o/how_are_you_using_mcp/
author: Fluffy_Sheepherder76
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxce6o
locked: false
media: null
name: t3_1kxce6o
permalink: /r/LocalLLaMA/comments/1kxce6o/how_are_you_using_mcp/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: How are you using MCP
score: 1
selftext: [removed]
created: 2025-05-28T09:02:20
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxcdb6/how_are_you_using_mcp/
author: Fluffy_Sheepherder76
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxcdb6
locked: false
media: null
name: t3_1kxcdb6
permalink: /r/LocalLLaMA/comments/1kxcdb6/how_are_you_using_mcp/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: Advising on LLM Deployment for Internal Codebase Use — Is DeepSeek-V3 Massive Overkill?
score: 1
selftext: [removed]
created: 2025-05-28T08:51:47
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxc7ta/advising_on_llm_deployment_for_internal_codebase/
author: BroncoDankus
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxc7ta
locked: false
media: null
name: t3_1kxc7ta
permalink: /r/LocalLLaMA/comments/1kxc7ta/advising_on_llm_deployment_for_internal_codebase/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: MCP Proxy – Use your embedded system as an agent
score: 19
selftext: https://i.redd.it/kzn1dvcfkh3f1.gif Video: [https://www.youtube.com/watch?v=foCp3ja8FRA](https://www.youtube.com/watch?v=foCp3ja8FRA) Repository: [https://github.com/openserv-labs/mcp-proxy](https://github.com/openserv-labs/mcp-proxy) Hello! I've been playing around with agents, MCP servers and embedded systems for...
created: 2025-05-28T08:47:54
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxc5vo/mcp_proxy_use_your_embedded_system_as_an_agent/
author: arbayi
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxc5vo
locked: false
media: {'oembed': {'author_name': 'Batur Yılmaz Arslan', 'author_url': 'https://www.youtube.com/@BaturYilmazArslan', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/foCp3ja8FRA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-m...
name: t3_1kxc5vo
permalink: /r/LocalLLaMA/comments/1kxc5vo/mcp_proxy_use_your_embedded_system_as_an_agent/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…zDQION19qkpM.jpg
ups: 19
preview: {'enabled': False, 'images': [{'id': 'ALOXY2PamJpEf9jNTwKvzcmANYAYelWajVomqXftPws', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/QE_AwMn8Vhy9CL-rjaMkp2CgPWYkmSjtSuxPv7QHnQs.jpg?width=108&crop=smart&auto=webp&s=6ecacbb51dd7cfa94197225ac09740f0e34fa278', 'width': 108}, {'height': 162, 'url': 'h...

title: Looks like the claraverse devs are listening to the comments from previous posts
score: 1
selftext: [removed]
created: 2025-05-28T08:46:32
url: https://www.youtube.com/watch?v=FWgFiBU7R14
author: k1sh0r
domain: youtube.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxc57n
locked: false
media: {'oembed': {'author_name': 'ClaraVerse', 'author_url': 'https://www.youtube.com/@ClaraVerse.Tutorials', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FWgFiBU7R14?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; ...
name: t3_1kxc57n
permalink: /r/LocalLLaMA/comments/1kxc57n/looks_like_the_claraverse_devs_are_listening_to/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…JrYkke7UwVgE.jpg
ups: 1
preview: {'enabled': False, 'images': [{'id': 'hb1G5JYx5rimNYX2W7HeITqNZWn0lkoeeKQ43s4uc_E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/k3XyeKX7NTyUUeeNFUiNna-UAkFxzQJAxTfaZyRC0-g.jpg?width=108&crop=smart&auto=webp&s=5a2157d4819c2f89cf940ccf60f86d47bdc228b7', 'width': 108}, {'height': 162, 'url': 'h...

title: What's possible with each currently purchasable amount of Mac Unified RAM?
score: 2
selftext: This is a bit of an update of [https://www.reddit.com/r/LocalLLaMA/comments/1gs7w2m/choosing\_the\_right\_mac\_for\_running\_large\_llms/](https://www.reddit.com/r/LocalLLaMA/comments/1gs7w2m/choosing_the_right_mac_for_running_large_llms/) more than 6 months later, with different available CPUs/GPUs. I am going to ren...
created: 2025-05-28T08:31:27
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxbxmf/whats_possible_with_each_currently_purchasable/
author: thibaut_barrere
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbxmf
locked: false
media: null
name: t3_1kxbxmf
permalink: /r/LocalLLaMA/comments/1kxbxmf/whats_possible_with_each_currently_purchasable/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: Is Stanford's AGI Rivermind ever coming back?
score: 0
selftext: Kinda feel like a conspiracy theorist, but what are the chances they were told by govmnt to shut it down lol? But really, are there any news or posts from Stanford about the incident and if they are going to make the model public again?
created: 2025-05-28T08:23:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxbtnc/is_stanfords_agi_rivermind_ever_coming_back/
author: cdanymar
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbtnc
locked: false
media: null
name: t3_1kxbtnc
permalink: /r/LocalLLaMA/comments/1kxbtnc/is_stanfords_agi_rivermind_ever_coming_back/
spoiler: false
stickied: false
thumbnail: self
ups: 0
preview: null

title: Another Ryzen Max+ 395 machine has been released. Are all the Chinese Max+ 395 machines the same?
score: 30
selftext: Another AMD Ryzen Max+ 395 mini-pc has been released. The FEVM FA-EX9. For those who kept asking for it, this comes with Oculink. Here's a YT review. https://www.youtube.com/watch?v=-1kuUqp1X2I I think all the Chinese Max+ mini-pcs are the same. I noticed again that this machine has *exactly* the same port layout as ...
created: 2025-05-28T08:10:10
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxbmr9/another_ryzen_max_395_machine_has_been_released/
author: fallingdowndizzyvr
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbmr9
locked: false
media: null
name: t3_1kxbmr9
permalink: /r/LocalLLaMA/comments/1kxbmr9/another_ryzen_max_395_machine_has_been_released/
spoiler: false
stickied: false
thumbnail: self
ups: 30
preview: {'enabled': False, 'images': [{'id': '6QU22n3E7n6OJMoz58HPRHlOys8UjYhF1bBPxL-Fj4k', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/1Dtni1upEapADVmsfcu-FGGUWVJaHDTA5wNU5q6JsKw.jpg?width=108&crop=smart&auto=webp&s=b309630cf4e255095bb51b3ce5caa1873639a976', 'width': 108}, {'height': 162, 'url': 'h...

title: I built an open-source VRAM Calculator inside Hugging Face
score: 1
selftext: It's a Chrome extension that sits inside the Hugging Face website. It auto-loads model specs into the calculation. [Link to the extension](https://chromewebstore.google.com/detail/hugging-face-vram-calcula/bioohacjdieeliinbpocpdhpdapfkhal?authuser=0&hl=en-GB). \> To test it, install the extension (no registration/key ...
created: 2025-05-28T08:03:38
url: https://v.redd.it/2q7gz3mubh3f1
author: Cool-Maintenance8594
domain: /r/LocalLLaMA/comments/1kxbjds/i_built_an_opensource_vram_calculator_inside/
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbjds
locked: false
media: {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/2q7gz3mubh3f1/DASHPlaylist.mpd?a=1751141026%2CMTQwMzY0OGI3MWFmNGE3MjRhYThkNjhiZjQzZGIwMTUzYmRjMjJkZWUzNWQ2YWIzOTNlZGNjN2I2YzBjZTkzNQ%3D%3D&v=1&f=sd', 'duration': 83, 'fallback_url': 'https://v.redd.it/2q7gz3mubh3f1/DASH_1080.mp4?source=fallback', 'h...
name: t3_1kxbjds
permalink: /r/LocalLLaMA/comments/1kxbjds/i_built_an_opensource_vram_calculator_inside/
spoiler: false
stickied: false
thumbnail: https://external-preview…1bfca0c4e4ec7ce0
ups: 1
preview: {'enabled': False, 'images': [{'id': 'ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ODBnb2o1bXViaDNmMfUCYlogGf6sXjUEF0KDbiT8bN0vmNGT-3FN0g0NywX2.png?width=108&crop=smart&format=pjpg&auto=webp&s=6d35f82cf54d3695cf26fadd1fb08a0532b8f...

title: T-MAC extends its capabilities to Snapdragon mobile NPU!
score: 2
selftext: https://github.com/microsoft/T-MAC/blob/main/t-man/README.md - 50 t/s for BitNet-2B-4T on Snapdragon 8G3 NPU - NPU only, doesn't impact other apps - Prebuilt APK for SDG3 devices [on github](https://github.com/microsoft/T-MAC/releases/tag/1.0.0a5)
created: 2025-05-28T08:03:21
url: https://github.com/microsoft/T-MAC/blob/main/t-man/README.md
author: Aaaaaaaaaeeeee
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbj8u
locked: false
media: null
name: t3_1kxbj8u
permalink: /r/LocalLLaMA/comments/1kxbj8u/tmac_extends_its_capabilities_to_snapdragon/
spoiler: false
stickied: false
thumbnail: default
ups: 2
preview: null

title: 🧠 How do you go from a raw idea to something real? (For devs/designers/builders)
score: 1
selftext: [removed]
created: 2025-05-28T07:48:29
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxbbcm/how_do_you_go_from_a_raw_idea_to_something_real/
author: InjurySuccessful3125
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxbbcm
locked: false
media: null
name: t3_1kxbbcm
permalink: /r/LocalLLaMA/comments/1kxbbcm/how_do_you_go_from_a_raw_idea_to_something_real/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: The Economist: "Companies abandon their generative AI projects"
score: 614
selftext: A [recent article](https://archive.ph/P51MQ) in the Economist claims that "the share of companies abandoning most of their generative-AI pilot projects has risen to 42%, up from 17% last year." Apparently companies who invested in generative AI and slashed jobs are now disappointed and they began rehiring humans for ro...
created: 2025-05-28T07:23:16
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxaxw9/the_economist_companies_abandon_their_generative/
author: mayalihamur
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxaxw9
locked: false
media: null
name: t3_1kxaxw9
permalink: /r/LocalLLaMA/comments/1kxaxw9/the_economist_companies_abandon_their_generative/
spoiler: false
stickied: false
thumbnail: self
ups: 614
preview: null

title: LLM Farm gemma-3
score: 1
selftext: [removed]
created: 2025-05-28T07:13:09
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxasil/llm_farm_gemma3/
author: kindfii
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxasil
locked: false
media: null
name: t3_1kxasil
permalink: /r/LocalLLaMA/comments/1kxasil/llm_farm_gemma3/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: I'm trying to build a open source LLM rag ai asistant for my financial audit firm
score: 1
selftext: [removed]
created: 2025-05-28T07:04:39
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxanzc/im_trying_to_build_a_open_source_llm_rag_ai/
author: timeladyxox
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxanzc
locked: false
media: null
name: t3_1kxanzc
permalink: /r/LocalLLaMA/comments/1kxanzc/im_trying_to_build_a_open_source_llm_rag_ai/
spoiler: false
stickied: false
thumbnail: self
ups: 1
preview: null

title: after 5 month deepseek still the king of the open source there base model is one of the most intelligent model in both closed source and also top in the open source . lets see what we will see in the next model no update about the next model . but i think they are looking for standard
score: 1
selftext: [removed]
created: 2025-05-28T06:53:38
url: https://i.redd.it/gnc0hvzizg3f1.png
author: Select_Dream634
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxahtn
locked: false
media: null
name: t3_1kxahtn
permalink: /r/LocalLLaMA/comments/1kxahtn/after_5_month_deepseek_still_the_king_of_the_open/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…GAGxhCq_tJUs.jpg
ups: 1
preview: {'enabled': True, 'images': [{'id': 'Hoj57_lrNsctqQJNajIy7NxQ9Qg_OlnXU7sQld2X4Fc', 'resolutions': [{'height': 48, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?width=108&crop=smart&auto=webp&s=f721fe6f90e15a5d634342c2bc8b80d880ea6afe', 'width': 108}, {'height': 97, 'url': 'https://preview.redd.it/gnc0hvzizg3f1.png?...

title: When do you think the gap between local llm and o4-mini can be closed
score: 15
selftext: Not sure if OpenAI recently upgraded this o4-mini free version, but I found this model really surpassed almost every local model in both correctness and consistency. I mainly tested on the coding part (not agent mode). It can understand the problem so well with minimal context (even compared to the Claude 3.7 & 4). I r...
created: 2025-05-28T06:49:31
url: https://www.reddit.com/r/LocalLLaMA/comments/1kxafjv/when_do_you_think_the_gap_between_local_llm_and/
author: GregView
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxafjv
locked: false
media: null
name: t3_1kxafjv
permalink: /r/LocalLLaMA/comments/1kxafjv/when_do_you_think_the_gap_between_local_llm_and/
spoiler: false
stickied: false
thumbnail: self
ups: 15
preview: null

title: Google AI Edge Gallery
score: 189
selftext: **Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.** The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android *(available now)* and iOS *(coming soon)* devices. Dive ...
created: 2025-05-28T06:33:50
url: https://i.redd.it/s6rgmrfawg3f1.png
author: Lynncc6
domain: i.redd.it
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxa788
locked: false
media: null
name: t3_1kxa788
permalink: /r/LocalLLaMA/comments/1kxa788/google_ai_edge_gallery/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…cHtbtqpoyAbQ.jpg
ups: 189
preview: {'enabled': True, 'images': [{'id': '6NTG7vNynnFIuiZo1EXhOu6NSSi5gAD7jh6L4knpExQ', 'resolutions': [{'height': 68, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png?width=108&crop=smart&auto=webp&s=2b3febb2ea73a2f6b70df4940bbef935ffb54fb9', 'width': 108}, {'height': 136, 'url': 'https://preview.redd.it/s6rgmrfawg3f1.png...

title: Google AI Edge Gallery is released!
score: 1
selftext: [removed]
created: 2025-05-28T06:27:28
url: https://x.com/itsPaulAi/status/1927453363425210810
author: Lynncc6
domain: x.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kxa3qc
locked: false
media: null
name: t3_1kxa3qc
permalink: /r/LocalLLaMA/comments/1kxa3qc/google_ai_edge_gallery_is_released/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…0u8beqsdY9SA.jpg
ups: 1
preview: {'enabled': False, 'images': [{'id': 'XpUhnAnsQPxzWMB4hOye_0tpC_GVc2EK04U06ew4CaE', 'resolutions': [{'height': 98, 'url': 'https://external-preview.redd.it/kxxnDjSSQc5hu0hU1hn9LiDkI_lG0JsoHHAqfTLLY_s.jpg?width=108&crop=smart&auto=webp&s=2639f0dd23b34a72725e14099054779fc5fad746', 'width': 108}, {'height': 196, 'url': 'h...

title: Megakernel doubles Llama-1B inference speed for batch size 1
score: 73
selftext: The authors of this [bloglike paper](https://hazyresearch.stanford.edu/blog/2025-05-27-no-bubbles) at Stanford found that vLLM and SGLang lose significant performance due to overhead in CUDA usage for low batch sizes - what you usually use when running locally to chat. Their improvement doubles the inference speed on a...
created: 2025-05-28T05:58:06
url: https://www.reddit.com/r/LocalLLaMA/comments/1kx9nfk/megakernel_doubles_llama1b_inference_speed_for/
author: Chromix_
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kx9nfk
locked: false
media: null
name: t3_1kx9nfk
permalink: /r/LocalLLaMA/comments/1kx9nfk/megakernel_doubles_llama1b_inference_speed_for/
spoiler: false
stickied: false
thumbnail: self
ups: 73
preview: {'enabled': False, 'images': [{'id': '-WHgGLJANkDpubg8JwSLJ_kMgGHdyAiWnD4mQMVCLm0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/SkkOsvk-2jlZgGZC1fPBHplz1bPi5ZFKveN7yDHKX3c.jpg?width=108&crop=smart&auto=webp&s=976ec4699fe996a0df9064bec720deebd3fb92bc', 'width': 108}, {'height': 216, 'url': '...

title: Best model for 4070 TI Super
score: 2
selftext: Hello there, hope everyone is doing well. I am kinda new in this world, so I have been wondering what would be the best model for my graphic card. I want to use it for general purposes like asking what colours should I get my blankets if my room is white, what sizes should I buy etc etc. I just used chatgpt with the ...
created: 2025-05-28T05:52:55
url: https://www.reddit.com/r/LocalLLaMA/comments/1kx9kje/best_model_for_4070_ti_super/
author: Beniko19
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kx9kje
locked: false
media: null
name: t3_1kx9kje
permalink: /r/LocalLLaMA/comments/1kx9kje/best_model_for_4070_ti_super/
spoiler: false
stickied: false
thumbnail: self
ups: 2
preview: null

title: How much VRAM headroom for context?
score: 7
selftext: Still new to this and couldn't find a decent answer. I've been testing various models and I'm trying to find the largest model that I can run effectively on my 5090. The calculator on HF is giving me errors regardless of which model I enter. Is there a rule of thumb that one can follow for a rough estimate? I want to t...
created: 2025-05-28T04:24:08
url: https://www.reddit.com/r/LocalLLaMA/comments/1kx82bo/how_much_vram_headroom_for_context/
author: Nomski88
domain: self.LocalLLaMA
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kx82bo
locked: false
media: null
name: t3_1kx82bo
permalink: /r/LocalLLaMA/comments/1kx82bo/how_much_vram_headroom_for_context/
spoiler: false
stickied: false
thumbnail: self
ups: 7
preview: null

title: Super fast OpenAI API inference engine benchmarking and evaluation client I made. Check it out!
score: 1
selftext: [removed]
created: 2025-05-28T04:03:11
url: https://github.com/sangstar/scale
author: Traditional-Review22
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kx7omh
locked: false
media: null
name: t3_1kx7omh
permalink: /r/LocalLLaMA/comments/1kx7omh/super_fast_openai_api_inference_engine/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…p0ySjqxw1L0k.jpg
ups: 1
preview: {'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'h...

title: Made a super fast OpenAI API endpoint benchmarking and evaluation client!
score: 1
selftext: [removed]
created: 2025-05-28T03:58:29
url: https://github.com/sangstar/scale
author: Traditional-Review22
domain: github.com
edited: 1970-01-01T00:00:00
gilded: 0
gildings: {}
id: 1kx7ldt
locked: false
media: null
name: t3_1kx7ldt
permalink: /r/LocalLLaMA/comments/1kx7ldt/made_a_super_fast_openai_api_endpoint/
spoiler: false
stickied: false
thumbnail: https://b.thumbs.redditm…p0ySjqxw1L0k.jpg
ups: 1
preview: {'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'h...

Building a KYC Agent – LangGraph vs AutoGen vs CrewAI
1
[removed]
2025-05-28T03:53:13
https://www.reddit.com/r/LocalLLaMA/comments/1kx7hzy/building_a_kyc_agent_langgraph_vs_autogen_vs/
Careless-Bat-1884
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx7hzy
false
null
t3_1kx7hzy
/r/LocalLLaMA/comments/1kx7hzy/building_a_kyc_agent_langgraph_vs_autogen_vs/
false
false
self
1
null
Made a performant benchmarking and evaluation client for inference servers!
1
Figured I'd share this here in case anyone is interested. A goofy project I've been working on, inspired by being annoyed at how slow LM-Eval can be. WIP still, need to do a lot of work with better eval metrics (like F1 etc) and try for a number of different datasets
2025-05-28T03:49:04
https://github.com/sangstar/scale
DM_ME_YOUR_CATS_PAWS
github.com
1970-01-01T00:00:00
0
{}
1kx7fe4
false
null
t3_1kx7fe4
/r/LocalLLaMA/comments/1kx7fe4/made_a_performant_benchmarking_and_evaluation/
false
false
https://b.thumbs.redditm…p0ySjqxw1L0k.jpg
1
{'enabled': False, 'images': [{'id': '3DYE1LX7FaRMet0V4j6wppEIkS_Z7clacqDAaxycJmc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YoDxLZD9aKFsEsiPHyS1mft3PF0u-s0k2N6GSAYj2d4.jpg?width=108&crop=smart&auto=webp&s=2b347618f6bf6886041f3444d5d4047bdb559b02', 'width': 108}, {'height': 108, 'url': 'h...
Usecase for graph summarization (chart to table)
1
I have bunch of Radio frequency usecase graphs in capacitance, inductance, IV, transistor and so on. I want to train a model that literally outputs a table. I found Deplot which I think suits my usecase. Issue is I have little samples to finetune. I was checking if I could get the setup to work with Lora but it is no...
2025-05-28T03:48:29
https://www.reddit.com/r/LocalLLaMA/comments/1kx7ezh/usecase_for_graph_summarization_chart_to_table/
unknown5493
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx7ezh
false
null
t3_1kx7ezh
/r/LocalLLaMA/comments/1kx7ezh/usecase_for_graph_summarization_chart_to_table/
false
false
self
1
null
Curious what everyone thinks of Meta's long term AI strategy. Do you think Meta will find its market when compared to Gemini/OpenAI? Open source obviously has its benefits but Mistral/Deepseek are worthy competitors. Would love to hear thoughts of where Llama is and potential to overtake?
1
[removed]
2025-05-28T02:11:40
https://www.reddit.com/r/LocalLLaMA/comments/1kx5jd9/curious_what_everyone_thinks_of_metas_long_term/
Excellent-Plastic638
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx5jd9
false
null
t3_1kx5jd9
/r/LocalLLaMA/comments/1kx5jd9/curious_what_everyone_thinks_of_metas_long_term/
false
false
self
1
null
Tip for those building agents. The CLI is king.
28
There are a lot of ways of exposing tools to your agents depending on the framework or your implementation. MCP servers are making this trivial. But I am finding that exposing a simple CLI tool to your LLM/Agent with instructions on how to use common cli commands can actually work better, while reducing complexity. For...
2025-05-28T01:46:50
https://www.reddit.com/gallery/1kx51dp
LocoMod
reddit.com
1970-01-01T00:00:00
0
{}
1kx51dp
false
null
t3_1kx51dp
/r/LocalLLaMA/comments/1kx51dp/tip_for_those_building_agents_the_cli_is_king/
false
false
https://b.thumbs.redditm…Sh-jsqyl80VA.jpg
28
null
Best Browser-Agent with Image Recognition/Image Input?
1
[removed]
2025-05-28T01:14:45
https://www.reddit.com/r/LocalLLaMA/comments/1kx4e8n/best_browseragent_with_image_recogntionimage_input/
SafuWaifu
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx4e8n
false
null
t3_1kx4e8n
/r/LocalLLaMA/comments/1kx4e8n/best_browseragent_with_image_recogntionimage_input/
false
false
self
1
null
How are you using Qwen?
11
I’m currently training speculative decoding models on Qwen, aiming for 3-4x faster inference. However, I’ve noticed that Qwen’s reasoning style significantly differs from typical LLM outputs, reducing the expected performance gains. To address this, I’m looking to enhance training with additional reasoning-focused data...
2025-05-28T00:29:57
https://www.reddit.com/r/LocalLLaMA/comments/1kx3h5w/how_are_you_using_qwen/
xnick77x
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx3h5w
false
null
t3_1kx3h5w
/r/LocalLLaMA/comments/1kx3h5w/how_are_you_using_qwen/
false
false
self
11
null
Help with using unsloth on a structured conversation flow
1
[removed]
2025-05-28T00:21:16
https://www.reddit.com/r/LocalLLaMA/comments/1kx3ayn/help_with_using_unsloth_on_a_structured/
rjjacob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx3ayn
false
null
t3_1kx3ayn
/r/LocalLLaMA/comments/1kx3ayn/help_with_using_unsloth_on_a_structured/
false
false
self
1
null
Creating a local LLM-powered NPC Dialog System (with simple RAG)
1
2025-05-27T23:58:29
https://erikr.bearblog.dev/creating-an-llm-powered-npc-dialog-system-with-simple-rag/
fumblebear
erikr.bearblog.dev
1970-01-01T00:00:00
0
{}
1kx2ttx
false
null
t3_1kx2ttx
/r/LocalLLaMA/comments/1kx2ttx/creating_a_local_llmpowered_npc_dialog_system/
false
false
https://b.thumbs.redditm…-6NDcgJw1ISE.jpg
1
{'enabled': False, 'images': [{'id': 'tXq_cdqcJ-341mHBzEnpF6FJ9AzKuXG0coRBECeQ0yE', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hmZ2zuFGLGZ_OHs7N14Ue07JMyCkUd5qHoaxRMtNn-c.jpg?width=108&crop=smart&auto=webp&s=85ed7d99594e160fecf37d78e7c330e1e21dafaa', 'width': 108}, {'height': 216, 'url': '...
Is LLaMa the right choice for local agents that will make use of outside data?
0
Trying to build my first local agentic system on a new Mac Mini M4 with 24GB RAM but I am not sure if LLaMa is the right choice on account of a crucial requirement is that it be able to connect to my Google Calendar. Is it really challenging to make local models work with online tools and is LLaMa capable of this? ...
2025-05-27T23:57:56
https://www.reddit.com/r/LocalLLaMA/comments/1kx2tfl/is_llama_the_right_choice_for_local_agents_that/
xtrafunky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx2tfl
false
null
t3_1kx2tfl
/r/LocalLLaMA/comments/1kx2tfl/is_llama_the_right_choice_for_local_agents_that/
false
false
self
0
null
Qwen3-14B vs Gemma3-12B
33
What do you guys thinks about these models? Which one to choose? I mostly ask some programming knowledge questions, primary Go and Java.
2025-05-27T23:42:05
https://www.reddit.com/r/LocalLLaMA/comments/1kx2hcm/qwen314b_vs_gemma312b/
COBECT
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx2hcm
false
null
t3_1kx2hcm
/r/LocalLLaMA/comments/1kx2hcm/qwen314b_vs_gemma312b/
false
false
self
33
null
GitHub - som1tokmynam/FusionQuant: FusionQuant Model Merge & GGUF Conversion Pipeline - Your Free Toolkit for Custom LLMs!
3
Hey all, Just dropped **FusionQuant v1.4**! a Docker-based toolkit to easily merge LLMs (with Mergekit) and convert them to GGUF (Llama.cpp) or the newly supported EXL2 format (Exllamav2) for local use. **GitHub:**[https://github.com/som1tokmynam/FusionQuant](https://github.com/som1tokmynam/FusionQuant) **Key v1.4 U...
2025-05-27T23:26:01
https://www.reddit.com/r/LocalLLaMA/comments/1kx24nq/github_som1tokmynamfusionquant_fusionquant_model/
Som1tokmynam
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx24nq
false
null
t3_1kx24nq
/r/LocalLLaMA/comments/1kx24nq/github_som1tokmynamfusionquant_fusionquant_model/
false
false
self
3
{'enabled': False, 'images': [{'id': 'hNkx9ytwlNXddx1fLS-mwtVWl2w2GvvtYklsF8BkRnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ooJzPkMfYoFms4e2WfQfKTe_KiVxoPazDceEl5_Mg28.jpg?width=108&crop=smart&auto=webp&s=a89952465907ddd1cd4149b67da925a27adc6894', 'width': 108}, {'height': 108, 'url': 'h...
What am I doing wrong (Qwen3-8B)?
0
Qwen3-8B Q6_K_L in LMStudio. TitanXP (12GB VRAM) gpu, 32GB ram. As far as I read, this model should work fine with my card but it's incredibly slow. It keeps "thinking" for the simplest prompts. First thing I tried was saying "Hello" and it immediately starting doing math and trying to figure out the solution to a ...
2025-05-27T23:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1kx1kct/what_am_i_doing_wrong_qwen38b/
BenefitOfTheDoubt_01
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx1kct
false
null
t3_1kx1kct
/r/LocalLLaMA/comments/1kx1kct/what_am_i_doing_wrong_qwen38b/
false
false
self
0
null
Is a MacBook Pro M3 Max with 36GB memory a good idea for $2300 as compared to an equivalent pc build?
1
[removed]
2025-05-27T22:14:23
https://www.reddit.com/r/LocalLLaMA/comments/1kx0jbu/is_a_macbook_pro_m3_max_with_36gb_memory_a_good/
nissan_sunny
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx0jbu
false
null
t3_1kx0jbu
/r/LocalLLaMA/comments/1kx0jbu/is_a_macbook_pro_m3_max_with_36gb_memory_a_good/
false
false
self
1
null
Deepseek R2 Release?
65
Didn’t Deepseek say they were accelerating the timeline to release R2 before the original May release date shooting for April? Now that it’s almost June, have they said anything about R2 or when they will be releasing?
2025-05-27T22:00:34
https://www.reddit.com/r/LocalLLaMA/comments/1kx077t/deepseek_r2_release/
Old-Medicine2445
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kx077t
false
null
t3_1kx077t
/r/LocalLLaMA/comments/1kx077t/deepseek_r2_release/
false
false
self
65
null
Advice needed - looking for the best model to run given my hardware
1
[removed]
2025-05-27T21:47:13
https://www.reddit.com/r/LocalLLaMA/comments/1kwzvnx/advice_needed_looking_for_the_best_model_to_run/
2x4x12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwzvnx
false
null
t3_1kwzvnx
/r/LocalLLaMA/comments/1kwzvnx/advice_needed_looking_for_the_best_model_to_run/
false
false
self
1
null
State of open-source computer using agents (2025)?
2
I'm looking for a new domain to dig into after spending time on language, music, and speech. I played around with [OpenAI's CUA](https://openai.com/index/computer-using-agent/) and think it's a cool idea. What are the best open-source CUA models available today to build on and improve? I'm looking for something hackab...
2025-05-27T21:42:27
https://www.reddit.com/r/LocalLLaMA/comments/1kwzrh4/state_of_opensource_computer_using_agents_2025/
entsnack
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwzrh4
false
null
t3_1kwzrh4
/r/LocalLLaMA/comments/1kwzrh4/state_of_opensource_computer_using_agents_2025/
false
false
self
2
null
Gemma 3n
1
[removed]
2025-05-27T21:37:16
https://www.youtube.com/watch?v=eJFJRyXEHZ0
alimmmmmmm69
youtube.com
1970-01-01T00:00:00
0
{}
1kwzmvk
false
{'oembed': {'author_name': 'Google for Developers', 'author_url': 'https://www.youtube.com/@GoogleDevelopers', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/eJFJRyXEHZ0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-...
t3_1kwzmvk
/r/LocalLLaMA/comments/1kwzmvk/gemma_3n/
false
false
https://b.thumbs.redditm…G3aK86C82ITo.jpg
1
{'enabled': False, 'images': [{'id': 'r0sxgNH0IzUXMQaOyiTA50SnDxGWeLZUCJ3d-KYmPFY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=108&crop=smart&auto=webp&s=987df3d25c3798e65bcfda4cff5d8c2fb393989a', 'width': 108}, {'height': 162, 'url': 'h...
Google are doing some incredible work.
1
[removed]
2025-05-27T21:35:04
https://www.reddit.com/r/LocalLLaMA/comments/1kwzkth/google_are_doing_some_incredible_work/
alimmmmmmm69
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwzkth
false
null
t3_1kwzkth
/r/LocalLLaMA/comments/1kwzkth/google_are_doing_some_incredible_work/
false
false
self
1
{'enabled': False, 'images': [{'id': 'r0sxgNH0IzUXMQaOyiTA50SnDxGWeLZUCJ3d-KYmPFY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/w4NxFMgHoBanDU3q5KjrK4f_SK-uacKIZDOIeWGwPIY.jpg?width=108&crop=smart&auto=webp&s=987df3d25c3798e65bcfda4cff5d8c2fb393989a', 'width': 108}, {'height': 162, 'url': 'h...
How the turn tables
1
2025-05-27T21:30:08
https://i.imgflip.com/9vdjuf.jpg
waiting_for_zban
i.imgflip.com
1970-01-01T00:00:00
0
{}
1kwzgi0
false
null
t3_1kwzgi0
/r/LocalLLaMA/comments/1kwzgi0/how_the_turn_tables/
false
false
https://b.thumbs.redditm…Kqvo7jYqx8uc.jpg
1
{'enabled': True, 'images': [{'id': '1WdwqUlHhokGbqsr5ZriC3VQC6gPNyl-dIH05KuBZX0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/vTL3CWzPgrKTuBhUfhMQqYf570wFQy7oAPtjtpDdcyg.jpg?width=108&crop=smart&auto=webp&s=0bc59d8a2f5cad6421b5e617018df16f760128fb', 'width': 108}, {'height': 144, 'url': 'ht...
Looking for japanese translation model
1
[removed]
2025-05-27T21:29:19
https://www.reddit.com/r/LocalLLaMA/comments/1kwzftb/looking_for_japanese_translation_model/
Blackm1996
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwzftb
false
null
t3_1kwzftb
/r/LocalLLaMA/comments/1kwzftb/looking_for_japanese_translation_model/
false
false
self
1
null
How to use tools calling with Qwen 2.5 Coder?
1
[removed]
2025-05-27T21:02:51
https://www.reddit.com/r/LocalLLaMA/comments/1kwyro6/how_to_use_tools_calling_with_qwen_25_coder/
Educational-Shoe9300
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwyro6
false
null
t3_1kwyro6
/r/LocalLLaMA/comments/1kwyro6/how_to_use_tools_calling_with_qwen_25_coder/
false
false
self
1
null
Your favourite non-English/Chinese model
5
Much like English is the lingua franca for programming, it seems to also be the same preferred language for, well, language models (plus Chinese, obviously). For those generating content or using models in languages that are not Chinese or English, what is your model or models of choice? Gemma 3 and Qwen 3 boast, on ...
2025-05-27T20:58:59
https://www.reddit.com/r/LocalLLaMA/comments/1kwynyt/your_favourite_nonenglishchinese_model/
JohnnyOR
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwynyt
false
null
t3_1kwynyt
/r/LocalLLaMA/comments/1kwynyt/your_favourite_nonenglishchinese_model/
false
false
self
5
null
OpenRouter Inference: Issue with Combined Contexts
1
[removed]
2025-05-27T20:53:16
https://www.reddit.com/r/LocalLLaMA/comments/1kwyivr/openrouter_inference_issue_with_combined_contexts/
Critical-Sea-2581
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwyivr
false
null
t3_1kwyivr
/r/LocalLLaMA/comments/1kwyivr/openrouter_inference_issue_with_combined_contexts/
false
false
self
1
null
Local RAG for PDF questions
4
Hello, I am looking for some feedback one a simple project I put together for asking questions about PDFs. Anyone have experience with chromadb and langchain in combination with Ollama? [https://github.com/Mschroeder95/ai-rag-setup](https://github.com/Mschroeder95/ai-rag-setup)
2025-05-27T20:49:45
https://www.reddit.com/r/LocalLLaMA/comments/1kwyfnq/local_rag_for_pdf_questions/
Overall_Advantage750
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwyfnq
false
null
t3_1kwyfnq
/r/LocalLLaMA/comments/1kwyfnq/local_rag_for_pdf_questions/
false
false
self
4
{'enabled': False, 'images': [{'id': 'G5JWJlj-Qa4L9MQEJRUPEbJQQanWpbpjHNG8v9CjD_A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7H9Trziob4v0wza0YSrioj9_PT0VRi7YrwG5RHb9XJo.jpg?width=108&crop=smart&auto=webp&s=f01286ded6609e1a113a49a38e7f63998e31644b', 'width': 108}, {'height': 108, 'url': 'h...
How to make two llms work jointly in a problem solving task?
2
I am trying to understand if there is any way to make two local llms collaborate on a problem solving task. I am particularly curious to see the dynamics of such collaboration through systematic analytics of their conversational turns. Is this possible using say LM studio or ollama and Python?
2025-05-27T20:40:01
https://www.reddit.com/r/LocalLLaMA/comments/1kwy6u7/how_to_make_two_llms_work_jointly_in_a_problem/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwy6u7
false
null
t3_1kwy6u7
/r/LocalLLaMA/comments/1kwy6u7/how_to_make_two_llms_work_jointly_in_a_problem/
false
false
self
2
null
Why is Mistral Small 3 faster than the Qwen3 30B A3B model?
1
[removed]
2025-05-27T20:24:03
https://www.reddit.com/r/LocalLLaMA/comments/1kwxsi4/why_is_mistral_small_3_faster_than_the_qwen3_30b/
Alone_Ad_6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwxsi4
false
null
t3_1kwxsi4
/r/LocalLLaMA/comments/1kwxsi4/why_is_mistral_small_3_faster_than_the_qwen3_30b/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'h...
Introducing free Ai software: Ai Chat to Cart and checkout with Stripe / Paypal (demo is using Llama via GroqCloud), Wush Wush Games is my son's video games store. I would love your feedback. Peace
1
You can find it here, would love your feedback [https://github.com/store-craft/storecraft](https://github.com/store-craft/storecraft)
2025-05-27T20:16:19
https://v.redd.it/5pkiw3butd3f1
hendrixstring
v.redd.it
1970-01-01T00:00:00
0
{}
1kwxldn
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/5pkiw3butd3f1/DASHPlaylist.mpd?a=1750968993%2CZmVmZWEzYjhiZWE4YTc0YTA1NjY1MWU4YTNmNWM3NGNjNmUyMGY1YTI2ZGY4NDIzMWFhNDE4NWQzNzIwZmJkNg%3D%3D&v=1&f=sd', 'duration': 66, 'fallback_url': 'https://v.redd.it/5pkiw3butd3f1/DASH_720.mp4?source=fallback', 'ha...
t3_1kwxldn
/r/LocalLLaMA/comments/1kwxldn/introducing_free_ai_software_ai_chat_to_cart_and/
false
false
https://external-preview…79160049c3ec59ce
1
{'enabled': False, 'images': [{'id': 'bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j', 'resolutions': [{'height': 66, 'url': 'https://external-preview.redd.it/bzU2cmw0YnV0ZDNmMUjdaI7r9S6-MAMwjQiFfLJk__IxcsOiQ6_No2AEou7j.png?width=108&crop=smart&format=pjpg&auto=webp&s=e928ccd40d53ebca3af5a6037ed38af409baf...
Mistral small 3 faster than qwen3 30b a3b model. It's weird
1
[removed]
2025-05-27T20:07:18
https://www.reddit.com/r/LocalLLaMA/comments/1kwxcxq/mistral_small_3_faster_than_qwen3_30b_a3b_model/
Alone_Ad_6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwxcxq
false
null
t3_1kwxcxq
/r/LocalLLaMA/comments/1kwxcxq/mistral_small_3_faster_than_qwen3_30b_a3b_model/
false
false
self
1
null
We build Curie: The Open-sourced AI Co-Scientist Making ML More Accessible for Your Research
58
After personally seeing many researchers in fields like biology, materials science, and chemistry struggle to apply machine learning to their valuable domain datasets to accelerate scientific discovery and gain deeper insights, often due to the lack of specialized ML knowledge needed to select the right algorithms, tun...
2025-05-27T19:49:33
https://www.reddit.com/r/LocalLLaMA/comments/1kwwwil/we_build_curie_the_opensourced_ai_coscientist/
Pleasant-Type2044
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwwwil
false
null
t3_1kwwwil
/r/LocalLLaMA/comments/1kwwwil/we_build_curie_the_opensourced_ai_coscientist/
false
false
https://b.thumbs.redditm…MeaFsQ2_gw_Y.jpg
58
null
Install llm on your MOBILE phone
0
I use this app to install llms 100% locally on my mobile phone And no I not sponsored or any of that crap, the app it's self is 100% free so there noway that they are sponsoring anybody. And yes you can install huggingface.co models without leaving the app at all
2025-05-27T19:46:08
https://i.redd.it/xagahzm0pd3f1.jpeg
Rare-Programmer-1747
i.redd.it
1970-01-01T00:00:00
0
{}
1kwwteg
false
null
t3_1kwwteg
/r/LocalLLaMA/comments/1kwwteg/install_llm_on_your_mobile_phone/
false
false
https://b.thumbs.redditm…ojdEPGpQ31EY.jpg
0
{'enabled': True, 'images': [{'id': 'P91Uz7r61L4yzXIkwnqBEAAIUHRf3MAGODVpLkG8h_A', 'resolutions': [{'height': 82, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jpeg?width=108&crop=smart&auto=webp&s=e4bd3b028a1862743c3dbfdbf451ceebc7e8498f', 'width': 108}, {'height': 165, 'url': 'https://preview.redd.it/xagahzm0pd3f1.jp...
B-score: Detecting Biases in Large Language Models Using Response History
11
**TLDR:** When LLMs can see their own previous answers, their biases significantly decrease. We introduce B-score, a metric that detects bias by comparing responses between single-turn and multi-turn conversations. **Paper, Code & Data:** [https://b-score.github.io](https://b-score.github.io)
2025-05-27T19:44:41
https://www.reddit.com/r/LocalLLaMA/comments/1kwws3n/bscore_detecting_biases_in_large_language_models/
Substantial-Air-1285
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwws3n
false
null
t3_1kwws3n
/r/LocalLLaMA/comments/1kwws3n/bscore_detecting_biases_in_large_language_models/
false
false
self
11
null
most hackable coding agent
6
I find with local models coding agents need quite a lot of guidance and fail at tasks that are too complex. Also adherence to style and other rules is often not easy to achieve. I use agents to do planing, requirement engineering, software architecture stuff etc., which is usually very specific to my domain and tailor...
2025-05-27T19:37:44
https://www.reddit.com/r/LocalLLaMA/comments/1kwwlyv/most_hackable_coding_agent/
mnze_brngo_7325
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwwlyv
false
null
t3_1kwwlyv
/r/LocalLLaMA/comments/1kwwlyv/most_hackable_coding_agent/
false
false
self
6
null
How to think about ownership of my personal AI system
3
I’m working on building my own personal AI system, and thinking about what it means to own my own AI system. Here’s how I’m thinking about it and would appreciate thoughts from the community on where you think I am on or off base here.  I think ownership lies on spectrum between running on ChatGPT which I clearly don’...
2025-05-27T19:31:53
https://www.reddit.com/r/LocalLLaMA/comments/1kwwgon/how_to_think_about_ownership_of_my_personal_ai/
davidtwaring
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwwgon
false
null
t3_1kwwgon
/r/LocalLLaMA/comments/1kwwgon/how_to_think_about_ownership_of_my_personal_ai/
false
false
self
3
null
Help with safetensors quants
1
[removed]
2025-05-27T19:24:17
https://www.reddit.com/r/LocalLLaMA/comments/1kww9h0/help_with_safetensors_quants/
chub0ka
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kww9h0
false
null
t3_1kww9h0
/r/LocalLLaMA/comments/1kww9h0/help_with_safetensors_quants/
false
false
self
1
null
Mistral small 3 faster than qwen3 30b a3b model. It's weird
1
[removed]
2025-05-27T19:08:17
https://www.reddit.com/r/LocalLLaMA/comments/1kwvuhe/mistral_small_3_faster_than_qwen3_30b_a3b_model/
Alone_Ad_6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwvuhe
false
null
t3_1kwvuhe
/r/LocalLLaMA/comments/1kwvuhe/mistral_small_3_faster_than_qwen3_30b_a3b_model/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RVufpUx3tddh_BCVq7ZzBCU7nLRDZ_d0EprIuvN6J-E', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/DCdYO6jNgcn_GSlWqH68WjiuCVZVwk4zFJeJ-j8Ikbg.jpg?width=108&crop=smart&auto=webp&s=ff8c322202cb0f1a1f82f87a2c77754ddc0b9e61', 'width': 108}, {'height': 120, 'url': 'h...
Time to make all models think 🧠 – the brand-new *Mixture-of-Thoughts* reasoning dataset is here
1
[removed]
2025-05-27T19:06:17
https://www.reddit.com/r/LocalLLaMA/comments/1kwvslz/time_to_make_all_models_think_the_brandnew/
Thatisverytrue54321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwvslz
false
null
t3_1kwvslz
/r/LocalLLaMA/comments/1kwvslz/time_to_make_all_models_think_the_brandnew/
false
false
self
1
null
Why mistral small 3 faster than qwen3 30b a3b model?
1
[removed]
2025-05-27T19:05:25
https://www.reddit.com/r/LocalLLaMA/comments/1kwvrrz/why_mistral_small_3_faster_than_qwen3_30b_a3b/
Alone_Ad_6011
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwvrrz
false
null
t3_1kwvrrz
/r/LocalLLaMA/comments/1kwvrrz/why_mistral_small_3_faster_than_qwen3_30b_a3b/
false
false
self
1
null
Time to make all models think 🧠 – the brand-new Mixture-of-Thoughts reasoning dataset is here
1
[removed]
2025-05-27T19:04:49
https://www.reddit.com/r/LocalLLaMA/comments/1kwvr80/time_to_make_all_models_think_the_brandnew/
Thatisverytrue54321
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwvr80
false
null
t3_1kwvr80
/r/LocalLLaMA/comments/1kwvr80/time_to_make_all_models_think_the_brandnew/
false
false
self
1
null
😞No hate but claude-4 is disappointing
244
I mean how the heck literally Is Qwen-3 better than claude-4(the Claude who used to dog walk everyone). this is just disappointing 🫠
2025-05-27T18:10:17
https://i.redd.it/9dngmfww7d3f1.jpeg
Rare-Programmer-1747
i.redd.it
1970-01-01T00:00:00
0
{}
1kwucpn
false
null
t3_1kwucpn
/r/LocalLLaMA/comments/1kwucpn/no_hate_but_claude4_is_disappointing/
false
false
https://a.thumbs.redditm…94eP-h0-lc18.jpg
244
{'enabled': True, 'images': [{'id': 'qrlpDENbI4w2up03jLNVduozwcau1MgbEZoT9MLuxjE', 'resolutions': [{'height': 214, 'url': 'https://preview.redd.it/9dngmfww7d3f1.jpeg?width=108&crop=smart&auto=webp&s=fe77194d16dbde424a047b1d5e3f2c3b1dcf7d4e', 'width': 108}, {'height': 428, 'url': 'https://preview.redd.it/9dngmfww7d3f1.j...
When are we getting the Proton Mail equivalent of AI Service?
0
Please point me to one if already available. For a long time, Gmail, Yahoo and Outlook were the only mainstream good (free) personal email providers. We knew Google, and Microsoft mined our data for ads and some of us immediately switched to the likes of Protonmail when it came out or became popular. When do you thi...
2025-05-27T18:08:03
https://www.reddit.com/r/LocalLLaMA/comments/1kwuap4/when_are_we_getting_the_proton_mail_equivalent_of/
simracerman
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwuap4
false
null
t3_1kwuap4
/r/LocalLLaMA/comments/1kwuap4/when_are_we_getting_the_proton_mail_equivalent_of/
false
false
self
0
null
Recommendations for a local/open source todo/productivity assistant?
1
any popular local/open source todo productivity assistant. I seem to always go back to pen and paper with any software tool maybe AI helps with this?
2025-05-27T17:52:57
https://www.reddit.com/r/LocalLLaMA/comments/1kwtwk1/recommendations_for_a_localopen_source/
bornfree4ever
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwtwk1
false
null
t3_1kwtwk1
/r/LocalLLaMA/comments/1kwtwk1/recommendations_for_a_localopen_source/
false
false
self
1
null
[Research]: j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc.
1
[removed]
2025-05-27T17:51:23
https://www.reddit.com/r/LocalLLaMA/comments/1kwtv5l/research_j1nano_j1micro_absurdly_tiny_rms/
leonardtang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwtv5l
false
null
t3_1kwtv5l
/r/LocalLLaMA/comments/1kwtv5l/research_j1nano_j1micro_absurdly_tiny_rms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
[Research] j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc.
1
[removed]
2025-05-27T17:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1kwttjy/research_j1nano_j1micro_absurdly_tiny_rms/
leonardtang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwttjy
false
null
t3_1kwttjy
/r/LocalLLaMA/comments/1kwttjy/research_j1nano_j1micro_absurdly_tiny_rms/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
j1-nano & j1-micro: Absurdly Tiny RMs Competitive w/ Claude Opus, GPT-4o-mini, etc.
1
[removed]
2025-05-27T17:47:23
https://www.reddit.com/r/LocalLLaMA/comments/1kwtrf7/j1nano_j1micro_absurdly_tiny_rms_competitive_w/
leonardtang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwtrf7
false
null
t3_1kwtrf7
/r/LocalLLaMA/comments/1kwtrf7/j1nano_j1micro_absurdly_tiny_rms_competitive_w/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QqSY3F9i2BgB-OdT_JpQr1vBqr2oq4spYNzkghHXwCM', 'resolutions': [], 'source': {'height': 64, 'url': 'https://external-preview.redd.it/4PgIzt2dsWk0hsH_pv6fTscUBf4LNxa8vUF1zyE23u0.jpg?auto=webp&s=adf334dabc58b5ccda405f20fe4d11f983c41fe9', 'width': 64}, 'variants': {}}]}
Best Model for OCR
1
[removed]
2025-05-27T17:40:38
https://www.reddit.com/r/LocalLLaMA/comments/1kwtl6r/best_model_for_ocr/
pgodgodzilla
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwtl6r
false
null
t3_1kwtl6r
/r/LocalLLaMA/comments/1kwtl6r/best_model_for_ocr/
false
false
self
1
null
Asus Flow Z13 best Local LLM Tests.
0
[https://www.youtube.com/watch?v=AcTmeGpzhBk](https://www.youtube.com/watch?v=AcTmeGpzhBk)
2025-05-27T17:23:40
https://www.reddit.com/r/LocalLLaMA/comments/1kwt5hl/asus_flow_z13_best_local_llm_tests/
Strong_Sympathy9955
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwt5hl
false
null
t3_1kwt5hl
/r/LocalLLaMA/comments/1kwt5hl/asus_flow_z13_best_local_llm_tests/
false
false
self
0
{'enabled': False, 'images': [{'id': 'PvGJWOLZ0UHJZ9cCD4kLq86-roAGlSyCJB6i6hL288E', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/Et395fzyMNJzU5HdrbkFsr_Axhs6aZiYWEZvR0Ow-Lg.jpg?width=108&crop=smart&auto=webp&s=f052a53f4bbe57b49e463da91e02a57d76f03899', 'width': 108}, {'height': 162, 'url': 'h...
What tools or upgrades do I need to run a local AI assistant (Jarvis) more efficiently?
1
[removed]
2025-05-27T17:19:16
https://www.reddit.com/r/LocalLLaMA/comments/1kwt16j/what_tools_or_upgrades_do_i_need_to_run_a_local/
Gold-Management5308
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwt16j
false
null
t3_1kwt16j
/r/LocalLLaMA/comments/1kwt16j/what_tools_or_upgrades_do_i_need_to_run_a_local/
false
false
self
1
null
Is there a local LLM that can give you a description or tags for videos similar to Gemini?
1
Say you want to automate creating descriptions or tags, or ask questions about videos. Can you do that locally?
2025-05-27T16:45:57
https://www.reddit.com/r/LocalLLaMA/comments/1kws5wd/is_there_a_local_llm_that_can_give_you_a/
GrayPsyche
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kws5wd
false
null
t3_1kws5wd
/r/LocalLLaMA/comments/1kws5wd/is_there_a_local_llm_that_can_give_you_a/
false
false
self
1
null
Is there a way to buy the NVIDIA RTX PRO 6000 Blackwell Server Edition right now?
6
I'm in the market for one due to the fact I've got a server infrastructure (with an A30 right now) in my homelab and everyone here is talking about the Workstation edition. I'm in the opposite boat, I need one of the cards without a fan and Nvidia hasn't emailed me anything indicating that the server cards are availabl...
2025-05-27T16:40:37
https://www.reddit.com/r/LocalLLaMA/comments/1kws15n/is_there_a_way_to_buy_the_nvidia_rtx_pro_6000/
Yorn2
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kws15n
false
null
t3_1kws15n
/r/LocalLLaMA/comments/1kws15n/is_there_a_way_to_buy_the_nvidia_rtx_pro_6000/
false
false
self
6
null
Why can't I reproduce results from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal?
1
[removed]
2025-05-27T16:37:15
[deleted]
1970-01-01T00:00:00
0
{}
1kwry21
false
null
t3_1kwry21
/r/LocalLLaMA/comments/1kwry21/why_cant_i_reproduce_results_from_papers_like_phi/
false
false
default
1
null
Hunyuan releases HunyuanPortrait
55
🎉 Introducing HunyuanPortrait: Implicit Condition Control for Enhanced Portrait Animation 👉What's New? 1⃣Turn static images into living art! 🖼➡🎥 2⃣Unparalleled realism with Implicit Control + Stable Video Diffusion 3⃣SoTA temporal consistency & crystal-clear fidelity This breakthrough method outperforms existi...
2025-05-27T16:34:10
https://i.redd.it/66xgi7lrqc3f1.jpeg
ResearchCrafty1804
i.redd.it
1970-01-01T00:00:00
0
{}
1kwrv8g
false
null
t3_1kwrv8g
/r/LocalLLaMA/comments/1kwrv8g/hunyuan_releases_hunyuanportrait/
false
false
https://b.thumbs.redditm…a1_95ab5KdSA.jpg
55
{'enabled': True, 'images': [{'id': 'sCkyKbDfW0hRBxZRbIS9y9gU-tEDbnVzRPWkMRvm-hg', 'resolutions': [{'height': 61, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jpeg?width=108&crop=smart&auto=webp&s=9d6206e12f4c728bb898ed432be73c840616b0ca', 'width': 108}, {'height': 122, 'url': 'https://preview.redd.it/66xgi7lrqc3f1.jp...
Gemma3 fully OSS model alternative (context especially)?
4
Hey all. So I'm trying to move my workflow from cloud-based proprietary models to locally based FOSS models. I am using OLMO2 as my primary driver since it has good performance and a fully open dataset. However it's context is rather limited for large code files. Does anyone have a suggestion for a large context model ...
2025-05-27T16:29:50
https://www.reddit.com/r/LocalLLaMA/comments/1kwrr55/gemma3_fully_oss_model_alternative_context/
InvertedVantage
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwrr55
false
null
t3_1kwrr55
/r/LocalLLaMA/comments/1kwrr55/gemma3_fully_oss_model_alternative_context/
false
false
self
4
null
[META] Too many apps!
1
[removed]
2025-05-27T16:24:59
https://www.reddit.com/r/LocalLLaMA/comments/1kwrmsf/meta_too_many_apps/
PANIC_EXCEPTION
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kwrmsf
false
null
t3_1kwrmsf
/r/LocalLLaMA/comments/1kwrmsf/meta_too_many_apps/
false
false
self
1
null