Dataset schema (column: dtype, value range):

title: string, length 1–300
score: int64, 0–8.54k
selftext: string, length 0–41.5k
created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
url: string, length 0–878
author: string, length 3–20
domain: string, length 0–82
edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
gilded: int64, 0–2
gildings: string, 7 classes
id: string, length 7
locked: bool, 2 classes
media: string, length 646–1.8k
name: string, length 10
permalink: string, length 33–82
spoiler: bool, 2 classes
stickied: bool, 2 classes
thumbnail: string, length 4–213
ups: int64, 0–8.54k
preview: string, length 301–5.01k
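The per-column summary above implies a 20-field record layout for each post. As a minimal, stdlib-only sketch of that layout and a sanity check against it (the `SCHEMA` mapping and `validate` helper are illustrative assumptions for this viewer export, not part of any published loader; the sample record is abridged from a row below):

```python
from datetime import datetime

# Column -> expected Python type, per the schema summary above.
SCHEMA = {
    "title": str, "score": int, "selftext": str, "created": datetime,
    "url": str, "author": str, "domain": str, "edited": datetime,
    "gilded": int, "gildings": str, "id": str, "locked": bool,
    "media": str, "name": str, "permalink": str, "spoiler": bool,
    "stickied": bool, "thumbnail": str, "ups": int, "preview": str,
}

def validate(record: dict) -> list:
    """Return a list of schema violations for one record (empty = OK)."""
    errors = []
    for col, typ in SCHEMA.items():
        if col not in record:
            errors.append("missing column: " + col)
        elif not isinstance(record[col], typ):
            errors.append("%s: expected %s, got %s"
                          % (col, typ.__name__, type(record[col]).__name__))
    return errors

# Abridged sample row (truncated string fields stand in for full values).
sample = {
    "title": "DeepSeek V3 benchmarks using ktransformers", "score": 7,
    "selftext": "...", "created": datetime(2025, 5, 20, 7, 51, 18),
    "url": "https://www.reddit.com/r/LocalLLaMA/comments/1kqz9uu/...",
    "author": "pmur12", "domain": "self.LocalLLaMA",
    "edited": datetime(1970, 1, 1), "gilded": 0, "gildings": "{}",
    "id": "1kqz9uu", "locked": False, "media": "null", "name": "t3_1kqz9uu",
    "permalink": "/r/LocalLLaMA/comments/1kqz9uu/...", "spoiler": False,
    "stickied": False, "thumbnail": "self", "ups": 7, "preview": "null",
}
print(validate(sample))  # []
```

Note that `edited` uses the Unix epoch (1970-01-01) as a never-edited sentinel, which is why it appears in every unedited row below.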
What features or specifications define a Small Language Model (SLM)?
4
I'm trying to understand what qualifies a language model as an SLM. Is it purely based on the number of parameters, or do other factors like training data size and context window size also play a role? Can I consider Llama 2 7B an SLM?
2025-05-20T11:20:26
https://www.reddit.com/r/LocalLLaMA/comments/1kr2d1m/what_features_or_specifications_define_a_small/
Putrid_Spinach3961
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr2d1m
false
null
t3_1kr2d1m
/r/LocalLLaMA/comments/1kr2d1m/what_features_or_specifications_define_a_small/
false
false
self
4
null
Grounded in Context: Retrieval-Based Method for Hallucination Detection
18
Deepchecks recently released a hallucination detection framework, designed for long-context data and tailored to diverse use cases, including summarization, data extraction, and RAG. Inspired by RAG architecture, our method integrates retrieval and Natural Language Inference (NLI) models to predict factual consistency ...
2025-05-20T11:17:46
https://www.reddit.com/r/LocalLLaMA/comments/1kr2bcv/grounded_in_context_retrievalbased_method_for/
gpt-d13
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr2bcv
false
null
t3_1kr2bcv
/r/LocalLLaMA/comments/1kr2bcv/grounded_in_context_retrievalbased_method_for/
false
false
self
18
null
Budget Gaming/LLM PC: the great dilemma of B580 vs 3060
0
Hi there hello, In short: I'm about to build a budget machine (Ryzen5 7600, 32GB RAM) in order to allow my kid (and me too, but this is unofficial) to play some games and at the same time have some sort of decent system where to run LLMs both for work and for home automation. I really have trouble deciding between B5...
2025-05-20T11:10:54
https://www.reddit.com/r/LocalLLaMA/comments/1kr276r/budget_gamingllm_pc_the_great_dilemma_of_b580_vs/
trepz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr276r
false
null
t3_1kr276r
/r/LocalLLaMA/comments/1kr276r/budget_gamingllm_pc_the_great_dilemma_of_b580_vs/
false
false
self
0
null
Looking for tokenizer.model for Falcon 180B BASE (HF)
3
Hey everyone, I’m looking for the tokenizer.model file from the Falcon 180B BASE model – the original version that used to be on Hugging Face. I had downloaded the full set some time ago and saved it on a separate drive. I’ve since lost that disk and now I’m trying to restore the setup. Unfortunately, the tokenizer f...
2025-05-20T10:37:10
https://www.reddit.com/r/LocalLLaMA/comments/1kr1mxj/looking_for_tokenizermodel_for_falcon_180b_base_hf/
Most-Broccoli-427
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1mxj
false
null
t3_1kr1mxj
/r/LocalLLaMA/comments/1kr1mxj/looking_for_tokenizermodel_for_falcon_180b_base_hf/
false
false
self
3
null
Question: feed diagram images into LLM
1
[removed]
2025-05-20T10:29:55
https://www.reddit.com/r/LocalLLaMA/comments/1kr1ito/question_feed_diagram_images_into_llm/
Own_Mud1038
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1ito
false
null
t3_1kr1ito
/r/LocalLLaMA/comments/1kr1ito/question_feed_diagram_images_into_llm/
false
false
self
1
null
Choosing a diff format for Llama4 and Aider
3
I've been experimenting with Aider + Llama4 Scout for pair programming and have been pleased with the initial results. Perhaps a long shot, but does anyone have experience using Aider's [various "diff" formats](https://aider.chat/docs/more/edit-formats.html#editor-diff-and-editor-whole) with Llama 4 Scout or Maverick...
2025-05-20T10:28:05
https://www.reddit.com/r/LocalLLaMA/comments/1kr1hu3/choosing_a_diff_format_for_llama4_and_aider/
RobotRobotWhatDoUSee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr1hu3
false
null
t3_1kr1hu3
/r/LocalLLaMA/comments/1kr1hu3/choosing_a_diff_format_for_llama4_and_aider/
false
false
self
3
{'enabled': False, 'images': [{'id': 'nEkWU_iRPHcIypRX18tqK7LINGrqAvGclSxnrrFqHsg', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/qGrDWI5UisDlCPaheTjrIAXA2nxh4LyDgVhSTNDIdcg.jpg?width=108&crop=smart&auto=webp&s=dcfd4aa364c959a05cfd0f650469f51f1a123248', 'width': 108}, {'height': 105, 'url': 'h...
The "Reasoning" in LLMs might not be the actual reasoning, but why realise it now?
0
It's funny how people are now realising that the "thoughts"/"reasoning" given by the reasoning models like Deepseek-R1, Gemini etc. are not what model actually "thinks". Most of us had the understanding that these are not actual thoughts in February I guess. But the reason why we're still working on these reasoning m...
2025-05-20T10:08:10
https://www.reddit.com/r/LocalLLaMA/comments/1kr16pq/the_reasoning_in_llms_might_not_be_the_actual/
The-Silvervein
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr16pq
false
null
t3_1kr16pq
/r/LocalLLaMA/comments/1kr16pq/the_reasoning_in_llms_might_not_be_the_actual/
false
false
self
0
null
Any open-source LLMs where devs explain how/why they chose what constraints to add?
0
I am interested in how AI devs/creators deal with the moral side of what they build—like guardrails, usage policies embedded into architecture, ethical decisions around training data inclusion/exclusion, explainability mechanisms, or anything showing why they chose to limit or guide model behavior in a certain way. I ...
2025-05-20T09:19:03
https://www.reddit.com/r/LocalLLaMA/comments/1kr0h62/any_opensource_llms_where_devs_explain_howwhy/
sbs1799
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr0h62
false
null
t3_1kr0h62
/r/LocalLLaMA/comments/1kr0h62/any_opensource_llms_where_devs_explain_howwhy/
false
false
self
0
null
How to draw these kind of diagrams
1
[removed]
2025-05-20T09:02:55
https://i.redd.it/9kzje4hvjw1f1.png
commander-trex
i.redd.it
1970-01-01T00:00:00
0
{}
1kr092s
false
null
t3_1kr092s
/r/LocalLLaMA/comments/1kr092s/how_to_draw_these_kind_of_diagrams/
false
false
https://b.thumbs.redditm…ecQ5gwMIaj9E.jpg
1
{'enabled': True, 'images': [{'id': 'ukntQAdx05VuJaZWyCW7B-eTrkR27c7ey_MJCIyhvDc', 'resolutions': [{'height': 83, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png?width=108&crop=smart&auto=webp&s=38329165506f3bddcd893979cbf519082d0335de', 'width': 108}, {'height': 166, 'url': 'https://preview.redd.it/9kzje4hvjw1f1.png...
NVIDIA H200 or the new RTX Pro Blackwell for a RAG chatbot?
1
[removed]
2025-05-20T08:59:36
https://www.reddit.com/r/LocalLLaMA/comments/1kr07au/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
snaiperist
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kr07au
false
null
t3_1kr07au
/r/LocalLLaMA/comments/1kr07au/nvidia_h200_or_the_new_rtx_pro_blackwell_for_a/
false
false
self
1
null
Eval generation and testing
1
What is everyone using for evals? I'm interested in any tools or recommendations for eval generation, not just from docs but multi turn or agent workflows. I've tried yourbench and started working with promptfoo synthetic generation but feel there must be a better way.
2025-05-20T08:33:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqzuks/eval_generation_and_testing/
harmless_0
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzuks
false
null
t3_1kqzuks
/r/LocalLLaMA/comments/1kqzuks/eval_generation_and_testing/
false
false
self
1
null
🧠 Share Your Local LLM Inference Benchmarks (ktransformers / ik_llama.cpp / vLLM ...) – Let’s Build the Ultimate Reference Together 🚀
1
[removed]
2025-05-20T08:26:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqzrcj/share_your_local_llm_inference_benchmarks/
HereForAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzrcj
false
null
t3_1kqzrcj
/r/LocalLLaMA/comments/1kqzrcj/share_your_local_llm_inference_benchmarks/
false
false
self
1
null
Qwen3 8B model on par with Gemini 2.5 Flash for code summarization
1
[removed]
2025-05-20T08:18:10
https://www.reddit.com/r/LocalLLaMA/comments/1kqzn0o/qwen3_8b_model_on_par_with_gemini_25_flash_for/
kms_dev
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqzn0o
false
null
t3_1kqzn0o
/r/LocalLLaMA/comments/1kqzn0o/qwen3_8b_model_on_par_with_gemini_25_flash_for/
false
false
self
1
null
DeepSeek V3 benchmarks using ktransformers
7
I would like to try KTransformers for DeepSeek V3 inference. Before spending $10k on hardware I would like to understand what kind of inference performance I will get. Even though KTransformers v0.3 with open source Intel AMX optimizations has been released around 3 weeks ago I didn't find any 3rd party benchmarks fo...
2025-05-20T07:51:18
https://www.reddit.com/r/LocalLLaMA/comments/1kqz9uu/deepseek_v3_benchmarks_using_ktransformers/
pmur12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqz9uu
false
null
t3_1kqz9uu
/r/LocalLLaMA/comments/1kqz9uu/deepseek_v3_benchmarks_using_ktransformers/
false
false
self
7
null
I trapped Llama3.2B into an art installation and made it question its own existence endlessly
2
[removed]
2025-05-20T07:36:12
https://i.redd.it/5hxh6ql74w1f1.jpeg
Dull-Pressure9628
i.redd.it
1970-01-01T00:00:00
0
{}
1kqz2gw
false
null
t3_1kqz2gw
/r/LocalLLaMA/comments/1kqz2gw/i_trapped_llama32b_into_an_art_installation_and/
false
false
https://b.thumbs.redditm…9Ol9JWfRoaFg.jpg
2
{'enabled': True, 'images': [{'id': 'tABl5luYbuEq_m1AU03Hlfx-TKU-ejaP_csFHeckjEk', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jpeg?width=108&crop=smart&auto=webp&s=f3bd83a1009781592dc6855f6ae1ba7899f0cc72', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/5hxh6ql74w1f1.jp...
AM5 motherboard for 2x RTX 5060 Ti 16 GB
6
Hello there, I've been looking for a couple of days already with no success as to what motherboard could support 2x RTX 5060 Ti 16 GB GPUs at maximum speed. It is a PCIe 5.0 8x GPU, but I am unsure whether it can take full advantage of it or is for example 4.0 8x enough. I would use them for running LLMs as well as tra...
2025-05-20T07:28:47
https://www.reddit.com/r/LocalLLaMA/comments/1kqyypq/am5_motherboard_for_2x_rtx_5060_ti_16_gb/
cybran3
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyypq
false
null
t3_1kqyypq
/r/LocalLLaMA/comments/1kqyypq/am5_motherboard_for_2x_rtx_5060_ti_16_gb/
false
false
self
6
null
How fast can you serve a qwen2 7B model on single H100?
1
I am only getting 4Hz with acceleration from TRT-LLM, which seems slow to me. Is this expected?
2025-05-20T07:20:50
https://www.reddit.com/r/LocalLLaMA/comments/1kqyur8/how_fast_can_you_serve_a_qwen2_7b_model_on_single/
YeBigBear
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyur8
false
null
t3_1kqyur8
/r/LocalLLaMA/comments/1kqyur8/how_fast_can_you_serve_a_qwen2_7b_model_on_single/
false
false
self
1
null
Is there any company which providers pay per use GPU Server?
0
I am looking for companies that let you deploy things and only charge for the time you use them, just like AWS Lambda. I came across Replicate, but it seems a bit on the costly side. Any other alternatives?
2025-05-20T06:54:43
https://www.reddit.com/r/LocalLLaMA/comments/1kqyh1x/is_there_any_company_which_providers_pay_per_use/
DefiantScarcity3133
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqyh1x
false
null
t3_1kqyh1x
/r/LocalLLaMA/comments/1kqyh1x/is_there_any_company_which_providers_pay_per_use/
false
false
self
0
null
Sliding Window Attention support merged into llama.cpp, dramatically reducing the memory requirements for running Gemma 3
514
2025-05-20T06:48:35
https://github.com/ggml-org/llama.cpp/pull/13194
-p-e-w-
github.com
1970-01-01T00:00:00
0
{}
1kqye2t
false
null
t3_1kqye2t
/r/LocalLLaMA/comments/1kqye2t/sliding_window_attention_support_merged_into/
false
false
https://a.thumbs.redditm…x3G4IEI0ohz0.jpg
514
{'enabled': False, 'images': [{'id': 'IxmwlAJl6oKhAixWuUbMh2U0Ae8m7JQDIGto5AkmHeY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wwo-l6Lp28bzCUco8EwP9KcszHoY94gQORkIHOKSj3w.jpg?width=108&crop=smart&auto=webp&s=0a0716d4b5311e7bf8bcfd10a7d56cf206aea11d', 'width': 108}, {'height': 108, 'url': 'h...
I made local Ollama LLM GUI for macOS.
23
Hey r/LocalLLaMA! 👋 I'm excited to share a macOS GUI I've been working on for running local LLMs, called macLlama! It's currently at version 1.0.3. macLlama aims to make using Ollama even easier, especially for those wanting a more visual and user-friendly experience. Here are the key features: * **Ollama Server M...
2025-05-20T06:26:16
https://i.redd.it/j7vnr1ocrv1f1.png
gogimandoo
i.redd.it
1970-01-01T00:00:00
0
{}
1kqy2kc
false
null
t3_1kqy2kc
/r/LocalLLaMA/comments/1kqy2kc/i_made_local_ollama_llm_gui_for_macos/
false
false
https://b.thumbs.redditm…Ea-gmsM9TTes.jpg
23
{'enabled': True, 'images': [{'id': 'L6rTK68Etf_FpEniNcJBHz-OFVnvBDorVk-NvvmGG94', 'resolutions': [{'height': 110, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.png?width=108&crop=smart&auto=webp&s=45357760805470f09b2b0d8eebbffde153464e7a', 'width': 108}, {'height': 220, 'url': 'https://preview.redd.it/j7vnr1ocrv1f1.pn...
I created a never-ending story generator running local LLMs on my desktop.
1
[removed]
2025-05-20T06:02:17
https://www.reddit.com/r/LocalLLaMA/comments/1kqxq78/i_created_a_neverending_story_generator_running/
Super-Action3298
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqxq78
false
null
t3_1kqxq78
/r/LocalLLaMA/comments/1kqxq78/i_created_a_neverending_story_generator_running/
false
false
self
1
{'enabled': False, 'images': [{'id': 'S409SasqYcx_GUZ9RTqjucd0yhcAT7--IBgD3U7cLa8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3K8YvmzPmn9AVt_vx5fNEbkhJN56D38hlXNSSX2RD-M.jpg?width=108&crop=smart&auto=webp&s=e05a5b7d772c13aaa998d37df68da0287dbf6cd0', 'width': 108}, {'height': 162, 'url': 'h...
Microsoft unveils “USB-C for AI apps.” I open-sourced the same concept 3 days earlier—proof inside.
356
• I released *llmbasedos* on 16 May. • Microsoft showed an almost identical “USB-C for AI” pitch on 19 May. • Same idea, mine is already running and Apache-2.0. 16 May 09:14 UTC GitHub tag v0.1 16 May 14:27 UTC Launch post on r/LocalLLaMA 19 May 16:00 UTC Verge headline “Windows gets the USB-C of AI apps” ...
2025-05-20T05:32:20
https://github.com/iluxu/llmbasedos
iluxu
github.com
1970-01-01T00:00:00
0
{}
1kqxa25
false
null
t3_1kqxa25
/r/LocalLLaMA/comments/1kqxa25/microsoft_unveils_usbc_for_ai_apps_i_opensourced/
false
false
https://a.thumbs.redditm…91rX-1gyJCw0.jpg
356
{'enabled': False, 'images': [{'id': 'UIMSzRR3wmdsdEEI9k_f63TZnSEiCtwUSUlkgRTvIuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bzRdKyansO1kJ-qEk0diKPCKD02A4z1C6vyWkV3u2bE.jpg?width=108&crop=smart&auto=webp&s=de8a4224c8f6cc24af9471c2b55639ccc29a30db', 'width': 108}, {'height': 108, 'url': 'h...
Wouldn't it be great to have benchmarks for code speed
0
I was thinking of a benchmark where the code the LLM produces is timed. That could be very cool. I don't think that exists at the moment.
2025-05-20T05:21:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqx44s/wouldnt_it_be_great_to_have_benchmarks_for_code/
MrMrsPotts
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqx44s
false
null
t3_1kqx44s
/r/LocalLLaMA/comments/1kqx44s/wouldnt_it_be_great_to_have_benchmarks_for_code/
false
false
self
0
null
Best scale-to-zero fine-tuned qwen-2-5-32b-coder-instruct host?
5
I have tried Predibase and looked into some other providers but have been very frustrated finding a simple way to host a **qwen-2-5-32b-coder** (and/or **coder-instruct**) model which I can then incrementally fine-tune thereafter. I couldn't even get the model to load properly on Predibase, but spent a few dollars just...
2025-05-20T05:13:51
https://www.reddit.com/r/LocalLLaMA/comments/1kqwzz7/best_scaletozero_finetuned_qwen2532bcoderinstruct/
Synapse709
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqwzz7
false
null
t3_1kqwzz7
/r/LocalLLaMA/comments/1kqwzz7/best_scaletozero_finetuned_qwen2532bcoderinstruct/
false
false
self
5
null
Very slow inference using LLAVA with LLama.cpp vs LM Studio
1
[removed]
2025-05-20T05:03:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqwufs/very_slow_inference_using_llava_with_llamacpp_vs/
wayl
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqwufs
false
null
t3_1kqwufs
/r/LocalLLaMA/comments/1kqwufs/very_slow_inference_using_llava_with_llamacpp_vs/
false
false
self
1
null
Now that I converted my N64 to Linux, what is the best NSFW model to run on it?
403
I need the model in the 4.5MB range.
2025-05-20T04:29:28
https://www.reddit.com/r/LocalLLaMA/comments/1kqw9xn/now_that_i_converted_my_n64_to_linux_what_is_the/
DeepWisdomGuy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqw9xn
false
null
t3_1kqw9xn
/r/LocalLLaMA/comments/1kqw9xn/now_that_i_converted_my_n64_to_linux_what_is_the/
false
false
nsfw
403
null
What are currently the "best" solutions for Multimodal data extraction/ingestion available to us?
6
Doing some research on the topic and after a bunch of reading, figure I'd just directly crowdsource the question. I'll aggregate the responses, do some additional research, possibly some testing. Maybe I'll provide some feedback on my findings. Specifically focusing on document extraction Some notes and requiremen...
2025-05-20T04:26:02
https://www.reddit.com/r/LocalLLaMA/comments/1kqw7vl/what_are_currently_the_best_solutions_for/
joomla00
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqw7vl
false
null
t3_1kqw7vl
/r/LocalLLaMA/comments/1kqw7vl/what_are_currently_the_best_solutions_for/
false
false
self
6
null
8x 32GB V100 GPU server performance
1
[removed]
2025-05-20T04:10:38
https://www.reddit.com/r/LocalLLaMA/comments/1kqvyga/8x_32gb_v100_gpu_server_performance/
tfinch83
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvyga
false
null
t3_1kqvyga
/r/LocalLLaMA/comments/1kqvyga/8x_32gb_v100_gpu_server_performance/
false
false
self
1
null
Speaking of the OpenAI Privacy Policy
1
[removed]
2025-05-20T04:01:01
https://www.reddit.com/r/LocalLLaMA/comments/1kqvs3x/speaking_of_the_openai_privacy_policy/
MrJaxendale
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvs3x
false
null
t3_1kqvs3x
/r/LocalLLaMA/comments/1kqvs3x/speaking_of_the_openai_privacy_policy/
false
false
self
1
null
What ai is best for Chinese to English translation currently?
1
[removed]
2025-05-20T03:59:58
https://www.reddit.com/r/LocalLLaMA/comments/1kqvrbe/what_ai_is_best_for_chinese_to_english/
Civil_Candidate_824
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqvrbe
false
null
t3_1kqvrbe
/r/LocalLLaMA/comments/1kqvrbe/what_ai_is_best_for_chinese_to_english/
false
false
self
1
null
SmolChat - An Android App to run SLMs/LLMs locally, on-device is now available on Google Play
97
After nearly six months of development, SmolChat is now available on Google Play in 170+ countries and in two languages, English and simplified Chinese. SmolChat allows users to download LLMs and use them offline on their Android device, with a clean and easy-to-use interface. Users can group chats into folders, tune ...
2025-05-20T03:29:29
https://play.google.com/store/apps/details?id=io.shubham0204.smollmandroid&pcampaignid=web_share
shubham0204_dev
play.google.com
1970-01-01T00:00:00
0
{}
1kqv7lm
false
null
t3_1kqv7lm
/r/LocalLLaMA/comments/1kqv7lm/smolchat_an_android_app_to_run_slmsllms_locally/
false
false
https://b.thumbs.redditm…td8G86OmT0rA.jpg
97
{'enabled': False, 'images': [{'id': 'YAXsKRJUddjJfoP_69g7_D7TsVNUnG3fBGqZxNRW16M', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tUsrygHCoWdz2ebRdoSCY6YIEFIZ4gy4ejJadtdGwO4.jpg?width=108&crop=smart&auto=webp&s=ce5cb916591b157dde7cbe6a30b17ef5e7d83e96', 'width': 108}, {'height': 216, 'url': '...
SEED-GRPO: Semantic Entropy-Aware GRPO for Math Reasoning (56.7 AIME24 @ 7B)
1
[removed]
2025-05-20T02:59:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqunwc/seedgrpo_semantic_entropyaware_grpo_for_math/
Competitive_Pilot_75
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqunwc
false
null
t3_1kqunwc
/r/LocalLLaMA/comments/1kqunwc/seedgrpo_semantic_entropyaware_grpo_for_math/
false
false
self
1
{'enabled': False, 'images': [{'id': 'hN3y7EstbkCtTMI3t0I9W7fHqwn6_Yu7uckuVriG2YM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UKqrPpO9tAmEdMq5VwN1iyvgVpmZg7_n76kO8VVRuf8.jpg?width=108&crop=smart&auto=webp&s=fb0aa88064eb9c37f96e55831097d1860ae60b55', 'width': 108}, {'height': 116, 'url': 'h...
Mindblowing demo: John Link led a team of AI agents to discover a forever-chemical-free immersion coolant using Microsoft Discovery.
393
2025-05-20T02:35:05
https://v.redd.it/9b7qevfimu1f1
cjsalva
v.redd.it
1970-01-01T00:00:00
0
{}
1kqu7dv
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/9b7qevfimu1f1/DASHPlaylist.mpd?a=1750300520%2CNWRkYWMwYmMwNTIwOWIyYzQyZjQzNGRiZjYzNzEyYmI0MTVkZWQzY2U2OTAxMWRmYWJlZjk2MzA2OWU1YTY2Ng%3D%3D&v=1&f=sd', 'duration': 127, 'fallback_url': 'https://v.redd.it/9b7qevfimu1f1/DASH_720.mp4?source=fallback', 'h...
t3_1kqu7dv
/r/LocalLLaMA/comments/1kqu7dv/mindblowing_demo_john_link_led_a_team_of_ai/
false
false
https://external-preview…3d620b7281f478cc
393
{'enabled': False, 'images': [{'id': 'dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dHQ1MWk0aGltdTFmMag1LLoTdbDTHM6ta6WYNiJEU-q2NTMmBmX376-kobql.png?width=108&crop=smart&format=pjpg&auto=webp&s=d97b65e767ebea96c03d0128ab6b96c0fa6c7...
A model recommendation for creative writing
1
Which one should I be using to assist with writing papers? I plan to run it locally for normal stuff like chatting or generating some drafts on my RTX 4080 laptop. Will it be usable?
2025-05-20T02:34:12
https://www.reddit.com/r/LocalLLaMA/comments/1kqu6t9/a_model_recommendation_for_creative_writing/
MessageOk4432
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqu6t9
false
null
t3_1kqu6t9
/r/LocalLLaMA/comments/1kqu6t9/a_model_recommendation_for_creative_writing/
false
false
self
1
null
Ultimate private AI suite
1
[removed]
2025-05-20T01:52:56
https://i.redd.it/0q9kuma3fu1f1.jpeg
ConstanceDover
i.redd.it
1970-01-01T00:00:00
0
{}
1kqtdea
false
null
t3_1kqtdea
/r/LocalLLaMA/comments/1kqtdea/ultimate_private_ai_suite/
false
false
https://b.thumbs.redditm…3jAp0FDMIFCc.jpg
1
{'enabled': True, 'images': [{'id': 'lNjmKa7fhF-LWp8MtrbyxSPEmSdG-bIUQkMdGEJTXFA', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.jpeg?width=108&crop=smart&auto=webp&s=acd7ec5691d5f19e8d7627e432b26eb005f117fc', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/0q9kuma3fu1f1.j...
Help with local 3D model ?
1
[removed]
2025-05-20T01:23:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqsskl/help_with_local_3d_model/
Feeling-Buy12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqsskl
false
null
t3_1kqsskl
/r/LocalLLaMA/comments/1kqsskl/help_with_local_3d_model/
false
false
self
1
null
How to Generate AI Images Locally on AMD RX 9070XT with ComfyUI + ZLUDA ...
1
2025-05-20T00:40:12
https://youtube.com/watch?v=U76ku-7AFV0&si=_RH1JRqbUig1Xo6q
Willow-Most
youtube.com
1970-01-01T00:00:00
0
{}
1kqrxoa
false
{'oembed': {'author_name': 'TechChuckle', 'author_url': 'https://www.youtube.com/@TechChuckle', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/U76ku-7AFV0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop...
t3_1kqrxoa
/r/LocalLLaMA/comments/1kqrxoa/how_to_generate_ai_images_locally_on_amd_rx/
false
false
https://b.thumbs.redditm…Akz9iJjGCVBA.jpg
1
{'enabled': False, 'images': [{'id': 'WJ8UKX7K8vfzsrR4TN_KtCaDmm7HOwlgJ77nJNFUxNU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/EZB1aXpqO4QZ2FvNrgWoi8CUbNh6SLwX0x-K9OuAk8g.jpg?width=108&crop=smart&auto=webp&s=8c0cb41e8ec37eafb32762a98ab6102c714cc806', 'width': 108}, {'height': 162, 'url': 'h...
Looking for a high quality chat-dataset to mix with my reasoning datasets for fine-tuning
4
I'm looking for some good chat datasets that we could mix with our reasoning datasets for fine-tuning. Most of the ones I've seen on Hugging Face are very junky. Curious what others have found useful. Thanks!
2025-05-20T00:15:39
https://www.reddit.com/r/LocalLLaMA/comments/1kqrg9x/looking_for_a_high_quality_chatdataset_to_mix/
mutatedmonkeygenes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqrg9x
false
null
t3_1kqrg9x
/r/LocalLLaMA/comments/1kqrg9x/looking_for_a_high_quality_chatdataset_to_mix/
false
false
self
4
null
Reasoning Vision Language Model 12-24B?
2
I'm trying to find a reasoning vision language model from 12-24B. Ideally 24B...but all I can find is one or the other.
2025-05-20T00:00:08
https://www.reddit.com/r/LocalLLaMA/comments/1kqr4t6/reasoning_vision_language_model_1224b/
thejacer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqr4t6
false
null
t3_1kqr4t6
/r/LocalLLaMA/comments/1kqr4t6/reasoning_vision_language_model_1224b/
false
false
self
2
null
I use LLama to apply to 10,000 software engineering jobs in 5 days.
66
No, I’m not desperate or unemployed. I just wanted to get a real feel for what the global tech job market looks like in 2025. **Here’s the background on the profile I used:** * Master’s degree in Computer Science from a top European university, graduated with honors * 5 solid years at a FAANG company * Fluent in Engl...
2025-05-19T23:49:41
https://www.reddit.com/r/LocalLLaMA/comments/1kqqxbf/i_use_llama_to_apply_to_10000_software/
Elieroos
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqqxbf
false
null
t3_1kqqxbf
/r/LocalLLaMA/comments/1kqqxbf/i_use_llama_to_apply_to_10000_software/
false
false
self
66
{'enabled': False, 'images': [{'id': 'LisIUUGScx13mD-x3gFPv-giEc_OVliq9xdUF77fqKE', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/-p735bf3AY5ydtMxRmEXn9cm508xv_yLsoT9Fs_n6vQ.jpg?width=108&crop=smart&auto=webp&s=8e5f4eecb8f4e20584a0a45a6c7b3d80bca50562', 'width': 108}, {'height': 113, 'url': 'h...
help with permissions
1
[removed]
2025-05-19T23:27:54
https://www.reddit.com/r/LocalLLaMA/comments/1kqqh4l/help_with_permissions/
fazetag
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqqh4l
false
null
t3_1kqqh4l
/r/LocalLLaMA/comments/1kqqh4l/help_with_permissions/
false
false
self
1
null
Using your local Models to run Agents! (Open Source, 100% local)
29
2025-05-19T23:18:17
https://v.redd.it/5p9moal9nt1f1
Roy3838
v.redd.it
1970-01-01T00:00:00
0
{}
1kqq9t9
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5p9moal9nt1f1/DASHPlaylist.mpd?a=1750288714%2CMTdjMDQ1MjFkYTMwOTEzZGY1NTA2NDA3ZDgzYzdjMzhhNmU0MzNkYTFkZTNiYzUwNzgyN2I1MTVhNzY3NGE0MA%3D%3D&v=1&f=sd', 'duration': 88, 'fallback_url': 'https://v.redd.it/5p9moal9nt1f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kqq9t9
/r/LocalLLaMA/comments/1kqq9t9/using_your_local_models_to_run_agents_open_source/
false
false
https://external-preview…a63f9bf68adce4a5
29
{'enabled': False, 'images': [{'id': 'd2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/d2Fkam5hbDludDFmMUseoVY8fQTbYJfjqlW4w2NBhsFRYZKCiBtmbkUNYsUI.png?width=108&crop=smart&format=pjpg&auto=webp&s=3c31d0195427856551ff3b72917261e18d053...
Looking to collaborate with and get advice from a few experienced developers interested in AI augmented development
0
Hello!  I’m a software engineer that’s been developing applications and infrastructure automation systems for over 20 years and I love it, and I am fixing to start a project that is meant to enhance my productivity and coding happiness by developing an architecture for a development platform that can support groups of ...
2025-05-19T23:16:25
https://www.reddit.com/r/LocalLLaMA/comments/1kqq8f6/looking_to_collaborate_with_and_get_advice_from_a/
awebb78
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqq8f6
false
null
t3_1kqq8f6
/r/LocalLLaMA/comments/1kqq8f6/looking_to_collaborate_with_and_get_advice_from_a/
false
false
self
0
null
[R] [Q] Why does RoPE need to be decoupled in DeepSeek V2/V3's MLA? I don't get why it prevents prefix key reuse
1
[removed]
2025-05-19T23:15:43
https://www.reddit.com/r/LocalLLaMA/comments/1kqq7vr/r_q_why_does_rope_need_to_be_decoupled_in/
gerrickle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqq7vr
false
null
t3_1kqq7vr
/r/LocalLLaMA/comments/1kqq7vr/r_q_why_does_rope_need_to_be_decoupled_in/
false
false
self
1
null
Demo of Sleep-time Compute to Reduce LLM Response Latency
76
This is a demo of Sleep-time compute to reduce LLM response latency.  Link: [https://github.com/ronantakizawa/sleeptimecompute](https://github.com/ronantakizawa/sleeptimecompute) Sleep-time compute improves LLM response latency by using the idle time between interactions to pre-process the context, allowing the model...
2025-05-19T22:37:52
https://i.redd.it/h9iyy36cgt1f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1kqpemo
false
null
t3_1kqpemo
/r/LocalLLaMA/comments/1kqpemo/demo_of_sleeptime_compute_to_reduce_llm_response/
false
false
https://b.thumbs.redditm…fLJLecYbJRgQ.jpg
76
{'enabled': True, 'images': [{'id': 'f7JQxFMobvGipTqhTxy0rdLPruEQ_EW0gbdTrF9XO14', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png?width=108&crop=smart&auto=webp&s=e835c0cefdf2ae0861e036430274b7d97348a741', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/h9iyy36cgt1f1.png...
I got a Llama3.2b running on my pi and made it question its own existence endlessly
2
[removed]
2025-05-19T22:37:28
https://i.redd.it/3cesh685gt1f1.jpeg
Dull-Pressure9628
i.redd.it
1970-01-01T00:00:00
0
{}
1kqpeam
false
null
t3_1kqpeam
/r/LocalLLaMA/comments/1kqpeam/i_got_a_llama32b_running_on_my_pi_and_made_it/
false
false
https://b.thumbs.redditm…ez7F4JWB3wcI.jpg
2
{'enabled': True, 'images': [{'id': '4u_dBWKA-rbbAbIUAzsgsHe27q21v78hFKEjC4dLpjo', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/3cesh685gt1f1.jpeg?width=108&crop=smart&auto=webp&s=e9a60a4e3e704c6efc5945f9d83dde590b33bfa6', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/3cesh685gt1f1.jp...
Building a Fully Local AI Rig on AMD Ryzen 9 — What Use-Cases Should I Focus On?
1
[removed]
2025-05-19T22:30:20
https://www.reddit.com/r/LocalLLaMA/comments/1kqp8m5/building_a_fully_local_ai_rig_on_amd_ryzen_9_what/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp8m5
false
null
t3_1kqp8m5
/r/LocalLLaMA/comments/1kqp8m5/building_a_fully_local_ai_rig_on_amd_ryzen_9_what/
false
false
self
1
null
What are your dream use-cases for a totally local AI rig?
1
[removed]
2025-05-19T22:25:49
https://www.reddit.com/r/LocalLLaMA/comments/1kqp4vk/what_are_your_dream_usecases_for_a_totally_local/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp4vk
false
null
t3_1kqp4vk
/r/LocalLLaMA/comments/1kqp4vk/what_are_your_dream_usecases_for_a_totally_local/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JdBt9k1bXwExyyrZ-OhRp27TypSYkF5YaPpUMhEpsXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=108&crop=smart&auto=webp&s=082269d9fc14ff59a612334f36b23e4ff8fc75a8', 'width': 108}, {'height': 108, 'url': 'h...
I added automatic language detection and text-to-speech response to AI Runner
9
2025-05-19T22:25:00
https://v.redd.it/of7p2tkzdt1f1
w00fl35
/r/LocalLLaMA/comments/1kqp46f/i_added_automatic_language_detection_and/
1970-01-01T00:00:00
0
{}
1kqp46f
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/of7p2tkzdt1f1/DASHPlaylist.mpd?a=1750415106%2CZjExZTU0NjhhNTE2OTFlNGRlNTNhYjRmMGQwYmZkYjgxYzQwYmU4MmY2M2IxOTc1ZTVlMzYwZmEzNmFlMWRlMg%3D%3D&v=1&f=sd', 'duration': 75, 'fallback_url': 'https://v.redd.it/of7p2tkzdt1f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kqp46f
/r/LocalLLaMA/comments/1kqp46f/i_added_automatic_language_detection_and/
false
false
https://external-preview…71786e7ca17d9e65
9
{'enabled': False, 'images': [{'id': 'bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnh3ZjBiaDJldDFmMTuTBNgqFywp7VxarWureDzUbFixi-3H8s4hiED7R6fh.png?width=108&crop=smart&format=pjpg&auto=webp&s=038dd1b32fc9de3de3e90418c5f6269e2ee63...
Best model to run on 8GB VRAM for coding?
3
I'd like to make use of my GeForce 1080 (8 GB VRAM) for assisting me with coding (C, Python, numerical physics simulations, GUIs, and ESP32 programming). Is there any useful model that'd be worth running? I know I won't be running something cutting-edge but I could do with some help. I can wait minutes for answers so...
2025-05-19T22:20:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqp0mm/best_model_to_run_on_8gb_vram_for_coding/
cosmoschtroumpf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqp0mm
false
null
t3_1kqp0mm
/r/LocalLLaMA/comments/1kqp0mm/best_model_to_run_on_8gb_vram_for_coding/
false
false
self
3
null
Terminal-Bench: A new benchmark for AI agents in the terminal
1
[removed]
2025-05-19T22:03:07
http://tbench.ai
ombedrizoobo
tbench.ai
1970-01-01T00:00:00
0
{}
1kqoltu
false
null
t3_1kqoltu
/r/LocalLLaMA/comments/1kqoltu/terminalbench_a_new_benchmark_for_ai_agents_in/
false
false
default
1
null
DiffusionBee v2.12 — Flux.1, Textual Inversion, NSFW Blocking & More (Mac-only)
5
We just shipped a new DiffusionBee update for all the Mac-wielding degenerates and offline-creatives in the room. (If you’re not on arm64 or at least macOS 13, go touch some grass and come back later.) **🆕 What’s New:** * Flux.1 model support (arm64, macOS 13+ only) * Finally, yes, you can run Flux.1 natively. Sc...
2025-05-19T21:33:56
https://www.reddit.com/r/LocalLLaMA/comments/1kqnwrv/diffusionbee_v212_flux1_textual_inversion_nsfw/
lostbutyoucanfollow
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqnwrv
false
null
t3_1kqnwrv
/r/LocalLLaMA/comments/1kqnwrv/diffusionbee_v212_flux1_textual_inversion_nsfw/
false
false
nsfw
5
{'enabled': False, 'images': [{'id': 'OD4XEbuRyY-yd21Goppk_3oXtEuwpye_LNo55Lmulrg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L0qPfoDIGfF01WruFUdvYXL7Li8z9cxn9MQHPbDYJzU.jpg?width=108&crop=smart&auto=webp&s=30e34ad1a1780dca3c76d5f60fcfc622b85eb1c4', 'width': 108}, {'height': 108, 'url': 'h...
A person can dream,
0
512bit x 4gb_gddr7(32Gbps) x dual-pcb x clamshell = bandwidth: 2048GBps, memory: 256GB If we can get it for less than 2999USD before 2027,
2025-05-19T21:24:35
https://www.reddit.com/r/LocalLLaMA/comments/1kqnofx/a_person_can_dream/
Optifnolinalgebdirec
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqnofx
false
null
t3_1kqnofx
/r/LocalLLaMA/comments/1kqnofx/a_person_can_dream/
false
false
self
0
null
ELO Score in Chatbot Arena Over Time (Graph + Data)
0
Hey everyone! I've been trying for a while to find data to create a **graph showing the ELO evolution of LLM models in Chatbot Arena over time**. But since LMSYS doesn't publish when each model was added, I decided to take matters into my own hands and manually create a **dataset that tracked down the release dates fo...
2025-05-19T21:02:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqn4mm/elo_score_in_chatbot_arena_over_time_graph_data/
coconautico
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqn4mm
false
null
t3_1kqn4mm
/r/LocalLLaMA/comments/1kqn4mm/elo_score_in_chatbot_arena_over_time_graph_data/
false
false
https://b.thumbs.redditm…wmPkItBKVf3s.jpg
0
null
Local LLM based chatbot for habit datapoint storage /recall
1
[removed]
2025-05-19T20:34:00
https://www.reddit.com/r/LocalLLaMA/comments/1kqmesj/local_llm_based_chatbot_for_habit_datapoint/
Altruistic-Finger-12
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqmesj
false
null
t3_1kqmesj
/r/LocalLLaMA/comments/1kqmesj/local_llm_based_chatbot_for_habit_datapoint/
false
false
self
1
null
👀 Microsoft just created an MCP Registry for Windows
261
2025-05-19T20:12:32
https://i.redd.it/6lwf9y6eqs1f1.png
eternviking
i.redd.it
1970-01-01T00:00:00
0
{}
1kqluy9
false
null
t3_1kqluy9
/r/LocalLLaMA/comments/1kqluy9/microsoft_just_created_an_mcp_registry_for_windows/
false
false
https://b.thumbs.redditm…FPDzYNB90kwY.jpg
261
{'enabled': True, 'images': [{'id': '-JYFo0kmQAmGOQ9kBV2QFjgMdtV4ZTZ4Cz6FHWcvAdM', 'resolutions': [{'height': 58, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png?width=108&crop=smart&auto=webp&s=f19758a78d1b93823d64f553e78020268022aa38', 'width': 108}, {'height': 116, 'url': 'https://preview.redd.it/6lwf9y6eqs1f1.png...
Dell Unveils The Integration of NVIDIA’s GB300 “Blackwell Ultra” GPUs With Its AI Factories, Taking Performance & Scalability to New Levels
0
2025-05-19T19:52:59
https://wccftech.com/dell-unveils-the-integration-of-nvidia-blackwell-ultra-gpus-with-its-ai-factories/
_SYSTEM_ADMIN_MOD_
wccftech.com
1970-01-01T00:00:00
0
{}
1kqld0g
false
null
t3_1kqld0g
/r/LocalLLaMA/comments/1kqld0g/dell_unveils_the_integration_of_nvidias_gb300/
false
false
https://b.thumbs.redditm…z5Wf1AXmhWcY.jpg
0
{'enabled': False, 'images': [{'id': 'I2Q5ilSW15uBQAphbtRAAaiOKe9DpBubFURvebdnZDE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/frFfGHzX6yewr1UOids_2L87YHO6yflQPShTpGYNUQA.jpg?width=108&crop=smart&auto=webp&s=d5f86bbf1fdb9048f714a9de4202c303e07bd898', 'width': 108}, {'height': 108, 'url': 'h...
Has anyone here used a modded 22gb Rtx 2080 ti
2
I saw that you can buy these on eBay for about 500
2025-05-19T19:51:10
https://www.reddit.com/r/LocalLLaMA/comments/1kqlbdz/has_anyone_here_used_a_modded_22gb_rtx_2080_ti/
Responsible-Bad5572
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqlbdz
false
null
t3_1kqlbdz
/r/LocalLLaMA/comments/1kqlbdz/has_anyone_here_used_a_modded_22gb_rtx_2080_ti/
false
false
self
2
null
Be confident in your own judgement and reject benchmark JPEG's
153
2025-05-19T19:18:50
https://i.redd.it/1wtj3q6ngs1f1.jpeg
ForsookComparison
i.redd.it
1970-01-01T00:00:00
0
{}
1kqkhhy
false
null
t3_1kqkhhy
/r/LocalLLaMA/comments/1kqkhhy/be_confident_in_your_own_judgement_and_reject/
false
false
https://b.thumbs.redditm…_0hFvbFtVy-I.jpg
153
{'enabled': True, 'images': [{'id': '6A5BTmsryZQPa86Z2YPhIomn-HK78IYNxudaudPaWTo', 'resolutions': [{'height': 162, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.jpeg?width=108&crop=smart&auto=webp&s=ccb4e8acb070a5b80b5e80743c1fdbc4d71fe9ba', 'width': 108}, {'height': 325, 'url': 'https://preview.redd.it/1wtj3q6ngs1f1.j...
CoT stress question 🥵
1
Test your CoT LLM with this question, enjoy! Imagine a perfectly spherical, frictionless planet entirely covered in a uniform layer of perfectly incompressible water. If a single drop of the same water is gently placed on the surface of this planet, describe in detail what will happen immediately and over time, conside...
2025-05-19T19:06:20
https://www.reddit.com/r/LocalLLaMA/comments/1kqk61c/cot_stress_question/
Illustrious-Dot-6888
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqk61c
false
null
t3_1kqk61c
/r/LocalLLaMA/comments/1kqk61c/cot_stress_question/
false
false
self
1
null
Looking for a 8b param to run with my data set for an AI personal assistant
2
I want to train an open source LLM on my own data (already cleaned it and have everything right). I want to run one version on the cloud and one version on my own computer. What is the best current open source model to use?
2025-05-19T18:51:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqjrwi/looking_for_a_8b_param_to_run_with_my_data_set/
jinstronda
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqjrwi
false
null
t3_1kqjrwi
/r/LocalLLaMA/comments/1kqjrwi/looking_for_a_8b_param_to_run_with_my_data_set/
false
false
self
2
null
Global Agent Hackathon by Agno is live!
1
[removed]
2025-05-19T18:18:28
https://www.reddit.com/r/LocalLLaMA/comments/1kqixb3/global_agent_hackathon_by_agno_is_live/
superconductiveKyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqixb3
false
null
t3_1kqixb3
/r/LocalLLaMA/comments/1kqixb3/global_agent_hackathon_by_agno_is_live/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gja_zmWrbMqADHtjzbX5Ke-UtNyuew-F59tTPxLmBDY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=108&crop=smart&auto=webp&s=50b28015ec3b4747548ddb6053f1738999bc7d2c', 'width': 108}, {'height': 108, 'url': 'h...
Evaluating the best models at translating German - open models beat DeepL!
47
2025-05-19T18:17:55
https://nuenki.app/blog/best_language_models_for_german_translation
Nuenki
nuenki.app
1970-01-01T00:00:00
0
{}
1kqiwu2
false
null
t3_1kqiwu2
/r/LocalLLaMA/comments/1kqiwu2/evaluating_the_best_models_at_translating_german/
false
false
https://b.thumbs.redditm…Qqp2N0tML6Fg.jpg
47
{'enabled': False, 'images': [{'id': 'sl5AWBXJbnd8seHGhHam-my2xN8-2MTiLgaFFv_9VgQ', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/DtSOjZDQhCIQrR9MXzfYsDwli-PvO8iAuPXRBhYivls.jpg?width=108&crop=smart&auto=webp&s=79a054dd227c6f5432f86d0aad2f733d56deb387', 'width': 108}, {'height': 118, 'url': 'h...
Global Agent Hackathon by Agno is live! ($25k in total prizes)
1
[removed]
2025-05-19T18:15:37
https://www.reddit.com/r/LocalLLaMA/comments/1kqiuog/global_agent_hackathon_by_agno_is_live_25k_in/
superconductiveKyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqiuog
false
null
t3_1kqiuog
/r/LocalLLaMA/comments/1kqiuog/global_agent_hackathon_by_agno_is_live_25k_in/
false
false
self
1
{'enabled': False, 'images': [{'id': 'gja_zmWrbMqADHtjzbX5Ke-UtNyuew-F59tTPxLmBDY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/IXNaAltUSQCDxp9493QiOnVFQu1k_3prNcIMNvoPcxo.jpg?width=108&crop=smart&auto=webp&s=50b28015ec3b4747548ddb6053f1738999bc7d2c', 'width': 108}, {'height': 108, 'url': 'h...
Low-Cost GPU Hosting for AI Models & Apps
1
[removed]
2025-05-19T17:54:03
https://www.reddit.com/r/LocalLLaMA/comments/1kqiaam/lowcost_gpu_hosting_for_ai_models_apps/
PrettyRevolution1842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqiaam
false
null
t3_1kqiaam
/r/LocalLLaMA/comments/1kqiaam/lowcost_gpu_hosting_for_ai_models_apps/
false
false
self
1
null
Microsoft On-Device AI Local Foundry (Windows & Mac)
30
2025-05-19T17:46:57
https://devblogs.microsoft.com/foundry/unlock-instant-on-device-ai-with-foundry-local/
AngryBirdenator
devblogs.microsoft.com
1970-01-01T00:00:00
0
{}
1kqi3m0
false
null
t3_1kqi3m0
/r/LocalLLaMA/comments/1kqi3m0/microsoft_ondevice_ai_local_foundry_windows_mac/
false
false
default
30
{'enabled': False, 'images': [{'id': 'GXOYCqFmllQ_joNo6QYadF6Vo4ZZQrLzaZTuxX3dRUE', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/s7HfWyX7uW6coTKNdAxswziHueC-nss9O7Clu8I3zyI.jpg?width=108&crop=smart&auto=webp&s=e7cfd488b1326c8819d404cff7c0d6d95cbafacb', 'width': 108}, {'height': 123, 'url': 'h...
VS Code: Open Source Copilot
237
What do you think of this move by Microsoft? Is it just me, or are the possibilities endless? We can build customizable IDEs with an entire company’s tech stack by integrating MCPs on top, without having to build everything from scratch.
2025-05-19T17:27:31
https://code.visualstudio.com/blogs/2025/05/19/openSourceAIEditor
DonTizi
code.visualstudio.com
1970-01-01T00:00:00
0
{}
1kqhljr
false
null
t3_1kqhljr
/r/LocalLLaMA/comments/1kqhljr/vs_code_open_source_copilot/
false
false
https://b.thumbs.redditm…np9p5OOeHd9I.jpg
237
{'enabled': False, 'images': [{'id': 'gI5UNbMliL5WbCNvXlrvhhJCFfPhXA7cvuQQB4dfGDg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/7Ri8YRwu_7FpWFvmcgOzjF960jd6eY_pMWtoGfUyNOA.jpg?width=108&crop=smart&auto=webp&s=d3f7da257b92799305872c9c552e30115cbf8f02', 'width': 108}, {'height': 121, 'url': 'h...
THIS is the Most Important GPU of 2025
0
More details on the B60, the dual B60 with 48GB, software support, and pricing
2025-05-19T17:23:29
https://youtu.be/vZupIBqKHqM?si=dWxSNH2jzO-1qCLC
FullstackSensei
youtu.be
1970-01-01T00:00:00
0
{}
1kqhhp8
false
{'oembed': {'author_name': 'Linus Tech Tips', 'author_url': 'https://www.youtube.com/@LinusTechTips', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/vZupIBqKHqM?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gy...
t3_1kqhhp8
/r/LocalLLaMA/comments/1kqhhp8/this_is_the_most_important_gpu_of_2025/
false
false
https://b.thumbs.redditm…i7MqDH6BLgKw.jpg
0
{'enabled': False, 'images': [{'id': 'hqX1XJPzsut6Pu5b5L2iprNjn5AigAFUIBDoGZv-orY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/avbb0bdxIyVexvXRXN65rlN0Aut6hfd4goOHVixsfP8.jpg?width=108&crop=smart&auto=webp&s=cb692f0197a11b2f180845081cc6c2a0d8836b46', 'width': 108}, {'height': 162, 'url': 'h...
A Machine Gun in the Land of Sticks and Stones
1
[removed]
2025-05-19T17:17:33
https://www.reddit.com/r/LocalLLaMA/comments/1kqhc9w/a_machine_gun_in_the_land_of_sticks_and_stones/
_redacted-
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqhc9w
false
null
t3_1kqhc9w
/r/LocalLLaMA/comments/1kqhc9w/a_machine_gun_in_the_land_of_sticks_and_stones/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JdBt9k1bXwExyyrZ-OhRp27TypSYkF5YaPpUMhEpsXw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lvmwDGS9MPRiZKL4EZcqjvZkcibg3uU1-7S5LJ5TJQY.jpg?width=108&crop=smart&auto=webp&s=082269d9fc14ff59a612334f36b23e4ff8fc75a8', 'width': 108}, {'height': 108, 'url': 'h...
Local LLMs show-down: More than 20 LLMs and one single Prompt
6
I became really curious about how far I could push LLMs and asked GPT-4o to help me craft a prompt that would make the models work really hard. Then I ran the same prompt through a selection of LLMs on my hardware along with a few commercial models for reference. You can read the results on my blog [https://blog.keke...
2025-05-19T17:15:55
https://www.reddit.com/r/LocalLLaMA/comments/1kqharr/local_llms_showdown_more_than_20_llms_and_one/
kekePower
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqharr
false
null
t3_1kqharr
/r/LocalLLaMA/comments/1kqharr/local_llms_showdown_more_than_20_llms_and_one/
false
false
self
6
null
Does Star Trek universe do mostly "vibe coding"?
1
[removed]
2025-05-19T17:12:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqh7rz/does_star_trek_universe_do_mostly_vibe_coding/
derekp7
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqh7rz
false
null
t3_1kqh7rz
/r/LocalLLaMA/comments/1kqh7rz/does_star_trek_universe_do_mostly_vibe_coding/
false
false
self
1
null
MLX LM now integrated within Hugging Face
62
thread: [https://x.com/victormustar/status/1924510517311287508](https://x.com/victormustar/status/1924510517311287508)
2025-05-19T17:09:53
https://v.redd.it/bvoizhqstr1f1
paf1138
v.redd.it
1970-01-01T00:00:00
0
{}
1kqh56l
false
{'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/bvoizhqstr1f1/DASHPlaylist.mpd?a=1750266709%2COGJlZjA4YTkwN2EzZjM2NDA1ZWFlNGRlZTNkNGZmNGJiOTkyMzQ2NDBlNTA3NmU0MmIzNTEzNWY1ZmU5NDg0MA%3D%3D&v=1&f=sd', 'duration': 16, 'fallback_url': 'https://v.redd.it/bvoizhqstr1f1/DASH_1080.mp4?source=fallback', 'h...
t3_1kqh56l
/r/LocalLLaMA/comments/1kqh56l/mlx_lm_now_integrated_within_hugging_face/
false
false
https://external-preview…8176fac38c32afee
62
{'enabled': False, 'images': [{'id': 'ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/ejhyMW5ocXN0cjFmMeXek4ObJQU75YpzzSznbvZU2j6Nva4vduBEs8qjugv3.png?width=108&crop=smart&format=pjpg&auto=webp&s=7875c6af7879db14178f5696692eb7035b104...
Drummer's Valkyrie 49B v1 - A strong, creative finetune of Nemotron 49B
70
2025-05-19T17:00:51
https://huggingface.co/TheDrummer/Valkyrie-49B-v1
TheLocalDrummer
huggingface.co
1970-01-01T00:00:00
0
{}
1kqgwh2
false
null
t3_1kqgwh2
/r/LocalLLaMA/comments/1kqgwh2/drummers_valkyrie_49b_v1_a_strong_creative/
false
false
https://b.thumbs.redditm…mXbCzhi6qXbI.jpg
70
{'enabled': False, 'images': [{'id': '6AW3V1L19ttaHqvJIOwp6QUcrDdh2aQdvn5BFvbdIxA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7-TxzLinFktWh46KZdKq3Yh3o06ZI3kUSrN3cJLAfu4.jpg?width=108&crop=smart&auto=webp&s=1e894be416e828a88e217ba1f2b4bdfbf53e0746', 'width': 108}, {'height': 116, 'url': 'h...
Local speech chat with Gemma3, speaking like a polyglot with multiple-personalities
20
Low-latency, speech-to(text-to)-speech conversation in any Linux window: [Demo video here](https://github.com/QuantiusBenignus/BlahST/blob/main/SPEECH-CHAT.md) This is **blahstbot**, part of the UI-less, text-in-any-window, BlahST for Linux.
2025-05-19T16:19:07
https://www.reddit.com/r/LocalLLaMA/comments/1kqfu8l/local_speech_chat_with_gemma3_speaking_like_a/
QuantuisBenignus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqfu8l
false
null
t3_1kqfu8l
/r/LocalLLaMA/comments/1kqfu8l/local_speech_chat_with_gemma3_speaking_like_a/
false
false
self
20
{'enabled': False, 'images': [{'id': 'TnD_OFY97c8GzlHvJXKbZSdvRBzDietvVU7KI3POcMY', 'resolutions': [{'height': 28, 'url': 'https://external-preview.redd.it/_DqZfzBO30P7lOkQ2HQCV602O9DD7LGKvVVJ_O8IA0g.jpg?width=108&crop=smart&auto=webp&s=2dbdffba8fa64b438ada5677f9f957ac03937852', 'width': 108}, {'height': 56, 'url': 'ht...
I'm trying to create a lightweight LLM with limited context window using only MLP layers
6
This is an ambitious and somewhat unconventional challenge, but I'm fascinated by the idea of exploring the limits of what pure feed-forward networks can achieve in language modeling, especially for highly resource-constrained environments. The goal is to build something incredibly efficient, perhaps for edge devices o...
2025-05-19T16:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1kqftyo/im_trying_to_create_a_lightweight_llm_with/
tagrib
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqftyo
false
null
t3_1kqftyo
/r/LocalLLaMA/comments/1kqftyo/im_trying_to_create_a_lightweight_llm_with/
false
false
self
6
null
OS/Software for running an LLM Server AND Gaming?
0
I've done research on the hardware, but I'm a bit confused about the software. I want to build a PC that I can access remotely to run LLM inference as well as do some single-player gaming over Moonlight streaming. Ideally with Wake-on-LAN to reduce power consumption. 1. Would Windows or Linux be the better choice here? (I...
2025-05-19T15:59:07
https://www.reddit.com/r/LocalLLaMA/comments/1kqfbxl/ossoftware_for_running_an_llm_server_and_gaming/
legit_split_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqfbxl
false
null
t3_1kqfbxl
/r/LocalLLaMA/comments/1kqfbxl/ossoftware_for_running_an_llm_server_and_gaming/
false
false
self
0
null
Best Non-Chinese Open Reasoning LLMs atm?
0
So before the inevitable comes up, yes I know that there isn't really much harm in running Qwen or Deepseek locally, but unfortunately bureaucracies gonna bureaucracy. I've been told to find a non-Chinese LLM to use, both for (yes, silly) security concerns and (slightly less silly) censorship concerns. I know Gemma is p...
2025-05-19T15:28:09
https://www.reddit.com/r/LocalLLaMA/comments/1kqekgh/best_nonchinese_open_reasoning_llms_atm/
ProbaDude
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqekgh
false
null
t3_1kqekgh
/r/LocalLLaMA/comments/1kqekgh/best_nonchinese_open_reasoning_llms_atm/
false
false
self
0
null
SFT with 8-bit quants or Mixed precision
1
[removed]
2025-05-19T15:04:35
https://www.reddit.com/r/LocalLLaMA/comments/1kqdze0/sft_with_8bit_quants_or_mixed_precision/
Circumstancision
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdze0
false
null
t3_1kqdze0
/r/LocalLLaMA/comments/1kqdze0/sft_with_8bit_quants_or_mixed_precision/
false
false
self
1
null
How to get ethical approval for research as an independent researcher?
1
[removed]
2025-05-19T14:57:46
https://www.reddit.com/r/LocalLLaMA/comments/1kqdt3x/how_to_get_ethical_approval_for_research_as_an/
TrainingCultural7548
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdt3x
false
null
t3_1kqdt3x
/r/LocalLLaMA/comments/1kqdt3x/how_to_get_ethical_approval_for_research_as_an/
false
false
self
1
null
Creating a "learning" coding assistant
0
So I have recently started using Xcode to create an iPhone app. I have never had the patience for writing code, so I've been using Gemini and have actually come pretty far with my app. Basically I will provide it with my swift code for each file and then explain to it what my goal is and then go from there. I currently...
2025-05-19T14:48:40
https://www.reddit.com/r/LocalLLaMA/comments/1kqdl9o/creating_a_learning_coding_assistant/
xkrist0pherx
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqdl9o
false
null
t3_1kqdl9o
/r/LocalLLaMA/comments/1kqdl9o/creating_a_learning_coding_assistant/
false
false
self
0
null
Late-Night Study Lifesaver? Testing Out Ask AI from SolutionInn
1
[removed]
2025-05-19T14:47:26
[deleted]
1970-01-01T00:00:00
0
{}
1kqdk6w
false
null
t3_1kqdk6w
/r/LocalLLaMA/comments/1kqdk6w/latenight_study_lifesaver_testing_out_ask_ai_from/
false
false
default
1
null
Best models for 24 and 32gb vram? 5 distinct tasks, using openwebui
1
Hello all, I am setting up a personal openwebui setup for friends and family. My plan is to mostly use the 3090, but give access to the 5090 when not gaming or doing other AI projects in Comfy, using a 2-server ollama setup. So the 32gb models might offer a bit more when the server is avail. But primary is running on 24gbvram 64...
2025-05-19T14:16:18
https://www.reddit.com/r/LocalLLaMA/comments/1kqcsv3/best_models_for_24_and_32gb_vram_5_distinct_tasks/
puppyjsn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqcsv3
false
null
t3_1kqcsv3
/r/LocalLLaMA/comments/1kqcsv3/best_models_for_24_and_32gb_vram_5_distinct_tasks/
false
false
self
1
null
Local OCR in mobile applications with React Native ExecuTorch
1
[removed]
2025-05-19T13:44:37
https://v.redd.it/7xh5woi5tq1f1
FinancialAd1961
v.redd.it
1970-01-01T00:00:00
0
{}
1kqc2b7
false
{'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/7xh5woi5tq1f1/DASHPlaylist.mpd?a=1750254292%2COTE1MTgzMDQ1OTY3MDhlMGQ3ZjBjZTJjMGY4OWVhMmNkNjBkNGI2OWU5ZDY2ZTNmMDY3MTg4NjcyMTY0NTQ5ZA%3D%3D&v=1&f=sd', 'duration': 101, 'fallback_url': 'https://v.redd.it/7xh5woi5tq1f1/DASH_720.mp4?source=fallback', 'h...
t3_1kqc2b7
/r/LocalLLaMA/comments/1kqc2b7/local_ocr_in_mobile_applications_with_react/
false
false
https://external-preview…4406d57bec341435
1
{'enabled': False, 'images': [{'id': 'bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/bmN3Zm1vaTV0cTFmMfi2qVvx4F7X6e5BMa1RqcYP_iw5S_JmqnBQPixowxc_.png?width=108&crop=smart&format=pjpg&auto=webp&s=4e826fa31e60d372d89bba669c8522d02db2b...
Anybody got Qwen2.5vl to work consistently?
1
I've been using it for only a few hours and I can tell it's very accurate at screen captioning, detecting UI elements and displaying their coordinates in JSON format, but it has a bad habit of going into an endless loop. I'm using the 7b model Q8 and I've only prompted it to find all the UI elements on the screen, which i...
2025-05-19T13:41:30
https://www.reddit.com/r/LocalLLaMA/comments/1kqbzr2/anybody_got_qwen25vl_to_work_consistently/
swagonflyyyy
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbzr2
false
null
t3_1kqbzr2
/r/LocalLLaMA/comments/1kqbzr2/anybody_got_qwen25vl_to_work_consistently/
false
false
self
1
null
Mini PC recommendation
1
[removed]
2025-05-19T13:35:26
https://www.reddit.com/r/LocalLLaMA/comments/1kqbuq4/mini_pc_recommendation/
RevolutionaryPick241
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbuq4
false
null
t3_1kqbuq4
/r/LocalLLaMA/comments/1kqbuq4/mini_pc_recommendation/
false
false
self
1
null
Is Parquet the best format for AI datasets now ?
0
Many datasets are shared in Parquet format, what do you think about it? (mostly talking about text datasets, but also interested in other modalities too) Last week the apache/arrow project finally released a way to modify a Parquet file locally, i.e. no need to rewrite all the data every time you need to insert/delete/edit 1...
2025-05-19T13:19:22
https://www.reddit.com/r/LocalLLaMA/comments/1kqbhvi/is_parquet_the_best_format_for_ai_datasets_now/
qlhoest
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbhvi
false
null
t3_1kqbhvi
/r/LocalLLaMA/comments/1kqbhvi/is_parquet_the_best_format_for_ai_datasets_now/
false
false
self
0
null
Any known vendor/buyer for LLM home server, but in a PC case ? Cant put a blade in my flat...
1
[removed]
2025-05-19T13:19:06
https://www.reddit.com/r/LocalLLaMA/comments/1kqbhmy/any_known_vendorbuyer_for_llm_home_server_but_in/
watzemember
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbhmy
false
null
t3_1kqbhmy
/r/LocalLLaMA/comments/1kqbhmy/any_known_vendorbuyer_for_llm_home_server_but_in/
false
false
self
1
null
Been away for two months.. what's the new hotness?
83
What's the new hotness? Saw a Qwen model? I'm usually able to run things in the 20-23B range... but if there's low end stuff, I'm interested in that as well.
2025-05-19T13:18:32
https://www.reddit.com/r/LocalLLaMA/comments/1kqbh7g/been_away_for_two_months_whats_the_new_hotness/
bigattichouse
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqbh7g
false
null
t3_1kqbh7g
/r/LocalLLaMA/comments/1kqbh7g/been_away_for_two_months_whats_the_new_hotness/
false
false
self
83
null
Anything below 7b is useless
0
I feel like as much as it is appealing to low vram gpus or lower end cpus, there is nothing useful that comes out of these models. Their reasoning is bad, and their knowledge is inevitably very limited. Despite how well they might score on some benchmarks, they are nothing more than a gimmick. What do you think?
2025-05-19T12:55:59
https://www.reddit.com/r/LocalLLaMA/comments/1kqazmm/anything_below_7b_is_useless/
GreenTreeAndBlueSky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqazmm
false
null
t3_1kqazmm
/r/LocalLLaMA/comments/1kqazmm/anything_below_7b_is_useless/
false
false
self
0
null
RTX PRO 6000 - Help me benchmark
1
[removed]
2025-05-19T12:54:04
https://www.reddit.com/r/LocalLLaMA/comments/1kqay8r/rtx_pro_6000_help_me_benchmark/
KernQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqay8r
false
null
t3_1kqay8r
/r/LocalLLaMA/comments/1kqay8r/rtx_pro_6000_help_me_benchmark/
false
false
self
1
null
Demo of Sleep-time Compute to Reduce LLM Response Latency
1
[removed]
2025-05-19T12:47:46
https://i.redd.it/dqmlrygziq1f1.png
Ok_Employee_6418
i.redd.it
1970-01-01T00:00:00
0
{}
1kqatmg
false
null
t3_1kqatmg
/r/LocalLLaMA/comments/1kqatmg/demo_of_sleeptime_compute_to_reduce_llm_response/
false
false
https://b.thumbs.redditm…uFsL4dxfLC1g.jpg
1
{'enabled': True, 'images': [{'id': 'caxgt3E8oHyg9_AQtk-k8-Rkz-9BGCT0aBGiWVJQqt4', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png?width=108&crop=smart&auto=webp&s=6ce6f0fe76aefebd675e9bff77f2440ece519e48', 'width': 108}, {'height': 128, 'url': 'https://preview.redd.it/dqmlrygziq1f1.png...
Is Intel Arc GPU with 48GB of memory going to take over for $1k?
289
At the 3:58 mark video says cost is expected to be less than $1K: [https://www.youtube.com/watch?v=Y8MWbPBP9i0](https://www.youtube.com/watch?v=Y8MWbPBP9i0) [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pr...
2025-05-19T12:43:45
https://www.reddit.com/r/LocalLLaMA/comments/1kqaqmr/is_intel_arc_gpu_with_48gb_of_memory_going_to/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqaqmr
false
null
t3_1kqaqmr
/r/LocalLLaMA/comments/1kqaqmr/is_intel_arc_gpu_with_48gb_of_memory_going_to/
false
false
self
289
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'h...
Is Intel's ARC GPU 48GB GPU going to take over?
1
At the 3:58 mark video says cost is expected to be less than $1K: [https://www.youtube.com/watch?v=Y8MWbPBP9i0](https://www.youtube.com/watch?v=Y8MWbPBP9i0) [https://videocardz.com/newz/intel-announces-arc-pro-b60-24gb-and-b50-16gb-cards-dual-b60-features-48gb-memory](https://videocardz.com/newz/intel-announces-arc-pr...
2025-05-19T12:42:12
https://www.reddit.com/r/LocalLLaMA/comments/1kqaphp/is_intels_arc_gpu_48gb_gpu_going_to_take_over/
Terminator857
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqaphp
false
null
t3_1kqaphp
/r/LocalLLaMA/comments/1kqaphp/is_intels_arc_gpu_48gb_gpu_going_to_take_over/
false
false
self
1
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'h...
llama.cpp now supports Llama 4 vision
93
Vision support is picking up speed with the recent refactoring to better support it in general. Note that there's a minor(?) [issue with Llama 4 vision](https://github.com/ggml-org/llama.cpp/pull/13282) in general, as you can see below. It's most likely with the model, not with the implementation in llama.cpp, as the i...
2025-05-19T12:22:27
https://www.reddit.com/r/LocalLLaMA/comments/1kqab4m/llamacpp_now_supports_llama_4_vision/
Chromix_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqab4m
false
null
t3_1kqab4m
/r/LocalLLaMA/comments/1kqab4m/llamacpp_now_supports_llama_4_vision/
false
false
https://b.thumbs.redditm…9VkdleNOJcJw.jpg
93
{'enabled': False, 'images': [{'id': 'xCocCp_GtOpQABZSmSEYgMQkRf9mUiqrXVi8rbnByzw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B8o6PBUKxWoyfTLHowtPtTQrUM4omNNyOv5t_-1MIqk.jpg?width=108&crop=smart&auto=webp&s=622517cfa0fdcee698976b99a00dd71571acbd46', 'width': 108}, {'height': 108, 'url': 'h...
Intel Arc B60 DUAL-GPU 48GB Video Card Tear-Down | MAXSUN Arc Pro B60 Dual
121
[Gamers Nexus](https://www.youtube.com/@GamersNexus)
2025-05-19T12:18:09
https://www.youtube.com/watch?v=Y8MWbPBP9i0
Optifnolinalgebdirec
youtube.com
1970-01-01T00:00:00
0
{}
1kqa7vx
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Y8MWbPBP9i0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco...
t3_1kqa7vx
/r/LocalLLaMA/comments/1kqa7vx/intel_arc_b60_dualgpu_48gb_video_card_teardown/
false
false
https://b.thumbs.redditm…b6_a7gW9Ue6M.jpg
121
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'h...
KTransformers v0.3.1 now supports Intel Arc GPUs (A770 + new B-series): 7 tps DeepSeek R1 decode speed for a single CPU + a single A770
80
As shared in [this post](https://www.reddit.com/r/LocalLLaMA/comments/1kq9294/intel_launches_299_arc_pro_b50_with_16gb_of/), Intel just dropped their new Arc Pro B-series GPUs today. Thanks to early collaboration with Intel, KTransformers v0.3.1 is out now with Day 0 support for these new cards — including the previou...
2025-05-19T12:16:17
https://www.reddit.com/r/LocalLLaMA/comments/1kqa6l0/ktransformers_v031_now_supports_intel_arc_gpus/
CombinationNo780
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kqa6l0
false
null
t3_1kqa6l0
/r/LocalLLaMA/comments/1kqa6l0/ktransformers_v031_now_supports_intel_arc_gpus/
false
false
self
80
null
Intel Dual B60 with 48GB VRAM - Sub $1000
1
[removed]
2025-05-19T12:03:26
https://youtu.be/Y8MWbPBP9i0?feature=shared
ImpossibleHabit615
youtu.be
1970-01-01T00:00:00
0
{}
1kq9xvv
false
{'oembed': {'author_name': 'Gamers Nexus', 'author_url': 'https://www.youtube.com/@GamersNexus', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/Y8MWbPBP9i0?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyrosco...
t3_1kq9xvv
/r/LocalLLaMA/comments/1kq9xvv/intel_dual_b60_with_48gb_vram_sub_1000/
false
false
https://b.thumbs.redditm…DLT5LyVIyUHw.jpg
1
{'enabled': False, 'images': [{'id': 'QkBkzo69L9FPJFFvgY_pKvU07Uk9bCROnv0X9tm1uGk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/7FOJ3auvDt89Hg_vMibknvVEFUx6iCFTiqBX1eYmFSA.jpg?width=108&crop=smart&auto=webp&s=8072bed96cc1790b00d5b4a11f0ab5aa362c538b', 'width': 108}, {'height': 162, 'url': 'h...
How can I integrate a pretrained LLM (like LLaMA, Qwen) into a Speech-to-Text (ASR) pipeline?
4
Hey everyone, I'm exploring the idea of building a Speech-to-Text system that leverages the capabilities of pretrained language models like LLaMA or Qwen—not just as a traditional language model for rescoring but potentially as a more integral part of the transcription process. Has anyone here tried something like t...
2025-05-19T12:02:03
https://www.reddit.com/r/LocalLLaMA/comments/1kq9wtz/how_can_i_integrate_a_pretrained_llm_like_llama/
Extra-Designer9333
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq9wtz
false
null
t3_1kq9wtz
/r/LocalLLaMA/comments/1kq9wtz/how_can_i_integrate_a_pretrained_llm_like_llama/
false
false
self
4
null
Intel Announces Arc Pro B-Series, "Project Battlematrix" Linux Software Improvements
62
2025-05-19T11:46:51
https://www.phoronix.com/review/intel-arc-pro-b-series
reps_up
phoronix.com
1970-01-01T00:00:00
0
{}
1kq9mfl
false
null
t3_1kq9mfl
/r/LocalLLaMA/comments/1kq9mfl/intel_announces_arc_pro_bseries_project/
false
false
https://a.thumbs.redditm…E_3lB-U8ykG4.jpg
62
{'enabled': False, 'images': [{'id': '9gsOsUk7wZGtWiZOETwIxtZ9lVGZIRlbcb6FJRN_uYo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GNAZ-cVMVgweIup1G1242zhFgkq9dtWIloVxhHHhjYo.jpg?width=108&crop=smart&auto=webp&s=41b235ed8fe992345920e400f9dd8f5b5ced709a', 'width': 108}, {'height': 121, 'url': 'h...
What is the smoothest speech interface to run locally?
7
M3 Mac, running Gemma 12B in LMStudio. Is low-latency natural speech possible? Or am I better off just using voice input transcription?
2025-05-19T11:38:49
https://www.reddit.com/r/LocalLLaMA/comments/1kq9h8x/what_is_the_smoothest_speech_interface_to_run/
winkler1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1kq9h8x
false
null
t3_1kq9h8x
/r/LocalLLaMA/comments/1kq9h8x/what_is_the_smoothest_speech_interface_to_run/
false
false
self
7
null