| title (string) | score (int64) | selftext (string) | created (timestamp) | url (string) | author (string) | domain (string) | edited (timestamp) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
How does gemma3:4b-it-qat fare against OpenAI models on MMLU-Pro benchmark? Try for yourself in Excel | 28 | I made an Excel add-in that lets you run a prompt on thousands of rows of tasks. Might be useful for some of you to quickly benchmark new models when they come out. In the video I ran gemma3:4b-it-qat, gpt-4.1-mini, and o4-mini on a (admittedly tiny) subset of the MMLU Pro benchmark. I think I understand now why OpenAI... | 2025-06-04T17:37:22 | https://v.redd.it/ye3ahlk05y4f1 | Kapperfar | /r/LocalLLaMA/comments/1l3btj3/how_does_gemma34bitqat_fare_against_openai_models/ | 1970-01-01T00:00:00 | 0 | {} | 1l3btj3 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/ye3ahlk05y4f1/DASHPlaylist.mpd?a=1751780246%2CZjczZDVmNzIyMzg0ZWQwMDA1M2MwN2YyZDU4YTVjZDA5MWJhZGQ2Nzg2YWFkOTUwOWVmMjMyYTE1YjI0ZWFiMA%3D%3D&v=1&f=sd', 'duration': 121, 'fallback_url': 'https://v.redd.it/ye3ahlk05y4f1/DASH_1080.mp4?source=fallback', '... | t3_1l3btj3 | /r/LocalLLaMA/comments/1l3btj3/how_does_gemma34bitqat_fare_against_openai_models/ | false | false | 28 | {'enabled': False, 'images': [{'id': 'YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/YWJhMzhuazA1eTRmMe_FxokcQpBsD-M8wgi3hLI8PHPTr7rVnpmgkef-dH9o.png?width=108&crop=smart&format=pjpg&auto=webp&s=425ccf9e97c37d435a06bbb596c41e067952e... | |
Digitizing 30 Stacks of Uni Documents & Feeding into a Local LLM | 5 | Hey everyone,
I’m embarking on a pretty ambitious project and could really use some advice. I have about 30 stacks of university notes – each stack is roughly 200 pages – that I want to digitize and then feed into a LLM for analysis. Basically, I'd love to be able to ask the LLM questions about my notes and get intell... | 2025-06-04T17:31:45 | https://www.reddit.com/r/LocalLLaMA/comments/1l3boea/digitizing_30_stacks_of_uni_dokuments_feeding/ | SpitePractical8460 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l3boea | false | null | t3_1l3boea | /r/LocalLLaMA/comments/1l3boea/digitizing_30_stacks_of_uni_dokuments_feeding/ | false | false | self | 5 | null |
Anthropic shutting out Windsurf - This is why I'm so big on local and open source | 1 | [removed] | 2025-06-04T16:27:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l3a0iy/anthropic_shutting_out_windsurf_this_is_why_im_so/ | davidtwaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l3a0iy | false | null | t3_1l3a0iy | /r/LocalLLaMA/comments/1l3a0iy/anthropic_shutting_out_windsurf_this_is_why_im_so/ | false | false | self | 1 | null |
Anthropic Shutting out Windsurf -- This is why I'm so big on local and open source | 1 | [removed] | 2025-06-04T16:25:31 | https://www.reddit.com/r/LocalLLaMA/comments/1l39yea/anthropic_shutting_out_windsurf_this_is_why_im_so/ | davidtwaring | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l39yea | false | null | t3_1l39yea | /r/LocalLLaMA/comments/1l39yea/anthropic_shutting_out_windsurf_this_is_why_im_so/ | false | false | self | 1 | null |
Launch of a modular AI core (lancement d'un noyau ia modulaire) | 1 | [removed] | 2025-06-04T16:25:14 | https://www.reddit.com/r/LocalLLaMA/comments/1l39y4y/lancement_dun_noyau_ia_modulaire/ | diama_ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l39y4y | false | null | t3_1l39y4y | /r/LocalLLaMA/comments/1l39y4y/lancement_dun_noyau_ia_modulaire/ | false | false | 1 | null | |
Is there any open source project leveraging genAI to run quality checks on tabular data ? | 1 | Hey guys, most of the work in the ML/data science/BI still relies on tabular data. Everybody who has worked on that knows data quality is where most of the work goes, and that’s super frustrating.
I used to use great expectations to run quality checks on dataframes, but that’s based on hard coded rules (you declare t... | 2025-06-04T16:12:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l39mc2/is_there_any_open_source_project_leveraging_genai/ | Jazzlike_Tooth929 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l39mc2 | false | null | t3_1l39mc2 | /r/LocalLLaMA/comments/1l39mc2/is_there_any_open_source_project_leveraging_genai/ | false | false | self | 1 | null |
Drummer's Cydonia 24B v3 - A Mistral 24B 2503 finetune! | 127 | Survey Time: I'm working on Skyfall v3 but need opinions on the upscale size. 31B sounds comfy for a 24GB setup? Do you have an upper/lower bound in mind for that range? | 2025-06-04T16:03:35 | https://huggingface.co/TheDrummer/Cydonia-24B-v3 | TheLocalDrummer | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l39ea3 | false | null | t3_1l39ea3 | /r/LocalLLaMA/comments/1l39ea3/drummers_cydonia_24b_v3_a_mistral_24b_2503/ | false | false | default | 127 | {'enabled': False, 'images': [{'id': 't081jAI6iYNLTzx3riAMviTtpLcwyeFOx6ZQvPV3hRI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/v0smBaFAfIOYhWsjTXmZvmibfthD29DfOmGvXCsBLOk.jpg?width=108&crop=smart&auto=webp&s=24a01201ee1428f9838f07875e4b98e8a59afa19', 'width': 108}, {'height': 116, 'url': 'h... |
Real-time knowledge graph with Kuzu and CocoIndex, high performance open source stack end to end - GraphRAG | 1 | [removed] | 2025-06-04T15:56:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l397ky/realtime_knowledge_graph_with_kuzu_and_cocoindex/ | Whole-Assignment6240 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l397ky | false | null | t3_1l397ky | /r/LocalLLaMA/comments/1l397ky/realtime_knowledge_graph_with_kuzu_and_cocoindex/ | false | false | self | 1 | null |
Has anyone successfully built a coding assistant using local llama? | 36 | Something that's like Copilot, Kilocode, etc.
What model are you using? What pc specs do you have? How is the performance?
Lastly, is this even possible? | 2025-06-04T15:49:06 | https://www.reddit.com/r/LocalLLaMA/comments/1l390xb/has_anyone_successfully_built_a_coding_assistant/ | rushblyatiful | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l390xb | false | null | t3_1l390xb | /r/LocalLLaMA/comments/1l390xb/has_anyone_successfully_built_a_coding_assistant/ | false | false | self | 36 | null |
Help Choosing the Best LLM Inference Stack for Local Deployment (8x RTX 6000 Blackwell) | 1 | [removed] | 2025-06-04T15:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/1l38tfw/help_choosing_the_best_llm_inference_stack_for/ | Fresh_Month_2594 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l38tfw | false | null | t3_1l38tfw | /r/LocalLLaMA/comments/1l38tfw/help_choosing_the_best_llm_inference_stack_for/ | false | false | self | 1 | null |
Best model for research in PyTorch | 2 | Hello, I'm looking for a model good in PyTorch that could help me for my research project. Any ideas? | 2025-06-04T15:16:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l387hu/best_model_for_research_in_pytorch/ | Soft-Salamander7514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l387hu | false | null | t3_1l387hu | /r/LocalLLaMA/comments/1l387hu/best_model_for_research_in_pytorch/ | false | false | self | 2 | null |
Recommendations for model setup on single H200 | 0 | I have been using a server with a single A100 GPU, and now I have an upgrade to a server which has a single H200 (141GB VRAM). Currently I have been using a Mistral-Small-3.1-24B version and serving it behind a vLLM instance.
My use case is typically instruction based wherein mostly the server is churning user define... | 2025-06-04T14:40:15 | https://www.reddit.com/r/LocalLLaMA/comments/1l379ix/recommendations_for_model_setup_on_single_h200/ | OpportunityProper252 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l379ix | false | null | t3_1l379ix | /r/LocalLLaMA/comments/1l379ix/recommendations_for_model_setup_on_single_h200/ | false | false | self | 0 | null |
Simple News Broadcast Generator Script using local LLM as "editor" EdgeTTS as narrator, using a list of RSS feeds you can curate yourself | 32 |
In this repo I built a simple python script which scrapes RSS feeds and generates a news broadcast mp3 narrated by a realistic voice, using Ollama, so local LLM, to generate the summaries and final composed broadcast.
You can specify whichever news sources you want in the feeds.yaml file, as well as the number ... | 2025-06-04T14:20:15 | https://github.com/kliewerdaniel/News02 | KonradFreeman | github.com | 1970-01-01T00:00:00 | 0 | {} | 1l36s62 | false | null | t3_1l36s62 | /r/LocalLLaMA/comments/1l36s62/simple_news_broadcast_generator_script_using/ | false | false | default | 32 | {'enabled': False, 'images': [{'id': 'fjxqU7FwvzpZ5aD4dKhEDL4Mh84C2kD-LdIr5egsvAE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hcM8-O1ltv8-loyy7_68UOpQyrqYIJ-2Wv8L99rgmA4.jpg?width=108&crop=smart&auto=webp&s=d80318cfb026081ad9bd96d0a03d72bb81c3f579', 'width': 108}, {'height': 108, 'url': 'h... |
Suggestions for a good model for generating Drupal module code? | 0 | I've tried the opencoder and Deepseek models, as well as llama, gemma and a few others, but they really tend not to generate sensible results even with the temperature lowered. Does anyone have any tips on which model(s) might be best suited for generating Drupal code?
Thanks!! | 2025-06-04T14:11:00 | https://www.reddit.com/r/LocalLLaMA/comments/1l36kbc/suggestions_for_a_good_model_for_generating/ | tastybeer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l36kbc | false | null | t3_1l36kbc | /r/LocalLLaMA/comments/1l36kbc/suggestions_for_a_good_model_for_generating/ | false | false | self | 0 | null |
Is DevStral actually usable for C programming? Occasionally getting segmentation faults... | 1 | [removed] | 2025-06-04T13:57:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l368co/is_devstral_actually_usable_for_c_programming/ | ParticularContest201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l368co | false | null | t3_1l368co | /r/LocalLLaMA/comments/1l368co/is_devstral_actually_usable_for_c_programming/ | false | false | self | 1 | null |
Improving DeepSeek-R1-0528 Inference Speed | 1 | [removed] | 2025-06-04T13:53:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l365fa/improving_deepseekr10528_inference_speed/ | prepytixel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l365fa | false | null | t3_1l365fa | /r/LocalLLaMA/comments/1l365fa/improving_deepseekr10528_inference_speed/ | false | false | self | 1 | null |
SmolVLA: Efficient Vision-Language-Action Model trained on Lerobot Community Data | 1 | [removed] | 2025-06-04T13:45:19 | WoanqDil | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l35ygr | false | null | t3_1l35ygr | /r/LocalLLaMA/comments/1l35ygr/smolvla_efficient_visionlanguageaction_model/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'vtztkucqyw4f1', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=108&crop=smart&format=png8&s=81fb251db57dc279abf625ae5ee13f0b8c897e21', 'width': 108}, {'height': 95, 'url': 'https://preview.redd.it/vtztkucqyw4f1.gif?width=216&crop=smart&format=... | |
Common Corpus: The Largest Collection of Ethical Data for LLM Pre-Training | 140 | "Announcing the release of the official Common Corpus paper: a 20 page report detailing how we collected, processed and published 2 trillion tokens of reusable data for LLM pretraining."
Thread by the first author: https://x.com/Dorialexander/status/1930249894712717744
Paper: https://arxiv.org/abs/2506.01732
| 2025-06-04T13:37:13 | Initial-Image-1015 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l35rp1 | false | null | t3_1l35rp1 | /r/LocalLLaMA/comments/1l35rp1/common_corpus_the_largest_collection_of_ethical/ | false | false | 140 | {'enabled': True, 'images': [{'id': 'Eodox09e1J5kGgD9RnLimI6X9YHgeP49qkeGxNEYhrE', 'resolutions': [{'height': 117, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.jpeg?width=108&crop=smart&auto=webp&s=9551cfe4a9add655326ce00738e015a801d139c6', 'width': 108}, {'height': 235, 'url': 'https://preview.redd.it/l1wcpiqhyw4f1.j... | ||
Is DevStral actually usable for C programming? Occasionally getting segmentation faults... | 1 | [removed] | 2025-06-04T13:34:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l35pjo/is_devstral_actually_usable_for_c_programming/ | ParticularContest201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l35pjo | false | null | t3_1l35pjo | /r/LocalLLaMA/comments/1l35pjo/is_devstral_actually_usable_for_c_programming/ | false | false | self | 1 | null |
RTX 5060 Ti 16GB vs 5070 Ti 16GB for AI workloads (LLMs, fine-tuning, etc.) | 1 | [removed] | 2025-06-04T13:26:54 | VIrgin_COde | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l35jh8 | false | null | t3_1l35jh8 | /r/LocalLLaMA/comments/1l35jh8/rtx_5060_ti_16gb_vs_5070_ti_16gb_for_ai_workloads/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'iik0idchww4f1', 'resolutions': [{'height': 74, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=108&crop=smart&auto=webp&s=62fac6ee3eb824af9f339556e35bfa258baf03be', 'width': 108}, {'height': 149, 'url': 'https://preview.redd.it/iik0idchww4f1.jpeg?width=216&crop=smart&auto=w... | |
KV Cache in nanoVLM | 25 | I thought I had a fair amount of understanding of KV Cache before implementing it from scratch. I would like to dedicate this blog post to all of those who are really curious about KV Cache, think they know enough about the idea, but would love to implement it someday.
We discover a lot of things while working t... | 2025-06-04T13:23:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l35h5g/kv_cache_in_nanovlm/ | Disastrous-Work-1632 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l35h5g | false | null | t3_1l35h5g | /r/LocalLLaMA/comments/1l35h5g/kv_cache_in_nanovlm/ | false | false | 25 | {'enabled': False, 'images': [{'id': 'vknG7DZMBYx0_fJ40PKSKNLzuhkrd4QfuZqTU-ujEyM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fqfBX7C3jmtI84itPZTI_T_664vZv239TD6hn9XTKfY.jpg?width=108&crop=smart&auto=webp&s=5a87716d667ffbac2c68a865fbfead1af7713350', 'width': 108}, {'height': 108, 'url': 'h... | |
How to access my LLM remotely | 0 | I have Ollama and docker running Open Web-UI setup and working well on the LAN. How can I open port 3000 to access the LLM from anywhere? I have a static IP but when I try to port forward it doesn't respond. | 2025-06-04T13:18:43 | https://www.reddit.com/r/LocalLLaMA/comments/1l35d0c/how_to_access_my_llm_remotely/ | bones10145 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l35d0c | false | null | t3_1l35d0c | /r/LocalLLaMA/comments/1l35d0c/how_to_access_my_llm_remotely/ | false | false | self | 0 | null |
AMA – I’ve built 7 commercial RAG projects. Got tired of copy-pasting boilerplate, so we open-sourced our internal stack. | 623 | Hey folks,
I’m a senior tech lead with 8+ years of experience, and for the last \~3 I’ve been knee-deep in building LLM-powered systems — RAG pipelines, agentic apps, text2SQL engines. We’ve shipped real products in manufacturing, sports analytics, NGOs, legal… you name it.
After doing this *again and again*, I got t... | 2025-06-04T13:06:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l352wk/ama_ive_built_7_commercial_rag_projects_got_tired/ | Loud_Picture_1877 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l352wk | false | null | t3_1l352wk | /r/LocalLLaMA/comments/1l352wk/ama_ive_built_7_commercial_rag_projects_got_tired/ | false | false | self | 623 | {'enabled': False, 'images': [{'id': 'DNbUBM7ed4V49RRULrIFUyofnrMt6h4cI90ApWQTQDg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MDZAltpl6XhYmd4yidNS3D5sMjU_8_9h9cIl7CUWTRY.jpg?width=108&crop=smart&auto=webp&s=c4df8d942ed3660c38a8f93dc941c58ecee2f46f', 'width': 108}, {'height': 108, 'url': 'h... |
looking for a free good image to video ai service | 0 | I’m looking for a good free image to video ai that lets me generate around 8 eight second videos a day on a free plan without blocking 60 to 70 percent of my prompts.
i tried a couple of sites with the prompt “girl slowly does a 360 turn” and both blocked it.
does anyone know any sites or tools maybe even [**domoai**... | 2025-06-04T12:33:42 | https://www.reddit.com/r/LocalLLaMA/comments/1l34dqu/looking_for_a_free_good_image_to_video_ai_service/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l34dqu | false | null | t3_1l34dqu | /r/LocalLLaMA/comments/1l34dqu/looking_for_a_free_good_image_to_video_ai_service/ | false | false | self | 0 | null |
Help me use AI for my game - specific case | 8 |
Hi, hope this is the right place to ask.
I created a game to play myself in C# and C++ - its one of those hidden object games.
As I made it for myself I used assets from another game from a different genre. The studio that developed that game has since closed down in 2016, but I don't know who owns the copyright n... | 2025-06-04T12:01:57 | https://www.reddit.com/r/LocalLLaMA/comments/1l33r5h/help_me_use_ai_for_my_game_specific_case/ | Salamander500 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l33r5h | false | null | t3_1l33r5h | /r/LocalLLaMA/comments/1l33r5h/help_me_use_ai_for_my_game_specific_case/ | false | false | self | 8 | null |
Self aware models? | 1 | [removed] | 2025-06-04T11:51:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l33k2n/self_aware_models/ | Gadrakmtg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l33k2n | false | null | t3_1l33k2n | /r/LocalLLaMA/comments/1l33k2n/self_aware_models/ | false | false | self | 1 | null |
Most recently updated knowledge base / training data | 1 | Which LLM models, regardless of size, have the most up-to-date knowledge base?
| 2025-06-04T11:43:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l33enp/most_recently_updated_knowledge_base_training_data/ | EasyConference4177 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l33enp | false | null | t3_1l33enp | /r/LocalLLaMA/comments/1l33enp/most_recently_updated_knowledge_base_training_data/ | false | false | self | 1 | null |
Best model for data extraction from scanned documents | 11 | I'm building my little ocr tool to extract data from pdfs, mostly bank receipt, id cards, and stuff like that.
I experimented with few models (running on ollama locally), and I found that gemma3:12b was the best choice I could get.
I'm running on a 4070 laptop with 8Gb, but I have a desktop with a 5080 if the model... | 2025-06-04T11:39:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l33bph/best_model_for_data_extraction_from_scanned/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l33bph | false | null | t3_1l33bph | /r/LocalLLaMA/comments/1l33bph/best_model_for_data_extraction_from_scanned/ | false | false | self | 11 | null |
Would like to run AI on my old laptop | 1 | [removed] | 2025-06-04T11:25:29 | https://www.reddit.com/r/LocalLLaMA/comments/1l332oa/would_like_to_run_ai_on_my_old_laptop/ | SaasMinded | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l332oa | false | null | t3_1l332oa | /r/LocalLLaMA/comments/1l332oa/would_like_to_run_ai_on_my_old_laptop/ | false | false | 1 | null | |
The godawful Limitless Pendant reviewed… | 1 | 2025-06-04T11:21:28 | https://www.damianreilly.co.uk/p/review-the-godawful-limitless-pendant | myrtlehinchwater | damianreilly.co.uk | 1970-01-01T00:00:00 | 0 | {} | 1l33043 | false | null | t3_1l33043 | /r/LocalLLaMA/comments/1l33043/the_godawful_limitless_pendant_reviewed/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'TwRHafzxrhfsJkhI2MqiSfz6nbh04f3ARvp9BRu2Ji4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GsAlco5cHQ9EkxToe9ZoJ5sB0pU-SR6dip3m9hLAeuE.jpg?width=108&crop=smart&auto=webp&s=3e502827f453a16bd5dcc6a637a1ed58b73705a4', 'width': 108}, {'height': 108, 'url': 'h... | ||
Practical Steps to Fine-Tune a Small LLM (e.g., Mistral) on a Laptop? | 1 | [removed] | 2025-06-04T11:13:47 | https://www.reddit.com/r/LocalLLaMA/comments/1l32v77/practical_steps_to_finetune_a_small_llm_eg/ | Popular_Student_2822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l32v77 | false | null | t3_1l32v77 | /r/LocalLLaMA/comments/1l32v77/practical_steps_to_finetune_a_small_llm_eg/ | false | false | self | 1 | null |
Can I Train an LLM on a Normal Laptop? (Need Advice!) | 1 | [removed] | 2025-06-04T11:11:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l32tkd/can_i_train_an_llm_on_a_normal_laptop_need_advice/ | Popular_Student_2822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l32tkd | false | null | t3_1l32tkd | /r/LocalLLaMA/comments/1l32tkd/can_i_train_an_llm_on_a_normal_laptop_need_advice/ | false | false | self | 1 | null |
Is this Server PC a horrible mistake for these purposes? | 1 | [removed] | 2025-06-04T10:48:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l32fi0/is_this_server_pc_a_horrible_mistake_for_these/ | Humble_Stuff5531 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l32fi0 | false | null | t3_1l32fi0 | /r/LocalLLaMA/comments/1l32fi0/is_this_server_pc_a_horrible_mistake_for_these/ | false | false | self | 1 | null |
Shisa V2 405B: The strongest model ever built in Japan! (JA/EN) | 310 | Hey everyone, so we've released the latest member of our [Shisa V2](https://www.reddit.com/r/LocalLLaMA/comments/1jz2lll/shisa_v2_a_family_of_new_jaen_bilingual_models/) family of open bilingual (Japanes/English) models: [Shisa V2 405B](https://shisa.ai/posts/shisa-v2-405b/)!
* Llama 3.1 405B Fine Tune, inherits the L... | 2025-06-04T09:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/ | randomfoo2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l318di | false | null | t3_1l318di | /r/LocalLLaMA/comments/1l318di/shisa_v2_405b_the_strongest_model_ever_built_in/ | false | false | self | 310 | null |
Progress update — current extraction status + next step for dataset formatting | 0 |
I’ve currently extracted only {{char}}’s dialogue — without {{user}} responses — from the visual novel.
Right now, I haven’t fully separated SFW from NSFW yet.
There are two files:
One with mixed SFW + NSFW
One with NSFW-only content
I’m wondering now:
Should I also extract SFW-only into its own file?
Once extra... | 2025-06-04T09:10:33 | Akowmako | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l30wtf | false | null | t3_1l30wtf | /r/LocalLLaMA/comments/1l30wtf/progress_update_current_extraction_status_next/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'bqztjXF0wlfVnn_sVTsZfkYM4PXHu8QJj5zxVlJFDP4', 'resolutions': [{'height': 36, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?width=108&crop=smart&auto=webp&s=810a025b524b32dc257771e91c59289fa7be309f', 'width': 108}, {'height': 72, 'url': 'https://preview.redd.it/cw2ynsrvmv4f1.png?... | ||
Should I buy this laptop? | 0 | Hey everyone,
I came across a used Dell XPS 13 9340 with 32gb RAM and a 1TB SSD, running on the Meteor Lake chip. The seller is asking 650 euro for it.
Just looking for some advice. I currently have a MacBook M2 Max with 32gb, which I like, but the privacy concerns and limited flexibility with Linux are pushing me to ... | 2025-06-04T07:34:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l2zjet/should_i_buy_this_laptop/ | Optimal_League_1419 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2zjet | false | null | t3_1l2zjet | /r/LocalLLaMA/comments/1l2zjet/should_i_buy_this_laptop/ | false | false | self | 0 | null |
Colab of xtts2 Coqui? Tried what's available on Google but not working | 0 | [https://huggingface.co/spaces/coqui/xtts](https://huggingface.co/spaces/coqui/xtts)
Want whats working here but for longer context.
thank you. | 2025-06-04T07:25:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l2zeql/colab_of_xtts2_conqui_tried_available_on_google/ | jadhavsaurabh | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2zeql | false | null | t3_1l2zeql | /r/LocalLLaMA/comments/1l2zeql/colab_of_xtts2_conqui_tried_available_on_google/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qEg0nV4qLjF_R339rWG2nm0ZKDxL3ktS8y5QFNx3XaI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/cI6tjfI7nprfyU6AtFvYsbkvkXhd-uhgYF0bPonkRU4.jpg?width=108&crop=smart&auto=webp&s=14485689b80d197ea46135f5f612fca2c19a8443', 'width': 108}, {'height': 116, 'url': 'h... |
Why doesn't Llama4:16x17b run well on a host with enough ram to run 32b dense models? | 0 | I have M1 Max with 32GB ram. It runs 32b models very well (13-16 tokens/s). I thought I could run a large MoE like llama4:16x17b, because if only 17b parameters are active + some shared layers, it will easily fit in my ram and the other mempages can sleep in swap space. But no.
$ ollama ps
NAME ID... | 2025-06-04T06:45:51 | https://www.reddit.com/r/LocalLLaMA/comments/1l2yssk/why_doesnt_llama416x17b_run_well_on_a_host_with/ | umataro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2yssk | false | null | t3_1l2yssk | /r/LocalLLaMA/comments/1l2yssk/why_doesnt_llama416x17b_run_well_on_a_host_with/ | false | false | self | 0 | null |
Tried 10 models, all seem to refuse to write a 10,000 word story. Is there something bad with my prompt? I'm just doing some testing to learn and I can't figure out how to get the LLM to do as I say. | 55 | 2025-06-04T06:36:26 | StartupTim | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 1l2ynsc | false | null | t3_1l2ynsc | /r/LocalLLaMA/comments/1l2ynsc/tried_10_models_all_seem_to_refuse_to_write_a/ | false | false | 55 | {'enabled': True, 'images': [{'id': 'wQB-UVcq4f39CkUKxnJvTeA2uOvjdGGU560OImk8Nnk', 'resolutions': [{'height': 32, 'url': 'https://external-preview.redd.it/rlOy3w1CoH2JdExOlsJT5MCZp4fqssksLcusxXxosAg.png?width=108&crop=smart&auto=webp&s=ae525cb0dfa6fdc778bcdc65ff48475c663de14a', 'width': 108}, {'height': 65, 'url': 'htt... | |||
Suitable LLM+prompt for extracting data points from an image of graphs/charts | 1 | [removed] | 2025-06-04T05:58:13 | EmeraldThug | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2y2bb | false | null | t3_1l2y2bb | /r/LocalLLaMA/comments/1l2y2bb/suitable_llmprompt_for_extracting_data_points/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'ojlIhXXoxjnefWadiG6oo4dn2nHZ999Ah0du_w8F4iA', 'resolutions': [{'height': 72, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png?width=108&crop=smart&auto=webp&s=7783c890465fe3d02ea0e4f9597c178da71ac830', 'width': 108}, {'height': 144, 'url': 'https://preview.redd.it/yxdcc5lwmu4f1.png... | ||
nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1 · Hugging Face | 77 | 2025-06-04T05:34:44 | https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-VL-8B-V1 | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1l2xpf5 | false | null | t3_1l2xpf5 | /r/LocalLLaMA/comments/1l2xpf5/nvidiallama31nemotronnanovl8bv1_hugging_face/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'jDcrQduqX5g-ycbyFiwL5ysLV6x6-8E3fp6HGIL8u7Q', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/3iMGSgahr2EGqEFlPNVdelZ_zKKfIXKVuELuwxGt7R4.jpg?width=108&crop=smart&auto=webp&s=01e6c02b48eac1d78a94fb44f556438e8bd923b3', 'width': 108}, {'height': 116, 'url': 'h... | ||
Locally loading the pretrained weights of Qwen2.5-0.5B | 1 | [removed] | 2025-06-04T05:33:09 | https://www.reddit.com/r/LocalLLaMA/comments/1l2xoip/locally_loading_the_pretrained_weights_of/ | hendy0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2xoip | false | null | t3_1l2xoip | /r/LocalLLaMA/comments/1l2xoip/locally_loading_the_pretrained_weights_of/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=108&crop=smart&auto=webp&s=60f1fd153b0a403486324ab9ab487f26fcccc124', 'width': 108}, {'height': 109, 'url': 'h... |
Loading Qwen Pretrained Weights for Fine-tuning | 1 | [removed] | 2025-06-04T05:27:44 | https://www.reddit.com/r/LocalLLaMA/comments/1l2xldu/loading_qwen_pretrained_weights_for_finetuning/ | hendy0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2xldu | false | null | t3_1l2xldu | /r/LocalLLaMA/comments/1l2xldu/loading_qwen_pretrained_weights_for_finetuning/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '2XSd1VnYkyg18jnDzmhs_F6KLPWKAfk7zmciWOnKVBc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1dm7yMOq16uiS-56QQ6PfHjS1Kmx_C8RKgzcKsGQbuE.jpg?width=108&crop=smart&auto=webp&s=60f1fd153b0a403486324ab9ab487f26fcccc124', 'width': 108}, {'height': 109, 'url': 'h... |
Looking for Guidance on Local LLM Optimization | 0 | I’m interested in learning about optimization techniques for running inference on local LLMs, but there’s so much information out there that I’m not sure where to start. I’d really appreciate any suggestions or guidance on how to begin.
I’m currently using a gaming laptop with an RTX 4050 GPU. Also, do you think learn... | 2025-06-04T04:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1l2x1be/looking_for_guidance_on_local_llm_optimization/ | stinkbug_007 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2x1be | false | null | t3_1l2x1be | /r/LocalLLaMA/comments/1l2x1be/looking_for_guidance_on_local_llm_optimization/ | false | false | self | 0 | null |
Python Pandas Ditches NumPy for Speedier PyArrow | 144 | 2025-06-04T04:44:44 | https://thenewstack.io/python-pandas-ditches-numpy-for-speedier-pyarrow/ | Sporeboss | thenewstack.io | 1970-01-01T00:00:00 | 0 | {} | 1l2wvf3 | false | null | t3_1l2wvf3 | /r/LocalLLaMA/comments/1l2wvf3/python_pandas_ditches_numpy_for_speedier_pyarrow/ | false | false | 144 | {'enabled': False, 'images': [{'id': 'x63vjJO5jvX8J5A_FSdVNXOBldLGHYVzgJVrJ6TICYc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/YNruBwDE4glgMZxB7Rz7Dn9TVxQIuQU5bX9UAhvGW9I.jpg?width=108&crop=smart&auto=webp&s=3861884de0126014c55e3f76985339defbab8768', 'width': 108}, {'height': 162, 'url': 'h... | ||
Turning to LocalLLM instead of Gemini? | 6 | Hey all,
I've been using Gemini 2.5 pro as a coding assistant for a long time now. Recently Google has really neutered Gemini. Responses are less confident, often ramble and repeat the same code dozens of times. I've been testing R1 0528 8b 16fp on a 5090 and it seems to come up with decent solutions, faster than G... | 2025-06-04T04:43:20 | https://www.reddit.com/r/LocalLLaMA/comments/1l2wuk3/turning_to_localllm_instead_of_gemini/ | rymn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2wuk3 | false | null | t3_1l2wuk3 | /r/LocalLLaMA/comments/1l2wuk3/turning_to_localllm_instead_of_gemini/ | false | false | self | 6 | null |
Fully offline verbal chat bot | 75 | I wanted to get some feedback on my project at its current state. The goal is to have the program run in the background so that the LLM is always accessible with just a keybind. Right now I have it displaying a console for debugging, but it is capable of running fully in the background. This is written in Rust, and is ... | 2025-06-04T03:41:10 | https://v.redd.it/cw4rpviiyt4f1 | NonYa_exe | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2vrg2 | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/cw4rpviiyt4f1/DASHPlaylist.mpd?a=1751600487%2CM2U5MzIxNmYxOGVmNDU2NTVmYjEyOTIzNGMwNjRlZDFhMmZhMTA0NTQ1ZWMxZTZjNGJiODNlMDEzMjdiZWJkYQ%3D%3D&v=1&f=sd', 'duration': 79, 'fallback_url': 'https://v.redd.it/cw4rpviiyt4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l2vrg2 | /r/LocalLLaMA/comments/1l2vrg2/fully_offline_verbal_chat_bot/ | false | false | 75 | {'enabled': False, 'images': [{'id': 'dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dWY5OGJ3aWl5dDRmMXZQOk9qxWLlo00dzciqndO7qSY-J3_oYBSBLg5Z6rT9.png?width=108&crop=smart&format=pjpg&auto=webp&s=f5d6d8654b0dc55df3f2d8852d0d9c124d07f... | |
Meta AI is really good for removing objects, texts, watermarks, etc from images in just a few clicks. | 1 | 2025-06-04T03:40:29 | https://www.reddit.com/gallery/1l2vqyy | Obvious_King2150 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1l2vqyy | false | null | t3_1l2vqyy | /r/LocalLLaMA/comments/1l2vqyy/meta_ai_is_really_good_for_removing_objects_texts/ | false | false | 1 | null | ||
Recommended courses/certifications to become an AI integration professional as a software engineer? | 1 | [removed] | 2025-06-04T03:28:13 | https://www.reddit.com/r/LocalLLaMA/comments/1l2vj2m/recommended_coursescertifications_to_become_an_ai/ | Ill_Yam_9994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2vj2m | false | null | t3_1l2vj2m | /r/LocalLLaMA/comments/1l2vj2m/recommended_coursescertifications_to_become_an_ai/ | false | false | self | 1 | null |
Simulated Transcendence: Exploring the Psychological Effects of Prolonged LLM Interaction | 0 | I've been researching a phenomenon I'm calling **Simulated Transcendence (ST)**—a pattern where extended interactions with large language models (LLMs) give users a sense of profound insight or personal growth, which may not be grounded in actual understanding.
**Key Mechanisms Identified:**
* **Semantic Drift:** Ove... | 2025-06-04T03:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/1l2vhbu/simulated_transcendence_exploring_the/ | AirplaneHat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2vhbu | false | null | t3_1l2vhbu | /r/LocalLLaMA/comments/1l2vhbu/simulated_transcendence_exploring_the/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'qNGDBk-zGbl3NXt1amgYXXBbIZktw2XXX27lvHHk2Fo', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/AvD4RRtVLH4fdKu2icc49F4tjVsPu9l-Q330AwGDuws.jpg?width=108&crop=smart&auto=webp&s=2e019fb2ae04e664a6868b912c66f53397890996', 'width': 108}, {'height': 113, 'url': 'h... |
Connecting to an LM Studio server using an Android client? | 1 | Does anyone have a solution for this, or is ollama my best bet for being able to remotely host a server. I have tailscale on all devices, so can definitely use that. I looked into ChatterUI but it doesnt seem to be compatible with LM Studio, unless I am missing anything. Thoughts? | 2025-06-04T03:07:21 | https://www.reddit.com/r/LocalLLaMA/comments/1l2v58k/connecting_to_an_lm_studio_server_using_an/ | nat2r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2v58k | false | null | t3_1l2v58k | /r/LocalLLaMA/comments/1l2v58k/connecting_to_an_lm_studio_server_using_an/ | false | false | self | 1 | null |
Help: BSODs with any model | 1 | [removed] | 2025-06-04T02:53:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l2uvka/help_bsods_with_any_model/ | VitallyRaccoon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2uvka | false | null | t3_1l2uvka | /r/LocalLLaMA/comments/1l2uvka/help_bsods_with_any_model/ | false | false | self | 1 | null |
Using AI to review sales booking calls and improve objection handling | 1 | [removed] | 2025-06-04T02:48:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l2us3j/using_ai_to_review_sales_booking_calls_and/ | jprime4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2us3j | false | null | t3_1l2us3j | /r/LocalLLaMA/comments/1l2us3j/using_ai_to_review_sales_booking_calls_and/ | false | false | self | 1 | null |
Used DeepSeek-R1 0528 (Qwen 3 distill) to extract information from a PDF with Ollama and the results are great | 0 | I've converted the latest [Nvidia financial results](https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-first-quarter-fiscal-2026) to markdown and fed it to the model. The values extracted were all correct - something I haven't seen for <13B model. What are your impressions of the model?
- [Watc... | 2025-06-04T02:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l2umib/used_deepseekr1_0528_qwen_3_distill_to_extract/ | curiousily_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2umib | false | null | t3_1l2umib | /r/LocalLLaMA/comments/1l2umib/used_deepseekr1_0528_qwen_3_distill_to_extract/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 't2EnApoTGr54KYWonX6z7WBWt9w9XZj6oBGtwLmjIos', 'resolutions': [{'height': 71, 'url': 'https://external-preview.redd.it/Jdd2J-_htAJai6Xr-bxogk7HZd4dzL1qfmZTJYdOfL8.jpg?width=108&crop=smart&auto=webp&s=3c3963995ac274a3d73fed0fa18b0c7800daabd8', 'width': 108}, {'height': 143, 'url': 'h... |
Ecne AI Podcast Generator - Update | 22 | [main page of the new early development GUI](https://preview.redd.it/l1ttsivtlt4f1.png?width=974&format=png&auto=webp&s=5cd68053e425cae46eaf174906c23e18539a2795)
So I've been working more on one of my side projects, the [Ecne-AI-Podcaster](https://github.com/ETomberg391/Ecne-AI-Podcaster) This was to automate as much ... | 2025-06-04T02:29:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l2uf6e/ecne_ai_podcast_generator_update/ | Dundell | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2uf6e | false | null | t3_1l2uf6e | /r/LocalLLaMA/comments/1l2uf6e/ecne_ai_podcast_generator_update/ | false | false | 22 | {'enabled': False, 'images': [{'id': 'kvtnKZsDxU3WMPXcbwq_8t-ZEg2GbZGKlOMogDo3XjE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/45EfjPVsgm1MYdHAKtLDE7-W5DynLHblAveFiC_Lni4.jpg?width=108&crop=smart&auto=webp&s=57eeea0584b55d81e3a0e7aea51da7fe8b533002', 'width': 108}, {'height': 108, 'url': 'h... | |
Building an AI sales call coach trained on real objections-best stack for passive listening, grading, and coaching? | 1 | [removed] | 2025-06-04T02:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l2uc1q/building_an_ai_sales_call_coach_trained_on_real/ | Zealousideal_Top8456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2uc1q | false | null | t3_1l2uc1q | /r/LocalLLaMA/comments/1l2uc1q/building_an_ai_sales_call_coach_trained_on_real/ | false | false | self | 1 | null |
Understand Any Repo In Seconds | 0 | Hey Devs & PMs!
Imagine if you could approach any GitHub repository and:
✨ Instantly grasp its core through intelligent digests.
✨ See its structure unfold before your eyes in clear diagrams.
✨ Simply *ask* the codebase questions and get meaningful answers.
I've created [**Gitscape.ai**](http://Gitscape.ai) ([h... | 2025-06-04T02:24:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l2ubor/understand_any_repo_in_seconds/ | Purple_Huckleberry58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2ubor | false | null | t3_1l2ubor | /r/LocalLLaMA/comments/1l2ubor/understand_any_repo_in_seconds/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'lMxBo0Bl-j1avfhruNBelKZWZ42GY1AF8eqiHj1lXdw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/J7cWBiMpOXIpK7EzhpbyiDu4mjrNzuoZjktZA9lJV-M.jpg?width=108&crop=smart&auto=webp&s=5952a25c64c3feff3c5ffa73ec90e38921b64f94', 'width': 108}, {'height': 108, 'url': 'h... |
WINA by Microsoft | 1 | [removed] | 2025-06-04T01:32:25 | https://www.reddit.com/r/LocalLLaMA/comments/1l2ta4d/wina_by_microsoft/ | mas554ter365 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2ta4d | false | null | t3_1l2ta4d | /r/LocalLLaMA/comments/1l2ta4d/wina_by_microsoft/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'iIAybxrKhc8mLATwgu3MVJWB8lY8OZqDINzCmSk7SK4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_1itsub6A0MZ6zgczmc41XfXDsezUI5HcfLCBy9cBoM.jpg?width=108&crop=smart&auto=webp&s=40a77179dfc505f41aa170ba092e10cbaa75fa97', 'width': 108}, {'height': 108, 'url': 'h... |
Setting up an AI to help prepare for a high difficulty questions test | 1 | [removed] | 2025-06-04T01:24:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l2t4jm/setting_up_an_ai_to_help_prepare_for_a_high/ | FinancialMechanic853 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2t4jm | false | null | t3_1l2t4jm | /r/LocalLLaMA/comments/1l2t4jm/setting_up_an_ai_to_help_prepare_for_a_high/ | false | false | self | 1 | null |
Secure Minions: private collaboration between Ollama and frontier models | 35 | Extremely interesting developments coming out of Hazy Research. Has anyone tested this yet? | 2025-06-04T00:23:17 | https://ollama.com/blog/secureminions | MediocreBye | ollama.com | 1970-01-01T00:00:00 | 0 | {} | 1l2rwhu | false | null | t3_1l2rwhu | /r/LocalLLaMA/comments/1l2rwhu/secure_minions_private_collaboration_between/ | false | false | 35 | {'enabled': False, 'images': [{'id': 'krjt_5uhqcaDfYjfO7lkezThehav9cAIRJgcK-OKAmM', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/s0D7i4Rco0trWh9Bu1uEkgnoJJLA3UNKUA9vs57seII.jpg?width=108&crop=smart&auto=webp&s=53486800d92d75b19d59502534fa9ba2785c14b0', 'width': 108}, {'height': 113, 'url': 'h... | |
Deepseek R1 0528 8B running locally on Samsung Galaxy Tab S10 Ultra (MediaTek Dimensity 9300+) | 0 | App: MNN Chat
Settings:
Backend: opencl
Thread Number: 6
| 2025-06-04T00:15:39 | https://v.redd.it/5ysuy6l6zs4f1 | Ok_Essay3559 | /r/LocalLLaMA/comments/1l2rr3z/deepseek_r1_0528_8b_running_locally_on_samsung/ | 1970-01-01T00:00:00 | 0 | {} | 1l2rr3z | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/5ysuy6l6zs4f1/DASHPlaylist.mpd?a=1751717746%2CMGZjMmMxMTQyY2RiZDQ4MGQ5MDUwYTY4M2ViZTg0NTkyNzQyMjUwOGE2NGY0NmM4YzUwMjNmODA1ZDA5ZTYyMQ%3D%3D&v=1&f=sd', 'duration': 29, 'fallback_url': 'https://v.redd.it/5ysuy6l6zs4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l2rr3z | /r/LocalLLaMA/comments/1l2rr3z/deepseek_r1_0528_8b_running_locally_on_samsung/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/dHRoZDVxbjZ6czRmMXZZzKu1t2kRtGkjW8FFkNYcYMNT6d174HwLLtjmiNAX.png?width=108&crop=smart&format=pjpg&auto=webp&s=02e6d8acae1558993ab3d654ccc02f9540b61... | |
Is there a standard for AI-Readable context files in repositories ? | 1 | [removed] | 2025-06-04T00:15:19 | https://www.reddit.com/r/LocalLLaMA/comments/1l2rqv8/is_there_a_standard_for_aireadable_context_files/ | shijoi87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2rqv8 | false | null | t3_1l2rqv8 | /r/LocalLLaMA/comments/1l2rqv8/is_there_a_standard_for_aireadable_context_files/ | false | false | self | 1 | null |
New to local LLMs, but just launched my iOS+macOS app that runs LLMs locally | 0 | Hey everyone! I'm pretty new to the world of local LLMs, but I’ve been pretty fascinated with the idea of running an LLM on a smartphone for a while. I spent some time looking into how to do this, and ended up writing my own Swift wrapper for `llama.cpp` called [Kuzco](https://github.com/jaredcassoutt/Kuzco).
I ... | 2025-06-04T00:10:03 | https://v.redd.it/dm6oetmrxs4f1 | D1no_nugg3t | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2rn3e | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/dm6oetmrxs4f1/DASHPlaylist.mpd?a=1751587817%2CMGM0YmNhMmQ5M2RiMzk5OTYyNWUxNzUxOTA3NDI5MDAyMzllYTQ0OTc2MDcxYTgxNWI0ZjJmNTdiNTA5NTA1Mw%3D%3D&v=1&f=sd', 'duration': 24, 'fallback_url': 'https://v.redd.it/dm6oetmrxs4f1/DASH_1080.mp4?source=fallback', 'h... | t3_1l2rn3e | /r/LocalLLaMA/comments/1l2rn3e/new_to_local_llms_but_just_launched_my_iosmacos/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/Z2RoN3p2bXJ4czRmMQwACy4WNa4bY9-leubtLYxRtX-4z95GrOzzVAD13MCL.png?width=108&crop=smart&format=pjpg&auto=webp&s=d76ecc32482f19819f9302b8229e7cfbf241... | |
How my open-source extension does with a harder virtual try on outfit! | 0 | I'm open sourcing a chrome extension that lets you try on anything that you see on the internet. Feels like magic.
[click here to visit the github](https://github.com/parsakhaz/fashn-tryon-extension) | 2025-06-03T23:44:31 | https://v.redd.it/e8m2fq0cts4f1 | ParsaKhaz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2r3rt | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/e8m2fq0cts4f1/DASHPlaylist.mpd?a=1751586285%2CZTgyNWE4ZTA3YWYxNDFlMjQwMmI3ZmJkMmVhNzAwYzU0MzlkMDY3YzhlMzg3YzE0MWVjODUyMjU1YmQyMzVlZQ%3D%3D&v=1&f=sd', 'duration': 7, 'fallback_url': 'https://v.redd.it/e8m2fq0cts4f1/DASH_720.mp4?source=fallback', 'has... | t3_1l2r3rt | /r/LocalLLaMA/comments/1l2r3rt/how_my_opensource_extension_does_with_a_harder/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/a2N0Z3RxMGN0czRmMZEFLO9nCJikC7mtBpPcIQAr59c4sK2P034UkenC8j1x.png?width=108&crop=smart&format=pjpg&auto=webp&s=d2fcc31ac4c0e125dd221ae51c9bd10bdb04c... | |
Help Me Understand MOE vs Dense | 41 | It seems SOTA LLMS are moving towards MOE architectures. The smartest models in the world [seem to be using it](https://lmarena.ai/leaderboard). But why? When you use a MOE model, only a fraction of parameters are actually active. Wouldn't the model be "smarter" if you just use all parameters? Efficiency is awesome, bu... | 2025-06-03T23:33:16 | https://www.reddit.com/r/LocalLLaMA/comments/1l2qv7z/help_me_understand_moe_vs_dense/ | Express_Seesaw_8418 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2qv7z | false | null | t3_1l2qv7z | /r/LocalLLaMA/comments/1l2qv7z/help_me_understand_moe_vs_dense/ | false | false | self | 41 | null |
Building an extension that lets you try ANY clothing on with AI! Who wants me to open source it? | 0 | 2025-06-03T23:31:14 | https://v.redd.it/y0z3cehfrs4f1 | ParsaKhaz | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2qtle | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y0z3cehfrs4f1/DASHPlaylist.mpd?a=1751585488%2CODUzMjE5ZTUwYWZkNDQ5ZTM3OWE2ZmJmYmY4ODRhMTM3YzM0MzdkZmRhYWE3ZWUzNGJiNDQyOGNkNjdhMmI2OQ%3D%3D&v=1&f=sd', 'duration': 8, 'fallback_url': 'https://v.redd.it/y0z3cehfrs4f1/DASH_720.mp4?source=fallback', 'has... | t3_1l2qtle | /r/LocalLLaMA/comments/1l2qtle/building_an_extension_that_lets_you_try_any/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/OGdrcWRjaGZyczRmMRmQT4-0lQTlIfgftoYeHfX8nRIDSoRoafHzyMNvPJv5.png?width=108&crop=smart&format=pjpg&auto=webp&s=0ade8021fb51396bfa71b971ba1a763861b5... | ||
B vs Quantization | 6 | I've been reading about different configurations for my Large Language Model (LLM) and had a question. I understand that Q4 models are generally less accurate compared to other quantization settings (am I right?).
To clarify, I'm trying to decide between two configurations:
* 4B\_Q8: fewer parameters with potentiall... | 2025-06-03T23:30:53 | https://www.reddit.com/r/LocalLLaMA/comments/1l2qtbo/b_vs_quantization/ | Empty_Object_9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2qtbo | false | null | t3_1l2qtbo | /r/LocalLLaMA/comments/1l2qtbo/b_vs_quantization/ | false | false | self | 6 | null |
Are you really using open-source or local LLMs and do they help you? | 1 | [removed] | 2025-06-03T23:07:34 | https://www.reddit.com/r/LocalLLaMA/comments/1l2qar6/are_you_really_using_opensource_or_local_llms_and/ | AccidentFriendly7530 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2qar6 | false | null | t3_1l2qar6 | /r/LocalLLaMA/comments/1l2qar6/are_you_really_using_opensource_or_local_llms_and/ | false | false | self | 1 | null |
Llama 3.3 70b Vs Newer Models | 24 | On my MBP (M3 Max 16/40 64GB), the largest model I can run seems to be Llama 3.3 70b. The swathe of new models doesn't have any options with this many parameters; it's either 30b or 200b+.
My question is does Llama 3.3 70b, compete or even is it still my best option for local use, or even with the much lower amount of ... | 2025-06-03T22:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l2pl4l/llama_33_70b_vs_newer_models/ | BalaelGios | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2pl4l | false | null | t3_1l2pl4l | /r/LocalLLaMA/comments/1l2pl4l/llama_33_70b_vs_newer_models/ | false | false | self | 24 | null |
B vs Quantization | 1 | I'm looking for some advice on choosing the best configuration for my Large Language Model (LLM). I'm trying to understand the differences between two settings and would appreciate any guidance.
What's the main difference between a 4B\_Q8 and a 12B\_Q4\_0 configuration? Is one significantly better than the other, or a... | 2025-06-03T22:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l2pgks/b_vs_quantization/ | Empty_Object_9299 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2pgks | false | null | t3_1l2pgks | /r/LocalLLaMA/comments/1l2pgks/b_vs_quantization/ | false | false | self | 1 | null |
The LLM rabbit hole | 0 | Everything started as a genuine interest in trying some small LLMs at home.
It began with an old PC case housing an RTX 2080 (with 8GB of RAM!). Just to see what was possible.
My first steps were with llama.cpp before I moved on to other horizons. Then came one 3090, then two 3090s.
LLMs started being integrated int... | 2025-06-03T22:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/1l2p6gt/the_llm_rabbit_hole/ | Mobile_Tart_1016 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2p6gt | false | null | t3_1l2p6gt | /r/LocalLLaMA/comments/1l2p6gt/the_llm_rabbit_hole/ | false | false | self | 0 | null |
Extract information from resume in json format | 1 | [removed] | 2025-06-03T22:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/1l2oz7r/extract_information_from_resume_in_json_format/ | Fast_Huckleberry_894 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2oz7r | false | null | t3_1l2oz7r | /r/LocalLLaMA/comments/1l2oz7r/extract_information_from_resume_in_json_format/ | false | false | self | 1 | null |
What GUI are you using for local LLMs? (AnythingLLM, LM Studio, etc.) | 171 | I’ve been trying out AnythingLLM and LM Studio lately to run models like LLaMA and Gemma locally. Curious what others here are using.
What’s been your experience with these or other GUI tools like GPT4All, Oobabooga, PrivateGPT, etc.?
What do you like, what’s missing, and what would you recommend for someone looking ... | 2025-06-03T22:09:03 | https://www.reddit.com/r/LocalLLaMA/comments/1l2oywk/what_gui_are_you_using_for_local_llms_anythingllm/ | Aaron_MLEngineer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2oywk | false | null | t3_1l2oywk | /r/LocalLLaMA/comments/1l2oywk/what_gui_are_you_using_for_local_llms_anythingllm/ | false | false | self | 171 | null |
Awakening people to well-being... | 1 | [removed] | 2025-06-03T21:37:58 | https://www.reddit.com/r/LocalLLaMA/comments/1l2o7fo/awakening_people_to_wellbeing/ | zartte_Forever7927 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2o7fo | false | null | t3_1l2o7fo | /r/LocalLLaMA/comments/1l2o7fo/awakening_people_to_wellbeing/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'XUaYY9UvWUY2dxBrvBdLynhcdraE1n8Mbfig66gxGHo', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/pQrveUfJ0u1VocQBhr2ks-zddW0tDw7PGVGeeLyLLdo.jpg?width=108&crop=smart&auto=webp&s=5dcc3a7916439929f7a706c6ee0266c5e0f227ed', 'width': 108}, {'height': 216, 'url': '... |
How does gemma3:4b-it-qat fare against OpenAI models on the MMLU-Pro benchmark? See for yourself in Excel | 1 | [removed] | 2025-06-03T21:03:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l2nc5k/how_does_gemma34bitqat_fare_against_openai_models/ | OptimalParking | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2nc5k | false | null | t3_1l2nc5k | /r/LocalLLaMA/comments/1l2nc5k/how_does_gemma34bitqat_fare_against_openai_models/ | false | false | self | 1 | null |
live transcription | 13 | I want to use Whisper or any other model with similar accuracy for on-device inference on Android. Please suggest the one with the best latency. Please help me if I am missing out on something - ONNX, TFLite, CTranslate2
if you know anything about this category, any open source projects that can help me pull off a live transcrip... | 2025-06-03T20:16:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l2m4q9/live_transcription/ | Away_Expression_3713 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2m4q9 | false | null | t3_1l2m4q9 | /r/LocalLLaMA/comments/1l2m4q9/live_transcription/ | false | false | self | 13 | null |
[R] SocialSim’25: Social Simulations with LLMs — Call for Papers + Shared Task | 1 | [removed] | 2025-06-03T20:04:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l2lthz/r_socialsim25_social_simulations_with_llms_call/ | RSTZZZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2lthz | false | null | t3_1l2lthz | /r/LocalLLaMA/comments/1l2lthz/r_socialsim25_social_simulations_with_llms_call/ | false | false | self | 1 | null |
TTS that Synchronizes Phonemes/Text and Audio Live? / TTS + Animatronics | 1 | [removed] | 2025-06-03T19:56:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l2lm9n/tts_that_synchronizes_phonemestext_and_audio_live/ | SourceTop1470 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2lm9n | false | null | t3_1l2lm9n | /r/LocalLLaMA/comments/1l2lm9n/tts_that_synchronizes_phonemestext_and_audio_live/ | false | false | self | 1 | null |
Social Simulation with LLMs | 1 | 2025-06-03T19:56:04 | https://sites.google.com/view/social-sims-with-llms/home | RSTZZZ | sites.google.com | 1970-01-01T00:00:00 | 0 | {} | 1l2llv4 | false | null | t3_1l2llv4 | /r/LocalLLaMA/comments/1l2llv4/social_simulation_with_llms/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': '... | ||
Simulating Social Media Personas with Local LLMs | 1 | [removed] | 2025-06-03T19:55:23 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1l2ll8l | false | null | t3_1l2ll8l | /r/LocalLLaMA/comments/1l2ll8l/simulating_social_media_personas_with_local_llms/ | false | false | default | 1 | null | ||
Simulating Social Media Personas with Local LLMs | 1 | [removed] | 2025-06-03T19:54:12 | https://www.reddit.com/r/LocalLLaMA/comments/1l2lk4j/simulating_social_media_personas_with_local_llms/ | RSTZZZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2lk4j | false | null | t3_1l2lk4j | /r/LocalLLaMA/comments/1l2lk4j/simulating_social_media_personas_with_local_llms/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': '... |
Simulating Social Media Personas with LLMs — COLM 2025 + Kaggle Task | 1 | [removed] | 2025-06-03T19:50:59 | https://sites.google.com/view/social-sims-with-llms/home | RSTZZZ | sites.google.com | 1970-01-01T00:00:00 | 0 | {} | 1l2lh3q | false | null | t3_1l2lh3q | /r/LocalLLaMA/comments/1l2lh3q/simulating_social_media_personas_with_llms_colm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': '... | |
Simulating Social Media Personas with LLMs — COLM 2025 + Kaggle Task | 1 | [removed] | 2025-06-03T19:49:04 | https://sites.google.com/view/social-sims-with-llms/home | RSTZZZ | sites.google.com | 1970-01-01T00:00:00 | 0 | {} | 1l2lfa2 | false | null | t3_1l2lfa2 | /r/LocalLLaMA/comments/1l2lfa2/simulating_social_media_personas_with_llms_colm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'RBPSVg5ZXUfLuQDkRyZKykvGvxWJwrHaMyNDHrBoXtY', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/jlv0p7zmiiatfxZ95Hq4q2DAZgLpssATtPwYW2FlCP0.jpg?width=108&crop=smart&auto=webp&s=09247db6c0a148d31436ea73b9638a861f1757fc', 'width': 108}, {'height': 216, 'url': '... | |
Yoshua Bengio, Turing-award winning AI Godfather, starts a company to keep rampant AI innovation in check | 0 | [https://techcrunch.com/2025/06/03/yoshua-bengio-launches-lawzero-a-nonprofit-ai-safety-lab/](https://techcrunch.com/2025/06/03/yoshua-bengio-launches-lawzero-a-nonprofit-ai-safety-lab/) | 2025-06-03T19:48:55 | Particular_Pool8344 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2lf54 | false | null | t3_1l2lf54 | /r/LocalLLaMA/comments/1l2lf54/yoshua_bengio_turingaward_winning_ai_godfather/ | false | false | 0 | {'enabled': True, 'images': [{'id': '23eejwbe6Qe99Ly_RvNaoTWnXb74y6Ktm79t4Visd9E', 'resolutions': [{'height': 73, 'url': 'https://preview.redd.it/hfoomobunr4f1.png?width=108&crop=smart&auto=webp&s=32ef9b5e7f88f7da8094c4a9346139128486b7aa', 'width': 108}, {'height': 147, 'url': 'https://preview.redd.it/hfoomobunr4f1.png... | |
Is there any small models for home budgets | 4 | Hi, Is there any small local models I could feed my bank statements into and have it done a full budget breakdown? What would be the best way to go about this for a beginner? | 2025-06-03T19:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/1l2letx/is_there_any_small_models_for_home_budgets/ | DueRuin3912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2letx | false | null | t3_1l2letx | /r/LocalLLaMA/comments/1l2letx/is_there_any_small_models_for_home_budgets/ | false | false | self | 4 | null |
OOM for GRPO on Qwen3-32b, 8xA100 80GB | 0 | Hi everyone, I'm trying to run Qwen3-32b and am always getting OOM after loading the model checkpoints. I'm using 6xA100s for training and 2 for inference. num\_generations is down to 4, and I tried decreasing to 2 with batch size on device of 1 to debug - still getting OOM. Would love some help or any resources. | 2025-06-03T19:43:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l2la28/oom_for_grpo_on_qwen332b_8xa100_80gb/ | Classic_Eggplant8827 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2la28 | false | null | t3_1l2la28 | /r/LocalLLaMA/comments/1l2la28/oom_for_grpo_on_qwen332b_8xa100_80gb/ | false | false | self | 0 | null |
Paid LLM courses that teach practical knowledge? Free courses are good too! | 0 | My employer has given me a budget of up to around $1000 for training. I think the best way to spend this money would be learning about LLMs or AI in general. I don't want to take a course in bullshit like "AI for managers" or whatever other nonsense is trying to cash in on the LLM buzz. I also don't want to become an A... | 2025-06-03T19:27:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l2kvd7/paid_llm_courses_that_teach_practical_knowledge/ | LanceThunder | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2kvd7 | false | null | t3_1l2kvd7 | /r/LocalLLaMA/comments/1l2kvd7/paid_llm_courses_that_teach_practical_knowledge/ | false | false | self | 0 | null |
Sonnet Claude 4 ran locally? | 0 | Hi, I recently started using Cursor to make a website and fell in love with Agent and Claude 4. I have a 9950x3d with a 5090 with 96GB if ram and lots of Gen5 m.2 storage. I'm wondering if I can run something like this locally? So it can assist with editing and coding on its own via vibe coding. You guys are amazing... | 2025-06-03T19:10:10 | https://www.reddit.com/r/LocalLLaMA/comments/1l2kffu/sonnet_claude_4_ran_locally/ | VanFenix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2kffu | false | null | t3_1l2kffu | /r/LocalLLaMA/comments/1l2kffu/sonnet_claude_4_ran_locally/ | false | false | self | 0 | null |
Cooling question | 7 | I got a “new” 3090 and I got the bright idea to go buy a 1200W power supply and put my 3070 in the same case instead of the upgrade. Before I go buy the new PS, I tried the fit and it feels like that’s pretty tight. Is that enough room between the cards for airflow or am I about to start a fire? I’m adding two new cas... | 2025-06-03T18:58:51 | johnfkngzoidberg | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1l2k7rk | false | null | t3_1l2k7rk | /r/LocalLLaMA/comments/1l2k7rk/cooling_question/ | false | false | 7 | {'enabled': True, 'images': [{'id': 'VAC7c--rZyP4uKdRUcU4sNPso7_yXTNVzCYQfT8F5pE', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.jpeg?width=108&crop=smart&auto=webp&s=c1bdafed2a5bbdd47b1f52b5244f8b3a71726791', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/vd9n6tpyer4f1.j... | ||
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization) | 36 | **Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct → [Link](h... | 2025-06-03T18:54:54 | https://www.reddit.com/r/LocalLLaMA/comments/1l2k4nw/guidedquant_boost_llm_layerwise_ptq_methods_using/ | jusjinuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2k4nw | false | null | t3_1l2k4nw | /r/LocalLLaMA/comments/1l2k4nw/guidedquant_boost_llm_layerwise_ptq_methods_using/ | false | false | self | 36 | null |
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization) | 1 | **Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct → [Link](h... | 2025-06-03T18:54:52 | https://www.reddit.com/r/LocalLLaMA/comments/1l2k4n5/guidedquant_boost_llm_layerwise_ptq_methods_using/ | jusjinuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2k4n5 | false | null | t3_1l2k4n5 | /r/LocalLLaMA/comments/1l2k4n5/guidedquant_boost_llm_layerwise_ptq_methods_using/ | false | false | self | 1 | null |
GuidedQuant: Boost LLM layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit Quantization) | 1 | **Paper (ICML 2025):** [https://arxiv.org/abs/2505.07004](https://arxiv.org/abs/2505.07004) **Code:** [https://github.com/snu-mllab/GuidedQuant](https://github.com/snu-mllab/GuidedQuant) **HuggingFace Collection:** 2\~4-bit quantized Qwen3-32B, gemma-3-27b-it, Llama-3.1-8B-Instruct, Llama-3.3-70B-Instruct → [Link](h... | 2025-06-03T18:54:50 | https://www.reddit.com/r/LocalLLaMA/comments/1l2k4lo/guidedquant_boost_llm_layerwise_ptq_methods_using/ | jusjinuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2k4lo | false | null | t3_1l2k4lo | /r/LocalLLaMA/comments/1l2k4lo/guidedquant_boost_llm_layerwise_ptq_methods_using/ | false | false | 1 | null |
GuidedQuant: Boost layer-wise PTQ methods using the end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit quantization) | 1 | [removed] | 2025-06-03T18:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1l2k1j8/guidedquant_boost_layerwise_ptq_methods_using_the/ | jusjinuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2k1j8 | false | null | t3_1l2k1j8 | /r/LocalLLaMA/comments/1l2k1j8/guidedquant_boost_layerwise_ptq_methods_using_the/ | false | false | 1 | null | |
Using a LLM (large language model ) as a simplest physics engine — no physics code, just prompts | 1 | [removed] | 2025-06-03T18:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/1l2j7ax/using_a_llm_large_language_model_as_a_simplest/ | Arch1324 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2j7ax | false | null | t3_1l2j7ax | /r/LocalLLaMA/comments/1l2j7ax/using_a_llm_large_language_model_as_a_simplest/ | false | false | 1 | null | |
GuidedQuant: Boost layer-wise PTQ methods by using end loss guidance (Qwen3, Gemma3, Llama3.3 / 2~4bit quantization) | 1 | [removed] | 2025-06-03T18:11:02 | https://www.reddit.com/r/LocalLLaMA/comments/1l2j2zj/guidedquant_boost_layerwise_ptq_methods_by_using/ | jusjinuk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2j2zj | false | null | t3_1l2j2zj | /r/LocalLLaMA/comments/1l2j2zj/guidedquant_boost_layerwise_ptq_methods_by_using/ | false | false | self | 1 | null |
5060ti llama-cpp-python | 1 | [removed] | 2025-06-03T18:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/1l2izj5/5060ti_llamacpppython/ | pc_zoomer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2izj5 | false | null | t3_1l2izj5 | /r/LocalLLaMA/comments/1l2izj5/5060ti_llamacpppython/ | false | false | self | 1 | null |
I would really like to start digging deeper into LLMs. If I have $1500-$2000 to spend, what hardware setup would you recommend assuming I have nothing currently. | 25 | I have very little idea of what I'm looking for with regard to hardware. I'm a mac guy generally, so i'm familiar with their OS, so that's a plus for me. I also like that their memory is all very fast and shared with the GPU, which I \*think\* helps run things faster instead of being memory or CPU bound, but I'm not 10... | 2025-06-03T17:54:32 | https://www.reddit.com/r/LocalLLaMA/comments/1l2imqv/i_would_really_like_to_start_digging_deeper_into/ | BokehJunkie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2imqv | false | null | t3_1l2imqv | /r/LocalLLaMA/comments/1l2imqv/i_would_really_like_to_start_digging_deeper_into/ | false | false | self | 25 | null |
Which open source model is the cheapest to host and gives great performance? | 0 | Hello guys, Which open source model is the cheapest to host on a \~$30 Hetzner server and gives great performance? I am building a SAAS app and I want to integrate AI into it extensively. I don't have money for AI APIs. Thank you for you time. | 2025-06-03T17:33:35 | https://www.reddit.com/r/LocalLLaMA/comments/1l2i315/which_open_source_model_is_the_cheapest_to_host/ | Last-Kaleidoscope406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2i315 | false | null | t3_1l2i315 | /r/LocalLLaMA/comments/1l2i315/which_open_source_model_is_the_cheapest_to_host/ | false | false | self | 0 | null |
Which open source model is the cheapest to host and gives great performance? | 1 | [removed] | 2025-06-03T17:29:26 | https://www.reddit.com/r/LocalLLaMA/comments/1l2hz0k/which_open_source_model_is_the_cheapest_to_host/ | ExtremeYogurt3627 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1l2hz0k | false | null | t3_1l2hz0k | /r/LocalLLaMA/comments/1l2hz0k/which_open_source_model_is_the_cheapest_to_host/ | false | false | self | 1 | null |
New META Paper - How much do language models memorize? | 234 | Very interesting paper on dataset size, parameter size, and grokking. | 2025-06-03T16:47:12 | https://arxiv.org/abs/2505.24832 | Thrumpwart | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1l2gvar | false | null | t3_1l2gvar | /r/LocalLLaMA/comments/1l2gvar/new_meta_paper_how_much_do_language_models/ | false | false | default | 234 | null |
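The rows above come straight out of the dataset viewer, so they are not directly machine-readable. A minimal sketch for turning one row into a record is shown below (plain Python; the column order is taken from the header at the top of the page, while the helper name, the sample line, and the split handling are illustrative assumptions rather than part of the dump):

```python
# Minimal sketch: parse one pipe-delimited row from this dump into a dict.
# Column names follow the header shown at the top of the page. Rows whose
# free-text fields (e.g. selftext) themselves contain " | " may still
# mis-split, so treat this as a starting point, not a robust loader.

COLUMNS = [
    "title", "score", "selftext", "created", "url", "author", "domain",
    "edited", "gilded", "gildings", "id", "locked", "media", "name",
    "permalink", "spoiler", "stickied", "thumbnail", "ups", "preview",
]

def parse_row(line: str) -> dict:
    # Drop leading/trailing pipes, then split on at most len(COLUMNS) - 1
    # separators so extra pipes inside the trailing preview field stay with
    # it; truncated rows simply produce fewer fields.
    body = line.strip().strip("|").strip()
    parts = [p.strip() for p in body.split(" | ", len(COLUMNS) - 1)]
    return dict(zip(COLUMNS, parts))

if __name__ == "__main__":
    sample = "Cooling question | 7 | I got a new 3090 ... | 2025-06-03T18:58:51 | johnfkngzoidberg"
    row = parse_row(sample)
    print(row["title"], row["score"])
```

From there the records could be loaded into pandas or the `datasets` library for filtering by score, author, or timestamp.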