title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
testing ai realism without crossing the line using stabilityai and domoai | 0 | not tryin to post nsfw, just wanted to test the boundaries of realism and style.
[stabilityai](http://stability.ai) with some custom models gave pretty decent freedom. then touched everything up in [domoai](https://www.domoai.app/home?via=081621AUG) using a soft-glow filter.
the line between “art” and “too much” is s... | 2025-06-20T04:44:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lfvbqg/testing_ai_realism_without_crossing_the_line/ | Own_View3337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfvbqg | false | null | t3_1lfvbqg | /r/LocalLLaMA/comments/1lfvbqg/testing_ai_realism_without_crossing_the_line/ | false | false | self | 0 | null |
96GB VRAM plus 256GB/512GB Fast RAM | 12 | I'm thinking of combining 96GB (1800GB/s) VRAM from the 6000 RTX PRO (already have this) with 256GB or 512GB (410GB/s) RAM in the upcoming Threadripper.
Do you all think this could run any largish versions of Deepseek with useful throughput? | 2025-06-20T04:42:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lfvaos/96gb_vram_plus_256gb512gb_fast_ram/ | SteveRD1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfvaos | false | null | t3_1lfvaos | /r/LocalLLaMA/comments/1lfvaos/96gb_vram_plus_256gb512gb_fast_ram/ | false | false | self | 12 | null |
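A quick way to sanity-check questions like this: decode on a bandwidth-bound MoE model runs at roughly (memory bandwidth) / (bytes read per token). A minimal sketch, where the 37B active parameters, 4-bit weights, and RAM-resident experts are all illustrative assumptions rather than measurements:

```python
# Rough decode-throughput estimate for a model split across VRAM and system RAM.
# Assumption: decoding is memory-bandwidth-bound, so tokens/s ~= bandwidth / bytes per token.
def tokens_per_second(active_params_b, bytes_per_param, bandwidth_gbs):
    """active_params_b: billions of parameters touched per generated token."""
    bytes_per_token = active_params_b * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / bytes_per_token

# Hypothetical numbers: DeepSeek-style MoE, ~37B active params at 4-bit (~0.5 bytes/param),
# with most experts in 410 GB/s system RAM, so RAM bandwidth sets the ceiling.
print(round(tokens_per_second(37, 0.5, 410), 1))
```

By that rough math the 410 GB/s RAM, not the 1800 GB/s GPU, would be the ceiling whenever the active experts spill out of VRAM.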
New 24B finetune: Impish_Magic_24B | 61 | It's the **20th of June, 2025**—The world is getting more and more chaotic, but let's look at the bright side: **Mistral** released a new model at a **very** good size of **24B**, no more "sign here" or "accept this weird EULA" there, a proper **Apache 2.0 License**, nice! 👍🏻
This model is based on **mistralai/Magis... | 2025-06-20T04:21:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lfuxn1/new_24b_finetune_impish_magic_24b/ | Sicarius_The_First | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfuxn1 | false | null | t3_1lfuxn1 | /r/LocalLLaMA/comments/1lfuxn1/new_24b_finetune_impish_magic_24b/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': '51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/51No_P_uAdDX1Ycoltbek_a-pSyT0jWN6KAjsiAu82A.png?width=108&crop=smart&auto=webp&s=5bb85cf25fd314ab613856c46b8fce17d683ab63', 'width': 108}, {'height': 116, 'url': 'h... |
I did a thing... | 1 | [removed] | 2025-06-20T04:01:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lfukb1/i_did_a_thing/ | Ok-Mud9471 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfukb1 | false | null | t3_1lfukb1 | /r/LocalLLaMA/comments/1lfukb1/i_did_a_thing/ | false | false | self | 1 | null |
Open Discussion: Improving HTML-to-Markdown Extraction Using Local LLMs (7B/8B, llama.cpp) – Seeking Feedback on My Approach! | 15 | Hey Reddit,
I'm working on a smarter way to convert HTML web pages to high-quality Markdown using **local LLMs** (Qwen2.5-7B/8B, llama.cpp) running on consumer GPUs. My goal: outperform traditional tools like Readability or html2text on tricky websites (e.g. modern SPAs, tech blogs, and noisy sites) — and do it all *f... | 2025-06-20T03:28:14 | https://www.reddit.com/r/LocalLLaMA/comments/1lftz5s/open_discussion_improving_htmltomarkdown/ | coolmenu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lftz5s | false | null | t3_1lftz5s | /r/LocalLLaMA/comments/1lftz5s/open_discussion_improving_htmltomarkdown/ | false | false | self | 15 | null |
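As a point of comparison for pipelines like this, a deterministic baseline is easy to sketch with the standard library. The class below is a hypothetical toy, not part of the project described above: it keeps heading and paragraph text and drops script/style noise, which is roughly the pre-cleaning that Readability-style tools do before any LLM gets involved:

```python
from html.parser import HTMLParser

class TinyMarkdown(HTMLParser):
    """Toy deterministic HTML-to-Markdown pass: headings and paragraph text only."""
    def __init__(self):
        super().__init__()
        self.blocks, self.buf, self.heading, self.skip = [], [], 0, False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip = True            # drop non-content noise entirely
        elif tag in ("h1", "h2", "h3", "p"):
            self._flush()
            self.heading = int(tag[1]) if tag.startswith("h") else 0

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip and data.strip():
            self.buf.append(data.strip())

    def _flush(self):
        # Emit the buffered block as a Markdown heading or plain paragraph.
        if self.buf:
            text = " ".join(self.buf)
            self.blocks.append(("#" * self.heading + " " + text) if self.heading else text)
            self.buf = []

    def markdown(self):
        self._flush()
        return "\n\n".join(self.blocks)

p = TinyMarkdown()
p.feed("<h1>Title</h1><p>Hello <b>world</b>.</p><script>junk()</script>")
print(p.markdown())
```

A baseline like this is what the LLM pipeline would need to beat on noisy SPA pages.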
[DEAL] On-demand B200 GPUs for $1.49/hr at DeepInfra (promo ends June 30) | 0 | no commitments
any configuration (1x, 2x and so on)
minute level billing
cheapest in the market👌 | 2025-06-20T03:00:40 | https://www.reddit.com/r/LocalLLaMA/comments/1lftglj/deal_ondemand_b200_gpus_for_149hr_at_deepinfra/ | temirulan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lftglj | false | null | t3_1lftglj | /r/LocalLLaMA/comments/1lftglj/deal_ondemand_b200_gpus_for_149hr_at_deepinfra/ | false | false | self | 0 | null |
Running Local LLMs (“AI”) on Old Unsupported AMD GPUs and Laptop iGPUs using llama.cpp with Vulkan (Arch Linux Guide) | 19 | 2025-06-20T02:51:23 | https://ahenriksson.com/posts/running-llm-on-old-amd-gpus/ | Kallocain | ahenriksson.com | 1970-01-01T00:00:00 | 0 | {} | 1lftaep | false | null | t3_1lftaep | /r/LocalLLaMA/comments/1lftaep/running_local_llms_ai_on_old_unsupported_amd_gpus/ | false | false | default | 19 | null | |
If an omni-modal AI exists that can extract any sort of information from any given modality/ies (text, audio, video, GUI, etc), which task would you use it for ? | 0 | One common example is intelligent document processing. But I imagine we can also apply it on random youtube videos to cross-check for NSFW or gruesome contents or audios and describe what sort of contents were there in mild text for large-scale analysis. I see that not many research works exist for information extracti... | 2025-06-20T02:40:32 | https://www.reddit.com/r/LocalLLaMA/comments/1lft30z/if_an_omnimodal_ai_exists_that_can_extract_any/ | Marionberry6886 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lft30z | false | null | t3_1lft30z | /r/LocalLLaMA/comments/1lft30z/if_an_omnimodal_ai_exists_that_can_extract_any/ | false | false | self | 0 | null |
Running Local LLMs (“AI”) on Old Unsupported AMD GPUs and Laptop iGPUs (Arch Linux Guide) | 0 | 2025-06-20T02:34:36 | https://ahenriksson.com/posts/running-llm-on-old-amd-gpus/ | Kallocain | ahenriksson.com | 1970-01-01T00:00:00 | 0 | {} | 1lfsz42 | false | null | t3_1lfsz42 | /r/LocalLLaMA/comments/1lfsz42/running_local_llms_ai_on_old_unsupported_amd_gpus/ | false | false | default | 0 | null | |
help with CondaError | 3 | I'm very new to AI and I'm really confused about all this.
I'm trying to use AllTalk, but I'm having a problem called “CondaError: Run 'conda init' before 'conda activate'.”
I searched the internet and it's really hard for me to understand, so I'm asking here to see if someone could explain it to me in a more...uhh...sim... | 2025-06-20T02:17:57 | https://www.reddit.com/r/LocalLLaMA/comments/1lfsntm/help_with_condaerror/ | miorex | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfsntm | false | null | t3_1lfsntm | /r/LocalLLaMA/comments/1lfsntm/help_with_condaerror/ | false | false | self | 3 | null |
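For reference, the usual fix for that message is letting conda write its activation hook into your shell startup file and then reloading the shell. A sketch of the typical commands (the environment name is a placeholder, and the exact shell argument depends on your setup):

```shell
# Write conda's activation hook into the shell startup file.
conda init bash          # on Windows, use: conda init cmd.exe   or   conda init powershell
# Reload the shell so the hook takes effect (or just open a new terminal).
source ~/.bashrc
# Activation should now work without the CondaError.
conda activate alltalk   # placeholder environment name
```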
Simulating top-down thinking in LLMs through prompting - a path to AGI like output? | 0 | the theory behind this is that since llms are essentially just coherency engines that use text probability to produce output that best fits whatever narrative is in the context window, then if you take a problem and give the llm enough context and constraints and then ask it to solve it, you will have created a high-pr... | 2025-06-20T01:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lfs36u/simulating_topdown_thinking_in_llms_through/ | edspert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfs36u | false | null | t3_1lfs36u | /r/LocalLLaMA/comments/1lfs36u/simulating_topdown_thinking_in_llms_through/ | false | false | self | 0 | null |
Performance scaling from 400W to 600W on 2 5090s (MSI, Inno) and 2 4090s (ASUS, Gigabyte) from compute-bound task (SDXL). | 8 | Hi there guys, hoping you are having a good day/night!
Continuing a bit from this post [https://www.reddit.com/r/nvidia/comments/1ld3f9n/small\_comparison\_of\_2\_5090s\_1\_voltage\_efficient\_1/](https://www.reddit.com/r/nvidia/comments/1ld3f9n/small_comparison_of_2_5090s_1_voltage_efficient_1/)
Now this time, ... | 2025-06-20T01:23:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lfrmj6/performance_scaling_from_400w_to_600w_on_2_5090s/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfrmj6 | false | null | t3_1lfrmj6 | /r/LocalLLaMA/comments/1lfrmj6/performance_scaling_from_400w_to_600w_on_2_5090s/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'EdaQxJeXXbDAR7IH6sBO_A4JGYRzpN5CoV5gk49NIGo', 'resolutions': [], 'source': {'height': 100, 'url': 'https://external-preview.redd.it/EdaQxJeXXbDAR7IH6sBO_A4JGYRzpN5CoV5gk49NIGo.jpeg?auto=webp&s=f3790c84b91d68186c7f69a223b13d9924f446bc', 'width': 99}, 'variants': {}}]} |
Running DeepSeek locally using ONNX Runtime | 0 | Just wanted to drop this here for anyone interested in running models locally using ONNX Runtime. The focus here is on using the NPU in Snapdragon X Elite, but can be extended to other systems as well!
| 2025-06-20T00:42:28 | https://www.youtube.com/live/VRDB_ob7ulA?si=sR3Pes-BGUlPPJxh | DangerousGood4561 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lfqsps | false | null | t3_1lfqsps | /r/LocalLLaMA/comments/1lfqsps/running_deepseek_locally_using_onnx_runtime/ | false | false | default | 0 | null |
How to set temperature RIGHT | 0 | In Google AI Studio, I've noticed that lots of people think that the models aren't that great, and when coding can behave almost erratically and make bad, silly mistakes. The main culprit is because they weirdly set the default temp of all their models to 1. The temperature range at least in AI Studio is from 0 (comple... | 2025-06-20T00:03:53 | https://v.redd.it/gslo28iwyy7f1 | Longjumping_Spot5843 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfq0bk | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/gslo28iwyy7f1/DASHPlaylist.mpd?a=1752969846%2CNDUyMTljNDFlZWFhMzFmZjYwNTJkZTlkYzU5Mzk1Yjk3MTZjOTRjYzUzZjQwMTA3N2YxYzBhNGI1ZTEzOTFiZQ%3D%3D&v=1&f=sd', 'duration': 41, 'fallback_url': 'https://v.redd.it/gslo28iwyy7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lfq0bk | /r/LocalLLaMA/comments/1lfq0bk/how_to_set_temperature_right/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bTIwNTd5aHd5eTdmMd0mZGb55txDNP6wvQ_3GRbDM3anw7Owu5c3y6JjwrsE.png?width=108&crop=smart&format=pjpg&auto=webp&s=509010eefd4fd1d0dfa10b70ce2eb0f0b0fa0... | |
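Mechanically, temperature just rescales the logits before the softmax, which is why 0 collapses to near-greedy picking and values above 1 flatten the distribution. A minimal self-contained sketch:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by T before softmax: T < 1 sharpens, T > 1 flattens.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)   # near-greedy
hot  = softmax_with_temperature(logits, 1.5)   # much flatter
print(round(cold[0], 3), round(hot[0], 3))     # -> 0.993 0.532
```

At T=0.2 the top token takes almost all the probability mass; at T=1.5 the same logits leave real mass on the alternatives, which is where the "erratic" coding behavior comes from.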
Current best uncensored model? | 274 | this is probably one of the biggest advantages of local LLMs, yet there is no universally accepted answer to what's the best model as of June 2025.
**So share your BEST uncensored model!**
*by "best uncensored model" i mean the least censored model (that helped you get a nuclear bomb in your kitchen), but also t... | 2025-06-19T23:51:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lfpqs6/current_best_uncensored_model/ | Accomplished-Feed568 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfpqs6 | false | null | t3_1lfpqs6 | /r/LocalLLaMA/comments/1lfpqs6/current_best_uncensored_model/ | false | false | self | 274 | null |
Qwen3 for Apple Neural Engine | 120 | We just dropped ANEMLL 0.3.3 alpha with Qwen3 support for Apple's Neural Engine
https://github.com/Anemll/Anemll
Start to support open source!
Cheers,
Anemll 🤖 | 2025-06-19T23:43:28 | https://www.reddit.com/r/LocalLLaMA/comments/1lfpkyv/qwen3_for_apple_neural_engine/ | Competitive-Bake4602 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfpkyv | false | null | t3_1lfpkyv | /r/LocalLLaMA/comments/1lfpkyv/qwen3_for_apple_neural_engine/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': 'nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/nQKqVo6OHbUS3Rgj27R8TDTcE9cz10aAXxP0kdlPMQI.png?width=108&crop=smart&auto=webp&s=954a1218e83cc3cae269b5c08148218bcb74581a', 'width': 108}, {'height': 108, 'url': 'h... |
Anyone else tracking datacenter GPU prices on eBay? | 58 | I've been in the habit of checking eBay for AMD Instinct prices for a few years now, and noticed just today that MI210 prices seem to be dropping pretty quickly (though still priced out of my budget!) and there is a used MI300X for sale there for the first time, for *only* $35K /s
I watch MI60 and MI100 prices too, bu... | 2025-06-19T23:35:16 | https://www.reddit.com/r/LocalLLaMA/comments/1lfpewd/anyone_else_tracking_datacenter_gpu_prices_on_ebay/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfpewd | false | null | t3_1lfpewd | /r/LocalLLaMA/comments/1lfpewd/anyone_else_tracking_datacenter_gpu_prices_on_ebay/ | false | false | self | 58 | null |
Dual RTX 6000, Blackwell and Ada Lovelace, with thermal imagery | 58 | This rig is more for training than local inference (though there is a lot of the latter with Qwen) but I thought it might be helpful to see how the new Blackwell cards dissipate heat compared to the older blower style for Quadros prominent since Ampere.
There are two IR color ramps - a standard heat map and a rainbow... | 2025-06-19T23:23:38 | https://www.reddit.com/gallery/1lfp66e | Thalesian | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lfp66e | false | null | t3_1lfp66e | /r/LocalLLaMA/comments/1lfp66e/dual_rtx_6000_blackwell_and_ada_lovelace_with/ | false | false | 58 | {'enabled': True, 'images': [{'id': 'dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/dp9jZ9I5ulT5RZQDN9KwsbRB_C7Fi7IzWNUNN0l7OB8.jpeg?width=108&crop=smart&auto=webp&s=4ee16e93b04183bd2df8a98a5a90d52b535fd63c', 'width': 108}, {'height': 162, 'url': 'h... | |
Dual RTX 6000, Blackwell + Ada Lovelace, with thermal imagery | 1 | This rig is more for training than local inference (though there is a lot of the latter with Qwen) but I thought it might be helpful to see how the new Blackwell cards dissipate heat compared to the older blower style for Quadros prominent since Ampere.
There are two IR color ramps - a standard heat map and a rainbo... | 2025-06-19T23:16:02 | https://www.reddit.com/gallery/1lfp0ch | Thalesian | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lfp0ch | false | null | t3_1lfp0ch | /r/LocalLLaMA/comments/1lfp0ch/dual_rtx_6000_blackwell_ada_lovelace_with_thermal/ | false | false | 1 | {'enabled': True, 'images': [{'id': '8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/8eF8bVPaFXTp_Koh0baE9cj7mvUPagqHoDLgF26UjbI.jpeg?width=108&crop=smart&auto=webp&s=fad8bd00d6ff520849f46654d69c69e63901c5f2', 'width': 108}, {'height': 162, 'url': 'h... | |
Why We Need Truth-Seeking AI: Announcing $1M in Grants | 0 | Anyone into philosophy and building an AI?
https://youtu.be/HKFqZozACos
Links in the comment section of the video.
[I am not involved with the project, I just follow Johnathan on YouTube and thought that someone here might be interested in it.] | 2025-06-19T22:52:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lfoi0v/why_we_need_truthseeking_ai_announcing_1m_in/ | Cane_P | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfoi0v | false | null | t3_1lfoi0v | /r/LocalLLaMA/comments/1lfoi0v/why_we_need_truthseeking_ai_announcing_1m_in/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/n90JS4PebXxypqdeoc2kOLYNKWwWx_Q5MWaUcVMeixU.jpeg?width=108&crop=smart&auto=webp&s=f120f538b97b7197407376032bf36aa4c0177a27', 'width': 108}, {'height': 162, 'url': '... |
Is there any frontend which supports OpenAI features like web search or Scheduled Tasks? | 2 | I’m currently using OpenWebUI… and they are not good at implementing basic features in Chatgpt Plus that’s been around for a long time.
For example, web search. OpenWebUI web search sucks when using o3 or gpt-4.1. You have to configure a google/bing/etc api key, and then it takes 5+ minutes to do a simple query!
Mea... | 2025-06-19T22:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lfo7p0/is_there_any_frontend_which_supports_openai/ | DepthHour1669 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfo7p0 | false | null | t3_1lfo7p0 | /r/LocalLLaMA/comments/1lfo7p0/is_there_any_frontend_which_supports_openai/ | false | false | self | 2 | null |
As a storyteller, how can I have this? | 0 | I am jealous of vibe coding. They get to create a lot and learn but there's no such thing yet for storytelling. I want to create short stories using ai. Image creation is not for me. Have anyone figured out anything for short films? I want to spend my days tinkering with shots, frames and movements. | 2025-06-19T22:14:37 | Original-Party-2759 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfnnif | false | null | t3_1lfnnif | /r/LocalLLaMA/comments/1lfnnif/as_a_storyteller_how_can_i_have_this/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'vt4ji7xgky7f1', 'resolutions': [{'height': 120, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=108&crop=smart&auto=webp&s=67a4a42f09c957d1800d8c07ac19a35a6b1b1bbc', 'width': 108}, {'height': 241, 'url': 'https://preview.redd.it/vt4ji7xgky7f1.jpeg?width=216&crop=smart&auto=... | |
Optimized Chatterbox TTS (Up to 2-4x non-batched speedup) | 45 | Over the past few weeks I've been experimenting for speed, and finally it's stable - a version that easily triples the original inference speed on my Windows machine with Nvidia 3090. I've also streamlined the torch dtype mismatch, so it does not require torch.autocast and thus using half precision is faster, lowering ... | 2025-06-19T22:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lfnn7b/optimized_chatterbox_tts_up_to_24x_nonbatched/ | RSXLV | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfnn7b | false | null | t3_1lfnn7b | /r/LocalLLaMA/comments/1lfnn7b/optimized_chatterbox_tts_up_to_24x_nonbatched/ | false | false | self | 45 | null |
Prompt engineering tip: Use bulleted lists | 0 | I was asking gemini for a plan for an MVP. My prompt was messy. Output from gemini was good. I then asked deepseek the same. I liked how deepseek structured the output, more robotic, less prose.
I then asked gemini again in the style of deepseek and wow, what a difference. The output was so clean and tidy, less p... | 2025-06-19T21:47:47 | https://www.reddit.com/r/LocalLLaMA/comments/1lfn1l3/prompt_engineering_tip_use_bulleted_lists/ | Terminator857 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfn1l3 | false | null | t3_1lfn1l3 | /r/LocalLLaMA/comments/1lfn1l3/prompt_engineering_tip_use_bulleted_lists/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/RcFkSsKYzFr8PJ596l5oUr199tyGk3gGxxSpWVmCk2M.png?width=108&crop=smart&auto=webp&s=52e9c9359c50c5ce8fad8838f7c261d6848e95c0', 'width': 108}, {'height': 108, 'url': 'h... |
ICONN 1 is now out! | 270 | Hello r/LocalLLaMA,
Today is a huge day for us, and we're thrilled to finally share something we've poured an incredible amount of time and resources into: **ICONN-1**. This isn't another fine-tune; we built this model from the ground up, a project that involved a significant investment of **$50,000 to train from ... | 2025-06-19T21:44:34 | https://www.reddit.com/r/LocalLLaMA/comments/1lfmyy3/iconn_1_is_now_out/ | Enderchef | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfmyy3 | false | null | t3_1lfmyy3 | /r/LocalLLaMA/comments/1lfmyy3/iconn_1_is_now_out/ | false | false | self | 270 | {'enabled': False, 'images': [{'id': 'SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=108&crop=smart&auto=webp&s=5b2c6b95c12457e1084b5bb7a75f8669279c2f8e', 'width': 108}, {'height': 116, 'url': 'h... |
llama3.2:1b | 0 | Added this to test that Ollama was working with my 5070 Ti and I am seriously impressed. Near-instant, accurate responses beating 13B finetuned medical LLMs. | 2025-06-19T21:30:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lfmmyd/llama321b/ | Glittering-Koala-750 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfmmyd | false | null | t3_1lfmmyd | /r/LocalLLaMA/comments/1lfmmyd/llama321b/ | false | false | self | 0 | null |
iOS shortcut for private voice, text, and photo questions via Ollama API. | 1 | I've seen Gemini and OpenAI shortcuts, but I wanted something more private and locally hosted. So, I built this! You can ask your locally hosted AI questions via voice and text, and even with photos if you host a vision-capable model like Qwen2.5VL. Assigning it to your action button makes for fast and easy access. ... | 2025-06-19T21:17:25 | https://www.reddit.com/r/LocalLLaMA/comments/1lfmc4b/ios_shortcut_for_private_voice_text_and_photo/ | FreemanDave | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfmc4b | false | null | t3_1lfmc4b | /r/LocalLLaMA/comments/1lfmc4b/ios_shortcut_for_private_voice_text_and_photo/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e-1E-fEOFlFbkjKUhF2g8IrVXDpKeL4Ty1BG16SOj3I.png?width=108&crop=smart&auto=webp&s=cccf196695128b046c6efeb7637fce727ba1dbf3', 'width': 108}, {'height': 108, 'url': 'h... |
We just added LlamaIndex support to AG-UI — bring a frontend to your agent | 15 | Hey all, I'm on the team behind AG-UI, a lightweight standard that brings agents into the UI as dynamic, stateful, real-time collaborators.
I'm seriously excited to share that **AG-UI now supports LlamaIndex** out of the box. You can wire up a LlamaIndex agent to a modern UI in seconds.
# AG-UI features:
* Real-time... | 2025-06-19T21:08:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lfm412/we_just_added_llamaindex_support_to_agui_bring_a/ | nate4t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfm412 | false | null | t3_1lfm412 | /r/LocalLLaMA/comments/1lfm412/we_just_added_llamaindex_support_to_agui_bring_a/ | false | false | self | 15 | null |
We just added LlamaIndex support to AG-UI — bring a frontend to your agent | 1 | [removed] | 2025-06-19T21:06:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lfm2cm/we_just_added_llamaindex_support_to_agui_bring_a/ | nate4t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfm2cm | false | null | t3_1lfm2cm | /r/LocalLLaMA/comments/1lfm2cm/we_just_added_llamaindex_support_to_agui_bring_a/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WfYO9CxPR5tbGfhzCSdTjR-PQPqbM1z-qIHZycRYaMY.png?width=108&crop=smart&auto=webp&s=17f996362507f3c5a48f84f5f3186aae06e01f35', 'width': 108}, {'height': 108, 'url': 'h... |
Question: Multimodal LLM (text + image) with very long context (200k tokens) | 0 | Hi everyone,
I’m looking for an LLM that can handle both text and images with a very long context window, up to 200k tokens.
I saw that GPT-4-o (o3-mini) can handle 200k tokens but doesn’t process images. Current multimodal models usually support around 30k to 100k tokens max.
Two questions:
1. **Does a multimodal ... | 2025-06-19T21:03:46 | https://www.reddit.com/r/LocalLLaMA/comments/1lfm0dl/question_multimodal_llm_text_image_with_very_long/ | Mobile_Estate_9160 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfm0dl | false | null | t3_1lfm0dl | /r/LocalLLaMA/comments/1lfm0dl/question_multimodal_llm_text_image_with_very_long/ | false | false | self | 0 | null |
Tool for creating datasets from unstructured data. | 0 | Since creating datasets from unstructured data like text is cumbersome I thought, given that I'm a software engineer, I'd make a tool for it.
I'm not aware of any good and convenient solutions. Most of the time it's using ChatGPT and doing it manually or having to setup solution locally. (Let me know if there's a bett... | 2025-06-19T20:07:37 | https://www.reddit.com/r/LocalLLaMA/comments/1lfkns2/tool_for_creating_datasets_from_unstructured_data/ | WanderSprocket | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfkns2 | false | null | t3_1lfkns2 | /r/LocalLLaMA/comments/1lfkns2/tool_for_creating_datasets_from_unstructured_data/ | false | false | self | 0 | null |
Is the 3060 12GB the best performance/cost for entry level local hosted? | 1 | Hi, I was wondering if the 3060 would be a good buy for someone wanting to start out with Local host LLMs. I planned to look for something I can put in my small Proxmox home server/Nas to play around with things like Voice home assistant via small LLMs and just to learn more, so a bit of LLM, a bit of Stable Diffusion.... | 2025-06-19T20:06:59 | https://www.reddit.com/r/LocalLLaMA/comments/1lfkn72/is_the_3060_12gb_the_best_performancecost_for/ | SKX007J1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfkn72 | false | null | t3_1lfkn72 | /r/LocalLLaMA/comments/1lfkn72/is_the_3060_12gb_the_best_performancecost_for/ | false | false | self | 1 | null |
Preparing for the Intelligence Explosion | 0 | Abstract:
AI that can accelerate research could drive a century of technological progress over just a few years. During such a period, new technological or political developments will raise consequential and hard-to-reverse decisions, in rapid succession. We call these developments grand challenges. These challenges i... | 2025-06-19T19:52:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lfka3j/preparing_for_the_intelligence_explosion/ | jackdareel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfka3j | false | null | t3_1lfka3j | /r/LocalLLaMA/comments/1lfka3j/preparing_for_the_intelligence_explosion/ | false | false | self | 0 | null |
We Tested Apple's On-Device Model for RAG Task | 79 | Hey r/LocalLLaMA,
We ran Apple's on-device model through samples of our RAG evaluation framework (1000 questions).
# TL;DR
**The Good:**
* **8.5/10 factual accuracy** on questions it decides to answer (on par with best small models like Qwen3 4B and IBM Granite 3.3 2B)
* **\~30 tokens/second** on M3 MacBook Air (1... | 2025-06-19T19:25:13 | https://www.reddit.com/r/LocalLLaMA/comments/1lfjmx4/we_tested_apples_ondevice_model_for_rag_task/ | No_Salamander1882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfjmx4 | false | null | t3_1lfjmx4 | /r/LocalLLaMA/comments/1lfjmx4/we_tested_apples_ondevice_model_for_rag_task/ | false | false | self | 79 | {'enabled': False, 'images': [{'id': 'wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0', 'resolutions': [{'height': 72, 'url': 'https://external-preview.redd.it/wqsi-JfLb8pSXmshgX3Ny5LrE8yAdxgSirsoFM-A7B0.jpeg?width=108&crop=smart&auto=webp&s=070d5dc10bbb27decd7458d197b3348cf9d147e4', 'width': 108}, {'height': 144, 'url': '... |
that's 500 IQ move | 67 | 2025-06-19T19:21:47 | BoringAd6806 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfjjxh | false | null | t3_1lfjjxh | /r/LocalLLaMA/comments/1lfjjxh/thats_500_iq_move/ | false | false | default | 67 | {'enabled': True, 'images': [{'id': 'duqrjaumpx7f1', 'resolutions': [{'height': 91, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=108&crop=smart&auto=webp&s=21e00044eaee0cf1b49d6e2a89a97accc47f8645', 'width': 108}, {'height': 182, 'url': 'https://preview.redd.it/duqrjaumpx7f1.png?width=216&crop=smart&auto=web... | ||
How to create synthetic datasets for multimodal models like vision and audio? | 0 | Just like we have the Meta synthetic datasets kit to create high quality synthetic datasets for text based models, how can we apply a similar approach to multimodal models like vision models,audio models? | 2025-06-19T19:09:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lfj8i4/how_to_create_synthetic_datasets_for_multimodal/ | SelectionCalm70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfj8i4 | false | null | t3_1lfj8i4 | /r/LocalLLaMA/comments/1lfj8i4/how_to_create_synthetic_datasets_for_multimodal/ | false | false | self | 0 | null |
Any reason to go true local vs cloud? | 17 | **Is there any value for investing in a GPU — price for functionality?**
My own use case and conundrum:
I have access to some powerful enterprises level compute and environments at work (through Azure AI Foundry and enterprise Stack). I'm a hobbyist dev and tinkerer for LLMs, building a much needed upgrade to my perso... | 2025-06-19T19:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lfj8hf/any_reason_to_go_true_local_vs_cloud/ | ghost202 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfj8hf | false | null | t3_1lfj8hf | /r/LocalLLaMA/comments/1lfj8hf/any_reason_to_go_true_local_vs_cloud/ | false | false | self | 17 | null |
How to install Sesame TTS locally in Win | 1 | Hi everyone, puzzled right now.
No matter how much I tried, I just can't seem to install Sesame locally on my PC.
Even after following the detailed tutorials from their GitHub page, I just cannot get it to work.
Do I need to do anything other than following the instructions from the github page?
At the end, I want... | 2025-06-19T18:55:15 | https://www.reddit.com/r/LocalLLaMA/comments/1lfivt4/how_to_intsall_sesame_tts_locall_in_win/ | Dragonacious | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfivt4 | false | null | t3_1lfivt4 | /r/LocalLLaMA/comments/1lfivt4/how_to_intsall_sesame_tts_locall_in_win/ | false | false | self | 1 | null |
Help with Ollama & Open WebUI – Best Practices for Staff Knowledge Base | 1 | [removed] | 2025-06-19T18:43:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lfilpl/help_with_ollama_open_webui_best_practices_for/ | Numerous-Ideal-7665 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfilpl | false | null | t3_1lfilpl | /r/LocalLLaMA/comments/1lfilpl/help_with_ollama_open_webui_best_practices_for/ | false | false | self | 1 | null |
Kyutai's STT with semantic VAD now opensource | 132 | Kyutai published their latest tech demo a few weeks ago, unmute.sh. It is an impressive voice-to-voice assistant using a 3rd-party text-to-text LLM (gemma), while retaining Moshi's low conversational latency.
They are currently open-sourcing the various components for that.
The first component they opensourced is the... | 2025-06-19T18:33:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lficpj/kyutais_stt_with_semantic_vad_now_opensource/ | phhusson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lficpj | false | null | t3_1lficpj | /r/LocalLLaMA/comments/1lficpj/kyutais_stt_with_semantic_vad_now_opensource/ | false | false | self | 132 | {'enabled': False, 'images': [{'id': 'Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Tk7TPUXaCv0JxtoKJ8ZTKaRNtOpd6Cvo5_neUZjNTYk.png?width=108&crop=smart&auto=webp&s=9afbd00eebd442d93bbbcfa4b9ac60c1f5862891', 'width': 108}, {'height': 108, 'url': 'h... |
I have a dual Xeon e5-2680v2 with 64GB of RAM, what is the best local LLM I can run? | 0 | what the title says, I have a dual Xeon e5-2680v2 with 64GB of RAM, what is the best local LLM I can run? | 2025-06-19T18:04:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lfhm4m/i_have_an_dual_xeon_e52680v2_with_64gb_of_ram/ | eightbitgamefan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfhm4m | false | null | t3_1lfhm4m | /r/LocalLLaMA/comments/1lfhm4m/i_have_an_dual_xeon_e52680v2_with_64gb_of_ram/ | false | false | self | 0 | null |
New Finnish models (Poro 2) based on Llama 3.1 8B and 70B | 26 | Poro 2 models are based on Llama 3.1 for both 8B and 70B versions. They've been continually pre-trained on 165B tokens using a carefully balanced mix of Finnish, English, code, and math data.
In my opinion they perform better than Gemma 3 at least when it comes to Finnish. Gemma 3 is probably still smarter but won't w... | 2025-06-19T18:02:04 | https://www.reddit.com/r/LocalLLaMA/comments/1lfhjja/new_finnish_models_poro_2_based_on_llama_31_8b/ | mpasila | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfhjja | false | null | t3_1lfhjja | /r/LocalLLaMA/comments/1lfhjja/new_finnish_models_poro_2_based_on_llama_31_8b/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JzIz-AHYk0Imuoe3OwQ8pRU0vyxqIGGeaF-52Aly9ho.png?width=108&crop=smart&auto=webp&s=810717e51c264afc8ab7106884e570bbdb855c2a', 'width': 108}, {'height': 116, 'url': 'h... |
[Setup discussion] AMD RX 7900 XTX workstation for local LLMs — Linux or Windows as host OS? | 6 | Hey everyone,
I’m a software developer and currently building a workstation to run local LLMs. I want to experiment with agents, text-to-speech, image generation, multi-user interfaces, etc.
The goal is broad: from hobby projects to a shared AI assistant for my family.
Specs:
• GPU: RX 7900 XTX 24GB
• CPU: i7-14700... | 2025-06-19T17:55:43 | https://www.reddit.com/r/LocalLLaMA/comments/1lfhdnb/setup_discussion_amd_rx_7900_xtx_workstation_for/ | ElkanRoelen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfhdnb | false | null | t3_1lfhdnb | /r/LocalLLaMA/comments/1lfhdnb/setup_discussion_amd_rx_7900_xtx_workstation_for/ | false | false | self | 6 | null |
Is DDR4 and PCIe 3.0 holding back my inference speed? | 3 | I'm running Llama-CPP on two Rx 6800's (~512GB/s memory bandwidth) - each one getting 8 pcie lanes. I have a Ryzen 9 3950x paired with this and 64GB of 2900mhz DDR4 in dual-channel.
I'm extremely pleased with inference speeds for models that fit on one GPU, but I have a weird cap of ~40 tokens/second when using models... | 2025-06-19T17:44:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lfh3lc/is_ddr4_and_pcie_30_holding_back_my_inference/ | ForsookComparison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfh3lc | false | null | t3_1lfh3lc | /r/LocalLLaMA/comments/1lfh3lc/is_ddr4_and_pcie_30_holding_back_my_inference/ | false | false | self | 3 | null |
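A quick back-of-envelope check of the numbers in this post (the per-lane rate, model width, and fp16 activation size below are assumed illustrative figures, not measurements): with sequential layer-split inference, only small activation tensors cross the PCIe bus per token, so an x8 Gen3 link is rarely the cap.

```python
# Rough PCIe 3.0 x8 vs. activation-traffic estimate for layer-split inference.
# All numbers are illustrative assumptions, not measured values.
pcie3_per_lane_gbs = 0.985            # ~8 GT/s with 128b/130b encoding, per lane
lanes = 8
bus_gbs = pcie3_per_lane_gbs * lanes  # ~7.9 GB/s available to each GPU

hidden_dim = 8192                     # hypothetical 70B-class hidden size
bytes_per_act = 2                     # fp16 activations
act_bytes = hidden_dim * bytes_per_act  # bytes crossing the bus per token hop

tokens_per_s = 40
traffic_mbs = act_bytes * tokens_per_s / 1e6
print(f"bus: {bus_gbs:.2f} GB/s, activation traffic: {traffic_mbs:.3f} MB/s")
```

Under these assumptions the per-token traffic is well under 1 MB/s, which points at compute or VRAM bandwidth rather than the x8 link for sequential split inference; tensor-parallel backends that synchronize every layer are a different story.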
cheapest computer to install an rtx 3090 for inference ? | 2 | Hello, I need a second rig to run Magistral Q6 with an RTX3090 (I already have the 3090). I am actually running Magistral on an AMD 7950X, 128GB RAM, ProArt X870E , RTX 3090, and I get 30 tokens/s. Now I need a second rig for a second person with the same performance. I know the CPU should not impact a lot because the ... | 2025-06-19T17:42:51 | https://www.reddit.com/r/LocalLLaMA/comments/1lfh1s0/cheapest_computer_to_install_an_rtx_3090_for/ | vdiallonort | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfh1s0 | false | null | t3_1lfh1s0 | /r/LocalLLaMA/comments/1lfh1s0/cheapest_computer_to_install_an_rtx_3090_for/ | false | false | self | 2 | null |
Sam Altman says Meta offered OpenAI staff $100 million bonuses, as Mark Zuckerberg ramps up AI poaching efforts | 191 | "Meta Platforms tried to poach OpenAI employees by offering signing bonuses as high as $100 million, with even larger annual compensation packages, OpenAI chief executive Sam Altman said."
[https://www.cnbc.com/2025/06/18/sam-altman-says-meta-tried-to-poach-openai-staff-with-100-million-bonuses-mark-zuckerberg.html](... | 2025-06-19T17:30:37 | choose_a_guest | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfgqkd | false | null | t3_1lfgqkd | /r/LocalLLaMA/comments/1lfgqkd/sam_altman_says_meta_offered_openai_staff_100/ | false | false | default | 191 | {'enabled': True, 'images': [{'id': 'niqpo23p5x7f1', 'resolutions': [{'height': 79, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=108&crop=smart&auto=webp&s=7b72b5caa6732a946994182ff1bc5b7b83345b22', 'width': 108}, {'height': 159, 'url': 'https://preview.redd.it/niqpo23p5x7f1.jpeg?width=216&crop=smart&auto=w... | |
Run Deepseek locally on a 24g GPU: Quantizing on our Giga Computing 6980P Xeon | 48 | 2025-06-19T17:28:59 | https://www.youtube.com/watch?v=KQDpE2SLzbA | atape_1 | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 1lfgp3i | false | {'oembed': {'author_name': 'Level1Techs', 'author_url': 'https://www.youtube.com/@Level1Techs', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/KQDpE2SLzbA?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscop... | t3_1lfgp3i | /r/LocalLLaMA/comments/1lfgp3i/run_deepseek_locally_on_a_24g_gpu_quantizing_on/ | false | false | default | 48 | {'enabled': False, 'images': [{'id': 'hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/hH0pP3ONlv9RFU_tt26eUVTIN9Qz11vaCtIPHTz4lhc.jpeg?width=108&crop=smart&auto=webp&s=1660c57ee933ab1644847b55d63a801d7dee0ab7', 'width': 108}, {'height': 162, 'url': '... | |
AMD Lemonade Server Update: Ubuntu, llama.cpp, Vulkan, webapp, and more! | 91 | Hi r/localllama, it’s been a bit since my [post](https://www.reddit.com/r/LocalLLaMA/comments/1jujc9p/introducing_lemonade_server_npuaccelerated_local/) introducing [Lemonade Server](https://lemonade-server.ai), AMD’s open-source local LLM server that prioritizes NPU and GPU acceleration.
GitHub: [https://github.com/l... | 2025-06-19T17:18:57 | https://www.reddit.com/gallery/1lfgfu5 | jfowers_amd | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lfgfu5 | false | null | t3_1lfgfu5 | /r/LocalLLaMA/comments/1lfgfu5/amd_lemonade_server_update_ubuntu_llamacpp_vulkan/ | false | false | 91 | {'enabled': True, 'images': [{'id': 'snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/snRoVONGmevA0S70HKIe-_OVILdqukspGvuQ8vgG6Fg.png?width=108&crop=smart&auto=webp&s=4a2f1112d7055199e7fba9720febcfec1ac3aabf', 'width': 108}, {'height': 204, 'url': 'h... | |
5090 benchmarks - where are they? | 10 | As much as I love my hybrid 28GB setup, I would love a few more tokens.
Qwen3 32b Q4KL gives me around 16 tps initially @ 32k context. What are you 5090 owners getting?
Does anyone even have a 5090? 3090 all the way?
| 2025-06-19T16:27:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lff4ni/5090_benchmarks_where_are_they/ | Secure_Reflection409 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lff4ni | false | null | t3_1lff4ni | /r/LocalLLaMA/comments/1lff4ni/5090_benchmarks_where_are_they/ | false | false | self | 10 | null |
Computer-Use on Windows Sandbox | 49 | Windows Sandbox support - run computer-use agents on Windows business apps without VMs or cloud costs.
Your enterprise software runs on Windows, but testing agents required expensive cloud instances. Windows Sandbox changes this - it's Microsoft's built-in lightweight virtualization sitting on every Windows 10/11 mach... | 2025-06-19T16:14:44 | https://v.redd.it/2xrdz059sw7f1 | Impressive_Half_2819 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfetix | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/2xrdz059sw7f1/DASHPlaylist.mpd?a=1752941697%2COGZlMmE4NjlmZjQwNmNkMmM0ZTU0ZmE0MmZjZDNhNGIyNzNhNGRhZTVlODk3MmU3MzhiZTdmZDYzOGEzZTU4NQ%3D%3D&v=1&f=sd', 'duration': 69, 'fallback_url': 'https://v.redd.it/2xrdz059sw7f1/DASH_720.mp4?source=fallback', 'ha... | t3_1lfetix | /r/LocalLLaMA/comments/1lfetix/computeruse_on_windows_sandbox/ | false | false | 49 | {'enabled': False, 'images': [{'id': 'MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/MHY2YzU5dThzdzdmMUUIhfD3WmHuxYkgbFXnt7PvLDhATd-8_6cYVR-PGp7c.png?width=108&crop=smart&format=pjpg&auto=webp&s=eec08747952a4b84b9524ef0b8c461703eb98... | |
[Project] DeepSeek-Based 15M-Parameter Model for Children’s Stories (Open Source) | 21 | 2025-06-19T15:58:06 | https://www.reddit.com/r/LocalLLaMA/comments/1lfeein/project_deepseekbased_15mparameter_model_for/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfeein | false | null | t3_1lfeein | /r/LocalLLaMA/comments/1lfeein/project_deepseekbased_15mparameter_model_for/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/USEPksTbnhSpjNDP3AWTvRB_hIM8jFv6ba_v6qu8L9U.png?width=108&crop=smart&auto=webp&s=c17c2869b5051648a79720b9a6713d7d3b76d7b5', 'width': 108}, {'height': 108, 'url': 'h... | ||
From GPT-2 to DeepSeek: A 15M-Parameter Model for Children’s Stories | 1 | [removed] | 2025-06-19T15:55:49 | https://www.reddit.com/r/LocalLLaMA/comments/1lfecjh/from_gpt2_to_deepseek_a_15mparameter_model_for/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfecjh | false | null | t3_1lfecjh | /r/LocalLLaMA/comments/1lfecjh/from_gpt2_to_deepseek_a_15mparameter_model_for/ | false | false | 1 | null | |
1-Bit LLM vs 1.58-Bit LLM | 0 | A 1.58-bit LLM model uses ternary coding (-1, 0, +1) for the coefficients, whereas 1-bit models use binary coding (-1, +1) for the coefficients. In practice the ternary 1.58-bit coding is stored using 2 bits per weight.
The problem with 1-bit coefficients is that it is not possible to represent a zero, wh... | 2025-06-19T15:53:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lfeam0/1bit_llm_vs_158bit_llm/ | Vegetable_End_8935 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfeam0 | false | null | t3_1lfeam0 | /r/LocalLLaMA/comments/1lfeam0/1bit_llm_vs_158bit_llm/ | false | false | self | 0 | null |
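The "1.58 bits stored as 2 bits" point above can be made concrete with a toy sketch (illustrative only; real ternary formats such as BitNet's pack weights differently): each ternary weight carries log2(3) ≈ 1.58 bits of information but is stored as a 2-bit code, so four weights fit in one byte.

```python
import math

def pack_ternary(weights):
    """Pack {-1, 0, +1} weights into bytes, four 2-bit codes per byte."""
    codes = {-1: 0b00, 0: 0b01, 1: 0b10}   # fourth code (0b11) goes unused
    packed = bytearray()
    for i in range(0, len(weights), 4):
        byte = 0
        for j, w in enumerate(weights[i:i + 4]):
            byte |= codes[w] << (2 * j)
        packed.append(byte)
    return bytes(packed)

def unpack_ternary(packed, n):
    """Recover the first n ternary weights from packed bytes."""
    values = [-1, 0, 1]
    out = []
    for byte in packed:
        for j in range(4):
            if len(out) == n:
                break
            out.append(values[(byte >> (2 * j)) & 0b11])
    return out

w = [-1, 0, 1, 1, 0, -1]
assert unpack_ternary(pack_ternary(w), len(w)) == w
print(f"{math.log2(3):.2f} bits of information per ternary weight")  # 1.58
```

A pure binary (-1, +1) code needs only 1 bit but has no spare code point for zero, which is exactly the sparsity advantage the post describes for the ternary scheme.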
Skywork-SWE-32B | 80 | [https://huggingface.co/Skywork/Skywork-SWE-32B](https://huggingface.co/Skywork/Skywork-SWE-32B)
***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks. It demonstrates strong performance across several key metrics:
*... | 2025-06-19T15:45:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lfe33m/skyworkswe32b/ | jacek2023 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfe33m | false | null | t3_1lfe33m | /r/LocalLLaMA/comments/1lfe33m/skyworkswe32b/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Qv1IlT89kGNjF7n0FWG6IungnWSmE77ruzXaHKrED_8.png?width=108&crop=smart&auto=webp&s=a2ca8271d9a1351e61b08293b19378467e2a8b75', 'width': 108}, {'height': 116, 'url': 'h... |
How do you size hardware | 1 | (my background: 25 years in tech, software engineer with lots of hardware/sysadmin experience)
I'm working with a tech-for-good startup and have created a chatbot app for them, which has some small specific tools (data validation and posting to an API)
I've had a lot of success with gemma3:12b-it-qat (but haven't sta... | 2025-06-19T15:44:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lfe2do/how_do_you_size_hardware/ | GroundbreakingMain93 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfe2do | false | null | t3_1lfe2do | /r/LocalLLaMA/comments/1lfe2do/how_do_you_size_hardware/ | false | false | self | 1 | null |
Skywork/Skywork-SWE-32B · Hugging Face | 1 | ***Skywork-SWE-32B*** is a code agent model developed by [Skywork AI](https://skywork.ai/home), specifically designed for software engineering (SWE) tasks. | 2025-06-19T15:43:45 | https://huggingface.co/Skywork/Skywork-SWE-32B | jacek2023 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lfe1sm | false | null | t3_1lfe1sm | /r/LocalLLaMA/comments/1lfe1sm/skyworkskyworkswe32b_hugging_face/ | false | false | default | 1 | null |
low cost egpu HW setup (DIY build from random parts config or otherwise) options / questions / suggestions? | 1 | 1: Simplest question -- if one has a modern LINUX(!) system with USB3.x ports without possible thunderbolt / PCIE tunneling, is there a technically reasonable option to connect egpus for inference over a USB 3.x 5 / 10 / 20 Gbps port? I assume there are things like USB based PCIE root complex controller ICs which coul... | 2025-06-19T15:43:27 | https://www.reddit.com/r/LocalLLaMA/comments/1lfe1jt/low_cost_egpu_hw_setup_diy_build_from_random/ | Calcidiol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfe1jt | false | null | t3_1lfe1jt | /r/LocalLLaMA/comments/1lfe1jt/low_cost_egpu_hw_setup_diy_build_from_random/ | false | false | self | 1 | null |
Browser-based tool to record, transcribe, and summarise your audio notes/meetings — all locally, no uploads | 0 | Built a website to capture meetings, transcribe and summarise them.
Record multiple audio clips into a single session.
Transcribe directly in the browser using Whisper.
Summarise the full session using Ollama or LM Studio.
Customised system prompts to suit your summarisation needs.
Cloud based options for trans... | 2025-06-19T15:28:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lfdo8q/browserbased_tool_to_record_transcribe_and/ | schawla | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfdo8q | false | null | t3_1lfdo8q | /r/LocalLLaMA/comments/1lfdo8q/browserbased_tool_to_record_transcribe_and/ | false | false | self | 0 | null |
Best offline image processor model? | 2 | I want to be able to set up an image processor that can distinguish what car is what.. make and model | 2025-06-19T15:16:30 | https://www.reddit.com/r/LocalLLaMA/comments/1lfddq1/best_offline_image_processor_model/ | chiknugcontinuum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfddq1 | false | null | t3_1lfddq1 | /r/LocalLLaMA/comments/1lfddq1/best_offline_image_processor_model/ | false | false | self | 2 | null |
Is there a way that I can have a llm or some kind of vision model identify different types of animals on a low power device like a pi? | 7 | At my job there's an issue of one kind of animal eating all the food meant for another kind of animal. For instance, there will be a deer feeder but the goats will find it and live by the feeder. I want the feeder to identify the type of animal before activating. I can do this with a PC, but some of these feeders ar... | 2025-06-19T15:09:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lfd7m6/is_there_a_way_that_i_can_have_a_llm_or_some_kind/ | Red_Redditor_Reddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfd7m6 | false | null | t3_1lfd7m6 | /r/LocalLLaMA/comments/1lfd7m6/is_there_a_way_that_i_can_have_a_llm_or_some_kind/ | false | false | self | 7 | null |
Has anyone tried the new ICONN-1 (an Apache licensed model) | 20 | A post was made by the creators on the Huggingface subreddit. I haven’t had a chance to use it yet. Has anyone else?
It isn’t clear at a quick glance if this is a dense model or MoE. The description mentions MoE so I assume it is, but no discussion on the expert size.
Supposedly this is a new base model, but I wonder... | 2025-06-19T15:09:13 | https://huggingface.co/ICONNAI/ICONN-1 | silenceimpaired | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1lfd7e2 | false | null | t3_1lfd7e2 | /r/LocalLLaMA/comments/1lfd7e2/has_anyone_tried_the_new_iconn1_an_apache/ | false | false | default | 20 | {'enabled': False, 'images': [{'id': 'SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/SPYrTwyJE3TQKvjnrmxAQjGjLKoUyWEDHwmv3_PzeoA.png?width=108&crop=smart&auto=webp&s=5b2c6b95c12457e1084b5bb7a75f8669279c2f8e', 'width': 108}, {'height': 116, 'url': 'h... |
First External Deployment Live — Cold Starts Solved Without Keeping GPUs Always On | 4 | Thanks to this community for all the feedback in earlier threads. We just completed our first real-world pilot of our snapshot-based LLM runtime. The goal was to eliminate idle GPU burn without sacrificing cold start performance.
In this setup:
•Model loading happens in under 2 seconds
•Snapshot-based orchestration... | 2025-06-19T14:59:17 | pmv143 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfcycb | false | null | t3_1lfcycb | /r/LocalLLaMA/comments/1lfcycb/first_external_deployment_live_cold_starts_solved/ | false | false | default | 4 | {'enabled': True, 'images': [{'id': 'n8xdwq2tew7f1', 'resolutions': [{'height': 167, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=108&crop=smart&auto=webp&s=dd0fcf67698276cc7d87e0f17d48faab3784b962', 'width': 108}, {'height': 335, 'url': 'https://preview.redd.it/n8xdwq2tew7f1.jpeg?width=216&crop=smart&auto=... | |
OpenAI Post - Toward understanding and preventing misalignment generalization | 0 | They are saying training a single/narrow 'misaligned persona' can generalize to cause the model at large to be unethical.
I'm curious if this may be related to when you train such a persona (a previous meta paper suggested that the initial training up to 3ish bits per parameter is memorization before it goes more into... | 2025-06-19T14:27:17 | https://openai.com/index/emergent-misalignment/ | noage | openai.com | 1970-01-01T00:00:00 | 0 | {} | 1lfc64h | false | null | t3_1lfc64h | /r/LocalLLaMA/comments/1lfc64h/openai_post_toward_understanding_and_preventing/ | false | false | default | 0 | null |
Choosing the best cloud LLM provider | 3 | Between google collab and other cloud providers for open source LLM. Do you think it is the best option ? I do want your opinions regarding what are other cheapest but good option as well | 2025-06-19T14:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/1lfc49l/choosing_the_best_cloud_llm_provider/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfc49l | false | null | t3_1lfc49l | /r/LocalLLaMA/comments/1lfc49l/choosing_the_best_cloud_llm_provider/ | false | false | self | 3 | null |
Local AI setup 1x5090, 5x3090 | 32 | **What I’ve been building lately: a local multi-model AI stack that’s getting kind of wild (in a good way)**
Been heads-down working on a local AI stack that’s all about fast iteration and strong reasoning, fully running on consumer GPUs. It’s still evolving, but here’s what the current setup looks like:
# 🧑💻 Codi... | 2025-06-19T14:08:38 | https://www.reddit.com/r/LocalLLaMA/comments/1lfbqgw/local_ai_setup_1x5090_5x3090/ | Emergency_Fuel_2988 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lfbqgw | false | null | t3_1lfbqgw | /r/LocalLLaMA/comments/1lfbqgw/local_ai_setup_1x5090_5x3090/ | false | false | self | 32 | null |
Hallucination? | 0 | Can someone help me out? im using msty and no matter which local model i use its generating incorrect response. I've tried reinstalling too but it doesn't work | 2025-06-19T13:06:03 | Sussymannnn | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lfacep | false | null | t3_1lfacep | /r/LocalLLaMA/comments/1lfacep/hallucination/ | false | false | 0 | {'enabled': True, 'images': [{'id': 'q_YaUOnuuBrX5ZRPSRfHrtrjtGvzbh1mafhvkkEcuP4', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png?width=108&crop=smart&auto=webp&s=1b2816651367eedfab6f88151cde3cd10da3294f', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/2hn0v2r9uv7f1.png... | ||
Chatbox AI Delisted from iOS App Store. Any good alternatives? | 1 | Not sure why it got delisted..
https://chatboxai.app/en
What do you use to connect back to Llamacpp/Kobold/LM Studio?
Most of the apps require a ton of permissions. | 2025-06-19T12:49:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lf9zph/chatbox_ai_delisted_from_ios_app_store_any_good/ | simracerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf9zph | false | null | t3_1lf9zph | /r/LocalLLaMA/comments/1lf9zph/chatbox_ai_delisted_from_ios_app_store_any_good/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/JQJYWP9EtIyW64HOx_ngOVbE5TF6SXekcj5FkVZaVII.png?width=108&crop=smart&auto=webp&s=279a09b67459be926a08944e6c9ea50312a63a5f', 'width': 108}, {'height': 113, 'url': 'h... |
Explain AI and MCP to a 5 year old in the 90s | 116 | 2025-06-19T12:45:32 | https://www.reddit.com/gallery/1lf9wof | cov_id19 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lf9wof | false | null | t3_1lf9wof | /r/LocalLLaMA/comments/1lf9wof/explain_ai_and_mcp_to_a_5_year_old_in_the_90s/ | false | false | 116 | {'enabled': True, 'images': [{'id': '64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8', 'resolutions': [{'height': 144, 'url': 'https://external-preview.redd.it/64oqjh3Mi1lX6NMRJ57nKz9L5oT26BTAsIGTdNtrvn8.png?width=108&crop=smart&auto=webp&s=f36552420d32e54898af3ff5799e89deef32d887', 'width': 108}, {'height': 289, 'url': 'h... | ||
I just shipped an AI Voice Agent that replaced the entire cold calling team | 0 | https://preview.redd.it/ap25f767nv7f1.png?width=1375&format=png&auto=webp&s=30462d11f4685b74033a9c2bc34abe0c122ca001

Most automated-call setups are glorified IVRs:

* No real outbound calls
* Freeze at objections
* Can't lock meetings or send follow-ups by email
* Definitely can't close deals or trigger payments

So I built a smarter one with **a NO CODE voice agent** with 6 plugins. Rolled it out last week for a mid-size healthcare clinic, and here's what it handles for them now:

* **24/7 inbound:** every call answered, zero hold music.
* **Smart triage:** checks doctor availability, books the slot, sends a calendar invite, then emails + messages the patient the details.
* **Post-visit feedback:** calls back after the appointment, grabs NPS in under a minute.

Under the hood it's the same multi-agent stack I use for outbound SDR work: **Superu AI** grabs form data, scrapes public info, writes context-aware scripts on the fly, branches when the caller changes topic, and logs everything to the CRM.

My role? Building an agent that talks is just a few minutes' task. Shaping the agent to handle queries, random questions, and detailed info on the topic is all done through prompting, which took me 3 days of trial and error to get it talking like this. Of course it can be done better; just spend more time refining your prompt.

**Week-one stats:** zero missed calls, 72% booking rate, receptionist finally free to help walk-ins.

I can see a lot of business opportunities here; folks like us dealing with even local businesses can make good bucks. | 2025-06-19T12:24:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lf9hq8/i_just_shipped_an_ai_voice_agent_that_replaced/ | Agile_Baseball8351 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf9hq8 | false | null | t3_1lf9hq8 | /r/LocalLLaMA/comments/1lf9hq8/i_just_shipped_an_ai_voice_agent_that_replaced/ | false | false | 0 | null |
Help me pick a PDF to Markdown/JSON converter pleaseeee | 0 | I’m trying to pick an OCR or document parsing tool, but the market’s noisy and hard to compare (everyone's benchmark says they're the best). Also LLMs are expensive. If you’ve worked with any, would love your input.
What’s your primary use case or workflow involving document parsing or understanding?
Which tools or s... | 2025-06-19T12:09:24 | https://www.reddit.com/r/LocalLLaMA/comments/1lf96ez/help_me_pick_a_pdf_to_markdownjson_converter/ | Ordinary_Quantity_68 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf96ez | false | null | t3_1lf96ez | /r/LocalLLaMA/comments/1lf96ez/help_me_pick_a_pdf_to_markdownjson_converter/ | false | false | self | 0 | null |
Kyutai new Speech-To-Text models (STT 1B and STT 2.6B) | 1 | [removed] | 2025-06-19T11:50:37 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lf8tjs | false | null | t3_1lf8tjs | /r/LocalLLaMA/comments/1lf8tjs/kyutai_new_speechtotext_models_stt_1b_and_stt_26b/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '7n7oe4l0hv7f1', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=108&crop=smart&auto=webp&s=62369d70b00ce933afbc36bf159dbdb859ab3a36', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/7n7oe4l0hv7f1.jpeg?width=216&crop=smart&auto=... | |
Kyutai Speech-To-Text (STT 1B and STT 2.6B) | 1 | [removed] | 2025-06-19T11:48:16 | Nunki08 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lf8s02 | false | null | t3_1lf8s02 | /r/LocalLLaMA/comments/1lf8s02/kyutai_speechtotext_stt_1b_and_stt_26b/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'AY9vrwbgWgnFsbCkEPMPj5n4vk9It8_LAjPFdOZF2Ms', 'resolutions': [{'height': 121, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.jpeg?width=108&crop=smart&auto=webp&s=322c6be20be877bd949756520db41d4e188fad28', 'width': 108}, {'height': 242, 'url': 'https://preview.redd.it/2ytns4pmgv7f1.j... | ||
Kyutai Speech-To-Text (STT 1B and STT 2.6B) | 1 | Kyutai STT: A speech-to-text optimized for real-time usage: [https://kyutai.org/next/stt](https://kyutai.org/next/stt)
kyutai/stt-1b-en\_fr: [https://huggingface.co/kyutai/stt-1b-en\_fr](https://huggingface.co/kyutai/stt-1b-en_fr)
kyutai/stt-2.6b-en: [https://huggingface.co/kyutai/stt-2.6b-en](https://huggingface.c... | 2025-06-19T11:39:05 | https://v.redd.it/qvkso7b6ev7f1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lf8m3j | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/qvkso7b6ev7f1/DASHPlaylist.mpd?a=1752925160%2CYjc1NGQ4MzllODkyY2RhMzlhZDU2MGYwNWMzYTI5ZjcyNDZmMjk1MDlhMWQ2ODY0NDAxMGMyYzM3NWI4N2QyNQ%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/qvkso7b6ev7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lf8m3j | /r/LocalLLaMA/comments/1lf8m3j/kyutai_speechtotext_stt_1b_and_stt_26b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTU3MHNjYjZldjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=108&crop=smart&format=pjpg&auto=webp&s=e26c6a65a553eca662a60e66d3808c5370a2... | |
Kyutai Speech-To-Text (STT 1B and STT 2.6B) | 1 | Kyutai STT: A speech-to-text optimized for real-time usage: [https://kyutai.org/next/stt](https://kyutai.org/next/stt)
kyutai/stt-1b-en\_fr: [https://huggingface.co/kyutai/stt-1b-en\_fr](https://huggingface.co/kyutai/stt-1b-en_fr)
kyutai/stt-2.6b-en: [https://huggingface.co/kyutai/stt-2.6b-en](https://huggingface.c... | 2025-06-19T11:31:39 | https://v.redd.it/mxcthq2wbv7f1 | Nunki08 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lf8h8f | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/mxcthq2wbv7f1/DASHPlaylist.mpd?a=1752924716%2CNDYyNmU1ZGQyNzVjZmEyMDM2MGY5NGZkYjk3NzQ3OTJhZGQ1ZmU0NmExOWE4OTExZDUzMjNlY2M5NGE2ZTE3Nw%3D%3D&v=1&f=sd', 'duration': 53, 'fallback_url': 'https://v.redd.it/mxcthq2wbv7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lf8h8f | /r/LocalLLaMA/comments/1lf8h8f/kyutai_speechtotext_stt_1b_and_stt_26b/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/NTQ1ZGZ1MndidjdmMVZ-Uud54CiEh-fogT8UBkwtbcV04X-Xv1aF2JRHETwz.png?width=108&crop=smart&format=pjpg&auto=webp&s=b720638c1baec415bb29802edca13f28ed9c... | |
I have a HP workstation running a xeon e5 2699v4. I would like to add 4 P40s and want to know if this is possible. | 0 | It is a Z440. Here is a picture of the motherboard. What adapters and such would I need to get 4 P40s to work? I could run two power supplies if that would help.
https://preview.redd.it/bycisoz59v7f1.jpg?width=4000&format=pjpg&auto=webp&s=46a9b06fa0090ed3720d24b588e3ebce8fcd3aaa
| 2025-06-19T11:06:52 | https://www.reddit.com/r/LocalLLaMA/comments/1lf81mp/i_have_a_hp_workstation_running_a_xeon_e5_2699v4/ | tbandtg | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf81mp | false | null | t3_1lf81mp | /r/LocalLLaMA/comments/1lf81mp/i_have_a_hp_workstation_running_a_xeon_e5_2699v4/ | false | false | 0 | null | |
🧠 Lost in the Mix: How Well Do LLMs Understand Code-Switched Text? | 3 | A new preprint takes a deep dive into the blind spot of multilingual LLMs: **code-switching**—where two or more languages are mixed within the same sentence or discourse.
📄 ["Lost in the Mix: Evaluating LLM Understanding of Code-Switched Text"](https://arxiv.org/abs/2506.14012v1)
Key insights:
* ⚠️ Embedding *non-E... | 2025-06-19T11:00:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lf7xdm/lost_in_the_mix_how_well_do_llms_understand/ | Ok-Cut-3551 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf7xdm | false | null | t3_1lf7xdm | /r/LocalLLaMA/comments/1lf7xdm/lost_in_the_mix_how_well_do_llms_understand/ | false | false | self | 3 | null |
"Cheap" 24GB GPU options for fine-tuning? | 3 | I'm currently weighing up options for a GPU to fine-tune larger LLMs (Deepseek 70b), as well as give me reasonable performance in inference. I'm willing to compromise speed for card capacity.
Was initially considering a 3090 but after some digging there seems to be a lot more NVIDIA cards that have potential (p40, ec... | 2025-06-19T10:55:55 | https://www.reddit.com/r/LocalLLaMA/comments/1lf7ux8/cheap_24gb_gpu_options_for_finetuning/ | deus119 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf7ux8 | false | null | t3_1lf7ux8 | /r/LocalLLaMA/comments/1lf7ux8/cheap_24gb_gpu_options_for_finetuning/ | false | false | self | 3 | null |
Need help with finetuning | 1 | I need to finetune an open source model to summarise and analyze very large context data (around 50000 tokens, cannot decompose it into chunks). I need to do both SFT and reinforcement learning.
Does anyone have experience with ORPO, DPO on very large context? ORPO though claims to use less memory because of no ref... | 2025-06-19T10:47:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lf7ppq/need_help_with_finetuning/ | Elemental_Ray | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf7ppq | false | null | t3_1lf7ppq | /r/LocalLLaMA/comments/1lf7ppq/need_help_with_finetuning/ | false | false | self | 1 | null |
Mixture Of Adversaries. | 6 | # Mixture of Adversaries (MoA)
## Intro
I wanted to think of a system that would address the major issues preventing "mission critical" use of LLMs:
**1. Hallucinations**
* No internal "Devil's advocate" or consensus mechanism to call itself out with
**2. Outputs tend to represent a "regression to the mean"**
* ov... | 2025-06-19T09:39:42 | https://www.reddit.com/r/LocalLLaMA/comments/1lf6nvw/mixture_of_adversaries/ | teleprax | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf6nvw | false | null | t3_1lf6nvw | /r/LocalLLaMA/comments/1lf6nvw/mixture_of_adversaries/ | false | false | self | 6 | null |
Which Open-source VectorDB for storing ColPali/ColQwen embeddings? | 4 | Hi everyone, this is my first post in this subreddit, and I'm wondering if this is the best sub to ask this.
I'm currently doing a research project that involves using ColPali embedding/retrieval modules for RAG. However, from my research, I found out that most vector databases are highly incompatible with the embeddi... | 2025-06-19T09:36:20 | https://www.reddit.com/r/LocalLLaMA/comments/1lf6m5i/which_opensource_vectordb_for_storing/ | dafroggoboi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf6m5i | false | null | t3_1lf6m5i | /r/LocalLLaMA/comments/1lf6m5i/which_opensource_vectordb_for_storing/ | false | false | self | 4 | null |
Less than 2GB models Hallucinate on the first prompt itself in LM studio | 0 | I have tried with 5 models which are less than 2 GB and they keep repeating 4-5 lines again and again.
I have a RTX 2060 6GB VRAM, 16GB RAM, 8 core 16 threads ryzen.
Models greater than 2GB in size run fine.
I have tried changing temperature and model import settings but nothing has worked out so far. | 2025-06-19T09:20:35 | https://v.redd.it/hkl35nfjpu7f1 | HareMayor | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1lf6e1t | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/hkl35nfjpu7f1/DASHPlaylist.mpd?a=1752916849%2CYzg3YjkwMTBiNmEyZWViMDNmZTk1YWFhNDgxN2JhMWEyYWRlODA5ODYwYjg3YmYxMjllODdlNDFjZjU3NDRjYw%3D%3D&v=1&f=sd', 'duration': 15, 'fallback_url': 'https://v.redd.it/hkl35nfjpu7f1/DASH_1080.mp4?source=fallback', 'h... | t3_1lf6e1t | /r/LocalLLaMA/comments/1lf6e1t/less_than_2gb_models_hallucinate_on_the_first/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/bnhlMWtyZmpwdTdmMaMSsUFIRXdehewBebiXVkkMz_xXnA2RrWxjB5EV6p-n.png?width=108&crop=smart&format=pjpg&auto=webp&s=ba571f0e412edc79661b4e59c3e351a6cb21b... | |
Few-Shot Examples: Overfitting / Leakage | 0 | #TL:DR
How do I get a model to avoid leaking/ overfitting its system prompt examples into the outputs?
# Context
I'm working with **qwen3 32b Q4_K_L**, in both thinking and non-thinking modes with 7900XTX on vulkan, for a structured output pipeline with the recommended sampling parameters, besides min_p = 0.01
# Iss... | 2025-06-19T09:11:58 | https://www.reddit.com/r/LocalLLaMA/comments/1lf69bk/fewshot_examples_overfitting_leakage/ | ROS_SDN | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf69bk | false | null | t3_1lf69bk | /r/LocalLLaMA/comments/1lf69bk/fewshot_examples_overfitting_leakage/ | false | false | self | 0 | null |
Anyone have experience with Refact.ai tool? | 0 | I recently found [refact.ai](http://refact.ai) on SWE bench, on the lite version being the highest scorer. It is also an open source tool but i can't a lot information about it or the group behind it.
Does anyone have experience with it? Care to share it? | 2025-06-19T09:05:29 | https://www.reddit.com/r/LocalLLaMA/comments/1lf65ts/anyone_have_experience_with_refactai_tool/ | EternalOptimister | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf65ts | false | null | t3_1lf65ts | /r/LocalLLaMA/comments/1lf65ts/anyone_have_experience_with_refactai_tool/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/8iyUZQC1CU-UelDJDhHlD9c6m04ywSmrgXg-sZuUFzc.png?width=108&crop=smart&auto=webp&s=47f9c989917007a59d33a69890214a2974cb771e', 'width': 108}, {'height': 113, 'url': 'h... |
Personalized AI Tutor built on top of Gemini | 1 | [removed] | 2025-06-19T08:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/1lf6059/personalized_ai_tutor_built_on_top_of_gemini/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf6059 | false | null | t3_1lf6059 | /r/LocalLLaMA/comments/1lf6059/personalized_ai_tutor_built_on_top_of_gemini/ | false | false | self | 1 | null |
Personalized AI Tutor built on top of Gemini | 1 | [removed] | 2025-06-19T08:54:11 | https://www.reddit.com/r/LocalLLaMA/comments/1lf5zos/personalized_ai_tutor_built_on_top_of_gemini/ | OtherRaisin3426 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf5zos | false | null | t3_1lf5zos | /r/LocalLLaMA/comments/1lf5zos/personalized_ai_tutor_built_on_top_of_gemini/ | false | false | self | 1 | null |
Qwen 2.5 32B or Similar Models | 2 | Hi everyone, I'm quite new to the concepts around Large Language Models (LLMs). From what I've seen so far, most of the API access for these models seems to be paid or subscription based. I was wondering if anyone here knows about ways to access or use these models for free—either through open-source alternatives or by... | 2025-06-19T08:52:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lf5z06/qwen_25_32b_or_similar_models/ | Valuable_Benefit9938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf5z06 | false | null | t3_1lf5z06 | /r/LocalLLaMA/comments/1lf5z06/qwen_25_32b_or_similar_models/ | false | false | self | 2 | null |
Jan got an upgrade: New design, switched from Electron to Tauri, custom assistants, and 100+ fixes - it's faster & more stable now | 491 | Jan v0.6.0 is out.
* Fully redesigned UI
* Switched from Electron to Tauri for lighter and more efficient performance
* You can create your own assistants with instructions & custom model settings
* New themes & customization settings (e.g. font size, code block highlighting style)
Including improvements to thread ha... | 2025-06-19T08:52:09 | https://www.reddit.com/gallery/1lf5yog | eck72 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1lf5yog | false | null | t3_1lf5yog | /r/LocalLLaMA/comments/1lf5yog/jan_got_an_upgrade_new_design_switched_from/ | false | false | 491 | {'enabled': True, 'images': [{'id': '9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc', 'resolutions': [{'height': 86, 'url': 'https://external-preview.redd.it/9cdGcDSZyRjc_UX2wAVlzVVLsf-enKZlEOwOWD7xiXc.png?width=108&crop=smart&auto=webp&s=221b4db86ddab09ec6f129c3e6c9b3234bfc02e8', 'width': 108}, {'height': 173, 'url': 'ht... | |
Effect of Linux on M-series Mac inference perfomance | 0 | Hi everyone! Recently I have been considering buying a used M-series Mac for everyday use and local LLM inferece. I am looking for decent T/s with 8-32B models, and good CPU performace for my work (which M-series Macs are known for). I am generally a fan of the unified memory idea and the philosophy with which these co... | 2025-06-19T08:14:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lf5eu2/effect_of_linux_on_mseries_mac_inference/ | libregrape | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf5eu2 | false | null | t3_1lf5eu2 | /r/LocalLLaMA/comments/1lf5eu2/effect_of_linux_on_mseries_mac_inference/ | false | false | self | 0 | null |
Giving invite link of manus ai Agent. (With 1.9k token ) | 0 | I think many already know manus ai agent. It's awesome.
You can get 1500+300 free credit and access of this ai agent. Enjoy
>Use this Invite
[Link](https://manus.im/invitation/QE3PHKPEV6PGVRI) | 2025-06-19T07:26:50 | https://www.reddit.com/r/LocalLLaMA/comments/1lf4otq/giving_invite_link_of_manus_ai_agent_with_19k/ | shadow--404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf4otq | false | null | t3_1lf4otq | /r/LocalLLaMA/comments/1lf4otq/giving_invite_link_of_manus_ai_agent_with_19k/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU', 'resolutions': [{'height': 57, 'url': 'https://external-preview.redd.it/-KicQtg3F05jY9RJnOIW7mTXG5gYHARkFYj99D5ifPU.png?width=108&crop=smart&auto=webp&s=260714b5951bb46fdf2bf0a74425b2ca66c9306b', 'width': 108}, {'height': 114, 'url': 'h... |
Freeplane xml mind maps locally: only Qwen3 and Phi4 Reasoning Plus can create them in one shot? | 2 | I started to experiment with Freeplane xml mind map creation using only LLMs. Grok can create ingenious xml mind maps, which can be opened in Freeplane. But there are local solutions too! I used Qwen3 14b q8 and Phi4 Reasoning Plus q8 to create xml mind maps. In my opinion Phi4 Reasoning Plus is the king of local mind ... | 2025-06-19T07:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/1lf4npv/freeplane_xml_mind_maps_locally_only_qwen3_and/ | custodiam99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf4npv | false | null | t3_1lf4npv | /r/LocalLLaMA/comments/1lf4npv/freeplane_xml_mind_maps_locally_only_qwen3_and/ | false | false | self | 2 | null |
Does ollama pass username or other info to models? | 1 | Searched around but can't find a clear answer about this, was wondering if anybody here knew before I start poking around the source.
This evening I installed a fresh copy of Debian on my machine to mess around with my new 4060 Ti, downloaded ollama and gemma3 as user eliasnd, and for my first message asked it to wri... | 2025-06-19T06:52:12 | https://www.reddit.com/r/LocalLLaMA/comments/1lf45eq/does_ollama_pass_username_or_other_info_to_models/ | eliasnd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf45eq | false | null | t3_1lf45eq | /r/LocalLLaMA/comments/1lf45eq/does_ollama_pass_username_or_other_info_to_models/ | false | false | self | 1 | null |
Looking to generate videos of cartoon characters - need help with suggestions. | 2 | I’m interested in generating video of popular cartoon characters like SpongeBob and Homer. I’m curious about the approach and tools I should use to achieve this.
Currently, all models can generate videos up to 5 seconds long, which is fine for me. However, I want the anatomy and art style of the characters to remain a... | 2025-06-19T06:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1lf3nak/looking_to_generate_videos_of_cartoon_characters/ | 6UwO9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf3nak | false | null | t3_1lf3nak | /r/LocalLLaMA/comments/1lf3nak/looking_to_generate_videos_of_cartoon_characters/ | false | false | self | 2 | null |
Embedding Language Model (ELM) | 13 | I can be a bit nutty, but this HAS to be the future.
The ability to sample and score over the continuous latent representation, made relatively extremely transparent by a densely populated semantic "map" which can be traversed.
Anyone want to team up and train one 😎 | 2025-06-19T05:48:24 | https://arxiv.org/html/2310.04475v2 | Repulsive-Memory-298 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 1lf35fh | false | null | t3_1lf35fh | /r/LocalLLaMA/comments/1lf35fh/embedding_language_model_elm/ | false | false | default | 13 | null |
Multiple claude code pro accounts on One Machine? my path into madness (and a plea for sanity, lol, guyzz this is bad) | 0 | Okay, so hear me out. My workflow is... intense. And one Claude Code Pro account just isn't cutting it. I've got a couple of pro accounts for... reasons. Don't ask. (whispering, ... saving cost..., keep that as a secret for me, will ya)
Back to topic, how in the world do you switch between them on the same machine wit... | 2025-06-19T05:47:53 | https://www.reddit.com/r/LocalLLaMA/comments/1lf354a/multiple_claude_code_pro_accounts_on_one_machine/ | ExplanationEqual2539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf354a | false | null | t3_1lf354a | /r/LocalLLaMA/comments/1lf354a/multiple_claude_code_pro_accounts_on_one_machine/ | false | false | self | 0 | null |
Voice Mode - Dirty Method | 1 | [removed] | 2025-06-19T05:17:03 | https://www.reddit.com/r/LocalLLaMA/comments/1lf2n6u/voice_mode_dirty_method/ | MixedPixels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf2n6u | false | null | t3_1lf2n6u | /r/LocalLLaMA/comments/1lf2n6u/voice_mode_dirty_method/ | false | false | self | 1 | null |
Is there any LLM tool for UX and accessibility? | 1 | Is there any LLM tool for UX and accessibility? I am looking for some kind of scanner that detects issues in my apps. | 2025-06-19T04:11:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lf1j2v/is_there_any_llm_tool_for_ux_and_accessibility/ | darkcatpirate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf1j2v | false | null | t3_1lf1j2v | /r/LocalLLaMA/comments/1lf1j2v/is_there_any_llm_tool_for_ux_and_accessibility/ | false | false | self | 1 | null |
Which AWS Sagemaker Quota to request for training llama 3.2-3B-Instruct with PPO and Reinforcement learning? | 3 | This is my first time using AWS. I have been added to my PI's lab organization, which has some credits. Now I am trying to do an experiment where I will be basically using a modified reward method for training llama3.2-3B with PPO. The authors of the original work used 4 A100 GPUs for their training with PPO (they used... | 2025-06-19T03:26:54 | https://www.reddit.com/r/LocalLLaMA/comments/1lf0pk9/which_aws_sagemaker_quota_to_request_for_training/ | Furiousguy79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf0pk9 | false | null | t3_1lf0pk9 | /r/LocalLLaMA/comments/1lf0pk9/which_aws_sagemaker_quota_to_request_for_training/ | false | false | self | 3 | null |
IdeaWeaver: One CLI to Train, Track, and Deploy Your Models with Custom Data | 0 | https://i.redd.it/6qqrrq4qys7f1.gif

Are you looking for a single tool that can handle the entire lifecycle of training a model on your data, track experiments, and register models effortlessly?

Meet IdeaWeaver.

With just a single command, you can:

* Train a model using your custom dataset
* Automatically track experiments in MLflow, Comet, or DagsHub
* Push trained models to registries like Hugging Face Hub, MLflow, Comet, or DagsHub

And we’re not stopping there, AWS Bedrock integration is coming soon.

No complex setup. No switching between tools. Just clean CLI-based automation.

👉 Learn more here: [https://ideaweaver-ai-code.github.io/ideaweaver-docs/training/train-output/](https://ideaweaver-ai-code.github.io/ideaweaver-docs/training/train-output/)

👉 GitHub repo: [https://github.com/ideaweaver-ai-code/ideaweaver](https://github.com/ideaweaver-ai-code/ideaweaver) | 2025-06-19T03:24:41 | https://www.reddit.com/r/LocalLLaMA/comments/1lf0o4u/ideaweaver_one_cli_to_train_track_and_deploy_your/ | Prashant-Lakhera | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf0o4u | false | null | t3_1lf0o4u | /r/LocalLLaMA/comments/1lf0o4u/ideaweaver_one_cli_to_train_track_and_deploy_your/ | false | false | 0 | null |
Any LLM that can detect musical tonality from an audio? | 5 | I was wondering if there is such a thing locally. | 2025-06-19T02:52:23 | https://www.reddit.com/r/LocalLLaMA/comments/1lf01uz/any_llm_that_can_detect_musical_tonality_from_an/ | 9acca9 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lf01uz | false | null | t3_1lf01uz | /r/LocalLLaMA/comments/1lf01uz/any_llm_that_can_detect_musical_tonality_from_an/ | false | false | self | 5 | null |
[Open] LMeterX - Professional Load Testing for Any OpenAI-Compatible LLM API | 9 | **Solving Real Pain Points**
🤔 Don't know your LLM's concurrency limits?
🤔 Need to compare model performance but lack proper tools?
🤔 Want professional metrics (TTFT, TPS, RPS) not just basic HTTP stats?
**Key Features**
✅ Universal compatibility - Applicable to any openai format API such as GPT, Claude, Ll... | 2025-06-19T02:45:45 | https://www.reddit.com/r/LocalLLaMA/comments/1lezxa9/open_lmeterx_professional_load_testing_for_any/ | SignalBelt7205 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1lezxa9 | false | null | t3_1lezxa9 | /r/LocalLLaMA/comments/1lezxa9/open_lmeterx_professional_load_testing_for_any/ | false | false | 9 | null |