| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Why can't I reproduce benchmark scores from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 1 | [removed] | 2025-05-27T16:15:45 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1kwre9q | false | null | t3_1kwre9q | /r/LocalLLaMA/comments/1kwre9q/why_cant_i_reproduce_benchmark_scores_from_papers/ | false | false | default | 1 | null | ||
Voice Assisted TODO Agent with Ollama | 1 | [removed] | 2025-05-27T16:15:29 | Wooden_Living_4553 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwrdzl | false | null | t3_1kwrdzl | /r/LocalLLaMA/comments/1kwrdzl/voice_assisted_todo_agent_with_ollama/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'LisfHUiqtK9k1MtziN_r-RZlPG_tB6GEhiKt4WeyidE', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png?width=108&crop=smart&auto=webp&s=70020610d2f3b21ce82f42454d59ffb465fd8777', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/paub2wq0nc3f1.png... | ||
Why can't I reproduce benchmark scores from papers like Phi, Llama, or Qwen? Am I doing something wrong or is this normal? | 1 | [removed] | 2025-05-27T16:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kwrcut/why_cant_i_reproduce_benchmark_scores_from_papers/ | Loose-Touch6108 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwrcut | false | null | t3_1kwrcut | /r/LocalLLaMA/comments/1kwrcut/why_cant_i_reproduce_benchmark_scores_from_papers/ | false | false | self | 1 | null |
Models with very recent training data? | 4 | I'm looking for a local model that has very recent training data, like April or May of this year.
I want to use it with Ollama and connect it to Figma's new MCP server so that I can instruct the model to create directly in Figma.
Seeing as Figma MCP support just released in the last few months, I figure I might have s... | 2025-05-27T16:08:57 | https://www.reddit.com/r/LocalLLaMA/comments/1kwr7ya/models_with_very_recent_training_data/ | new_pr0spect | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwr7ya | false | null | t3_1kwr7ya | /r/LocalLLaMA/comments/1kwr7ya/models_with_very_recent_training_data/ | false | false | self | 4 | null |
[Research] AutoThink: Adaptive reasoning technique that improves local LLM performance by 43% on GPQA-Diamond | 162 | Hey r/LocalLLaMA!
I wanted to share a technique we've been working on called **AutoThink** that significantly improves reasoning performance on local models through adaptive resource allocation and steering vectors.
# What is AutoThink?
Instead of giving every query the same amount of "thinking time," AutoThink:
1.... | 2025-05-27T15:53:20 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqt64/research_autothink_adaptive_reasoning_technique/ | asankhs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqt64 | false | null | t3_1kwqt64 | /r/LocalLLaMA/comments/1kwqt64/research_autothink_adaptive_reasoning_technique/ | false | false | self | 162 | {'enabled': False, 'images': [{'id': 'FDXdZKuGBEAtS9eO0aBLVTgnysK1vKKknVcyyajidhI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVZ79K3RkE2tKrpxJPOiyb0pGu5dId_pvgcx07xOs6g.jpg?width=108&crop=smart&auto=webp&s=432009460474e3d3b1cb12c99e65235fd8abcd13', 'width': 108}, {'height': 108, 'url': 'h... |
Best local/open-source coding models for 24GB VRAM? | 9 | Hey, so I recently got a 3090 for pretty cheap, and thus I'm not really memory-constrained anymore.
I wanted to ask for the best currently available models i could use for code on my machine.
That'd be for all sorts of projects but mostly Python, C, C++, Java projects. Not much web dev or niche languages. I'm looking ... | 2025-05-27T15:49:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqq1p/best_localopensource_coding_models_for_24gb_vram/ | HRudy94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqq1p | false | null | t3_1kwqq1p | /r/LocalLLaMA/comments/1kwqq1p/best_localopensource_coding_models_for_24gb_vram/ | false | false | self | 9 | null |
Local Educational LLM | 1 | [removed] | 2025-05-27T15:36:59 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqe4k/local_educational_llm/ | _camera_up | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqe4k | false | null | t3_1kwqe4k | /r/LocalLLaMA/comments/1kwqe4k/local_educational_llm/ | false | false | self | 1 | null |
Local LLM server setup | 1 | [removed] | 2025-05-27T15:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwqbra/local_llm_server_setup/ | _infY_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwqbra | false | null | t3_1kwqbra | /r/LocalLLaMA/comments/1kwqbra/local_llm_server_setup/ | false | false | self | 1 | null |
Local LLM server setup | 1 | [removed] | 2025-05-27T15:22:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kwq0sd/local_llm_server_setup/ | _infY_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwq0sd | false | null | t3_1kwq0sd | /r/LocalLLaMA/comments/1kwq0sd/local_llm_server_setup/ | false | false | self | 1 | null |
error when trying to import a dataset in Google Collab | 1 | [removed] | 2025-05-27T14:35:40 | https://www.reddit.com/r/LocalLLaMA/comments/1kwounh/error_when_trying_to_import_a_dataset_in_google/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwounh | false | null | t3_1kwounh | /r/LocalLLaMA/comments/1kwounh/error_when_trying_to_import_a_dataset_in_google/ | false | false | self | 1 | null |
Tool like llama-swap, but for very different runtimes too? | 1 | [removed] | 2025-05-27T14:29:43 | https://www.reddit.com/r/LocalLLaMA/comments/1kwopd7/tool_like_llamaswap_but_for_very_different/ | yelling-at-clouds-40 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwopd7 | false | null | t3_1kwopd7 | /r/LocalLLaMA/comments/1kwopd7/tool_like_llamaswap_but_for_very_different/ | false | false | self | 1 | null |
I created a ChatGPT-like UI for Local LLMs | 20 | Hey r/LocalLLaMA (and other AI enthusiasts!),
Wanted to share a project I've been pouring my evenings and weekends into: **AnyLM**.
I'm a big fan of local LLMs (Ollama, LMStudio, etc.) but always found myself wanting a cleaner, more integrated UI, something like ChatGPT, but for all my models, both local and API-base... | 2025-05-27T14:22:41 | https://www.reddit.com/gallery/1kwoj76 | QuantumPancake422 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwoj76 | false | null | t3_1kwoj76 | /r/LocalLLaMA/comments/1kwoj76/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 20 | null | |
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T14:14:44 | https://www.reddit.com/gallery/1kwocja | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwocja | false | null | t3_1kwocja | /r/LocalLLaMA/comments/1kwocja/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null | |
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T14:12:08 | https://www.reddit.com/gallery/1kwoacr | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwoacr | false | null | t3_1kwoacr | /r/LocalLLaMA/comments/1kwoacr/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null | |
Invented a new AI reasoning framework called HDA2A and wrote a basic paper - Potential to be something massive - check it out | 0 | [removed] | 2025-05-27T14:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/1kwo5d3/invented_a_new_ai_reasoning_framework_called/ | Zizosk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwo5d3 | false | null | t3_1kwo5d3 | /r/LocalLLaMA/comments/1kwo5d3/invented_a_new_ai_reasoning_framework_called/ | false | false | self | 0 | null |
Why is my LLaMA running on CPU? | 0 | Sorry, I am obviously new to this.
I have python 3.10.6 installed, I created a venv and installed the requirements from the file and successfully ran the web ui locally, but when I ran my first prompt I noticed it's executing on the CPU.
I also couldn't find any documentation, am I that bad at this? ;) If you have... | 2025-05-27T14:04:35 | https://www.reddit.com/r/LocalLLaMA/comments/1kwo41n/why_is_my_llama_running_on_cpu/ | ThinKingofWaves | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwo41n | false | null | t3_1kwo41n | /r/LocalLLaMA/comments/1kwo41n/why_is_my_llama_running_on_cpu/ | false | false | self | 0 | null |
Play with Meta's Byte Latent Transformer "tokenizer-free" patcher in a HF Space | 1 | [removed] | 2025-05-27T13:59:13 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnz5l/play_with_metas_byte_latent_transformer/ | lucalp__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnz5l | false | null | t3_1kwnz5l | /r/LocalLLaMA/comments/1kwnz5l/play_with_metas_byte_latent_transformer/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bWXVvwa8flCrmkYJvLXfE5G12bSSSTbkElYwWaDiCi0', 'resolutions': [{'height': 104, 'url': 'https://external-preview.redd.it/xysnssK0wWdIRckvWVwaBSbIhMo96eApOHbJ846j7qQ.jpg?width=108&crop=smart&auto=webp&s=2cd1045517eda93c2aaafc19130bea85c7466318', 'width': 108}], 'source': {'height': 12... |
Switched from a PC to Mac for LLM dev - One week Later | 79 |
[Broke down and bought a Mac Mini - my processes run 5x faster : r/LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/1ks5sh4/broke_down_and_bought_a_mac_mini_my_processes_run/)
Exactly a week ago I tromped to the Apple Store and bought a Mac Mini M4 Pro with 24gb memory - the model they usually stock in store... | 2025-05-27T13:54:23 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnv4o/switched_from_a_pc_to_mac_for_llm_dev_one_week/ | ETBiggs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnv4o | false | null | t3_1kwnv4o | /r/LocalLLaMA/comments/1kwnv4o/switched_from_a_pc_to_mac_for_llm_dev_one_week/ | false | false | self | 79 | null |
Any low resources but high-quality sound Cloning TTS? | 1 | [removed] | 2025-05-27T13:43:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kwnm07/any_low_resources_but_highquality_sound_cloning/ | Tombother | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwnm07 | false | null | t3_1kwnm07 | /r/LocalLLaMA/comments/1kwnm07/any_low_resources_but_highquality_sound_cloning/ | false | false | self | 1 | null |
Made app for LLM/MCP/Agent experimenation | 9 | This is an app for experimenting with different AI models and MCP servers. It supports anything OpenAI-compatible - OpenAI, Google, Mistral, LM Studio, Ollama, llama.cpp.
It's an open-source desktop app in Go [https://github.com/unra73d/agent-smith](https://github.com/unra73d/agent-smith)
You can select any combination ... | 2025-05-27T13:33:30 | https://www.reddit.com/r/LocalLLaMA/comments/1kwndsy/made_app_for_llmmcpagent_experimenation/ | Gold_Ad_2201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwndsy | false | null | t3_1kwndsy | /r/LocalLLaMA/comments/1kwndsy/made_app_for_llmmcpagent_experimenation/ | false | false | 9 | {'enabled': False, 'images': [{'id': 'ecA_GUF8_sowTBPFJO57MLXl9YjvGT4N6nOHP6HlaFk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SvhIAAf7rr58Ch8QyfpquKGkkCLO_l6uEg7_aB6MIk4.jpg?width=108&crop=smart&auto=webp&s=278737a0c99ce81afabd4f2823bda207ccda7d65', 'width': 108}, {'height': 108, 'url': 'h... | |
Trying to fine tune llama 3.2 3B on a custom data set for a random college to see how it goes ....but results are not as expected....new trained model can't seem to answer based on the new data. | 1 | [removed] | 2025-05-27T13:27:42 | https://www.reddit.com/r/LocalLLaMA/comments/1kwn959/trying_to_fine_tune_llama_32_3b_on_a_custom_data/ | Adorable-Device-2732 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwn959 | false | null | t3_1kwn959 | /r/LocalLLaMA/comments/1kwn959/trying_to_fine_tune_llama_32_3b_on_a_custom_data/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0-fRWqjlLadVXj5pfYp4_Oe3xgBWE-_rdjVSn7hlohI.jpg?width=108&crop=smart&auto=webp&s=f34d2dfdbbfa7de0f1956f186fd8430ee96a1a55', 'width': 108}, {'height': 216, 'url': '... |
Setup Recommendation for University (H200 vs RTX 6000 Pro) | 6 | My (small) university asked me to build a machine with GPUs that we're going to share between 2 PhD students and myself for a project (we got a grant for that).
The budget is 100k€. The machine will be used for training and data generation during the first year.
After that, we will turn it into an inference machine... | 2025-05-27T13:25:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/ | tkon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwn7t4 | false | null | t3_1kwn7t4 | /r/LocalLLaMA/comments/1kwn7t4/setup_recommendation_for_university_h200_vs_rtx/ | false | false | self | 6 | null |
No DeepSeek v3 0526 | 0 | Unfortunately, the link was a placeholder and the release didn't materialize. | 2025-05-27T13:19:15 | https://docs.unsloth.ai/basics/deepseek-v3-0526-rumors | Hanthunius | docs.unsloth.ai | 1970-01-01T00:00:00 | 0 | {} | 1kwn29v | false | null | t3_1kwn29v | /r/LocalLLaMA/comments/1kwn29v/no_deepseek_v3_0526/ | false | false | 0 | {'enabled': False, 'images': [{'id': 'ZmadbtMLxXXHFKwJkCjeTUDuX5sS57sYwkHR8IIGo6Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/fxYCW6fqdbJ5RWjh_x1fsIyj0ZtZFx8MOAvXVxIw2PE.jpg?width=108&crop=smart&auto=webp&s=1ef4773905a7285d6ca9d2707252ecf3322ec746', 'width': 108}, {'height': 108, 'url': 'h... | |
FairyR1 32B / 14B | 42 | 2025-05-27T13:19:11 | https://huggingface.co/collections/PKU-DS-LAB/fairy-r1-6834014fe8fd45bc211c6dd7 | AaronFeng47 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kwn27n | false | null | t3_1kwn27n | /r/LocalLLaMA/comments/1kwn27n/fairyr1_32b_14b/ | false | false | 42 | {'enabled': False, 'images': [{'id': 'VV-f_T9dnmvGYd_O4osXjD47mjveffCENOVTP6PdJvw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/W-qV0BV1voPJhiTsOQdsGmcAlL-lVFIkzu14DCr59cA.jpg?width=108&crop=smart&auto=webp&s=475c9047773d45c6e63407bdf89c2e914ea97493', 'width': 108}, {'height': 116, 'url': 'h... | ||
mtmd : support Qwen 2.5 Omni (input audio+vision, no audio output) by ngxson · Pull Request #13784 · ggml-org/llama.cpp | 61 | 2025-05-27T12:58:06 | https://github.com/ggml-org/llama.cpp/pull/13784 | jacek2023 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kwmlos | false | null | t3_1kwmlos | /r/LocalLLaMA/comments/1kwmlos/mtmd_support_qwen_25_omni_input_audiovision_no/ | false | false | default | 61 | null | |
Just bought a used 3090. Should I keep my 3080 10GB? | 1 | [removed] | 2025-05-27T12:45:31 | https://www.reddit.com/r/LocalLLaMA/comments/1kwmc3n/just_bought_a_used_3090_should_i_keep_my_3080_10gb/ | Turbulent_Jump_2000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwmc3n | false | null | t3_1kwmc3n | /r/LocalLLaMA/comments/1kwmc3n/just_bought_a_used_3090_should_i_keep_my_3080_10gb/ | false | false | self | 1 | null |
Any good way to use LM Studio API as a chat backend with anything besides OpenWebUI? Tired of ChatGPT model switching and want all local with damn web search. | 11 | Tried for hours with OpenWebUI and it doesn't see a single model I have in LM Studio, even with it loaded. I lowkey just want a local web UI with web search I can use Qwen 30B with, and to stop dealing with ChatGPT's awful model switching, which just gives me wrong answers to basic questions unless I manually switch it to o4... | 2025-05-27T12:37:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwm5z0/any_good_way_to_use_lm_studio_api_as_a_chat/ | Commercial-Celery769 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwm5z0 | false | null | t3_1kwm5z0 | /r/LocalLLaMA/comments/1kwm5z0/any_good_way_to_use_lm_studio_api_as_a_chat/ | false | false | self | 11 | null |
Finetuning or running the new gemma 3n models locally? | 2 | Has anyone had any luck running these new 3n models?
I noticed the safetensors aren't released yet, so if you are running or fine-tuning it, how are you going about the process?
[https://huggingface.co/collections/google/gemma-3n-preview-682ca41097a31e5ac804d57b](https://huggingface.co/collections/google/gemma-3n-pr... | 2025-05-27T12:31:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kwm20i/finetuning_or_running_the_new_gemma_3n_models/ | jay2jp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwm20i | false | null | t3_1kwm20i | /r/LocalLLaMA/comments/1kwm20i/finetuning_or_running_the_new_gemma_3n_models/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'lMXsg923oKXNqAFcv091XpOzt0tS-VbvJyD1BGYthSo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/nuTGd6nR-D7i0exzDvXeyeroWnA1sgWJyyF8GipdVWU.jpg?width=108&crop=smart&auto=webp&s=7d9d79bae8b5636ef4da12984fd0bbb5d013938c', 'width': 108}, {'height': 116, 'url': 'h... |
Please help to choose GPU for Ollama setup | 0 | So, I'm dipping my feet into local LLMs. I first tried it in LM Studio on my desktop with a 3080 Ti and it runs nicely, but I want to run it on my home server, not my desktop.
So ATM I launched it in a Debian VM running on Proxmox. It has 12 CPU threads dedicated to it, out of the 12 threads (6 cores) my AMD Ryzen 3600 has, and... | 2025-05-27T12:28:17 | https://www.reddit.com/r/LocalLLaMA/comments/1kwlzez/please_help_to_choose_gpu_for_ollama_setup/ | bswan2 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwlzez | false | null | t3_1kwlzez | /r/LocalLLaMA/comments/1kwlzez/please_help_to_choose_gpu_for_ollama_setup/ | false | false | self | 0 | null |
Run qwen 30b-a3b on Android local with Alibaba MNN Chat | 66 | https://github.com/alibaba/MNN/blob/master/apps/Android/MnnLlmChat/README.md#version-050 | 2025-05-27T12:26:03 | https://v.redd.it/aafvzgkhib3f1 | Juude89 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwlxvb | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/aafvzgkhib3f1/DASHPlaylist.mpd?a=1750940777%2COTIyYjZiYjFhMTc1YTdlMWNjNDUwNzZiNzc3YmUwNDc1MGYxYjEwOTIyYzkzYmVlNThkNDZjNmFiYTE2MTE1MQ%3D%3D&v=1&f=sd', 'duration': 46, 'fallback_url': 'https://v.redd.it/aafvzgkhib3f1/DASH_720.mp4?source=fallback', 'ha... | t3_1kwlxvb | /r/LocalLLaMA/comments/1kwlxvb/run_qwen_30ba3b_on_android_local_with_alibaba_mnn/ | false | false | 66 | {'enabled': False, 'images': [{'id': 'aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/aGZnZW1ma2hpYjNmMebcV0-OYASONSRSOZTsoevngxFFIFBRatfx4SVyyBoC.png?width=108&crop=smart&format=pjpg&auto=webp&s=5e732ab1100613bbeaf51f16ad95465b67e4... |
I created a ChatGPT-like UI for Local LLMs | 1 | [removed] | 2025-05-27T12:14:08 | https://www.reddit.com/gallery/1kwlpfe | BeautifulFlower7101 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwlpfe | false | null | t3_1kwlpfe | /r/LocalLLaMA/comments/1kwlpfe/i_created_a_chatgptlike_ui_for_local_llms/ | false | false | 1 | null | |
Are there any good small MoE models? Something like 8B or 6B or 4B with active 2B | 9 | Thanks | 2025-05-27T11:50:41 | https://www.reddit.com/r/LocalLLaMA/comments/1kwl974/are_there_any_good_small_moe_models_something/ | Own-Potential-2308 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwl974 | false | null | t3_1kwl974 | /r/LocalLLaMA/comments/1kwl974/are_there_any_good_small_moe_models_something/ | false | false | self | 9 | null |
newbie,, versions mismatch hell with triton,vllm and unsloth | 0 | This is my first time training a model.
Trying to use Unsloth to fine-tune qwen0.6b-bnb, but I keep running into problems. At first I asked ChatGPT and it suggested downgrading from Python 3.13 to 3.11; I went there and now it's suggesting going to 3.10.
Reading the Unsloth, vLLM, or Triton repos doesn't mention having to use py .... | 2025-05-27T11:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/1kwl6lo/newbie_versions_mismatch_hell_with_tritonvllm_and/ | Excel_Document | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwl6lo | false | null | t3_1kwl6lo | /r/LocalLLaMA/comments/1kwl6lo/newbie_versions_mismatch_hell_with_tritonvllm_and/ | false | false | self | 0 | null |
Seeking Guidance on Fine-Tuning for a Desktop Development Project | 1 | [removed] | 2025-05-27T11:16:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwknjt/seeking_guidance_on_finetuning_for_a_desktop/ | YoussefTrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwknjt | false | null | t3_1kwknjt | /r/LocalLLaMA/comments/1kwknjt/seeking_guidance_on_finetuning_for_a_desktop/ | false | false | self | 1 | null |
Is speculative Decoding effective for handling multiple user queries concurrently or w/o SD is better. | 6 | Has anyone tried speculative decoding for handling multiple user queries concurrently?
How does it perform? | 2025-05-27T11:07:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kwkhrx/is_speculative_decoding_effective_for_handling/ | Remarkable-Law9287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwkhrx | false | null | t3_1kwkhrx | /r/LocalLLaMA/comments/1kwkhrx/is_speculative_decoding_effective_for_handling/ | false | false | self | 6 | null |
Wife isn’t home, that means H200 in the living room ;D | 784 | Finally got our H200 system. Until it goes into the datacenter next week, that means LocalLLaMA with some extra power :D | 2025-05-27T10:40:11 | https://www.reddit.com/gallery/1kwk1jm | Flintbeker | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kwk1jm | false | null | t3_1kwk1jm | /r/LocalLLaMA/comments/1kwk1jm/wife_isnt_home_that_means_h200_in_the_living_room/ | false | false | 784 | null | 
I'm looking for Gemini 2.0 Flash alternative for ComfyUI (to run locally) 🙏 | 1 | [removed] | 2025-05-27T10:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwjrp4/im_looking_for_gemini_20_flash_alternative_for/ | VirtualWishX | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwjrp4 | false | null | t3_1kwjrp4 | /r/LocalLLaMA/comments/1kwjrp4/im_looking_for_gemini_20_flash_alternative_for/ | false | false | self | 1 | null |
I want to create a project of Text to Speech locally without api | 1 | [removed] | 2025-05-27T09:41:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj4uy/i_want_to_create_a_project_of_text_to_speech/ | atmanirbhar21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj4uy | false | null | t3_1kwj4uy | /r/LocalLLaMA/comments/1kwj4uy/i_want_to_create_a_project_of_text_to_speech/ | false | false | self | 1 | null |
Anyone tried DCPMM with LLMs? | 7 | I've been seeing 128GB DCPMM modules for \~70usd per, thinking of using them. What's the performance like? | 2025-05-27T09:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj4d7/anyone_tried_dcpmm_with_llms/ | sTrollZ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj4d7 | false | null | t3_1kwj4d7 | /r/LocalLLaMA/comments/1kwj4d7/anyone_tried_dcpmm_with_llms/ | false | false | self | 7 | null |
I want to create a project of Text to Speech locally without any external api's | 1 | [removed] | 2025-05-27T09:38:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kwj35u/i_want_to_create_a_project_of_text_to_speech/ | atmanirbhar21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwj35u | false | null | t3_1kwj35u | /r/LocalLLaMA/comments/1kwj35u/i_want_to_create_a_project_of_text_to_speech/ | false | false | self | 1 | null |
The Aider LLM Leaderboards were updated with benchmark results for Claude 4, revealing that Claude 4 Sonnet didn't outperform Claude 3.7 Sonnet | 310 | 2025-05-27T09:37:08 | Dr_Karminski | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kwj2p2 | false | null | t3_1kwj2p2 | /r/LocalLLaMA/comments/1kwj2p2/the_aider_llm_leaderboards_were_updated_with/ | false | false | 310 | {'enabled': True, 'images': [{'id': 'qUOEc2SfwtuE8pn7inuvXBcwGr7N8Gr78cogRiqTaJM', 'resolutions': [{'height': 131, 'url': 'https://preview.redd.it/ls92grf5oa3f1.png?width=108&crop=smart&auto=webp&s=2d8a8e6c515161d0ef8ca37e9e9c5c655cd8eeb0', 'width': 108}, {'height': 262, 'url': 'https://preview.redd.it/ls92grf5oa3f1.pn... | |||
Unsloth Fine-tuning without Nvidia | 1 | [removed] | 2025-05-27T09:30:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kwiyxo/unsloth_finetuning_without_nvidia/ | Glad_Net8882 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwiyxo | false | null | t3_1kwiyxo | /r/LocalLLaMA/comments/1kwiyxo/unsloth_finetuning_without_nvidia/ | false | false | self | 1 | null |
Engineers who work in companies that have embraced AI coding, how has your worklife changed? | 82 | I've been working on my own since just before GPT 4, so I never experienced AI in the workplace. How has the job changed? How are sprints run? Is more of your time spent reviewing pull requests? Has the pace of releases increased? Do things break more often? | 2025-05-27T09:03:54 | https://www.reddit.com/r/LocalLLaMA/comments/1kwile2/engineers_who_work_in_companies_that_have/ | thezachlandes | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwile2 | false | null | t3_1kwile2 | /r/LocalLLaMA/comments/1kwile2/engineers_who_work_in_companies_that_have/ | false | false | self | 82 | null |
Is it possible to run LLM entirely on decentralized nodes with no cloud backend? | 1 | [removed] | 2025-05-27T09:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kwikte/is_it_possible_to_run_llm_entirely_on/ | Maleficent_Apple_287 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwikte | false | null | t3_1kwikte | /r/LocalLLaMA/comments/1kwikte/is_it_possible_to_run_llm_entirely_on/ | false | false | self | 1 | null |
Is there any model/provider agnostic client/sdk which has support for MCP, tools, RAG, multimodality, streaming, etc.? | 0 | I'm currently looking for a model/provider-agnostic SDK or client that supports a wide range of modern LLM capabilities out of the box. Specifically, I'm interested in something that covers:
Multi-provider compatibility (OpenAI,
Ollama, Google, etc., and possibly even its own backend if possible for local model file... | 2025-05-27T08:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kwib3x/is_there_any_modelprovider_agnostic_clientsdk/ | GamerWael | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwib3x | false | null | t3_1kwib3x | /r/LocalLLaMA/comments/1kwib3x/is_there_any_modelprovider_agnostic_clientsdk/ | false | false | self | 0 | null |
SEED-GRPO: Pure RL with a 7B Model Sets New SOTA on AIME24 (56.7) | 1 | [removed] | 2025-05-27T08:43:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kwib3n/seedgrpo_pure_rl_with_a_7b_model_sets_new_sota_on/ | Competitive_Pilot_75 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwib3n | false | null | t3_1kwib3n | /r/LocalLLaMA/comments/1kwib3n/seedgrpo_pure_rl_with_a_7b_model_sets_new_sota_on/ | false | false | self | 1 | null |
Open Source iOS OLLAMA Client | 8 | As you all know, Ollama is a program that allows you to install and use various recent LLMs on your computer. Once you install it, you don't have to pay a usage fee, and you can install and use various types of LLMs according to your machine's performance.
https://preview.redd.it/wb9qvk3vaa3f1.png?width=1984&fo... | 2025-05-27T08:22:08 | https://www.reddit.com/r/LocalLLaMA/comments/1kwi08a/open_source_ios_ollama_client/ | billythepark | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwi08a | false | null | t3_1kwi08a | /r/LocalLLaMA/comments/1kwi08a/open_source_ios_ollama_client/ | false | false | 8 | null | |
Cognito: Your AI Sidekick for Chrome. A MIT licensed very lightweight Web UI with multitools. | 92 | * **Easiest Setup: No python, no docker, no endless dev packages.** Just download it from [Chrome](https://chromewebstore.google.com/detail/pphjdjdoclkedgiaahmiahladgcpohca?utm_source=item-share-cb) or my [Github](https://github.com/3-ark/Cognito-AI_Sidekick) (Same with the store, just the latest release). You don't n... | 2025-05-27T08:13:51 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhw20/cognito_your_ai_sidekick_for_chrome_a_mit/ | Asleep-Ratio7535 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhw20 | false | null | t3_1kwhw20 | /r/LocalLLaMA/comments/1kwhw20/cognito_your_ai_sidekick_for_chrome_a_mit/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': 'KQnoxsNXSt_ExJwiasklEOLgivpWk4hJKurpwqFTYpI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/LK9RDOmWWg-1fAGxwAutIIXGyDq1BJqkepaHRh_fCYA.jpg?width=108&crop=smart&auto=webp&s=9a8d1d978965d9ac2d5363effe7f8828084d0689', 'width': 108}, {'height': 108, 'url': 'h... |
Why LLM Agents Still Hallucinate (Even with Tool Use and Prompt Chains) | 44 | You’d think calling external tools would “fix” hallucinations in LLM agents, but even with tools integrated (LangChain, ReAct, etc.), the bots still confidently invent or misuse tool outputs.
Part of the problem is that most pipelines treat the LLM like a black box between prompt → tool → response. There's no consiste... | 2025-05-27T08:04:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhr56/why_llm_agents_still_hallucinate_even_with_tool/ | Mountain-Insect-2153 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhr56 | false | null | t3_1kwhr56 | /r/LocalLLaMA/comments/1kwhr56/why_llm_agents_still_hallucinate_even_with_tool/ | false | false | self | 44 | null |
What are the best practices (sampling settings and prompting) for OCR, especially subtitles? | 1 | [removed] | 2025-05-27T08:04:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhr3a/what_are_the_best_practices_sampling_settings_and/ | nmkd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhr3a | false | null | t3_1kwhr3a | /r/LocalLLaMA/comments/1kwhr3a/what_are_the_best_practices_sampling_settings_and/ | false | false | self | 1 | null |
3x AMD Instinct MI50 (48GB VRAM total): what can I do with it? | 3 | Hi everyone,
I've been running some smaller models locally on my laptop as a coding assistant, but I decided I wanted to run bigger models and maybe get answers a little bit faster.
Last weekend, I came across a set of 3 AMD MI50's on eBay which I bought for 330 euro total. I picked up an old 3-way CrossFire motherbo... | 2025-05-27T07:46:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhif5/3x_amd_instinct_mi50_48gb_vram_total_what_can_i/ | spaceman_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhif5 | false | null | t3_1kwhif5 | /r/LocalLLaMA/comments/1kwhif5/3x_amd_instinct_mi50_48gb_vram_total_what_can_i/ | false | false | self | 3 | null |
What are the best vision models at the moment ? | 15 | I'm trying to create an app that extracts data from scanned documents and photos. I was using InternVL2.5-4B running with Ollama, but I was wondering if there are better models out there?
What are your recommendations?
I wanted to try the 8B version of InternVL but there is no GGUF available at the moment.
Than... | 2025-05-27T07:42:01 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhg3t/what_are_the_best_vision_models_at_the_moment/ | Wintlink- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhg3t | false | null | t3_1kwhg3t | /r/LocalLLaMA/comments/1kwhg3t/what_are_the_best_vision_models_at_the_moment/ | false | false | self | 15 | null |
Best LLM for PGS to SRT (movies) | 1 | [removed] | 2025-05-27T07:38:00 | https://www.reddit.com/r/LocalLLaMA/comments/1kwhe3j/best_llm_for_pgs_to_srt_movies/ | Im_The_Hollow_Man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwhe3j | false | null | t3_1kwhe3j | /r/LocalLLaMA/comments/1kwhe3j/best_llm_for_pgs_to_srt_movies/ | false | false | self | 1 | null |
Llama 3.1 Nemotron Ultra 253B Q3_K_S on my machine (RTX 6000 + 256GB DDR5 RAM) | 1 | [removed] | 2025-05-27T07:27:22 | https://www.reddit.com/r/LocalLLaMA/comments/1kwh8mv/llama_31_nemotron_ultra_253b_q3_k_s_on_my_machine/ | Wide_Food_2636 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwh8mv | false | null | t3_1kwh8mv | /r/LocalLLaMA/comments/1kwh8mv/llama_31_nemotron_ultra_253b_q3_k_s_on_my_machine/ | false | false | self | 1 | null |
What is the best tool/agents for daily desktop use you can't part with anymore | 1 | [removed] | 2025-05-27T07:22:24 | https://www.reddit.com/r/LocalLLaMA/comments/1kwh5zq/what_is_the_best_toolagents_for_daily_desktop_use/ | Malfun_Eddie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwh5zq | false | null | t3_1kwh5zq | /r/LocalLLaMA/comments/1kwh5zq/what_is_the_best_toolagents_for_daily_desktop_use/ | false | false | self | 1 | null |
How Does Qwen3-32B Handle Multi-Language Coding? | 1 | [removed] | 2025-05-27T06:56:34 | https://www.reddit.com/r/LocalLLaMA/comments/1kwgsgq/how_does_qwen332b_handle_multilanguage_coding/ | jameslee2295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwgsgq | false | null | t3_1kwgsgq | /r/LocalLLaMA/comments/1kwgsgq/how_does_qwen332b_handle_multilanguage_coding/ | false | false | self | 1 | null |
Model Context Protocol With Local LLM | 1 | [removed] | 2025-05-27T06:46:02 | https://www.reddit.com/r/LocalLLaMA/comments/1kwgmy1/model_context_protocol_with_local_llm/ | No_Finding2396 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwgmy1 | false | null | t3_1kwgmy1 | /r/LocalLLaMA/comments/1kwgmy1/model_context_protocol_with_local_llm/ | false | false | self | 1 | null |
Fudan University (FDU) and Shanghai Academy of AI for Science(SAIS): AI for Science 2025 | 1 | Produced by Fudan University and Shanghai Academy of AI for Science with support from Nature Research Intelligence, this report explores how artificial intelligence is transforming scientific discovery. It covers significant advances across disciplines — such as mathematics, life sciences and physical sciences — while ... | 2025-05-27T06:21:17 | https://www.nature.com/articles/d42473-025-00161-3 | Lynncc6 | nature.com | 1970-01-01T00:00:00 | 0 | {} | 1kwg9ni | false | null | t3_1kwg9ni | /r/LocalLLaMA/comments/1kwg9ni/fudan_university_fdu_and_shanghai_academy_of_ai/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'WYgAMe89EDXcDqzlUukeeJmDzVGLFu7AQS7LfTCgm2c', 'resolutions': [{'height': 143, 'url': 'https://external-preview.redd.it/gUbZJGJpCHSwQirU7VTZKb7HvKb-aSASFTOJf8d8VFE.jpg?width=108&crop=smart&auto=webp&s=0473c3ac375dbea706c85e96259c0c5b4cf4172f', 'width': 108}, {'height': 286, 'url': '... | |
Used A100 80 GB Prices Don't Make Sense | 147 | Can someone explain what I'm missing? The median price of the A100 80GB PCIe on eBay is $18,502 RTX 6000 Pro Blackwell cards can be purchased new for $8500.
What am I missing here? Is there something about the A100s that justifies the price difference? The only thing I can think of is 200w less power consumption and... | 2025-05-27T05:44:37 | https://www.reddit.com/r/LocalLLaMA/comments/1kwfp8v/used_a100_80_gb_prices_dont_make_sense/ | fakebizholdings | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwfp8v | false | null | t3_1kwfp8v | /r/LocalLLaMA/comments/1kwfp8v/used_a100_80_gb_prices_dont_make_sense/ | false | false | self | 147 | null |
AgentKit - Drop-in plugin system for AI agents and MCP servers | 12 | I got tired of rebuilding the same tools every time I started a new project, or ripping out server/agent implementation to switch solutions, so I built a lightweight plugin system that lets you drop Python files into a folder and generate requirements.txt for them, create a .env with all the relevant items, and dynamic... | 2025-05-27T05:28:10 | https://github.com/batteryshark/agentkit | atrfx | github.com | 1970-01-01T00:00:00 | 0 | {} | 1kwfftk | false | null | t3_1kwfftk | /r/LocalLLaMA/comments/1kwfftk/agentkit_dropin_plugin_system_for_ai_agents_and/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'taugWyvcmXWgAl3K8RmTWlF_lQYs7kar2WhZGnRlmHA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zd-ZiUz7OOd-ZVTfTYwqHZdD3FQ-Zlrik2BdtQdhqMY.jpg?width=108&crop=smart&auto=webp&s=baf65a7058044881b99a2e6a90932f3e08926847', 'width': 108}, {'height': 108, 'url': 'h... | |
PFN Launches PLaMo Translate,a LLM model made for translation task | 14 | Archive Link:
[https://www.preferred.jp/en/news/pr20250527/](https://www.preferred.jp/en/news/pr20250527/)
Web Translation Demo:
[https://translate-demo.plamo.preferredai.jp/](https://translate-demo.plamo.preferredai.jp/)
Model on Huggingface:
[https://huggingface.co/pfnet/plamo-2-translate](https://huggingf... | 2025-05-27T05:22:58 | https://www.reddit.com/r/LocalLLaMA/comments/1kwfcw1/pfn_launches_plamo_translatea_llm_model_made_for/ | rikimtasu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwfcw1 | false | null | t3_1kwfcw1 | /r/LocalLLaMA/comments/1kwfcw1/pfn_launches_plamo_translatea_llm_model_made_for/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '1tJ3KIrQGSESVZ7jdAndFJ02nLpdnVJP9j7Z5OR4Ams', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/XxYJUy7IlUFtHfWDnJwC5IDY1dFFc6MnORWVycdjc0E.jpg?width=108&crop=smart&auto=webp&s=0e0bc42892eaa629111572d1cb30a235d46bf044', 'width': 108}, {'height': 135, 'url': 'h... |
Gemma 3 Performance: Tokens Per Second in LM Studio vs. Ollama on Mac Studio M3 Ultra | 1 | [removed] | 2025-05-27T05:09:12 | https://medium.com/p/7e1af75438e4 | Rif-SQL | medium.com | 1970-01-01T00:00:00 | 0 | {} | 1kwf4ss | false | null | t3_1kwf4ss | /r/LocalLLaMA/comments/1kwf4ss/gemma_3_performance_tokens_per_second_in_lm/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'mHABBOeKKY7FDSBYGtJE5_Cel3QinZBFmTwhK_5YHhU', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/eUTE6dW5zCtU-O7BgfzurRyBcaNNU3hCzv9MFh1v0Gk.jpg?width=108&crop=smart&auto=webp&s=a65ec67693f26a771981bd257dd0bd8a7956e982', 'width': 108}, {'height': 216, 'url': '... | |
I made an open-source synthetic text datasets generator for LLM projects | 1 | [removed] | 2025-05-27T05:06:33 | https://www.reddit.com/r/LocalLLaMA/comments/1kwf3a2/i_made_an_opensource_synthetic_text_datasets/ | astro__pat | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwf3a2 | false | null | t3_1kwf3a2 | /r/LocalLLaMA/comments/1kwf3a2/i_made_an_opensource_synthetic_text_datasets/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ol6l1w7L9PJwrnUQaW0T9pMEA5YQYURMZPun1reHqSA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Kx1qWhUTwhJEjTFoOpQHzKHh8XHSlKFVAbsBg6dkBuo.jpg?width=108&crop=smart&auto=webp&s=efd0d10102343cc497e618b49d245fd618e08f03', 'width': 108}, {'height': 108, 'url': 'h... |
Claude sonnet 4 trouble | 1 | [removed] | 2025-05-27T05:05:05 | https://www.reddit.com/r/LocalLLaMA/comments/1kwf2do/claude_sonnet_4_trouble/ | TheVerge_Trades | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwf2do | false | null | t3_1kwf2do | /r/LocalLLaMA/comments/1kwf2do/claude_sonnet_4_trouble/ | false | false | self | 1 | null |
Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration | 92 | Abstract
>Long-horizon video-audio reasoning and fine-grained pixel understanding impose conflicting requirements on omnimodal models: dense temporal coverage demands many low-resolution frames, whereas precise grounding calls for high-resolution inputs. We tackle this trade-off with a two-system architecture: a Globa... | 2025-05-27T04:46:15 | https://huggingface.co/Haoz0206/Omni-R1 | ninjasaid13 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 1kwer9z | false | null | t3_1kwer9z | /r/LocalLLaMA/comments/1kwer9z/omnir1_reinforcement_learning_for_omnimodal/ | false | false | 92 | {'enabled': False, 'images': [{'id': 'GtcV9DcDdGlJyBtvfLSpnCruagztlOTW3rDgZ22y6gI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mslr5FmgDa5Wl6TVAGHIe-yyfpC8KB7GpupP6mmM8Ko.jpg?width=108&crop=smart&auto=webp&s=2bce7761374b03ad2e325bf00303e9630f6174cf', 'width': 108}, {'height': 116, 'url': 'h... | |
LlamaFirewall: open-source framework to detect and mitigate AI-centric security risks - Help Net Security | 0 | 2025-05-27T04:39:16 | https://www.helpnetsecurity.com/2025/05/26/llamafirewall-open-source-framework-detect-mitigate-ai-centric-security-risks/ | Aiochedolor | helpnetsecurity.com | 1970-01-01T00:00:00 | 0 | {} | 1kwen4z | false | null | t3_1kwen4z | /r/LocalLLaMA/comments/1kwen4z/llamafirewall_framework_open_source_per_rilevare/ | false | false | 0 | {'enabled': False, 'images': [{'id': '_Xn7LhDNFIY7RcgTctjI8ok4tx05EAyY5sjGXbrxITA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/Pu8f913f0BZRcZq8aaUGAPiafLL1kwOvTbOn67rnK18.jpg?width=108&crop=smart&auto=webp&s=cae63ace79d487bafee1d338dcdc0a4d7fca7ca8', 'width': 108}, {'height': 121, 'url': 'h... |
Prompting for agentic workflows | 3 | Under the hood I have a project memory that's fed into each new conversation. I tell this to one of my agents at the start of a session and I pretty much have my next day (or sometimes week) planned out:
Break down this (plan.md) into steps that can each be completed within one hour. Publish each of these step plans i... | 2025-05-27T04:36:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kwelai/prompting_for_agentic_workflows/ | ansmo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwelai | false | null | t3_1kwelai | /r/LocalLLaMA/comments/1kwelai/prompting_for_agentic_workflows/ | false | false | self | 3 | null |
Best settings for running Qwen3-30B-A3B with llama.cpp (16GB VRAM and 64GB RAM) | 34 | In the past I used to mostly configure gpu layers to fit as closely as possible on the 16GB RAM. But lately there seem to be much better options to optimize for VRAM/RAM split. Especially with MoE models? I'm currently running Q4\_K\_M version (about 18.1 GB in size) with 38 layers and 8k context size because I was foc... | 2025-05-27T03:44:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kwdpey/best_settings_for_running_qwen330ba3b_with/ | gamesntech | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwdpey | false | null | t3_1kwdpey | /r/LocalLLaMA/comments/1kwdpey/best_settings_for_running_qwen330ba3b_with/ | false | false | self | 34 | null |
I compared Manus and Genspark, with 7 challenging tasks | 1 | [removed] | 2025-05-27T03:39:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kwdm8w/i_compared_manus_and_genspark_with_7_challenging/ | Fit-Rule8548 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwdm8w | false | null | t3_1kwdm8w | /r/LocalLLaMA/comments/1kwdm8w/i_compared_manus_and_genspark_with_7_challenging/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'YQ7u__e6YDY29nHTlKbkGti2diIMtKYGCHkrZiHBCyg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CUoAi99NcPPSt-OBRxg2ktDrxyr2E4OtlaT4HNOK-4s.jpg?width=108&crop=smart&auto=webp&s=07fe7f0dccc36a0d30087768a64418e67a5840d4', 'width': 108}, {'height': 108, 'url': 'h... |
I forked llama-swap to add an ollama compatible api, so it can be a drop in replacement | 45 | For anyone else who has been annoyed with:
- ollama
- client programs that only support ollama for local models
I present you with [llama-swappo](https://github.com/kooshi/llama-swappo), a bastardization of the simplicity of llama-swap which adds an ollama compatible api to it.
This was mostly a quick hack I added... | 2025-05-27T03:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kwd7tg/i_forked_llamaswap_to_add_an_ollama_compatible/ | Kooshi_Govno | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwd7tg | false | null | t3_1kwd7tg | /r/LocalLLaMA/comments/1kwd7tg/i_forked_llamaswap_to_add_an_ollama_compatible/ | false | false | self | 45 | {'enabled': False, 'images': [{'id': 'g4EVeRAf4By-HP5yEiUCAOKaak_wBQnOJjcWGsFXFgY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WPNJ3TuBSDxdXMoKKGq63bjUxYbQZGAeHKZ9Y0tO_mE.jpg?width=108&crop=smart&auto=webp&s=1ef38c2100499edb8bda219be4183c24c47d18e7', 'width': 108}, {'height': 108, 'url': 'h... |
Been diving into running local LLMs (GPT4All, LM Studio, etc.) and came across this Beelink mini PC — seems to be a sweet spot between price and power for local AI stuff. | 1 | [removed] | 2025-05-27T03:13:15 | https://www.reddit.com/r/LocalLLaMA/comments/1kwd59w/been_diving_into_running_local_llms_gpt4all_lm/ | Hoodlum_Hero421 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwd59w | false | null | t3_1kwd59w | /r/LocalLLaMA/comments/1kwd59w/been_diving_into_running_local_llms_gpt4all_lm/ | false | false | self | 1 | null |
Automate Your Bill Splitting with CrewAI and Ollama | 1 | I’ve been wrestling with the chaos of splitting group bills for years—until I decided to let AI take the wheel. Meet my **Bill Splitting Automation Tool**, built with VisionParser, CrewAI, and ollama/mistral-nemo. Here’s what it does:
# 🔍 How It Works
1. **PDF Parsing → Markdown**
* Upload any bill PDF (restauran... | 2025-05-27T03:09:25 | https://v.redd.it/9jat479yq83f1 | Solid_Woodpecker3635 | /r/LocalLLaMA/comments/1kwd2p1/automate_your_bill_splitting_with_crewai_and/ | 1970-01-01T00:00:00 | 0 | {} | 1kwd2p1 | false | null | t3_1kwd2p1 | /r/LocalLLaMA/comments/1kwd2p1/automate_your_bill_splitting_with_crewai_and/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL', 'resolutions': [{'height': 36, 'url': 'https://external-preview.redd.it/cmp6eHc2OXlxODNmMRWvGkBENpCIpC_EKUjQsCr5pB9yZN0tDOQcUMxmDOnL.png?width=108&crop=smart&format=pjpg&auto=webp&s=3a262e4c6f4f0c7addb55e961652e87b93993... | |
Built an AI Study Assistant That Automatically Creates Notes + SRS | 1 | [removed] | 2025-05-27T03:02:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcy8g/built_an_ai_study_assistant_that_automatically/ | Hirojinho | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcy8g | false | null | t3_1kwcy8g | /r/LocalLLaMA/comments/1kwcy8g/built_an_ai_study_assistant_that_automatically/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'E47A6SeusMi2E0TGdaF3F8xV3n3fk5JslT9Ws6Njvcs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CtcQkq07FS6lp4oxGPfXsyGBd1aHQz3Asfyghz8NCYo.jpg?width=108&crop=smart&auto=webp&s=6a7a278e6e1bcbc9de074da335a6ac30371bc147', 'width': 108}, {'height': 108, 'url': 'h... |
You don't need to wait for AMD AI MAX 395+ | 1 | [removed] | 2025-05-27T02:55:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwct3w/you_dont_need_to_wait_for_amd_ai_max_395/ | Specialist_You3410 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwct3w | false | null | t3_1kwct3w | /r/LocalLLaMA/comments/1kwct3w/you_dont_need_to_wait_for_amd_ai_max_395/ | false | false | self | 1 | null |
LLama.cpp on intel 185H iGPU possible on a machine with RTX dGPU? | 1 | Hello, is it possible to run ollama or llama.cpp inferencing on a laptop with Ultra185H and a RTX4090 using onlye the Arc iGPU? I am trying to maximize the use of the machine as I already have an Ollama instance making use of the RTX4090 for inferencing and wondering if I can make use of the 185H iGPU for smaller mode... | 2025-05-27T02:50:09 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcpqe/llamacpp_on_intel_185h_igpu_possible_on_a_machine/ | mlaihk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcpqe | false | null | t3_1kwcpqe | /r/LocalLLaMA/comments/1kwcpqe/llamacpp_on_intel_185h_igpu_possible_on_a_machine/ | false | false | self | 1 | null |
Teach and Help with Decision: Keep P40 VM vs M4 24GB vs Ryzen Ai 9 365 vs Intel 125H | 0 | I currently have a modified Nvidia P40 with a GTX1070 cooler added to it. Works great for dinking around, but in my home-lab its taking up valuable space and its getting to the point I'm wondering if its heating up my HBAs too much. I've floated the idea of selling my modded P40 and instead switching to something small... | 2025-05-27T02:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kwcp6m/teach_and_help_with_decision_keep_p40_vm_vs_m4/ | s0n1cm0nk3y | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwcp6m | false | null | t3_1kwcp6m | /r/LocalLLaMA/comments/1kwcp6m/teach_and_help_with_decision_keep_p40_vm_vs_m4/ | false | false | self | 0 | null |
Should lower temperature be used now | 0 | It's been a while since I programmatically called an AI model. Is lower temperature creative enough now? When I did it I had temp at .80, top p at .95, and top alpha at .6. What generation parameters with what models do you use? | 2025-05-27T01:50:27 | https://www.reddit.com/r/LocalLLaMA/comments/1kwbk7j/should_lower_temperature_be_used_now/ | diaperrunner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwbk7j | false | null | t3_1kwbk7j | /r/LocalLLaMA/comments/1kwbk7j/should_lower_temperature_be_used_now/ | false | false | self | 0 | null |
I scraped 111,847 finance-related jobs directly from corporate websites. | 1 | [removed] | 2025-05-27T01:44:06 | https://www.reddit.com/r/LocalLLaMA/comments/1kwbfvr/i_scraped_111847_financerelated_jobs_directly/ | Separate-Breath2267 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwbfvr | false | null | t3_1kwbfvr | /r/LocalLLaMA/comments/1kwbfvr/i_scraped_111847_financerelated_jobs_directly/ | false | false | self | 1 | null |
Downloading models on android inquiry | 0 | Just wondering how to install local models on android? I wanna try out the smaller Qwen and Gemini models but all the local downloads seem to be through vLLM and I believe that's only for PC? Could I just use termux or is there an alternative for Android?
Any help would be appreciated!
| 2025-05-27T01:09:52 | https://www.reddit.com/r/LocalLLaMA/comments/1kwas79/downloading_models_on_android_inquiry/ | PhantasmHunter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwas79 | false | null | t3_1kwas79 | /r/LocalLLaMA/comments/1kwas79/downloading_models_on_android_inquiry/ | false | false | self | 0 | null |
Free Speech-to-Speech Audio Converter (web or Google Colab) | 1 | Hi. Can anyone please suggest some tools for doing speech-to-speech (pre-recorded) audio voice conversion, with which we can change the speaker's voice? Looking for something that is easy to run, consistent, and fast. The audio length will be around 10-15 minutes. | 2025-05-27T00:31:11 | https://www.reddit.com/r/LocalLLaMA/comments/1kwa11w/free_speech_to_speech_audio_convertor_web_or/ | Tarun302 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kwa11w | false | null | t3_1kwa11w | /r/LocalLLaMA/comments/1kwa11w/free_speech_to_speech_audio_convertor_web_or/ | false | false | self | 1 | null |
Burned a lot on LLM calls — looking for an LLM gateway + observability tool. Landed on Keywords AI… anyone else? | 1 | [removed] | 2025-05-27T00:10:21 | https://www.reddit.com/r/LocalLLaMA/comments/1kw9m80/burned_a_lot_on_llm_calls_looking_for_an_llm/ | Main-Fisherman-2075 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw9m80 | false | null | t3_1kw9m80 | /r/LocalLLaMA/comments/1kw9m80/burned_a_lot_on_llm_calls_looking_for_an_llm/ | false | false | self | 1 | null |
PC for local AI | 12 | Hey there! I use AI a lot. For the last 2 months I'm being experimenting with Roo Code and MCP servers, but always using Gemini, Claude and Deepseek. I would like to try local models but not sure what I need to get a good model running, like Devstral or Qwen 3.
My actual PC is not that big: i5 13600kf, 32gb ram, rtx407... | 2025-05-26T23:59:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kw9ecd/pc_for_local_ai/ | amunocis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw9ecd | false | null | t3_1kw9ecd | /r/LocalLLaMA/comments/1kw9ecd/pc_for_local_ai/ | false | false | self | 12 | null |
Peace-Through-Land-Auction | 0 | [removed] | 2025-05-26T23:48:47 | https://www.reddit.com/r/LocalLLaMA/comments/1kw96k1/peacethroughlandauction/ | zero_moo-s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw96k1 | false | null | t3_1kw96k1 | /r/LocalLLaMA/comments/1kw96k1/peacethroughlandauction/ | false | false | self | 0 | null |
How to build a multi-model agent like this? | 1 | [removed] | 2025-05-26T23:24:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kw8oaw/how_to_build_a_multimodel_agent_like_this/ | Crafty_Read_6928 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw8oaw | false | null | t3_1kw8oaw | /r/LocalLLaMA/comments/1kw8oaw/how_to_build_a_multimodel_agent_like_this/ | false | false | 1 | null | |
Is there a high-throughput engine that is actually stable? | 8 | Been using vLLM (vllm serve). It's nice when it runs but it keeps hanging and crashing while attempting the simplest of tasks. A prompt or request that works perfectly fine one time will hang or crash sending back two minutes later. Is there an inferencing engine that can handle high throughput while not crashing or ha... | 2025-05-26T23:05:32 | https://www.reddit.com/r/LocalLLaMA/comments/1kw8a4h/is_there_a_highthroughput_engine_that_is_actually/ | No-Break-7922 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw8a4h | false | null | t3_1kw8a4h | /r/LocalLLaMA/comments/1kw8a4h/is_there_a_highthroughput_engine_that_is_actually/ | false | false | self | 8 | null |
Best llm for human-like conversations? | 1 | [removed] | 2025-05-26T23:02:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kw882w/best_llm_for_humanlike_conversations/ | FrostFireAnna | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw882w | false | null | t3_1kw882w | /r/LocalLLaMA/comments/1kw882w/best_llm_for_humanlike_conversations/ | false | false | self | 1 | null |
DIA 1B Podcast Generator - With Consistent Voices and Script Generation | 160 | I'm pleased to share 🐐 GOATBookLM 🐐...
A dual voice Open Source podcast generator powered by [hashtag#NariLabs](https://www.linkedin.com/search/results/all/?keywords=%23narilabs&origin=HASH_TAG_FROM_FEED) [hashtag#Dia](https://www.linkedin.com/search/results/all/?keywords=%23dia&origin=HASH_TAG_FROM_FEED) 1B audio ... | 2025-05-26T22:36:14 | https://v.redd.it/4ym9al41e73f1 | Smartaces | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1kw7n6w | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/4ym9al41e73f1/DASHPlaylist.mpd?a=1750890989%2CNDBmZWJiNjY2YTk1YzFhMzJjZWExNTNiNzBlZWQxMTM3ZDc0OTYyZTE2YjFlZWEzZGZiYzM1NThjNTVmNmVmOQ%3D%3D&v=1&f=sd', 'duration': 113, 'fallback_url': 'https://v.redd.it/4ym9al41e73f1/DASH_1080.mp4?source=fallback', '... | t3_1kw7n6w | /r/LocalLLaMA/comments/1kw7n6w/dia_1b_podcast_generator_with_consistent_voices/ | false | false | 160 | {'enabled': False, 'images': [{'id': 'NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/NG1pdDduNDFlNzNmMUcfJmyGLBoX3HGWzWW7GBEQ5TlU9sPw-Gkkjhi-K8NK.png?width=108&crop=smart&format=pjpg&auto=webp&s=36d03f5dac41401e387c7dc2cbcd1529aff1... | |
Best model for code | 1 | [removed] | 2025-05-26T22:19:56 | https://www.reddit.com/r/LocalLLaMA/comments/1kw79x1/best_model_for_code/ | JohnMolorov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw79x1 | false | null | t3_1kw79x1 | /r/LocalLLaMA/comments/1kw79x1/best_model_for_code/ | false | false | self | 1 | null |
Code single file with multiple LLM models | 9 | Interesting discovery
If several different models work on SAME code, for SAME application, one by one, fixing each other errors, the vibe coding is starting to make sense
application example: [https://github.com/vyrti/dl](https://github.com/vyrti/dl)
(its a file download tool for all platforms, primary for huggin... | 2025-05-26T21:56:36 | https://www.reddit.com/r/LocalLLaMA/comments/1kw6qm1/code_single_file_with_multiple_llm_models/ | AleksHop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw6qm1 | false | null | t3_1kw6qm1 | /r/LocalLLaMA/comments/1kw6qm1/code_single_file_with_multiple_llm_models/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'T5MPksDT6rIMqxy_7Aav8mI24Y0hYq7uOwBlSlC12AA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/sZRKNwusuzqqRgeEIWS7jdESKIXv-E-47rOKDPhUSfo.jpg?width=108&crop=smart&auto=webp&s=53f28e3d7ed30810d8f110938a277b477d98f97a', 'width': 108}, {'height': 108, 'url': 'h... |
CRAZY voice quality for uncensored roleplay, I wish it's local. | 112 | https://www.youtube.com/watch?v=Fcq85N0grk4 | 2025-05-26T21:37:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kw6akn/crazy_voice_quality_for_uncensored_roleplay_i/ | ExplanationEqual2539 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw6akn | false | null | t3_1kw6akn | /r/LocalLLaMA/comments/1kw6akn/crazy_voice_quality_for_uncensored_roleplay_i/ | false | false | self | 112 | {'enabled': False, 'images': [{'id': 'I6b_HwI1LJmRMYDtGv8GQ6mvn68V6tP9FkYxdLhb4Y4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/jbZHjnjNpjU2zm2iq4irLLR0rbdIL0fMvFD73GgJqQQ.jpg?width=108&crop=smart&auto=webp&s=9b6d26e6f1dddd265b45b37e22caa44b0d534aca', 'width': 108}, {'height': 162, 'url': 'h... |
Paid Interview for AI Engineers Building Generative Agent Tools | 0 | We’re running a paid 30-minute research interview for U.S.-based AI engineers actively building **custom generative agentic tools** (e.g., LLMs, LangChain, RAG, orchestration frameworks).
**What we need:**
* Full-time employees (9+ months preferred)
* Hands-on builders (not just managing teams)
* Titles like AI Engin... | 2025-05-26T20:05:18 | https://www.reddit.com/r/LocalLLaMA/comments/1kw42sz/paid_interview_for_ai_engineers_building/ | brutalgrace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw42sz | false | null | t3_1kw42sz | /r/LocalLLaMA/comments/1kw42sz/paid_interview_for_ai_engineers_building/ | false | false | self | 0 | null |
With Veo3 producing hyper realistic content - Are we in for a global verification mechanism? | 0 | The idea of immutable records and verification is really not new anymore and crypto bros have been tooting the horn constantly (albeit, a bit louder during bull runs), that blockchain will be ubiquitous and that it will be the future. But everyone tried to find use cases, only to find that it could be done much easier ... | 2025-05-26T19:36:44 | https://www.reddit.com/r/LocalLLaMA/comments/1kw3ejc/with_veo3_producing_hyper_realistic_content_are/ | Mr_Moonsilver | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw3ejc | false | null | t3_1kw3ejc | /r/LocalLLaMA/comments/1kw3ejc/with_veo3_producing_hyper_realistic_content_are/ | false | false | self | 0 | null |
I fine-tuned Qwen2.5-VL 7B to re-identify objects across frames and generate grounded stories | 106 | 2025-05-26T19:21:25 | https://v.redd.it/0yb58acdf63f1 | DanielAPO | /r/LocalLLaMA/comments/1kw310h/i_finetuned_qwen25vl_7b_to_reidentify_objects/ | 1970-01-01T00:00:00 | 0 | {} | 1kw310h | false | {'reddit_video': {'bitrate_kbps': 5000, 'dash_url': 'https://v.redd.it/0yb58acdf63f1/DASHPlaylist.mpd?a=1751008893%2CMzcyYzYwNDNlNGFlYzkyZTQ3Y2QzZDJhOWIzNTAzZTNlY2UzZDk0MzZjNGU0ZmY4YmM3N2M4NTYxNzcwYTYxNQ%3D%3D&v=1&f=sd', 'duration': 299, 'fallback_url': 'https://v.redd.it/0yb58acdf63f1/DASH_1080.mp4?source=fallback', '... | t3_1kw310h | /r/LocalLLaMA/comments/1kw310h/i_finetuned_qwen25vl_7b_to_reidentify_objects/ | false | false | default | 106 | {'enabled': False, 'images': [{'id': 'ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj', 'resolutions': [{'height': 70, 'url': 'https://external-preview.redd.it/ZXJzMW1rY2RmNjNmMdaMStUEb5oAuu0jCl0Xw3e5m5dlVJowjoJYmTy8vqCj.png?width=108&crop=smart&format=pjpg&auto=webp&s=32cf0c58ecad49958808888073309a0916fd4... | |
Amazing Qwen3 Answer - Hilarious | 1 | [removed] | 2025-05-26T18:49:39 | https://www.reddit.com/r/LocalLLaMA/comments/1kw28ok/amazing_qwen3_answer_hilarious/ | Eden63 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw28ok | false | null | t3_1kw28ok | /r/LocalLLaMA/comments/1kw28ok/amazing_qwen3_answer_hilarious/ | false | false | 1 | null | |
Cleaning up responses to fix up synthetic data | 0 | I wrote a python script to generate synthetic data from Claude.
However, one thing I noticed is that sometimes the text at the end gets cut off (Due to it reaching the maximum characters/tokens)
```
The idea that her grandfather might have kept such secrets, that her family might be connected to something beyond rati... | 2025-05-26T18:45:16 | https://www.reddit.com/r/LocalLLaMA/comments/1kw24tk/cleaning_up_responses_to_fix_up_synthetic_data/ | ICanSeeYou7867 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw24tk | false | null | t3_1kw24tk | /r/LocalLLaMA/comments/1kw24tk/cleaning_up_responses_to_fix_up_synthetic_data/ | false | false | self | 0 | null |
Eye project | 1 | [removed] | 2025-05-26T18:36:03 | https://www.reddit.com/r/LocalLLaMA/comments/1kw1wpb/eye_project/ | Ok_Prize_4453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw1wpb | false | null | t3_1kw1wpb | /r/LocalLLaMA/comments/1kw1wpb/eye_project/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'yfw_k1iu6uTH39L4gGGMB9nyIcINaKGK0x_AZ6lNvOA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/SeW4DD110v-1j8JhPXBq9QW3lExGBRpsOuq5hcckBGE.jpg?width=108&crop=smart&auto=webp&s=2ad5f8c779e6bd60a296aaf07fccebd185a3ce5a', 'width': 108}, {'height': 108, 'url': 'h... |
Steal the best human computer interactions for LLMs from Gemini, ChatGPT, Tong Yi, Perplexity, Manus, Meta AI, and more | 1 | [removed] | 2025-05-26T18:33:04 | https://www.reddit.com/gallery/1kw1tyk | capitalizedtime | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1kw1tyk | false | null | t3_1kw1tyk | /r/LocalLLaMA/comments/1kw1tyk/steal_the_best_human_computer_interactions_for/ | false | false | 1 | null | |
Best model and method to translate webnovels to English with previous chapters as context | 1 | [removed] | 2025-05-26T18:32:45 | https://www.reddit.com/r/LocalLLaMA/comments/1kw1toi/best_model_and_method_to_translate_webnovels_to/ | dommynamar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw1toi | false | null | t3_1kw1toi | /r/LocalLLaMA/comments/1kw1toi/best_model_and_method_to_translate_webnovels_to/ | false | false | self | 1 | null |
Introducing ORBIT, an open-source inference toolkit to break free from the token tax. | 1 | [removed] | 2025-05-26T18:27:04 | https://www.reddit.com/r/LocalLLaMA/comments/1kw1oie/introducing_orbit_an_opensource_inference_toolkit/ | Single_Zebra_7406 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw1oie | false | null | t3_1kw1oie | /r/LocalLLaMA/comments/1kw1oie/introducing_orbit_an_opensource_inference_toolkit/ | false | false | self | 1 | null |
POC: Running up to 123B as a Letterfriend on <300€ for all hardware. | 56 | Let's swap. This is about my experience running large models on affordable hardware. Who needs NVIDIA when you have some time?
My intention was to have a local, private LLM of the best quality for responding to letters with a large context (8K).
Letters? Yep, it's all about slow response time. Slow. Really slow, so l... | 2025-05-26T18:19:12 | https://www.reddit.com/r/LocalLLaMA/comments/1kw1hfd/poc_running_up_to_123b_as_a_letterfriend_on_300/ | Ploepxo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1kw1hfd | false | null | t3_1kw1hfd | /r/LocalLLaMA/comments/1kw1hfd/poc_running_up_to_123b_as_a_letterfriend_on_300/ | false | false | self | 56 | null |