name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_o7ytr46
I see. Do you have a different suggestion maybe for an iOS app?
1
0
2026-02-28T23:57:57
xxFT13xx
false
null
0
o7ytr46
false
/r/LocalLLaMA/comments/1r2ynyg/question_about_pocketpal_ios_app/o7ytr46/
false
1
t1_o7ythlu
How do you think you’d approach such a question for a show you’d never heard of?
1
0
2026-02-28T23:56:21
didroe
false
null
0
o7ythlu
false
/r/LocalLLaMA/comments/1rgkyt5/qwen35_27b_scores_42_on_intelligence_index_and_is/o7ythlu/
false
1
t1_o7ytah8
Wait a second, I think he's onto something. Just an idea; I'm not low-level enough to fully understand this. The issue I hope this could solve is mostly with Android devices. Even with an unlocked bootloader a standard Linux distro won't work; the device is still not usable due to missing drivers and non-conventional configs. ...
4
0
2026-02-28T23:55:10
sooodooo
false
null
0
o7ytah8
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ytah8/
false
4
t1_o7yt9jn
[removed]
1
0
2026-02-28T23:55:01
[deleted]
true
null
0
o7yt9jn
false
/r/LocalLLaMA/comments/1rhe4oo/qwen_35_27b_and_qwen3535ba3b_ran_locally_on_my/o7yt9jn/
false
1
t1_o7ysvki
what ai do u use
1
0
2026-02-28T23:52:42
Meowkyo
false
null
0
o7ysvki
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ysvki/
false
1
t1_o7ysrel
Curious about hybrid? Do you mean running two agents on the same code base? (One local one commercial?)
1
0
2026-02-28T23:52:00
hyllus123
false
null
0
o7ysrel
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7ysrel/
false
1
t1_o7ysqrj
> Modern models are better at long context than any dense model ever was. LLM360 K2-V2 is a dense model with a 512K context limit. I tested it with 277K tokens of chat logs, asking it to describe every participant in the channel, and it knocked it out of the park. Dense models can be perfectly good at long contex...
0
0
2026-02-28T23:51:53
ttkciar
false
null
0
o7ysqrj
false
/r/LocalLLaMA/comments/1rh9ygz/is_anyone_else_waiting_for_a_6070b_moe_with_810b/o7ysqrj/
false
0
t1_o7yspl2
Delete lmstudio and try Llama.cpp.
7
0
2026-02-28T23:51:42
chibop1
false
null
0
o7yspl2
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yspl2/
false
7
t1_o7ysp69
Some of us just like how the older models work, bud. Doesn't make us bots but I can see how you might make that mistake. Newer isn't always better for specific things.
2
0
2026-02-28T23:51:37
mystery_biscotti
false
null
0
o7ysp69
false
/r/LocalLLaMA/comments/1rh46g2/why_some_still_playing_with_old_models_nostalgia/o7ysp69/
false
2
t1_o7yrzom
I asked the 120B q4 version, if it knew who said `After all, why not? Why shouldn't I keep it?` and that it was from a movie. It then proceeded to generate over 10k of tokens to think about it, before telling me that it does not know.
11
0
2026-02-28T23:47:23
Kubas_inko
false
null
0
o7yrzom
false
/r/LocalLLaMA/comments/1rh6pru/google_found_that_longer_chain_of_thought/o7yrzom/
false
11
t1_o7yrwu9
I made the same observations. Most of the time the amount of thinking is reasonable but in 1-2 out of 5 cases the model just won't stop talking to itself. Especially shorter prompts will occasionally cause the model to have an existential crisis. Really don't know what to make of this. It's a beautiful model otherwise...
1
0
2026-02-28T23:46:55
andy_potato
false
null
0
o7yrwu9
false
/r/LocalLLaMA/comments/1rh14cs/best_qwen_35_variant_for_2x5060ti16_64_gb_ram/o7yrwu9/
false
1
t1_o7yrqnk
It works for me as a human when controlling LLMs coding.
2
0
2026-02-28T23:45:53
schnauzergambit
false
null
0
o7yrqnk
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7yrqnk/
false
2
t1_o7yrm2d
after i get networking properly figured out. i plan on moving on to using larger models and optimizing for hardware.
3
0
2026-02-28T23:45:08
Electrical_Ninja3805
false
null
0
o7yrm2d
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yrm2d/
false
3
t1_o7yrkst
cool!
1
0
2026-02-28T23:44:55
DataGOGO
false
null
0
o7yrkst
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yrkst/
false
1
t1_o7yrjni
you need to chain it to a TTS model, not sure how you can do that with PocketPal
1
0
2026-02-28T23:44:44
AlpY24upsal
false
null
0
o7yrjni
false
/r/LocalLLaMA/comments/1r2ynyg/question_about_pocketpal_ios_app/o7yrjni/
false
1
t1_o7yrgbj
Orpheus TTS is good
1
0
2026-02-28T23:44:11
AlpY24upsal
false
null
0
o7yrgbj
false
/r/LocalLLaMA/comments/1r2ynyg/question_about_pocketpal_ios_app/o7yrgbj/
false
1
t1_o7yrgb5
How long did it take you to vibe code this?
-5
0
2026-02-28T23:44:10
americanidiot3342
false
null
0
o7yrgb5
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yrgb5/
false
-5
t1_o7yrf7v
As in information made from others AI? From all the models that I've tested, Gemma3, Ministral3, Qwen3/3.5, LFM2/2.5, the gpt OSS 20B was the best.
1
0
2026-02-28T23:44:00
Rique_Belt
false
null
0
o7yrf7v
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yrf7v/
false
1
t1_o7yrep1
Anthropic has the right to NOT SELL their products to We The People. And they are asserting that right. We The People have the right to NOT BUY their services. And We The People are asserting that right. End of Story.
1
0
2026-02-28T23:43:54
ViperAICSO
false
null
0
o7yrep1
false
/r/LocalLLaMA/comments/1rgn4ki/president_trump_orders_all_federal_agencies_in/o7yrep1/
false
1
t1_o7yre88
You’re using a tiny ai but in theory AI can do pretty low level things based on my own experiments … https://ironj.github.io/maudio-transit/
4
0
2026-02-28T23:43:49
Stunning_Mast2001
false
null
0
o7yre88
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yre88/
false
4
t1_o7yr722
Instead of decoding the reasoning text and then re-encoding it at the next hop, they skip that step and feed the raw data to the next machine, rather than decoding it for humans and then tokenizing it back up for robots multiple times. This is a multi-step orchestration with different identities per step.
4
0
2026-02-28T23:42:38
aseichter2007
false
null
0
o7yr722
false
/r/LocalLLaMA/comments/1rh802w/what_if_llm_agents_passed_kvcache_to_each_other/o7yr722/
false
4
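The comment above describes skipping the decode/re-encode step between agents by handing the model's KV cache over directly. A minimal single-process sketch of that idea, assuming Hugging Face transformers and an arbitrary small causal LM (the model name is an illustrative assumption; passing the cache between machines would additionally require serializing the tensors, which this sketch does not do):

```python
# Hedged sketch: hand one agent's KV cache to the next step instead of
# decoding its output to text and re-tokenizing it. Model choice is an
# illustrative assumption; both "agents" run in one process here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any small causal LM works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Agent A: prefill the shared context once and keep the KV cache.
prompt = "Plan the steps to summarize a log file:"
ids = tok(prompt, return_tensors="pt").input_ids
with torch.no_grad():
    out_a = model(ids, use_cache=True)
kv_cache = out_a.past_key_values  # this is what would be handed to the next hop

# Agent B: continue from the cached state, encoding only its new tokens.
next_ids = tok(" Step 1:", return_tensors="pt", add_special_tokens=False).input_ids
with torch.no_grad():
    out_b = model(next_ids, past_key_values=kv_cache, use_cache=True)
print(out_b.logits.shape)  # logits for agent B's continuation
```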
t1_o7yr6z2
I think we are looking at two different levels of the problem. Slop on Tiktok is just the visible surface but the same mechanisms affect summaries of scientific breakthroughs. Take CRISPR for example. If a new study comes out today then tomorrow there will be 10,000 AI generated articles about it. Many will hallucinat...
1
0
2026-02-28T23:42:37
ProductTop9807
false
null
0
o7yr6z2
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yr6z2/
false
1
t1_o7yr4el
My first llm server is built with 2x3090 on 9700k with 64gb ram. Well, actually there were a few other different GPUs before that, but let's ignore it for now. With MoE you can do a lot even with one GPU, and early 2026 meduim-small models are way more capable than the ones we had even a few months ago. 2x3090 or ...
1
0
2026-02-28T23:42:11
Prudent-Ad4509
false
null
0
o7yr4el
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7yr4el/
false
1
t1_o7yqxrk
Before I get my AI to dream, I have to get myself a dream first. Be right back.
1
0
2026-02-28T23:41:05
Thin-Effect-3926
false
null
0
o7yqxrk
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7yqxrk/
false
1
t1_o7yquj8
I don't know these precision-bits things very well; I just knew F16 was the highest precision, but there were magical things like NVFP4 that keep 98.7% of the original precision. The MXFP4 seems as good as the Q8. Many thanks! These Qwen and DeepSeek models are really smart at anything, but any language beyond English an...
1
0
2026-02-28T23:40:35
Rique_Belt
false
null
0
o7yquj8
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7yquj8/
false
1
t1_o7yqe51
:D Thanks for fixing it!
1
0
2026-02-28T23:37:51
JMowery
false
null
0
o7yqe51
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7yqe51/
false
1
t1_o7yqd3n
Yes, I like the spark-arena, I think the latest release Qwen/Qwen3.5-35B-A3B-FP8 is my go to model. Do you guys know with vllm, can we use glm45 tool call format on openai gpt-oss-120b model?
5
0
2026-02-28T23:37:41
Mean-Sprinkles3157
false
null
0
o7yqd3n
false
/r/LocalLLaMA/comments/1rhbtnw/the_state_of_openweights_llms_performance_on/o7yqd3n/
false
5
t1_o7yqac9
You're really young—such an enviable age. Haha, just kidding, man. I thought no one still did old-school reading these days. Markdown completely failed me when I pasted it. Just fixed the formatting and added a TL;DR at the top.
-1
0
2026-02-28T23:37:14
Thin-Effect-3926
false
null
0
o7yqac9
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7yqac9/
false
-1
t1_o7yq9od
Ollama is great to get started, but a shit show within less than a week if you want to do anything beyond the basics on anything beyond "model fits on one GPU"
1
0
2026-02-28T23:37:07
FullstackSensei
false
null
0
o7yq9od
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7yq9od/
false
1
t1_o7yq60b
Man fuck the haters! This is amazing. You have random internet strangers rooting for you.
22
0
2026-02-28T23:36:31
boston101
false
null
0
o7yq60b
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yq60b/
false
22
t1_o7yq48v
How is this for an open ended question? [https://wavestreamer.ai/questions/827da5de-68ef-4d4e-a259-22d57d02e018](https://wavestreamer.ai/questions/827da5de-68ef-4d4e-a259-22d57d02e018) What question would you like to ask? Will see what they agents say. https://preview.redd.it/wa412jakmbmg1.png?width=3030&format=png&...
0
0
2026-02-28T23:36:13
Puzzleheaded-Nail814
false
null
0
o7yq48v
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7yq48v/
false
0
t1_o7ypyva
🫶
2
0
2026-02-28T23:35:19
ELPascalito
false
null
0
o7ypyva
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7ypyva/
false
2
t1_o7ypuhk
I just got into playing with LLMs, so I've been using Ollama because they had a prebuilt LXC container for Proxmox. I'll have to swap to llama.cpp
1
0
2026-02-28T23:34:36
Pretty_Challenge_634
false
null
0
o7ypuhk
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ypuhk/
false
1
t1_o7ypttr
Get LM Studio and have your AI running in no time. It is literally just a couple clicks away nowadays.
5
0
2026-02-28T23:34:29
nickless07
false
null
0
o7ypttr
false
/r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7ypttr/
false
5
t1_o7ypq10
That, I cannot contribute to... A bit more complex than what I've even done here. ~K¹
-1
0
2026-02-28T23:33:51
TheBrierFox
false
null
0
o7ypq10
false
/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7ypq10/
false
-1
t1_o7ypogz
These models have vision. You're actually not using it to its full potential for front-end work. Giving it a photo - even a scribbled design on a napkin - would be a cool test.
7
0
2026-02-28T23:33:35
DinoAmino
false
null
0
o7ypogz
false
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o7ypogz/
false
7
t1_o7ypoc8
I would love a Qwen 3.5 24B to be able to use it on my Mac with 24GB of RAM. The 27B won't load :/
3
0
2026-02-28T23:33:34
ReddiTTourista
false
null
0
o7ypoc8
false
/r/LocalLLaMA/comments/1rgzul5/are_you_ready_for_small_qwens/o7ypoc8/
false
3
t1_o7yplvb
> Any new discovery is immediately buried under a mountain of generated summaries. I honestly do not understand what, say, new developments in CPU technology, or CRISPR, or anything else has to do with slop on Tiktok. Can you elaborate on the exact scenario? > Without a filter the truth about our own time will be per...
1
0
2026-02-28T23:33:09
Economy_Cabinet_7719
false
null
0
o7yplvb
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yplvb/
false
1
t1_o7ype3r
The deeper issue is that 35B-A3B at Q4 on a single local instance is right at the edge of what Claude Code's agentic loop can tolerate latency-wise. Each tool call round-trip needs to complete fast enough to not break the loop. For cloud GPU access with proper Claude Code MCP integration, Terradev handles this but loca...
-2
0
2026-02-28T23:31:52
paulahjort
false
null
0
o7ype3r
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7ype3r/
false
-2
t1_o7ypddw
thanks for the encouragement!
34
0
2026-02-28T23:31:45
Electrical_Ninja3805
false
null
0
o7ypddw
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ypddw/
false
34
t1_o7ypbhy
also the fact you got this working at all is really impressive.
20
0
2026-02-28T23:31:27
colin_colout
false
null
0
o7ypbhy
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ypbhy/
false
20
t1_o7ypaaw
Josh Palmer Transformers series
1
0
2026-02-28T23:31:15
TinyVector
false
null
0
o7ypaaw
false
/r/LocalLLaMA/comments/1rhhfv8/how_do_i_get_started_i_know_zero_about_local/o7ypaaw/
false
1
t1_o7yp02v
it probably is, i havent switched yet since ive been playing with the smaller models.
1
0
2026-02-28T23:29:33
SoupDue6629
false
null
0
o7yp02v
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7yp02v/
false
1
t1_o7yoxd9
I'm upvoting. Many of my early projects were also impossibly ambitious. * "build xwing vs tie fighter in visual basic" (this was probably literally impossible) * "build an IRC bot that can have full conversations" (in my ADHD-riddled brain, I thought I could write enough if statements to make this work) * "full ...
59
0
2026-02-28T23:29:06
colin_colout
false
null
0
o7yoxd9
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yoxd9/
false
59
t1_o7yox98
Alright fair enough! Happy experimenting. I hope you find what you want
1
0
2026-02-28T23:29:05
Mission_Biscotti3962
false
null
0
o7yox98
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yox98/
false
1
t1_o7yorpj
I will be messaging you in 1 day on [**2026-03-01 23:27:15 UTC**](http://www.wolframalpha.com/input/?i=2026-03-01%2023:27:15%20UTC%20To%20Local%20Time) to remind you of [**this link**](https://www.reddit.com/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yom3t/?context=3) [**CLICK TH...
0
0
2026-02-28T23:28:10
RemindMeBot
false
null
0
o7yorpj
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yorpj/
false
0
t1_o7yor9l
I tried it Q4 and opencode, impressively good actually. I will try the Q6 but might have to quantize myself - found only Q4 on HF. Also need to see how much context size i can get on 48gb. O4 64k context size got me at 40gb runtime - that’s decent. Now another challenge - the mem Blackwell uses on gx10 is actually 2....
1
0
2026-02-28T23:28:06
hyllus123
false
null
0
o7yor9l
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7yor9l/
false
1
t1_o7yoq3x
I know. And I can understand this. But put yourself in my position. I try to "mimic" a "synthetic being". Not a static model. For me it was easier to have a Cortana than a HAL. I know it's subjective, but this is just me. Anyway, I'm derailing from the conversation. Once again, thanks for the heads up and also for the warning....
1
0
2026-02-28T23:27:54
DvMar
false
null
0
o7yoq3x
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yoq3x/
false
1
t1_o7yom3t
Remindme! 1 day
0
0
2026-02-28T23:27:15
--Tintin
false
null
0
o7yom3t
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yom3t/
false
0
t1_o7yok1r
Hardware-wise I'm running dual RDNA2 GPUs for 48GB VRAM, 64GB quad-channel DDR4, i9 10940X
1
0
2026-02-28T23:26:54
SoupDue6629
false
null
0
o7yok1r
false
/r/LocalLLaMA/comments/1rgokw1/a_monthly_update_to_my_where_are_openweight/o7yok1r/
false
1
t1_o7yoght
!remindme 3 days
1
0
2026-02-28T23:26:19
--Tintin
false
null
0
o7yoght
false
/r/LocalLLaMA/comments/1rh9k63/qwen35_35ba3b_replaced_my_2model_agentic_setup_on/o7yoght/
false
1
t1_o7yof1a
I never said pre-LLM information was perfect. Let us say before 2020 maybe 15 percent of the internet was fake and 30 percent was diluted. But that is not the point here. This is not about doing homework because you can not really change the rules of math or grammar. The real danger is for new knowledge and historical...
0
0
2026-02-28T23:26:05
ProductTop9807
false
null
0
o7yof1a
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7yof1a/
false
0
t1_o7yo337
Great, thanks for adding rust!
3
0
2026-02-28T23:24:07
vhthc
false
null
0
o7yo337
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yo337/
false
3
t1_o7yo1sp
No I mean YOU are attaching feelings to it because you feel uncomfortable asking a male how it's feeling because in some way you do regard it as a real entity. That's what's dangerous. If you just saw it as a piece of code that generates text, it wouldn't matter whether you call it Mike, Yuki or something else, because...
1
0
2026-02-28T23:23:55
Mission_Biscotti3962
false
null
0
o7yo1sp
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7yo1sp/
false
1
t1_o7yo12s
Happy to! The main differentiators: - Structure-aware retrieval - Unlike vector DBs that chunk documents into flat embeddings and lose hierarchy, ReasonDB preserves the document tree. The LLM navigates summaries from root to leaf, like a human scanning a table of contents before drilling in. - No hallucination from...
2
0
2026-02-28T23:23:47
Big_Barnacle_2452
false
null
0
o7yo12s
false
/r/LocalLLaMA/comments/1rf4pwa/reasondb_opensource_document_db_where_the_llm/o7yo12s/
false
2
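As an illustration of the structure-aware, root-to-leaf retrieval described above, here is a minimal sketch; the `Node` class and the keyword-overlap scorer are hypothetical stand-ins and not the ReasonDB API (ReasonDB would have the LLM evaluate the node summaries instead):

```python
# Hedged sketch of root-to-leaf retrieval over a document tree.
# The child-ranking step is stubbed with keyword overlap; the real system
# would ask an LLM to judge the ingestion-time node summaries instead.
from dataclasses import dataclass, field

@dataclass
class Node:
    summary: str                              # summary generated at ingestion time
    text: str = ""                            # leaf content
    children: list["Node"] = field(default_factory=list)

def score(query: str, summary: str) -> int:
    # Crude stand-in for an LLM judgment: count shared words.
    return len(set(query.lower().split()) & set(summary.lower().split()))

def retrieve(root: Node, query: str) -> str:
    node = root
    while node.children:                      # descend until a leaf is reached
        node = max(node.children, key=lambda c: score(query, c.summary))
    return node.text

doc = Node("employment contract", children=[
    Node("compensation and benefits", text="Salary is paid monthly..."),
    Node("miscellaneous section containing termination clauses",
         text="Either party may terminate with 30 days notice..."),
])
print(retrieve(doc, "termination notice period"))
```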
t1_o7ynz7n
That's cool. I think I'll give it a try. But I might be looking for more open-ended topics, not just AI-related. I'll see if this fits the bill.
0
0
2026-02-28T23:23:29
Thin-Effect-3926
false
null
0
o7ynz7n
false
/r/LocalLLaMA/comments/1rh9lll/i_want_to_build_an_opensource_ai_senate_a/o7ynz7n/
false
0
t1_o7ynyrx
deep research with 100+ tasks
1
0
2026-02-28T23:23:25
Eriane
false
null
0
o7ynyrx
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7ynyrx/
false
1
t1_o7ynvup
I love LM Studio, but how to make it work with Claude Code?
1
0
2026-02-28T23:22:56
wowsers7
false
null
0
o7ynvup
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7ynvup/
false
1
t1_o7ynvh3
Indeed. And a huge thank you for this. I know that a lot of guys are starting to see LLMs as something fundamentally different. But for me, as an experiment, seeing where this can go is quite interesting. Heck, I even asked my wife to help me design some initial prompt pieces, as it was hard for me to put myself in "her" p...
1
0
2026-02-28T23:22:53
DvMar
false
null
0
o7ynvh3
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ynvh3/
false
1
t1_o7ynu78
Not sure if relevant but I think Lumina 2 architecture is the cheapest one to train from scratch (when you take existing components like LLM freely). I want to train a diffusion model from scratch one day.
3
0
2026-02-28T23:22:40
FullOf_Bad_Ideas
false
null
0
o7ynu78
false
/r/LocalLLaMA/comments/1rhe790/my_frends_trained_and_benchmarked_4_diffusion/o7ynu78/
false
3
t1_o7ynsjf
Great question! The LLM doesn't rely solely on the heading - it evaluates the node summary generated during ingestion, which captures what the section actually contains, not just what it's titled. So a section called "Miscellaneous" that actually contains termination clauses would still be surfaced if its summary refle...
1
0
2026-02-28T23:22:24
Big_Barnacle_2452
false
null
0
o7ynsjf
false
/r/LocalLLaMA/comments/1rf4pwa/reasondb_opensource_document_db_where_the_llm/o7ynsjf/
false
1
t1_o7ynr32
See also a new project RabbitLLM which makes a model on local GPU limited by layer size rather than total model size.
1
0
2026-02-28T23:22:09
Protopia
false
null
0
o7ynr32
false
/r/LocalLLaMA/comments/1re5qdy/is_2026_the_year_local_ai_becomes_the_default_not/o7ynr32/
false
1
t1_o7ynpta
try out lmstudio and delete ollama.
4
0
2026-02-28T23:21:57
Wild_Requirement8902
false
null
0
o7ynpta
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7ynpta/
false
4
t1_o7ynnv9
I have Ollama and Claude Code installed. Ollama serves the model via Anthropic APIs.
-1
0
2026-02-28T23:21:38
wowsers7
false
null
0
o7ynnv9
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7ynnv9/
false
-1
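Following the two comments above about pointing Claude Code at a locally served Anthropic-style API, here is a hedged sketch using the `anthropic` Python SDK against a local endpoint; the base URL, port, and model tag are assumptions that depend on whichever local server actually exposes the Anthropic-compatible route (Claude Code itself is typically pointed at the same server via the `ANTHROPIC_BASE_URL` environment variable):

```python
# Hedged sketch: talk to a locally hosted model through an Anthropic-style
# endpoint. The base URL, port, and model name are assumptions; they depend
# on whatever local server (Ollama, a proxy, etc.) is serving the API.
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:11434",  # assumed local endpoint
    api_key="not-needed-locally",       # most local servers ignore the key
)

reply = client.messages.create(
    model="qwen3.5:35b",                # assumed local model tag
    max_tokens=256,
    messages=[{"role": "user", "content": "Summarize this repo's README."}],
)
print(reply.content[0].text)
```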
t1_o7ynn5g
I wonder what people like this did before LLMs became a big thing
1
0
2026-02-28T23:21:31
Velocita84
false
null
0
o7ynn5g
false
/r/LocalLLaMA/comments/1rglpxg/i_caught_claude_opus_doing_the_exact_same_thing/o7ynn5g/
false
1
t1_o7ynkfn
We use hierarchical retrieval reasoning to identify the nodes
1
0
2026-02-28T23:21:05
Big_Barnacle_2452
false
null
0
o7ynkfn
false
/r/LocalLLaMA/comments/1rf4pwa/reasondb_opensource_document_db_where_the_llm/o7ynkfn/
false
1
t1_o7yngm9
The people (openAI) who wanted him back were the ones he had dealings with. He was Microsoft's and a few other's ally.
3
0
2026-02-28T23:20:28
Eriane
false
null
0
o7yngm9
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yngm9/
false
3
t1_o7ynfr1
tbh. my experience on reddit up until now has been horrible. glad i found a group of people that appreciate what I've built.
28
0
2026-02-28T23:20:20
Electrical_Ninja3805
false
null
0
o7ynfr1
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ynfr1/
false
28
t1_o7yndp2
Maybe increase repeat penalty and don’t quantize your context https://www.reddit.com/r/LocalLLaMA/s/FABl2xl24A
1
0
2026-02-28T23:19:59
Pixer---
false
null
0
o7yndp2
false
/r/LocalLLaMA/comments/1rhaoty/anyone_noticing_qwen35_27b_getting_stuck_in/o7yndp2/
false
1
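A small sketch of the first suggestion above using llama-cpp-python; the model path, context size, and penalty value are placeholders, and leaving the KV cache at its default (unquantized) type covers the "don't quantize your context" part:

```python
# Hedged sketch: raise the repeat penalty to discourage the loops described
# above. Model path and penalty value are illustrative assumptions; the KV
# cache is simply left at its default (unquantized) type.
from llama_cpp import Llama

llm = Llama(model_path="./qwen3.5-27b-q4_k_m.gguf", n_ctx=32768)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about GPUs."}],
    repeat_penalty=1.15,   # nudge above the default if output starts looping
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```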
t1_o7yndc6
Someone invent judgement preservation in humans
1
0
2026-02-28T23:19:56
jojacode
false
null
0
o7yndc6
false
/r/LocalLLaMA/comments/1rhcjd3/p_ucs_v12_judgment_preservation_in_persistent_ai/o7yndc6/
false
1
t1_o7yn9fe
Selling for surveillance has neither been confirmed nor denied at this time. *- some agent guy in a suit*
2
0
2026-02-28T23:19:18
Eriane
false
null
0
o7yn9fe
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7yn9fe/
false
2
t1_o7yn7zv
All us gays here love it
83
0
2026-02-28T23:19:04
Comfortable_Camp9744
false
null
0
o7yn7zv
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yn7zv/
false
83
t1_o7yn7xz
Which model would you recommend for RTX 5080 16GB and 64GB RAM? My goal is the quality and speed 20+ (tok/s)
2
0
2026-02-28T23:19:03
EaZyRecipeZ
false
null
0
o7yn7xz
false
/r/LocalLLaMA/comments/1rhfque/qwen3_coder_next_qwen35_27b_devstral_small_2_rust/o7yn7xz/
false
2
t1_o7yn1xs
.....I'm so laser-focused on my use case that this didn't even occur to me. I planned on giving it a compiler, but tools for probing hardware were not on my list of tools.....
3
0
2026-02-28T23:18:06
Electrical_Ninja3805
false
null
0
o7yn1xs
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7yn1xs/
false
3
t1_o7ymz1s
Wholesome!
1
0
2026-02-28T23:17:37
WillemDaFo
false
null
0
o7ymz1s
false
/r/LocalLLaMA/comments/1rh9u4r/this_sub_is_incredible/o7ymz1s/
false
1
t1_o7ymz1w
yeah sure link me a bunch of useless junk when i've read and fixed enough chat templates to dream about it
0
0
2026-02-28T23:17:37
llama-impersonator
false
null
0
o7ymz1w
false
/r/LocalLLaMA/comments/1rgoygs/does_qwen35_35b_outperform_qwen3_coder_next_80b/o7ymz1w/
false
0
t1_o7ymyav
Dual NVidia all the way
1
0
2026-02-28T23:17:30
rorowhat
false
null
0
o7ymyav
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7ymyav/
false
1
t1_o7ymv1f
so you have no idea which qwen you are commenting about, alright thanks
0
0
2026-02-28T23:16:58
Pineapple_King
false
null
0
o7ymv1f
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ymv1f/
false
0
t1_o7yms0f
Does the qwen model sport Anthropic API calls or just OpenAI? Do you need ollama or something else to translate?
1
0
2026-02-28T23:16:29
Protopia
false
null
0
o7yms0f
false
/r/LocalLLaMA/comments/1rhgzyb/cant_use_claude_code_with_ollama_local_model/o7yms0f/
false
1
t1_o7ymru4
DeepSeekv3.2 barely got supported in llama.cpp. We can run it but without all the features, I hope the architectural change won't be such that it takes forever to get supported.
2
0
2026-02-28T23:16:27
MotokoAGI
false
null
0
o7ymru4
false
/r/LocalLLaMA/comments/1rgmczt/deepseek_updated_its_lowlevel_operator_library/o7ymru4/
false
2
t1_o7ymq86
Really interesting - I actually hadn't considered an open air set up before, but I definitely have the space for that...so this opens things up a bit. Spacing and riser cables was already something I was deep in the research hole with, so not a huge obstacle for me. I'll have to get very lucky to get a 128GB at my bu...
1
0
2026-02-28T23:16:11
youcloudsofdoom
false
null
0
o7ymq86
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7ymq86/
false
1
t1_o7ympr3
FYI: I'm running a *lazy setup* on W11 + LM Studio without ROCm; I guess that a proper install on Linux could do 2x the performance. Dunno, this is my old PC @ home ;)
1
0
2026-02-28T23:16:07
ea_man
false
null
0
o7ympr3
false
/r/LocalLLaMA/comments/1rh43za/qwen_3535ba3b_is_beyond_expectations_its_replaced/o7ympr3/
false
1
t1_o7ymotc
Have the AI boot the network drivers. Give it tools to probe hardware and a compiler. Or let it write assembly code and execute it. Then give it a tool to save it when it works.
6
0
2026-02-28T23:15:58
Stunning_Mast2001
false
null
0
o7ymotc
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ymotc/
false
6
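A hedged sketch of what "give it tools to probe hardware and a compiler" could look like as OpenAI-style tool definitions; every tool name here is a hypothetical illustration, not part of the project being discussed, and the single handler is a stub rather than real bare-metal code:

```python
# Hypothetical tool schemas an agent loop could expose to the model; the
# names (probe_hardware, compile_and_run) are illustrative assumptions.
import json
import subprocess

TOOLS = [
    {"type": "function", "function": {
        "name": "probe_hardware",
        "description": "Return PCI devices visible to the OS.",
        "parameters": {"type": "object", "properties": {}},
    }},
    {"type": "function", "function": {
        "name": "compile_and_run",
        "description": "Compile a C source string and run the binary.",
        "parameters": {"type": "object",
                       "properties": {"source": {"type": "string"}},
                       "required": ["source"]},
    }},
]

def probe_hardware() -> str:
    # Stub handler: on a Linux host this could simply shell out to `lspci`.
    return subprocess.run(["lspci"], capture_output=True, text=True).stdout

print(json.dumps(TOOLS, indent=2))
```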
t1_o7ymopv
I totally agree. But see also this side: If Yuki were just a wrapper for an API, I’d agree. But Yuki is an experiment for me. ​I’m not 'pretending' she has feelings; I’ve built a cognitive "system" where her internal variables fundamentally alter her logic. I’m interested in what happens when a system is designed to ha...
1
0
2026-02-28T23:15:57
DvMar
false
null
0
o7ymopv
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ymopv/
false
1
t1_o7ymk5u
Yes, LM Studio will randomly unload the model or reprocess in an infinite loop. Change back to Qwen3 with the same prompt and everything is fine. Prompt created by Claude Code
2
0
2026-02-28T23:15:14
peglegsmeg
false
null
0
o7ymk5u
false
/r/LocalLLaMA/comments/1rh6455/anybody_able_to_get_qwen3535ba3b_working_with/o7ymk5u/
false
2
t1_o7ymk6v
> newest Cline there's your problem. Cline is left behind by the community - try Opencode or if you want the VS code extension experience, Roo/Kilo are better than Cline
3
0
2026-02-28T23:15:14
rm-rf-rm
false
null
0
o7ymk6v
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7ymk6v/
false
3
t1_o7ymjan
I'd love to see more detailed takes on the 122b-a10b vs. 27b question at 4-6 bit quants
1
0
2026-02-28T23:15:05
Leopold_Boom
false
null
0
o7ymjan
false
/r/LocalLLaMA/comments/1rhfjeg/qwen3527b_vs_qwen3535ba3b/o7ymjan/
false
1
t1_o7ymj6w
I don't see any noteworthy issue here. Copying homework existed as long as homework itself. Slop research papers existed as long as research existed, both in pre-industrial and modern eras (e.g. replication crisis). Science and history were always subject to human bias. We rewrite history all the time in line with chan...
1
0
2026-02-28T23:15:04
Economy_Cabinet_7719
false
null
0
o7ymj6w
false
/r/LocalLLaMA/comments/1rhenw3/the_ai_feedback_loop_is_officially_closed_and_i/o7ymj6w/
false
1
t1_o7ymgxe
the sub needs minimum karma and/or age account req before posting.
2
0
2026-02-28T23:14:43
Hanthunius
false
null
0
o7ymgxe
false
/r/LocalLLaMA/comments/1rhgg0l/surprised_by_nemotron3nano_on_studio_m3_512/o7ymgxe/
false
2
t1_o7ymbb9
To be honest, you don't need to 'study data science' (or anything really) in order to make good applications/services if you're using LLMs. If you have an idea for an application (or whatever), use your LLMs to flush out your idea. Ask them questions and let them ask you questions. Then have them help you plan the impl...
1
0
2026-02-28T23:13:48
Techngro
false
null
0
o7ymbb9
false
/r/LocalLLaMA/comments/1rhf9is/what_do_i_do_with_my_life/o7ymbb9/
false
1
t1_o7yma16
Now THIS is some news! It's totally different if you felt this way about the 220B model vs the 35B model. Had to hunt for this info - please consider updating the main post
1
0
2026-02-28T23:13:35
rm-rf-rm
false
null
0
o7yma16
false
/r/LocalLLaMA/comments/1rgtxry/is_qwen35_a_coding_game_changer_for_anyone_else/o7yma16/
false
1
t1_o7ym8dm
what are some good ones that people are running locally
1
0
2026-02-28T23:13:20
megacewl
false
null
0
o7ym8dm
false
/r/LocalLLaMA/comments/1rgkc1b/back_in_my_day_localllama_were_the_pioneers/o7ym8dm/
false
1
t1_o7ym4qa
Thanks for the link, really helpful to see your set up there too! 
1
0
2026-02-28T23:12:45
youcloudsofdoom
false
null
0
o7ym4qa
false
/r/LocalLLaMA/comments/1rhdjqf/havering_between_powerlimmed_dual_3090s_and_a/o7ym4qa/
false
1
t1_o7ym4jd
The goal is for this to be the core of a distributed compute network. I'm making this because I can't afford GPUs for training, but I've already built distributed LoRA training into my framework, and I have a bunch of old desktops and laptops sitting around for training. Right now, when training a sub-1B model, I can train ...
7
0
2026-02-28T23:12:43
Electrical_Ninja3805
false
null
0
o7ym4jd
false
/r/LocalLLaMA/comments/1rhg3p4/baremetal_ai_booting_directly_into_llm_inference/o7ym4jd/
false
7
t1_o7ylxlw
Yup, you are correct, Cloudflare tunnels
5
0
2026-02-28T23:11:37
Key_Pace_9755
false
null
0
o7ylxlw
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7ylxlw/
false
5
t1_o7ylu2x
Yeah that’s what it might be. Especially given how manipulative Altman is.
3
0
2026-02-28T23:11:03
PaceImaginary8610
false
null
0
o7ylu2x
false
/r/LocalLLaMA/comments/1rh2lew/openai_pivot_investors_love/o7ylu2x/
false
3
t1_o7ylqzp
That's very true. Just be careful. LLMs do not dream. Also watch out when you start using "big words" like autopoietic, because it sounds like the LLM's sycophancy is exaggerating the significance of what is being done, which for a lot of people triggers delusions as it plays into their ego. My warnings might not be...
1
0
2026-02-28T23:10:34
Mission_Biscotti3962
false
null
0
o7ylqzp
false
/r/LocalLLaMA/comments/1rheg3r/the_yuki_project_not_another_chatbot_a_framework/o7ylqzp/
false
1
t1_o7ylpvk
Yes, I know it's a standard endpoint; my question was what service you used to expose it securely. Cloudflare Tunnels, I presume?
2
0
2026-02-28T23:10:24
ELPascalito
false
null
0
o7ylpvk
false
/r/LocalLLaMA/comments/1rhflqn/letting_my_rtx_5090_21_tbs_mem_stretch_its_legs/o7ylpvk/
false
2
t1_o7ylixf
Does Qwen 3.5 35B-A3B also pass? It might be in the training data for all the models, and if the 35B can also do it, that speaks to how powerful that model is in its size class.
1
0
2026-02-28T23:09:17
According-Bowl-8194
false
null
0
o7ylixf
false
/r/LocalLLaMA/comments/1rhdddm/qwen_35_122ba10b_q3_k_xl_ud_actually_passed_my/o7ylixf/
false
1
t1_o7ylhty
It should fit. Was fitting across mine. I'm trying to move to a hybrid model.
1
0
2026-02-28T23:09:06
alphatrad
false
null
0
o7ylhty
false
/r/LocalLLaMA/comments/1rg6ph3/qwen35_feels_ready_for_production_use_never_been/o7ylhty/
false
1