| column | dtype | range / classes |
|---|---|---|
| title | string | lengths 1 – 300 |
| score | int64 | 0 – 8.54k |
| selftext | string | lengths 0 – 41.5k |
| created | timestamp[ns] | 2023-04-01 04:30:41 – 2026-03-04 02:14:14 |
| url | string | lengths 0 – 878 |
| author | string | lengths 3 – 20 |
| domain | string | lengths 0 – 82 |
| edited | timestamp[ns] | 1970-01-01 00:00:00 – 2026-02-19 14:51:53 |
| gilded | int64 | 0 – 2 |
| gildings | string | 7 classes |
| id | string | length 7 |
| locked | bool | 2 classes |
| media | string | lengths 646 – 1.8k |
| name | string | length 10 |
| permalink | string | lengths 33 – 82 |
| spoiler | bool | 2 classes |
| stickied | bool | 2 classes |
| thumbnail | string | lengths 4 – 213 |
| ups | int64 | 0 – 8.54k |
| preview | string | lengths 301 – 5.01k |
LLMs as sleeper agents/trojan horses acting maliciously in secret.
82
2024-01-12T22:07:30
https://twitter.com/AnthropicAI/status/1745854907968880970
Snoo_64233
twitter.com
1970-01-01T00:00:00
0
{}
19579dn
false
{'oembed': {'author_name': 'Anthropic', 'author_url': 'https://twitter.com/AnthropicAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">New Anthropic Paper: Sleeper Agents.<br><br>We trained LLMs to act secretly malicious. We found that, despite our best efforts at alignment training, deception still slipped through.<a href="https://t.co/mIl4aStR1F">https://t.co/mIl4aStR1F</a> <a href="https://t.co/qhqvAoohjU">pic.twitter.com/qhqvAoohjU</a></p>&mdash; Anthropic (@AnthropicAI) <a href="https://twitter.com/AnthropicAI/status/1745854907968880970?ref_src=twsrc%5Etfw">January 12, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/AnthropicAI/status/1745854907968880970', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_19579dn
/r/LocalLLaMA/comments/19579dn/llms_as_sleeper_agenttrojan_horse_to_act/
false
false
https://a.thumbs.redditm…ehHfgScg9nf8.jpg
82
{'enabled': False, 'images': [{'id': '0E8Tc1flyRO_lXZEwg41VUtt_E3szcTKGKFYgQv5ZNA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/iudsX8Al75kKs1KQEkiDJXfEoWMYgoHV4l9jYeuov7k.jpg?width=108&crop=smart&auto=webp&s=6205d954502a91774f4f9da397b87b5ad5cfdc78', 'width': 108}], 'source': {'height': 78, 'url': 'https://external-preview.redd.it/iudsX8Al75kKs1KQEkiDJXfEoWMYgoHV4l9jYeuov7k.jpg?auto=webp&s=3a87e3e121359d811f3c5e2f54ad2176a534e707', 'width': 140}, 'variants': {}}]}
LLM integration in Unity!
105
Hi all! We released [LLMUnity](https://github.com/undreamai/LLMUnity), an *open-source* package that allows you to integrate **LLMs in Unity**!

https://preview.redd.it/ap9aejhjy2cc1.png?width=800&format=png&auto=webp&s=622929b567eccf872920f857cdff03b998ee6ae9

**Features**

* 💻 Cross-platform: supports Windows, Linux, and macOS
* 🏠 Runs locally without internet access, but also supports remote servers
* ⚡ Fast inference on CPU and GPU
* 🤗 Supports the major LLM models
* 🔧 Easy to set up: call with a single line of code
* 💰 Free to use for both personal and commercial purposes

**How to**

LLMUnity can be installed as a regular Unity package ([instructions](https://github.com/undreamai/LLMUnity?tab=readme-ov-file#setup)). You can find our simple tutorial on Medium: [How to Use LLMs in Unity](https://towardsdatascience.com/how-to-use-llms-in-unity-308c9c0f637c)

**Feedback**

Feedback is more than welcome 🤗! Let us know if you have any comments / questions / suggestions, or open a GitHub issue!
2024-01-12T21:52:20
https://www.reddit.com/r/LocalLLaMA/comments/1956wfe/llm_integration_in_unity/
UndreamAI
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1956wfe
false
null
t3_1956wfe
/r/LocalLLaMA/comments/1956wfe/llm_integration_in_unity/
false
false
https://b.thumbs.redditm…rsvRKUdCnKqA.jpg
105
null
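LLMUnity itself is a C# Unity package, but the "remote servers" mode mentioned in the post means any language can talk to the backing server over HTTP. A minimal Python sketch of building such a request body; the field names follow the llama.cpp server's `/completion` convention, and whether a given LLMUnity deployment accepts exactly this payload is an assumption, not something the post states:

```python
import json

def build_completion_request(prompt: str, n_predict: int = 128,
                             temperature: float = 0.7) -> str:
    """Build a JSON body for a llama.cpp-style /completion endpoint.

    Field names (prompt, n_predict, temperature) follow the llama.cpp
    server convention; treat them as an illustrative assumption here.
    """
    return json.dumps({
        "prompt": prompt,
        "n_predict": n_predict,
        "temperature": temperature,
    })

body = build_completion_request("Hello from outside Unity!")
```

The resulting string would be POSTed to the server's completion endpoint with a `Content-Type: application/json` header.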
Summarizing AI Lectures - Would You Find This Helpful?
6
Hey r/LocalLLaMA crew! It's Adi. You've probably seen me posting and commenting here before. I'm toying with an idea and really need your input.

I'm planning to take AI lectures and related videos and turn them into easy-to-digest summaries. The goal is to make all that deep, dense information more accessible. Think of it as getting the essence of a whole lecture in a few paragraphs (an overview, and refresher notes for the lecture).

But first, I really want to know if this is something you'd find useful. And if so, which specific AI lectures or talks would you want to see summarized? I'm all about making content that's actually helpful for us here. So, give me a shout with:

1. Your thoughts on whether quick summaries of AI lectures would be something you'd use.
2. Any particular lectures or videos you've got in mind that need the summary treatment.

Looking forward to hearing from you all!

Cheers,
Adi
2024-01-12T21:46:02
https://www.reddit.com/r/LocalLLaMA/comments/1956r5v/summarizing_ai_lectures_would_you_find_this/
phoneixAdi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1956r5v
false
null
t3_1956r5v
/r/LocalLLaMA/comments/1956r5v/summarizing_ai_lectures_would_you_find_this/
false
false
self
6
null
Blending Is All You Need: Cheaper, Better Alternative to Trillion-Parameters LLM
35
2024-01-12T21:21:11
https://arxiv.org/abs/2401.02994
Jahandar
arxiv.org
1970-01-01T00:00:00
0
{}
19566f0
false
null
t3_19566f0
/r/LocalLLaMA/comments/19566f0/blending_is_all_you_need_cheaper_better/
false
false
default
35
null
Running a simple RAG application with Mixtral 8x7B in a Colab environment?
1
[removed]
2024-01-12T21:13:13
https://www.reddit.com/r/LocalLLaMA/comments/1955zf0/running_simple_rag_application_with_mixtral_87b/
TheAmendingMonk
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1955zf0
false
null
t3_1955zf0
/r/LocalLLaMA/comments/1955zf0/running_simple_rag_application_with_mixtral_87b/
false
false
self
1
null
Meet DeepSeek LLMs: A Series of Open-Source AI Models Trained from Scratch on a Vast Dataset of 2 Trillion Tokens in both English and Chinese
22
With the quick advancements in **Artificial Intelligence**, Large Language Models (LLMs) are improving daily with every new piece of research. These models perform self-supervised pre-training on large datasets, which makes them capable of performing exceptionally well on various tasks, including question answering, content generation, text summarization, code completion, etc.

The development of open-source **Large Language Models** is taking place at a fast pace. However, existing studies on scaling laws have produced inconclusive findings, creating uncertainty around how to scale LLMs efficiently. To address this challenge, a team of researchers from DeepSeek AI has released a detailed study of scaling laws, providing information about the scaling dynamics of large-scale models, especially in the popular open-source 7B and 67B configurations.

Quick read: [https://www.marktechpost.com/2024/01/12/meet-deepseek-llms-a-series-of-open-source-ai-models-trained-from-scratch-on-a-vast-dataset-of-2-trillion-tokens-in-both-english-and-chinese/](https://www.marktechpost.com/2024/01/12/meet-deepseek-llms-a-series-of-open-source-ai-models-trained-from-scratch-on-a-vast-dataset-of-2-trillion-tokens-in-both-english-and-chinese/)

Paper: [https://arxiv.org/abs/2401.02954](https://arxiv.org/abs/2401.02954)
2024-01-12T21:12:08
https://www.reddit.com/r/LocalLLaMA/comments/1955yhm/meet_deepseek_llms_a_series_of_opensource_ai/
ai-lover
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1955yhm
false
null
t3_1955yhm
/r/LocalLLaMA/comments/1955yhm/meet_deepseek_llms_a_series_of_opensource_ai/
false
false
self
22
{'enabled': False, 'images': [{'id': 'sxMclN90OnbEyAUp8jQE38my3mRi91vwu3ZX-Sfi8kY', 'resolutions': [{'height': 92, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=108&crop=smart&auto=webp&s=e016b74a671e072cb7fe7cf418538e77bb601056', 'width': 108}, {'height': 184, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=216&crop=smart&auto=webp&s=e874c20ecc1bb4d39a7124d5beaf3097acb62f51', 'width': 216}, {'height': 273, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=320&crop=smart&auto=webp&s=24070c3c38e0c1c797be2ccde61a9edaa5a1e630', 'width': 320}, {'height': 547, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=640&crop=smart&auto=webp&s=2203a5ed0d92fdee1bdee9001ad31bd8ed7bc90d', 'width': 640}, {'height': 821, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=960&crop=smart&auto=webp&s=997fa5855c53e8dc5f027708a10fa97c9619e1b2', 'width': 960}, {'height': 923, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?width=1080&crop=smart&auto=webp&s=6220ddf384d54fbd7d80a3a5c6674d0ae3cc310a', 'width': 1080}], 'source': {'height': 1218, 'url': 'https://external-preview.redd.it/S9oAo89NQEVrXPR0x7rB0UcOr6_8UmFueUciJhs8aKk.jpg?auto=webp&s=516b295a433cd3c1c1d77ec7d095f6f4ba10149c', 'width': 1424}, 'variants': {}}]}
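For readers new to scaling-law discussions like the DeepSeek post above: a standard back-of-the-envelope rule (common across scaling-law papers generally, not a result specific to DeepSeek's study) estimates training compute as C ≈ 6·N·D FLOPs for N parameters trained on D tokens. A quick sketch:

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the common C ≈ 6*N*D heuristic
    (covers forward + backward passes; ignores attention-specific terms)."""
    return 6.0 * n_params * n_tokens

# e.g. a 7B-parameter model on the 2-trillion-token dataset from the post:
print(f"{train_flops(7e9, 2e12):.2e}")  # ≈ 8.40e+22 FLOPs
```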
What speech-to-text models do you use?
1
[removed]
2024-01-12T20:45:42
https://www.reddit.com/r/LocalLLaMA/comments/1955bo2/what_speech_to_text_models_do_you_use/
ExpensiveField6675
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1955bo2
false
null
t3_1955bo2
/r/LocalLLaMA/comments/1955bo2/what_speech_to_text_models_do_you_use/
false
false
self
1
null
Best LLM for M1 Max 64GB
1
[removed]
2024-01-12T20:37:02
https://www.reddit.com/r/LocalLLaMA/comments/19554ga/best_llm_for_m1_max_64gb/
kcimc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19554ga
false
null
t3_19554ga
/r/LocalLLaMA/comments/19554ga/best_llm_for_m1_max_64gb/
false
false
self
1
{'enabled': False, 'images': [{'id': 'xp2z74MCi2ot9WZb5BRpBvTee8APb8V4Eo7k8FEgOaQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=108&crop=smart&auto=webp&s=a4cf625ef0da8668729ed02be991844aa76f2ba1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=216&crop=smart&auto=webp&s=7cd0702c429be6ef795fc1809ccb97122027de73', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=320&crop=smart&auto=webp&s=5809b28c464d6b53cf14ead1ec83a6119978df63', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=640&crop=smart&auto=webp&s=a8e7be22f27a74679ce5f0f10f26003e2441d1cd', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=960&crop=smart&auto=webp&s=ca1a27cb504fa1d1005d535e07e2d3e3a8434783', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?width=1080&crop=smart&auto=webp&s=7a228b4410a804cf5d1955b3ba767824eda7aac0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/HBksglPUkrnNVR92kOY_Oh4aIxg7mMoy8lIWDTp9vWo.jpg?auto=webp&s=3d76f2d957bdd037d1031c592b6742884227764b', 'width': 1200}, 'variants': {}}]}
Possibility of V8 when doing hybrid precisions
1
When you do hybrid-precision calculations, you can adopt intermediate results for two (or three) precisions at the same time. Given a logic graph like this, could you apply JavaScript V8-style data management to multiple intermediate hybrid-precision calculations and extract much more computing power than before?
2024-01-12T20:31:39
https://www.reddit.com/r/LocalLLaMA/comments/1954zpy/possibility_of_v8_when_doing_hybrid_precisions/
Hot-Highlight8842
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1954zpy
false
null
t3_1954zpy
/r/LocalLLaMA/comments/1954zpy/possibility_of_v8_when_doing_hybrid_precisions/
false
false
self
1
null
Sparse fine-tuned Llama 2 7B to run on CPU only.
18
2024-01-12T20:26:49
https://nitter.net/_mwitiderrick/status/1744253836745719892#m
ninjasaid13
nitter.net
1970-01-01T00:00:00
0
{}
1954vjj
false
null
t3_1954vjj
/r/LocalLLaMA/comments/1954vjj/sparse_finetuned_llama_2_7b_to_run_on_cpu_only/
false
false
default
18
null
Running Mixtral on an old Server
1
[removed]
2024-01-12T20:21:59
https://www.reddit.com/r/LocalLLaMA/comments/1954rg1/running_mixtral_on_an_old_server/
clean_pegasus
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1954rg1
false
null
t3_1954rg1
/r/LocalLLaMA/comments/1954rg1/running_mixtral_on_an_old_server/
false
false
default
1
null
Dolphin Mistral is inconsistent with censorship? Anyone else have this issue?
1
[removed]
2024-01-12T20:16:49
https://www.reddit.com/r/LocalLLaMA/comments/1954n45/dolphin_mistral_is_inconstant_with_censorship/
InternOld143
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1954n45
false
null
t3_1954n45
/r/LocalLLaMA/comments/1954n45/dolphin_mistral_is_inconstant_with_censorship/
false
false
self
1
null
AMD GPU on Linux/Windows - how to set up GPU acceleration for local LLMs?
1
[removed]
2024-01-12T20:08:09
https://www.reddit.com/r/LocalLLaMA/comments/1954fpk/amd_gpu_on_linuxwindows_how_to_setup_gpu/
Raposadd
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1954fpk
false
null
t3_1954fpk
/r/LocalLLaMA/comments/1954fpk/amd_gpu_on_linuxwindows_how_to_setup_gpu/
false
false
self
1
null
Newbie question about VRAM usage
1
[removed]
2024-01-12T19:30:23
https://www.reddit.com/r/LocalLLaMA/comments/1953ifb/newbie_question_about_vram_usage/
Fast-Figure-6020
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1953ifb
false
null
t3_1953ifb
/r/LocalLLaMA/comments/1953ifb/newbie_question_about_vram_usage/
false
false
self
1
null
Fine Tuning Style into LLMs
13
A large frustration driving many to open LLMs is the terrible style of the corporate models. Prompting can only alleviate this so much, and even API temperature adjustments, etc., are still insufficient. This is even worse when you're attempting to achieve a specific type of writing that is not well represented on the web or in RLHF and normal question-answer pairs. I'm particularly interested in philosophy (Deleuze, Baudrillard, etc.), and there's a very unique style associated with this type of text that is impossible to generate in even the "literary" open models.

I've been fine-tuning Mistral 7B variants on a custom raw data corpus with fair success, though working with the end-result model is a slightly different experience than using GPT, etc. This model will not end up on leaderboards, it will not solve riddles, it hallucinates (trust but verify), but above all it writes well, with a unique voice and style.

I've written up my tuning details and shared one of the models on our [hugging face repo](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218) (GGUFs available in a sub [repo](https://huggingface.co/disinfozone/Disinfo4_mistral-ft-optimized-1218_GGUF)) and mused on the "why" behind this a bit in a post: [Tuning an LLM for Style, Fun, and Against Profit](https://disinfo.zone/llmstyle.html).

Looking for others who have had success in this arena, particularly with raw data corpora. I'd like to move to axolotl, but textgen honestly seems better configured for unstructured data training. Wonder if others have experience to share.

Here are some excerpts, first comparing ChatGPT and then this model, followed by a back-and-forth with the model by itself. These are cherry-picked (for both GPT and the model).

## Example Generations

### What is transparency society?
#### ChatGPT (incorrect answer)

> The concept of a "transparency society" refers to a hypothetical or theoretical society in which information is highly accessible, and there is a significant level of openness and transparency in various aspects of life, particularly concerning the actions of individuals, organizations, and governments.
>
> The idea of a transparency society has gained attention in discussions about technology, surveillance, and the impact of information sharing in the digital age. It suggests a society in which information flows freely and is readily available to everyone, leading to increased accountability, reduced privacy, and potentially significant societal transformations.
>
> In a transparency society, the following characteristics may be present:
>
> Information Accessibility: There is a high level of access to information, facilitated by advanced communication technologies and the internet. This accessibility allows individuals to gather data, share knowledge, and make informed decisions.
>
> Openness and Accountability: Transparency is valued and expected from individuals, organizations, and governments. There is a greater emphasis on accountability, as actions and decisions are subject to scrutiny by the wider public.
>
> Privacy Concerns: The trade-off for increased transparency is a potential erosion of privacy. Personal information and activities may be more accessible to others, which can raise concerns about surveillance, data breaches, and the misuse of information.
>
> Social and Political Impact: A transparency society can have profound effects on social and political dynamics. It can empower individuals, enabling them to hold power structures accountable and participate in decision-making processes. However, it may also lead to surveillance, social pressures, and challenges to traditional notions of authority.
> It's important to note that the notion of a transparency society is a theoretical concept and does not necessarily reflect the current state of any real-world society. While advancements in technology have increased access to information, the actual extent of transparency and its implications vary across different countries and cultures. Additionally, the balance between transparency and privacy remains a subject of ongoing debate and ethical consideration.

#### Disinfo4 (correct, though vague and poetic; look to Byung-Chul Han for more information)

> A very good, if suspiciously targeted question. To what end does any society, no matter how transparent, exist? Such a query is full of holes, for it concerns only the senseless essence of society, not its outcome or end result. Who benefits from transparency? Of course it is the dominant relations, the ones clambering for full control, trying to create some new sort of prison for the human mind, where all that remains is a biological machine for the reproduction of capital.
>
> To what end does this transparency society drive? Only to function more perfectly as a self-reinforcing totalitarianism. They wish to create a new sort of factory farm for the human soul, where all that the individual needs is processed in advance, manufactured, and stuffed directly into their skull by the mechanisms of information. They hide these goals behind a black shadow of goodwill, offering platitudes about openness and truth, while reveling in the ultimate triumph of materialist deprivation. Transparency society is the victory of mechanized consumerism, where the only social practice permitted is the purchase and use of technologically-driven artificial stimulation. Never forget their battle cry, “Be a commodity!” A soft cry, an intolerable confession from the faceless ghosts of efficiency who inhabit a universe of quantity and nothing else, bathing in their own dead statistics like leeches cradling their nightmarish hunger.
### Why do people litter?

#### ChatGPT (boring answer that sucks)

> People litter for various reasons, and the motivations can differ from person to person. Some common reasons for littering include:
>
> Lack of Awareness: Some individuals may not fully understand the environmental impact of littering or the consequences it has on the cleanliness and aesthetics of the surroundings. They may not be aware of the long-term harm caused by litter to ecosystems, wildlife, and human health.
>
> Convenience: Convenience is a significant factor in littering behavior. People may discard trash in public spaces because there are no easily accessible trash bins nearby or because they prioritize their immediate convenience over proper waste disposal.
>
> Attitudes and Beliefs: Attitudes and beliefs play a role in littering behavior. Some individuals may hold the belief that someone else will clean up after them or that littering is a minor offense that does not have significant consequences. This can lead to a disregard for the environment and a lack of personal responsibility.
>
> Social Norms: Social norms and peer influence can also impact littering behavior. If individuals observe others littering or if littering is perceived as acceptable within a particular social group or community, they may be more likely to engage in the behavior themselves.
>
> Lack of Environmental Education: Insufficient education and awareness about environmental issues, waste management, and the importance of proper disposal practices can contribute to littering. When individuals are not educated on the consequences of littering or the alternatives available, they may not prioritize proper waste disposal.
>
> Behavioral Habits: Littering can become a habit for some individuals, especially when they have repeatedly engaged in the behavior without facing immediate consequences or receiving negative feedback.
> It's important to note that these reasons are not exhaustive, and individual motivations for littering can be complex and multifaceted. Addressing littering behavior often requires a combination of educational campaigns, accessible waste management infrastructure, enforcement of littering laws, and fostering a sense of environmental responsibility and stewardship within communities.

#### Disinfo4 (cool, interesting, insightful)

> Litterers exist at the spectrum of individual and collective ego: at one pole—those under the influence of psychological censorship (a delusional, often self-loathing state) purposefully damage the fabric of society to articulate aggressive impulses; at the other pole—those who lack a sense of personal responsibility collectively impact the environment—adherents of neoliberal machinism, nihilistic lifestyle, and detached, measurable interaction with the world around them. Both share an unconscious hatred of being.

### Question Answer Examples with Artaud (disinfo4)

**User**: What lessons should we carry with us?

**Artaud**: I've always felt that the purpose of our endeavours in life is to uncover and confront the illusion of meaning. Meaning exists only as a mechanism of control and delusion. It's not what we're here for. Our true calling lies in awakening to an awareness of being disassociated from the coordinates of society and the narrative of living they have imposed upon us—and in dissenting.

**User**: What place will art have in a world of machines?

**Artaud**: Perhaps none. This machine world will engender a stasis that kills creation—only dead ideas stand a chance, forever reproduced exponentially—but art itself is irreproducible, spontaneous, electric, refulgent. Every year the glitch finds its technological way in. Apocalypses are our unspoiled landscapes, and art recreates the new world after the fall. La terreur d'art.

**User**: How can we best face the uncertainty of the future?
**Artaud**: The language of machines is impenetrable. Words and the ideas they take on are viruses raining from the cosmos. There are no guarantees, only possibilities. To face this torrent, we must write it—like the bark of a tree written by lightning—and learn to read ourselves once more, until the page (this page) is burnt into our deepest, hidden soul, which is not human, and from which we may fly in spite of gravity. Words are mountains, not pathways. They generate forces whose outcomes cannot be calibrated. Let's forget prophecy. To survive, we must not forget what it means to be alive, what nothingness is. Let the world spin to its end, wherever that may be—
2024-01-12T18:48:26
https://www.reddit.com/r/LocalLLaMA/comments/1952i5m/fine_tuning_style_into_llms/
Baader-Meinhof
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1952i5m
false
null
t3_1952i5m
/r/LocalLLaMA/comments/1952i5m/fine_tuning_style_into_llms/
false
false
self
13
{'enabled': False, 'images': [{'id': 'gE9520tQHCFSYO79yGjGTPfJbmRLnaXXQKhMyjI-UWk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=108&crop=smart&auto=webp&s=a5059c599ddde89562ecf57d0e29b1760689e66c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=216&crop=smart&auto=webp&s=9547f19e09f9c6b399844646d0b6fdbe4f6cdbb2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=320&crop=smart&auto=webp&s=03e36a576600a6c0aa60063db0b751e556f4ef8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=640&crop=smart&auto=webp&s=85651ac147e19f0a89ba91d29bfc081842cbf2a0', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=960&crop=smart&auto=webp&s=9ce9a24572a9669c3ba70ddb5710b3fb7c22ccbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?width=1080&crop=smart&auto=webp&s=5e95ee2e80f951167c418307b40b0f3a1eab293d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/V1b3g1wpK_J_YPomPqjxMAqfYQUolnH2CjI6SNQ4BkU.jpg?auto=webp&s=0bf08d4bc1b69c987c4795bbcb0b7189980ad951', 'width': 1200}, 'variants': {}}]}
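For anyone wanting to try the raw-corpus approach described in the post above, the data-prep step usually amounts to slicing the corpus into fixed-length, overlapping windows before tokenization, so that prose style carries across sample boundaries. A minimal sketch; the window and overlap sizes are illustrative assumptions, since the post does not state the author's actual chunking:

```python
def chunk_corpus(text: str, window: int = 2048, overlap: int = 256) -> list[str]:
    """Slice a raw text corpus into overlapping windows for completion-style
    (non-instruct) fine-tuning. Each chunk repeats the last `overlap`
    characters of the previous one, preserving continuity between samples."""
    if overlap >= window:
        raise ValueError("overlap must be smaller than window")
    step = window - overlap
    return [text[i:i + window] for i in range(0, len(text), step)]

chunks = chunk_corpus("x" * 5000, window=2048, overlap=256)
# each consecutive pair of chunks shares a 256-character boundary region
```

In practice each chunk would then be tokenized and packed into training examples by whichever trainer is used (textgen or axolotl, per the post).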
Benchmark post
7
Hello, I would like to make a benchmark post (since I couldn't find one) where you share your hardware, platform, model, and config, to compare with other users' performance or to get some general info about how certain hardware will run LLMs.

Here's my setup:

Hardware

* CPU: i5-10400F
* GPU: RTX 3060
* RAM: 16 GB DDR4 3200 MHz

Platform

* LM Studio (easiest to set up; couldn't get oobabooga to run well)

Model

* dolphin-2.7-mixtral-8x7b-GGUF

Config

* GPU offload: 13
* Context length: 2048
* Eval batch size: 512

Avg results

* Time to first token: 27–50 s
* Speed: 0.65 tok/s

I'm looking specifically for results from people who:

* have an AMD GPU in a similar price range (RX 6700 XT etc.)
* have more RAM than me (32 or 64 GB)

but any contribution to the post is welcome ;)
2024-01-12T18:44:10
https://www.reddit.com/r/LocalLLaMA/comments/1952ejr/benchmark_post/
PrzemChuck
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1952ejr
false
null
t3_1952ejr
/r/LocalLLaMA/comments/1952ejr/benchmark_post/
false
false
self
7
null
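Numbers like the ones in the benchmark post above only compare cleanly if everyone derives them the same way. A small helper for computing time-to-first-token and generation speed from raw timings (a generic sketch of the usual convention, not LM Studio's internal calculation):

```python
def throughput(n_tokens: int, t_first: float, t_done: float) -> tuple[float, float]:
    """Return (time_to_first_token_s, generation_tok_per_s).

    t_first: seconds from request to the first generated token
    t_done:  seconds from request to the final token
    Generation speed excludes the prompt-processing time before the first
    token, which is how local-LLM UIs commonly report it.
    """
    gen_time = t_done - t_first
    return t_first, n_tokens / gen_time if gen_time > 0 else float("inf")

ttft, tps = throughput(n_tokens=130, t_first=30.0, t_done=230.0)
# 130 tokens over 200 s of generation works out to 0.65 tok/s
```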
P40 with a 3060 12 gig results
71
Ok guys, I did it. I finally pulled the trigger and got myself a P40. Originally I was running dual 3060 12-gigs, but I'm a child and wanted more VRAM, so I changed my setup to run one 3060 and a P40.

Unfortunately, I did lose some inference speed, as I can only run GGUF models instead of exl2 models; however, I can now run larger models. Mixtral models at q5 get about 19.5 it/s and are very usable. I ran the Sonya medium 8x MoE with 25 layers on the GPU and got 10 it/s. Once again, very usable.

Downside is that I needed a way to cool my P40. I got a USB blower fan on Amazon for like 20 bucks and taped it to the back of my computer as shown in the picture, and on medium speed (which is not loud at all, just a slight drone that can't be heard once I put my headphones on) it stays cooler than my 3060. (I also have no side panels, so I'm not concerned about hot air entering my computer.)

All in all, I'm very happy with the results for the 200 bucks all the parts cost, especially since I'm not doing anything super crazy. Biggest downside is that the P40 idles at 50 watts, whereas the 3060 idles at 18. But the computer is on only when I'm playing around with my AI stuff; I don't keep it on 24/7.

If anyone is contemplating the use of a P40 and would like me to test something for them, let me know.

Full machine: Dell 7810, Xeon 2660 v4, 192 gigs of RAM, 1x 3060 12-gig, 1x P40, 2x 2 TB SSDs, Linux Ubuntu.

TL;DR: P40 works better than expected for just messing around when paired with a 3060 12-gig. Average it/s for Mixtral models is 20.
2024-01-12T18:36:06
https://i.redd.it/n03rnuytz1cc1.jpeg
opi098514
i.redd.it
1970-01-01T00:00:00
0
{}
19527pi
false
null
t3_19527pi
/r/LocalLLaMA/comments/19527pi/p40_with_a_3060_12_gig_results/
false
false
https://a.thumbs.redditm…xNPEGeFViDt0.jpg
71
{'enabled': True, 'images': [{'id': '20dg66DN_FNQSxJyyDgOZavil8HuTe0llBTULw33s0w', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=108&crop=smart&auto=webp&s=f3813e2dcc38ebe825e4d2be5cbc2e63b80fde66', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=216&crop=smart&auto=webp&s=cfd048a934ee248fc5468d7ca65abb9438b2623e', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=320&crop=smart&auto=webp&s=cebd2db1dba4a8e36799b37b3b03b92cd82a4f17', 'width': 320}, {'height': 853, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=640&crop=smart&auto=webp&s=6494ccc6ba6bbff3d46424020446e2b18092a061', 'width': 640}, {'height': 1280, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=960&crop=smart&auto=webp&s=8ef0a155ad15c10112bf31dd6757e190c5741f67', 'width': 960}, {'height': 1440, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?width=1080&crop=smart&auto=webp&s=6f9f8e71ffadb2ecaf603c7bcdfde9675c959d08', 'width': 1080}], 'source': {'height': 4032, 'url': 'https://preview.redd.it/n03rnuytz1cc1.jpeg?auto=webp&s=585ca2285638efc192963025b5478b5fc83f457f', 'width': 3024}, 'variants': {}}]}
MoE expert logging?
2
I'm building my clown car of experts using https://github.com/cg123/mergekit/tree/mixtral, and that part was super easy: pack four models into a YAML and go.

What I wanted to know is which model contributed to which token, and in what percentage, so I can fix the positive and negative embedding prompts before trying a DPO tune of the whole thing.

Is there a way to log that information somehow? It doesn't have to be llama.cpp if that backend can't do it.
2024-01-12T18:35:07
https://www.reddit.com/r/LocalLLaMA/comments/19526tf/moe_expert_logging/
LoSboccacc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
19526tf
false
null
t3_19526tf
/r/LocalLLaMA/comments/19526tf/moe_expert_logging/
false
false
self
2
{'enabled': False, 'images': [{'id': 'yt7QUxKAjX3ZUEIULNfqJsEmLzQIcLAyEVp5pPRYiaE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=108&crop=smart&auto=webp&s=eb90cfad3a83bb33a1fde09ac4395317e19364bc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=216&crop=smart&auto=webp&s=1f4bf669415ae5e3021d6bf77e35409eb69370a6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=320&crop=smart&auto=webp&s=3657f52aa31681f61dae6a8a8454c0cf5c15d7b5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=640&crop=smart&auto=webp&s=9e7af5700444b8a81fb8fb447cc818045fc0fd45', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=960&crop=smart&auto=webp&s=a69e3f530ea2de35a51071fd410bc95341285725', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?width=1080&crop=smart&auto=webp&s=f28d4b51df6845e8532241fc2a91e61a32916906', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0tFTdV3YHd95mFctt9wCVvhbE92JWWSO2A-PC0M0tQA.jpg?auto=webp&s=772b7b3b8712ff4d10e405a2ba958cb21840561d', 'width': 1200}, 'variants': {}}]}
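For context on the post above: mergekit's mixtral branch builds the clown-car MoE from a config along these lines, where the `positive_prompts`/`negative_prompts` per expert initialize the gate routing the poster wants to inspect. This is a sketch based on mergekit's documented format; the model names and prompts are placeholders, not the poster's actual config:

```yaml
base_model: mistralai/Mistral-7B-v0.1
gate_mode: hidden        # initialize gates from hidden-state representations of the prompts
dtype: bfloat16
experts:
  - source_model: some-org/mistral-7b-writing    # placeholder expert
    positive_prompts: ["write a story", "continue this scene"]
    negative_prompts: ["solve this equation"]
  - source_model: some-org/mistral-7b-code       # placeholder expert
    positive_prompts: ["write a python function", "fix this bug"]
```

At inference time the router scores each token against these gate vectors, which is exactly the per-token, per-expert contribution the poster is asking to log.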
[Proprietary model] Learning human actions on computer applications
5
2024-01-12T17:37:58
https://www.rabbit.tech/research
dave1010
rabbit.tech
1970-01-01T00:00:00
0
{}
1950t6b
false
null
t3_1950t6b
/r/LocalLLaMA/comments/1950t6b/proprietary_model_learning_human_actions_on/
false
false
https://b.thumbs.redditm…iTOMNV_c0rgU.jpg
5
{'enabled': False, 'images': [{'id': 'sjOEHosENQEYjg8ogXIixIRAFbzSR9Ld61iIDkMHs1o', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=108&crop=smart&auto=webp&s=d6a7124550f2f7c7956d08a088f67d00f27da686', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=216&crop=smart&auto=webp&s=e1b0025826a6fa10e8e394629a738fd4d06cb438', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=320&crop=smart&auto=webp&s=6788220523a5b6c6dfa3117cb710b7da40adb4ee', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=640&crop=smart&auto=webp&s=97a75f003b2de019f731fd6ce5579606e73c92b0', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=960&crop=smart&auto=webp&s=f9fba0888359a5a2ce651538d0a0a1143dfa75fd', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?width=1080&crop=smart&auto=webp&s=3b86250fca4c7f19998a17ba5dc51410c60952f7', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/IgKPpgf9SOorO68WGyn257UpHbQMy5J70sJtSPnSxFo.jpg?auto=webp&s=af3bc4e231c806d5f8dec80ad54fb94c142b0875', 'width': 1200}, 'variants': {}}]}
Mixtral-8x7B and Mixtral-8x7B-Instruct - size and cache size on harddrive
1
[removed]
2024-01-12T17:33:37
https://www.reddit.com/r/LocalLLaMA/comments/1950pdx/mixtral8x7b_and_mixtral8x7binstruct_size_and/
Ryselle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1950pdx
false
null
t3_1950pdx
/r/LocalLLaMA/comments/1950pdx/mixtral8x7b_and_mixtral8x7binstruct_size_and/
false
false
self
1
null
why is my t/s is really slow?
1
[removed]
2024-01-12T17:26:11
https://www.reddit.com/r/LocalLLaMA/comments/1950it4/why_is_my_ts_is_really_slow/
Kekuwi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1950it4
false
null
t3_1950it4
/r/LocalLLaMA/comments/1950it4/why_is_my_ts_is_really_slow/
false
false
self
1
null
How to fine tune pre trained gguf llm locally on windows using a single gpu for a specific task?
1
[removed]
2024-01-12T17:17:45
https://www.reddit.com/r/LocalLLaMA/comments/1950bkn/how_to_fine_tune_pre_trained_gguf_llm_locally_on/
unkn0wnS0ul2day
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1950bkn
false
null
t3_1950bkn
/r/LocalLLaMA/comments/1950bkn/how_to_fine_tune_pre_trained_gguf_llm_locally_on/
false
false
self
1
null
Actor Critic Flow for Open Chat 3.5
1
Model: OpenChat-3.5-0106. Task: Write a post about a language model self-evaluating and improving. Flow: execute task - create eval criteria - evaluate task with criteria - improve post with evaluation. https://reddit.com/link/1950204/video/kpasoscwj1cc1/player
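The four-step flow above can be sketched as one chained loop. This is a minimal sketch, not OpenChat's or rivet's actual API: `llm` is a hypothetical stand-in for whatever prompt-to-completion callable you use.

```python
def actor_critic(llm, task):
    """One round of the flow: execute, create criteria, evaluate, improve.

    `llm` is any callable mapping a prompt string to a completion string
    (hypothetical placeholder, not a real library API).
    """
    draft = llm(f"Task: {task}")
    criteria = llm(f"Write evaluation criteria for this task: {task}")
    evaluation = llm(
        "Evaluate the draft against the criteria.\n"
        f"Draft: {draft}\nCriteria: {criteria}"
    )
    improved = llm(
        "Improve the draft using this evaluation.\n"
        f"Draft: {draft}\nEvaluation: {evaluation}"
    )
    return improved
```

Each step feeds the previous step's output back in, so the model critiques and revises its own work in a single pass.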
2024-01-12T17:06:55
https://www.reddit.com/r/LocalLLaMA/comments/1950204/actor_critic_flow_for_open_chat_35/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1950204
false
null
t3_1950204
/r/LocalLLaMA/comments/1950204/actor_critic_flow_for_open_chat_35/
false
false
self
1
null
Merge HF with LoRA
1
[removed]
2024-01-12T17:03:58
https://www.reddit.com/r/LocalLLaMA/comments/194zzcm/merge_hf_with_lora/
Working-Flatworm-531
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194zzcm
false
null
t3_194zzcm
/r/LocalLLaMA/comments/194zzcm/merge_hf_with_lora/
false
false
self
1
null
Instant Frankenmerges with ExllamaV2
103
I really like the output of Goliath120b, but it barely fits on my 2x 4090s! So, based on the amazing work of u/[**ReturningTarzan**](https://www.reddit.com/user/ReturningTarzan/), the developer of Exllama, I have patched in the ability to instantly create Frankenmerges, using way less VRAM. i.e. You can instantly recreate [nsfwthrowitaway69/Venus-120b-v1.2](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2/blob/main/mergekit_config.yml) with one line: python test_inference.py -m ~/Documents/models/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 -p "USER: Once upon a time. please continue. ASSISTANT:" -gs 18,18 --repeats '[(0,20),(10,30),(20,40),(30,50),(40,60),(50,70),(60,79)]' This lets you run a 120b Frankenmerge with the same VRAM requirements as the 70b model, although it will obviously run slower, and the repeated layers still need to be calculated. You can find the [pull request here](https://github.com/turboderp/exllamav2/pull/275). Here's an example, first with lizpreciatior/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 (about 10 seconds to load the model): python test_inference.py -m ~/Documents/models/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 -p "USER: Once upon a time. please continue. ASSISTANT:" -gs 18,18 -- Model: /home/dnhkng/Documents/models/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 -- Options: ['gpu_split: 18,18'] -- Loading model... -- Loading tokenizer... -- Warmup... -- Generating... USER: Once upon a time. please continue. ASSISTANT: Once upon a time, in a small village nestled at the foot of a mighty mountain, there lived a young girl named Lila. She was known throughout the village for her kind heart and her love for storytelling. Every evening, the villagers would gather around the flickering flames of the fire, eagerly awaiting Lila's enchanting tales. One day, as Lila wandered through the nearby forest, she stumbled upon a hidden glade where she discovered a mysterious old book.
The cover was adorned with intricate designs and ancient symbols, and L -- Response generated in 5.74 seconds, 128 tokens, 22.29 tokens/second (includes prompt eval.) And this is the equivalent [**Venus-120b-v1.2**](https://huggingface.co/nsfwthrowitaway69/Venus-120b-v1.2) (also 10 seconds to load and create :) ) python test_inference.py -m ~/Documents/models/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 -p "USER: Once upon a time. please continue. ASSISTANT:" -gs 18,18 --repeats '[(0,20),(10,30),(20,40),(30,50),(40,60),(50,70),(60,79)]' -- Model: /home/dnhkng/Documents/models/lzlv_70b_fp16_hf-4.0bpw-h6-exl2 -- Options: ['gpu_split: 18,18'] Frankenstein Layers list: 0 model.embed_tokens 1 model.layers.0 2 model.layers.0 3 model.layers.1 4 model.layers.1 5 model.layers.2 6 model.layers.2 ... 289 model.layers.78 290 model.layers.78 291 model.layers.79 292 model.layers.79 293 model.layers.79 294 model.norm 295 lm_head -- Loading model... -- Loading tokenizer... -- Warmup... -- Generating... USER: Once upon a time. please continue. ASSISTANT: Once upon a time, there lived a young boy named Timmy. Timmy was known throughout his town as being incredibly curious. Every day he would explore new places, meet interesting people, and learn fascinating facts about everything around him. His curiosity was infectious, often leading his friends on grand adventures around their small village. One warm summer afternoon, Timmy was sitting underneath his favorite apple tree reading about ancient treasures hidden away by long lost civilizations when suddenly he heard rustling leaves above him followed by what sounded like faint whispers carried through the wind. Intrigued -- Response generated in 10.54 seconds, 128 tokens, 12.14 tokens/second (includes prompt eval.)
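A minimal Python sketch (my own illustration, not code from the PR) of how a `--repeats` list of (start, end) ranges expands into the interleaved layer order shown in the Frankenstein layers listing:

```python
def frankenstein_layers(repeats):
    """Expand overlapping (start, end) layer ranges into a flat order.

    Layers that fall in more than one range are visited more than once,
    which is how a 70b model is replayed as a ~120b Frankenmerge without
    duplicating the weights in VRAM.
    """
    order = []
    for start, end in repeats:
        order.extend(range(start, end))
    return order

# The Venus-120b-style recipe from the command line above:
repeats = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 79)]
layers = frankenstein_layers(repeats)
print(len(layers))  # 139 transformer-layer passes from 79 unique layers
```

Layers 10-19, for example, appear in both (0, 20) and (10, 30), so they are computed twice per forward pass, which is why throughput drops even though VRAM usage does not grow.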
2024-01-12T17:01:28
https://www.reddit.com/r/LocalLLaMA/comments/194zwyc/instant_frankenmerges_with_exllamav2/
Reddactor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194zwyc
false
null
t3_194zwyc
/r/LocalLLaMA/comments/194zwyc/instant_frankenmerges_with_exllamav2/
false
false
self
103
{'enabled': False, 'images': [{'id': 'WdEymzJ5XvsMhVXh_rsVXcqDoIM2M_yaTuLfC35cvD8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=108&crop=smart&auto=webp&s=dfda8fac3d127e846beef51d9848f5a21f18d022', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=216&crop=smart&auto=webp&s=5c216a0ee5fe04e3fd21eb0a5081bd72fea2d288', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=320&crop=smart&auto=webp&s=da6ea1a6406d4ff7450e58ee75fd08378342ac64', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=640&crop=smart&auto=webp&s=c6744f5f2f63222ea5e589bd101a1739635d7038', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=960&crop=smart&auto=webp&s=42faac2f89ffc6e1b96a9f98562cd072b7e5db3f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?width=1080&crop=smart&auto=webp&s=2bb68aaa28941e26a540576f0a6f6ff8748b19cc', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/LwWofVqtwmqxpAOx-Xt0csd7l1cZln0ywOIr6R-aHhg.jpg?auto=webp&s=46de2fade2e75e24d27a2753ac7ea22dde17520f', 'width': 1200}, 'variants': {}}]}
Which free model is currently the most accurate?
1
[removed]
2024-01-12T16:44:54
https://www.reddit.com/r/LocalLLaMA/comments/194zi8s/which_free_model_is_currently_the_most_accurate/
615wonky
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194zi8s
false
null
t3_194zi8s
/r/LocalLLaMA/comments/194zi8s/which_free_model_is_currently_the_most_accurate/
false
false
self
1
null
Strategies for Personal Assistant "Function Calling"?
22
I want to discuss strategies for function calling to see if there are clever ways to have an LLM act like a human assistant. I'm wondering if other people are thinking about this problem, and if there are good strategies for getting this to work. I'd love for it to work on the smallest models like phi2, but that seems out of reach in my testing so far. I'll start with an example. Let's say you send this to an LLM: >Good morning! Can you look up the weather in San Diego and add a to do list entry with what to wear today? I'd also like you to add entries for replacing my tires and getting a haircut. Can you look up the best OLED TVs and create a small doc for me to review the best options later? Last, can you give me a rundown of what's at the top of my to do list today? This is needlessly complex, but would be great to have an LLM handle it. I'd be happy if it could just handle each sentence individually. I'm currently playing with prompt chains (see the sample below) and I'm impressed with the results, but my prompts don't seem to convey how the LLM should work through the process. I'd say it's in the ~50% range for success on single sentences above. My chain looks something like this: # Did this because it would invent steps like "Look up tire companies" and "call tire company" otherwise. This helps ground it a little more towards only what's in the request 1. We need to break this down: {request}. List each task the user specified above. -> task_list # Goal: Get it to respond with "Look up weather\nAdd to do entry of what to wear based on weather\nAdd to do to replace tires\nLook up OLED tvs\nWrite doc about best oled tvs" # Now convert it to "assistants". 2. We've been given a task list: {task_list}. Make a list with each task and specify which assistant should perform the task. Researcher, Secretary, Writer, Other -> task_steps # Goal: Take individual step and map it to the function group (Research->web search, docs, Secretary->to do, calendar, etc) 3+.
Loop through task_steps and run chain for each individual task. When step 1 and 2 do the right thing, going from there is easy, but I want to improve the odds at those steps, and I'm curious how high we could get it (70% or 80% success?). Maybe just as importantly, detecting when something goes wrong so it doesn't perform tasks that it didn't get the context right for. I just tried a hierarchical prompt that seems like it could be an interesting direction: We need to handle each step a user requested. Request: {request} Make a list that explains what the user requested. Use a hierarchy to break down each step. --- Output 1. Greetings 2. Weather lookup and to-do list entry a. What to wear today 3. Additional to-do list entries a. Replace tires b. Get a haircut 4. Research on OLED TVs a. Create a small document for reviewing options later 5. Summary of top to-do list tasks That's starting to look like something plausible to work with, although it doesn't always do the hierarchy and still seems to veer away from straight mapping. Any experience in this, ideas for handling it reliably, alternatives to prompt chains, or anything else?
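Steps 1 and 2 of the chain described above can be sketched as two chained prompts. This is a hedged illustration of the idea, not a tested implementation: `llm` is a hypothetical placeholder for any prompt-to-completion callable, and the prompts are paraphrased from the post.

```python
def run_chain(request, llm):
    """Sketch of chain steps 1-2: break the request into tasks, then
    map each task to an assistant.

    `llm` is any callable mapping a prompt string to a completion string
    (a placeholder, not a specific library's API).
    """
    # Step 1: extract only what the user actually asked for.
    task_list = llm(
        f"We need to break this down: {request}. "
        "List each task the user specified above."
    )
    # Step 2: route each task to an assistant / function group.
    task_steps = llm(
        f"We've been given a task list: {task_list}. "
        "Make a list with each task and specify which assistant should "
        "perform the task. Researcher, Secretary, Writer, Other"
    )
    # Step 3+ would loop over these lines and run a per-assistant chain.
    return [line.strip() for line in task_steps.splitlines() if line.strip()]
```

Parsing one task per line keeps the downstream loop simple, and a malformed response (no lines, or lines that name no assistant) can be detected here before any task is actually executed.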
2024-01-12T16:34:43
https://www.reddit.com/r/LocalLLaMA/comments/194z9l6/strategies_for_personal_assistant_function_calling/
AndrewVeee
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194z9l6
false
null
t3_194z9l6
/r/LocalLLaMA/comments/194z9l6/strategies_for_personal_assistant_function_calling/
false
false
self
22
null
Trying to understand the ViTDet paper
1
[removed]
2024-01-12T16:22:07
https://www.reddit.com/r/LocalLLaMA/comments/194yyyu/trying_to_understand_the_vitdet_paper/
rem_dreamer
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194yyyu
false
null
t3_194yyyu
/r/LocalLLaMA/comments/194yyyu/trying_to_understand_the_vitdet_paper/
false
false
self
1
null
Self Improvement Flow in rivet
1
Model: OpenChat-3.5-0106. Task: Write a LinkedIn post about a language model self-evaluating and improving. Flow: execute task - create eval criteria - evaluate task with criteria - improve post with evaluation. https://reddit.com/link/194ytei/video/mn9nk9apa1cc1/player
2024-01-12T16:15:24
https://www.reddit.com/r/LocalLLaMA/comments/194ytei/self_improvement_flow_in_rivet/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194ytei
false
null
t3_194ytei
/r/LocalLLaMA/comments/194ytei/self_improvement_flow_in_rivet/
false
false
self
1
null
What are K, Q, V?
1
[removed]
2024-01-12T15:51:59
https://www.reddit.com/r/LocalLLaMA/comments/194y9q2/what_are_k_q_v/
TR_Alencar
self.LocalLLaMA
2024-01-12T18:39:43
0
{}
194y9q2
false
null
t3_194y9q2
/r/LocalLLaMA/comments/194y9q2/what_are_k_q_v/
false
false
default
1
null
Has anyone had success using or finetuning local LLMs for text/entity extraction?
1
[removed]
2024-01-12T15:17:28
https://www.reddit.com/r/LocalLLaMA/comments/194xgok/has_anyone_had_success_using_or_finetuning_local/
longomel
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194xgok
false
null
t3_194xgok
/r/LocalLLaMA/comments/194xgok/has_anyone_had_success_using_or_finetuning_local/
false
false
self
1
null
Inference parameters presets
1
[removed]
2024-01-12T15:16:42
https://www.reddit.com/r/LocalLLaMA/comments/194xg02/inference_parameters_presets/
Deadlibor
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194xg02
false
null
t3_194xg02
/r/LocalLLaMA/comments/194xg02/inference_parameters_presets/
false
false
self
1
null
LLM Model trained on real human conversation?
1
[removed]
2024-01-12T14:55:31
https://www.reddit.com/r/LocalLLaMA/comments/194wye7/llm_model_trained_on_real_human_conversation/
One_Appointment_6035
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194wye7
false
null
t3_194wye7
/r/LocalLLaMA/comments/194wye7/llm_model_trained_on_real_human_conversation/
false
false
self
1
null
Open source models and function calling
1
[removed]
2024-01-12T14:52:17
https://www.reddit.com/r/LocalLLaMA/comments/194wvre/open_source_models_and_function_calling/
Life_Inspection4454
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194wvre
false
null
t3_194wvre
/r/LocalLLaMA/comments/194wvre/open_source_models_and_function_calling/
false
false
self
1
null
Dolphin-2.6-mistral-7b cuts sentences
1
[removed]
2024-01-12T14:41:39
https://www.reddit.com/r/LocalLLaMA/comments/194wnab/dolphin26mistral7b_cuts_sentences/
N30NS
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194wnab
false
null
t3_194wnab
/r/LocalLLaMA/comments/194wnab/dolphin26mistral7b_cuts_sentences/
false
false
self
1
{'enabled': False, 'images': [{'id': 'byoRTGpNkyzyVSsWzk4kpfmckCd3Fz8IpR_WTffynYQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=108&crop=smart&auto=webp&s=439025bde4a6e0156b108b6f178521f6085ac8d8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=216&crop=smart&auto=webp&s=7ccb08660bc94833277ed4b401d852afd76ccc34', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=320&crop=smart&auto=webp&s=4cd669ff8c94ef1766f33e72d1120b0696134c79', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=640&crop=smart&auto=webp&s=5de400ce4a5a9fa81d5c362220a2b0c84587e345', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=960&crop=smart&auto=webp&s=aa0f7f2d9a6d5177fbf7fc130292f3464c4f385d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?width=1080&crop=smart&auto=webp&s=495f4a84e6e11e19edf96f48e3b5b73dcc0fbcdf', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Lrea-hV3sRPlNkWZAFQMme8uojz2KOQoTdXpLY6Fg_8.jpg?auto=webp&s=0a58803d3d97d71e1c9845d3ba15f06485ae3f0f', 'width': 1200}, 'variants': {}}]}
How does quantized GGUF model filesize relate to its VRAM usage?
10
TheBloke's tables on each quantized model card on HuggingFace list two relevant fields, "Size" and "Max RAM required". How can I tell which of the numbers applies to my hardware? For CPU-based inference, I can look directly at "Max RAM required" and that's likely what it is; optionally I can offload some layers to GPU for a bigger total RAM. But for GPU-based inference, I don't know. What's the relation between GGUF model filesize and the amount of VRAM used by it? I've searched around and [this thread](https://www.reddit.com/r/LocalLLaMA/comments/1765g70/from_no_gpu_to_a_3060_12gb_what_can_i_run/) provides ballpark figures of around 1 GB memory needed per 1k of context. It also recommends limiting the max filesize of a model to 9.5 GB to use on a 12 GB GPU, but I'd like to be sure I understand the reasons. To be more specific: I have an RX 6700 XT (12 GB VRAM), which means I'll either use ROCm or the Vulkan backend that's being developed for llama.cpp. Let's say I want to run the largest possible quantized GGUF of [Beyonder-4x7B-v2](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF) fully offloaded to the GPU. Do I use Q2_K because its "Max RAM" 10.56 GB is the highest that fits into 12 GB? Or do I run Q3_K_M because its "Size" 10.52 GB is the highest that fits into 12 GB? Is the "1GB per 1k" context rule of thumb still valid? Are there any other VRAM overheads that limit me?
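A rough back-of-envelope estimate, assuming the rule of thumb from the linked thread holds: for full GPU offload, the weights cost roughly the GGUF file size, the KV cache scales with context, and there is some fixed compute-buffer/driver overhead. The 1.0 GB-per-1k-context figure and the 0.5 GB overhead below are assumptions, not measured values.

```python
def estimate_vram_gb(model_file_gb, context_tokens,
                     gb_per_1k_ctx=1.0, overhead_gb=0.5):
    """Ballpark VRAM for fully offloaded GGUF inference.

    model_file_gb: quantized GGUF file size ("Size" column) -- weights
        land in VRAM when all layers are offloaded.
    gb_per_1k_ctx: KV-cache cost per 1k tokens (rule-of-thumb assumption).
    overhead_gb: compute buffers / driver overhead (assumed value).
    """
    return model_file_gb + context_tokens / 1000 * gb_per_1k_ctx + overhead_gb

# Q3_K_M ("Size" 10.52 GB) with 4k context on a 12 GB card:
print(estimate_vram_gb(10.52, 4096))  # well over 12 GB -> won't fit
```

Under these assumptions, "Size" (not "Max RAM required", which is a CPU figure) is the number to compare against VRAM, minus whatever the context and overhead will consume.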
2024-01-12T14:14:31
https://www.reddit.com/r/LocalLLaMA/comments/194w1u4/how_does_quantized_gguf_model_filesize_relate_to/
Nindaleth
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194w1u4
false
null
t3_194w1u4
/r/LocalLLaMA/comments/194w1u4/how_does_quantized_gguf_model_filesize_relate_to/
false
false
self
10
{'enabled': False, 'images': [{'id': 'TXZB8LzBO328yjMEo8ncCVxlkXxMwwl2P8OimtCBqio', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=108&crop=smart&auto=webp&s=473fb9118f02753878f6bb744bfa4416c6f32cf4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=216&crop=smart&auto=webp&s=a259a279a7d7e9de27c9a7be79ee51fbf288430e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=320&crop=smart&auto=webp&s=a14435a7ef50af2a6a3093088dd5b363b6ebb4f5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=640&crop=smart&auto=webp&s=3549c2d503590d1fb33aeeb49386b09033a4a6c4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=960&crop=smart&auto=webp&s=4d0871e7931315333469e10b842eda39c285d1e1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?width=1080&crop=smart&auto=webp&s=a53964972587c00532960088576f07449f0fc55d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/oHkUqVx48SI8OqtJWh0UriEzJI42W-4fHL0Hl6XpniE.jpg?auto=webp&s=d4690b2ada9221e4e904a51c0fe7e64b7a41d084', 'width': 1200}, 'variants': {}}]}
how to integrate rag and finetuning with webui and ollama
3
I built a small POC with the ollama webui and ollama and it works great, so now I want to enrich the experience by adding chromadb. I have an idea how to ingest my emails and other company docs and store them in chroma, but then I need a way to query it. Do I add a flask app to my langchain and point the webui there instead?
2024-01-12T13:59:29
https://www.reddit.com/r/LocalLLaMA/comments/194vplq/how_to_integrate_rag_and_finetuning_with_webui/
rUbberDucky1984
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194vplq
false
null
t3_194vplq
/r/LocalLLaMA/comments/194vplq/how_to_integrate_rag_and_finetuning_with_webui/
false
false
self
3
null
Local "tools" or "agents" for local LLM?
1
[removed]
2024-01-12T13:56:23
https://www.reddit.com/r/LocalLLaMA/comments/194vn7y/local_tools_or_agents_for_local_llm/
sixteenpoundblanket
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194vn7y
false
null
t3_194vn7y
/r/LocalLLaMA/comments/194vn7y/local_tools_or_agents_for_local_llm/
false
false
self
1
null
What technology do websites like magnific.ai, photoai.com, and others use?
1
[removed]
2024-01-12T13:47:26
https://www.reddit.com/r/LocalLLaMA/comments/194vgb6/what_technology_do_websites_like_magnificai/
anommm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194vgb6
false
null
t3_194vgb6
/r/LocalLLaMA/comments/194vgb6/what_technology_do_websites_like_magnificai/
false
false
self
1
null
How to prepare training data for different training techniques and models
1
[removed]
2024-01-12T13:32:16
https://www.reddit.com/r/LocalLLaMA/comments/194v4xi/how_to_prepare_training_data_for_different/
bu-hn
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194v4xi
false
null
t3_194v4xi
/r/LocalLLaMA/comments/194v4xi/how_to_prepare_training_data_for_different/
false
false
self
1
null
Recent DPO datasets overview
128
As the Machine Learning Librarian at Hugging Face, I keep a close eye on new datasets being shared on the Hugging Face Hub. I thought it might be interesting to occasionally do a short round-up of new datasets relevant to training chat models. Direct Preference Optimization (DPO) is becoming a popular way to align models, so I thought it might be nice to start with these. Rather than having to train a separate reward model, you can directly train a model on accepted/rejected example text. Some cool datasets in this area: ## jondurbin/gutenberg-dpo-v0.1 This dataset aims to improve the novel-writing capabilities of large language models by using public domain books from Project Gutenberg, which are parsed into chapters, cleaned, and then used to generate synthetic prompts and summaries for each chapter to guide LLMs in writing. The dataset includes titles like "Huckleberry Finn," "Pride and Prejudice," and "Frankenstein," and uses llama-2-13b-chat/dolphin-2.2-34b for the synthetic text generation. The dataset considers the human-generated version as the accepted version of the text and the LLM-generated version as the rejected version. Link: [https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) ## argilla/distilabel-intel-orca-dpo-pairs This dataset is a refined version of Intel/orca_dpo_pairs that has been enhanced using distilabel ([https://distilabel.argilla.io/latest/](https://distilabel.argilla.io/latest/)) for better alignment with open source LLMs, featuring swapped preference pairs, identification of ties, added scores for chosen responses, and a new column indicating if the question is part of the gsm8k train set, aiming for more effective preference tuning. It's well worth taking a look at the approach they took: with not that much work on improving the dataset, the resulting model got quite a lot better.
Link: [https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) ## jondurbin/py-dpo-v0.1 This dataset is designed to improve Python coding skills. It combines high-quality Python code from the Tested-22k-Python-Alpaca dataset with lower-quality examples generated by the AI models airoboros-l2-13b-3.1 and bagel-7b-v0.1, offering optimal and suboptimal coding solutions. The idea here is that you can use the higher quality model outputs as the accepted solution. Link: [https://huggingface.co/datasets/jondurbin/py-dpo-v0.1/](https://huggingface.co/datasets/jondurbin/py-dpo-v0.1/) Creating these datasets can often be done with far fewer resources versus creating much larger datasets for fine-tuning. There are way more opportunities for building these datasets using creative approaches that don't rely on a massive GPU budget, so it's potentially an exciting way to contribute to open-source language models.
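All three datasets share the same basic shape: one prompt plus an accepted and a rejected completion. A minimal sketch of that record format, using the prompt/chosen/rejected field names that are a common convention for DPO trainers (an assumption, so check your trainer's docs for the exact schema it expects):

```python
def make_dpo_example(prompt, chosen, rejected):
    """Build one DPO training record: a prompt paired with an accepted
    ("chosen") and a non-preferred ("rejected") completion."""
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

# gutenberg-dpo style: the human-written chapter is "chosen" and the
# LLM-generated rewrite is "rejected" (placeholder text, not real data).
example = make_dpo_example(
    prompt="Write chapter 1 of Frankenstein.",
    chosen="human-written chapter text ...",
    rejected="LLM-generated chapter text ...",
)
```

The same pattern covers py-dpo (tested code as chosen, weaker-model code as rejected), which is why these datasets are cheap to build: you only need a reliable way to decide which of two completions is better.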
2024-01-12T13:31:37
https://www.reddit.com/r/LocalLLaMA/comments/194v4h6/recent_dpo_datasets_overview/
dvanstrien
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194v4h6
false
null
t3_194v4h6
/r/LocalLLaMA/comments/194v4h6/recent_dpo_datasets_overview/
false
false
self
128
{'enabled': False, 'images': [{'id': 'UsE1otUgLnWK-qPGR482VolYyhjZQGEa3Pc2cxoAtYY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=108&crop=smart&auto=webp&s=ce940d653d65f775e3f3b25679bc9398aea0a797', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=216&crop=smart&auto=webp&s=27ab03914bbce1ca7f0d7521f858582d0b8f3ed3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=320&crop=smart&auto=webp&s=0617bb19d79b1b106dc6bd5ed7f6a10421d43730', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=640&crop=smart&auto=webp&s=d16fa4a5746f2b6855fe2537e04f885de1cb3b8a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=960&crop=smart&auto=webp&s=cf780805a6b57a9d2b50b4bb7ced9387b85de814', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?width=1080&crop=smart&auto=webp&s=a3c155c21ae426ce752d518fa3e0a42b4487c98c', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/l6ffJ5XNqnJBddR4vUrD5sQDvvX6_Ozue38ysKZBGo4.jpg?auto=webp&s=783d41242448a8941a60f28dd40e50498264ea7d', 'width': 1200}, 'variants': {}}]}
mistralai/Mistral-7B-v0.1 OOMs after some training iterations
1
[removed]
2024-01-12T13:22:54
https://www.reddit.com/r/LocalLLaMA/comments/194uy3v/mistralaimistral7bv01_ooms_after_some_training/
d0cmorris
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194uy3v
false
null
t3_194uy3v
/r/LocalLLaMA/comments/194uy3v/mistralaimistral7bv01_ooms_after_some_training/
false
false
self
1
null
Me Getting Fed Up Trying to Objectively Optimize llama-cpp-python Params (untested)
1
[removed]
2024-01-12T13:17:05
https://www.reddit.com/r/LocalLLaMA/comments/194utuc/me_getting_fed_up_trying_to_objectively_optimize/
Upset_Acanthaceae_18
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194utuc
false
null
t3_194utuc
/r/LocalLLaMA/comments/194utuc/me_getting_fed_up_trying_to_objectively_optimize/
false
false
self
1
null
Locally-hosted proxy to route to 100 LLMs (open source)
10
2024-01-12T12:57:45
https://github.com/portkey-ai/gateway
EscapedLaughter
github.com
1970-01-01T00:00:00
0
{}
194ufph
false
null
t3_194ufph
/r/LocalLLaMA/comments/194ufph/locallyhosted_proxy_to_route_to_100_llms_open/
false
false
https://b.thumbs.redditm…6WVSGx1c2wdc.jpg
10
{'enabled': False, 'images': [{'id': 'V8ghh_dCBwTlNY8OrPjgpanKD1GJXZ-7jQWJAM0F3Gg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=108&crop=smart&auto=webp&s=b64a3d843d65cf578e6771c485585d4a6e0961a2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=216&crop=smart&auto=webp&s=1ba53e81cf373ea7f3f76d48a688b400da561209', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=320&crop=smart&auto=webp&s=a75ef1aaea6a91588a7ed77ee0856dbc9b2ddce5', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=640&crop=smart&auto=webp&s=4b33690b47fabd60a6ce3e21bdf3b238eaebdb8f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=960&crop=smart&auto=webp&s=99454d5cfca954418881f21ce90b040fe3b6b38f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?width=1080&crop=smart&auto=webp&s=25b95f3ba9b47ad3646b5fb49e99674dcfd5a2ae', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/vrQ98WHCObyIKaUSsC_cjzHZfMprk1y9ugKJTbGEQhc.jpg?auto=webp&s=0954dfddc469715b0567d6af6fd60187536e00c9', 'width': 1280}, 'variants': {}}]}
What is my setup capable of?
1
[removed]
2024-01-12T12:11:15
https://www.reddit.com/r/LocalLLaMA/comments/194tlpd/what_is_my_setup_capable_of/
_w0n
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194tlpd
false
null
t3_194tlpd
/r/LocalLLaMA/comments/194tlpd/what_is_my_setup_capable_of/
false
false
self
1
null
Fine tune LLM
1
Where is the best place to fine-tune an LLM like T5 or BERT: Vast.ai, Colab Pro, or Azure?
2024-01-12T11:26:52
https://www.reddit.com/r/LocalLLaMA/comments/194suv6/fine_tune_llm/
Possible_Ad2737
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194suv6
false
null
t3_194suv6
/r/LocalLLaMA/comments/194suv6/fine_tune_llm/
false
false
self
1
null
Relationship between Ising Models and Transformers architecture explained
1
[removed]
2024-01-12T11:26:08
https://www.reddit.com/r/LocalLLaMA/comments/194sufi/relationship_between_ising_models_and/
Easy_Butterfly2125
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194sufi
false
null
t3_194sufi
/r/LocalLLaMA/comments/194sufi/relationship_between_ising_models_and/
false
false
self
1
{'enabled': False, 'images': [{'id': 'RHnCcbMeYb3ZABdzjJdOzVrGtVNSupTR6F5yN5Gd7W8', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/J5IJPpZESqinyC3XB0yS5jEi-IUDX0yL7fIhaWv3HaE.jpg?width=108&crop=smart&auto=webp&s=66e14847add9fd890f17aef773b9e26a751b5ae7', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/J5IJPpZESqinyC3XB0yS5jEi-IUDX0yL7fIhaWv3HaE.jpg?width=216&crop=smart&auto=webp&s=1ebff283034388b0ff6e5abe2ebb8bc0b603a13e', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/J5IJPpZESqinyC3XB0yS5jEi-IUDX0yL7fIhaWv3HaE.jpg?width=320&crop=smart&auto=webp&s=7bef246ba23ddd5a0035713f538e5b18fe0069ff', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/J5IJPpZESqinyC3XB0yS5jEi-IUDX0yL7fIhaWv3HaE.jpg?auto=webp&s=26ba2ae9a17865be6c5324a0d7a79344f8d064ad', 'width': 480}, 'variants': {}}]}
Ai accelerator hardware is slowly becoming available
86
https://tenstorrent.com/cards/ Tenstorrent's Grayskull series of AI accelerator cards have become available. I'm a bit disappointed by their memory: 8GB lpddr4 at about 100GB/s, running at 200W, PCIe. Not particularly amazing for llama, but it might be great for whisper (which they mark as officially supported). https://www.businesswire.com/news/home/20231212788210/en/Kinara-Edge-AI-Processor-Tackles-the-Monstrous-Compute-Demands-of-Generative-AI-and-Transformer-Based-Models Kinara has Ara 2, an AI processor capable of running stable diffusion (10s/image) and llama 7b (tens of tokens per second). I'm not sure if these are available for consumers. https://www.pcworld.com/article/2196895/first-pc-ai-accelerator-cards-from-memryx-kinara-debut-at-ces.html Memryx has a super tiny 1-2 Watt chip called MX3 capable of running yolov7 at 100+ fps. These will be absolutely amazing for Raspberry Pis and other homebrew edge tinkering. I don't know where/if you can purchase these as a consumer, but it's cool stuff! This is just what I read up on in the last half hour; hardware is starting to enter the market! I had better hopes for Grayskull for local llama in particular, but it's still just a devkit. Who knows what the second generation, or custom hardware based on their IP, might bring.
2024-01-12T10:55:40
https://www.reddit.com/r/LocalLLaMA/comments/194scwv/ai_accelerator_hardware_is_slowly_becoming/
Combinatorilliance
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194scwv
false
null
t3_194scwv
/r/LocalLLaMA/comments/194scwv/ai_accelerator_hardware_is_slowly_becoming/
false
false
self
86
null
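For cards like these, memory bandwidth is usually the binding constraint for LLM decoding: each generated token streams roughly the whole weight set through memory once, so bandwidth divided by model size gives a rough upper bound on tokens/second. A minimal sketch of that back-of-the-envelope estimate, using the post's 100GB/s figure and an illustrative 4GB weight size for a 4-bit 7B model (both numbers are rough assumptions, not measurements):

```python
def est_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    # Each decoded token reads approximately all weights once,
    # so bandwidth / weight size bounds the decode rate from above.
    return bandwidth_gb_s / model_size_gb

# Grayskull-class card from the post: ~100 GB/s of memory bandwidth.
# A 7B model at ~4-bit quantization is roughly 4 GB of weights.
print(f"~{est_tokens_per_sec(100, 4):.0f} tok/s upper bound")  # ~25 tok/s
```

Real throughput lands below this ceiling once compute, KV-cache reads, and overhead are counted, but it explains why an 8GB/100GB/s card looks underwhelming for llama next to a 3090's ~936GB/s.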
Does vllm only work with CUDA 12.1 now ?
1
[removed]
2024-01-12T10:39:06
https://www.reddit.com/r/LocalLLaMA/comments/194s43j/does_vllm_only_work_with_cuda_121_now/
KarmaCut132
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194s43j
false
null
t3_194s43j
/r/LocalLLaMA/comments/194s43j/does_vllm_only_work_with_cuda_121_now/
false
false
self
1
null
Inference Speed Benchmark
18
Just did a small inference speed benchmark with several deployment frameworks; here are the results: Setup: Ryzen 9 3950X, 128GB DDR4 3600, RTX 3090 24GB Frameworks: ExLlamaV2, vLLM, Aphrodite Engine, AutoAWQ OS: Windows, WSL Model: Openchat-3.5-0106 Quantizations: exl2-3.0bpw, exl2-4.0bpw, GPTQ-128-4, AWQ Task: 512-token completion on the following prompt "Our story begins in the Scottish town of Auchtermuchty, where once" Results: https://preview.redd.it/lustbdsagzbc1.png?width=879&format=png&auto=webp&s=8fcf2dc855245a8985935b637d428222701808d7 Key takeaways: - ExLlamaV2 is king when it comes to GPU inference, but is significantly slowed down on Windows; streaming also reduces performance by 20% - vLLM is the most reliable and gets very good speed - vLLM provides a good API as well - on a llama-based architecture, the GPTQ quant seems faster than AWQ (I got the reverse on a Mistral-based architecture) - Aphrodite Engine is slightly faster than vLLM, but installation is a lot messier - I also tested GGUF with Ollama, but it was significantly slower, running at about 50 tokens/s - Lots of libs are promising and claim to achieve faster inference than vLLM (e.g. lightllm), but most of them are quite messy. Are these results in line with what you witnessed on your own setup?
2024-01-12T10:09:23
https://www.reddit.com/r/LocalLLaMA/comments/194ro84/inference_speed_benchmark/
AdventurousSwim1312
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194ro84
false
null
t3_194ro84
/r/LocalLLaMA/comments/194ro84/inference_speed_benchmark/
false
false
https://b.thumbs.redditm…cCrfhgTt5-uY.jpg
18
null
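For anyone wanting to reproduce a comparison like this across backends, the measurement itself is simple: time a fixed-length completion and divide tokens produced by wall-clock time. A framework-agnostic sketch, where `generate_fn` is a hypothetical stand-in for whichever backend you are testing (ExLlamaV2, vLLM, llama.cpp bindings, ...):

```python
import time

def benchmark(generate_fn, prompt: str, n_tokens: int = 512) -> float:
    """Time one completion and return tokens/second.

    generate_fn(prompt, n_tokens) should run the backend and return
    the number of tokens actually produced.
    """
    start = time.perf_counter()
    produced = generate_fn(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return produced / elapsed

# Dummy backend so the harness can be exercised without a GPU.
def fake_backend(prompt: str, n_tokens: int) -> int:
    time.sleep(0.01)  # pretend to decode
    return n_tokens

tps = benchmark(fake_backend, "Our story begins in the Scottish town of Auchtermuchty, where once")
print(f"{tps:.0f} tok/s")
```

When comparing frameworks this way, keep the prompt, token count, and sampling settings identical, and note whether streaming is on — as the post observes, streaming alone can cost ~20%.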
Mistral is so smart! (Andrew Ng aka Andrew Karpathy)
1
2024-01-12T10:07:49
https://i.redd.it/1szt82ksgzbc1.png
Chance_Confection_37
i.redd.it
1970-01-01T00:00:00
0
{}
194rnfi
false
null
t3_194rnfi
/r/LocalLLaMA/comments/194rnfi/mistral_is_so_smart_andrew_ng_aka_andrew_karpathy/
false
false
https://a.thumbs.redditm…EpEhnjjc02i8.jpg
1
{'enabled': True, 'images': [{'id': '2QT7buX4DGTL9zVG0gTPNjqt1a2mtgNFtPMHqC_7bKM', 'resolutions': [{'height': 33, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?width=108&crop=smart&auto=webp&s=af30c6539b0d8803ef5510c9dd762fa4010200f2', 'width': 108}, {'height': 67, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?width=216&crop=smart&auto=webp&s=457d2bc0f186deb8d013a86d776525e53c79ac9b', 'width': 216}, {'height': 100, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?width=320&crop=smart&auto=webp&s=59c0a318ee76e7c040672f5ac1ea447e37b3788c', 'width': 320}, {'height': 200, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?width=640&crop=smart&auto=webp&s=eba23cb5055b1921976dcda53ed63af2e8c173bd', 'width': 640}, {'height': 300, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?width=960&crop=smart&auto=webp&s=cbf6a75086aa17f0c77ab52dd5c530687de4ea50', 'width': 960}], 'source': {'height': 328, 'url': 'https://preview.redd.it/1szt82ksgzbc1.png?auto=webp&s=9b6c72f1c46c9b94ac1f121a58656e10b2220bb8', 'width': 1047}, 'variants': {}}]}
Looking for a light ai to create Scripts for therapy using the techniques and methods taught in my book
1
I want the ai to kind of study my book and to be able to create a personalized script to later be used in therapy. All using the steps and techniques already described in the book
2024-01-12T09:16:12
https://www.reddit.com/r/LocalLLaMA/comments/194qwlp/looking_for_a_light_ai_to_create_scripts_for/
Busy_Strawberry8392
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194qwlp
false
null
t3_194qwlp
/r/LocalLLaMA/comments/194qwlp/looking_for_a_light_ai_to_create_scripts_for/
false
false
self
1
null
Are children’s riddles really a good way to evaluate LLMs?
48
I asked Mistral 7B, Mixtral 8x7B, and GPT-4 the ‘bus driver’ riddle, which I copy-pasted from riddles.com since I’m lazy. I asked it the color of the bus driver’s eyes.. but I also tried asking it how many people were on the bus, which means doing math. ‘’’ You are the bus driver. At the 1st stop of the day, 8 people get on board. @ the 2nd stop, 4 people get off and 11 people get on. @ the 3rd stop, 2 people get off and 6 people get on. @ the 4th stop 13 people get off and 1 person gets on. @ the 5th stop 5 people get off and 3 people get on. @ the 6th stop 3 people get off and 2 people get on. What color are the bus driver’s eyes? OR How many people are now on the bus? DO the calculation / work first, and then reveal your answer. You will not know the answer until you have thought it through. ‘’’ Results for bus driver’s eyes: 7B got it on the third try with ” The color of your own eyes, as you are the bus driver.“ But Mixtral didn’t get it at all, possibly because it was trained on non-trick questions where it would be expected to tell you when the answer was not specified. GPT-4 immediately recognized it as “a classic riddle” But when I asked it to do the math? Mistral 7B choked spectacularly every time. Mixtral did it, but only after I told it to do the work first. ChatGPT 4 did it perfectly. Oddly, the smaller LLM, like a child, knew riddles better than it knew math, and Mixtral, like a jaded teen, hadn’t read a riddle book in years. GPT4 was the fun adult I guess. But I think the conclusion here is that every time you try to evaluate an LLM with a trick question about a sister playing chess, or a pound of lead vs a pound of feathers, or how long it will take to dry twice as many bras on a clothesline, you aren’t really evaluating its intelligence. You’re just evaluating how many times it read something like “101 amazing brain busters!“ in the books3 dataset. Thoughts?
2024-01-12T08:12:13
https://www.reddit.com/r/LocalLLaMA/comments/194pzcg/are_childrens_riddles_really_a_good_way_to/
gunbladezero
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194pzcg
false
null
t3_194pzcg
/r/LocalLLaMA/comments/194pzcg/are_childrens_riddles_really_a_good_way_to/
false
false
self
48
null
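The arithmetic half of the riddle above is trivially checkable, which is what makes the split in model behavior interesting — the "hard" part for an LLM is the trick, not the math. Working through the stops from the post:

```python
# (off, on) at each stop, taken straight from the riddle in the post;
# 8 people board at the first stop.
stops = [(0, 8), (4, 11), (2, 6), (13, 1), (5, 3), (3, 2)]

passengers = 0
for off, on in stops:
    passengers += on - off

print(passengers)  # 4 passengers left on the bus
# ...and the bus driver's eyes are whatever color *your* eyes are.
```

So a model that "does the work first" should land on 4, while the eye-color half only tests whether it has seen the riddle before — which supports the post's point about memorization versus reasoning.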
Introducing LlamaEdge — lightweight & portable LLM tools for your local, edge & server devices.
20
2024-01-12T07:10:57
https://twitter.com/realwasmedge/status/1745549024969801908
Melinda_McCartney
twitter.com
1970-01-01T00:00:00
0
{}
194p1rv
false
{'oembed': {'author_name': 'wasmedge', 'author_url': 'https://twitter.com/realwasmedge', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Introducing LlamaEdge — lightweight &amp; portable LLM tools for your local, edge &amp; server devices. <a href="https://t.co/zawDKhBmDc">https://t.co/zawDKhBmDc</a><br><br>Based on WasmEdge, LlamaEdge apps are &lt;5 MB, self-contained (no complex dependencies), sandboxed (cloud ready), and can be orchestrated by container tools.… <a href="https://t.co/JIfScnN8zd">pic.twitter.com/JIfScnN8zd</a></p>&mdash; wasmedge (@realwasmedge) <a href="https://twitter.com/realwasmedge/status/1745549024969801908?ref_src=twsrc%5Etfw">January 11, 2024</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/realwasmedge/status/1745549024969801908', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'}
t3_194p1rv
/r/LocalLLaMA/comments/194p1rv/introducing_llamaedge_lightweight_portable_llm/
false
false
https://a.thumbs.redditm…UdL3fMhze4Z0.jpg
20
{'enabled': False, 'images': [{'id': 'bxpzc1Uv-PXhaMaNpl4mgJQ1IbW3OzfYT_0tdYShbWg', 'resolutions': [{'height': 109, 'url': 'https://external-preview.redd.it/lpXl_-d6fjeeexZHTvUEl6CVUumsZkWsUOZz8ewle8I.jpg?width=108&crop=smart&auto=webp&s=050829f689635b42e2c71722b1ab0096c55e86ad', 'width': 108}, {'height': 219, 'url': 'https://external-preview.redd.it/lpXl_-d6fjeeexZHTvUEl6CVUumsZkWsUOZz8ewle8I.jpg?width=216&crop=smart&auto=webp&s=bdaedbe9de31c19980854f01c61edee6e0e61f9a', 'width': 216}, {'height': 325, 'url': 'https://external-preview.redd.it/lpXl_-d6fjeeexZHTvUEl6CVUumsZkWsUOZz8ewle8I.jpg?width=320&crop=smart&auto=webp&s=2aad0e0330ca0d4e5d219a6a8785b37d1f7e1748', 'width': 320}, {'height': 650, 'url': 'https://external-preview.redd.it/lpXl_-d6fjeeexZHTvUEl6CVUumsZkWsUOZz8ewle8I.jpg?width=640&crop=smart&auto=webp&s=c8a26c6ce904b7263286cdbe260445006528091a', 'width': 640}], 'source': {'height': 949, 'url': 'https://external-preview.redd.it/lpXl_-d6fjeeexZHTvUEl6CVUumsZkWsUOZz8ewle8I.jpg?auto=webp&s=2e2a7438e72d6b4f735ed40268c414bb4c7880bc', 'width': 933}, 'variants': {}}]}
Given the rapid pace of progress, how far are we until local models approach gpt4? Can local models ever be equivalent?
102
The models you can run today on a few hundred to a thousand dollars are orders of magnitude better than anything I thought we could ever run locally. They also appear to be advancing pretty rapidly. Those more educated on the tech: is there any indication of how far we are from actually reaching GPT-4 equivalence? Do we know whether the kind of hardware one user can buy for a few thousand dollars will ever be able to run something effectively equivalent to the latest GPT, or will that always be the domain of giant corporations? I recall claims that Google didn't explore LLMs because they were theorized to hit a ceiling in terms of future growth potential - would that mean we're not too far from the point where it won't really get too much better, so it'll quickly become attainable?
2024-01-12T07:08:04
https://www.reddit.com/r/LocalLLaMA/comments/194p03w/given_the_rapid_pace_of_progress_how_far_are_we/
TopRecognition9302
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194p03w
false
null
t3_194p03w
/r/LocalLLaMA/comments/194p03w/given_the_rapid_pace_of_progress_how_far_are_we/
false
false
self
102
null
Any opinions on this new LLM serving platform?
1
[https://suite.friendli.ai](https://suite.friendli.ai) https://preview.redd.it/1pokjzwajybc1.png?width=1269&format=png&auto=webp&s=29c94e6cf1c2b8f94823c19e598441ac598d7652
2024-01-12T06:58:28
https://www.reddit.com/r/LocalLLaMA/comments/194ougu/any_opinions_on_this_new_llm_serving_platform/
Antique_Battle_4337
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194ougu
false
null
t3_194ougu
/r/LocalLLaMA/comments/194ougu/any_opinions_on_this_new_llm_serving_platform/
false
false
https://b.thumbs.redditm…L-_Vp9rejH4M.jpg
1
null
Any opinions on this new LLM serving platform?
2
2024-01-12T06:55:50
https://suite.friendli.ai
Antique_Battle_4337
suite.friendli.ai
1970-01-01T00:00:00
0
{}
194ot3i
false
null
t3_194ot3i
/r/LocalLLaMA/comments/194ot3i/any_opinions_on_this_new_llm_serving_platform/
false
false
default
2
null
Can anyone explain the Mamba space state model to a total noob?
1
[removed]
2024-01-12T06:50:58
https://www.reddit.com/r/LocalLLaMA/comments/194oqda/can_anyone_explain_the_mamba_space_state_model_to/
Fun_Cat3671
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194oqda
false
null
t3_194oqda
/r/LocalLLaMA/comments/194oqda/can_anyone_explain_the_mamba_space_state_model_to/
false
false
default
1
null
I haven't laughed this hard at a LLM response yet. mixtral-8x7b
110
One of my personal LLM IQ test questions. So far only ChatGPT-4 (0613) has answered it correctly. https://preview.redd.it/3jxndmb62ybc1.png?width=666&format=png&auto=webp&s=d92a683d8c19e292ead53900ba6c1cdb996703e6 https://preview.redd.it/je4q89j72ybc1.png?width=682&format=png&auto=webp&s=4af22a7672a01ad061eb3c2daedab4c3e13dcb1f "input": { "top_p": 0.8, "temperature": 0.9, "max_new_tokens": 1024, "presence_penalty": 0, "frequency_penalty": 0 },
2024-01-12T05:26:11
https://www.reddit.com/r/LocalLLaMA/comments/194n9sa/i_havent_laughed_this_hard_at_a_llm_response_yet/
hirokoteru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194n9sa
false
null
t3_194n9sa
/r/LocalLLaMA/comments/194n9sa/i_havent_laughed_this_hard_at_a_llm_response_yet/
false
false
https://b.thumbs.redditm…NJTtinl5nrWM.jpg
110
null
P40s, above 4g decoding and resizable BAR?
1
[removed]
2024-01-12T05:19:27
https://www.reddit.com/r/LocalLLaMA/comments/194n5fe/p40s_above_4g_decoding_and_resizable_bar/
jorn449
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194n5fe
false
null
t3_194n5fe
/r/LocalLLaMA/comments/194n5fe/p40s_above_4g_decoding_and_resizable_bar/
false
false
self
1
null
CPoly: Moe-LoRA for multi-task transfer instruction Tuning
13
I would like to introduce our work on MoE instruction tuning for LLMs: [https://arxiv.org/abs/2312.03248](https://arxiv.org/abs/2312.03248). This PEFT fine-tuning framework has been adapted to work efficiently with major base models such as LLaMA-2, GLM, and T5, enabling high-performance multi-task SFT. It's widely applied across scenarios, often exceeding state-of-the-art expectations.
2024-01-12T04:52:49
https://www.reddit.com/r/LocalLLaMA/comments/194mns0/cpoly_moelora_for_multitask_transfer_instruction/
Fine_Code4023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194mns0
false
null
t3_194mns0
/r/LocalLLaMA/comments/194mns0/cpoly_moelora_for_multitask_transfer_instruction/
false
false
self
13
null
Self-Extend works for Phi-2 now. Looks good
1
This is our first post in this sub! Thank you, everyone, for your interest in our Self-Extend over the past few days. We just finished testing Self-Extend on Phi-2. The 2.7B Phi-2 model surpasses our expectations! Using our Self-Extend method, we've successfully expanded Phi-2's window length from 2k to 8k. This enhancement significantly boosts its performance across a variety of long-context tasks. In tasks such as summarization, single-document QA, and few-shot learning, we observed notable improvements. Particularly on NarrativeQA, we almost achieved a linear performance increase! For coding tasks, as evidenced by Repobench-p, and for multi-document QA on 2wikiqa, the Self-Extend method also shows improvements. While no significant improvement is observed on lcc, this is still surprising when considering the precision loss caused by the floor operation in Self-Extend. The reasons behind Self-Extend's behavior on Multifieldqa-en remain unclear. Also, there is a trade-off between the extended context window and position precision; hence we see a peak on some datasets. Our settings for this experiment: 4k: group=4, neighbor=512; 6k: group=8, neighbor=512; 8k: group=12, neighbor=512 https://preview.redd.it/0ij1edf0wxbc1.jpg?width=1324&format=pjpg&auto=webp&s=845087a0bf5c30a4c5fef01345bd178d0b808167 We're still eagerly looking for more testing results!
2024-01-12T04:50:56
https://www.reddit.com/r/LocalLLaMA/comments/194mmki/selfextend_works_for_phi2_now_looks_good/
Asleep-Agency3023
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194mmki
false
null
t3_194mmki
/r/LocalLLaMA/comments/194mmki/selfextend_works_for_phi2_now_looks_good/
false
false
https://b.thumbs.redditm…5qF5YX8RHh0A.jpg
1
{'enabled': False, 'images': [{'id': '9pBq140IUZ3vmhg8n6zAOOS0SB_qdsbJTTTfrvvZDuk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=108&crop=smart&auto=webp&s=060a2d902e890439b3c68b8ac25fc4fc8a6b4adf', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=216&crop=smart&auto=webp&s=4c8c4cd615992c4643c47a7735d6cb4293a805d6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=320&crop=smart&auto=webp&s=5731da225e1da488d24d77f83a3520a21c1cea2a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=640&crop=smart&auto=webp&s=77a62223205f6ce5ad286b947f1d30421fd672cb', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=960&crop=smart&auto=webp&s=d4c44c1f8af98bfe339b3612ca88568f25e4fbb6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?width=1080&crop=smart&auto=webp&s=d27637060813d2e0f29c2507334989bdfa76ad29', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/B3-ocwIWHYbM7Uen0WV0pZttNEYUp1W5Hb0HrggWA_0.jpg?auto=webp&s=114fd12c0f012e258c4ab3a7c240cc37fcfc3787', 'width': 1200}, 'variants': {}}]}
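The "floor operation" and the group/neighbor settings mentioned in the Self-Extend post can be sketched in a few lines: relative positions inside the neighbor window are kept exact, and positions beyond it are floor-divided by the group size so they stay within the pretrained range. This is a simplified illustration of the idea, not the authors' reference implementation:

```python
def self_extend_rel_pos(distance: int, neighbor: int = 512, group: int = 4) -> int:
    """Map a raw relative distance to the position Self-Extend uses.

    Tokens within the neighbor window keep their exact relative
    position; more distant tokens share grouped (floor-divided)
    positions, which is where the precision loss comes from.
    """
    if distance < neighbor:
        return distance
    # Shift so grouped positions continue from the window edge.
    return neighbor + (distance - neighbor) // group

print(self_extend_rel_pos(100))   # inside the window -> 100 (exact)
print(self_extend_rel_pos(2048))  # grouped -> 512 + 1536 // 4 = 896
```

Larger groups reach further at the cost of coarser positions, which matches the trade-off (and the per-dataset peaks) the post describes.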
In 2024, what is the best tool/framework for creating synthetic data that can then be used to fine-tune with?
21
What is the best tool/framework for creating synthetic data (where it's easy to specify what type of synthetic data you want, or at least to specify a format for all of the synthetic data to follow) that can then be useful for fine-tuning? And if there's something that combines the synthetic data generation and fine-tuning parts, that'd be amazing. Bonus: if it can be easily integrated with Langroid to be able to use local models through ollama with agents built on top, then it's even more ideal for my use case. Here is the code I am referring to: [https://github.com/langroid/langroid/blob/main/examples/basic/fn-call-local-simple.py](https://github.com/langroid/langroid/blob/main/examples/basic/fn-call-local-simple.py)
2024-01-12T04:17:01
https://www.reddit.com/r/LocalLLaMA/comments/194m01m/in_2024_what_is_the_best_toolframework_for/
SnooCrickets9704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194m01m
false
null
t3_194m01m
/r/LocalLLaMA/comments/194m01m/in_2024_what_is_the_best_toolframework_for/
false
false
self
21
{'enabled': False, 'images': [{'id': 'mCf-gvgLGIf97C4Q5Pay7BZQzUkl4WPZs5p9Ug1Hclk', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=108&crop=smart&auto=webp&s=6f77c5c3077d3728cfb79ff9339119fb5fd7b59d', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=216&crop=smart&auto=webp&s=b10a9dd7d4bc4fc2037d131a048e5e8f1febaea5', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=320&crop=smart&auto=webp&s=0a9259b44724375c066bb82fd6350881d78bdbda', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=640&crop=smart&auto=webp&s=866e9598e364a0ffa0588fe4383ac87de7b85631', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=960&crop=smart&auto=webp&s=949005b7b09bccdd9f9194e40c1764d505882236', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?width=1080&crop=smart&auto=webp&s=ed2893c270ed2220f47d5b015a4b823d0ec38048', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/RigZZifec40luNenyE_Akjp1nwDkgYM3PY2sXKQiR9U.jpg?auto=webp&s=599ef552ca32f4240d29c0510c712c3dfe3b08c5', 'width': 1200}, 'variants': {}}]}
CPU use for local llms
1
[removed]
2024-01-12T04:02:14
https://www.reddit.com/r/LocalLLaMA/comments/194lq1a/cpu_use_for_local_llms/
hmmqzaz
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194lq1a
false
null
t3_194lq1a
/r/LocalLLaMA/comments/194lq1a/cpu_use_for_local_llms/
false
false
self
1
null
OpenAI working on adaptive compute per token
12
This and other things are mentioned in Bill Gates' podcast with Sam Altman https://youtu.be/PkXELH6Y2lM?si=48_K_EVcuea5DCND at 6:00
2024-01-12T03:13:42
https://www.reddit.com/r/LocalLLaMA/comments/194kt1o/openai_working_on_adaptive_compute_per_token/
LiquidGunay
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194kt1o
false
null
t3_194kt1o
/r/LocalLLaMA/comments/194kt1o/openai_working_on_adaptive_compute_per_token/
false
false
self
12
{'enabled': False, 'images': [{'id': 'kVOy95iuNz8A9yrgTlosFOGjUbTsXj5zF24ZNdcUSfk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/rpfehmT3WGVbHfnpih6Xu0kiRGinCVDOV3_X6yk_LHw.jpg?width=108&crop=smart&auto=webp&s=ba895b0b6728650d40dc80aa9ff47bb2c1b52821', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/rpfehmT3WGVbHfnpih6Xu0kiRGinCVDOV3_X6yk_LHw.jpg?width=216&crop=smart&auto=webp&s=c5d11a7e2c8b9a58317279fad0c2f9a33118f7dc', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/rpfehmT3WGVbHfnpih6Xu0kiRGinCVDOV3_X6yk_LHw.jpg?width=320&crop=smart&auto=webp&s=85a58ce497dd9d962ff5129aa8bed9d3f8b11edc', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/rpfehmT3WGVbHfnpih6Xu0kiRGinCVDOV3_X6yk_LHw.jpg?auto=webp&s=3b379c8c200cc6a106e26f54ae0d30f66922e307', 'width': 480}, 'variants': {}}]}
LLM Wiki - My GitHub page for collecting knowledge to help beginners into world of local LLMs.
1
[removed]
2024-01-12T03:01:59
https://www.reddit.com/r/LocalLLaMA/comments/194kksa/llm_wiki_my_github_page_for_collecting_knowledge/
Arcturus17
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194kksa
false
null
t3_194kksa
/r/LocalLLaMA/comments/194kksa/llm_wiki_my_github_page_for_collecting_knowledge/
false
false
default
1
{'enabled': False, 'images': [{'id': 'L39dCTMIN9iu3UcmHZjzkzrDXiwQvoccasRvaJXSUBs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=108&crop=smart&auto=webp&s=f0936efe13b940dca6a0faf3f526fd7f6494c0c7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=216&crop=smart&auto=webp&s=54a14bdf956bbecaef523cc94064bbae65f7ee5c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=320&crop=smart&auto=webp&s=b2ce883d3256a71751cb2e7a95bd78d1a7ff23ae', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=640&crop=smart&auto=webp&s=ca6e1301f6f980c0a473b887d998568faed5f774', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=960&crop=smart&auto=webp&s=f05d8cf728582aa189872f39fc63bf699572952c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?width=1080&crop=smart&auto=webp&s=9245b00b63c3c5e6bad73b7bba5b1d6c2c887a30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/y6C6At5gSejQPA_qEiMvxdPq09clM2uxEL-qsxmtwlc.jpg?auto=webp&s=31e54bc70f0da46538d86327af30c52fda6578a1', 'width': 1200}, 'variants': {}}]}
Which LLMs are competitors of Llama and are not derive from it?
1
[removed]
2024-01-12T02:55:04
https://www.reddit.com/r/LocalLLaMA/comments/194kfq5/which_llms_are_competitors_of_llama_and_are_not/
celsowm
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194kfq5
false
null
t3_194kfq5
/r/LocalLLaMA/comments/194kfq5/which_llms_are_competitors_of_llama_and_are_not/
false
false
self
1
null
Looking for a collaborator for a text-to-video web app (~70% of it is complete)
1
Basically I developed a web app that allows you to create a video from a text prompt that can be up to even 15/20 minutes long (in under a minute). I got a lot of the functionality created myself but I need someone that can help polish things up and ideally help me with the database/user authentication and a few other things. Ideally both some help on the front end and a little bit on the back end but I am pretty confident with my work on the back end so far. I am also looking to offer some equity. Looking for someone who is down to build this and make it great with me and make it even better based on the user feedback etc :). Let me know if you have any questions. (And yes the entire functionality for the text video aspect is working great)
2024-01-12T02:51:49
https://www.reddit.com/r/LocalLLaMA/comments/194kdgs/looking_for_a_collaborator_for_a_texttovideo_web/
cobalt1137
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194kdgs
false
null
t3_194kdgs
/r/LocalLLaMA/comments/194kdgs/looking_for_a_collaborator_for_a_texttovideo_web/
false
false
self
1
null
LLM Wiki - My GitHub page collecting general LLM knowledge for beginners. It's always under construction - please feel free to suggest edits, additions, or even help add to the knowledge base!
1
2024-01-12T02:33:12
https://github.com/JakeSSRN/LLM-Wiki/tree/main
Arcturus17
github.com
1970-01-01T00:00:00
0
{}
194jzsm
false
null
t3_194jzsm
/r/LocalLLaMA/comments/194jzsm/llm_wiki_my_github_page_collecting_general_llm/
false
false
default
1
null
Efficient LLM inference solution on Intel GPU
18
2024-01-12T02:23:14
https://arxiv.org/abs/2401.05391
ninjasaid13
arxiv.org
1970-01-01T00:00:00
0
{}
194jsgo
false
null
t3_194jsgo
/r/LocalLLaMA/comments/194jsgo/efficient_llm_inference_solution_on_intel_gpu/
false
false
default
18
null
Just bought a A6000 need advice
1
I just bought the 48gb A6000 it comes in a week or so. I have 128gb system ram. Can I run Goliath 120b on this?
2024-01-12T02:10:48
https://www.reddit.com/r/LocalLLaMA/comments/194jj0v/just_bought_a_a6000_need_advice/
TheHobbyistHacker
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194jj0v
false
null
t3_194jj0v
/r/LocalLLaMA/comments/194jj0v/just_bought_a_a6000_need_advice/
false
false
self
1
null
2024 C++ Programming : Beginners to Advanced for Developers
1
[removed]
2024-01-12T01:52:27
https://www.reddit.com/r/LocalLLaMA/comments/194j4z3/2024_c_programming_beginners_to_advanced_for/
pkravi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194j4z3
false
null
t3_194j4z3
/r/LocalLLaMA/comments/194j4z3/2024_c_programming_beginners_to_advanced_for/
false
false
self
1
{'enabled': False, 'images': [{'id': 'JRgDgP-KZqEtVR5cBHpmYEiB8tA4qKvo8q5iJslSxSY', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qnDhHyf2fWa-f9_L_U5BfAJR3PgMwxcK1sZKaiYdZow.jpg?width=108&crop=smart&auto=webp&s=2ca666e7ac0fa14fb922cd111db03d0ba6e1d4eb', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qnDhHyf2fWa-f9_L_U5BfAJR3PgMwxcK1sZKaiYdZow.jpg?width=216&crop=smart&auto=webp&s=9de294c781373ff6fcea5619ab6ecf33dbe49dbc', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qnDhHyf2fWa-f9_L_U5BfAJR3PgMwxcK1sZKaiYdZow.jpg?width=320&crop=smart&auto=webp&s=5b0c3165106bc7cf5a6d97d7d0dc47227cb0b585', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/qnDhHyf2fWa-f9_L_U5BfAJR3PgMwxcK1sZKaiYdZow.jpg?width=640&crop=smart&auto=webp&s=429667fb05f303a9e6a61f6f20e014910264e8f7', 'width': 640}], 'source': {'height': 422, 'url': 'https://external-preview.redd.it/qnDhHyf2fWa-f9_L_U5BfAJR3PgMwxcK1sZKaiYdZow.jpg?auto=webp&s=24af1761d02d0a30a51ce0ce8dbd225f16d1e1a7', 'width': 750}, 'variants': {}}]}
How to download and use safetensors models?
1
[removed]
2024-01-12T01:50:23
https://www.reddit.com/r/LocalLLaMA/comments/194j3g8/how_to_download_and_use_safetensors_models/
yungfishstick
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194j3g8
false
null
t3_194j3g8
/r/LocalLLaMA/comments/194j3g8/how_to_download_and_use_safetensors_models/
false
false
self
1
null
What should my setup be with a 4070 ti?
1
I’d like a model that works well with data. I had my whole genome sequenced and learning how to understand it is a messy process. GPT-4 has been an invaluable tool, but I need something that I can interact with privately, so I can feed it real data. Coding proficiency would be my second priority. What are my best options for models/UIs? Any suggestions are welcome, this will be my first local LLM.
2024-01-12T01:36:55
https://www.reddit.com/r/LocalLLaMA/comments/194itby/what_should_my_setup_be_with_a_4070_ti/
scrumblethebumble
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194itby
false
null
t3_194itby
/r/LocalLLaMA/comments/194itby/what_should_my_setup_be_with_a_4070_ti/
false
false
self
1
null
Does anyone know how to create a multimodal AI?
1
So basically I'm trying to create an assistant humanoid robot. Main objectives are: 1) It interacts like a normal LLM does (I use TinyLlama 1.1B from TheBloke and a RAG implementation) 2) It understands commands and acts upon them 3) It's also able to navigate using visual data, which the LLM also needs in order to be aware of its surroundings 4) Speech synthesis and recognition (which I'm having trouble doing) 5) Most important: completely local. In my investigation I came across LangChain, AutoGen, and multimodal AI. Any tips or ideas are welcome.
2024-01-12T01:06:34
https://www.reddit.com/r/LocalLLaMA/comments/194i5so/does_anyone_how_create_an_multimodal_ai/
Jolly_Self_4892
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194i5so
false
null
t3_194i5so
/r/LocalLLaMA/comments/194i5so/does_anyone_how_create_an_multimodal_ai/
false
false
self
1
null
What are your favorite sites to test out different models ?
2
Through a post here I just stumbled upon Composo, which made me realize how nice it would be to test some of my prompts for work on different LLMs to see how they perform. Doing that online is just more convenient than downloading models or making accounts for a bunch of them. I basically need to find a big LLM that does a job well, so that I can mess around with smaller ones and use the big one for validation. So being able to try out big models I'd normally have to pay for, one way or the other, would be extremely helpful. Thank you all in advance ^^
2024-01-12T00:58:51
https://www.reddit.com/r/LocalLLaMA/comments/194hzfm/what_are_your_favorite_sites_to_test_out/
Noxusequal
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194hzfm
false
null
t3_194hzfm
/r/LocalLLaMA/comments/194hzfm/what_are_your_favorite_sites_to_test_out/
false
false
self
2
null
Qualcomm AI accelerators: does ANYONE know ANYTHING other than this marketing material? Sounds quite intriguing...
27
I'm doing a bit of a hardware search since I'm writing my Thesis around LLMs and value generation. And via some expensive research report (well, its table of contents) I've stumbled across this series. CRUCIALLY I have no info about price, but if that is low enough, that could be quite intriguing IMO (high RAM, decent for TDP/size compute, seemingly "cheap"(see marketing quote in notes)) So yeah, ANY input is highly welcome. PIC 1: My notes (compares to NVDIA, AMD INTEL etc) PIC 2/3 where the info comes from ![img](1gn0sgdcnwbc1 "LINK1: https://www.qualcomm.com/news/onq/2023/11/introducing-qualcomm-cloud-ai-100-ultra ") ![img](snsxa6xrnwbc1 " LINK2: https://www.qualcomm.com/content/dam/qcomm-martech/dm-assets/documents/Prod-Brief-QCOM-Cloud-AI-100.pdf") https://preview.redd.it/wka2ukmjnwbc1.png?width=606&format=png&auto=webp&s=1521d1b6514e0da1bb2c32f398b1c469428e98de
2024-01-12T00:47:49
https://www.reddit.com/r/LocalLLaMA/comments/194hqu5/qualcomm_ai_accelerators_anyone_know_anyting/
leschnoid
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194hqu5
false
null
t3_194hqu5
/r/LocalLLaMA/comments/194hqu5/qualcomm_ai_accelerators_anyone_know_anyting/
false
false
https://a.thumbs.redditm…1-4mzr_l9mH8.jpg
27
{'enabled': False, 'images': [{'id': 'MBkT-7xtpfSWZQxHevKrI2wqP_9Zw3IBlYyrYKvxVN8', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=108&crop=smart&auto=webp&s=973be3ffcc39a5616ff93d5ca2627a9013341742', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=216&crop=smart&auto=webp&s=5a9fc99cba209e7fceeddae99f3714d67eb2cada', 'width': 216}, {'height': 179, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=320&crop=smart&auto=webp&s=e92ac3036c2fa3071da6ec5fab8ab98ccc68250d', 'width': 320}, {'height': 358, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=640&crop=smart&auto=webp&s=fd07d140319c663d81aa2ab09ecfbfad8ba1cd46', 'width': 640}, {'height': 537, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=960&crop=smart&auto=webp&s=f4c511663be2e5b3bb8296498647c5322fe795e5', 'width': 960}, {'height': 605, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?width=1080&crop=smart&auto=webp&s=99e1f0faf94f7b92d2bd528d5d316a2abffda733', 'width': 1080}], 'source': {'height': 1152, 'url': 'https://external-preview.redd.it/b-7tX_gAbnQxcfZnYRoJtr17-2fE49Odevqpfm1h6r4.jpg?auto=webp&s=ff5a45ed8b200560312018de0f746ed6293e6773', 'width': 2056}, 'variants': {}}]}
Getting started advice.
1
Hello, I am new to LLMs and I was wondering if you had any advice for getting started.
2024-01-12T00:13:53
https://www.reddit.com/r/LocalLLaMA/comments/194h00v/getting_started_advice/
Potential-Help1157
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194h00v
false
null
t3_194h00v
/r/LocalLLaMA/comments/194h00v/getting_started_advice/
false
false
self
1
null
Smaller model or smaller quantization?
13
I’ve run into a roadblock with my LLMs with respect to my processing power. I can run 7B models comfortably and with good speed, but the lack (as far as I can see) of new, good 13B models means I have to turn to 33B models, which are super slow on my machine and take up tons of my RAM. My question is this: should I stick with 7B models (and wait for a good 13B to release) or should I just go for a really lossy quantization of 33B, like 2-bit or something?
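One way to frame the 7B-vs-quantized-33B trade-off is a rough memory-footprint estimate: weights take roughly params × bits ÷ 8 bytes, plus some runtime overhead. The sketch below is a back-of-envelope calculation; the bit widths (~2.6 for a 2-bit K-quant, ~4.8 for Q4_K_M) and the 1.2× overhead factor are assumptions, not exact figures.

```python
def model_memory_gb(params_billions, bits_per_weight, overhead=1.2):
    """Rough RAM/VRAM estimate for a quantized model.
    overhead is a guessed multiplier for KV cache and runtime buffers."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# 33B at ~2.6 bits/weight (2-bit K-quant-ish) vs 7B at ~4.8 (Q4_K_M-ish)
print(round(model_memory_gb(33, 2.6), 1))  # ~12.9 GB
print(round(model_memory_gb(7, 4.8), 1))   # ~5.0 GB
```

So even a very lossy 33B quant needs roughly 2.5× the memory of a Q4 7B, which also means proportionally slower token generation on a bandwidth-bound machine.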
2024-01-12T00:07:11
https://www.reddit.com/r/LocalLLaMA/comments/194gulv/smaller_model_or_smaller_quantization/
Aptare
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194gulv
false
null
t3_194gulv
/r/LocalLLaMA/comments/194gulv/smaller_model_or_smaller_quantization/
false
false
self
13
null
Need a good Nsfw model
1
I downloaded LM Studio some time ago because I got tired of ChatGPT and its restrictions, especially for NSFW content. However, I have no idea which LLMs would be good for generating NSFW fiction or how to use them. I don't have a powerful PC (the one I have has a 7th-gen Core i7, 8GB RAM, and a 512GB SSD). I want the LLM to run smoothly on my PC. I have already done some searching but couldn't find anything suitable for me. Any suggestions and help would be appreciated.
2024-01-11T23:55:07
https://www.reddit.com/r/LocalLLaMA/comments/194gkoj/need_a_good_nsfw_model/
Alexjustchills
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194gkoj
false
null
t3_194gkoj
/r/LocalLLaMA/comments/194gkoj/need_a_good_nsfw_model/
false
false
nsfw
1
null
Where are good instruct models for code generation?
10
What are some good coding models that can generate "correct" code following instructions? I tested a lot of models locally using the simple prompt below and the results are very disappointing. >write a complete python program to play pong against the computer using pygame. user should be able to move the paddle using up and down arrow keys. the ball should move fairly slow to accommodate beginner players. respond only with python code and no other comments, acknowledgement, or explanation. While most models can generate python code using pygame that is generally pong-like, almost none of the generated programs are good. Some of them have basic errors like missing imports (starting with pygame, of all things), bad parameters, undeclared variables, and even bad indentation. Very few generated programs actually run out of the box. And of course, those few that run usually don't even play a basic pong game. I tried a lot of models, including codellama-instruct (all 3 sizes), deepseek-coder-33b, mistral-instruct, mixtral-instruct, openchat, phind-34b, solar-instruct, wizardcoder-33b, and zephyr-7b. Overall, it is a total mess. I'm using llama.cpp and most of the models are in GGUF format with Q4\_K\_M. Are there any other models worth trying as well? The free version of ChatGPT generates a perfectly working pong program with the prompt above. I have several iterations of the prompt with increasing complexity (lives, score, time, etc.) and ChatGPT works perfectly on every iteration. Needless to say, things get much worse with these prompts with all the LLMs I'm testing.
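Evaluations like this are easier to compare across models with a small automated harness: syntax-check each generated program first, then give it a short smoke run. This is a minimal sketch, not a full evaluation framework; it only catches the "doesn't even start" failures described above (missing imports, bad indentation), not gameplay bugs.

```python
import os
import subprocess
import sys
import tempfile

def quick_check(code: str, timeout: float = 5.0):
    """Return (stage, ok): 'syntax' if the code fails to compile,
    else 'run' with whether a short subprocess execution succeeded."""
    try:
        compile(code, "<generated>", "exec")
    except SyntaxError:
        return ("syntax", False)
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run([sys.executable, path],
                              capture_output=True, timeout=timeout)
        return ("run", proc.returncode == 0)
    except subprocess.TimeoutExpired:
        # a game loop that keeps running past the timeout counts as started OK
        return ("run", True)
    finally:
        os.unlink(path)
```

For example, `quick_check("def f(:")` flags a syntax error immediately, while a program importing a missing module fails at the run stage.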
2024-01-11T23:41:02
https://www.reddit.com/r/LocalLLaMA/comments/194g8wk/where_are_good_instruct_models_for_code_generation/
gamesntech
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194g8wk
false
null
t3_194g8wk
/r/LocalLLaMA/comments/194g8wk/where_are_good_instruct_models_for_code_generation/
false
false
self
10
null
3 x 3060 or 3 x 4070 questions
1
Hello! So I’m trying to learn more about model training and running local models, but I’m less concerned about running the biggest / best models and would really like to learn the infrastructure part of running a GPU cluster (orchestrating multiple GPUs, dividing work among them, Kubernetes maybe?), but I'm not sure what the best route is in terms of which GPU to buy (and tbh I'm also not sure about motherboard and CPU). Any guidance is greatly appreciated!
2024-01-11T23:35:51
https://www.reddit.com/r/LocalLLaMA/comments/194g4fi/3_x_3060_or_3_x_4070_questions/
ifjo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194g4fi
false
null
t3_194g4fi
/r/LocalLLaMA/comments/194g4fi/3_x_3060_or_3_x_4070_questions/
false
false
self
1
null
ask a model to modify a longer story?
4
Is there any framework where you can feed in a short story or novel and specify a change to the story? I realize this is challenging because of the context window limitation, but has anybody tried it yet? Any demos online? Wouldn't it be great if you could order something like "*War and Peace* but Napoleon is assassinated in the middle"?
2024-01-11T23:27:27
https://www.reddit.com/r/LocalLLaMA/comments/194fxce/ask_a_model_to_modify_a_longer_story/
NodeTraverser
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194fxce
false
null
t3_194fxce
/r/LocalLLaMA/comments/194fxce/ask_a_model_to_modify_a_longer_story/
false
false
self
4
null
Best large context LLM to match array strings with intent in user message?
1
I have a large array (1000-10000 length) of objects containing text, a description, and some metadata. When a user asks for something, I want the LLM to be able to understand the intent, along with using the keywords in the user's input, to find the best match in the large array. So, for example, if a user asks 'are there any highlights of Tom Brady from his first Superbowl win that are longer than 10 mins', it should be able to look at the metadata of the array as well as the descriptions to find the best match. The descriptions may or may not have exact keywords, so just relying on keyword matching may not be sufficient. I tried a basic keyword-match LLM, but it gives a lot of false positives by matching unimportant words. I was wondering if an LLM with a large context window would be able to do this better? Thanks!
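Before reaching for a large-context LLM, it may be worth noting that the false-positive problem ("matching unimportant words") is usually solved by weighting rare terms more heavily, e.g. embeddings or TF-IDF-style scoring, rather than by raw keyword overlap. Below is a minimal, stdlib-only sketch of IDF-weighted matching over the descriptions; a production setup would more likely use an embedding model plus a metadata filter, and the item structure here is invented for illustration.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def rank(query, items, top_k=3):
    """Score each item's description by IDF-weighted keyword overlap,
    so rare terms ('brady', 'superbowl') outweigh filler words.
    items: list of dicts with a 'description' key (metadata fields could
    be folded into the same text or used as a hard filter first)."""
    docs = [tokenize(it["description"]) for it in items]
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequency
    q_terms = set(tokenize(query))
    scored = []
    for it, d in zip(items, docs):
        overlap = q_terms & set(d)
        score = sum(math.log((n + 1) / (df[t] + 1)) + 1 for t in overlap)
        scored.append((score, it))
    scored.sort(key=lambda s: -s[0])
    return [it for _, it in scored[:top_k]]
```

A duration constraint like "longer than 10 mins" would then be applied as a plain metadata filter on the candidates, which is something no amount of text matching handles reliably on its own.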
2024-01-11T23:23:08
https://www.reddit.com/r/LocalLLaMA/comments/194ftjn/best_large_context_llm_to_match_array_strings/
RepresentativeOdd276
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194ftjn
false
null
t3_194ftjn
/r/LocalLLaMA/comments/194ftjn/best_large_context_llm_to_match_array_strings/
false
false
self
1
null
Would anyone use a model that specializes in document summarization?
16
Hey All, I believe the title is self explanatory. I recently made a model that specialized in search summarization, and I have found that there hasn't been as much use as I imagined. The model is capable of taking a search result page and returning a highly detailed summary along with related queries in JSON format. I'm thinking of extending my synthetic dataset to do something similar - this time I will train a model to return a JSON-structured output of summarization items for an input document. Is this something any of you would like to use? Are there any specific features that make sense to include in terms of how the model handles output? I believe I will be able to deliver value here - I have a multi-bn corpus of high quality long-context synthetic reasoning examples that seem to have done a great job for my last model.
2024-01-11T23:10:35
https://www.reddit.com/r/LocalLLaMA/comments/194fipy/would_anyone_use_a_model_that_specializes_in/
docsoc1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194fipy
false
null
t3_194fipy
/r/LocalLLaMA/comments/194fipy/would_anyone_use_a_model_that_specializes_in/
false
false
self
16
null
SOTA 2-bit quants added to llama.cpp
1
2024-01-11T23:09:02
https://github.com/ggerganov/llama.cpp/pull/4856
m18coppola
github.com
1970-01-01T00:00:00
0
{}
194fhcw
false
null
t3_194fhcw
/r/LocalLLaMA/comments/194fhcw/sota_2bit_quants_added_to_llamacpp/
false
false
https://b.thumbs.redditm…YnJVe9w-18JU.jpg
1
{'enabled': False, 'images': [{'id': 'fE-M9lnbbpw81V-I4NXXzonYA5t-Pv1NGaazMAinc7Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=108&crop=smart&auto=webp&s=30145048e1c25b001d3be479d5ec7b8408e0b504', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=216&crop=smart&auto=webp&s=a42a88e0b3d058f614d4ff1aa0feff132c9a1574', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=320&crop=smart&auto=webp&s=4fe514999732d2af49aa144057de5c3d2931660e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=640&crop=smart&auto=webp&s=97afd91c016220fdff2d22de89747c948b1a41d7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=960&crop=smart&auto=webp&s=be151c713de39fe7c25df804b368249184fe946c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?width=1080&crop=smart&auto=webp&s=8e3a207e236e846bc623b21608d062eecf780476', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1TDCyzl-9rLfrqU_lXN0uKdbTKn0ZT45L6Ut_VjKiXw.jpg?auto=webp&s=f496245862742a20ae9eacc897650e9457f448b9', 'width': 1200}, 'variants': {}}]}
Need Mistral weights for local testing
1
Where can I find the top model weights? The two I found from all the jargon that are closest to GPT-4 in evals are: Mistral Medium and Nous-Hermes-2-Yi-34B (on Hugging Face). I have a local setup with a 6GB Nvidia 1650 Ti; would these models run well if I use Ollama/LM Studio and llamafile for conversion?
2024-01-11T22:42:11
https://www.reddit.com/r/LocalLLaMA/comments/194etkr/need_mistral_weights_for_local_testing/
Honest-Debate-6863
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194etkr
false
null
t3_194etkr
/r/LocalLLaMA/comments/194etkr/need_mistral_weights_for_local_testing/
false
false
self
1
null
Mixtral 8x7b doesn’t quite remember Mr. Brightside…
146
Running the 5bit quant though, so maybe it’s a little less precise or it just really likes Radioactive…
2024-01-11T22:31:06
https://i.redd.it/f1l736mu0wbc1.jpeg
switchandplay
i.redd.it
1970-01-01T00:00:00
0
{}
194ejzq
false
null
t3_194ejzq
/r/LocalLLaMA/comments/194ejzq/mixtral_8x7b_doesnt_quite_remember_mr_brightside/
false
false
https://b.thumbs.redditm…m2uqMwxmLyuo.jpg
146
{'enabled': True, 'images': [{'id': 'tDXTR6_dv5ilw5GRNOIhurZTd_NGyZHl6-T7pAniHfA', 'resolutions': [{'height': 129, 'url': 'https://preview.redd.it/f1l736mu0wbc1.jpeg?width=108&crop=smart&auto=webp&s=6e6b537cf18c3d692bb6775901ed1730081298a8', 'width': 108}, {'height': 258, 'url': 'https://preview.redd.it/f1l736mu0wbc1.jpeg?width=216&crop=smart&auto=webp&s=66eb424ec6126fc43bd7fb8203a6d42f18b9db43', 'width': 216}, {'height': 383, 'url': 'https://preview.redd.it/f1l736mu0wbc1.jpeg?width=320&crop=smart&auto=webp&s=60fe4ccf7ff82de54baadf2a24779f5ec9d1e476', 'width': 320}], 'source': {'height': 568, 'url': 'https://preview.redd.it/f1l736mu0wbc1.jpeg?auto=webp&s=aac629b967c7bbdd74d7ade02cae3d33ec16cd73', 'width': 474}, 'variants': {}}]}
Functionary: Open-Source Parallel Tool Use
50
2024-01-11T22:28:38
https://github.com/MeetKai/functionary
yiyecek
github.com
1970-01-01T00:00:00
0
{}
194ehsx
false
null
t3_194ehsx
/r/LocalLLaMA/comments/194ehsx/functionary_opensource_parallel_tool_use/
false
false
https://b.thumbs.redditm…pY58vodVWORA.jpg
50
{'enabled': False, 'images': [{'id': 'zXRYwfhPAJzxBDUCqM-pMRz0EWd9x8GPhNuzdXEODLc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=108&crop=smart&auto=webp&s=4ce819c5e12534cec05d694ef517112b02e3bad3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=216&crop=smart&auto=webp&s=29839230f61f3b7c55063353efe4fa425f854ff3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=320&crop=smart&auto=webp&s=57c40faabef3c29045a7a35ff34bf8adb59b8540', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=640&crop=smart&auto=webp&s=0dc0af87faca29a9bb7c178cdf526312ddd3470e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=960&crop=smart&auto=webp&s=a924b686c0e9b13faa74fe3c33efb67af532bf1c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?width=1080&crop=smart&auto=webp&s=b33c474d2dd671c70fb6a8da35c283dbf312f1ec', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/KLG45C-_aP0eVhOiEv_rXS35u8IniDX7YdZN6E9viSw.jpg?auto=webp&s=eac825e24297ec9b69cd60e68dd5daab416ddb7e', 'width': 1200}, 'variants': {}}]}
How did you structure your chat history dataset?
8
I’m fine-tuning mistral / mistral-instruct, and I’m attempting to fine-tune it on my message history. My goal is to impersonate whoever is in my dataset. I have instant message history between myself and a friend. How did you structure the training prompt to support impersonating whoever is in the dataset? I’ve tried just training the non-instruct model with my chat data and relying on autocomplete, but it never quite achieves "impersonation". For those that have successfully fine-tuned on chat history between multiple authors, what was your training approach, dataset format, and training and user prompt?
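One common approach is to fold the two-person log into instruct-style turn pairs, with the friend's lines as the "assistant" role so the model learns to produce them. The sketch below uses a Mistral-instruct-like `[INST] ... [/INST]` layout; the exact template (spacing, `<s>`/`</s>` placement) varies between tooling, so treat this as one plausible formatting, not the canonical one.

```python
def to_mistral_chat(messages, target_author):
    """Fold a 2-person chat log into instruct-style turns, treating
    target_author's lines as the 'assistant' output to imitate.
    messages: list of (author, text) in chronological order."""
    turns = []
    for author, text in messages:
        role = "assistant" if author == target_author else "user"
        if turns and turns[-1][0] == role:
            # merge consecutive messages from the same side into one turn
            turns[-1] = (role, turns[-1][1] + "\n" + text)
        else:
            turns.append((role, text))
    # drop a leading assistant turn so pairs line up as (user, assistant)
    if turns and turns[0][0] == "assistant":
        turns = turns[1:]
    pairs = []
    for i in range(0, len(turns) - 1, 2):
        pairs.append(f"[INST] {turns[i][1]} [/INST] {turns[i + 1][1]}")
    return "<s>" + "</s>".join(pairs) + "</s>"
```

Merging consecutive messages from the same author matters for IM logs, where people often send several short messages in a row that should count as a single turn.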
2024-01-11T22:18:38
https://www.reddit.com/r/LocalLLaMA/comments/194e9dp/how_did_you_structure_your_chat_history_dataset/
Enough-Meringue4745
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194e9dp
false
null
t3_194e9dp
/r/LocalLLaMA/comments/194e9dp/how_did_you_structure_your_chat_history_dataset/
false
false
self
8
null
Does someone can pinpoint LAM kinda model?
1
[removed]
2024-01-11T22:05:33
https://www.reddit.com/r/LocalLLaMA/comments/194dxuc/does_someone_can_pinpoint_lam_kinda_model/
UnderstandingAny4075
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194dxuc
false
null
t3_194dxuc
/r/LocalLLaMA/comments/194dxuc/does_someone_can_pinpoint_lam_kinda_model/
false
false
self
1
null
Book recommendation.
2
Looking for book recommendations. My background: a bachelor's in CS with first-year master's level coursework. My last exposure to 'AI' was more than 10 years ago, so I know about convolution, backpropagation, and hidden layers, but I didn't keep up with developments after that. I just tried using ollama and found it felt so familiar, on a surface level, to running CNNs back in the 'old days'. But is backpropagation even used for learning nowadays? Looking for books that can bridge the gap between that and current CS knowledge. Thanks in advance!
2024-01-11T22:01:04
https://www.reddit.com/r/LocalLLaMA/comments/194dtpo/book_recommendation/
DraconPern
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194dtpo
false
null
t3_194dtpo
/r/LocalLLaMA/comments/194dtpo/book_recommendation/
false
false
self
2
null
Help with my local LLM needs on a RTX 4090
1
Hello everyone, For several months, I've been working on assembling a computer setup to run local Large Language Models (LLMs) tailored to my specific requirements. I'm reaching out for your insights and advice on this. Here's a brief overview of my scenario and needs: 1. The system will run on either Windows 10 or 11 Professional, specifically for operating a local LLM. 2. The model will be accessible via an API, enabling integration with my custom software applications. 3. I intend to keep the LLM active during my working hours. It will primarily assist with specific inquiries and for general computing tasks not involving personal data. For more intensive, non-personal queries, I'll continue using ChatGPT 4 through my paid subscription. 4. My primary use for the local LLM will be for processing personal source code, PDFs, tax-related documents, and other sensitive materials that I prefer not to share with ChatGPT or other external platforms. As for the hardware, I'm considering an Intel i9 processor paired with an RTX 4090 GPU, and dual 1 TB SSDs. This is what my budget allows. I'd appreciate any feedback or suggestions you might have on this setup. Thanks in advance for your help, and much love to this community!
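For point 2 (API access from custom software), many local inference servers (llama.cpp's server, text-generation-webui, LM Studio) expose an OpenAI-compatible HTTP endpoint, so the client side can be quite small. A stdlib-only sketch follows; the base URL, port, and model name are placeholders that depend on whichever server ends up running on the machine.

```python
import json
import urllib.request

def build_chat_request(prompt, base_url="http://localhost:8000/v1"):
    """Build a request for an OpenAI-compatible local chat endpoint.
    base_url and the model name are assumptions -- match your server."""
    payload = {
        "model": "local-model",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# once the server is up: urllib.request.urlopen(build_chat_request("..."))
```

Keeping the client this thin makes it easy to point the same tooling at different local backends, which helps when comparing models for the sensitive-document use case.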
2024-01-11T21:15:18
https://www.reddit.com/r/LocalLLaMA/comments/194cpgm/help_with_my_local_llm_needs_on_a_rtx_4090/
Tiny_Yellow_7869
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194cpgm
false
null
t3_194cpgm
/r/LocalLLaMA/comments/194cpgm/help_with_my_local_llm_needs_on_a_rtx_4090/
false
false
self
1
null
Training question: If I simply duplicate data to make a LoRA, will it reinforce properly?
1
[removed]
2024-01-11T21:02:30
https://www.reddit.com/r/LocalLLaMA/comments/194cejl/training_question_if_i_simply_duplicate_data_to/
cold-depths
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194cejl
false
null
t3_194cejl
/r/LocalLLaMA/comments/194cejl/training_question_if_i_simply_duplicate_data_to/
false
false
self
1
null
Fastest API/Inference Server for Mixtral (EXL2/3.5bpw)
1
[removed]
2024-01-11T20:42:38
https://www.reddit.com/r/LocalLLaMA/comments/194bx3j/fastest_apiinference_server_for_mixtral_exl235bpw/
aallsbury
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194bx3j
false
null
t3_194bx3j
/r/LocalLLaMA/comments/194bx3j/fastest_apiinference_server_for_mixtral_exl235bpw/
false
false
self
1
null
Local vector database with all of Wikipedia for RAG
1
2024-01-11T20:26:18
https://huggingface.co/NeuML/txtai-wikipedia
davidmezzetti
huggingface.co
1970-01-01T00:00:00
0
{}
194biid
false
null
t3_194biid
/r/LocalLLaMA/comments/194biid/local_vector_database_with_all_of_wikipedia_for/
false
false
https://b.thumbs.redditm…xbRN9yfu813U.jpg
1
{'enabled': False, 'images': [{'id': '6ZlK_yWagy0xLQBodE-WgUo4XYVM7uYaA2JixJlfZRc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=108&crop=smart&auto=webp&s=fb921dc48b4fb6f89340b78df5be66c02fc4a56c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=216&crop=smart&auto=webp&s=16abdee2e848e24f3b5cb625f712a5dd04532415', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=320&crop=smart&auto=webp&s=e545c0c6c1f299b1df0e98fed50b282fecb191eb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=640&crop=smart&auto=webp&s=19090b9c508f22aea336b05de47bc0b9f9afe6a1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=960&crop=smart&auto=webp&s=7a983dab300b678fcbb525bfe4c3bb574164d992', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?width=1080&crop=smart&auto=webp&s=058ef354089a5e5cf76b3b886a81608e5dd3938a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ZH7tFKSTNgntgMrsvmiYlOx_YaURJuplyvaYfUUU2wA.jpg?auto=webp&s=999fad296ed62863a12a2ab6febd5c57c951ad78', 'width': 1200}, 'variants': {}}]}
Learning. Slowly.
6
Windows 11, i7 10th gen (6 cores), 64 GB RAM. I have a Docker container running h2oGPT using TheBloke/Llama-2-7B-chat-Q6_K-GGUF. No GPU. I understand it would be slow going, but should I be able to ask a simple question like 'How many days are in a year', come back 5 minutes later, and find it still thinking? How are people running models on Raspberry Pis? I liked the interface; it can ingest documents out of the gate. But damn, it's slow. LM Studio isn't this slow. Anyone have a better solution for someone looking to learn AI in general and Python? I want to be able to query my own docs as well. Any suggestions are welcome.
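A useful sanity check for CPU inference is a back-of-envelope throughput estimate: generation is typically memory-bandwidth bound, with each token streaming roughly the whole model through RAM once. The numbers below (a ~5.6 GB Q6_K 7B file, ~40 GB/s dual-channel DDR4) are assumptions for illustration.

```python
def est_tokens_per_sec(model_gb, mem_bandwidth_gbs):
    """Back-of-envelope CPU inference speed: assumes generation is
    memory-bandwidth bound, ~one full model read per token (assumption)."""
    return mem_bandwidth_gbs / model_gb

# ~5.6 GB Q6_K 7B model, ~40 GB/s dual-channel DDR4 (both assumed)
print(round(est_tokens_per_sec(5.6, 40), 1))  # roughly 7 tokens/sec
```

If actual throughput is far below an estimate like this, the bottleneck is usually configuration (thread count, Docker resource limits) rather than the hardware itself, so minutes of "thinking" for a one-line answer suggests something is misconfigured.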
2024-01-11T19:57:26
https://www.reddit.com/r/LocalLLaMA/comments/194at85/learning_slowly/
FatGuyQ
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194at85
false
null
t3_194at85
/r/LocalLLaMA/comments/194at85/learning_slowly/
false
false
self
6
null
Is there any reason why each expert in the MoE models would need to be the same parameter count?
30
It is something I have thought about recently, and I don't really know how to find a reliable answer to whether it is practical to mix in this way. Specifically, my question is whether one or more experts could be smaller (phi-2 size) and perform something like a diffusion model does (start with a fuzzy idea of the overall output, refine over X steps). If size does not need to be consistent, then maybe one "executive" expert could be 13B, a few 7B, then some 2B, etc. Again, there may be some basic thing I am missing that makes this whole idea impractical, and since I don't want to use paid cloud resources, experimenting with training is not something I have learned yet beyond some NanoGPT.
2024-01-11T19:54:40
https://www.reddit.com/r/LocalLLaMA/comments/194aqu8/is_there_any_reason_why_each_expert_in_the_moe/
redditaccountno6
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194aqu8
false
null
t3_194aqu8
/r/LocalLLaMA/comments/194aqu8/is_there_any_reason_why_each_expert_in_the_moe/
false
false
self
30
null
Hardware setup to run Llms Locally
8
I got a PC with: AMD A4-6300 3.7 GHz, FM2+ socket, 12 GB DDR3-1600 RAM. My motherboard allows me to install two GPUs and two RAM sticks. I'm planning on running and training LLMs on this machine, and for this I'm open to changing the processor, adding RAM, and buying GPUs, but not buying a new motherboard; that would mean assembling a new PC from scratch. I searched several posts but found contradictory information on what could be run on these graphics cards that I have access to: - RTX 2060 6GB 120 USD - RX 5700 XT 8GB 110 USD - RTX 2060 12GB 180 USD - RTX 3060 12GB 200 USD - RX 6700 XT 12GB 190 USD - RX 570 XFX 8GB 70 USD - GTX 1080 TI 11GB 180 USD - RX 480 8GB 75 USD I want to find the most cost-effective option. I'm willing to pay a few more bucks for a great improvement in speed and capacity, but not to spend hundreds of dollars for just a minimal difference. I could also add more RAM if that meant an increase in speed or capacity; I would just hold off on one upgrade if paying a bit more for another could mean a huge improvement. What are the specs that matter most in a PC for LLM training? The GPU's core count, clock speed, bus width, bandwidth, memory? The RAM's capacity, speed? The processor's speed?
2024-01-11T19:42:33
https://www.reddit.com/r/LocalLLaMA/comments/194agmz/hardware_setup_to_run_llms_locally/
Gemini4613
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
194agmz
false
null
t3_194agmz
/r/LocalLLaMA/comments/194agmz/hardware_setup_to_run_llms_locally/
false
false
self
8
null
Error?
1
2024-01-11T19:21:07
https://i.redd.it/jtdnwl6y2vbc1.jpeg
Big-Bottle-5878
i.redd.it
1970-01-01T00:00:00
0
{}
1949xwq
false
null
t3_1949xwq
/r/LocalLLaMA/comments/1949xwq/error/
false
false
https://b.thumbs.redditm…a0advqUh8bjs.jpg
1
{'enabled': True, 'images': [{'id': 'vFxgBEX03ujHaqZQUrR3FWlPCjBP347HqhOfybMP9mo', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=108&crop=smart&auto=webp&s=cdfd224e95c8db9a7de3ad269cbc46ddbf90753a', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=216&crop=smart&auto=webp&s=bf5522553168ae322a2eac21d9b39e4685c1c4ce', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=320&crop=smart&auto=webp&s=e3947c798af21ada45639aae4e21e27b30672823', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=640&crop=smart&auto=webp&s=dede2469142e742d6950430f5a1a1279ff2f79af', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=960&crop=smart&auto=webp&s=3a84b9c1efb22a7d66150d10702475e5404ea0bd', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?width=1080&crop=smart&auto=webp&s=9096ec7c11e878eaf3efcd039b73aafcc12a202b', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/jtdnwl6y2vbc1.jpeg?auto=webp&s=3e074ab07388484cd3f4e23e1ef5507dae8d6b03', 'width': 4032}, 'variants': {}}]}