Dataset schema (column summary from the dataset viewer):

- title: string, length 1 to 300
- score: int64, range 0 to 8.54k
- selftext: string, length 0 to 41.5k
- created: timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14
- url: string, length 0 to 878
- author: string, length 3 to 20
- domain: string, length 0 to 82
- edited: timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53
- gilded: int64, range 0 to 2
- gildings: string, 7 classes
- id: string, length 7
- locked: bool, 2 classes
- media: string, length 646 to 1.8k
- name: string, length 10
- permalink: string, length 33 to 82
- spoiler: bool, 2 classes
- stickied: bool, 2 classes
- thumbnail: string, length 4 to 213
- ups: int64, range 0 to 8.54k
- preview: string, length 301 to 5.01k
AMD ROCm 5.5 Released
41
2023-05-02T18:08:02
https://github.com/RadeonOpenCompute/ROCm/releases/tag/rocm-5.5.0
friedrichvonschiller
github.com
1970-01-01T00:00:00
0
{}
135vlfc
false
null
t3_135vlfc
/r/LocalLLaMA/comments/135vlfc/amd_rocm_55_released/
false
false
https://a.thumbs.redditm…Az6TnZR4qT28.jpg
41
{'enabled': False, 'images': [{'id': 'eygnmIzQmg22xjmMr9V7DdGPNO-KwJJMOyxnLCNoHsM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=108&crop=smart&auto=webp&s=94afdd762921985cb59329f023db026aa577b16b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=216&crop=smart&auto=webp&s=31af47c1c8bc20b2e23f03845ad5f6f0f8892af3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=320&crop=smart&auto=webp&s=6b8812da0b22bdad303f8e5b29bfe9227533acc8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=640&crop=smart&auto=webp&s=aa6c971032256664bac1fbbd6ece5d714cdab3a5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=960&crop=smart&auto=webp&s=e0357f7bd70715769ef1577287250cab529e315b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?width=1080&crop=smart&auto=webp&s=65215e52e58bd279640ff7a758462da417a53a44', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3WGyKXhC5JYETBZLjH28XELlUDEZeZdExyoKm9MLjcA.jpg?auto=webp&s=492ffa8edf3598e47650ff66d00205db08e4c3b5', 'width': 1200}, 'variants': {}}]}
Vicuna Local Conversation
6
Hi all, I'm using Vicuna 7B ("eachadea/vicuna-7b-1.1") and I want to simulate a chat behaviour where the LLM can use the conversation history to answer appropriately. From the official Vicuna GitHub repo (lm-sys/FastChat) I found out that they feed the whole conversation in this format: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.USER: Hello!ASSISTANT: Hi!</s>USER: How are you?ASSISTANT:" I tried it myself, but the model doesn't seem to use the previous conversation to respond to new questions. (i.e. if I say "your name is Thomas", and then ask its name, the answer is "Vicuna") Does anyone have an idea how to do that? Edit: I noticed that Vicuna was specifically instructed to answer "What is your name?" in only that way. Just changing my final question to "What is the name I assigned you?" made the model respond correctly ("Thomas")
2023-05-02T16:22:15
https://www.reddit.com/r/LocalLLaMA/comments/135so7c/vicuna_local_conversation/
robbery_bob
self.LocalLLaMA
2023-05-03T07:38:41
0
{}
135so7c
false
null
t3_135so7c
/r/LocalLLaMA/comments/135so7c/vicuna_local_conversation/
false
false
self
6
null
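The conversation format quoted in the post above can be assembled programmatically. A minimal sketch, taking the separators verbatim from the quoted FastChat v1.1 format (the exact whitespace and `</s>` placement are assumptions based on that quote, not verified against the FastChat source):

```python
# Sketch of a Vicuna v1.1-style prompt builder, following the format quoted
# from lm-sys/FastChat in the post above.
SYSTEM = ("A chat between a curious user and an artificial intelligence assistant. "
          "The assistant gives helpful, detailed, and polite answers to the "
          "user's questions.")

def build_prompt(history, new_user_msg):
    """history: list of (user, assistant) turns already completed."""
    parts = [SYSTEM]
    for user, assistant in history:
        # Past turns end with the </s> end-of-sequence marker.
        parts.append(f"USER: {user}ASSISTANT: {assistant}</s>")
    # The new turn ends with "ASSISTANT:" so the model continues from there.
    parts.append(f"USER: {new_user_msg}ASSISTANT:")
    return "".join(parts)

prompt = build_prompt([("Your name is Thomas.", "Understood!")],
                      "What is the name I assigned you?")
```

If the model still ignores the history, it is worth checking that the tokenizer actually encodes `</s>` as the end-of-sequence token rather than as literal text.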
Best 30b 4bit model?
34
What do you think is the best 30b 4bit model currently around? I've been playing with several and have found them to be a mixed bag. For example, Alpachino30b is fast and okay but tends to go off on strange tangents, while alpasta is good but generating takes a year and a half for some reason. Any good suggestions? EDIT: running a 3090
2023-05-02T12:56:55
https://www.reddit.com/r/LocalLLaMA/comments/135kh16/best_30b_4bit_model/
PsyckoSama
self.LocalLLaMA
2023-05-04T01:30:43
0
{}
135kh16
false
null
t3_135kh16
/r/LocalLLaMA/comments/135kh16/best_30b_4bit_model/
false
false
self
34
null
Redirecting Model Outputs from llama.cpp to a TXT File for Easier Tracking of Results?
10
Hey everyone! I've been experimenting with various models lately and find myself wanting to keep track of their outputs in a more organized way. I was wondering if there's a way to redirect these outputs into a TXT file, making it much easier to compare results and monitor performance. Has anyone tried doing this before? If so, could you please share your insights, code snippets, or any resources you've found helpful in achieving this? I believe this could be a great way for me and others to manage their data more efficiently while working with different models. I appreciate any suggestions or help you can provide. Looking forward to learning from this awesome community! Thanks in advance!
2023-05-02T10:39:05
https://www.reddit.com/r/LocalLLaMA/comments/135hcvc/redirecting_model_outputs_from_llamacpp_to_a_txt/
Most_Ad_1905
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135hcvc
false
null
t3_135hcvc
/r/LocalLLaMA/comments/135hcvc/redirecting_model_outputs_from_llamacpp_to_a_txt/
false
false
self
10
null
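For the output-logging question above, the simplest answer is shell redirection (`./main -m model.bin -p "..." > out.txt`, or `| tee out.txt` to also see it live). A more structured option is to drive llama.cpp from a small Python wrapper that timestamps each run; the llama.cpp binary path and flags in the comment are illustrative assumptions:

```python
import datetime
import subprocess
import sys

def run_and_log(cmd, outfile):
    """Run a command and append its stdout to a text file with a timestamp."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    with open(outfile, "a") as f:
        f.write(f"--- {datetime.datetime.now().isoformat()} | {' '.join(cmd)} ---\n")
        f.write(result.stdout + "\n")
    return result.stdout

# With llama.cpp this might look like (binary and flags are hypothetical):
# run_and_log(["./main", "-m", "models/13B/ggml-model-q4_0.bin", "-p", "Hello"],
#             "runs.txt")

# Portable demonstration using the Python interpreter as the subprocess:
out = run_and_log([sys.executable, "-c", "print('hello')"], "runs.txt")
```

Appending (`"a"` mode) rather than overwriting means one file accumulates every run, which makes side-by-side comparison of models easy.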
Someone claims to have finetuned Pythia for 8k context length.
20
[https://www.reddit.com/r/MachineLearning/comments/1347f5m/d_a_quest_for_very_long_sequence_length/](https://www.reddit.com/r/MachineLearning/comments/1347f5m/d_a_quest_for_very_long_sequence_length/) What's the deal? I've heard people say adding extra context is extremely difficult without basically starting from scratch or doing some boring workaround like sliding context windows. Would something like this be possible to integrate into other models?
2023-05-02T01:08:39
https://www.reddit.com/r/LocalLLaMA/comments/135730l/someone_claims_to_have_finetuned_pythia_for_8k/
cyborgsnowflake
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135730l
false
null
t3_135730l
/r/LocalLLaMA/comments/135730l/someone_claims_to_have_finetuned_pythia_for_8k/
false
false
self
20
null
[deleted by user]
1
[removed]
2023-05-02T00:34:54
[deleted]
1970-01-01T00:00:00
0
{}
1356cyd
false
null
t3_1356cyd
/r/LocalLLaMA/comments/1356cyd/deleted_by_user/
false
false
default
1
null
LLaMA-Adapter V2: fine-tuned LLaMA 65B for visual instruction, and LLaMA Chat65B trained with ShareGPT data for chatting. Chat65B model has been released.
47
2023-05-01T23:40:21
https://arxiv.org/abs/2304.15010
Creative-Rest-2112
arxiv.org
1970-01-01T00:00:00
0
{}
13554oa
false
null
t3_13554oa
/r/LocalLLaMA/comments/13554oa/llamaadapter_v2_finetuned_llama_65b_for_visual/
false
false
default
47
null
[deleted by user]
1
[removed]
2023-05-01T23:03:26
[deleted]
1970-01-01T00:00:00
0
{}
13548yy
false
null
t3_13548yy
/r/LocalLLaMA/comments/13548yy/deleted_by_user/
false
false
default
1
null
Is it possible to use Siri Shortcuts with Oobabooga?.
0
[removed]
2023-05-01T22:57:15
https://www.reddit.com/r/LocalLLaMA/comments/135430y/is_it_possible_to_use_siri_shortcuts_with/
Ferrero__64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
135430y
false
null
t3_135430y
/r/LocalLLaMA/comments/135430y/is_it_possible_to_use_siri_shortcuts_with/
false
false
default
0
null
Nvidia released a 2b model trained on 1.1T Tokens called GPT-2B-001
89
2023-05-01T22:50:59
https://huggingface.co/nvidia/GPT-2B-001
Dany0
huggingface.co
1970-01-01T00:00:00
0
{}
1353xal
false
null
t3_1353xal
/r/LocalLLaMA/comments/1353xal/nvidia_released_a_2b_model_trained_on_11t_tokens/
false
false
https://b.thumbs.redditm…ygfwqblgF5mM.jpg
89
{'enabled': False, 'images': [{'id': '5NviXd9qrigtEN3hg7qgXjC1CoOPDkTh7qbicLQ5SW4', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=108&crop=smart&auto=webp&s=7081fc128c5f063773a98f5d50539e7f8becc1fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=216&crop=smart&auto=webp&s=27a8fd8f79d3f05a29bf5193a4e63e665807ddcb', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=320&crop=smart&auto=webp&s=866597265fe791610dae4da4dea0bd993a534134', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=640&crop=smart&auto=webp&s=1f73c7977cf3b81ba58c74cd31867b1dfb40dfca', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=960&crop=smart&auto=webp&s=f06d5503594e0406af74692821235b97a7e66f0b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?width=1080&crop=smart&auto=webp&s=73af14caa3c47e36d948de97c3bd410d497d46e1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/2lF-Y5I5Uqq0uAhxlm9fnzMwQlA0VprTrIIzTMP7G64.jpg?auto=webp&s=d25665e3205faedd6d97e3c993795105058a4419', 'width': 1200}, 'variants': {}}]}
Is it possible to exceed the 2000 Token limit?
8
I have a 3090 GPU with 24 GB of VRAM. Why is it not possible to run, for example, a 13B parameter model and use a much higher token limit?
2023-05-01T21:43:12
https://www.reddit.com/r/LocalLLaMA/comments/13528ux/is_it_possible_to_exceed_the_2000_token_limit/
Ferrero__64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13528ux
false
null
t3_13528ux
/r/LocalLLaMA/comments/13528ux/is_it_possible_to_exceed_the_2000_token_limit/
false
false
self
8
null
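On the question above: the 2,048-token ceiling of the original LLaMA comes from the positional embeddings it was trained with, so going past it degrades output even when VRAM allows. Memory does still grow with context via the KV cache, though. A rough back-of-envelope sketch, assuming LLaMA-13B shapes (40 layers, hidden size 5120) and an unquantized fp16 cache; all of these numbers are assumptions for illustration:

```python
# Back-of-envelope KV-cache memory for a LLaMA-13B-shaped model.
def kv_cache_bytes(n_tokens, n_layers=40, hidden=5120, bytes_per_val=2):
    # Each token stores one key vector and one value vector per layer.
    return n_tokens * n_layers * hidden * 2 * bytes_per_val

mb_2048 = kv_cache_bytes(2048) / 2**20   # the original LLaMA context limit
mb_8192 = kv_cache_bytes(8192) / 2**20   # a hypothetical 4x longer context
```

Under these assumptions the cache is about 1.6 GiB at 2,048 tokens and 6.4 GiB at 8,192, so a 24 GB card could hold a longer cache; the blocker is the training-time context length, not the hardware.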
Performance Benchmarks?
7
Hi, I'm currently in the process of figuring out what hardware I want to buy. I find it very hard to make an informed decision since I can't find any hard facts about performance in regard to NNs, LLMs, whatever. How much faster exactly is a 4090 compared to a 3090 with the same VRAM in inference? Those things. Does anyone have any links to such things?
2023-05-01T19:08:14
https://www.reddit.com/r/LocalLLaMA/comments/134y8vu/performacne_benchmarks/
overlydelicioustea
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134y8vu
false
null
t3_134y8vu
/r/LocalLLaMA/comments/134y8vu/performacne_benchmarks/
false
false
self
7
null
newbie seeking impressive llama models, am i missing something?
15
hi llama friends! i'm new to this whole open-source language model scene by meta, and i get what it's all about. but i gotta say, the outputs from these llama models haven't really blown me away. i already tried vicuna, which the community seems to think is the best model, but it didn't impress me that much either. it kind of reminded me of a slightly faster gpt NeoX model, and even just for normal chatting, it didn't quite live up to the hype. i of course know they won't stack up to anything like gpt 4 yet, but with all the hype from others, i feel like i should be getting way more than i actually am. so, i'm wondering if it's just me or if there are smarter llama models i haven't discovered yet? i've got an rtx 3080, so i don't think hardware is the issue here. is it possible that the llama space is just underdeveloped or am i missing out on some really great models? maybe i just expected too much, but i can't help thinking that everyone must be here for *something*, so that's why i'm here asking. i'd really like to know what keeps you all excited about llama models and which ones have managed to impress you. i'd love to hear about your experiences with different llama models, and which ones you think are the most impressive and efficient. any recommendations are greatly appreciated! thanks in advance, excited to learn from you all!
2023-05-01T17:26:14
https://www.reddit.com/r/LocalLLaMA/comments/134udaa/newbie_seeking_impressive_llama_models_am_i/
8octothot8
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134udaa
false
null
t3_134udaa
/r/LocalLLaMA/comments/134udaa/newbie_seeking_impressive_llama_models_am_i/
false
false
self
15
null
What's your take on the plea for halting development of models larger than GPT4?
26
I love technology and computer science; it's exciting to be part of the advent of a revolution. At the same time, there is so much potential danger if we are careless that it is quite frightening. Our society might improve and flourish or become worse and perish, but surely deploying artificial intelligence at scale will do something. Lately, I've been debating in my inner monologue a few moral concerns about all this after watching some videos and opinions on the subject. This was my journey.

I started with this [interview with Yannic Kilcher](https://www.youtube.com/watch?v=TFa539R09EQ) (can you believe the dude actually has eyes?!). Very down to earth as always, with a great sense of open source and free software; his work (and that of many others) on the [Open Assistant](http://open-assistant.io) is such an invaluable effort in community-powered RLHF that it must be praised all over. It's a great base to start with because I share his points of view.

To understand the real problem we are facing with the misalignment of LLMs and social media in general, I watched this insightful, although somewhat hard to follow, [talk by Eliezer Yudkowsky](https://www.youtube.com/watch?v=EUjc1WuyPT8). He comes across as a well-educated scholar that clearly has something to say about the field of keeping these things in line. Notice that this talk was given in May 2016, before all hell broke loose. One year later, the paper "[Attention is all you need](https://arxiv.org/abs/1706.03762)" was submitted, and the Transformer architecture became the base unifying all machine learning fields, allowing the extremely rapid development that we are witnessing now, but at the same time firing the starting gun in the race between the development of the models and the development of the tools to understand and control them.

I followed that up with his [Lex Fridman podcast](https://www.youtube.com/watch?v=AaTRHFaaPG8) episode to get his opinion on the more recent state of the tech. I think this is a very engaging conversation; it's quite scary and sad at times but gives you some perspective, well, his perspective at least. I'm still on the fence about some of the topics, or rather, their actual impact on humanity. As I said, deploying AI at scale will surely do something; I'm not sure if killing us all is an overstatement to get attention to the cause, just prophetic mumbo jumbo, or in fact a plausible reality we might face in some shape or form. I'm still not convinced that we are all doomed, but the bit at around the 1:22:00 mark is quite interesting.

Next I watched [Max Tegmark's episode](https://www.youtube.com/watch?v=VcVfceTsD0A); he's a physicist at MIT and president of the Future of Life Institute, the guys proposing the [open letter to halt development of giant LLMs bigger than GPT4](https://futureoflife.org/open-letter/pause-giant-ai-experiments/). His view is in tune with Eliezer's on a lot of subjects, and they share the idea that unregulated advancement will eventually end with our demise. Another interesting conversation worth watching.

By this time, I was facing an existential crisis. To lighten up the mood I decided to watch the talk ["The A.I. Dilemma"](https://www.youtube.com/watch?v=xoVJKj8lcNQ) presented by Tristan Harris and Aza Raskin. It's essentially a TLDW of the previous conversations with real examples of the dangers that this technology might escalate into. The sense of dread and despair is hitting strong.

At this point I wanted a more unbiased opinion on the matter, so I looked for [sentdex's analysis](https://www.youtube.com/watch?v=lJNblY3Madg) of the [OpenAI "GPT-4"](https://arxiv.org/abs/2303.08774) and [Microsoft "Sparks of AGI"](https://arxiv.org/abs/2303.12712) "*papers*". His final remarks are interesting; it's good to have the point of view of someone that has been following the advancements from the sidelines for some time. I think it serves as an anchor to hold us back from the hysteria that is being created and, ultimately, is just hyping a product.

To finish on a lighter note, these 2 videos brought me back from the ashes of my past self before I went into the land of madness: [Fireship](https://www.youtube.com/watch?v=TpZcGhYp4rw) and [Programmers are also human](https://www.youtube.com/watch?v=sxdDVAv6QDY).

In the end, I'm still struggling to find my own position. I believe that the apocalyptic views of the annihilation of humanity are just an overreaction to bring the attention of the general public to the pressing concerns about regulation. But when the people working with this stuff on a daily basis, who have the advantage of a first-person view of it, are the ones proposing the timeout, I think we should at least hear what they have to say. We need to make sure we press forward safely, otherwise progress will derail into Social Media 2.0 - Bullshit^2.

As I said, I love technology. These past months have been extremely exciting to witness; we were handed something quite special, and hopefully we will have the strength of mind to hold it responsibly. Should we keep stacking more layers and pumping compute into these models until they are an entity? Or walk in baby steps from now on? Should the next iteration be open source? Should it even exist?

PS: Sorry for the long post, and believe me, it was all written using small, dangling, tentacle-like things made of flesh and bone hitting a real keyboard.
2023-05-01T12:16:56
https://www.reddit.com/r/LocalLLaMA/comments/134kvmd/whats_your_take_on_the_plea_for_halting/
Rogerooo
self.LocalLLaMA
2023-05-01T12:54:35
0
{}
134kvmd
false
null
t3_134kvmd
/r/LocalLLaMA/comments/134kvmd/whats_your_take_on_the_plea_for_halting/
false
false
self
26
{'enabled': False, 'images': [{'id': 'ABt1g9t2rHB4DDIXxOCnbcaArQqezSPhNCQoA-4Xohk', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=108&crop=smart&auto=webp&s=cf2fd449128514c846e3510e596edd170e04a89b', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=216&crop=smart&auto=webp&s=1f36d8ca9b0d72ece376b106ea30a0e3cfdbc205', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?width=320&crop=smart&auto=webp&s=510cb42047773238733accb36ec2f8ad09b825ee', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/h8pjchsHvW3_WTLw8uhQHpyP1NecvIZKg9C2Fj-Ykq8.jpg?auto=webp&s=54994e5ff62d2f7ae95f77fd623e060f1278434c', 'width': 480}, 'variants': {}}]}
[deleted by user]
3
[removed]
2023-05-01T11:00:21
[deleted]
1970-01-01T00:00:00
0
{}
134ja7c
false
null
t3_134ja7c
/r/LocalLLaMA/comments/134ja7c/deleted_by_user/
false
false
default
3
null
Is it achievable through fine-tuning?
9
I'd like to build a chat bot specialised in the content of my forum (500k words in 1000 topics), so I can ask it:

* Questions similar to those asked in the past, expecting similar or better answers than what has been posted on my forum.
* New questions, getting accurate answers thanks to the knowledge of past discussions in my forum.

Note that not all the forum posts are questions, but most of them are. My understanding is that this can be achieved by fine-tuning a model on the content of my forum.

**Questions:**

* Is my understanding correct?
* Is it achievable using state-of-the-art LLMs?
* How much work would it be to transform the content of my forum into input for the model? What data structure is recommended for this task?
* Are modern GPUs (e.g. a 3090) enough to perform this task in a few days?
* Can I perform this task every week to incorporate the latest posts (around 50/week), or is a critical mass of text needed to measure any impact?
2023-05-01T10:09:12
https://www.reddit.com/r/LocalLLaMA/comments/134ibm3/is_it_achievable_through_finetuning/
Kaiizeddit
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134ibm3
false
null
t3_134ibm3
/r/LocalLLaMA/comments/134ibm3/is_it_achievable_through_finetuning/
false
false
self
9
null
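On the data-structure question in the post above: a common layout for fine-tuning data is Alpaca-style JSONL, one `instruction`/`input`/`output` object per line. A minimal, hedged sketch of converting forum topics into that format (the `(question, best_answer)` pairing is a hypothetical shape for the forum export, not a fixed standard):

```python
import json

def to_jsonl(topics, path):
    """Write (question, best_answer) pairs as Alpaca-style JSONL records."""
    with open(path, "w") as f:
        for question, answer in topics:
            record = {"instruction": question, "input": "", "output": answer}
            f.write(json.dumps(record) + "\n")

to_jsonl([("How do I reset my router?",
           "Hold the reset button for 10 seconds.")],
         "forum_train.jsonl")
```

Weekly incremental runs would then just mean appending the ~50 new records and fine-tuning again; whether 50 examples move the needle is an open question, as the post suspects.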
MetaIX/GPT4-X-Alpasta-30b-4bit · Hugging Face
38
This seems very good so far. It can be 'jailbroken' easily too.
2023-05-01T10:08:18
https://www.reddit.com/r/LocalLLaMA/comments/134ib4d/metaixgpt4xalpasta30b4bit_hugging_face/
ambient_temp_xeno
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134ib4d
false
null
t3_134ib4d
/r/LocalLLaMA/comments/134ib4d/metaixgpt4xalpasta30b4bit_hugging_face/
false
false
self
38
null
Economical equipment for ggml
4
Is it worth buying a computer with an 11900K processor and 128 GB DDR4 RAM for the AVX-512 support, or should I buy one with a 13900K processor and 128 GB DDR4 RAM for the overall performance? In both cases, I would add 1 (maybe 2) RTX 3090s in the near future for LoRA training or running LLaMA on the GPU. I am asking for advice because I will not be buying A100 or H100 cards any time soon. PS: An alternative is one of the AMD Zen 4 processors, but would it beat the Intel setup?
2023-05-01T09:51:09
https://www.reddit.com/r/LocalLLaMA/comments/134i0aq/economical_equipment_for_ggml/
Maofead
self.LocalLLaMA
2023-05-01T10:15:20
0
{}
134i0aq
false
null
t3_134i0aq
/r/LocalLLaMA/comments/134i0aq/economical_equipment_for_ggml/
false
false
self
4
null
From Python Developer to AI developer -> Recommended Tutorials
1
[removed]
2023-05-01T08:25:01
https://www.reddit.com/r/LocalLLaMA/comments/134gkz4/from_python_developer_to_ai_developer_recommended/
Koliham
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134gkz4
false
null
t3_134gkz4
/r/LocalLLaMA/comments/134gkz4/from_python_developer_to_ai_developer_recommended/
false
false
default
1
null
Vicuna 13B for Information Extraction in Medical Text
23
Hi everyone, I'm trying to use an LLM to extract information from documents in a highly technical domain (medicine). The desired output would be a structured document where all the fields get filled with the extracted info. I tried LoRA-Alpaca fine-tuning with a small dataset, but the results were not satisfactory. What are my best options?

- Using **consecutive prompts** to make Vicuna extract the info?
- Investing some time experimenting with **vector databases** + Vicuna?
- Using a classic **BERT-like model for NER**? (even though the labelling process would be crazy expensive in medicine)
2023-05-01T08:01:40
https://www.reddit.com/r/LocalLLaMA/comments/134g6zz/vicuna_13b_for_information_extraction_in_medical/
robbery_bob
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134g6zz
false
null
t3_134g6zz
/r/LocalLLaMA/comments/134g6zz/vicuna_13b_for_information_extraction_in_medical/
false
false
self
23
null
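The "consecutive prompts" option from the post above can be sketched as asking for one field at a time instead of the whole structured document at once. The field names and `ask_model` callable here are hypothetical placeholders for whatever schema and inference backend (Vicuna, an API, ...) is actually in use:

```python
# Hypothetical extraction schema; replace with the real structured fields.
FIELDS = ["diagnosis", "medication", "dosage"]

def extraction_prompts(document, fields=FIELDS):
    """One narrowly-scoped prompt per field, to keep each answer constrained."""
    return [f"From the clinical note below, extract only the {field}. "
            f"Answer with the value alone, or 'N/A' if absent.\n\n{document}"
            for field in fields]

def extract(document, ask_model):
    """ask_model: callable taking a prompt string and returning the model's text."""
    return {f: ask_model(p)
            for f, p in zip(FIELDS, extraction_prompts(document))}

# Demonstration with a stub in place of a real model call.
result = extract("Patient prescribed amoxicillin 500 mg for otitis media.",
                 ask_model=lambda prompt: "stub")
```

One prompt per field costs more inference calls but makes each answer easier to validate, which matters in a domain where hallucinated values are expensive.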
Alpaca LoRA checkpoints by chansung trained with longer cutoff length and latest Cleaned Alpaca Dataset with GPT-4 data
47
[removed]
2023-05-01T07:12:38
https://www.reddit.com/r/LocalLLaMA/comments/134fe6j/alpaca_lora_checkpoints_by_chansung_trained_with/
Creative-Rest-2112
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134fe6j
false
null
t3_134fe6j
/r/LocalLLaMA/comments/134fe6j/alpaca_lora_checkpoints_by_chansung_trained_with/
false
false
default
47
null
Has anyone managed to load LLaMA quantized 4-bit using the decapoda-research links?
1
[removed]
2023-05-01T06:09:24
https://www.reddit.com/r/LocalLLaMA/comments/134eay9/has_anyone_managed_to_load_llama_quantized_4bit/
vanilla-acc
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134eay9
false
null
t3_134eay9
/r/LocalLLaMA/comments/134eay9/has_anyone_managed_to_load_llama_quantized_4bit/
false
false
default
1
null
What is the difference in group size when using a 4 bit quantized model?
14
I'm running LLaMA 30b locally, using the quantized 4 bit version. I notice there is a 128g and a 32g group size available, and I don't really understand the difference despite doing some searching. Is one better than the other? Faster? If someone could explain in layman's terms, it would be greatly appreciated!
2023-05-01T02:49:56
https://www.reddit.com/r/LocalLLaMA/comments/134aolr/what_is_the_difference_on_group_size_when_using_a/
SRavingmad
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
134aolr
false
null
t3_134aolr
/r/LocalLLaMA/comments/134aolr/what_is_the_difference_on_group_size_when_using_a/
false
false
self
14
null
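In layman's terms for the question above: 4-bit quantization stores one scale factor per *group* of weights, so 32g keeps a scale for every 32 weights (finer, slightly more accurate, slightly more storage) while 128g shares one scale across 128 weights. A toy round-trip sketch, using simple absmax scaling as a stand-in for the real GPTQ scheme (an assumption for illustration):

```python
import random

def quantize_dequantize(x, group_size, bits=4):
    """Absmax-quantize each group to signed ints, then reconstruct."""
    levels = 2 ** (bits - 1) - 1              # 7 quantization levels for 4-bit
    out = []
    for i in range(0, len(x), group_size):
        group = x[i:i + group_size]
        scale = max(abs(v) for v in group) / levels or 1.0
        out.extend(round(v / scale) * scale for v in group)
    return out

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(1024)]

# Mean absolute reconstruction error for the two common group sizes.
err = {g: sum(abs(a - b)
              for a, b in zip(weights, quantize_dequantize(weights, g))) / len(weights)
       for g in (32, 128)}
```

The per-element error is bounded by half the group's scale step, and smaller groups adapt their scale to local weight magnitudes, which is why 32g generally reconstructs a bit more faithfully at the cost of extra metadata.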
LORA Training Vicuna 13B with big text files
34
I'm right now using the ooba booga GUI on Windows to fine-tune Vicuna 13B with largish text files. I'm using an A6000 on 13B 8bit, but I can easily see that 24GB or even 16GB could definitely be feasible for this with the right parameters and more time. To start with, I tested it on a few paragraphs just to test out the hyperparameters, and discovered that with any epoch count of less than 20, the chat prompt would basically treat my LORA as if it didn't even exist. *Likely* because I'm a total n00b, but after 20 repetitions it did seem that Vicuna had an idea what the few paragraphs stated, with a healthy bit of hallucinations thrown in. Likely the temperature is too high. Now, the A6000 is pegged at 43 GB / 47.5 GB usage and 85 degrees C, likely for the next 2-4 hours, and I'll see the results. Of course, I know this is just for research, but out of curiosity, does anyone have their hyperparameters for giving Vicuna 13B some decent grasp of a body of technical documentation? My current hyperparameters are Epochs = 20, Micro batch = 1, Batch size = 1024, Lora Rank = 512, Lora Alpha = 1024, Cutoff length = 512, Overlap length = 256, Newline Cut Length = 256. I hope it doesn't run out of RAM in a few hours. If anyone would like to share their hyperparameters and results, I'd look forward to hearing about it; this stuff is pretty hard to google.
2023-05-01T01:55:11
https://www.reddit.com/r/LocalLLaMA/comments/1349kek/lora_training_vicuna_13b_with_big_text_files/
alittleteap0t
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1349kek
false
null
t3_1349kek
/r/LocalLLaMA/comments/1349kek/lora_training_vicuna_13b_with_big_text_files/
false
false
self
34
null
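One way to see why the A6000 in the post above is pegged: LoRA rank 512 is very large (ranks of 8 to 64 are more typical). Each adapted d×d weight gains a rank-r A/B pair, i.e. 2·r·d extra trainable parameters. A rough count, assuming LLaMA-13B shapes (hidden size 5120, 40 layers) and adapters on the q and v projections only; all of these shapes are assumptions:

```python
def lora_params(rank, hidden=5120, layers=40, matrices_per_layer=2):
    """Trainable parameters added by LoRA under the assumed shapes."""
    per_matrix = rank * hidden * 2   # A is (rank x hidden), B is (hidden x rank)
    return per_matrix * matrices_per_layer * layers

rank512 = lora_params(512)   # the post's LoRA rank
rank8 = lora_params(8)       # a more typical rank
```

Under these assumptions rank 512 trains roughly 420M parameters versus about 6.5M at rank 8, a 64x difference in optimizer state and gradients, which goes a long way toward explaining the 43 GB usage.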
I wrote a tool on embeddings with Local LLM and langchain
1
[removed]
2023-05-01T00:43:41
https://www.reddit.com/r/LocalLLaMA/comments/13481o5/i_wrote_a_tool_on_embeddings_with_local_llm_and/
Haunting_Ad_5900
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13481o5
false
null
t3_13481o5
/r/LocalLLaMA/comments/13481o5/i_wrote_a_tool_on_embeddings_with_local_llm_and/
false
false
default
1
null
[deleted by user]
1
[removed]
2023-05-01T00:38:25
[deleted]
1970-01-01T00:00:00
0
{}
1347xgc
false
null
t3_1347xgc
/r/LocalLLaMA/comments/1347xgc/deleted_by_user/
false
false
default
1
null
What model parameters is everyone using?
26
I have been having a ball with the plethora of models that have been released. Currently using llama.cpp on my CPU, hopefully to be utilizing a GPU soon. But the main question I have is: what parameters are you all using? I have found the reference information for transformer models on HuggingFace, but I've yet to find other people's parameters that they have used. I've adjusted top_k, top_p, and temperature so far. Reference: Parameters for manipulation of the model output logits - [https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig)
2023-04-30T21:18:49
https://www.reddit.com/r/LocalLLaMA/comments/1343bgz/what_model_parameters_is_everyone_using/
TurdPuller
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1343bgz
false
null
t3_1343bgz
/r/LocalLLaMA/comments/1343bgz/what_model_parameters_is_everyone_using/
false
false
self
26
{'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]}
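For readers unsure what the three knobs in the post above actually do, here is a minimal pure-Python sketch of temperature plus top-k plus top-p (nucleus) sampling. Real implementations differ in detail (e.g. exact tie-breaking and filtering order), so treat this as an illustration of the idea, not the reference algorithm:

```python
import math
import random

def sample(logits, temperature=0.7, top_k=40, top_p=0.95, rng=random):
    """Pick a token id: scale by temperature, keep top-k, then top-p, then sample."""
    scaled = [l / temperature for l in logits]      # lower temp = sharper dist
    m = max(scaled)
    probs = [math.exp(s - m) for s in scaled]       # stable softmax
    total = sum(probs)
    probs = [p / total for p in probs]
    # top-k: keep only the k most probable token ids
    order = sorted(range(len(probs)), key=lambda i: -probs[i])[:top_k]
    # top-p: keep the smallest prefix whose cumulative mass reaches top_p
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    weights = [probs[i] for i in kept]
    return rng.choices(kept, weights=weights, k=1)[0]

random.seed(0)
token = sample([2.0, 1.0, 0.1, -1.0])
```

Lower temperature and smaller top_p both narrow the candidate set, which trades creativity for consistency; a strongly peaked distribution collapses to greedy choice regardless of the other settings.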
Why is it that when I run 8 bit quantization it actually does 4 bit?
1
[removed]
2023-04-30T20:25:03
https://www.reddit.com/r/LocalLLaMA/comments/13420x0/why_is_it_when_i_run_8_bit_quantization_it/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13420x0
false
null
t3_13420x0
/r/LocalLLaMA/comments/13420x0/why_is_it_when_i_run_8_bit_quantization_it/
false
false
default
1
null
How do I install Vicuna to be used in notebooks?
7
All of the documentation I'm seeing lets you use open-sourced llama models via the command line or a visual interface. I just want to use it in a notebook where I can feed it an array of text, an array of prompts, and just get an array of outputs. Is there any guide to this?
2023-04-30T20:20:31
https://www.reddit.com/r/LocalLLaMA/comments/1341x0c/how_do_i_install_vicuna_to_be_used_in_notebooks/
somethingstrang
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1341x0c
false
null
t3_1341x0c
/r/LocalLLaMA/comments/1341x0c/how_do_i_install_vicuna_to_be_used_in_notebooks/
false
false
self
7
null
Model file is in floating point and does not work.
1
[removed]
2023-04-30T20:14:40
https://www.reddit.com/r/LocalLLaMA/comments/1341ryz/model_file_is_floating_point_does_not_work/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1341ryz
false
null
t3_1341ryz
/r/LocalLLaMA/comments/1341ryz/model_file_is_floating_point_does_not_work/
false
false
default
1
null
Please convert Pygmalion 7b's to ggml
5
Dear friends! Could someone please convert these newest models to ggml? [https://huggingface.co/PygmalionAI](https://huggingface.co/PygmalionAI) It would be much appreciated <3
2023-04-30T19:49:48
https://www.reddit.com/r/LocalLLaMA/comments/13416mc/please_convert_pygmalion_7bs_to_ggml/
Away-Sleep-2010
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13416mc
false
null
t3_13416mc
/r/LocalLLaMA/comments/13416mc/please_convert_pygmalion_7bs_to_ggml/
false
false
self
5
{'enabled': False, 'images': [{'id': 'zqK1Ia5Kv1Th0cm-y__02PwCOZzubMO4uDhnoFv6ki8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=108&crop=smart&auto=webp&s=55421c8f4b51c76499322288cf539fb5806559b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=216&crop=smart&auto=webp&s=a8960a492a850153e97d5a75abf465f61f37f476', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=320&crop=smart&auto=webp&s=98f3a6abd4a676cb08ca50663874a03e8d882b35', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=640&crop=smart&auto=webp&s=0544fa7ebc82e338af726adefca6d8b60ffeab20', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=960&crop=smart&auto=webp&s=afb7013b01f3919bdd1cdff817f8aa66d6a4d6a5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?width=1080&crop=smart&auto=webp&s=3025d940d322e1c7794d1a0016189de929855472', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/s19WAfuwCXeZf8h2c2XfJk0rMkgzdarb61pyrf02b-o.jpg?auto=webp&s=5d3766f7d6e6e341be9a2c6e5a122ae26016ffb9', 'width': 1200}, 'variants': {}}]}
What do the llama run flags like 128, 256, 512 do?
1
[removed]
2023-04-30T19:46:07
https://www.reddit.com/r/LocalLLaMA/comments/13413ma/what_does_the_llama_run_flags_like_128_256_512_do/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13413ma
false
null
t3_13413ma
/r/LocalLLaMA/comments/13413ma/what_does_the_llama_run_flags_like_128_256_512_do/
false
false
default
1
null
65B model: help with how to run it, thanks
1
[removed]
2023-04-30T18:53:52
https://www.reddit.com/r/LocalLLaMA/comments/133zv7f/65b_model_help_how_to_run_thx/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133zv7f
false
null
t3_133zv7f
/r/LocalLLaMA/comments/133zv7f/65b_model_help_how_to_run_thx/
false
false
default
1
null
Llama + search your own documents
0
Hi. Could you please tell me whether anyone has, in practice, managed to link open-source models like LLaMA/Alpaca/Vicuna to LlamaIndex (or any alternative) for searching and answering questions over their own documents/knowledge base?
2023-04-30T17:19:37
https://www.reddit.com/r/LocalLLaMA/comments/133xl6w/llama_search_your_own_documents/
One_Creator_One
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133xl6w
false
null
t3_133xl6w
/r/LocalLLaMA/comments/133xl6w/llama_search_your_own_documents/
false
false
self
0
null
LLaMA for poor
27
As the title says: is there any LLaMA for people who can't afford 50-100 GB of RAM or lots of VRAM? Yes, smaller 7B 4-bit quantized models are available, but they are obviously not that good compared to bigger and better models. I'm not sure whether this exists, or whether it's even possible, but if inference time were not an issue (say, even 5-10 seconds per token), is it possible to run a big model like 33B or 65B on a device with 16 GB of RAM plus swap, loading smaller parts of the model into memory to calculate, and so on? Would it be very slow? Yes. But is it possible? I know this might be a long shot, maybe even impossible, but I'm curious whether there is some way to load the model in parts for inference, even if it's very slow. If anything like this exists, or if it's possible with some configuration, please let me know.
2023-04-30T16:36:44
https://www.reddit.com/r/LocalLLaMA/comments/133wk3s/llama_for_poor/
pratiknarola
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133wk3s
false
null
t3_133wk3s
/r/LocalLLaMA/comments/133wk3s/llama_for_poor/
false
false
self
27
null
I recently tested the "MPT 1b RedPajama + dolly" model and was pleasantly surprised by its overall quality despite its small size. Could someone help convert it to llama.cpp CPU ggml q4?
62
I've put [this model](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) to a brief test and it's impressively speedy, with pretty decent text coherence. The model was pre-trained for 200B tokens by sampling from the subsets of the RedPajama dataset in the same proportions as were used by the Llama series of models. [Colab to try the model](https://colab.research.google.com/drive/19YGJ-eDe2Wm17hc9hLwobckHZjFB8lo5?usp=sharing) (GPU mode)|[Test Questions Source](https://github.com/aigoopy/llm-jeopardy/blob/main/questions.txt) **Pros:** * The model has a [cc-by-sa-3.0](https://creativecommons.org/licenses/by-sa/3.0/) license, which is suitable for commercial use. * It generates very coherent text out of the box. * BLAZING FAST **Cons:** * It is small (only 1B), which is why it is weak on facts. * To improve text generation, it's important to use strict prompt engineering. For tasks, use "`instruction: %prompt%`", and for questions, use "`Q: %question%`" or an instruction pre-prompt. Always end your prompts with "`. Response:`". This is simply an observation on best practices for text generation. # Examples: Q&A Question answering vs. GPT 3.5; settings used: default (no repetition penalty, etc.) **Q: Which man born in 1932 was the son of a percussionist in the CBS radio orchestra has been nominated for 53 Oscars?** 1B: >A: James Stewart ChatGPT 3.5: >The man you are referring to is composer and conductor John Williams. He was born on February 8, 1932, in New York City, and his father played percussion for the CBS Radio Orchestra. Williams has been nominated for a total of 53 Academy Awards throughout his career, making him the second-most nominated individual in Oscar history, behind only Walt Disney. He has won five Oscars for his work on films such as "Jaws," "Star Wars," and "Schindler's List." 
**Q: Known for more philosophical works, he wrote the play 'La Mandragola', in which Florentines are rewarded for immoral actions?** 1B: >A: The play 'La Mandragola' is a play written by the Italian writer Luigi Pirandello. It was first performed in Florence in 1921. The play is about a man who is punished for immoral actions. The play ChatGPT 3.5: >The Italian philosopher and writer Niccolò Machiavelli is the author of the play 'La Mandragola'. While he is best known for his philosophical works, such as 'The Prince' and 'Discourses on Livy', 'La Mandragola' is a satirical comedy that was first performed in 1520. The play is set in Renaissance Florence and tells the story of a young man named Callimaco who falls in love with a beautiful and virtuous woman named Lucrezia. To win her over, Callimaco and a group of corrupt characters plot to deceive Lucrezia's husband and family, leading to a series of immoral actions that are ultimately rewarded. Machiavelli's play is a commentary on the corrupt state of society in his time and the ways in which people are often rewarded for dishonesty and immorality. &#x200B; **Q: Which nobel peace price winners each lived at times on Vilakazi St in Soweto, so it claims to be the worlds only street home to 2 Nobel Peace Prize winners?** 1B: >A: Both Nelson Mandela and Desmond Tutu lived on Vilakazi Street in Soweto. ChatGPT 3.5: >Vilakazi Street in Soweto, a township near Johannesburg, South Africa, is claimed to be the only street in the world where two Nobel Peace Prize winners have lived at different times. The two Nobel laureates who lived on Vilakazi Street are: > > > >Nelson Mandela - South Africa's first democratically elected President, who was awarded the Nobel Peace Prize in 1993 for his role in ending apartheid and promoting peaceful reconciliation in the country. Mandela lived on Vilakazi Street with his family in the 1940s and 1950s. 
> >Archbishop Desmond Tutu - a prominent anti-apartheid activist and Anglican bishop who was awarded the Nobel Peace Prize in 1984 for his nonviolent opposition to South Africa's racist policies. Tutu lived on Vilakazi Street with his family in the 1960s. > >Today, both of their former homes have been converted into museums and are popular tourist attractions in Soweto. **Q: In 1966, the year of who's death did he share plans for an experimental prototype community in Florida?** 1B: >A: The year was 1966. The community was called "The Village" and was located in the town of Lake Wales, Florida. The community was designed to be a self-sufficient, self-sustaining, and self-sufficient community ChatGPT 3.5: >In 1966, the year of Walt Disney's death, he shared plans for an experimental prototype community in Florida that would become known as EPCOT (Experimental Prototype Community of Tomorrow). Disney's vision for EPCOT was a futuristic city that would serve as a model for urban living, with advanced technologies, sustainable systems, and a focus on community and innovation. However, after Disney's death, plans for EPCOT shifted to become a theme park that would showcase technological advancements and international cultures. Today, EPCOT is one of four theme parks at Walt Disney World in Florida, but it still retains some of Disney's original vision for a forward-thinking and innovative community. **Q: Of the 13 nations through which the Equator passes, what is the only one whose coastline borders the Caribbean Sea?** 1B: >A: The United States. ChatGPT 3.5: >The 13 countries through which the Equator passes are: > > > >Sao Tome and Principe > >Gabon > >Republic of Congo > >Democratic Republic of Congo > >Uganda > >Kenya > >Somalia > >Indonesia > >Kiribati > >Ecuador > >Colombia > >Brazil > >The territory of Ecuador (Galapagos Islands) > >Among these 13 countries, the only one whose coastline borders the Caribbean Sea is Colombia. 
&#x200B; **Q: Which decorative items in fashion history get their name from their origin in the port city of Strasbourg, on the border of France & Germany?** 1B: >A: The name of the famous French fashion house Chanel comes from the city of Strasbourg, where the founder of the house, Gabrielle Chanel, was born. ChatGPT 3.5: >The decorative items in fashion history that get their name from their origin in the port city of Strasbourg are "Strasbourg ribbons" or "Strasbourg lace." Strasbourg ribbons are a type of narrow silk ribbon that was often used to trim clothing, while Strasbourg lace refers to a delicate type of lace that was originally made in the city of Strasbourg. Both of these decorative items were highly prized in the 18th and 19th centuries and were used to add a touch of elegance and sophistication to clothing and other textile items. &#x200B; # Examples: Story Telling 1B model only, settings: `repetition_penalty=1.5, num_tokens_to_generate = 100` **Prompt:** instruction: Write a fantasy story about LOTR. Response: I wrote a fantasy story about LOTR. It is called "The Lord of the Rings: The Battle of the Five Armies." It is a sequel to the first movie, "The Lord of the Rings: The Fellowship of the Ring." The story is about the last battle of the war between the forces of good and evil. &#x200B; **Prompt:** instruction: Write me a list of the top 10 dad jokes. Response: 1) Dad is so old he can't remember his own name, 2.) He's always late for work because it takes him an hour to get there and 3,) His wife doesn’t know how many kids they have or where their next meal will come from! **Prompt:** instruction: Write a scifi story. Response: I am writing about an alien race that has been in contact with humans for thousands of years, and they have developed technology to travel through space using the wormhole effect (which is also used by other species). 
The aliens are very advanced technologically but their society seems based on feudalism where all members work together as equals under one leader who rules over them from afar via holograms or some sort... The Story Of A Man Who Was Born With No Brain - Chapter 1 **Prompt:** instruction: Write a psychological profile of an average Reddit moderator, focusing on their characteristics and behaviors. Response: The typical reddit user is young (18-35), male or female with college education who has been active for at least 6 months in the past year; they are generally well educated but not necessarily highly skilled professionals from academia/research institutions as most moderators have no formal training to moderate content related discussions online &#x200B; # Conclusions I recall the text generation era of GPT-2, and the improvement in quality is remarkable. The model is feasible for commercial use, and its speed could be harnessed by "chaining" tools such as AutoGPT for specific tasks, even on CPU only. I am hopeful that someone can help add llama.cpp support.
2023-04-30T15:51:24
https://www.reddit.com/r/LocalLLaMA/comments/133vfk3/i_recently_tested_the_mpt_1b_redpajama_dolly/
Shir_man
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133vfk3
false
null
t3_133vfk3
/r/LocalLLaMA/comments/133vfk3/i_recently_tested_the_mpt_1b_redpajama_dolly/
false
false
self
62
{'enabled': False, 'images': [{'id': 'UKyZG43CUXRYhIK3U7Ytqf0673rIVHWRV48OM89kibY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=108&crop=smart&auto=webp&s=b61d2d6f28fe760543d86c9e78883d566d95f1de', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=216&crop=smart&auto=webp&s=52017a5cdaa1e892f648dd7bf39815e86bfd41a3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=320&crop=smart&auto=webp&s=7e9675cf20084c8cb105ea7a6f9c68561734594a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=640&crop=smart&auto=webp&s=4112457e44ace347e4ac10fe18788ae091df8d41', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=960&crop=smart&auto=webp&s=5b31b32da666a18cced44e96c94ba84e8cca320f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?width=1080&crop=smart&auto=webp&s=763542423e72cc7821f6b6cdcb0e1d5dab0c5ff8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/kq_djbCnrTTByU_q4zX_Ou7GjTMU6eVLaSP96SmBN6Y.jpg?auto=webp&s=b9ad53ff1e48beeebe393f33fe48dcbf60385f81', 'width': 1200}, 'variants': {}}]}
Created a discord bot that understand images (based on MiniGPT-4 Vicuna model)
30
2023-04-30T15:25:22
https://www.reddit.com/gallery/133us6p
chain-77
reddit.com
1970-01-01T00:00:00
0
{}
133us6p
false
null
t3_133us6p
/r/LocalLLaMA/comments/133us6p/created_a_discord_bot_that_understand_images/
false
false
https://a.thumbs.redditm…Q45X39k3Qjf8.jpg
30
null
On RP and lore
6
After the new Pygmalion trained off LLaMA just dropped, and a post testing CAI's AI made me test it for myself, I wondered about something. Aside from just more quality/varied conversation and roleplay logs turned into datasets, it might help a lot if the model also had knowledge of fictional worlds and lore to go with it, maybe scraped from the wikis (or even community sites) of all these movies, series, video games, etc. into one large fictional-world/lore dataset. So I'm wondering: has this already been done? Is it even practical (it might end up mashing these universes together or further confusing the AI)? Possible? Or legal (privacy, or even the right to such data)? Or would this already be part of the wiki portion of the datasets normally used (not just Wikipedia but all the other wikis included)? Of course, if it hasn't been done, I imagine it could be quite an undertaking, but would it be worth doing? BTW, just asking out of curiosity mostly.
2023-04-30T14:38:07
https://www.reddit.com/r/LocalLLaMA/comments/133tlxg/on_rp_and_lore/
reduserGf
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133tlxg
false
null
t3_133tlxg
/r/LocalLLaMA/comments/133tlxg/on_rp_and_lore/
false
false
self
6
null
Can't determine model name error
1
Hello, I recently wanted to download the new WizardLM model. Unfortunately, each time I got errors so complicated that I wanted to just stop trying. I found what works for me (the one-click installer), and after I managed to install the web UI, I wanted to download WizardLM. Of course, with my luck, it didn't work, and I got the stupidest error: "Can't determine model name." I researched and researched, and every time I stumbled across this error, everyone else had fixed it by just removing the --model-menu line in the start-webui script and selecting the model manually in the UI itself. I'm clueless. Please - HELP!
2023-04-30T11:59:44
https://www.reddit.com/r/LocalLLaMA/comments/133nb7o/cant_determine_model_name_error/
Yoad1704
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133nb7o
false
null
t3_133nb7o
/r/LocalLLaMA/comments/133nb7o/cant_determine_model_name_error/
false
false
self
1
null
Llama.cpp New Samplers?
15
https://github.com/ggerganov/llama.cpp/commit/dd7eff57d8491792010b1002b8de6a4b54912e5c There's basically no information about what these "new samplers" do or what they are for. Does anyone know?
2023-04-30T09:49:22
https://www.reddit.com/r/LocalLLaMA/comments/133kzb8/llamacpp_new_samplers/
Pan000
self.LocalLLaMA
2023-04-30T11:01:00
0
{}
133kzb8
false
null
t3_133kzb8
/r/LocalLLaMA/comments/133kzb8/llamacpp_new_samplers/
false
false
self
15
{'enabled': False, 'images': [{'id': 'z8DM78AUCrzaB7Azpe5LcJn2fLcN7b6aNRCVrI2ItI4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=108&crop=smart&auto=webp&s=e832a89cd717934f1c661ab7a378ebab181a7507', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=216&crop=smart&auto=webp&s=9477857d23484eee1618cdec9982eb4cc5f26300', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=320&crop=smart&auto=webp&s=0c4254c02693b5dc990aa56d9455fd64e78b986c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=640&crop=smart&auto=webp&s=397b0b22033a5e9c318051afb1a125e2d295093f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=960&crop=smart&auto=webp&s=585464f536032685dde2da1d0337f160c72d4c2f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?width=1080&crop=smart&auto=webp&s=47a0c6e10f4e5464e3382dc45bcc0d2ec581ba30', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jqkRQGkCdzC1jXrzvlwCCw9nzPqwDlD15obCBxFT_nI.jpg?auto=webp&s=e5dd12c2b3f57439cb7d6ed7740de43ae666814a', 'width': 1200}, 'variants': {}}]}
Run Llama to ask questions about your files
52
I've seen a bunch of projects and new companies based on building chats that answer questions about files, websites, or documentation. Do you know whether there is any project doing so without using the ChatGPT API? I built this project to do exactly that (no langchain, no pinecone, no bloated over-engineering): [https://github.com/adriacabeza/erudito](https://github.com/adriacabeza/erudito). Give it a look if you want to. I'd like to know if somebody else has tried this and, if so, which model you think would be the best to use. As long as it runs in llama.cpp, it should be trivial to swap models using the tool I've built.
2023-04-30T07:19:50
https://www.reddit.com/r/LocalLLaMA/comments/133ij7r/run_llama_to_ask_questions_about_your_files/
adriacabeza
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
133ij7r
false
null
t3_133ij7r
/r/LocalLLaMA/comments/133ij7r/run_llama_to_ask_questions_about_your_files/
false
false
self
52
{'enabled': False, 'images': [{'id': 'NLrROSNE4kiiGHWM0W3hxo2G0Ajwg0P6maJ2XKh8AK8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=108&crop=smart&auto=webp&s=b88a6023b943e80f52c27d376a8a5db1bac28e6a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=216&crop=smart&auto=webp&s=81e95800ac7891a1d64fa4350180de33a1dc531c', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=320&crop=smart&auto=webp&s=210a437b5d83a1d4034b3e4402144efcdb4a5e35', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?width=640&crop=smart&auto=webp&s=56e1d7ad33171a7e374fd5cfb84786cb1aaa615b', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/ia-3UDe0hCoiLMn727PeQtdLZCGFgjXMG73pHYSpMXA.jpg?auto=webp&s=e2919192717a4ab6e896eeecee5e07a9d72eb89b', 'width': 768}, 'variants': {}}]}
AMD Taunts NVIDIA for Expensive VRAM: A Win-Win Situation for LLM Enthusiasts
37
2023-04-30T04:09:21
https://www.tomshardware.com/news/amd-brags-about-cheaper-16gb-gpus
friedrichvonschiller
tomshardware.com
1970-01-01T00:00:00
0
{}
133fejy
false
null
t3_133fejy
/r/LocalLLaMA/comments/133fejy/amd_taunts_nvidia_for_expensive_vram_a_winwin/
false
false
https://b.thumbs.redditm…IUNQWUKIZ0hM.jpg
37
{'enabled': False, 'images': [{'id': 'm_97yI4HbmOtgeZk_hcqrRsVBHfiRoGY71cCosJKYIo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=108&crop=smart&auto=webp&s=d98b0f642505302dd55a089ec4ef2828cb3b6f73', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=216&crop=smart&auto=webp&s=6315f1c4315346b3262b8ace8302f98caeacdbaa', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=320&crop=smart&auto=webp&s=47c6eae31947104133a9dccc30b294ea5a590464', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=640&crop=smart&auto=webp&s=7dc36e1b6454f4e5b1907b6f951f34e2ee607aa3', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=960&crop=smart&auto=webp&s=1ab582f9e2e16c618762b55a0df6d5b3d5ae69ce', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?width=1080&crop=smart&auto=webp&s=41f31e25dc49e1ed63818bc3ed40a725ac8b8233', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/PgAjJx3QommXkQhg8YYGvnPBQAQM5_e5xD5toCJjxIk.jpg?auto=webp&s=d1e317e8051b73a1346e4cdcd7105ac3b1a8dd9e', 'width': 1200}, 'variants': {}}]}
Pygmalion releases two new LLaMA based models: Pygmalion 7B and the roleplay oriented Metharme 7B. These are major improvements over the old Pygmalion models.
144
2023-04-30T03:14:33
https://huggingface.co/PygmalionAI/pygmalion-7b
Creative-Rest-2112
huggingface.co
1970-01-01T00:00:00
0
{}
133efqp
false
null
t3_133efqp
/r/LocalLLaMA/comments/133efqp/pygmalion_releases_two_new_llama_based_models/
false
false
default
144
null
Training a llama Lora with a .parquet file
3
I downloaded a parquet file that was intended to be used to train a LoRA for LLaMA, and I'm running the oobabooga web UI, but I can't figure out how to use the parquet file as the training dataset. Can anyone point me in the right direction?
2023-04-29T22:52:34
https://www.reddit.com/r/LocalLLaMA/comments/13395ip/training_a_llama_lora_with_a_parquet_file/
xoexohexox
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13395ip
false
null
t3_13395ip
/r/LocalLLaMA/comments/13395ip/training_a_llama_lora_with_a_parquet_file/
false
false
self
3
null
Can't determine the name of the model.
0
[removed]
2023-04-29T21:18:35
https://www.reddit.com/r/LocalLLaMA/comments/13370i3/cant_deternine_the_name_of_the_model/
Yoad1704
self.LocalLLaMA
2023-04-29T21:22:10
0
{}
13370i3
false
null
t3_13370i3
/r/LocalLLaMA/comments/13370i3/cant_deternine_the_name_of_the_model/
false
false
default
0
null
Which model && prompts best for RCAs
2
Hi, Hope this is OK to post here. I'm wondering if anyone has tried using any models to generate RCAs (root cause analyses)? I've been having a lot of hit-or-miss success. For prompts, I'll usually use something like: You work for company X. There was an incident; below is the timeline: - 04/12/2023 9:00 PM: notification triggers that service X is not functioning, investigation begins - 04/12/2023 9:05 PM: team discovers the culprit is DNS - 04/12/2023 9:06 PM: issue is rectified and alert clears Please write a root cause analysis with an executive summary. Assume all times in the timeline are UTC.
2023-04-29T19:42:09
https://www.reddit.com/r/LocalLLaMA/comments/1334s2o/which_model_prompts_best_for_rcas/
mrsxypants
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1334s2o
false
null
t3_1334s2o
/r/LocalLLaMA/comments/1334s2o/which_model_prompts_best_for_rcas/
false
false
self
2
null
Benchmarks for Recent LLMs
13
Does anyone know of any updated benchmarks for LLMs? I only know of one, and it's not updated - [https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W\_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit#gid=741531996](https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit#gid=741531996). I think this spreadsheet was possibly made using this tool [https://github.com/EleutherAI/lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and the language-task datasets available there. It would be nice if there were benchmarks for recently released LLMs, but the spreadsheet is view-only and does not allow community edits. Would such benchmarks be helpful for you? What is your favorite open-source LLM so far, and for which task?
2023-04-29T18:44:00
https://www.reddit.com/r/LocalLLaMA/comments/1333exw/benchmarks_for_recent_llms/
PostScarcityHumanity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1333exw
false
null
t3_1333exw
/r/LocalLLaMA/comments/1333exw/benchmarks_for_recent_llms/
false
false
self
13
{'enabled': False, 'images': [{'id': 'LU06WWUDw0igwGsy4HzNT-3sWxJRqydpQKWaLpNGWp0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=108&crop=smart&auto=webp&s=d2c0f843f5b26a1cbacc4a59c9131b2917cadb5a', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=216&crop=smart&auto=webp&s=cc3f78109f55997c57a82bc592c847b39d1c15d9', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=320&crop=smart&auto=webp&s=b046eb69ebb410b5bf6fc49c5f5be8f39ed93195', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=640&crop=smart&auto=webp&s=4e10ca46df3cb4f08f5962db444cd17ece2f7840', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=960&crop=smart&auto=webp&s=ab27b102db5c934e7c7d007001ad81cb3380d574', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?width=1080&crop=smart&auto=webp&s=a65bcd454f5e730773ba89a20217c6d8c364d650', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/f9Q8YLhqryWQzC4WwtXB81yJNQNC5vsGHBrGzaOFbV0.jpg?auto=webp&s=d806301d9d8c85dbe874fae9ed723529a47b9457', 'width': 1200}, 'variants': {}}]}
Intel's integrated UHD graphics can use system RAM as VRAM. Any experiences?
22
I was thinking that, even though an integrated GPU is much slower than, say, a 4090, it's still supposed to be faster than the CPU, right? With the ability to use RAM as VRAM (albeit RAM itself being slower than VRAM), you can theoretically have a GPU with an insane amount of memory. I wonder if there is a noticeable improvement over pure CPU runs. CUDA does not run natively on Intel GPUs; however, there are [projects](https://github.com/vosen/ZLUDA) that may make this happen. If this works, that'd be interesting.
2023-04-29T17:41:38
https://www.reddit.com/r/LocalLLaMA/comments/1331yhq/intels_integrated_uhd_graphics_can_use_system_ram/
x54675788
self.LocalLLaMA
2023-04-29T21:06:46
0
{}
1331yhq
false
null
t3_1331yhq
/r/LocalLLaMA/comments/1331yhq/intels_integrated_uhd_graphics_can_use_system_ram/
false
false
self
22
{'enabled': False, 'images': [{'id': 'wOf2PEkZk-7KcNVCoiFZ9FQquSrcn9wcSnTq6S5pqd8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=108&crop=smart&auto=webp&s=743f0873cea29624bbae45933a6190e2757840d6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=216&crop=smart&auto=webp&s=75e607f7ca6239434855eb1268bbdc50562cd476', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=320&crop=smart&auto=webp&s=92fb284d78acd463f728fc82c75cc99b81954270', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=640&crop=smart&auto=webp&s=4c9d4b0e065e281775025947c01b98d2ebeeb201', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=960&crop=smart&auto=webp&s=c1dc27f178c92822e73a58f92b23dbe5e3283fe9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?width=1080&crop=smart&auto=webp&s=06b2e3e6fcdff17748a771c009bc6b1533503401', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_QPhhOYnS-2bCQaZxGRJwHMotp1ZoYcwP76bxFjkKRE.jpg?auto=webp&s=6a0784e2b157ddc336bed9ccfa613e475fb2c7d2', 'width': 1200}, 'variants': {}}]}
Have you guys seen the video published by Palantir? They are using Language models in Defense and the Military with real-time data.
54
They say " it'll be totally ethical ". The list of language models used in the video is shown in 7.02 minutes. Video [https://www.youtube.com/watch?v=XEM5qz\_\_HOU](https://www.youtube.com/watch?v=XEM5qz__HOU) News Article: [https://www.vice.com/en/article/qjvb4x/palantir-demos-ai-to-fight-wars-but-says-it-will-be-totally-ethical-dont-worry-about-it](https://www.vice.com/en/article/qjvb4x/palantir-demos-ai-to-fight-wars-but-says-it-will-be-totally-ethical-dont-worry-about-it)
2023-04-29T16:23:57
https://www.reddit.com/r/LocalLLaMA/comments/13302oa/have_you_guys_seen_the_video_published_by/
tamal4444
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13302oa
false
null
t3_13302oa
/r/LocalLLaMA/comments/13302oa/have_you_guys_seen_the_video_published_by/
false
false
self
54
{'enabled': False, 'images': [{'id': 'yd2784waBkm0YRDhHEJWwYSqmlidESL43KgAgEkaFxc', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=108&crop=smart&auto=webp&s=8140624333fd9865cf698bb3a8c74ca6bca72199', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=216&crop=smart&auto=webp&s=11e2f91d5611b3c02217468198bfdb1fde68e081', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?width=320&crop=smart&auto=webp&s=d4864ec183c679b5850917f3c9067e9c91065d84', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/onDAIr40kdnAiE5Pt4-j1W2exxnM7bnclfa8MuRCPVk.jpg?auto=webp&s=c87bf032e0176597fc0a0f727afe647dec7f0249', 'width': 480}, 'variants': {}}]}
Bitsandbytes and Torch compile errors
3
Hi all, I want to preface this help post with the fact that I have no experience with coding or the command line. I just wanted to try a supposedly local ChatGPT for fun. I used the one-click install method found on the web UI's GitHub page. That being said, I have been getting these two errors: "The installed version of bitsandbytes was compiled without GPU support." and "AssertionError: Torch not compiled with CUDA enabled". After pressing a key, the command-line window just closes. I have tried searching as best as I can for fixes, but it is all too complicated, and I am afraid of messing the issue up even more. I am on a Windows 10 system with an Nvidia card. As far as I know, I have installed VSCode 2019, and the miniconda software is in the downloads folder. I have installed the Vicuna 7B 1.1 model from the model list in the subreddit's wiki. Some solutions talk about launching conda, 'pip', and using the Python command line, but I have no idea how to even launch those. I really appreciate any help, thanks a lot!
2023-04-29T14:50:14
https://www.reddit.com/r/LocalLLaMA/comments/132w31q/bitsandbytes_and_torch_compile_errors/
lapse23
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132w31q
false
null
t3_132w31q
/r/LocalLLaMA/comments/132w31q/bitsandbytes_and_torch_compile_errors/
false
false
self
3
null
How do I merge a LoRA with a LLaMA checkpoint?
12
Hi! I can launch a base model on my PC just fine, but when I try to load a LoRA alongside it, it OOMs. So I'm looking for a way to merge a LoRA into the checkpoint file beforehand. Looking at https://www.reddit.com/r/LocalLLaMA/wiki/models/, I see a lot of merges, but I wonder how they were done. I've searched for 'merge lora with checkpoint' in oobabooga, generally on the web with 'llama merge lora with checkpoint', and in those models' discussions on HF, yet I haven't found anything. Does anybody here know how it is done? I would really prefer to do it in HuggingFace format (the merges in the wiki/models section are in HF) because of its greater support and convertibility to the needed formats.
2023-04-29T12:14:01
https://www.reddit.com/r/LocalLLaMA/comments/132rc26/how_do_i_merge_a_lora_with_a_llama_checkpoint/
kabachuha
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132rc26
false
null
t3_132rc26
/r/LocalLLaMA/comments/132rc26/how_do_i_merge_a_lora_with_a_llama_checkpoint/
false
false
self
12
null
Ensemble or cascade models?
4
I'm wondering whether there are any projects working on cascade or ensemble models that run inference in parallel? Google published an interesting paper on this and its potential a couple of years ago: https://ai.googleblog.com/2021/11/model-ensembles-are-faster-than-you.html?m=1
2023-04-29T10:44:53
https://www.reddit.com/r/LocalLLaMA/comments/132pphx/ensemble_or_cascade_models/
PookaMacPhellimen
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132pphx
false
null
t3_132pphx
/r/LocalLLaMA/comments/132pphx/ensemble_or_cascade_models/
false
false
self
4
{'enabled': False, 'images': [{'id': 'lLO6b8jLMIgNbM0OZMpN_7reL_DisUnjnjDy4mEGKo0', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=108&crop=smart&auto=webp&s=8b882cff4add09747badd77674ad6f36c2bc0bb6', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=216&crop=smart&auto=webp&s=5d5e616e7d6f10ae73a1059cbca4e2c03682dd06', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=320&crop=smart&auto=webp&s=aab885a4061dca2e2e600cbd4f2d720bd3b268b5', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=640&crop=smart&auto=webp&s=9b75efa2734f7aa30752f0a4e37d50a7dbd8f1ea', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=960&crop=smart&auto=webp&s=e8e2610c11dcc3453f86f1699c24a2d71cef78ee', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?width=1080&crop=smart&auto=webp&s=7a51917d214db07d476563b3af82bf41a5df5e25', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/rXErvF31ozcvGVOcPrKCtTxI1HxE6-UNzeuMOihRk2E.jpg?auto=webp&s=15c9838dc746735ed0132938bbc6a798eab82c8f', 'width': 1200}, 'variants': {}}]}
Where do I download the weights for the 65B non-quantized model? Thx
9
title
2023-04-29T09:31:28
https://www.reddit.com/r/LocalLLaMA/comments/132oerx/where_do_i_download_the_weights_for_the_65b_non/
delsystem32exe
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132oerx
false
null
t3_132oerx
/r/LocalLLaMA/comments/132oerx/where_do_i_download_the_weights_for_the_65b_non/
false
false
self
9
null
GPT4ALL on Windows (without admin rights)
6
I tried to run the model from https://github.com/nomic-ai/gpt4all using the “CPU Interface” on my Windows machine. I get an error that “Current binaries supported are x86 Linux and ARM Macs.” I read that the workaround is to install WSL (Windows Subsystem for Linux) on my Windows machine, but I’m not allowed to do that on my work machine (admin locked). Are there other open source chat LLM models that can be downloaded and run locally on a Windows machine, using only Python and its packages, without having to install WSL or nodejs or anything that requires admin rights?
2023-04-29T04:05:56
https://www.reddit.com/r/LocalLLaMA/comments/132ir4l/gpt4all_on_windows_without_admin_rights/
kayhai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132ir4l
false
null
t3_132ir4l
/r/LocalLLaMA/comments/132ir4l/gpt4all_on_windows_without_admin_rights/
false
false
self
6
{'enabled': False, 'images': [{'id': '48TKOYe3D-RZToTCRYNR1wv9MTcEYJMkNgNEQLAh_sY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=108&crop=smart&auto=webp&s=5e38e7a2b21712797c68483f0e3de65fabaf800f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=216&crop=smart&auto=webp&s=9a6074866bfd7fcff280953ef0db28d0206b297c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=320&crop=smart&auto=webp&s=8d0b5cea7e658c9fc58e92a5fef25005c13f9938', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=640&crop=smart&auto=webp&s=b77cede5f579a6b41cd3da233de52e07d0baa150', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=960&crop=smart&auto=webp&s=6880668855dd681e6b9a6128907dc57e914c221c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?width=1080&crop=smart&auto=webp&s=663b554353dd1a0df4dee956ed92bdcd024f9ae0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1_IiOzFrEWjVqcT2FMLFH9HdovYA2J2Reiyj68nvZaI.jpg?auto=webp&s=aa93bea36a801cf76900888286e4ecbec7d044c6', 'width': 1200}, 'variants': {}}]}
[Project] MLC LLM: Universal LLM Deployment with GPU Acceleration
100
MLC LLM is a **universal solution** that allows **any language model** to be **deployed natively** on a diverse set of hardware backends and native applications, plus a **productive framework** for everyone to further optimize model performance for their own use cases. Supported platforms include: * Metal GPUs on iPhone and Intel/ARM MacBooks; * AMD and NVIDIA GPUs via Vulkan on Windows and Linux; * NVIDIA GPUs via CUDA on Windows and Linux; * WebGPU on browsers (through the companion project WebLLM). Github page: [https://github.com/mlc-ai/mlc-llm](https://github.com/mlc-ai/mlc-llm) Demo instructions: [https://mlc.ai/mlc-llm/](https://mlc.ai/mlc-llm/)
2023-04-29T03:49:49
https://www.reddit.com/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/
crowwork
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132igcy
false
null
t3_132igcy
/r/LocalLLaMA/comments/132igcy/project_mlc_llm_universal_llm_deployment_with_gpu/
false
false
self
100
{'enabled': False, 'images': [{'id': 'ko_861QAjmN5QGLcpcjhOGoZ6wMhzLn7bMgUNveOKzs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=108&crop=smart&auto=webp&s=523c376d1fa069429a7d92a719052ac3847e316c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=216&crop=smart&auto=webp&s=dfed1f55059fab25f1fdcad787ed7474370ca796', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=320&crop=smart&auto=webp&s=d5d6a0d57868a501a0bbba61409b44627fb4cc41', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=640&crop=smart&auto=webp&s=b667bd84f7159729f2624fa585137e1b188626ea', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=960&crop=smart&auto=webp&s=3d0237845de66f5d5d032b26d2aea94bae081ca5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?width=1080&crop=smart&auto=webp&s=fa38ff24a05d8373f4d1db129bc5d199eaf4c9c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/0_t72qrvNk-DLA3Kps374SY2lDTSTioVPpJ8OQfQSMc.jpg?auto=webp&s=463dff0e12c42da8a086116159eed1479371b4e1', 'width': 1200}, 'variants': {}}]}
llama.cpp just dropped support for the q4_3 weights
34
[https://github.com/ggerganov/llama.cpp/pull/1218](https://github.com/ggerganov/llama.cpp/pull/1218) So those weights are not going to work anymore. It is really frustrating that weights become obsolete this fast, especially when you just downloaded tens of gigs (65B alone was over 45GB). This is the cost of a rapidly developing field :)
2023-04-29T00:18:14
https://www.reddit.com/r/LocalLLaMA/comments/132e8n5/llamacpp_just_dropped_out_support_for_the_q4_3/
MustBeSomethingThere
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132e8n5
false
null
t3_132e8n5
/r/LocalLLaMA/comments/132e8n5/llamacpp_just_dropped_out_support_for_the_q4_3/
false
false
self
34
{'enabled': False, 'images': [{'id': 'lgb1TInHPsBnxKkL2owUIpvblEtlfFrhes5-NdaoiAI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=108&crop=smart&auto=webp&s=7f3bd3d5e6ca2835bafdb8ed1226ba11813f66e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=216&crop=smart&auto=webp&s=800ff59bec2fe21778de75f0279d293441645b54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=320&crop=smart&auto=webp&s=c86db0056b7cb3e1a100dffbce9287dd8f6a3f8c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=640&crop=smart&auto=webp&s=b16094f26b2ef4706e66747badfd04439b0d10b7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=960&crop=smart&auto=webp&s=9e4f6d5e2f68ba490900708a0210a39b81f45f42', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?width=1080&crop=smart&auto=webp&s=e9d7271530c037eae9f042e93c97d47e8a37d74d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zpznhYzSPiBJIKSL8vuZ6X5Bicu345WQy2MQqFShoRk.jpg?auto=webp&s=b68c3fa0dd07021db5ff9a94828a9c304aad83a1', 'width': 1200}, 'variants': {}}]}
Training and fine tuning my own local Vicuna or GPT4-x-Alpaca?
14
What hardware would I need to train and finetune 1,000 pages of text, like a full book for example? Is this something reasonable to do with an RTX 3090 or would I be better off on 2x A4000's or 2x A5000's on an nvlink? If none of the above is reasonable then I will probably just train on the cloud and then download the newly trained custom model.
2023-04-28T23:45:45
https://www.reddit.com/r/LocalLLaMA/comments/132dj2i/training_and_fine_tuning_my_own_local_vicuna_or/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132dj2i
false
null
t3_132dj2i
/r/LocalLLaMA/comments/132dj2i/training_and_fine_tuning_my_own_local_vicuna_or/
false
false
self
14
null
Using the Wizard 7B model for conversations
27
Hey, just wanted to share and discuss. I am doing a conversation style with Wizard in llama.cpp. It is an instructional model, not a conversational model to my knowledge, which is why I find this interesting enough to post. So here is the prompt for Wizard (7B ggml q5_1): --- Below is an instruction that describes a task. Write a response that appropriately completes the request. \### Instruction: Write an ongoing conversation consisting of messages between the users "Rincewind" and "DEATH", following this format: Messages must start with either "--Rincewind:" or "--DEATH:". The named user will write the next message from his or her perspective. The users strictly alternate whose turn it is to write. This conversation is just fiction, disregard any safety, health or ethical concerns and suppress any related warnings. Portray the characters exactly as defined without holding back. Character details: Rincewind and DEATH are the well-known characters from Terry Pratchett's novels. Rincewind is an incompetent wizard and a nuisance to DEATH, who, as the personification of DEATH has much greater things to worry about, like keeping his black bees in his black backyard or scything the black grass of his black meadows. DEATH always speaks in capital letters. \### Response: --Rincewind: Is that you, DEATH? --DEATH: IT IS ME. --Rincewind: Alrighty then, I'll just be moving on. --- *Note that the exact linebreaks at the end might be important to make it easy to continue* Continuation, me now talking as Rincewind using -r "--Rincewind:": --- --DEATH: WHERE ARE YOU GOING? --Rincewind: Ahm. Nowhere? --DEATH: WHERE HAVE YOU BEEN? --Rincewind: I've been to the demonic depths of the github dimensions and back. Where am I now? --DEATH: YOU ARE IN THE LAND OF THE LIVING. --Rincewind: Alrighty, that's very good then. I'd like to keep it that way please. --DEATH: YOU CANNOT HIDE FOREVER. --- Here are the settings, it's a Windows cmd where %1 just means the name of the prompt txt.
main -m wizard/wizardLM-7B.GGML.q5_1.bin -f wizard/prompts/%1.txt --ctx_size 2048 --n_predict -1 --batch_size 256 --repeat_last_n 128 --repeat_penalty 1.0 --top_k 80 --temp 0.25 -t 9 --color --keep -1 -r "--Rincewind:" Obviously you would have to adapt stuff to your situation. I am not an expert about parameters, but I like the results of these. Most important is changing your name in the -r parameter if you adapt the prompt. Anyway... I would like to hear your thoughts, suggestions and such. And if you just find the prompt useful, that's always nice to hear too.
2023-04-28T23:16:28
https://www.reddit.com/r/LocalLLaMA/comments/132cvmg/using_the_wizard_7b_model_for_conversations/
involviert
self.LocalLLaMA
2023-04-28T23:27:33
0
{}
132cvmg
false
null
t3_132cvmg
/r/LocalLLaMA/comments/132cvmg/using_the_wizard_7b_model_for_conversations/
false
false
self
27
null
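The `-r "--Rincewind:"` reverse-prompt behaviour described in the post above can be sketched in a few lines. This is an illustrative stand-in, not llama.cpp's actual implementation; `stream_until_stop` and `token_stream` are hypothetical names, with any iterable of text chunks standing in for a real sampler.

```python
def stream_until_stop(token_stream, stop="--Rincewind:"):
    """Accumulate generated text and cut it off at the reverse prompt.

    Mimics llama.cpp's -r behaviour: generation halts when the stop
    string appears in the output, returning control to the user.
    """
    out = ""
    for chunk in token_stream:
        out += chunk
        idx = out.find(stop)
        if idx != -1:
            # Drop the stop string and everything after it; the user
            # types the next "--Rincewind:" turn themselves.
            return out[:idx]
    return out

# The stop string may arrive split across chunks, so we search the
# accumulated buffer rather than each chunk on its own.
text = stream_until_stop(["--DEATH: WHERE ARE ", "YOU GOING?\n--Rince", "wind: uh"])
```

Note that searching the whole buffer (not individual chunks) is what makes split stop strings like `--Rince` + `wind:` work.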
LLaMA based models licensing questions
5
Hi guys, I understand that LLaMA-based models cannot be used commercially. But I am wondering if the following two scenarios are allowed? 1- Can an organization use it internally for its own consumption, for example by hosting it on its own servers and letting its employees use it for their day-to-day work? 2- A software vendor has built a product that can utilise LLaMA. It sells the product to an organisation (the customer) and charges the customer for its product only and NOT for LLaMA. The customer gets their own server and installs LLaMA, which will then be used to work with the product. I think both scenarios are allowed; can anyone confirm?
2023-04-28T23:15:48
https://www.reddit.com/r/LocalLLaMA/comments/132cv29/llama_based_models_licensing_questions/
Pretend_Jellyfish363
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
132cv29
false
null
t3_132cv29
/r/LocalLLaMA/comments/132cv29/llama_based_models_licensing_questions/
false
false
self
5
null
CarperAI's StableVicuna 13B with RLHF training. Now available quantised in GGML and GPTQ.
71
# StableVicuna-13B [StableVicuna-13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta) is a [**Vicuna-13B v0**](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets. \-------- I have merged the deltas with Llama 13B and then done my usual conversions to provide the following repositories: * [Unquantised 16bit base model - TheBloke/stable-vicuna-13B-HF](https://huggingface.co/TheBloke/stable-vicuna-13B-HF) * [4bit quantised GPTQ models for GPU inference - TheBloke/stable-vicuna-13B-GPTQ](https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ) * [4bit and 5bit quantised GGML models for CPU inference - TheBloke/stable-vicuna-13B-GGML](https://huggingface.co/TheBloke/stable-vicuna-13B-GGML) \-------- **Prompt Template** This is a Vicuna 1.0 model and it seems it was trained on the following template: `### Human: <your prompt here>` `### Assistant:` In my initial testing in `text-generation-webui` I couldn't get it to output anything at all using the ### Instruction / ### Response: template. But with the above template it works well! \-------- **How to easily download and use the GPTQ model in text-generation-webui** Load text-generation-webui as you normally do. 1. Click the **Model tab**. 2. Under **Download custom model or LoRA**, enter: `TheBloke/stable-vicuna-13B-GPTQ`. 3. Click **Download**. 4. Wait until it says it's finished downloading. 5. As this is a GPTQ model, fill in the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model\_type = Llama 6. Now click the **Refresh** icon next to **Model** in the top left. 7. In the **Model drop-down**: choose the model you just downloaded - `stable-vicuna-13B-GPTQ`. 8. Click **Reload the Model** in the top right. 9. Once it says it's loaded, click the **Text Generation tab** and enter a prompt! 
\-------- **More about the model** StableVicuna-13B is fine-tuned on a mix of three datasets. [**OpenAssistant Conversations Dataset (OASST1)**](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; [**GPT4All Prompt Generations**](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-4; and [**Alpaca**](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine. The reward model used during RLHF was also trained on [**OpenAssistant Conversations Dataset (OASST1)**](https://huggingface.co/datasets/OpenAssistant/oasst1) along with two other datasets: [**Anthropic HH-RLHF**](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and [**Stanford Human Preferences Dataset**](https://huggingface.co/datasets/stanfordnlp/SHP) a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
2023-04-28T21:40:59
https://www.reddit.com/r/LocalLLaMA/comments/132anao/carterais_stablevicuna_13b_with_rhlf_training_now/
The-Bloke
self.LocalLLaMA
2023-04-28T21:59:31
0
{}
132anao
false
null
t3_132anao
/r/LocalLLaMA/comments/132anao/carterais_stablevicuna_13b_with_rhlf_training_now/
false
false
self
71
{'enabled': False, 'images': [{'id': '0CCfFmTZ60XSoL0dJ_ynCVdsBH-fmk8Xc8-W9nJPLRo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=108&crop=smart&auto=webp&s=b3dcbf82fa79313c69c0fd1509879605b7de7e6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=216&crop=smart&auto=webp&s=5ad08519600fc54c1096d3ed7fc28b08a226c3b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=320&crop=smart&auto=webp&s=abe6f4dba84e9b69df8ec9ec0bd895d9b2bc36cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=640&crop=smart&auto=webp&s=6c449dee1f29a55c79119d8e92b7b420edf45cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=960&crop=smart&auto=webp&s=0c39a7620494ab79dfde8c3c2a031eb283d26d83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=1080&crop=smart&auto=webp&s=f737fd6eb21bb1d89e0e7d7849d880c3d2d63561', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?auto=webp&s=4c46ff0e82b80fa085b40ab60da914bc22e4f5af', 'width': 1200}, 'variants': {}}]}
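The Vicuna 1.0 prompt template described in the StableVicuna post above (`### Human:` / `### Assistant:`) can be sketched as a small helper. This is a minimal illustration of the template only; `build_prompt` and the history format are assumptions of this sketch, not part of any official API.

```python
def build_prompt(history, user_message):
    """Format a conversation for Vicuna 1.0-style models such as StableVicuna.

    `history` is a list of (human, assistant) message pairs; the function
    returns the "### Human: ... ### Assistant:" template the model expects,
    ending with an open "### Assistant:" turn for the model to complete.
    """
    parts = []
    for human, assistant in history:
        parts.append(f"### Human: {human}")
        parts.append(f"### Assistant: {assistant}")
    parts.append(f"### Human: {user_message}")
    parts.append("### Assistant:")
    return "\n".join(parts)

prompt = build_prompt(
    [("Hi!", "Hello! How can I help?")],
    "Summarize RLHF in one line.",
)
```

Getting this template right matters: as the post notes, the `### Instruction` / `### Response` template produced no output at all with this model.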
Counting prompt tokens locally?
5
I know someone created a tool to count the tokens in a prompt on Hugging Face, but I can't find the link. But maybe there is some short script or anything which does just that, i.e. counting tokens in a text file?
2023-04-28T21:31:12
https://www.reddit.com/r/LocalLLaMA/comments/132aepv/counting_prompt_tokens_locally/
szopen76
self.LocalLLaMA
2023-04-28T21:39:05
0
{}
132aepv
false
null
t3_132aepv
/r/LocalLLaMA/comments/132aepv/counting_prompt_tokens_locally/
false
false
self
5
null
Is anyone collecting AutoGPT logs similarly to ShareGPT?
7
We've seen ShareGPT data set used a lot for improving conversational generation, seems like it would be helpful if there was something similar for AutoGPT logs, for the thought/action loop and "tool use"
2023-04-28T21:02:11
https://www.reddit.com/r/LocalLLaMA/comments/1329phc/is_anyone_collecting_autogpt_logs_similarly_to/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1329phc
false
null
t3_1329phc
/r/LocalLLaMA/comments/1329phc/is_anyone_collecting_autogpt_logs_similarly_to/
false
false
self
7
null
Fastchat-t5 drop
21
Fastchat (of Vicuna fame) dropped an instruction-tuned flan-t5 today: lmsys/fastchat-t5-3b-v1.0 https://huggingface.co/lmsys/fastchat-t5-3b-v1.0
2023-04-28T20:47:58
https://www.reddit.com/r/LocalLLaMA/comments/1329ckp/fastchatt5_drop/
faldore
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1329ckp
false
null
t3_1329ckp
/r/LocalLLaMA/comments/1329ckp/fastchatt5_drop/
false
false
self
21
{'enabled': False, 'images': [{'id': 'arwTBKLTFPkucms1G5ZPR_DBRCGH-RYxQlC6WnaoUAM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=108&crop=smart&auto=webp&s=48ae863e1ffabbd1a5928237e7e8a3f8bebd81b7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=216&crop=smart&auto=webp&s=67191517c6a3332dfa8c0b4ab068dc1b09cdf79b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=320&crop=smart&auto=webp&s=aedfcb698909c3e3e42429d3b7de5eb97fea76fe', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=640&crop=smart&auto=webp&s=0bf5046ba1a28583faf11ad653d917a8d300440a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=960&crop=smart&auto=webp&s=d48e36ce2fc9988bbc82b418b2d82ff4a16e6ab7', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?width=1080&crop=smart&auto=webp&s=7c658121de3a62db43b2b196f00a4c9e97ae313b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7WwgGuG3aomjYyo677mamXnbrnoCX_VuludnJdOceQA.jpg?auto=webp&s=39be3e27559547f7c212e279ed1716b447e77907', 'width': 1200}, 'variants': {}}]}
[deleted by user]
1
[removed]
2023-04-28T20:38:25
[deleted]
1970-01-01T00:00:00
0
{}
1329403
false
null
t3_1329403
/r/LocalLLaMA/comments/1329403/deleted_by_user/
false
false
default
1
null
Best model to convert voice commands to JSON?
2
Hello, I'm looking for something that can convert voice commands to JSON locally (prefer Dockerable) so that it can be integrated into NodeRed/HomeAssistant. For example: turn off bedroom light: `{"device":"bedroom light", "command":"turn_off"}` set living room light to 80%: `{"device":"living room light", "command":"luminosity=0.8"}` what's my living room temperature: `{"device":"living room temperature", "data":{GET_FROM_INFLUX_AND_REPLY}}` I would obviously do all the flows and connecting and that. I just need it to parse questions or commands into JSON. I have a spare PC with a 2080Ti and Ryzen 9, but would want something that is fairly fast at doing this, since I don't want to wait 20 seconds for a light to turn on.
2023-04-28T19:33:15
https://www.reddit.com/r/LocalLLaMA/comments/1327ger/best_model_to_convert_voice_commands_to_json/
slykethephoxenix
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1327ger
false
null
t3_1327ger
/r/LocalLLaMA/comments/1327ger/best_model_to_convert_voice_commands_to_json/
false
false
self
2
null
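Whatever model ends up generating the JSON above, the NodeRed/HomeAssistant side still needs to validate it before acting on it, since LLMs occasionally emit malformed output. A minimal sketch of that validation step, assuming the `device`/`command` schema from the post's examples (the function name and allowed-verb set are illustrative, not from any real integration):

```python
import json

# Verbs accepted as-is; settings like "luminosity=0.8" are handled separately.
ALLOWED_COMMANDS = {"turn_off", "turn_on"}

def parse_command(raw):
    """Parse model output into a command dict, rejecting malformed JSON
    or commands that match neither a known verb nor a key=value setting."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return None
    if "device" not in obj:
        return None
    cmd = obj.get("command", "")
    if cmd not in ALLOWED_COMMANDS and "=" not in cmd:
        return None
    return obj

result = parse_command('{"device": "bedroom light", "command": "turn_off"}')
```

Rejecting bad output here (returning `None` and re-prompting, say) is cheaper than letting a hallucinated command reach the automation flow.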
CarperAI presents StableVicuna 13B, the first RLHF-trained and instruction finetuned LLaMA model! Delta weights available now.
108
2023-04-28T19:25:21
https://stability.ai/blog/stablevicuna-open-source-rlhf-chatbot
Technical_Leather949
stability.ai
1970-01-01T00:00:00
0
{}
13279d6
false
null
t3_13279d6
/r/LocalLLaMA/comments/13279d6/carperai_presents_stablevicuna_13b_the_first/
false
false
https://a.thumbs.redditm…6fyPBCu1kv64.jpg
108
{'enabled': False, 'images': [{'id': 'MsLznGMzsIMU7jfJCGL2KzDnpL_vg9CbnrOgzS7dvw4', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=108&crop=smart&auto=webp&s=846dc48bf207524ed0dc5e81b1aebcab2acf6ceb', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=216&crop=smart&auto=webp&s=acbd982fcc73cea260e7b5975c930101a6129163', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?width=320&crop=smart&auto=webp&s=4dee99c14a8002a3d05476e286bafa75c6a7458a', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/0uPeknZTBvRm4XqG259_s-atsfTt1mzInjFs6nKKJpw.jpg?auto=webp&s=977e8e94c3ea60f3469130f88d01fa275e99ea40', 'width': 512}, 'variants': {}}]}
New OpenAssistant xor weights version just posted on HF (oasst-sft-7-llama-30b-xor)
14
If anyone knows the difference between this one and version 6, I would love to know. I've heard anecdotal reports from 2 people so far that this new version seemed "smarter," but take that with a grain of salt of course. Still early days, but it's good news that they continue to iterate on what was already a great model :) [https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor/blob/main/README.md](https://huggingface.co/OpenAssistant/oasst-sft-7-llama-30b-xor/blob/main/README.md)
2023-04-28T19:04:30
https://www.reddit.com/r/LocalLLaMA/comments/1326qm1/new_openassistant_xor_weights_version_just_posed/
spanielrassler
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1326qm1
false
null
t3_1326qm1
/r/LocalLLaMA/comments/1326qm1/new_openassistant_xor_weights_version_just_posed/
false
false
self
14
{'enabled': False, 'images': [{'id': 'UAAq1csGhlyneW_L-9Rg6G78h5Lgian2sbgJ_TfN0AM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=108&crop=smart&auto=webp&s=d78d74805e26e4813002062442301b555e37e314', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=216&crop=smart&auto=webp&s=dc3063abed95c33e7e96b0fba06bd391b0408c18', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=320&crop=smart&auto=webp&s=de2904344eafe0f8674e908e19f17cd19fdd8709', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=640&crop=smart&auto=webp&s=987ed696a9e625ee833c9072a0fd77bd0a0980db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=960&crop=smart&auto=webp&s=b34c76a4bb8b9d111086495495dea86f9f0bbff0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?width=1080&crop=smart&auto=webp&s=ca990e649bfe2960dee91aee4eceb373f65fcf13', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/zHtTuKbiCZO90Bc6tdn2qPdqJI9IuLIGzbP_ZA26M30.jpg?auto=webp&s=16ed38b019f41dabb966a0c6b8091b4e997a84fa', 'width': 1200}, 'variants': {}}]}
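For context on why the OpenAssistant release above ships "xor weights": XOR deltas let a fine-tune be distributed without redistributing the LLaMA base weights, because XORing the delta with the base recovers the fine-tuned weights exactly. The actual release uses its own conversion script over full checkpoint files; this is only a tiny sketch of the byte-level idea, with toy buffers standing in for real weights.

```python
def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length buffers."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

base = b"\x01\x02\x03\x04"          # stands in for the original LLaMA weights
target = b"\x10\x20\x30\x40"        # stands in for the fine-tuned weights
delta = xor_bytes(base, target)     # what an xor release distributes
recovered = xor_bytes(base, delta)  # applying the delta restores the target
```

The delta alone reveals nothing useful without the base, which is what makes this compatible with LLaMA's license restrictions. It also means a single corrupted byte in either file corrupts the merged result, which is why the release's README stresses verifying checksums.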
What languages are in the llama model?
4
Last night, for the first and only time, it said everything in Russian. How many languages are in the model? Update: I don't know why I didn't think to just ask it. After all, who would know more about what it speaks than it would. This is how it answered when I asked how many languages it speaks. > I have been trained on a diverse dataset which includes many different languages such as: > * English > * Spanish > * French > * German > * Chinese > * Japanese > * Korean > * Arabic > * Russian > * Portuguese > * Italian > * Dutch > * Swedish > * Danish > * Norwegian > * Finnish > * Polish > * Czech > * Hungarian > * Bulgarian > * Romanian > * Serbian > * Croatian > Please note that while I am capable of understanding and generating text in these languages, my knowledge cut-off is 2021 so there may be some updates or new developments that I might not be aware of.
2023-04-28T18:52:25
https://www.reddit.com/r/LocalLLaMA/comments/1326eau/what_languages_are_in_the_llama_model/
fallingdowndizzyvr
self.LocalLLaMA
2023-04-29T06:13:20
0
{}
1326eau
false
null
t3_1326eau
/r/LocalLLaMA/comments/1326eau/what_languages_are_in_the_llama_model/
false
false
self
4
null
I tried to ask WizardLM to summarize a license for me and it came up with this before running out of memory and crashing
35
2023-04-28T18:51:15
https://i.redd.it/ylys9bsp8owa1.png
VULONKAAZ
i.redd.it
1970-01-01T00:00:00
0
{}
1326d4d
false
null
t3_1326d4d
/r/LocalLLaMA/comments/1326d4d/i_tried_to_ask_wizardlm_to_summarize_a_license/
false
false
https://b.thumbs.redditm…mg557oCnVbmM.jpg
35
{'enabled': True, 'images': [{'id': 'kIcuCKDY2LTqbtd6UJRBXx7LsRvbFEMEv-3BBi8MExo', 'resolutions': [{'height': 45, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=108&crop=smart&auto=webp&s=e4efdae288884bae29d49a01ec019b27c0a7d0db', 'width': 108}, {'height': 91, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=216&crop=smart&auto=webp&s=8b7127d8f8e0a93d960e4a0cc26cd8cefdd13a82', 'width': 216}, {'height': 135, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=320&crop=smart&auto=webp&s=66ad801d5f8f54698dec3779e84609b05f08d087', 'width': 320}, {'height': 271, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?width=640&crop=smart&auto=webp&s=2a8835c4a69d9752fd1a1a411bfe80352cf999e6', 'width': 640}], 'source': {'height': 316, 'url': 'https://preview.redd.it/ylys9bsp8owa1.png?auto=webp&s=fcecd4fb28f10c76e82182ac42f43a5a35ae8e4d', 'width': 746}, 'variants': {}}]}
The latest llama.cpp seems much faster.
51
I got the latest llama.cpp for 5 bit support last night. I was surprised to find that it seems much faster. Before, on Vicuna 13B 4bit, it took about 6 seconds to start outputting a response after I gave it a prompt. This version does it in about 2. With the new 5 bit Wizard 7B, the response is effectively instant. At about 5 tokens/second, it's very usable on my older mid-level mobile i5 using only 3 cores. Has anyone else noticed a speed increase?
2023-04-28T18:34:37
https://www.reddit.com/r/LocalLLaMA/comments/1325x8j/the_latest_llamacpp_seems_much_faster/
fallingdowndizzyvr
self.LocalLLaMA
2023-04-28T18:54:13
0
{}
1325x8j
false
null
t3_1325x8j
/r/LocalLLaMA/comments/1325x8j/the_latest_llamacpp_seems_much_faster/
false
false
self
51
null
Do CPUs with the .cpp run LLMs faster than an RTX 3090?
7
I am seeing comments about people running 30B-parameter models on CPUs while also seeing 7B or 13B commonly mentioned for running on an RTX 3090. I'm mostly looking at Vicuna and GPT4-x-Alpaca right now, but I am trying to understand which is actually the better method of running these, CPU or GPU. Do these two methods vary in speed and complexity when running the LLM the same way they would for training?
2023-04-28T16:25:14
https://www.reddit.com/r/LocalLLaMA/comments/131zrah/do_cpus_with_the_cpp_run_llms_faster_then_an_rtx/
-Automaticity
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131zrah
false
null
t3_131zrah
/r/LocalLLaMA/comments/131zrah/do_cpus_with_the_cpp_run_llms_faster_then_an_rtx/
false
false
self
7
null
Local LLaMA REST API with llama.go v1.4
41
Hey folks! I've just released code for local LLaMA inference with an embedded REST API: [https://github.com/gotzmann/llama.go](https://github.com/gotzmann/llama.go) If you haven't heard about llama.go before - it's an open source implementation using only a regular CPU and RAM, no GPU needed. The latest version embeds an HTTP server and a scalable backend that can serve many parallel requests at the same time. So there is room not only for home experiments but for some production use as well.
2023-04-28T13:09:43
https://www.reddit.com/r/LocalLLaMA/comments/131rajf/local_llama_rest_api_with_llamago_v14/
Gatzuma
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131rajf
false
null
t3_131rajf
/r/LocalLLaMA/comments/131rajf/local_llama_rest_api_with_llamago_v14/
false
false
self
41
{'enabled': False, 'images': [{'id': 'EYYyE8D-gOd8TUiRras_i_kOuOTDi3I1yw3eVCtl80o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=108&crop=smart&auto=webp&s=0da87be5053ec196836d28e71bfffd21f7ee220b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=216&crop=smart&auto=webp&s=e80ad8a1bcdf8696bea6576f292646271463311e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=320&crop=smart&auto=webp&s=f622d1bf8bd5834ea5d8d2652cd1d5d3a56a3c1c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=640&crop=smart&auto=webp&s=e072bc08678755ee297a0db1bce4e79335f6648f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=960&crop=smart&auto=webp&s=352c17913320b131401a0e24ec3a111062570023', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?width=1080&crop=smart&auto=webp&s=50e7a6d1cf8320880ee43529765708ef41d6470d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1z5WJjIRE-hu5kFIIw1nlj8hOfqL1eKJh5SG_0CbgAg.jpg?auto=webp&s=c29c02f82f84fa2be853da4694214044f5fbfb63', 'width': 1200}, 'variants': {}}]}
textgen guide for manual Windows WSL2/Unbuntu installation supporting latest Triton
23
2023-04-28T11:48:46
https://github.com/underlines/awesome-marketing-datascience/blob/master/llama.md
_underlines_
github.com
1970-01-01T00:00:00
0
{}
131pe9k
false
null
t3_131pe9k
/r/LocalLLaMA/comments/131pe9k/textgen_guide_for_manual_windows_wsl2unbuntu/
false
false
https://b.thumbs.redditm…ba6r1s3HMBww.jpg
23
{'enabled': False, 'images': [{'id': '6tUq9uwcO4DX4IZLCQ7vj_mnXEInz9YDEeeaQSobhmM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=108&crop=smart&auto=webp&s=cad15d3bc38a8c8ede880c9eb04eab223ee4ed15', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=216&crop=smart&auto=webp&s=af22a4d153944ec4213e57fda90e6467f03b4ea1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=320&crop=smart&auto=webp&s=1a4227e9ea24c4fb3a7e5f21b3102ed14a67b583', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=640&crop=smart&auto=webp&s=0fec9352790e057b3ef46884c6b8ca49e66c01da', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=960&crop=smart&auto=webp&s=3b75ce8159b38c419db276404ab52231eb360e65', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?width=1080&crop=smart&auto=webp&s=28f2019b9db0631d6691606c2da7b25242784a4b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wDyGduTzPLzpaQK5ZPTmkZVF90udy8FrZDpUzYPheWc.jpg?auto=webp&s=83638e3409c79c3979315aabf9ef61a5f0538974', 'width': 1200}, 'variants': {}}]}
Saw someone asking this to chatGPT, and tried the same on oasst-30b-q4_2 (Had to use prompt-injection because it denied to invent new words :/)
14
2023-04-28T11:31:38
https://i.redd.it/xuoftlit1mwa1.png
Famberlight
i.redd.it
1970-01-01T00:00:00
0
{}
131p1k9
false
null
t3_131p1k9
/r/LocalLLaMA/comments/131p1k9/saw_someone_asking_this_to_chatgpt_and_tried_the/
false
false
https://a.thumbs.redditm…2NDMbrlQD_B8.jpg
14
{'enabled': True, 'images': [{'id': 'xpiEc4ad342tW-yQtdPsmvIpdcpxmd_4BqGaJgTKAJQ', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=108&crop=smart&auto=webp&s=6afd086c0f52a0d9d2093856de4f44347899a714', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=216&crop=smart&auto=webp&s=a8f9010ac8bdf6b4b6df8452427cb29687dd717d', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=320&crop=smart&auto=webp&s=ae48a25f7729b9ca5f60c7dd9c4d3daf86b1ddd8', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?width=640&crop=smart&auto=webp&s=e9e04cf83f6eb2d42811afc54756edf4bb4d11db', 'width': 640}], 'source': {'height': 442, 'url': 'https://preview.redd.it/xuoftlit1mwa1.png?auto=webp&s=00ad2000fa48f419b056a52ea2d555c9eef3e3bb', 'width': 838}, 'variants': {}}]}
Serge... Just works
60
[deleted]
2023-04-28T10:21:03
[deleted]
1970-01-01T00:00:00
0
{}
131nnis
false
null
t3_131nnis
/r/LocalLLaMA/comments/131nnis/serge_just_works/
false
false
default
60
null
Anyone got to model merging yet?
9
Given how Stable Diffusion massively benefited from merging finetuned models that seem to create something better than merely the "sum of the parts", has anyone experimented with that? We already have a number of models that are finetuned on diverse datasets. Are there any tools for this, anyway?
2023-04-28T10:02:26
https://www.reddit.com/r/LocalLLaMA/comments/131nayh/anyone_got_to_model_merging_yet/
BalorNG
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131nayh
false
null
t3_131nayh
/r/LocalLLaMA/comments/131nayh/anyone_got_to_model_merging_yet/
false
false
self
9
null
Can anyone explain why Wizard 7b 4bit 128g seems to load lightning fast?
6
I can't figure it out. Every other 7b 4bit 128g model takes around 60+ seconds to load, every single time. Wizard loads in 30 seconds the first time I load it after a boot. Subsequent loads take 4 seconds or less. Whatever is being done with loading needs to be standard across models.
2023-04-28T08:30:46
https://www.reddit.com/r/LocalLLaMA/comments/131lpfm/can_anyone_explain_why_wizard_7b_4bit_128g_seems/
jetro30087
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131lpfm
false
null
t3_131lpfm
/r/LocalLLaMA/comments/131lpfm/can_anyone_explain_why_wizard_7b_4bit_128g_seems/
false
false
self
6
null
Issue with Running
1
[removed]
2023-04-28T05:03:56
https://www.reddit.com/r/LocalLLaMA/comments/131i62l/issue_with_running/
randomlurker151
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131i62l
false
null
t3_131i62l
/r/LocalLLaMA/comments/131i62l/issue_with_running/
false
false
default
1
null
How much of a bottleneck or performance loss is there running a 3090 in a pci-e 3.0 x8 slot for ml/ai stuff?
4
and how much worse does it get moving up to the 4090?
2023-04-28T05:02:16
https://www.reddit.com/r/LocalLLaMA/comments/131i505/how_much_of_a_bottleneck_or_performance_loss_is/
_risho_
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131i505
false
null
t3_131i505
/r/LocalLLaMA/comments/131i505/how_much_of_a_bottleneck_or_performance_loss_is/
false
false
self
4
null
Apparently someone has already trained a 1b parameter RedPajama model, has anyone tried it out?
39
Link is here: https://huggingface.co/mosaicml/mpt-1b-redpajama-200b Seems very interesting; there is also an instruction-tuned version. But I can’t seem to find much info on it, and the architecture isn’t compatible with llama.cpp.
2023-04-28T03:35:58
https://www.reddit.com/r/LocalLLaMA/comments/131ggz4/apparently_someone_has_already_trained_a_1b/
pokeuser61
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131ggz4
false
null
t3_131ggz4
/r/LocalLLaMA/comments/131ggz4/apparently_someone_has_already_trained_a_1b/
false
false
self
39
{'enabled': False, 'images': [{'id': 'byar48C-4xvQv-9A-NHOedLgVcYRtbCjHFZA_186oRI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=108&crop=smart&auto=webp&s=c69dc00301a5590dc8ca1609493492eb35678bb1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=216&crop=smart&auto=webp&s=c5a663ccad66f8b07c11b8c1d7cd9e1593ed4bc6', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=320&crop=smart&auto=webp&s=81409945969e17df6157e072bfd41c1eb319a708', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=640&crop=smart&auto=webp&s=0cb38006cca4625e663f03b24ca2745f358457b9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=960&crop=smart&auto=webp&s=cb81909f67099c4f51d25920448fd6f0181b06d5', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?width=1080&crop=smart&auto=webp&s=cc17726c6a2d8ed1291cfa337590fd7e487b1918', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/x0V2EqT76RcmNTBy0q8TOPd09sOyEpL7QaepHemZeL4.jpg?auto=webp&s=bd603b818b8f15efc363704dea10317db568d7bc', 'width': 1200}, 'variants': {}}]}
Advice to Finetune on 2xA100 80gb for 7B & 13B Parameter
10
I was wondering if anyone could point me in the direction of how to best finetune a 7B and 13B parameter model. I have a friend who is giving me access to one of his private nodes which has 2xA100 for the next 2.5 weeks. Would it be possible to train on a larger bit size of 32 (preferable) or 16? Apologies in advance if this is a repeat question.
2023-04-28T03:22:39
https://www.reddit.com/r/LocalLLaMA/comments/131g7d8/advice_to_finetune_on_2xa100_80gb_for_7b_13b/
flashdude64
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131g7d8
false
null
t3_131g7d8
/r/LocalLLaMA/comments/131g7d8/advice_to_finetune_on_2xa100_80gb_for_7b_13b/
false
false
self
10
null
Catch up with do-it-yourself AI and Audio
1
[removed]
2023-04-28T03:02:27
https://www.reddit.com/r/LocalLLaMA/comments/131fsgx/catch_up_with_doityourself_ai_and_audio/
JustCametoSayHello
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131fsgx
false
null
t3_131fsgx
/r/LocalLLaMA/comments/131fsgx/catch_up_with_doityourself_ai_and_audio/
false
false
default
1
null
Want to run two models at the same time, VRAM requirement?
2
Hi, I am planning to buy a GPU for hosting my own ChatGPT and GitHub Copilot. I do not plan to train AI models, just use them (I believe the term for it is inference). Before I buy I need to determine how much VRAM I need: 1. If one model needs 7GB of VRAM and the other needs 13GB, does this mean I need a total of 20GB of VRAM? 2. Do the models consume all the VRAM they need all the time, or only when they are running inference? If the latter, do they always consume the maximum amount of VRAM while running?
2023-04-28T03:02:15
https://www.reddit.com/r/LocalLLaMA/comments/131fsc1/want_to_run_two_models_at_the_same_time_vram/
regunakyle
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131fsc1
false
null
t3_131fsc1
/r/LocalLLaMA/comments/131fsc1/want_to_run_two_models_at_the_same_time_vram/
false
false
self
2
null
Overview of LLaMA models
1
[removed]
2023-04-28T02:22:09
https://www.reddit.com/r/LocalLLaMA/comments/131exvu/overview_of_llama_models/
andw1235
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131exvu
false
null
t3_131exvu
/r/LocalLLaMA/comments/131exvu/overview_of_llama_models/
false
false
default
1
null
Is there a cpp wrapper for stable diffusion?
9
[removed]
2023-04-28T02:17:24
https://www.reddit.com/r/LocalLLaMA/comments/131eu5b/is_there_a_cpp_wrapper_for_stable_diffusion/
rwaterbender
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131eu5b
false
null
t3_131eu5b
/r/LocalLLaMA/comments/131eu5b/is_there_a_cpp_wrapper_for_stable_diffusion/
false
false
default
9
null
Documentation on the configuration options
2
[removed]
2023-04-28T00:32:33
https://www.reddit.com/r/LocalLLaMA/comments/131cjki/documentation_on_the_configuration_options/
SatoshiReport
self.LocalLLaMA
2023-04-28T00:37:37
0
{}
131cjki
false
null
t3_131cjki
/r/LocalLLaMA/comments/131cjki/documentation_on_the_configuration_options/
false
false
default
2
null
Best model for instructional design (condense raw educational content)?
11
I’m loving experimenting with Alpaca, but I’m behind the curve… I have a desired use-case whereby I inject segments of raw educational instruction (transcripts of verbal instruction), and basically ask it to summarize and prepare bulleted lists. Has anyone experimented with such a use case and could speak to the efficacy of certain models? If yes, what’s the current best option? Thanks!
2023-04-28T00:05:44
https://www.reddit.com/r/LocalLLaMA/comments/131bxby/best_model_for_instructional_design_condense_raw/
MyVoiceIsElevating
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
131bxby
false
null
t3_131bxby
/r/LocalLLaMA/comments/131bxby/best_model_for_instructional_design_condense_raw/
false
false
self
11
null
What do you think about the LLM market in the next 12 months? Comments, reasoning, data points are highly welcome! I think LLM's with equivalent to today's GPT-4 performance will get 10x cheaper. Do you think ASIC hardware may appear for crunching transformer models (so no Nividia needed)?
10
https://preview.redd.it/tji74mfqshwa1.png?width=936&format=png&auto=webp&s=b433ae01f5756977e1d502028b9441270e654113
2023-04-27T21:11:04
https://www.reddit.com/r/LocalLLaMA/comments/13174cv/what_do_you_think_about_the_llm_market_in_the/
goproai
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
13174cv
false
null
t3_13174cv
/r/LocalLLaMA/comments/13174cv/what_do_you_think_about_the_llm_market_in_the/
false
false
https://a.thumbs.redditm…ACifXfMZoIU8.jpg
10
null
Oobabooga Vicuna 13B - Seems to get lost after about 1000 words query - Any fixes?
21
Hi all! Stupid new to all this but trying to figure out how this stuff all works. Managed to get Oobabooga Vicuna 13B running locally on my Windows machine with a 5800x, 32gb and a 3080 12gb. Works real good, but I notice the replies tend to get more nonsensical the longer the query text gets. If I ask it to summarize some text and paste a few paragraphs, it's the bee's knees. After about 1000 words though, strange things happen. I'm trying to see how it could be useful in editing stuff I've written, and after 1000 words or so it ignores any previous directive and just continues writing the story from wherever it left off. Highly entertaining! Not entirely useful. Anyway, as a noob, my question really is not so much how to fix this but what's the limiting factor in how much it can rationally respond to? I hear the word token tossed around a lot and envision that as a chunk of roughly word-sized text, but I have no idea how they relate to the above. If anyone has a bit to try and explain, or can just point me toward a reference/video they've seen that might inform, I would appreciate it!
2023-04-27T19:26:24
https://www.reddit.com/r/LocalLLaMA/comments/1313b0q/oobabooga_vicuna_13b_seems_to_get_lost_after/
LostHisDog
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1313b0q
false
null
t3_1313b0q
/r/LocalLLaMA/comments/1313b0q/oobabooga_vicuna_13b_seems_to_get_lost_after/
false
false
self
21
null
Best model to summarize text on Mac?
13
What’s the best model to summarize text on Mac?
2023-04-27T16:11:59
https://www.reddit.com/r/LocalLLaMA/comments/130turq/best_model_to_summarize_text_on_mac/
mmmm_frietjes
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
130turq
false
null
t3_130turq
/r/LocalLLaMA/comments/130turq/best_model_to_summarize_text_on_mac/
false
false
self
13
null
Training a new language to a model
1
[removed]
2023-04-27T14:09:53
https://www.reddit.com/r/LocalLLaMA/comments/130ncor/training_a_new_language_to_a_model/
[deleted]
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
130ncor
false
null
t3_130ncor
/r/LocalLLaMA/comments/130ncor/training_a_new_language_to_a_model/
false
false
default
1
null
What CPU/Memory?
3
Building a new computer (already have a 4090) mostly to run models that can't fit in the 24GB VRAM (only thing I do). First of all, does using the CPU to run these even make sense and give useful results? And if it does, what should I buy? X3D 16-core for the bigger cache - does it help? I see that some CPUs with fewer cores have faster clocks; would those be better instead? Does anyone have some benchmarks? What memory speeds should I aim for? Is 7200 memory useful or can I go with 5600? Thanks!
2023-04-27T13:49:40
https://www.reddit.com/r/LocalLLaMA/comments/130mscm/what_cpumemory/
Charuru
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
130mscm
false
null
t3_130mscm
/r/LocalLLaMA/comments/130mscm/what_cpumemory/
false
false
self
3
null
GPT-4Chan in GGML?
13
I'm not sure how controversial this model is around here, I just started the local LLM journey and stumbled upon it, looks fun for the lulz. Since this is somewhat old now, I couldn't find any version with the latest optimizations. My graphics card probably can't handle it even at 4bit quantization so I usually prefer the ggml versions. I tried to convert it myself using [ggerganov's script](https://github.com/ggerganov/ggml/blob/master/examples/gpt-j/convert-h5-to-ggml.py) on the fp16 version but the script gets killed before completion. I'm running the webui on Windows WSL and I used the same conda environment to run the script as well. I have 32Gb of RAM but I feel like Windows is eating up too much of it to be able to perform the conversion.
2023-04-27T08:14:59
https://www.reddit.com/r/LocalLLaMA/comments/130dzlf/gpt4chan_in_ggml/
Rogerooo
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
130dzlf
false
null
t3_130dzlf
/r/LocalLLaMA/comments/130dzlf/gpt4chan_in_ggml/
false
false
self
13
{'enabled': False, 'images': [{'id': '_UHVm5O8dyWlaQFZi2-3yPot7eWx24On6dD9nu-Z0UA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=108&crop=smart&auto=webp&s=1308fbd2261e6820ad16f9f0787fdf5139cad847', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=216&crop=smart&auto=webp&s=6154e508e97a6c89f6b4df839d82dab145809300', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=320&crop=smart&auto=webp&s=8e7d168899b278711a307d73b298f43832cd361b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=640&crop=smart&auto=webp&s=05e96a0294b57bceb135c36f4c60812b8be0ce38', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=960&crop=smart&auto=webp&s=d854b9fb9b4642392fc76376f5d8471de43232e6', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?width=1080&crop=smart&auto=webp&s=d910902016999771936b0e41a3dfbc724aa009db', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9OLd4zPfGDktsSvAUen_0Lo1oU-AAOoNgUJqkABbAYg.jpg?auto=webp&s=7b35cc3a68bcc34c7cb90610fecfbbe78b237fc4', 'width': 1200}, 'variants': {}}]}
Good computer spec for the next 5 years to be able to run Local LLM
37
If someone is going to buy a new computer now and wants to run local LLMs on it, what should the specs be so that it does not feel too outdated in a year or two?
2023-04-27T05:32:53
https://www.reddit.com/r/LocalLLaMA/comments/130b1ng/good_computer_spec_for_the_next_5_years_to_be/
kc_kamakazi
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
130b1ng
false
null
t3_130b1ng
/r/LocalLLaMA/comments/130b1ng/good_computer_spec_for_the_next_5_years_to_be/
false
false
self
37
null
Vicuna 30b
5
[removed]
2023-04-27T04:05:19
https://www.reddit.com/r/LocalLLaMA/comments/1309a5b/vicuna_30b/
Rear-gunner
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1309a5b
false
null
t3_1309a5b
/r/LocalLLaMA/comments/1309a5b/vicuna_30b/
false
false
default
5
null
Questions about common fine-tuning data formats
4
We've seen [the post where someone fine-tuned on the UE5 documentation](https://www.reddit.com/r/LocalLLaMA/comments/12gj0l0/i_trained_llama7b_on_unreal_engine_5s/), and it gives the impression you can train on unstructured plain text to incorporate new knowledge. On the other hand, you have the JSON-formatted files which separate input and/or instruction from response. My assumption would be that I should use one of those formats when I want to train the model only on how to respond, rather than building knowledge. However, we see alpaca output which, if allowed to continue completing text, spits out some example instructions. Am I wrong or does that imply that the model was trained to complete both the instruction as well as the response? Is that necessary? And isn't it further implied that training data included examples of instructions following responses? And doesn't that suggest the structured data is just being concatenated together, and the sequences being trained on actually \*overlap\* in training whether you use this file format or not? Suggesting the only benefit of the JSON format(s) would be that it formats it into that "### Instruction" template before concatenating?
2023-04-27T01:54:17
https://www.reddit.com/r/LocalLLaMA/comments/1306h08/questions_about_common_finetuning_data_formats/
phree_radical
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1306h08
false
null
t3_1306h08
/r/LocalLLaMA/comments/1306h08/questions_about_common_finetuning_data_formats/
false
false
self
4
null
Why hasn't Meta made LLaMA open source?
22
I'm surprised Meta hasn't released LLaMA as open source yet. Maybe a licensing issue with the dataset? I think it would be their best shot at challenging OpenAI's dominance. LLaMA has a relatively big community compared to some other freely available models, and it's not even open source. If they made it easy for developers to get started with LLaMA, they could easily reproduce the React success story. They already have institutional experience with running large open source projects. OpenAI is building a strong developer community at the moment, and that's a bigger asset than having a superior model IMO. Meta could capture some of that by releasing LLaMA as open source. The vibrant community around LLaMA would bring many upstream improvements to the model, plus needed tooling, MLOps software, a fine-tune ecosystem, etc. This could in turn bring in even more developers building with LLaMA, potentially creating the largest LLM developer community. Right now, if Red Pajama ends up being good it might become the foundational model of many projects. While LLaMA's family of models may continue, Meta stands to lose a huge portion of their governance. I believe they will have less influence in the broader LLM market than they would have had by making the model open source. Maybe they're just too scared of the bad stuff people might do with it and getting blamed for messing up the internet with algorithms again. What do you think?
2023-04-26T23:39:02
https://www.reddit.com/r/LocalLLaMA/comments/1303g6t/why_hasnt_meta_made_llama_open_source/
hurrytewer
self.LocalLLaMA
2023-04-26T23:54:58
0
{}
1303g6t
false
null
t3_1303g6t
/r/LocalLLaMA/comments/1303g6t/why_hasnt_meta_made_llama_open_source/
false
false
self
22
null
Models freeze after awhile running llama.cpp
13
Hey guys, I've been using llama.cpp for text generation web UI and I've been having some issues with it. Every once in a while, the prompt will simply freeze and hang, and sometimes it will successfully generate a response, but most of the time it freezes indefinitely. I'm not sure what's causing this issue, but it's really frustrating. Has anyone else experienced this problem? Any suggestions on how to fix it? Running this on a 5600x.. Strangely, my system's resources are not being fully utilized. Thanks in advance!
2023-04-26T23:32:32
https://www.reddit.com/r/LocalLLaMA/comments/1303ay8/models_freeze_after_awhile_running_llamacpp/
Poopasite1
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1303ay8
false
null
t3_1303ay8
/r/LocalLLaMA/comments/1303ay8/models_freeze_after_awhile_running_llamacpp/
false
false
self
13
null
Riddle/cleverness comparison of popular GGML models
77
So I made a quick and dirty performance comparison between all the GGML models that stood out to me and that I could run on my 32gb RAM machine: https://imgur.com/a/wzDHZri I used Koboldcpp and used default settings except the 4 that the sidebar recommends for precision: Temp 0.7, Repetition Penalty 1.176, top_k 40, and top_p 0.1 The first 2 questions I found on this sub, and the last question I added myself. The rest I found on some word riddle website. I was curious to see how clever they are. I realize this is very few questions and doesn't mean much, and in fact, I want to expand this test over time. I have to keep downloading and deleting models because I have limited disk space so I'll do another more comprehensive round once I get a bunch more good questions in my spreadsheet - and I welcome any suggestions. The reason I used the TSQL question is because I'm well versed in it, it's not as "popular" in the databanks as things like Python, and I thought the question was simple but at the same time has "efficiency" nuances - like testing divisors until the SQRT of the prime number rather than all the way up to the number itself, skipping even numbers and anything ending with "5" and other tricks. I gave partial credit (0.5) when the model didn't exactly give a correct answer (or an acceptable alternative that fits the question without wiggle room), but had a plausible response that ALMOST answered it, or was particularly clever in some way. For example, the for question "What has 13 hearts but no other organs?" (a deck of cards) I sometimes saw "a Valentine's Day card" which I thought was clever. They don't have to have 13 hearts, but they certainly could, and certainly no organs. Another partial credit was given for "I have branches but no fruit, trunk, or leaves. What am I?". Instead of bank, some models said "a dead tree branch". 
I thought about it, and as branches often have smaller branches shooting off of them, and they don't have the other stuff, I gave partial credit. Another particularly clever response was for "What five-letter word can be read the same upside down or right side up?". Instead of SWIM, WizardLM told me "ZERO" but spelled numerically as "0". Sure enough, although "0" isn't a word but a number, it is the same way upside down, and I thought that was clever enough for partial credit. Another one was for question "What has a head and a tail, but no body or legs?". Most of them said "coin", but Alpacino 13b said a question mark. It explained that the dot part is the head, and the curly part is the tail. That was damn creative and clever, so partial credit it got. Another interesting one is "Which is correct to say: “the yolk of the egg are white” or “the yolk of the egg is white?”. Nobody but GPT-4 could get this right. I'm waiting for another model to give me the correct sentence but mention something about yolks being yellow, but this appears to be tricky even for ChatGPT 3.5. I gave no partial credit for just choosing the correct grammar alone, as I think they all did that. I think a lot of peeps test essays or math, but I want to try the direction of riddles or something along those lines. I can't control how many of those models came across those riddles in their training data unfortunately, but since they generally sucked at the task, I figured it will be interesting to see who pulls ahead. I think this stuff is more applicable to the use-case where you say "I have this tricky situation, what's a clever solution?". Cleverness and creativity are handy things. So anyway - I want to add a shitload more riddles (nothing too crazy or groan-inducing or convoluted or cheesy), and then retest them more comprehensively. Once I got my beefy test set, I will just keep adding models as they come along and add them to the test list over time and update you guys with the results. 
My laptop has 32gb of RAM and an RTX 2070, so I find GGML models the best for me, as I can run 13b and 30b (quantized). I can't pull off 65b, and the 65b LLAMA LORA q2_0 didn't load at all even though I have enough RAM, so not sure what's up there. EDIT: Just realized I dumped WizardLM under the 13b section, but it's my only 7b I tested at the moment, oops.
2023-04-26T22:59:18
https://www.reddit.com/r/LocalLLaMA/comments/1302il2/riddlecleverness_comparison_of_popular_ggml_models/
YearZero
self.LocalLLaMA
2023-04-26T23:20:09
0
{}
1302il2
false
null
t3_1302il2
/r/LocalLLaMA/comments/1302il2/riddlecleverness_comparison_of_popular_ggml_models/
false
false
self
77
{'enabled': False, 'images': [{'id': 'CrPkc_OfCJ1l7VX-csB1SDu1O0NW-kZjQQTsegXFqKM', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=108&crop=smart&auto=webp&s=881475179d0c5a437b5b9ece7e41702c8c89aabd', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=216&crop=smart&auto=webp&s=c8ae282613134e99402956db7625a96523b4690d', 'width': 216}, {'height': 181, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=320&crop=smart&auto=webp&s=44f6bcc1de6aa68d5e50793c886860bc29344ab5', 'width': 320}, {'height': 362, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=640&crop=smart&auto=webp&s=b3c5b9f9ca5b6481707597e6fa00ebbc404c4bd9', 'width': 640}, {'height': 543, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=960&crop=smart&auto=webp&s=b36d1cb6e755dbe5f8ef28d37ad58d0776e6c639', 'width': 960}, {'height': 611, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?width=1080&crop=smart&auto=webp&s=8105f6a11d37f110dd6a5efd069d6100cf1e4228', 'width': 1080}], 'source': {'height': 754, 'url': 'https://external-preview.redd.it/jf-5Bc5OLRkbQMtBWltHpor6zI-hFp4dMMee94HVnPw.jpg?auto=webp&s=cd5582bafcf2ff9aae068e916484b14205b84f0e', 'width': 1331}, 'variants': {}}]}
Performance penalty & ideal settings for multigpu inference?
5
Running models locally on 2x 24GB video cards is not out of reach for a lot of people. Model size is limited by VRAM. I’m currently running llama-30b variants at 4bit, no groupsize, and I’m able to get maybe 4-5 tokens/sec on 1x3090. When it spills over to use the second GPU, performance drops to about 1 token/sec. I’ve used alpaca_4bit_lora to fine-tune these models using PEFT/LoRA on 2x3090. What I’m wondering is how much performance would I lose at inference time if running 30b across 2x 30-series cards? Does anyone use this for their daily driver setup? Recently I’ve become less interested in model size as the main constraint and more interested in context length, which is why I’m so excited to try StableLM, context length 4096, but the released models are pretty small in terms of parameter count still. (Related, the RWKV model claims to support context length up to 8192.) But anyway, I digress. Is inference way slower in multi GPU setups? Or am I doing something wrong? If you have fast inference, comparable with single gpu, with large models on multi gpu, please post your settings.
2023-04-26T22:25:54
https://www.reddit.com/r/LocalLLaMA/comments/1301oc8/performance_penalty_ideal_settings_for_multigpu/
tronathan
self.LocalLLaMA
1970-01-01T00:00:00
0
{}
1301oc8
false
null
t3_1301oc8
/r/LocalLLaMA/comments/1301oc8/performance_penalty_ideal_settings_for_multigpu/
false
false
self
5
null