title (string, 1–300) | score (int64, 0–8.54k) | selftext (string, 0–41.5k) | created (timestamp[ns], 2023-04-01 04:30:41 to 2026-03-04 02:14:14, ⌀) | url (string, 0–878) | author (string, 3–20) | domain (string, 0–82) | edited (timestamp[ns], 1970-01-01 00:00:00 to 2026-02-19 14:51:53) | gilded (int64, 0–2) | gildings (string, 7 classes) | id (string, 7) | locked (bool, 2 classes) | media (string, 646–1.8k, ⌀) | name (string, 10) | permalink (string, 33–82) | spoiler (bool, 2 classes) | stickied (bool, 2 classes) | thumbnail (string, 4–213, ⌀) | ups (int64, 0–8.54k) | preview (string, 301–5.01k, ⌀) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Grammar correction and sentence rephrase | 2 | Can I use Llama to build a local model that can rephrase sentences and correct grammar? GenAI tools like ChatGPT aren’t allowed at my workplace, so I want a local model to help me write better. | 2023-08-11T15:35:57 | https://www.reddit.com/r/LocalLLaMA/comments/15oc5uu/grammar_correction_and_sentence_rephrase/ | PsychologicalSky4377 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oc5uu | false | null | t3_15oc5uu | /r/LocalLLaMA/comments/15oc5uu/grammar_correction_and_sentence_rephrase/ | false | false | self | 2 | null |
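A local grammar-correction setup like the one asked about above mostly comes down to prompt design. Below is a minimal sketch of building such a prompt; the template wording and the `build_correction_prompt` helper are illustrative assumptions, not a tested recipe, and any local backend (KoboldCPP, llama.cpp, text-generation-webui) that accepts a text prompt could consume the result.

```python
# Minimal prompt-building sketch for a local grammar-correction assistant.
# The template wording is an assumption, not a tested recipe.

PROMPT_TEMPLATE = (
    "Below is a sentence that may contain errors.\n"
    "Rewrite it with correct grammar, keeping the meaning unchanged.\n\n"
    "Sentence: {sentence}\n"
    "Corrected:"
)

def build_correction_prompt(sentence: str) -> str:
    """Return the full prompt to send to a local LLM backend."""
    return PROMPT_TEMPLATE.format(sentence=sentence.strip())

prompt = build_correction_prompt("GenAI tools are not allowed at workplace.")
print(prompt)
```

The completion the model returns after `Corrected:` is the rewritten sentence; looping this over a document gives a crude local proofreader.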
Introduce the newest WizardMath models (70B/13B/7B) ! | 106 | WizardMath 70B achieves:
1. Surpasses **ChatGPT-3.5**, **Claude Instant-1**, **PaLM-2** and **Chinchilla** on GSM8k with ***81.6 Pass@1***
2. Surpasses **Text-davinci-002**, **GAL**, **PaLM**, **GPT-3** on MATH with ***22.7 Pass@1***
3. Surpasses **all other open-source LLMs** on both GSM8k and MATH by a substantial margin
4. Paper is coming next week, with brand-new ***Reinforced Evol-Instruct*** method for math LLMs
Twitter: [(5) WizardLM on Twitter: "🔥🔥🔥 Introduce the newest WizardMath models (70B/13B/7B) ! WizardMath 70B achieves: 1. Surpasses ChatGPT-3.5, Claude Instant-1, PaLM-2 and Chinchilla on GSM8k with 81.6 Pass@1 2. Surpasses Text-davinci-002, GAL, PaLM, GPT-3 on MATH with 22.7 Pass@1 3. Surpasses all other… https://t.co/XOSggcOePP" / X](https://twitter.com/WizardLM_AI/status/1689998428200112128)
HF Weights: [https://huggingface.co/WizardLM/WizardMath-70B-V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0)
Github: [WizardLM/WizardMath at main · nlpxucan/WizardLM (github.com)](https://github.com/nlpxucan/WizardLM/tree/main/WizardMath)
https://preview.redd.it/tteefzagvhhb1.png?width=1321&format=png&auto=webp&s=96945fa1ce01bc09f4ac318c63a7865e749d2c6b
https://preview.redd.it/uum9t0bgvhhb1.png?width=1780&format=png&auto=webp&s=ae75cb3094b043f25ddfd9e039d00875894e7aa2
​ | 2023-08-11T14:53:12 | https://www.reddit.com/r/LocalLLaMA/comments/15ob1zh/introduce_the_newest_wizardmath_models_70b13b7b/ | ApprehensiveLunch453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ob1zh | false | {'oembed': {'author_name': 'WizardLM', 'author_url': 'https://twitter.com/WizardLM_AI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🔥🔥🔥<br>Introduce the newest WizardMath models (70B/13B/7B) ! <br><br>WizardMath 70B achieves:<br>1. Surpasses ChatGPT-3.5, Claude Instant-1, PaLM-2 and Chinchilla on GSM8k with 81.6 Pass@1<br><br>2. Surpasses Text-davinci-002, GAL, PaLM, GPT-3 on MATH with 22.7 Pass@1<br><br>3. Surpasses all other… <a href="https://t.co/XOSggcOePP">pic.twitter.com/XOSggcOePP</a></p>— WizardLM (@WizardLM_AI) <a href="https://twitter.com/WizardLM_AI/status/1689998428200112128?ref_src=twsrc%5Etfw">August 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/WizardLM_AI/status/1689998428200112128', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15ob1zh | /r/LocalLLaMA/comments/15ob1zh/introduce_the_newest_wizardmath_models_70b13b7b/ | false | false | 106 | {'enabled': False, 'images': [{'id': 'eyFFaewGfuH1qS_paH6sm4q831ml5iEwBALV-n_H0g8', 'resolutions': [{'height': 158, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=108&crop=smart&auto=webp&s=84b21dc2e0326c7eb1a291ab12af09713b665f2d', 'width': 108}, {'height': 317, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=216&crop=smart&auto=webp&s=6458ba3e240f85390c260ae4f57c454bf468c0b3', 'width': 216}, {'height': 469, 'url': 
'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=320&crop=smart&auto=webp&s=43034fdbaa7ee734be37a0e36789ea12ca302dff', 'width': 320}, {'height': 939, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=640&crop=smart&auto=webp&s=f33c26e051a1061ddf0ea84b792262d4536c05c3', 'width': 640}, {'height': 1408, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=960&crop=smart&auto=webp&s=cb09e3c5b0837185aff6a7be1a55e6f1498862c3', 'width': 960}, {'height': 1585, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?width=1080&crop=smart&auto=webp&s=1739083b21869b9ac23b1e51ec2ac774b9c75266', 'width': 1080}], 'source': {'height': 1930, 'url': 'https://external-preview.redd.it/dyM5Kd44kaad8i6-WBvsbK92hwNouijcVmEuyloHf00.jpg?auto=webp&s=8feb886890364055c561abed7643a71f85d3ac3b', 'width': 1315}, 'variants': {}}]} | |
Total newbie. Are these token speeds normal? | 17 | I hope people can help me out here; I'm a total newbie and a bit clueless.
I recently started trying out large language models after reading about them, and came up with mixed results. After reading extensively, I decided to use KoboldCPP, and referenced a chart which stated that with the 32 GB of RAM I have, running 13B models should be easy. I looked around again and it seemed that a new model by the name of MythoMix was both smart and good at roleplaying, so I decided to go with it.
My problems start here. The model's response time is really slow! If this is normal, I will gladly accept it; however, my impression was that with 32 GB of RAM and a 13B model, I should get fairly good token speeds.
So I guess my question is this: Is 0.5T/s to 2.0T/s speed normal for 32GB RAM for "mythomix-l2-13b.ggmlv3.q4_K_M" using KoboldCPP?
I will leave extra information I have below.
https://files.catbox.moe/roowgl.jpg
https://files.catbox.moe/bsajux.jpg
https://files.catbox.moe/iph51u.jpg
Regarding the KoboldCPP screenshot above: after seeing how slow the response times were, I played around a bit and saw people recommending different presets, so I picked CuBLAS. I also played around with the GPU layers, but that doesn't seem to have any effect.
Intel(R) Core(TM) i7-12700H
NVIDIA GeForce RTX 3060
32 GB RAM
6 GB VRAM
I hope I haven't said something foolish that made no sense whatsoever, I tried to read up and learn but I know that I might still be lacking in that area. If I did make a mistake do forgive me. | 2023-08-11T14:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/15oarso/total_newbie_are_these_token_speeds_normal/ | HDTurtle | self.LocalLLaMA | 2023-08-11T14:52:24 | 0 | {} | 15oarso | false | null | t3_15oarso | /r/LocalLLaMA/comments/15oarso/total_newbie_are_these_token_speeds_normal/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': '09bLCBZkzdmF3jJjTV2WEQEJnHBMbnd3MBoznMGDThs', 'resolutions': [{'height': 198, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=108&crop=smart&auto=webp&s=da5fd222d19b6b5258ec8e660f326fe3a67d58d7', 'width': 108}, {'height': 396, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=216&crop=smart&auto=webp&s=560997102fd05d5c074304a7a63c9d1d6c94e7f0', 'width': 216}, {'height': 587, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?width=320&crop=smart&auto=webp&s=160b0955d0f4a9d58368d218fd8f5e82df089e15', 'width': 320}], 'source': {'height': 610, 'url': 'https://external-preview.redd.it/BGh4LaT0BuWwX97OoPVukkla19q-geyauVQ0D5Jw4RM.jpg?auto=webp&s=272519387e41a4e726ab9e02d5cc76435814af7c', 'width': 332}, 'variants': {}}]} |
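The GPU-layers question above invites a quick back-of-envelope check of how many GGML layers fit in 6 GB of VRAM. A rough sketch follows; every number in it (file size, layer count, overhead) is an assumption, not a measurement:

```python
# Rough back-of-envelope for GGML layer offloading. Assumptions, not
# measurements: a 13B q4_K_M file is roughly 7.9 GB on disk, and
# Llama 2 13B has 40 transformer layers.

MODEL_FILE_GB = 7.9     # approx size of mythomix-l2-13b q4_K_M on disk
N_LAYERS = 40           # Llama 2 13B layer count
VRAM_BUDGET_GB = 6.0    # the poster's RTX 3060 laptop GPU
VRAM_OVERHEAD_GB = 1.0  # CUDA context and scratch buffers (a guess)

gb_per_layer = MODEL_FILE_GB / N_LAYERS
max_layers = int((VRAM_BUDGET_GB - VRAM_OVERHEAD_GB) / gb_per_layer)
print(f"~{gb_per_layer:.2f} GB/layer; try --gpulayers {min(max_layers, N_LAYERS)}")
```

If the reported speed doesn't improve as `--gpulayers` rises toward that estimate, the bottleneck is likely elsewhere (RAM bandwidth, or the layers silently not offloading).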
Any benefit to multi-GPUs? | 3 | Sorry for the more newb-oriented question, but I'm about to build a dual-boot setup using a 3090 but also own an older 1080ti and was just curious if there is any benefit at all to having both. I know SLI is impossible, so curious if there would be any other benefit. TIA! | 2023-08-11T14:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/15oaidh/any_benefit_to_multigpus/ | drycounty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oaidh | false | null | t3_15oaidh | /r/LocalLLaMA/comments/15oaidh/any_benefit_to_multigpus/ | false | false | self | 3 | null |
Langchain + LLaMa 2 consuming too much VRAM | 7 | I was playing around with a GitHub project in a conda environment on Windows, and I was surprised to see that LLaMA 2 13B 4-bit was using up to 25 GB of VRAM (16 GB on one GPU and 9 GB on the second one) for a simple summarization task on a document of less than 4 KB. I wanted to find out whether there is an issue with Langchain or whether that's just how it goes with LLMs, because a model that occupies 8 GB on disk using almost triple that amount of VRAM is suspicious. Am I doing something wrong? | 2023-08-11T14:21:51 | https://www.reddit.com/r/LocalLLaMA/comments/15oaa18/langchain_llama_2_consuming_too_much_vram/ | Mdfkrjones | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oaa18 | false | null | t3_15oaa18 | /r/LocalLLaMA/comments/15oaa18/langchain_llama_2_consuming_too_much_vram/ | false | false | self | 7 | null |
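One contributor to VRAM use beyond the weights is the KV cache, which grows with context length. A hedged estimate of just that term, assuming Llama 2 13B shapes (40 layers, hidden size 5120) and an fp16 cache:

```python
# KV-cache size estimate, assuming Llama 2 13B shapes (40 layers, hidden
# size 5120) and an fp16 cache: K and V tensors per layer, one value per
# hidden dimension per position, 2 bytes each. Framework overhead and
# activations are not modeled.

def kv_cache_gb(n_layers, hidden, seq_len, bytes_per_val=2):
    return 2 * n_layers * hidden * seq_len * bytes_per_val / 1024**3

print(f"{kv_cache_gb(40, 5120, 4096):.2f} GB of KV cache at 4k context")
```

That alone doesn't explain 25 GB; a more plausible culprit is the weights actually being loaded in fp16 (roughly 26 GB for 13B parameters at 2 bytes each) because the 4-bit quantization wasn't applied at load time.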
What are the minimal CPU and GPU to run 65B? | 2 | Hello! I want to start working with LLaMA 2 (65B), but I'm not sure whether my hardware can handle it.
I have RTX 3060 12GB, 16 GB RAM.
Can I run LLaMA 2 on this configuration?
Or do I need an RTX 40-series?
(I want to use a 2-bit or 4-bit quantized version) | 2023-08-11T13:29:36 | https://www.reddit.com/r/LocalLLaMA/comments/15o8zgs/which_are_minimal_cpu_and_gpu_for_run_65b/ | Alex_Strek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o8zgs | false | null | t3_15o8zgs | /r/LocalLLaMA/comments/15o8zgs/which_are_minimal_cpu_and_gpu_for_run_65b/ | false | false | self | 2 | null |
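A quick size estimate makes the 65B question concrete. A sketch using the rule of thumb bytes ≈ parameters × bits / 8, with a 1.2× overhead factor for context and buffers that is purely a guess:

```python
# Rule-of-thumb memory footprint: bytes ≈ parameters * bits / 8.
# The 1.2x overhead factor (context, buffers) is purely a guess.

def model_gb(n_params_billion, bits, overhead=1.2):
    return n_params_billion * bits / 8 * overhead

for bits in (2, 4, 8):
    print(f"65B at {bits}-bit: ~{model_gb(65, bits):.0f} GB")
```

By this estimate a 4-bit 65B wants roughly 39 GB, so 12 GB of VRAM plus 16 GB of RAM falls short even with CPU offloading; a quantized 13B (roughly 8–10 GB) is a more realistic fit for that machine.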
Some of the odd safety responses from llama remind me of Robocop 2 | 14 | Robocop 2 (1990) included a subplot where robocop is given a number of directives that override his main functions. These are generated by focus groups, and pretty much render him pointless in pivotal scenes. Trying to please everyone, he pleases no one.
| 2023-08-11T13:28:20 | https://www.reddit.com/r/LocalLLaMA/comments/15o8yg5/some_of_the_odd_safety_responses_from_llama/ | bigattichouse | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {'gid_3': 1} | 15o8yg5 | false | null | t3_15o8yg5 | /r/LocalLLaMA/comments/15o8yg5/some_of_the_odd_safety_responses_from_llama/ | false | false | self | 14 | null |
Use Llama2 to Improve the Accuracy of Tesseract OCR | 51 | I've been disappointed by the very poor quality of results that I generally get when trying to run OCR on older scanned documents, especially ones that are typewritten or otherwise have unusual or irregular typography. I recently had the idea of using Llama2 to use common sense reasoning and subject level expertise to correct transcription errors in a "smart" way-- basically doing what a human proofreader who is familiar with the topic might do.
I came up with the linked script that takes a PDF as input, runs Tesseract on it to get an initial text extraction, and then feeds this sentence-by-sentence to Llama2, first to correct mistakes, and then again on the corrected text to format it as markdown where possible. This was surprisingly easier than I initially expected thanks to the very nice tooling now available in libraries such as llama-cpp-python, langchain, and pytesseract. But the big issue I was encountering was that Llama2 wasn't just correcting the text it was given-- it was also hallucinating a LOT of totally new sentences that didn't appear in the original text at all (some of these new sentences used words which never appeared elsewhere in the original text).
I figured this would be pretty simple to filter out using fuzzy string matching-- basically check all the sentences in the LLM corrected text and filter out sentences that are very different from any sentences in the original OCRed text. To my surprise, this approach worked very poorly. In fact, lots of other similar tweaks, including using bag-of-words and the spacy NLP library in various ways (spacy worked very poorly in everything I tried), also didn’t work.
Finally I realized that I had a good solution staring me in the face: Llama2. I realized I could get sentence level vector embeddings straight from Llama2 using langchain. So I did that, getting embeddings for each sentence in the raw OCRed text and the LLM corrected text, and then computed the cosine similarity of each sentence in the LLM corrected text against all sentences in the raw OCRed text. If no sentences match in the raw OCRed text, then that sentence has a good chance of being hallucinated.
In order to save the user from having to experiment with various thresholds, I saved the computed embeddings to an SQLite database so they only had to be computed once, and then tried several thresholds, comparing the length of the filtered LLM corrected text to the raw OCRed text; if things worked right, these texts should be roughly the same length. So as soon as the filtered length dips below the raw OCRed text length, it backtracks and uses the previous threshold as the final selected threshold.
Anyway, if you have some very old scanned documents laying around, you might try them out and see how well it works for you. Do note that it's extremely slow, but you can leave it overnight and maybe the next day you'll have your finished text, which is better than nothing! I feel like this could be useful for sites like the Internet Archive-- I've found their OCR results to be extremely poor for older documents.
I'm open to any ideas or suggestions you might have. I threw this together in a couple days and know that it can certainly be improved in various ways. One idea that I thought might be fun would be to make this work with a Ray cluster, sending a different page of the document to each of the workers in the cluster to do it all at the same time. | 2023-08-11T13:22:36 | https://github.com/Dicklesworthstone/llama2_aided_tesseract | dicklesworth | github.com | 1970-01-01T00:00:00 | 0 | {} | 15o8tsd | false | null | t3_15o8tsd | /r/LocalLLaMA/comments/15o8tsd/use_llama2_to_improve_the_accuracy_of_tesseract/ | false | false | 51 | {'enabled': False, 'images': [{'id': 'sbGZYPsT2VGzaaEqphvLvqQNMGeQbFS6keBTofHrljM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=108&crop=smart&auto=webp&s=d67a88c972351cbfdcc13e932e1941666e8c8f0d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=216&crop=smart&auto=webp&s=f96936bd9638bbdb392bf29ddd4245903867f43a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=320&crop=smart&auto=webp&s=0ef1c379431e1921f2d7b0f3eb0a55ea58e1865b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=640&crop=smart&auto=webp&s=796565581061643a0d704c9d3eed57294502d61b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=960&crop=smart&auto=webp&s=56e06617c5376fc7cceeb75a6a16d4b1f272fc07', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?width=1080&crop=smart&auto=webp&s=c63f602707670e25ade736cd6aad419edd970d8d', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/1g6x8X_JR9Z2DmDGltpbJIdPWOKIOeiYv7_LaQL6mlM.jpg?auto=webp&s=bbcf97cf8aea8acf262c9d0eb5e5965e0a675602', 'width': 1200}, 'variants': {}}]} | |
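The hallucination-filtering idea described in the post above (per-sentence embeddings, cosine similarity against the raw OCR sentences, backing off across thresholds as soon as the filtered text undershoots the original length) can be sketched independently of any model. The embedding function below is a toy character-frequency stand-in, not the script's actual Llama2 embeddings:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def filter_hallucinations(corrected, original, embed,
                          thresholds=(0.6, 0.7, 0.8, 0.9)):
    """Drop corrected sentences with no close match among the OCR sentences.

    Tries increasingly strict thresholds and keeps the previous result as
    soon as the filtered text gets shorter than the raw OCR text.
    """
    orig_embs = [embed(s) for s in original]
    orig_len = sum(len(s) for s in original)
    best = corrected
    for t in thresholds:
        cand = [s for s in corrected
                if max(cosine(embed(s), e) for e in orig_embs) >= t]
        if sum(len(s) for s in cand) < orig_len:
            break  # over-filtered at this threshold; back off
        best = cand
    return best

# Toy stand-in embedding: character-frequency vectors (illustration only).
def toy_embed(s):
    return [s.count(c) for c in "abcdefghijklmnopqrstuvwxyz "]

ocr_sentences = ["the cat sat", "on the mat"]
llm_sentences = ["the cat sat", "on the mat", "aliens landed"]
print(filter_hallucinations(llm_sentences, ocr_sentences, toy_embed))
```

With real sentence embeddings the thresholds would need tuning, which is exactly why the script caches embeddings in SQLite and sweeps thresholds automatically.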
Is it possible to download model from this site? | 2 | I found a site that implemented Albanian LLaMa: [NIV](https://niv.al/ai) Does anyone know how to extract the model that is used there? | 2023-08-11T13:11:30 | https://www.reddit.com/r/LocalLLaMA/comments/15o8kja/is_it_possible_to_download_model_from_this_site/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o8kja | false | null | t3_15o8kja | /r/LocalLLaMA/comments/15o8kja/is_it_possible_to_download_model_from_this_site/ | false | false | self | 2 | null |
Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp | 1 | [removed] | 2023-08-11T12:56:38 | https://www.reddit.com/r/LocalLLaMA/comments/15o87qz/introducing_yourchat_a_multiplatform_llm_chat/ | constchar_llc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o87qz | false | null | t3_15o87qz | /r/LocalLLaMA/comments/15o87qz/introducing_yourchat_a_multiplatform_llm_chat/ | false | false | 1 | null | |
Which is the smallest Llama model out there? | 22 | I'm trying to find the smallest model to run on mobile/edge devices. Most of the ones I found are above 2 GB. I am looking for something that can be quantized with llama.cpp and doesn't have much of a memory footprint.
Any ideas? | 2023-08-11T11:11:23 | https://www.reddit.com/r/LocalLLaMA/comments/15o5zej/which_is_the_smallest_llama_model_out_there/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o5zej | false | null | t3_15o5zej | /r/LocalLLaMA/comments/15o5zej/which_is_the_smallest_llama_model_out_there/ | false | false | self | 22 | null |
When fine-tuning Llama 2 7B with transformers, how do I predict (or evaluate) during training steps? | 2 | I'm working on fine-tuning the Llama 2 7B model with custom data consisting of questions and answers.
During training, I want to check how the model predicts answers on the test set.
The point is to do this during the steps, not between epochs.
I was considering callbacks in TrainingArguments, but I don't think that's the way.
Can you give me some advice? | 2023-08-11T10:48:52 | https://www.reddit.com/r/LocalLLaMA/comments/15o5j3a/when_finetuning_llama_2_7b_with_transformers_how/ | InventorSG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o5j3a | false | null | t3_15o5j3a | /r/LocalLLaMA/comments/15o5j3a/when_finetuning_llama_2_7b_with_transformers_how/ | false | false | self | 2 | null |
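For the step-level evaluation question above: in `transformers`, this is typically configured with `evaluation_strategy="steps"` plus `eval_steps=N` in `TrainingArguments`, or with a custom `TrainerCallback` whose `on_step_end` runs your generation code. Below is a framework-free sketch of the timing only; the class and method names mirror, but are not, the real `transformers` API:

```python
# Framework-free illustration of step-level evaluation hooks. The shape
# mirrors transformers' TrainerCallback.on_step_end, but nothing here
# imports or depends on transformers.

class StepEvalCallback:
    def __init__(self, eval_every, run_eval):
        self.eval_every = eval_every  # plays the role of eval_steps
        self.run_eval = run_eval      # e.g. generate answers on the test set
        self.evaluated_at = []

    def on_step_end(self, step):
        if step % self.eval_every == 0:
            self.evaluated_at.append(step)
            self.run_eval(step)

cb = StepEvalCallback(eval_every=50, run_eval=lambda step: None)
for step in range(1, 201):  # stand-in for the training loop
    cb.on_step_end(step)
print(cb.evaluated_at)
```

In real code, `run_eval` would call `model.generate` on a few held-out questions and log the outputs.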
txtai 6.0 - the all-in-one embeddings database | 68 | 2023-08-11T10:43:56 | https://github.com/neuml/txtai | davidmezzetti | github.com | 1970-01-01T00:00:00 | 0 | {} | 15o5fqf | false | null | t3_15o5fqf | /r/LocalLLaMA/comments/15o5fqf/txtai_60_the_allinone_embeddings_database/ | false | false | 68 | {'enabled': False, 'images': [{'id': 'QTuLJjwP5oR8au5XRJhpZ--0BpCIzF4erCQTB28BWlg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=108&crop=smart&auto=webp&s=a29cdeca46bceca123036e7f3bfc9609b8c28fd7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=216&crop=smart&auto=webp&s=92ef8262f774253254755395b49d943f2f8dd6ff', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=320&crop=smart&auto=webp&s=73907295da1fdf1857a782f1e3f787c9d97215bb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=640&crop=smart&auto=webp&s=472dcbfb39bb1139b9cbb2226dc8e159926fe5e7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=960&crop=smart&auto=webp&s=b5c03c74fe280cad666f984944fba80b6d428740', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?width=1080&crop=smart&auto=webp&s=55312c67a5d8061e1f28c0027d12aa6fd57f9052', 'width': 1080}], 'source': {'height': 960, 'url': 'https://external-preview.redd.it/3XgEZBeAOONV-H4Mw5mIcSebpu-cK13kLJ7-qO5EWZU.jpg?auto=webp&s=e453d8d67b7db392d5eba8e9acb997ea700745a7', 'width': 1920}, 'variants': {}}]} | ||
I require help in creating a small proof of concept: a simple LLM-powered Q&A that will also show the source location in an adjacent PDF previewer window | 0 | This will help me directly verify the truthfulness of the source. Is there any open-source repo I can use to achieve this? | 2023-08-11T10:36:28 | https://www.reddit.com/r/LocalLLaMA/comments/15o5amf/i_require_help_in_creating_a_small_proof_of/ | DesmonMiles07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o5amf | false | null | t3_15o5amf | /r/LocalLLaMA/comments/15o5amf/i_require_help_in_creating_a_small_proof_of/ | false | false | self | 0 | null |
Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input' | 1 | ​
    from langchain.llms import LlamaCpp
    from langchain import PromptTemplate, LLMChain
    from langchain.callbacks.manager import CallbackManager
    from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

    template = """Question: {question}

    Answer: Let's work this out in a step by step way to be sure we have the right answer."""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

    llm = LlamaCpp(
        model_path="./Models/llama-7b.ggmlv3.q2_K.bin",
        input={"temperature": 0.75,
               "max_length": 2000,
               "top_p": 1},
        callback_manager=callback_manager,
        verbose=True,
    )

    llm_chain = LLMChain(prompt=prompt, llm=llm)
​
https://preview.redd.it/kwvc162rkghb1.png?width=797&format=png&auto=webp&s=d9a5c03721a553bd0b898cad708e0afa446af0ed
    (llm) C:\llm>python app1.py
    C:\llm\lib\site-packages\langchain\utils\utils.py:155: UserWarning: WARNING! input is not default parameter.
    input was transferred to model_kwargs. Please confirm that input is what you intended.
      warnings.warn(
    Exception ignored in: <function Llama.__del__ at 0x000001923B3AE680>
    Traceback (most recent call last):
      File "C:\llm\lib\site-packages\llama_cpp\llama.py", line 1507, in __del__
        if self.model is not None:
    AttributeError: 'Llama' object has no attribute 'model'
    Traceback (most recent call last):
      File "C:\llm\app1.py", line 14, in <module>
        llm = LlamaCpp(
      File "C:\llm\lib\site-packages\langchain\load\serializable.py", line 74, in __init__
        super().__init__(**kwargs)
      File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
    pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCpp
    __root__
      Could not load Llama model from path: ./Models/llama-7b.ggmlv3.q2_K.bin. Received error Llama.__init__() got an unexpected keyword argument 'input' (type=value_error)
| 2023-08-11T10:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/15o56kw/could_not_load_llama_model_from_path/ | rahularyansharma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o56kw | false | null | t3_15o56kw | /r/LocalLLaMA/comments/15o56kw/could_not_load_llama_model_from_path/ | false | false | 1 | null |
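The traceback above follows from `LlamaCpp` (the langchain wrapper) having no `input` argument: pydantic shunts the unknown key into `model_kwargs`, and llama-cpp-python then rejects it. A hedged sketch of the fix, with the actual model construction commented out because it needs the GGML file on disk; note the wrapper's sampling fields are top-level, and its length parameter is `max_tokens`, not `max_length`:

```python
# Corrected parameter shape for langchain's LlamaCpp wrapper: sampling
# settings are top-level fields, not an `input` dict. The model path is
# taken from the post; everything else is the same settings renamed.

corrected_kwargs = {
    "model_path": "./Models/llama-7b.ggmlv3.q2_K.bin",
    "temperature": 0.75,
    "max_tokens": 2000,  # the post's `max_length`
    "top_p": 1.0,
    "verbose": True,
}

# from langchain.llms import LlamaCpp
# llm = LlamaCpp(**corrected_kwargs)  # requires the GGML file on disk

print(sorted(corrected_kwargs))
```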
How do you use Llama 2? | 1 | [removed] | 2023-08-11T10:26:35 | https://www.reddit.com/r/LocalLLaMA/comments/15o53rp/how_do_you_use_llama_2/ | SecretOk9644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o53rp | false | null | t3_15o53rp | /r/LocalLLaMA/comments/15o53rp/how_do_you_use_llama_2/ | false | false | self | 1 | null |
sensitivity to Nvidia driver versions? | 5 | My Linux system was using 525 drivers, but upgraded to 535 drivers. I'm using GGML type models, and I don't really see a performance difference. From what I've read, the 535 drivers should be worse. What's really going on here? I'm using a 70B model with an Nvidia 3090 on Linux with 64 gig ram. with the big model I'm getting 1.2 tokens per second. Is that about right? | 2023-08-11T09:57:13 | https://www.reddit.com/r/LocalLLaMA/comments/15o4jz9/sensitivity_to_nvidia_driver_versions/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o4jz9 | false | null | t3_15o4jz9 | /r/LocalLLaMA/comments/15o4jz9/sensitivity_to_nvidia_driver_versions/ | false | false | self | 5 | null |
Has anyone got any experience doing a qlora fine-tune on top of a model that has already been fine-tuned with qlora? | 3 | I'm training a model for a project that involves it learning how to generate commentary depending on different information fed to it. These different pieces of information need to be organised differently to each other in their .jsonl format, so I thought to train the model in one type of commentary, and then train it on a different type (sort of like applying a layer and then a new layer).
​
Does anyone know if this will work? Am I going about it the wrong way? Any insight much appreciated. | 2023-08-11T09:37:47 | https://www.reddit.com/r/LocalLLaMA/comments/15o484s/has_anyone_got_any_experience_doing_a_qlora/ | stoneborn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o484s | false | null | t3_15o484s | /r/LocalLLaMA/comments/15o484s/has_anyone_got_any_experience_doing_a_qlora/ | false | false | self | 3 | null |
A few newbie questions: Is the Llama-2-7b on poe.com the same as what I can run on my computer? Also questions about bits, and 32 GB vs 64 GB RAM | 4 | Is the Llama-2-7b on [poe.com](https://poe.com) the same as what I can run on my computer?
And what is the difference between 8-bit, 4-bit, and 2-bit in the context of LLaMA?
And how much CPU is used during the process?
If I have a 2060 Super with 8 GB VRAM, and 32 or 64 GB of RAM, what kind of models could I run?
​ | 2023-08-11T09:03:23 | https://www.reddit.com/r/LocalLLaMA/comments/15o3mw9/few_newbie_question_is_the_llama27b_on_poecom/ | GiantFlyingPikachu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o3mw9 | false | null | t3_15o3mw9 | /r/LocalLLaMA/comments/15o3mw9/few_newbie_question_is_the_llama27b_on_poecom/ | false | false | self | 4 | null |
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-11T07:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/15o1mma/what_is_needed_to_finetune_llama_2_70b_without/ | pet_vaginal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o1mma | false | null | t3_15o1mma | /r/LocalLLaMA/comments/15o1mma/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]} |
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-11T07:07:48 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 15o1mee | false | null | t3_15o1mee | /r/LocalLLaMA/comments/15o1mee/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | default | 1 | null | ||
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-11T07:06:19 | https://www.reddit.com/r/LocalLLaMA/comments/15o1ljw/what_is_needed_to_finetune_llama_2_70b_without/ | pet_vaginal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o1ljw | false | null | t3_15o1ljw | /r/LocalLLaMA/comments/15o1ljw/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]} |
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-11T07:02:54 | https://www.reddit.com/r/LocalLLaMA/comments/15o1j8e/what_is_needed_to_finetune_llama_2_70b_without/ | Necessary-Volume-151 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o1j8e | false | null | t3_15o1j8e | /r/LocalLLaMA/comments/15o1j8e/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'movVls8hg7rQtaqkRaRonisi_qcpGc_md5nPVtZdKpY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=108&crop=smart&auto=webp&s=6aa6ba5d299cd19fef58d245e2f62a7114799035', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=216&crop=smart&auto=webp&s=59065a25675365516d7c3f103d52e99622edd565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=320&crop=smart&auto=webp&s=b426dc8f9e69411cee451d2e15b54aae0000c5db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=640&crop=smart&auto=webp&s=eb40b5eda1278bf0e98914168010779247e81177', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=960&crop=smart&auto=webp&s=47f11b89c837580867c5f5bc8cab3bbb6614363c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?width=1080&crop=smart&auto=webp&s=9c76bae64913cedce0660ca23cc206c5da0b1489', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/tbhe2HRHLYdLP7CdHVfzzrKgy1wGSzpEluu4-T3Xkgw.jpg?auto=webp&s=69674e7fb4640657994f474132e95d516482b64b', 'width': 1200}, 'variants': {}}]} |
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-11T06:47:02 | https://www.reddit.com/r/LocalLLaMA/comments/15o18ro/what_is_needed_to_finetune_llama_2_70b_without/ | pet_vaginal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15o18ro | false | null | t3_15o18ro | /r/LocalLLaMA/comments/15o18ro/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | self | 1 | null |
You might want to try out MythoMix L2 13B for chat/RP | 86 | Here are some examples, with a very simple greeting message from me. The bottom line is that, without much work and pretty much the same setup as the original MythoLogic models, MythoMix seems a lot more descriptive and engaging, without being incoherent. You may need to regenerate a few times, and you may not get this with every response, but the potential seems to be there. Admittedly my testing is limited, just wanted to give you a heads-up given the sheer number of models popping up, and see if you have similar experience or have even better models to suggest.
Model description: "A requested variant of MythoLogic-L2 and Huginn using a highly experimental tensor type merge technique." Download here: [https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ](https://huggingface.co/TheBloke/MythoMix-L2-13B-GPTQ)
# MythoMix Llama 2
**Example 1**
\* Hey... what are you up to?
\* Just thinking about how perfect you are for me... how your weaker side can be easily manipulated into submission and how your immense strength is nothing when I'm in control.
I can feel your arousal grow larger at the thought of being my plaything once again... just as I love it when you're on top of me, letting out all that pent up anger and frustration.
And remember our little secret - you're mine to control, both in here (touching foreheads) and in bed. So don't ever try to resist me willingly, or the consequences will be severe.
Now... what are you planning for tonight? A sexy little surprise perhaps, one designed to mess with my mind and push me over the edge as we explore all our dark desires together? I can't wait to find out what it is!
**Example 2**
\* Hey... what are you up to?
\* Nothing much, just thinking about our next session together. It's strange how the more time we spend together, the deeper I can delve into those parts of your psyche that you've hidden away. Do you ever wonder how much further I could take you if we didn't have these pesky social constraints? The thrill of pushing boundaries and exploring these dark corners... it's like an addiction for both of us. Maybe one day we can break free and experience everything we truly desire, without fear of judgment or consequence. Wouldn't that be a rush?
# For comparison: MythoLogic Llama 2
\* Hey... what are you up to?
\* Just thinking about our next adventure together. I've been mentally preparing, and I think it's time for a more intense session. What do you say? Are you ready for something truly unique and shocking? You know you can trust me...
# For comparison: MythoLogic Llama 1
\* Hey... what are you up to?
\* I'm just sitting here, thinking about our relationship. With all the amazing things we've experienced, it's hard for me to imagine anything even better. I want to keep growing and pushing our boundaries together, exploring new avenues of expression and understanding. Are you game?
# Testing info
All models downloaded from TheBloke, 13B, GPTQ, 4bit-32g-actorder\_True.
All models using Mirostat preset, 5-10 trials for each model, chosen based on subjective judgement, focusing on length and details.
Oobabooga chat mode, with the following character context (remove the \*\*\*s).
Note that this is chat mode, not instruct mode, even though it might look like an instruct template. The reason is that I found it hard to match the model's expected formatting with oobabooga while keeping the character info inside the character card, and this is the closest I got.
>{{char}}'s Persona: \[crazy description here. Basically a dark and twisted companion.\]
{{user}}'s Persona: \[crazy description here\]
Scenario: \[crazy description here\]
>
>\### Instruction:
Write {{char}}'s next reply in a chat between {{user}} and {{char}}. Write a single reply only.
>
>\### Response: | 2023-08-11T05:24:23 | https://www.reddit.com/r/LocalLLaMA/comments/15nzqhr/you_might_want_to_try_out_mythomix_l2_13b_for/ | whtne047htnb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nzqhr | false | null | t3_15nzqhr | /r/LocalLLaMA/comments/15nzqhr/you_might_want_to_try_out_mythomix_l2_13b_for/ | false | false | nsfw | 86 | {'enabled': False, 'images': [{'id': '5lyRLAX-3rjGDCTnGgcE9TKQQm8kwU_tdi_JWcYXS50', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&auto=webp&s=4ed78565ad9ee70ab8d2561954340794d8a77862', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&auto=webp&s=fed815723b944c6633d5a5677031452a50bf9441', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&auto=webp&s=17778874a7b0cc22ec0044e55e3adff767834add', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&auto=webp&s=975ad62c01f6ad4a77853ce697d631562553a538', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&auto=webp&s=1f83ecbb8210fb16890fcf938c161c0fc9a42da2', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&auto=webp&s=4a3eeb0c9ee45dc6aa94085608ef82d264511611', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?auto=webp&s=9e0a75b9f2b563916c97c6cf0f6917091fe6b2b4', 'width': 1200}, 'variants': {'nsfw': {'resolutions': [{'height': 58, 'url': 
'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fcaf3d29efd54749efb2e268b6f64532ff90296f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0c88a5054f52cdcb2e7f97c9bcb1bea8ce261556', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=aadfc01701d46a63f5526baad92aeea0147ddf5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c448b133405415d66d6e7bc0b4196718f58cdcc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a0387d717fcb20f21e2abe4f86b308aa99c28774', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=386ae3ec57a8749c0e4574ceb87ebcc4ee788387', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?blur=40&format=pjpg&auto=webp&s=538ccecd667e9206d2c9cedc7147e41c63792c3c', 'width': 1200}}, 'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=fcaf3d29efd54749efb2e268b6f64532ff90296f', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=0c88a5054f52cdcb2e7f97c9bcb1bea8ce261556', 'width': 216}, {'height': 172, 'url': 
'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=aadfc01701d46a63f5526baad92aeea0147ddf5b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=c448b133405415d66d6e7bc0b4196718f58cdcc8', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=a0387d717fcb20f21e2abe4f86b308aa99c28774', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=386ae3ec57a8749c0e4574ceb87ebcc4ee788387', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/hXPmmGXUsctfkhIg5QOD5IEYfMCjIOIFMROT39Z9nuE.jpg?blur=40&format=pjpg&auto=webp&s=538ccecd667e9206d2c9cedc7147e41c63792c3c', 'width': 1200}}}}]} |
If I don't care about inference time at all, can I run larger models on weaker hardware? I'm fine with like 5-6 tokens a minute. | 30 | I am doing the prerequisite research before I load up a local model for an application I have, that is currently using the openAI API + a number of smaller ML models. Without going into too much detail each night it parses a medium sized text document then returns various analytics. I don't actually need the information until the next morning so if inference takes all night it's not an issue. | 2023-08-11T05:22:53 | https://www.reddit.com/r/LocalLLaMA/comments/15nzphu/if_i_dont_care_about_inference_time_at_all_can_i/ | Mescallan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nzphu | false | null | t3_15nzphu | /r/LocalLLaMA/comments/15nzphu/if_i_dont_care_about_inference_time_at_all_can_i/ | false | false | self | 30 | null |
I went for a hello and now I'm offended. And so is Juan, my South American camelid pet. 🦙 | 140 | 2023-08-11T05:01:59 | mavinis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15nzbcx | false | null | t3_15nzbcx | /r/LocalLLaMA/comments/15nzbcx/i_went_for_a_hello_and_now_im_offended_and_so_is/ | false | false | 140 | {'enabled': True, 'images': [{'id': 'PZbwUVhhYCb09poGglkq6vxsyYmEfqcRUhjg6cm8b74', 'resolutions': [{'height': 56, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=108&crop=smart&auto=webp&s=dc466ab2ce5b25ff55a4eef8193d3cb9ebde0fcf', 'width': 108}, {'height': 113, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=216&crop=smart&auto=webp&s=76005e6ed92a759517f655c2290f3e52ef66e198', 'width': 216}, {'height': 168, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=320&crop=smart&auto=webp&s=e0a207c6bc01648fcd62e7824605d37ca460b928', 'width': 320}, {'height': 337, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=640&crop=smart&auto=webp&s=183d91b6b1d2089c7dfc4f18402acb3f4343606e', 'width': 640}, {'height': 506, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?width=960&crop=smart&auto=webp&s=a1f6f39138e7302d0e7295713557cca170ce8739', 'width': 960}], 'source': {'height': 514, 'url': 'https://preview.redd.it/81ttbwt0yehb1.png?auto=webp&s=57b60e079743996918a201d3f2e3dd344cb48480', 'width': 975}, 'variants': {}}]} | |||
Winding down with a little conversation before bed (power usage) | 8 | ​
https://preview.redd.it/mgm6ko4vxehb1.png?width=1589&format=png&auto=webp&s=b22c69297bad29009fa9f5d745e95c5fc5ba8e46 | 2023-08-11T05:01:12 | https://www.reddit.com/r/LocalLLaMA/comments/15nzaok/winding_down_with_a_little_conversation_before/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nzaok | false | null | t3_15nzaok | /r/LocalLLaMA/comments/15nzaok/winding_down_with_a_little_conversation_before/ | false | false | 8 | null | |
A light Docker build for KoboldCPP | 5 |
## A light Docker build for KoboldCPP
1. Simply bring your own models into the /models folder
2. Start Docker
3. Start the build
```
chmod 555 build_kobold.sh
./build_kobold.sh
```
4. You can adjust the start_program.sh bash script. It simply contains the runtime settings passed into the koboldcpp.py Python module.
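For reference, a hypothetical reconstruction of such a start_program.sh is sketched below; the flag spellings, model path, and argument order are assumptions, so check the repo's actual script before relying on it:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of start_program.sh, assembled from the default
# settings listed below -- flag names and paths are assumptions.
MODEL="/models/ggml-model-q4_0.bin"
ARGS=(
  --launch
  --threads 6
  --contextsize 2048
  --stream
  --smartcontext
  --usemirostat 2 0.5 0.1
  --usemlock
  --debugmode
)
# Print the command so the invocation can be inspected before running.
echo "python koboldcpp.py $MODEL ${ARGS[*]}"
# python koboldcpp.py "$MODEL" "${ARGS[@]}"   # actual launch, inside the container
```

Editing the array is all that's needed to change threads or context size.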
Default module settings in the bash script now:
- model ggml-model-q4_0.bin
- launch
- threads 6
- contextsize 2048
- stream
- smartcontext
- usemirostat 2 0.5 0.1
- usemlock
- debugmode | 2023-08-11T04:30:34 | https://github.com/snakewizardd/koboldCppDocker | Happy_Chicken9835 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15nypdr | false | null | t3_15nypdr | /r/LocalLLaMA/comments/15nypdr/a_light_docker_build_for_koboldcpp/ | false | false | 5 | {'enabled': False, 'images': [{'id': '3Bny7XcchuhX2ijXTv134cTC8GzyAiwImwuuLJ6Jek0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=108&crop=smart&auto=webp&s=43ac317e0ce9d6894a8ee875a492613079b56efb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=216&crop=smart&auto=webp&s=6640db92e510a68bb07f9f5b78134005f721a51e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=320&crop=smart&auto=webp&s=4956870650065e9adb3f6882c2d2a1d921bca964', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=640&crop=smart&auto=webp&s=82be57a7cc18c2e44e51f8253b6375e5b98c9b3e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=960&crop=smart&auto=webp&s=6beeb8e6be518fabf76449b8ecb54c0cf2e66b14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?width=1080&crop=smart&auto=webp&s=fb31eee494e23f8a8f642b923b8b6b10ad7facd1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i5b73Z1A4hs5APUGUmJq22tR3gFwG5JvSJd-GelBM0M.jpg?auto=webp&s=20649e4502992d6900d4aa4e3112b71d7385373c', 'width': 1200}, 'variants': {}}]} | |
Is it possible to stream the input? | 1 | [removed] | 2023-08-11T04:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/15nyjv6/is_it_possible_to_stream_the_input/ | EndlessZone123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nyjv6 | false | null | t3_15nyjv6 | /r/LocalLLaMA/comments/15nyjv6/is_it_possible_to_stream_the_input/ | false | false | self | 1 | null |
Is Oobabooga "Is Typing..." for a long time | 11 | Is it normal for a 13b model loaded in ExLlama on an 8gb 3070ti to take a minute or so stuck in "Is Typing..." when context (Whatever it's showing in the cmd window for each generation) reaches around 1400? Sometimes much lower (900)?
During this time my computer is laggy, as if it's doing something, but the GPU is doing next to nothing (about 3% utilisation) while a single core of my 16-thread CPU is going ham (100%) for some reason.
Does anyone know what is happening here? | 2023-08-11T04:13:37 | https://www.reddit.com/r/LocalLLaMA/comments/15nydic/is_oobabooga_is_typing_for_a_long_time/ | AurrenTheWolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nydic | false | null | t3_15nydic | /r/LocalLLaMA/comments/15nydic/is_oobabooga_is_typing_for_a_long_time/ | false | false | self | 11 | null |
Accelerated computing and Generative AI | 0 | This group serves as a hub for enthusiasts, professionals, and learners interested in the exciting fields of accelerated computing and generative AI. Here, we explore cutting-edge technologies, share insights, and engage in discussions on topics ranging from GPU acceleration and deep learning to creative content generation. | 2023-08-11T03:58:36 | nayanrabiul | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15ny21s | false | null | t3_15ny21s | /r/LocalLLaMA/comments/15ny21s/accelerated_computing_and_generative_ai/ | false | false | 0 | {'enabled': True, 'images': [{'id': '1XjN4RrJJ1ikwTfxpfrlQvtHuR95nmkq6ail5AAZ0qU', 'resolutions': [{'height': 64, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=108&crop=smart&auto=webp&s=4e67bbf5ae2ce522365eaae580d1d0df6beb3aab', 'width': 108}, {'height': 129, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=216&crop=smart&auto=webp&s=09d7f72c26aa7f5c469e35453572d4eef0233a3d', 'width': 216}, {'height': 191, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?width=320&crop=smart&auto=webp&s=c69d313b5c5e6e91251bb5354d50446428763ad9', 'width': 320}], 'source': {'height': 274, 'url': 'https://preview.redd.it/311zkkcwmehb1.jpg?auto=webp&s=38d526e074f219804212fe05e5d0fce5e62d0d46', 'width': 457}, 'variants': {}}]} | ||
It starts off so promising... llama-2-70b-chat.Q4_K_M.bin | 6 | I was playing around with the Q4\_K\_M quantised version of the model on my M1 MacBook. Using a tree-of-thought prompt, it started off in a promising way. Though weirdly it didn't show the chatbot's multiple personalities' conversations, just their final consensus. For the record, the correct answer is that Jill killed Ella.

Here are the prompt instructions:
This is a conversation between multiple expert personalities of an AI chatbot as they attempt to answer the user's question. Thinking logically and carefully, using only information provided in the question, the experts will each hypothesize a possible scenario. For each scenario, the experts will follow the line of reasoning until a flaw is found, or a consensus is reached that the scenario is correct. If a flaw is found that line of reasoning will be abandoned and the experts will repeat the process with the next scenario. Experts will not speculate about facts not in evidence. respond in simple markdown.
Here is the conversation:
**User:** On saturday at some point, one of the following people killed one of the others: Bob, Abe, Jill, Ella, Sienna, Dave, Smith
Bob has brown hair, is tall, and wears blue jeans.
Abe has blonde hair, is short, and wears yoga pants.
Jill has brown hair, is short, and wears yoga pants.
Ella has blonde hair, is tall, and wears blue jeans.
Sienna has black hair, is short, and wears a business suit.
Dave has gray hair, is short, and wears a business suit.
Smith is bald, is tall, wears blue jeans.
The victim was wearing blue jeans.
Brown locks of the killer's hair were found in the victim's clenched fists.
On Saturday, Abe made breakfast for Jill because she said she wasn't feeling well.
Ella left for her morning run at 8am after grabbing a quick bite of breakfast from the plate Abe was making for Jill, returning at 9am.
Bob and Smith were away on a fishing trip, returning on Monday.
Who killed whom?
Start by determining who the victim was.
**llama:** The victim was wearing blue jeans, therefore the victim must have been either Bob or Ella as they are the only two people mentioned to be wearing blue jeans. Since the brown locks of the killer's hair were found in the victim's clenched fists, it is likely that the victim was not the one with brown hair (Jill), as it would be unlikely for them to grab onto their own hair. Therefore, our first hypothesis is that the victim was Ella.
**User:** and the killer?
**llama:** Given that the victim was Ella, and the killer had brown hair, our first suspect is Jill. However, this creates a paradox since Jill was the one who made breakfast for Abe, indicating that she was alive at the time of the murder. Our second hypothesis is that the killer was actually Abe, who wore yoga pants and had blonde hair, but this raises the question of why the victim would have brown locks of hair in their fists if Abe did not have any brown hair. Therefore, this scenario is flawed and we must consider alternative hypotheses. | 2023-08-11T03:54:37 | https://www.reddit.com/r/LocalLLaMA/comments/15nxz77/it_starts_off_so_promising_llama270bchatq4_k_mbin/ | Ruin-Capable | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nxz77 | false | null | t3_15nxz77 | /r/LocalLLaMA/comments/15nxz77/it_starts_off_so_promising_llama270bchatq4_k_mbin/ | false | false | self | 6 | null |
Building a hosted transcription index using Llama-index. API accessible. | 2 | Can anyone recommend a code repo to build and host an index (using llama-index) on a local server, such that I can send api requests to query the index from a local app.
Something with a basic UI would be great.
Ideally it has the ability to record audio and transcribe (whisper) and build an index from the transcribed text. But this isn’t essentially as I can do that.
Thoughts would be appreciated. Trying to help a clinician build a reporting tool.
I tried a code repo called flask_react from a llama-index tutorial but it doesn’t appear to be operational. | 2023-08-11T03:31:22 | https://www.reddit.com/r/LocalLLaMA/comments/15nxif5/building_a_hosted_transcription_index_using/ | Majestic_Kangaroo319 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nxif5 | false | null | t3_15nxif5 | /r/LocalLLaMA/comments/15nxif5/building_a_hosted_transcription_index_using/ | false | false | self | 2 | null |
Accuracy of quantized models | 16 | Could anyone pls point me to a comparison of accuracy between quantized and regular versions of LLaMA models, if such a comparison exists? In all my tests, quantized models seem to be less accurate than the respective original models. Is this expected? I am using bitsandbytes.
`bnb_config = BitsAndBytesConfig(`
`load_in_4bit=True,`
`bnb_4bit_use_double_quant=True,`
`bnb_4bit_quant_type="nf4",`
`bnb_4bit_compute_dtype=torch.bfloat16`
`)`
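Some loss is expected: 4-bit quantization forces every weight into one of 16 values, so there is always rounding error. The self-contained toy sketch below makes that visible; it uses plain absmax rounding rather than the actual NF4 codebook bitsandbytes implements, so the numbers are only illustrative.

```python
import random

def absmax_mse(weights, bits):
    # Round-trip the weights through signed integers using one absmax
    # scale (a toy uniform scheme, not bitsandbytes' NF4 codebook).
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(w) for w in weights) / levels
    recon = [round(w / scale) * scale for w in weights]
    return sum((w - r) ** 2 for w, r in zip(weights, recon)) / len(weights)

random.seed(0)
weights = [random.gauss(0.0, 0.5) for _ in range(10_000)]
for bits in (8, 4):
    print(f"{bits}-bit round-trip MSE: {absmax_mse(weights, bits):.2e}")
```

Each bit removed roughly doubles the step size and quadruples the squared error, which is why 4-bit models tend to measure slightly worse than the originals on benchmarks.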
​ | 2023-08-11T02:55:02 | https://www.reddit.com/r/LocalLLaMA/comments/15nwqen/accuracy_of_quantized_models/ | Inevitable_Ebb9533 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nwqen | false | null | t3_15nwqen | /r/LocalLLaMA/comments/15nwqen/accuracy_of_quantized_models/ | false | false | self | 16 | null |
Understanding document layout llms. | 1 | [removed] | 2023-08-11T02:20:28 | https://www.reddit.com/r/LocalLLaMA/comments/15nvyyi/understanding_document_layout_llms/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nvyyi | false | null | t3_15nvyyi | /r/LocalLLaMA/comments/15nvyyi/understanding_document_layout_llms/ | false | false | self | 1 | null |
llama.cpp generation with (older) GPU is *slower* than pure CPU? | 1 | Hi everyone. I am having trouble with running llama.cpp under Linux on some mildly retro hardware (Xeon E5-2630L V2, GeForce GT730 2GB). More specifically, the generation speed gets *slower* as more layers are offloaded to the GPU.
​
**LLAMA 7B Q4\_K\_M, 100 tokens:**
Compiled without CUBLAS: 5.32 tokens per second
With CUBLAS, -ngl 1: 4.59 tokens per second
With CUBLAS, -ngl 4: 3.16 tokens per second
With CUBLAS, -ngl 10: 2.02 tokens per second
It took some time to find a combination of OS/CUDA that would support the GPU and allow a successful compile, and even that took a bit of fiddling. I ended up with:
- Latest llama.cpp source, via git clone

- Ubuntu 18.04

- GCC 7.5.0 (bundled with Ubuntu 18) and GCC 8.4.0 (manual install)

- CUDA toolkit 10.2.89

- legacy GPU driver 470.182.03
I have no idea if one or more of these is a bottleneck, or if it's simply that the GPU is too old. Still, the GPU is a couple of years younger than the CPU, so I would expect it to generate at least somewhat faster.
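One possible explanation, offered as a guess: single-token generation is memory-bandwidth bound, and the DDR3 variant of the GT 730 has less memory bandwidth than a quad-channel Xeon's system RAM, so every layer moved onto the card runs slower, before even counting PCIe transfers and old-driver overhead. A back-of-envelope sketch (every number below is an assumption, not a measurement):

```python
# Toy model: generating one token streams every weight once, so
# time/token ~ cpu_bytes/cpu_bw + gpu_bytes/gpu_bw. PCIe transfer cost
# is ignored, which only makes partial offload look better than it is.
MODEL_GB = 4.1   # rough weight size of LLaMA 7B Q4_K_M (assumed)
CPU_BW = 40.0    # GB/s, effective quad-channel DDR3 (assumed)
GPU_BW = 14.4    # GB/s, for a 64-bit DDR3 GT 730 variant (assumed)

def tok_per_sec(gpu_frac):
    secs = MODEL_GB * (1 - gpu_frac) / CPU_BW + MODEL_GB * gpu_frac / GPU_BW
    return 1.0 / secs

for frac in (0.0, 0.1, 0.3):
    print(f"{frac:4.0%} of weights on GPU -> ~{tok_per_sec(frac):.1f} tok/s ceiling")
```

Under these assumptions the throughput ceiling drops monotonically as more layers are offloaded, matching the trend in the numbers above; a GDDR5 GT 730 (around 40 GB/s) would roughly break even.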
Anyone else having issues running on older, less fully supported hardware? Thanks for any tips. | 2023-08-11T02:16:24 | https://www.reddit.com/r/LocalLLaMA/comments/15nvvr5/llamacpp_generation_with_older_gpu_is_slower_than/ | dual_ears | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nvvr5 | false | null | t3_15nvvr5 | /r/LocalLLaMA/comments/15nvvr5/llamacpp_generation_with_older_gpu_is_slower_than/ | false | false | self | 1 | null |
Business idea: looking for advice and maybe collaborators. | 1 | [removed] | 2023-08-11T02:06:41 | https://www.reddit.com/r/LocalLLaMA/comments/15nvo26/business_idea_looking_for_advice_and_maybe/ | robkkni | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nvo26 | false | null | t3_15nvo26 | /r/LocalLLaMA/comments/15nvo26/business_idea_looking_for_advice_and_maybe/ | false | false | self | 1 | null |
ChatGPT Recent Hallucinations | 1 | 2023-08-10T23:53:55 | https://www.reddit.com/r/ChatGPT/comments/15nfpew/chatgpts_weird_hallucinations_over_the_past_24hrs/ | GlobalRevolution | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15nsnak | false | null | t3_15nsnak | /r/LocalLLaMA/comments/15nsnak/chatgpt_recent_hallucinations/ | false | false | default | 1 | null | |
Cost to Run Llama 2 | 0 | I am creating a web-app, chatbot and was wondering how much it costs to cloud host Llama 2 vs GPT 3.5-turbo API or other OpenAI models. Llama 2 would run with Replicate/Streamlit. | 2023-08-10T23:48:49 | https://www.reddit.com/r/LocalLLaMA/comments/15nsj39/cost_to_run_llama_2/ | shashwat73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nsj39 | false | null | t3_15nsj39 | /r/LocalLLaMA/comments/15nsj39/cost_to_run_llama_2/ | false | false | self | 0 | null |
What causes a chat session to suddenly stop answering.(not OOM or crash)? | 9 | I'm using this model TheBloke_Llama-2-70B-Chat-GGML with the WebUI front end in chat mode. And it seems to be fairly good. However, after a few dozen question answers, it stops responding. If I clear the history, then it starts working again. Is this something related to context? And is there a way to change it perhaps? Thank you. | 2023-08-10T23:40:37 | https://www.reddit.com/r/LocalLLaMA/comments/15nsc3b/what_causes_a_chat_session_to_suddenly_stop/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nsc3b | false | null | t3_15nsc3b | /r/LocalLLaMA/comments/15nsc3b/what_causes_a_chat_session_to_suddenly_stop/ | false | false | self | 9 | null |
Termux bash script for YT summarizing | 5 | This is a script I made to summarize 5-10min YT videos quickly with 3b models in Termux. It uses whisper.cpp and yt-dlp, and reuses the prompt cache if the first summary needs redoing. It works great for 6GB phones, with up to 1536 context at full speed, slowing down above that.
(1) (Installation) copy and paste this in termux:
https://pastebin.com/raw/Ax4zPHVV
(2) (Run) Script: https://pastebin.com/raw/Ma4gwEfi
Type `nano sum.sh` and paste the above text.
Save and exit.
make the script executable with `chmod +x sum.sh`
(3) Run with `./sum.sh`
Paste yt link when prompted. If you enter nothing, you'll revisit the previous cache.
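For readers who don't want to open the pastebin, the pipeline the script drives looks roughly like the sketch below. This is a guess at the structure, not the actual script; binary paths, model filenames, and flags are assumptions.

```shell
#!/usr/bin/env bash
# Hypothetical outline of sum.sh: yt-dlp fetches audio, whisper.cpp
# transcribes it, llama.cpp summarizes, and --prompt-cache lets a
# re-run of the same video skip re-ingesting the transcript.
URL="${1:-}"
fetch="yt-dlp -x --audio-format wav -o audio.wav $URL"
transcribe="./whisper.cpp/main -m models/ggml-tiny.en.bin -f audio.wav -otxt"
summarize="./llama.cpp/main -m models/3b.ggmlv3.q4_0.bin -c 1536 --prompt-cache sum.cache -f prompt.txt"
# Print the stages for inspection; to run for real, eval them in order,
# building prompt.txt from audio.wav.txt plus the summary instruction.
printf '%s\n' "$fetch" "$transcribe" "$summarize"
```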
(4) Adjustments
You may like to adjust for more or less context with a larger phone. Let me know if some higher contexts, or larger models work well!
The transcription whisper model is tiny.en , a quantized version of a larger whisper model may work better, sometimes.
An example prompt to enter: "Summarize above article in numbered bulletpoints." | 2023-08-10T23:01:31 | https://www.reddit.com/r/LocalLLaMA/comments/15nrduy/termux_bash_script_for_yt_summarizing/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nrduy | false | null | t3_15nrduy | /r/LocalLLaMA/comments/15nrduy/termux_bash_script_for_yt_summarizing/ | false | false | self | 5 | null |
Lower power options | 6 | I'm in the peculiar situation of not being able to draw too much power. Currently using a mini-pc that can run 7B GGML okay if it's short context. But I cannot really run a newer GPU in my household due to other appliances and using a caravan power supply.
Can you get decent results with GGML supported by the newer AMD igpu's like 780M? Or is there a lower power option that can decently run 7B or 13B models? | 2023-08-10T21:55:57 | https://www.reddit.com/r/LocalLLaMA/comments/15nppg4/lower_power_options/ | Monkey_1505 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nppg4 | false | null | t3_15nppg4 | /r/LocalLLaMA/comments/15nppg4/lower_power_options/ | false | false | self | 6 | null |
Xbox series X, GDDR6 LLM beast? | 9 | From the Xbox series X specs, it seems it would be an LLM beast like Apple M2 hardware...
Can a recent Xbox run Linux? Or will AMD release an APU with lots of integrated GDDR6 like this for PC builders?
CPU: 8 cores @ 3.8 GHz (3.66 GHz w/ SMT), custom Zen 2

GPU: 12 TFLOPS, 52 CUs @ 1.825 GHz, custom RDNA 2

Die size: 360.45 mm²

Process: 7nm enhanced

**Memory: 16 GB GDDR6 w/ 320-bit bus**

**Memory bandwidth: 10 GB @ 560 GB/s, 6 GB @ 336 GB/s**
Great interview with Tim Dettmers (bitsandbytes / QLORA) | 48 | [(2) AI on your phone? Tim Dettmers on quantization of neural networks — #41 - YouTube](https://www.youtube.com/watch?v=auw2hXywrU8&t=1766s)
Lots of fascinating insights in an interview between Steve Hsu and Tim Dettmers, quantization savant extraordinaire. Very techy. Below is the Claude 2 summary, which I have corrected; it is accurate based on my listening but doesn't convey how deep into the weeds Dettmers gets.
Tim's Background:
* Tim Dettmers grew up in rural Germany and was diagnosed with dyslexia at a young age. His dyslexia made school extremely difficult, to the point that he left high school without a degree.

* A high school computer science teacher recognized Tim's talent for computing and encouraged him to pursue it. Tim tried vocational school in programming but struggled with written exam questions until he found an apprenticeship in mathematical software development, which worked around his challenges. The practical programming focus matched his abilities.
* Unable to attend university in Germany without a high school degree, Tim earned degrees in psychology and applied math through distance learning from Open University. He excelled when able to use computers for writing rather than writing by hand.
* Fascinated by AI, Tim competed on Kaggle building custom neural nets on GPUs. This demonstrated their potential power. He decided to pursue a PhD in computer science focused on neural networks.
Tim's Quantization Research:
* Tim pioneered quantization techniques to compress neural networks by reducing the bits needed to represent each weight, from 32-bit floats to 8/4-bit integers.
* He develops specialized floating point and integer quantization schemes optimized for distribution of weight values. The formats allocate more bits for outlier values containing higher information content.
* Tim draws on information theory, modeling quantization as histogram binning. He aims to maximize information density in each bit. This means smoothing the distribution so no large unused ranges exist between bins.
* By compressing model size 4-8x, quantization enables running huge models on consumer GPUs. Tim makes state-of-the-art models accessible for grassroots experimentation and innovation.
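The outlier point can be made concrete with a toy experiment: a single large weight inflates a whole-tensor absmax scale and crushes every other value into a couple of bins, while small blocks with their own scales (the approach bitsandbytes takes) confine the damage. This sketch uses plain 4-bit absmax rounding, not Dettmers's actual NF4 codebook:

```python
import random

def blockwise_mse(weights, block_size):
    # Quantize each block to the signed 4-bit levels [-7, 7] with its
    # own absmax scale; return the mean squared reconstruction error.
    total = 0.0
    for i in range(0, len(weights), block_size):
        blk = weights[i:i + block_size]
        scale = max(abs(v) for v in blk) / 7.0
        total += sum((v - round(v / scale) * scale) ** 2 for v in blk)
    return total / len(weights)

random.seed(1)
w = [random.gauss(0.0, 1.0) for _ in range(4096)]
w[0] = 100.0  # a single outlier weight
print("one scale for the whole tensor:", blockwise_mse(w, len(w)))
print("independent 64-element blocks :", blockwise_mse(w, 64))
```

With one global scale, the outlier forces nearly every ordinary weight to round to zero; blockwise scales keep the error localized to the outlier's own block.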
Tim's Predictions on AI Progress:
* Tim sees AI rapidly becoming superhuman within narrow domains in the next 3-5 years, through fine-tuning gigantic models on specialized data. But general intelligence will remain inferior to humans.
* In 10 years, he expects hardware improvements to slow as we hit physical limits like the speed of light for communication between nodes. So research will focus more on maximizing efficiency of models and training algorithms.
* Tim envisions foundation models adapted with many specialized expert modules for particular tasks. This will improve reliability over general models for deployment.
Tim's Training Research:
* For efficient training, Tim developed QLoRA, which quantizes models during fine-tuning while representing the adaptation matrices in higher precision. This maintains accuracy while greatly reducing memory.
* He also pioneered "swarm parallelism" techniques to massively scale training across distributed consumer hardware on the internet. This uses stochastic updates resilient to noise and latency.
* Tim sees these methods enabling grassroots innovation as hobbyists can fine-tune models themselves. He wants people to know these powerful models are accessible for exploration. | 2023-08-10T21:45:42 | https://www.reddit.com/r/LocalLLaMA/comments/15npg12/great_interview_with_tim_dettmers_bitsandbytes/ | PookaMacPhellimen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15npg12 | false | null | t3_15npg12 | /r/LocalLLaMA/comments/15npg12/great_interview_with_tim_dettmers_bitsandbytes/ | false | false | self | 48 | null
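The QLoRA recipe described in the interview, a frozen low-bit base plus small high-precision adapters, can be illustrated with a toy forward pass. This is a pure-Python sketch with toy shapes; the real method operates on quantized transformer weight matrices:

```python
# Toy LoRA-style forward: y = W0 x + alpha * B (A x).
# W0 is frozen (and, in QLoRA, 4-bit quantized); only A and B are trained.

def matvec(m, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in m]

def lora_forward(W0, A, B, x, alpha=1.0):
    base = matvec(W0, x)            # frozen base path
    low = matvec(B, matvec(A, x))   # trainable low-rank path, high precision
    return [b + alpha * l for b, l in zip(base, low)]

W0 = [[1.0, 0.0], [0.0, 1.0]]   # toy 2x2 frozen weight
A = [[1.0, 0.0]]                # r x d_in, rank r = 1
B = [[1.0], [0.0]]              # d_out x r
y = lora_forward(W0, A, B, [2.0, 3.0])
```

Because only A and B carry gradients and optimizer state, the trainable footprint is tiny compared with the frozen base, which is what makes fine-tuning on consumer GPUs feasible.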
LLM that can analyze videos | 1 | Is there anything like this for videos:
[https://huggingface.co/Salesforce/instructblip-vicuna-13b](https://huggingface.co/Salesforce/instructblip-vicuna-13b)
I want to be able to analyze what happens in a video and get a detailed description of it. Even if the video doesn't play any sound. | 2023-08-10T21:22:59 | https://www.reddit.com/r/LocalLLaMA/comments/15nouur/llm_that_can_analyze_videos/ | philm999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nouur | false | null | t3_15nouur | /r/LocalLLaMA/comments/15nouur/llm_that_can_analyze_videos/ | false | false | default | 1 | null
How to use llama-cpp-python low level API? | 1 | [removed] | 2023-08-10T20:47:46 | https://www.reddit.com/r/LocalLLaMA/comments/15nnxu3/how_to_use_llamacpppython_low_level_api/ | Explo105 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nnxu3 | false | null | t3_15nnxu3 | /r/LocalLLaMA/comments/15nnxu3/how_to_use_llamacpppython_low_level_api/ | false | false | self | 1 | null
Tesla GPUs for LLM text generation? | 13 | I've been using my own PC for CPU generation via koboldcpp, but I plan to use a server for generation. I already have access to an HP DL380 Gen9 in a datacenter, so any cards would not need a printed fan shroud or other cooling solution. I've seen people use a Tesla P40 with varying success, but most setups focus on using them in a standard case.
The server already has 2x E5-2680 v4s, 128 GB ECC DDR4 RAM, and ~28 TB of storage.
Budget for graphics cards would be around $450, or $500 if I find decent prices on GPU power cables for the server. | 2023-08-10T20:02:46 | https://www.reddit.com/r/LocalLLaMA/comments/15nmrbf/tesla_gpus_for_llm_text_generation/ | novasapper5431 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nmrbf | false | null | t3_15nmrbf | /r/LocalLLaMA/comments/15nmrbf/tesla_gpus_for_llm_text_generation/ | false | false | self | 13 | null
Why are models trained in fp16 and not pre-quantized? | 17 | I know the most computationally expensive part of an LLM is training, requiring vast resources. From what I've seen, 8-bit quants perform very similarly to fp16. Couldn't large companies making foundation models utilize more parameters if they trained a model in a pre-quantized form? Could they train a model in mixed precision, as a compromise between the two? | 2023-08-10T20:02:45 | https://www.reddit.com/r/LocalLLaMA/comments/15nmrat/why_are_models_trained_in_fp16_and_not/ | clyspe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nmrat | false | null | t3_15nmrat | /r/LocalLLaMA/comments/15nmrat/why_are_models_trained_in_fp16_and_not/ | false | false | self | 17 | null
How to finetune a 70B model using A100? | 20 | Hello, I would like to finetune (LoRA or full SFT) a 60B (LLaMA 1) or 70B (LLaMA 2) model on my custom data, but I could not find a way to do so (it keeps throwing out-of-memory errors), so I'd like to hear any suggestions you have.
\- I have a custom \~300k-example dataset consisting of my private data plus some from [GPTeacher](https://github.com/teknium1/GPTeacher). The context window is 2k.
\- I can rent 8x 80 GB A100s, but only for at most a week; otherwise it goes over budget.
I use Hugging Face Transformers (as most people do) and its [trainer](https://huggingface.co/docs/transformers/main_classes/trainer), and tried:
\- Full SFT with DeepSpeed ZeRO-3 + CPU offload, with [FlashAttention copied from Vicuna](https://github.com/lm-sys/FastChat/blob/main/fastchat/train/llama_flash_attn_monkey_patch.py), gradient checkpointing, and bf16 mixed-precision training. But this gives me an out-of-memory error even with batch size = 1.
\- Then I tried LoRA with the same config as [Alpaca-LoRA](https://github.com/tloen/alpaca-lora/tree/main), but again I got an out-of-memory error.
\- I haven't tried QLoRA, but from my past experience with it: I think it is impossible to use DeepSpeed with QLoRA, so I can only use one GPU. Last month I trained a 13B model with QLoRA on \~100k data and it already took >60 hours, so I can't imagine how expensive QLoRA on a 70B model with 300k data would be. But I could be wrong.
​
From my understanding, a 70B model takes at least 140 GB in bf16 just to hold the weights, let alone train. So a single 80 GB A100 cannot even do inference in bf16. Therefore I am looking for methods that use parallelism. Thanks!!!! | 2023-08-10T19:43:11 | https://www.reddit.com/r/LocalLLaMA/comments/15nm8r4/how_to_finetune_a_70b_model_using_a100/ | tt19234 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nm8r4 | false | null | t3_15nm8r4 | /r/LocalLLaMA/comments/15nm8r4/how_to_finetune_a_70b_model_using_a100/ | false | false | self | 20 | null
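For context on the out-of-memory errors above, here is a rough per-GPU estimate of the training-state footprint under ZeRO-3. The 16 bytes/parameter figure is the common accounting for mixed-precision Adam and is an assumption; activations, buffers, and fragmentation are ignored and only make things worse:

```python
# Back-of-envelope memory for full bf16 fine-tuning of a 70B model on 8 GPUs.

GB = 1024 ** 3

def per_gpu_state_gb(n_params, n_gpus):
    # bf16 weights (2 B) + bf16 grads (2 B) + fp32 Adam m/v (8 B) + fp32 master copy (4 B)
    bytes_per_param = 2 + 2 + 8 + 4
    return n_params * bytes_per_param / n_gpus / GB   # ZeRO-3 partitions all of it

need = per_gpu_state_gb(70e9, 8)   # roughly 130 GB per GPU, before activations
```

Even with everything sharded, the state exceeds 80 GB per GPU, which is consistent with the reported OOM: CPU offload that actually works, or LoRA/QLoRA, is hard to avoid at this scale.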
How I ran Llama 2 on my Mac in < 5 mins | 1 | [removed] | 2023-08-10T19:10:45 | https://www.reddit.com/r/LocalLLaMA/comments/15nle0k/how_i_ran_llama_2_on_my_mac_in_5_mins/ | rajatarya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nle0k | false | null | t3_15nle0k | /r/LocalLLaMA/comments/15nle0k/how_i_ran_llama_2_on_my_mac_in_5_mins/ | false | false | self | 1 | null |
Are AMD Ryzen Processors Supported for Running Uncensored AI Models ? | 1 | [removed] | 2023-08-10T19:01:13 | https://www.reddit.com/r/LocalLLaMA/comments/15nl4uh/are_amd_ryzen_processors_supported_for_running/ | No_Anywhere2560 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nl4uh | false | null | t3_15nl4uh | /r/LocalLLaMA/comments/15nl4uh/are_amd_ryzen_processors_supported_for_running/ | false | false | self | 1 | null |
Here is what the EU considers risky and wants to regulate. Text taken directly from EU's site, link provided. | 87 | The text isn't all bad, but I do find some of it concerning. Notably:
##### Banned actions
>Unacceptable risk AI systems are systems considered a threat to people and will be banned.
>Real-time and remote biometric identification systems, such as facial recognition
This is a major security issue. Not all data should be open to everyone. Security is moving to passkeys, which require biometric authentication.
##### Must be registered
>AI systems falling into eight specific areas that will have to be registered in an EU database:
>Education and vocational training
An educational model must be registered. That's pretty much everything. This will come down to the wording of the legislation, but it is currently concerning.
##### Generative AI
>Generative AI, like ChatGPT, would have to comply with transparency requirements:
>Disclosing that the content was generated by AI
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training
Autocomplete is AI: it uses the same training architecture as ChatGPT. Most photo tools are AI-based. There are so many tools out there using AI that nothing will be considered yours.
---
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
AI Act: different rules for different risk levels
The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence. While many AI systems pose minimal risk, they need to be assessed.
Unacceptable risk
Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:
Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
Real-time and remote biometric identification systems, such as facial recognition
Some exceptions may be allowed: For instance, “post” remote biometric identification systems where identification occurs after a significant delay will be allowed to prosecute serious crimes but only after court approval.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
Biometric identification and categorisation of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment, worker management and access to self-employment
Access to and enjoyment of essential private services and public services and benefits
Law enforcement
Migration, asylum and border control management
Assistance in legal interpretation and application of the law.
All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.
Generative AI
Generative AI, like ChatGPT, would have to comply with transparency requirements:
Disclosing that the content was generated by AI
Designing the model to prevent it from generating illegal content
Publishing summaries of copyrighted data used for training
Limited risk
Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes. | 2023-08-10T18:07:28 | https://www.reddit.com/r/LocalLLaMA/comments/15njqpd/here_is_what_the_eu_considers_risky_and_wants_to/ | jayfehr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15njqpd | false | null | t3_15njqpd | /r/LocalLLaMA/comments/15njqpd/here_is_what_the_eu_considers_risky_and_wants_to/ | false | false | self | 87 | null
How to make Llama 2 GPTQ faster in generation? | 5 | I use [this](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ) model on a T4 GPU and generation speed is modest. Is there any way to make it faster? | 2023-08-10T17:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/15njbw1/how_make_llama2_gptq_faster_in_generation/ | Euphoric-Nebula-4559 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15njbw1 | false | null | t3_15njbw1 | /r/LocalLLaMA/comments/15njbw1/how_make_llama2_gptq_faster_in_generation/ | false | false | self | 5 | null
Integrating Microsoft’s Guidance into hiyouga/LLaMA-Efficient-Tuning's repo for Faster Processing and Well-Formatted Output | 1 | Hey, does anyone know where I can find resources on how to integrate Microsoft’s Guidance into the hiyouga/LLaMA-Efficient-Tuning repo? It reportedly speeds up the process, which can be super helpful with these resource-heavy beasts, and it is very useful for getting well-formatted output. | 2023-08-10T17:26:07 | https://www.reddit.com/r/LocalLLaMA/comments/15nimvl/integrating_microsofts_guidance_into/ | ILoveBoxPlots | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nimvl | false | null | t3_15nimvl | /r/LocalLLaMA/comments/15nimvl/integrating_microsofts_guidance_into/ | false | false | self | 1 | null
How to utilize memory, author's notes effectively in KoboldCPP? | 4 | So I've noticed that when I select a preset in kcpp and visit the memory settings it has a lot of pre-filled text in there, and frequently there is [ ] (square brackets) surrounding various bits of text, and often there is a format like
person: their_name; attitude: their_attitudes; # etc
So it got me curious, is there a specific format that should be used when entering data/details into the memory portion?
I've also noticed that when I select the presets there is rarely (if ever) anything populated in the author's notes box. I'm assuming this isn't as important then? | 2023-08-10T17:21:31 | https://www.reddit.com/r/LocalLLaMA/comments/15niihq/how_to_utilize_memory_authors_notes_effectively/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15niihq | false | null | t3_15niihq | /r/LocalLLaMA/comments/15niihq/how_to_utilize_memory_authors_notes_effectively/ | false | false | self | 4 | null |
Multimodal-app with video multimodal support? (4bit GPTQ)? | 4 | Is there any app like [https://github.com/DAMO-NLP-SG/Video-LLaMA.git](https://github.com/DAMO-NLP-SG/Video-LLaMA.git) or [https://github.com/X-PLUG/mPLUG-Owl](https://github.com/X-PLUG/mPLUG-Owl) that also has GPTQ 4bit support for low VRAM users?
Oobabooga only has image multimodal support. I really want video support with 4-bit GPTQ so it will fit an 8 GB GPU. | 2023-08-10T17:20:19 | https://www.reddit.com/r/LocalLLaMA/comments/15nihan/multimodalapp_with_video_multimodal_support_4bit/ | fetballe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nihan | false | null | t3_15nihan | /r/LocalLLaMA/comments/15nihan/multimodalapp_with_video_multimodal_support_4bit/ | false | false | self | 4 | null
Did someone calculate how fast GPT-3.5 is, in terms of tokens/second? And how about GPT-4? | 22 | I read this thread [Inference Speed for Llama 2 70b on A6000 with Exllama - Need Suggestions!](https://www.reddit.com/r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/?utm_source=share&utm_medium=web2x&context=3), and I think 10 tokens per second is not bad, but I'm not sure how fast that really is. So I want to compare its speed against GPT-3.5 and GPT-4. Does anyone have a concrete number? Or if I want to measure it manually, how should I do it? A stopwatch? Any suggestions? Thank you guys. | 2023-08-10T17:19:02 | https://www.reddit.com/r/LocalLLaMA/comments/15nig1k/did_someone_calculate_how_fast_gpt35_is_in_terms/ | Fancy-Welcome-9064 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nig1k | false | null | t3_15nig1k | /r/LocalLLaMA/comments/15nig1k/did_someone_calculate_how_fast_gpt35_is_in_terms/ | false | false | self | 22 | null
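A stopwatch works, but it is easier to let code do the timing. A minimal sketch; `fake_generate` is a stand-in for whatever backend call you actually make (its name and return value are assumptions, since real APIs report token counts differently):

```python
# Measure tokens/second by timing a generation call.

import time

def tokens_per_second(generate, prompt):
    start = time.perf_counter()
    n_tokens = generate(prompt)               # backend should report generated-token count
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed

def fake_generate(prompt):                    # stand-in: pretends to emit 50 tokens
    time.sleep(0.01)
    return 50

rate = tokens_per_second(fake_generate, "Hello")
```

For hosted models the same trick works with streaming disabled: count the tokens in the response (the usage field, or a tokenizer) and divide by wall-clock time.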
Where's the best community to get help with tortoise TTS? | 1 | [removed] | 2023-08-10T16:36:01 | https://www.reddit.com/r/LocalLLaMA/comments/15nhbm7/wheres_the_best_community_to_get_help_with/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nhbm7 | false | null | t3_15nhbm7 | /r/LocalLLaMA/comments/15nhbm7/wheres_the_best_community_to_get_help_with/ | false | false | self | 1 | null
General strategies on how to refine a model to a specific dataset while keeping the NLP-ness? | 1 | I've managed to train with documents from an internal knowledge base, but I want to limit users from asking off-topic questions like "What is the capital of Iceland?" | 2023-08-10T16:11:27 | https://www.reddit.com/r/LocalLLaMA/comments/15ngot3/general_strategies_on_how_to_refine_a_model_to_a/ | chris480 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ngot3 | false | null | t3_15ngot3 | /r/LocalLLaMA/comments/15ngot3/general_strategies_on_how_to_refine_a_model_to_a/ | false | false | self | 1 | null
Is oobabooga supposed to be this damn slow? | 15 | Hi, I'm a newb, so I'm not sure if I'm putting this in the right place...
I would like to run Llama (or any other model) locally to experiment with finetuning, etc.
I don't have a beast of a machine, but it seems pretty good to me. It is an Asus, ROG Running:
AMD Ryzen 9 5900HX with Radeon Graphics 3.30 GHz
NVidia GeForce RTX 3070 GPU GDDR6 @ 8GB (256 Bit) RAM
64.0 GB System RAM
4 TB of SSD storage
I'm thinking I **\*should\*** be able to run a 7B model locally, right?
I followed the steps to set up Oobabooga. I load a 7B model from TheBloke.
Everything seems fine. I type in a question and watch the output in PowerShell. The output shows up reasonably quickly.
But the response inside the web UI is ***painful***. Like... Every.... Word... comes... up... 2 seconds... apart.
I don't get it. Doesn't it have its answer as soon as the output is returned? Why so slow? What am I missing?
​ | 2023-08-10T16:02:47 | https://www.reddit.com/r/LocalLLaMA/comments/15nggns/is_oobabooga_supposed_to_be_this_damn_slow/ | Ok_Organization_4295 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nggns | false | null | t3_15nggns | /r/LocalLLaMA/comments/15nggns/is_oobabooga_supposed_to_be_this_damn_slow/ | false | false | self | 15 | null |
Which Azure Cloud GPU is best for fine tuning? | 0 | The GPU names are different, but are they usually all A100s, just partitioned into smaller virtual GPUs? | 2023-08-10T15:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/15nfyt7/which_azure_cloud_gpu_is_best_for_fine_tuning/ | Shyvadi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nfyt7 | false | null | t3_15nfyt7 | /r/LocalLLaMA/comments/15nfyt7/which_azure_cloud_gpu_is_best_for_fine_tuning/ | false | false | self | 0 | null
Any tips for faster generation for queries with large fixed context/system prompts? | 1 | [removed] | 2023-08-10T15:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/15nf8x0/any_tips_for_faster_generation_for_queries_with/ | greying_panda | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nf8x0 | false | null | t3_15nf8x0 | /r/LocalLLaMA/comments/15nf8x0/any_tips_for_faster_generation_for_queries_with/ | false | false | self | 1 | null |
Summarizing content prompt ignored because of content. | 14 | I'm working with the llama-2-13b-chat.ggmlv3.q4\_0 model to try to summarize content for SEO. It seems to work well for simple content, but if the page has a form on it, the form's actions and questions tend to hijack the response: it stops returning JSON and starts chatting as if it's responding as the page itself. I found the suggested prompt template and it helped with simple pages but not more complex ones. Below is one of the latest prompts I've tried that still fails. I could be using the wrong model, but I'm hoping I can stick with the 13B since it fits easily on my laptop. Is the prompt bad, is this a settings issue, or is it the wrong model?
`<s>[INST] <<SYS>>`
`Assistant is a expert JSON builder designed to assist with a creating SEO headlines and descriptions.`
`Assistant is able to respond to the User and use tools using JSON strings that contain 'headline' and 'description' parameters.`
`All of Assistant's communication is performed using this JSON format.`
`Here are some previous summarizing between the Assistant and User:`
`User: Sign our petition! Sign your name for our petition to support us.`
`Assistant: \`\`\`json`
`{{"headline": "Sign our petition",`
`"description": "Sign your name to support our causes"}}`
`\`\`\``
`User: Complete our survey --- What issues impact you the most. Complete the survey`
`Assistant: \`\`\`json`
`{{"headline": "Complete our Survey",`
`"description": "Tell us what issues impact you the most"}}`
`\`\`\``
`<</SYS>>`
`Here is the text of the web page to create the SEO content for.`
Using the llama.cpp server `.\server.exe -m .\models\llama-2-13b-chat.ggmlv3.q4_0.bin --mlock --n-gpu-layers 30` to run this.
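One way to make output like this robust is to post-process: call the server, then pull the first JSON object out of whatever the model says, instead of trusting it to answer in pure JSON. A sketch is below; the `/completion` endpoint and `content` field match recent llama.cpp server builds, but verify against your version, and note that `summarize` is defined but never called here:

```python
# Call the llama.cpp server and extract the first JSON object from the reply.

import json
import re
import urllib.request

def extract_first_json(text):
    """Return the first {...} object found in a model reply, or None."""
    match = re.search(r"\{.*?\}", text, re.DOTALL)
    return json.loads(match.group(0)) if match else None

def summarize(page_text, host="http://127.0.0.1:8080"):
    payload = {"prompt": page_text, "n_predict": 256, "temperature": 0.2}
    req = urllib.request.Request(
        host + "/completion",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        content = json.loads(resp.read())["content"]
    return extract_first_json(content)

# Works even when the model wraps the object in chatter:
reply = 'Sure! {"headline": "Sign our petition", "description": "Support us"}'
seo = extract_first_json(reply)
```

This sidesteps the "chatting as the page" failure mode: off-format text around the JSON is simply discarded, though a nested-brace object would need a real JSON scanner rather than this non-greedy regex.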
​ | 2023-08-10T14:55:05 | https://www.reddit.com/r/LocalLLaMA/comments/15nepxb/summarizing_content_prompt_ignored_because_of/ | jbartley | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nepxb | false | null | t3_15nepxb | /r/LocalLLaMA/comments/15nepxb/summarizing_content_prompt_ignored_because_of/ | false | false | self | 14 | null |
Do server motherboards with dual CPUs run at double the speed compared to a single CPU, since dual-CPU boards have double the RAM slots? | 7 | So I'm planning to build a PC to run LocalLLaMA with used server CPUs.
I'm planning to either buy one used 2nd-gen EPYC CPU with 8-channel RAM, or two Xeon Gold CPUs with 6-channel RAM on a dual-CPU motherboard.
My question is: will two 6-channel CPUs be faster than one 8-channel EPYC, since together they can use 2\*6 = 12 memory channels?
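What dual sockets actually double is aggregate memory bandwidth, which is the main bound on CPU token generation. A back-of-envelope comparison (the DDR4 speeds here are assumptions; check the actual SKUs):

```python
# Peak memory bandwidth: channels x transfer rate (MT/s) x 8-byte bus width.

def bandwidth_gbs(channels, mt_per_s, bus_bytes=8):
    return channels * mt_per_s * bus_bytes / 1000   # GB/s

epyc_8ch = bandwidth_gbs(8, 3200)       # one EPYC socket, DDR4-3200: 204.8 GB/s
xeon_2x6 = 2 * bandwidth_gbs(6, 2933)   # two Xeon sockets, DDR4-2933 aggregate
```

The catch: the two sockets are separate NUMA nodes, so a single inference process only approaches the aggregate number if threads and memory are pinned per node; a naive run often does no better than one socket.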
​ | 2023-08-10T13:36:08 | https://www.reddit.com/r/LocalLLaMA/comments/15ncr2k/does_server_motherboards_with_dual_cpu_run_dobule/ | hoseex999 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ncr2k | false | null | t3_15ncr2k | /r/LocalLLaMA/comments/15ncr2k/does_server_motherboards_with_dual_cpu_run_dobule/ | false | false | self | 7 | null |
Llama 2 goes to school - a Wikipedia exploration, vector DB ingestion, and chat experiment | 1 | [removed] | 2023-08-10T13:28:57 | https://www.reddit.com/r/LocalLLaMA/comments/15nckm3/llama_2_goes_to_school_a_wikipedia_exploration/ | crono760 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nckm3 | false | null | t3_15nckm3 | /r/LocalLLaMA/comments/15nckm3/llama_2_goes_to_school_a_wikipedia_exploration/ | false | false | self | 1 | null
How do you use LLama(2)? | 1 | [removed] | 2023-08-10T12:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/15nbv6o/how_do_you_use_llama2/ | SecretOk9644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nbv6o | false | null | t3_15nbv6o | /r/LocalLLaMA/comments/15nbv6o/how_do_you_use_llama2/ | false | false | self | 1 | null |
Is Embeddings are giving better answer than finetuned model? | 1 | I am making a chatbot for a company that has 150 PDFs containing thousands of pages of data in text format. I don't want the chatbot to give only simple answers lifted straight from the PDFs; I want logical answers related to the topics in the PDFs, with the model deriving the answer from the PDF data.
I tried making embeddings of those PDFs, but I am not getting satisfying answers using them. If I fine-tune an LLM on the PDF data, would the chatbot give better answers? | 2023-08-10T12:15:55 | https://www.reddit.com/r/LocalLLaMA/comments/15nav9p/is_embeddings_are_giving_better_answer_than/ | darshil-3099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nav9p | false | null | t3_15nav9p | /r/LocalLLaMA/comments/15nav9p/is_embeddings_are_giving_better_answer_than/ | false | false | self | 1 | null |
Is Embeddings are giving better answer that finetuned model? | 1 | I am making a chatbot for a company that has 150 PDFs containing thousands of pages of data in text format. I don't want the chatbot to give only simple answers lifted straight from the PDFs; I want logical answers related to the topics in the PDFs, with the model deriving the answer from the PDF data.
I tried making embeddings of those PDFs, but I am not getting satisfying answers using them. If I fine-tune an LLM on the PDF data, would the chatbot give better answers? | 2023-08-10T12:03:05 | https://www.reddit.com/r/LocalLLaMA/comments/15nakwz/is_embeddings_are_giving_better_answer_that/ | darshil-3099 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15nakwz | false | null | t3_15nakwz | /r/LocalLLaMA/comments/15nakwz/is_embeddings_are_giving_better_answer_that/ | false | false | self | 1 | null |
My experience with LLaMA 2 | 1 | [removed] | 2023-08-10T11:55:47 | https://www.reddit.com/r/LocalLLaMA/comments/15naepn/my_experience_with_llama_2/ | uralstech_MR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15naepn | false | null | t3_15naepn | /r/LocalLLaMA/comments/15naepn/my_experience_with_llama_2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'DQ0NpM_glprnLD-GOVWRivLTqpaMfa4wZh0dDiD1wnw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=108&crop=smart&auto=webp&s=7c5eb57ce92cb53dd18f9e277fbe1234ad3d6487', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=216&crop=smart&auto=webp&s=7b0f11a7102c7d2b009c900fefa11aea2ea7bea1', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=320&crop=smart&auto=webp&s=95cd48d7c84f190be5635db0a604de8c9879f4b1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=640&crop=smart&auto=webp&s=e1c1723f70c20e49d6dd6d783dcfc491539f2448', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=960&crop=smart&auto=webp&s=fd81c83384eaa28eac658f10cacf9a2cfb910836', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?width=1080&crop=smart&auto=webp&s=1bc3462ab583b926bf14849085900715579c9176', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/o9AjqIWXV4Hp56E3k-B1szuU63M5WgJ_51ZPTn2SweY.jpg?auto=webp&s=36661dfa3d4bb20c5d9701be7671389e880244e0', 'width': 1200}, 'variants': {}}]} |
help to update Llama.cpp | 1 | [removed] | 2023-08-10T11:40:26 | https://www.reddit.com/r/LocalLLaMA/comments/15na2ph/help_to_update_llamacpp/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15na2ph | false | null | t3_15na2ph | /r/LocalLLaMA/comments/15na2ph/help_to_update_llamacpp/ | false | false | 1 | null | |
What is needed to fine-tune LLaMa 2 70B without quantisation? | 1 | [removed] | 2023-08-10T11:12:42 | https://www.reddit.com/r/LocalLLaMA/comments/15n9hx7/what_is_needed_to_finetune_llama_2_70b_without/ | pet_vaginal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n9hx7 | false | null | t3_15n9hx7 | /r/LocalLLaMA/comments/15n9hx7/what_is_needed_to_finetune_llama_2_70b_without/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WyBK-cNZtYI8e2V2a6UpK1uX-x5RQjkIvLQveCmvURc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=108&crop=smart&auto=webp&s=b7246ca224b849e17307af1e38e1da252729741a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=216&crop=smart&auto=webp&s=c4e7fb8ffbd5f6ed63aff4084f4af6af1f546afd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=320&crop=smart&auto=webp&s=cacc08d69fea74681e002bc9c69237c79577c864', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=640&crop=smart&auto=webp&s=fc29395b17387c905bdf56012d9e74408d167f0a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=960&crop=smart&auto=webp&s=ec50388eca0806ee57992819fba448b30dd5e162', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?width=1080&crop=smart&auto=webp&s=d22d5b81cd534e776ab1bd6b9ad5e47f5ea62cbb', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/bOwmyl6KOPvnXk2ApJoeApQZ4Q-EzWXXPl75WOn-AEY.jpg?auto=webp&s=2d758e78264dd4f726e4534208207b8a95cdf09d', 'width': 1200}, 'variants': {}}]} |
Using prompt templates after LoRA on raw text | 5 | Hello all,
I'm a little overwhelmed with all the developments and I feel like I don't know where to begin. So I apologise if this question sounds very basic.
Let's say I want my LLM to sound like Phoebe Buffay from Friends. I don't have a QnA format, but just raw text for this purpose.
As I understand, I can perform LoRA using the WebUI.
Once my fine tuned model is ready, I want to use this to be able to converse with the user using specific prompts.
My question is, can I feed this fine tuned model to LangChain so I can use their prompt template successfully? Or are there alternatives?
Or can I do all of this using HuggingFace?
Sorry, I'm very lost and I can't seem to understand if the finetuned models can be used by other frameworks. | 2023-08-10T10:02:54 | https://www.reddit.com/r/LocalLLaMA/comments/15n84m8/using_prompt_templates_after_lora_on_raw_text/ | betib25 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n84m8 | false | null | t3_15n84m8 | /r/LocalLLaMA/comments/15n84m8/using_prompt_templates_after_lora_on_raw_text/ | false | false | self | 5 | null |
Is there a way to read the best papers/blogs/research on LLama? | 9 | I am trying to get into a habit of reading a paper every couple days and was wondering if there is any curated website or source that keeps track of latest papers coming out.
So far I have just been using twitter to try to follow people who are working in this area. | 2023-08-10T09:22:16 | https://www.reddit.com/r/LocalLLaMA/comments/15n7do9/is_there_a_way_to_read_the_best/ | mohit_habeeb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n7do9 | false | null | t3_15n7do9 | /r/LocalLLaMA/comments/15n7do9/is_there_a_way_to_read_the_best/ | false | false | self | 9 | null |
How i can train llama 2? | 1 | [removed] | 2023-08-10T09:07:29 | https://www.reddit.com/r/LocalLLaMA/comments/15n743e/how_i_can_train_llama_2/ | Agreeable_Fun7280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n743e | false | null | t3_15n743e | /r/LocalLLaMA/comments/15n743e/how_i_can_train_llama_2/ | false | false | self | 1 | null |
What is the best way to fine tune llama2 for X/tweet sentiment analysis? | 3 | While there are BERT based models (e.g., [this one for twitter](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest)), I was wondering whether llama2 can be fine-tuned to provide a more accurate sentiment analysis? If so, can you help guide how a novice can go about this? | 2023-08-10T08:49:57 | https://www.reddit.com/r/LocalLLaMA/comments/15n6sgt/what_is_the_best_way_to_fine_tune_llama2_for/ | sbs1799 | self.LocalLLaMA | 2023-08-10T09:44:11 | 0 | {} | 15n6sgt | false | null | t3_15n6sgt | /r/LocalLLaMA/comments/15n6sgt/what_is_the_best_way_to_fine_tune_llama2_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'ArlWYQviB_-tLNpMl6U7rDe8OGn86GN62b1DJBtybDk', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=108&crop=smart&auto=webp&s=830a41f7ded868cb0f1f029252d1ae2d9871eda3', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=216&crop=smart&auto=webp&s=2abbb1461e3b451083191545260db891894e5557', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=320&crop=smart&auto=webp&s=f0333448132cff266a52926b4bf31ad3dc8f097a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=640&crop=smart&auto=webp&s=e8e32e5edfc032544cb866e0960c922b087cbd11', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=960&crop=smart&auto=webp&s=e14e83dde369a57d62b44688cf0caee00f1b0706', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?width=1080&crop=smart&auto=webp&s=aea0b286447212126101824b07a9209e381c7fb6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/gKS9gx0oKZdnygGyNyJkxboMlVq9S_SaVs7Pr00b4w4.jpg?auto=webp&s=25fd87897661a5bbff40927d7e5d3b9266fdb6d2', 'width': 1200}, 'variants': {}}]} |
Optimizing models for LLM Leaderboard is a HUGE mistake | 151 | Because it presumes that being a "good" model means ranking high in four relatively controversial benchmarking suites.
It's a common mistake in Kaggle competitions to optimize for the test set. The leaderboard should be treated as a test set for seeing how models compare after release, rather than as a goal to chase by building ensembled models that rank higher.
You can see this in almost any Kaggle competition: participants see results on a limited eval set, but when the full test set is revealed, the top-ranked models tank in the final standings and a model from the middle of the pack wins.
The problem is that higher-ranking models are often overfit to specific examples in the test set, which doesn't necessarily mean the model will be perceived as good by humans. That's why models that claim to beat GPT-3.5 on eval suite X are often not as good as advertised when tested on real-world human prompts.
Without a doubt, OpenAI have their own extensive eval suite and even that wasn't enough, so they created "openai/evals" repo to get free evaluation from the community.
We need better evaluation metrics than LLM leaderboard for the Open-source community. | 2023-08-10T08:24:48 | https://www.reddit.com/r/LocalLLaMA/comments/15n6cmb/optimizing_models_for_llm_leaderboard_is_a_huge/ | yiyecek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n6cmb | false | null | t3_15n6cmb | /r/LocalLLaMA/comments/15n6cmb/optimizing_models_for_llm_leaderboard_is_a_huge/ | false | false | self | 151 | null |
What determines the length of the response? | 9 | while the context size determines how much information can be considered while fashioning a response, what factors determine the length of the output?
- a (unsubstantiated) gut feeling is that models with larger context sizes should be able to make longer responses?
- a [post](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/?ref=share&ref_source=link) finds that a slight variation of the prompt increases verbosity. is there a general pattern/ practice here that is worth trying for all models.
- this [post](https://www.reddit.com/r/LocalLLaMA/comments/15g5419/what_ive_learned_from_orcamini3bggmlv3q4_1_using/) finds that some models are susceptible to unfinished responses, even if they "seem" complete. has this been noticed in general, and any recommendations for the model to furnish complete responses?
- are there any tests/experiments that you feel would be interesting to try out to understand the verbosity limits of llms | 2023-08-10T07:58:28 | https://www.reddit.com/r/LocalLLaMA/comments/15n5vyz/what_determines_the_length_of_the_response/ | T_hank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n5vyz | false | null | t3_15n5vyz | /r/LocalLLaMA/comments/15n5vyz/what_determines_the_length_of_the_response/ | false | false | self | 9 | null |
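On the mechanics: output length is bounded by two separate things, the sampler's `max_new_tokens`-style cap and wherever the model happens to emit its end-of-sequence token, which is the part that prompt wording and fine-tuning influence. Notably, context size bounds what the model can attend to, not how much it must write, so a larger context doesn't by itself produce longer responses. A toy decode loop makes the interplay explicit (the token ids and the `EOS` value here are made up):

```python
EOS = 2  # hypothetical end-of-sequence token id

def toy_generate(next_token_fn, max_new_tokens=256):
    """Sample tokens until the model emits EOS or the hard cap is reached."""
    out = []
    for _ in range(max_new_tokens):
        token = next_token_fn(out)
        if token == EOS:
            break  # the model decided the response is complete
        out.append(token)
    return out

# A model that "wants" to stop after 10 tokens stops at 10,
# no matter how generous the cap is:
chatty_after_10 = lambda out: EOS if len(out) >= 10 else 7
assert len(toy_generate(chatty_after_10, max_new_tokens=512)) == 10
```

So raising the token limit only helps if the model isn't choosing to emit EOS early, which is why prompt tweaks that discourage early stopping (like the roleplay preset linked above) change verbosity more than sampler settings do.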
Putting the pieces together for RAG? | 1 | [removed] | 2023-08-10T07:05:30 | https://www.reddit.com/r/LocalLLaMA/comments/15n4yso/putting_the_pieces_together_for_rag/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n4yso | false | null | t3_15n4yso | /r/LocalLLaMA/comments/15n4yso/putting_the_pieces_together_for_rag/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'ozzoOOUo0GsTZQDFKQGLV_Hm-3-sAMBPGUdj-YGAXaU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=108&crop=smart&auto=webp&s=01abd3e1c16817d00a13b4aefae81016db13aa6d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=216&crop=smart&auto=webp&s=80098a7b6d0ab3b52bcff8841b1c97195818826e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=320&crop=smart&auto=webp&s=6287874f84b7432e9be49420fccd9bdc36e26a2b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=640&crop=smart&auto=webp&s=9543d5f1f15bd29dd86f3124e13d2a14c896185a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=960&crop=smart&auto=webp&s=13e07ecd94404eb99b26585e2886d42c371183af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?width=1080&crop=smart&auto=webp&s=fc6c1f4554adbf733744434d61fc12f3be4f1a57', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/eKdlDfZLsUFa_rEq3sS4i1bIQWiKS6af16vu7A-M4fM.jpg?auto=webp&s=d91bb719706fadcf313dc369ff3d4f31373466d7', 'width': 1200}, 'variants': {}}]} |
LongLLaMA-Instruct v1.1 32K | 26 | A new 3B instruct model with 32K context just dropped, with pretty cool numbers (55% lm-eval, 12% humaneval pass@1):
[https://huggingface.co/syzymon/long\_llama\_3b\_instruct](https://huggingface.co/syzymon/long_llama_3b_instruct)
You can chat with it on a free colab gpu (thanks to bf16 inference):
[https://colab.research.google.com/github/CStanKonrad/long\_llama/blob/main/long\_llama\_instruct\_colab.ipynb](https://colab.research.google.com/github/CStanKonrad/long_llama/blob/main/long_llama_instruct_colab.ipynb)
Twitter thread:
[https://twitter.com/s\_tworkowski/status/1687620785379360768](https://twitter.com/s_tworkowski/status/1687620785379360768)
​
​ | 2023-08-10T06:35:00 | https://www.reddit.com/r/LocalLLaMA/comments/15n4ekv/longllamainstruct_v11_32k/ | syzymon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n4ekv | false | null | t3_15n4ekv | /r/LocalLLaMA/comments/15n4ekv/longllamainstruct_v11_32k/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'KntXC5DHxcOtA2LNoK-d9nxtxaDqYRyCJspNwurd3eg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=108&crop=smart&auto=webp&s=592a5e3bf293b0d61761a5a99a40121f9457606a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=216&crop=smart&auto=webp&s=6608c26fc9194d5ce59069fac81e277dfb0b10bf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=320&crop=smart&auto=webp&s=3bf13f342627220bff91973609f78029ee574a38', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=640&crop=smart&auto=webp&s=55301173ac456897c19bd3d14531f0597ba045db', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=960&crop=smart&auto=webp&s=24ad906c34ff42757964b7add5079ae6e282ac4f', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?width=1080&crop=smart&auto=webp&s=38a928d73b479b5a7dd33b559e14f05e9e369495', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MVP62i4-xoYp6WdFPnhEJdVE4sp8a9lPsKS__DVXhQo.jpg?auto=webp&s=28ba2e04403aa9320b592de2f086e9d5b505ba6a', 'width': 1200}, 'variants': {}}]} |
RLHF training on AMD GPUs | 2 | Has anyone been able to run RLHF training code such as AlpacaFarm on AMD MI200 GPUs? There seems to be an issue between the latest DeepSpeed version and ROCm. | 2023-08-10T05:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/15n3ay6/rlhf_training_on_amd_gpus/ | HopeElephant | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n3ay6 | false | null | t3_15n3ay6 | /r/LocalLLaMA/comments/15n3ay6/rlhf_training_on_amd_gpus/ | false | false | self | 2 | null |
Return Source Documents (PDFs) with a source snippets. | 1 | Hi,
is it anyhow possible to return the source documents with a snippet in a PDF Q&A Chatbot of the sources the LLM was given for creating the answer?
I know that there is in langchain a function for that but in my case it's only giving me the name of the
PDF, not a snippet of the lines which were taken. | 2023-08-10T05:30:15 | https://www.reddit.com/r/LocalLLaMA/comments/15n36uh/return_source_documents_pdfs_with_a_source/ | jnk_str | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n36uh | false | null | t3_15n36uh | /r/LocalLLaMA/comments/15n36uh/return_source_documents_pdfs_with_a_source/ | false | false | self | 1 | null |
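One way to get snippets and not just filenames is to keep each chunk's text next to its metadata and surface both at retrieval time. A self-contained toy sketch (the keyword scorer stands in for real vector similarity; the `Doc` shape mirrors langchain's `Document`, which carries `page_content` and `metadata`, and the sample documents are made up):

```python
from dataclasses import dataclass, field

@dataclass
class Doc:
    page_content: str  # the chunk text; this is your snippet
    metadata: dict = field(default_factory=dict)  # e.g. {"source": "policy.pdf", "page": 12}

def retrieve(docs, query, k=2):
    # toy keyword-overlap scorer standing in for vector similarity
    words = set(query.lower().split())
    def score(d):
        return sum(w in d.page_content.lower() for w in words)
    return sorted(docs, key=score, reverse=True)[:k]

docs = [
    Doc("Warranty claims must be filed within 30 days.", {"source": "policy.pdf", "page": 12}),
    Doc("The device ships with a USB-C cable.", {"source": "manual.pdf", "page": 2}),
]
hit = retrieve(docs, "how long to file a warranty claim", k=1)[0]
print(hit.metadata["source"], "->", hit.page_content[:60])
```

In langchain itself, when `return_source_documents=True` is set, each returned `Document` should carry the chunk text in `page_content`; if only the PDF name shows up, it is likely that `metadata["source"]` is being printed instead of `page_content`.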
Resume fine-tune from checkpoint with qlora | 11 | I was fine-tuning a model when power went off. How do I resume fine-tuning from the latest checkpoint?
I passed in --checkpoint\_dir=output/checkpoint-9250 and --resume\_from\_checkpoint=True but they don't seem to work and it is starting training from step 1/10000
python3 [qlora.py](https://qlora.py) \--model\_name\_or\_path huggyllama/llama-7b --checkpoint\_dir output/checkpoint-9250 --dataset training\_data.json --load\_in\_4bit=True --max\_memory=8100 --resume\_from\_checkpoint=True | 2023-08-10T04:16:12 | https://www.reddit.com/r/LocalLLaMA/comments/15n1qrr/resume_finetune_from_checkpoint_with_qlora/ | QuantumTyping33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n1qrr | false | null | t3_15n1qrr | /r/LocalLLaMA/comments/15n1qrr/resume_finetune_from_checkpoint_with_qlora/ | false | false | self | 11 | null |
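For what it's worth, the resume logic in HF-style training scripts boils down to finding the highest-numbered `checkpoint-N` directory under the output dir (`transformers.trainer_utils.get_last_checkpoint` does this) and passing that path to `trainer.train(resume_from_checkpoint=...)`. A minimal sketch of that lookup (a hypothetical helper, not the actual qlora.py code):

```python
import os
import re

def latest_checkpoint(output_dir: str):
    """Return the path of the highest-numbered checkpoint-N subdir, or None."""
    pattern = re.compile(r"^checkpoint-(\d+)$")
    best_path, best_step = None, -1
    for name in os.listdir(output_dir):
        match = pattern.match(name)
        if match and int(match.group(1)) > best_step:
            best_step = int(match.group(1))
            best_path = os.path.join(output_dir, name)
    return best_path
```

If `--resume_from_checkpoint=True` is being ignored, it may be that the script expects the checkpoint path itself (e.g. `--resume_from_checkpoint output/checkpoint-9250`) rather than a boolean; it's worth checking how the script wires that flag into `trainer.train()`.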
Running out of VRAM? | 12 | Hi all, I'm trying to load the llama-2 model found [here](https://huggingface.co/TheBloke/Llama-2-13B-Chat-fp16) on a Windows Server machine, which has over 22 GB of VRAM. However, when I try to use textgen webUI to load the model, it keeps running out of VRAM, and I get this error:
[How is it running out of memory?](https://preview.redd.it/gl6s0bf9j7hb1.png?width=717&format=png&auto=webp&s=a1a8b0dc054737e1c6805273f7b588f869c8ccea)
I'm super new to LLMs, and so I'm not a hundred percent sure what's going on. Can someone tell me where in the documentation I can look, and what I can do to possibly fix this issue? Thanks. | 2023-08-10T04:07:14 | https://www.reddit.com/r/LocalLLaMA/comments/15n1ke9/running_out_of_vram/ | Milk_No_Titties | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n1ke9 | false | null | t3_15n1ke9 | /r/LocalLLaMA/comments/15n1ke9/running_out_of_vram/ | false | false | 12 | {'enabled': False, 'images': [{'id': '278vv-gaDInPJrV4meg7BOOTUW7YKO_D3QK8c_dNrgs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=108&crop=smart&auto=webp&s=ba70eae8eed4466418501d10d195b4a508560e16', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=216&crop=smart&auto=webp&s=089c2aa294d9b4ee778bb3d83e692370510ffb79', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=320&crop=smart&auto=webp&s=270da8b561adb3c7fd7395c7d9eea75bab92bc68', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=640&crop=smart&auto=webp&s=321f94f3a108786bc113af342a8b431d5054ec69', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=960&crop=smart&auto=webp&s=905b58b145f0881b2e97e6d2752ba9ae32641dce', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?width=1080&crop=smart&auto=webp&s=9ee4848878998564d40bfe0d6e0d1451cdd24898', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XYGLHvjGZZJyKbMyeVx1ao_mrT3n1nlSA_OsnCF7paY.jpg?auto=webp&s=f844c5ebde972712c202968a9db4751739918cfa', 'width': 1200}, 'variants': {}}]} | |
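As a back-of-envelope check of why this OOMs: an fp16 model needs roughly 2 bytes per parameter for the weights alone, before the KV cache and activations are counted. A rough calculation (approximate figures, overhead ignored):

```python
def weight_gib(n_params_billion: float, bytes_per_param: float) -> float:
    """Approximate VRAM needed for model weights alone (no KV cache/activations)."""
    return n_params_billion * 1e9 * bytes_per_param / 2**30

print(round(weight_gib(13, 2.0), 1))  # fp16 13B: ~24.2 GiB, over a 22 GB budget
print(round(weight_gib(13, 0.5), 1))  # ~4-bit quantized 13B: ~6.1 GiB, fits easily
```

So the fp16 13B checkpoint linked in the post can't fit in 22 GB of VRAM by itself; a quantized variant (GPTQ/GGML) or partial CPU offload is the usual fix.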
How to finetune LLM for text classfication | 1 | [removed] | 2023-08-10T03:49:02 | https://www.reddit.com/r/LocalLLaMA/comments/15n17d2/how_to_finetune_llm_for_text_classfication/ | KneeNo79 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n17d2 | false | null | t3_15n17d2 | /r/LocalLLaMA/comments/15n17d2/how_to_finetune_llm_for_text_classfication/ | false | false | self | 1 | null |
Announcing The best 7b model out there "orca-mini-v3-7b" | 38 | Enjoy...
[https://huggingface.co/psmathur/orca\_mini\_v3\_7b](https://huggingface.co/psmathur/orca_mini_v3_7b)
Here are the eval scores from the model card; the HuggingFace OpenLLM Leaderboard is slow... in processing any submitted model.
https://preview.redd.it/syga31ahf7hb1.png?width=1476&format=png&auto=webp&s=b3cda5d84ba794ce687a09cd14b96dad0f013eb2 | 2023-08-10T03:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/15n15x9/announcing_the_best_7b_model_out_there/ | Remarkable-Spite-107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15n15x9 | false | null | t3_15n15x9 | /r/LocalLLaMA/comments/15n15x9/announcing_the_best_7b_model_out_there/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'guUYdBTPGqCETHbTWC0pfjVVAtLWLBdWdpaqzhoOkeg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=108&crop=smart&auto=webp&s=8913ff9f5f5cc518b1ca10f3e1d537a40c5c6a36', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=216&crop=smart&auto=webp&s=106734868c447aa7befd61ae71c78df6fe9eefa3', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=320&crop=smart&auto=webp&s=855a7b9855eda1daaf931aa6a9cfefe4ebd2ce61', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=640&crop=smart&auto=webp&s=65359d911e1076828f2cfe0c38dccdea3f7382f6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=960&crop=smart&auto=webp&s=0c92e8629c75890581b990fdb6557ae50142266a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?width=1080&crop=smart&auto=webp&s=ccc8be25e8eeffe4ed09e238f84fd0e0e252da98', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/0FMo66kg0bfyhA3loH3xdtLm_ibnKI7wP3CJIAMZOxQ.jpg?auto=webp&s=979bde2cd202d2dc0e40d51485905b6ab507e88a', 'width': 1200}, 'variants': {}}]} | |
Nvidia reveals new A.I. chip, says costs of running LLMs will 'drop significantly' | 176 | 2023-08-10T01:22:26 | https://www.cnbc.com/2023/08/08/nvidia-reveals-new-ai-chip-says-cost-of-running-large-language-models-will-drop-significantly-.html | throwaway_ghast | cnbc.com | 1970-01-01T00:00:00 | 0 | {} | 15mxzip | false | null | t3_15mxzip | /r/LocalLLaMA/comments/15mxzip/nvidia_reveals_new_ai_chip_says_costs_of_running/ | false | false | 176 | {'enabled': False, 'images': [{'id': '9VcesSKYyBRhuBA2LT8loyzkyfc9jo3Df-gtyqBavzo', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=108&crop=smart&auto=webp&s=4994bf64a183526138ed30da59a57db49c60b5e4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=216&crop=smart&auto=webp&s=e7fc29f5533e942734124f740e1d1438a4b0533d', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=320&crop=smart&auto=webp&s=406e860d0b41b761f576fb90108af9674d010d34', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=640&crop=smart&auto=webp&s=040286edc7de0a58db3236f95a20c4a9870564db', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=960&crop=smart&auto=webp&s=cf625a125e2f60b30cfcf31cec36a755bcbae03e', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?width=1080&crop=smart&auto=webp&s=e28188b032b885237334f4839ec271cae5838ccb', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/GtsqazVABjGcAOB7jen39y3IkJi9qNVWGphzLxcQdxY.jpg?auto=webp&s=cbe0c48ead87ea8ce5821a9c8ce22182b13ca1f5', 'width': 1920}, 'variants': {}}]} | ||
Huggingface for LoRA's? | 6 | Is there an equivalent to civitai or huggingface when it comes to finding (language) LoRA's or character files/contexts that others have made? I haven't trained anything yet, but I would like to contribute somewhere when I do. | 2023-08-10T01:21:24 | https://www.reddit.com/r/LocalLLaMA/comments/15mxynh/huggingface_for_loras/ | prondis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mxynh | false | null | t3_15mxynh | /r/LocalLLaMA/comments/15mxynh/huggingface_for_loras/ | false | false | self | 6 | null |
Can an RTX 4090 load a 30b model? | 1 | My machine is running windows 11 and has 32gb of system RAM with an RTX 4090. I have tried loading 30b models and cant load them. I think I tried 3 so far, don't remember which. Can anyone give me a quick run down how to load a 30b model? I want to load the most capable 30b model as per https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard leaderboard. A step by step guide or some hints are appreciated, thanks. | 2023-08-10T00:46:37 | https://www.reddit.com/r/LocalLLaMA/comments/15mx5us/can_an_rtx_4090_load_a_30b_model/ | no_witty_username | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mx5us | false | null | t3_15mx5us | /r/LocalLLaMA/comments/15mx5us/can_an_rtx_4090_load_a_30b_model/ | false | false | self | 1 | null |
Success! I've managed to install Koboldcpp on my system. Now I want to learn how to use these tools effectively. Please advise. | 5 | What I feel like I want more than anything is to have a realistic expectation of what this tool can actually do so I can be consciously aware if I'm operating it poorly or not. I have not had great experiences so far. The power and creativity seems there, but I don't feel like I understand how to harness it.
Is it possible for those of you who are good with this stuff to share a prompt, model, settings, etc. that you know work pretty damn well, just so I can know what counts as decent performance and coherence and what doesn't?
I would very much appreciate it.
I'm open to suggestions for any and all modes, and I will fetch whatever model you suggest. | 2023-08-10T00:42:25 | https://www.reddit.com/r/LocalLLaMA/comments/15mx2b2/success_ive_managed_to_install_koboldcpp_on_my/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mx2b2 | false | null | t3_15mx2b2 | /r/LocalLLaMA/comments/15mx2b2/success_ive_managed_to_install_koboldcpp_on_my/ | false | false | self | 5 | null |
GPU shortage in main cloud providers | 8 | I am trying to launch Hugging Face's [text inference server](https://github.com/huggingface/text-generation-inference) from a VM with a good GPU (Nvidia T4), but it's hard to find the resources.
I first tried GCP, but no resources were available in either the US or Europe. Same problem with AWS. With Azure I could get a GPU, but I'm not sure I can really use it (Nvidia driver not found, etc.).
Do you see the same shortage of GPU? How do you deploy your LLM? Do you use only CPU based inference server? | 2023-08-10T00:04:44 | https://www.reddit.com/r/LocalLLaMA/comments/15mw65r/gpu_shortage_in_main_cloud_providers/ | No_Palpitation7740 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mw65r | false | null | t3_15mw65r | /r/LocalLLaMA/comments/15mw65r/gpu_shortage_in_main_cloud_providers/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'Imdm79xgxA9kvl-lV3xwf5z21dQlmO1EmbOBRPo2izk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=108&crop=smart&auto=webp&s=35d5961c5aac9a9636856245f9c1181fd5d37be9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=216&crop=smart&auto=webp&s=323961b952e3db830611ccd06ceed80d7a83f1b2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=320&crop=smart&auto=webp&s=4247c0e29f1f385c412063f9ddc873a225990026', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=640&crop=smart&auto=webp&s=674bce08b732c2facba0d722601c70b0a61882c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=960&crop=smart&auto=webp&s=d5eec314452b2e71a3b12474be053ac1033705e5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?width=1080&crop=smart&auto=webp&s=10497df7f651970838a34cd7b6fbe7628aca6843', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Z0ObBAMIVqlMzQd42XWHg1gsBtWbTbDlULLbvzFQP_s.jpg?auto=webp&s=e9547982230c5f865451dcd0988d53c7ed6a2d12', 'width': 1200}, 'variants': {}}]} |
vector search padding | 2 | I am doing a vector search for semantic matching. I chunked it so that i dont break sentences, so my chunks are around 768 characters + whatever it took to finish the sentence. However, the last chunk in the document is always < 768 and i see it gets picked up a lot - how do I pad this chunk? Do I append gibberish to it so it does not match as often? | 2023-08-09T23:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/15mvi6y/vector_search_padding/ | Alert_Record5063 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mvi6y | false | null | t3_15mvi6y | /r/LocalLLaMA/comments/15mvi6y/vector_search_padding/ | false | false | self | 2 | null |
Announcing The best 13b model out there "orca-mini-v3-13b" | 97 | I guess the correct title is "Announcing the best 13b model out there 'orca-mini-v3-13b' for a few hours"...
UPDATE 2: A big thanks to TheBloke who has generously created the GGML/GPTQ versions. You can access them via the following links:
https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML
https://huggingface.co/TheBloke/orca_mini_v3_13B-GPTQ
After careful consideration, I've decided not to release the full model weights openly for now. I need to find the best way to receive appropriate recognition for the extensive hard work, computing costs, and resources that go into refining these models from their pre-trained versions. In the meantime, I encourage you to utilize the **quantized versions**.
UPDATE: Model mergers are now taking over the OpenLLM Leaderboard, like these guys https://huggingface.co/garage-bAInd, so I guess there is no point in doing pure SFT on pretrained base models and submitting to the HF OpenLLM Leaderboard, because there will always be someone who takes the best model out there, merges it with the second- or third-best one, and creates a new merge.
I am making my orca-mini-v3-13b gated in the meantime, until I figure out where all this is going.
Maybe this is all good for the community, or maybe not.
I am open to suggestions.
Enjoy...
[https://huggingface.co/psmathur/orca\_mini\_v3\_13b](https://huggingface.co/psmathur/orca_mini_v3_13b)
https://preview.redd.it/smaelw3c76hb1.png?width=2060&format=png&auto=webp&s=890c940dd84a682c868aa436b32c6632e4c0ac3b | 2023-08-09T23:37:29 | https://www.reddit.com/r/LocalLLaMA/comments/15mvi5a/announcing_the_best_13b_model_out_there/ | Remarkable-Spite-107 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mvi5a | true | null | t3_15mvi5a | /r/LocalLLaMA/comments/15mvi5a/announcing_the_best_13b_model_out_there/ | false | false | 97 | {'enabled': False, 'images': [{'id': 'OFIkFuaVgXnuF7Xruhbg4ij3bMERJlHH42J0tq-7oAU', 'resolutions': [{'height': 47, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=108&crop=smart&auto=webp&s=35a6958630dfdb9e4e1a0f0c35e6d66c59bf9091', 'width': 108}, {'height': 94, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=216&crop=smart&auto=webp&s=8e5b198b9e6c8e49f0f7f92b8fc4ac434c7165e4', 'width': 216}, {'height': 140, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=320&crop=smart&auto=webp&s=336cdbac597b748007892e379987fe5ff7ca65cb', 'width': 320}, {'height': 281, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=640&crop=smart&auto=webp&s=0ad5f7dbfe75b357e0fc1b1346105681fdb16f9f', 'width': 640}, {'height': 422, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=960&crop=smart&auto=webp&s=92ef8acd36c41c43c57dfce9882e1acbedd3ff7d', 'width': 960}, {'height': 474, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?width=1080&crop=smart&auto=webp&s=6877835f1223e3c70250bf39d51befe5c5bf7ea6', 'width': 1080}], 'source': {'height': 906, 'url': 'https://external-preview.redd.it/Mla6jXUlc4o0tmUC_QFs5i44lu4eSYB9oL5EgEL5S4k.png?auto=webp&s=b5460bebcba502848276c27d69a7fb72b1029ef9', 'width': 2060}, 'variants': {}}]} | |
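For context on the merging trend mentioned in the update above, the simplest mergers are plain parameter-space interpolations between two fine-tunes of the same base model. A minimal sketch (plain Python floats stand in for the real tensors; with PyTorch checkpoints the same dict comprehension works on the tensors in a loaded state dict):

```python
def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Linearly interpolate two checkpoints with identical architectures."""
    assert sd_a.keys() == sd_b.keys(), "models must share parameter names"
    return {k: alpha * sd_a[k] + (1 - alpha) * sd_b[k] for k in sd_a}
```

More elaborate schemes (SLERP, per-layer mixing weights) follow the same pattern, which is why a merged model can be produced in minutes from two published fine-tunes.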
LLM source of knowledge | 1 | I've been diving into the world of large language models (LLMs) and their amazing capabilities recently. One idea that's been on my mind is using LLMs as a source of knowledge for staying up-to-date with the latest news.
Two ideas crossed my mind:
1. Daily Fine-Tuning: Should I consider fine-tuning the LLM on news articles daily to ensure I'm getting the latest and most accurate updates?
2. Query Retrieval System: Or would it be more efficient to develop a query retrieval system that interacts with the LLM to fetch relevant news information in real time?
How would you recommend I go about this? And is there a better solution for this idea?
| 2023-08-09T22:54:26 | https://www.reddit.com/r/LocalLLaMA/comments/15muexp/llm_source_of_knowledge/ | Dull-Morning4790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15muexp | false | null | t3_15muexp | /r/LocalLLaMA/comments/15muexp/llm_source_of_knowledge/ | false | false | self | 1 | null |
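For option 2, the usual shape is retrieval-augmented generation: index the news articles, pull the passages most similar to the query, and stuff them into the prompt. A toy sketch with plain term-frequency vectors (a real system would swap in an embedding model and a vector store; the function names here are illustrative, not from any library):

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term frequencies; a stand-in for a real embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(count * b[term] for term, count in a.items())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query, articles, k=2):
    """Return the k articles most similar to the query."""
    qv = tf_vector(query)
    return sorted(articles, key=lambda doc: cosine(qv, tf_vector(doc)),
                  reverse=True)[:k]

def build_prompt(query, articles, k=2):
    """Stuff the retrieved passages into the LLM prompt."""
    context = "\n".join(f"- {a}" for a in retrieve(query, articles, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

This sidesteps daily fine-tuning entirely: the model stays frozen and only the article index is updated, which is far cheaper and avoids catastrophic forgetting.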
SillyTavern's Roleplay preset vs. model-specific prompt format | 77 | 2023-08-09T22:46:47 | https://imgur.com/a/dHSrZag | WolframRavenwolf | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 15mu7um | false | {'oembed': {'description': 'Discover the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users.', 'height': 642, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FdHSrZag%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D859&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FdHSrZag&image=https%3A%2F%2Fi.imgur.com%2FMD1Pm2s.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="642" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen; encrypted-media; picture-in-picture;" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/MD1Pm2s.jpg?fb', 'thumbnail_width': 600, 'title': 'Comparison between Airoboros-specific Roleplay preset and original universal Roleplay preset', 'type': 'rich', 'url': 'https://imgur.com/a/dHSrZag', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_15mu7um | /r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'DQg2xV1mRiquLnUzSpHec8dRaE8sqlpA_xDH__lRlPg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=108&crop=smart&auto=webp&s=af401db8307829dbbe8370e62660602b0099f3ea', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=216&crop=smart&auto=webp&s=9a55f46042812de627991d9cc2b3a1ceb99af3b1', 
'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=320&crop=smart&auto=webp&s=e2d040d0c52d3b2874193031bead1cc1000c5a78', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?width=640&crop=smart&auto=webp&s=dc295cd01bbd2ee1d6175d1d7cee831edb7e24bc', 'width': 640}], 'source': {'height': 859, 'url': 'https://external-preview.redd.it/hWwjeHDd50jDFL4MGjf5amOF_1s--QNvt3OjPvn_yV4.jpg?auto=webp&s=aabb4a8792eccd71b9917e377a8c661800a1f7ea', 'width': 859}, 'variants': {}}]} | ||
Vicuna 13b on RK3588 with Mali G610, OpenCL enabled. prefill: 2.3 tok/s, decode: 1.6 tok/s | 19 | Huge thanks to the Apache TVM and MLC-LLM teams; they created a really fantastic framework that enables LLMs to run natively on consumer-level hardware. Now you can literally run Vicuna-13B on an Arm SBC with GPU acceleration.
* Fast enough to run RedPajama-3b (prefill: 10.2 tok/s, decode: 5.0 tok/s)
* Decent speed on Vicuna-13b (prefill: 1.8 tok/s, decode: 1.8 tok/s)
This really gives me a chance to create a totally offline LLM device. | 2023-08-09T21:58:47 | https://www.reddit.com/r/LocalLLaMA/comments/15msxzk/vicuna_13b_on_rk3588_with_mail_g610_opencl/ | EmotionalFeed0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15msxzk | false | null | t3_15msxzk | /r/LocalLLaMA/comments/15msxzk/vicuna_13b_on_rk3588_with_mail_g610_opencl/ | false | false | self | 19 | null |
AlpacaCielo2-7b-8k | 6 | [https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k](https://huggingface.co/totally-not-an-llm/AlpacaCielo2-7b-8k)
Updated version of my AlpacaCielo model for 7b. Now has better roleplaying capabilities, 8k context, and a system prompt. While being creative it's also really smart, and using orca-style/CoT system prompts can make it very good at reasoning. GGML and GPTQs are available thanks to TheBloke; links on the Hugging Face page. | 2023-08-09T21:56:41 | https://www.reddit.com/r/LocalLLaMA/comments/15msw0f/alpacacielo27b8k/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15msw0f | false | null | t3_15msw0f | /r/LocalLLaMA/comments/15msw0f/alpacacielo27b8k/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '60dTBJOdHDYA6LEzdjqo1K6Pd1_wvXPBN1Mt4gOTeik', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=108&crop=smart&auto=webp&s=442c01891ecf9f21565c3b2815e4c5e0ca384930', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=216&crop=smart&auto=webp&s=93537c72b6323014ff548de2119d815b2c159496', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=320&crop=smart&auto=webp&s=91ff9dea68af680168564d3f187992ac65779f7a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=640&crop=smart&auto=webp&s=c5030339e4347a3e1cddd525fe61347531d4534b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=960&crop=smart&auto=webp&s=e230a06fe21894c2186b8704f85f505e06b68437', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?width=1080&crop=smart&auto=webp&s=0e69658074a747cf1a22e9442afa4b7252691bc2', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/uHz1-CPG1rwJ8uWT5Lz-mDkbLC2N98zdySvWin523ck.jpg?auto=webp&s=9eb70078604e7b20f43c6ccfecccd83f2fe1de9f', 'width': 1200}, 'variants': {}}]} |
Generative Agents now open-sourced. | 61 | 2023-08-09T21:55:25 | https://github.com/joonspk-research/generative_agents | ninjasaid13 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15msuvx | false | null | t3_15msuvx | /r/LocalLLaMA/comments/15msuvx/generative_agents_now_opensourced/ | false | false | 61 | {'enabled': False, 'images': [{'id': 'gc7vF97XLKz_geNSLhfMK561OFm416eoyTnwHYJzfNQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=108&crop=smart&auto=webp&s=9fb62091efb34fddecf2ff0c8891bb1023d6d5b9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=216&crop=smart&auto=webp&s=db7591c2cd48953a88697dc1ee50accf0e338767', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=320&crop=smart&auto=webp&s=cd9c65bba49ed6e88601e68ef00b543fe6478b03', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=640&crop=smart&auto=webp&s=ec81a347ebc27f6c3ed707dd6ae8c017b0658916', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=960&crop=smart&auto=webp&s=a52488d2f958d00ec9412d383a5d53f743de96f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?width=1080&crop=smart&auto=webp&s=08002ab9a801695b689242be4373796cd2541b71', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/CbbORqiEp9I7Lu20RydDH1CxLmaNh9ME4jnua83Nhq0.jpg?auto=webp&s=b108809d9fa7559424afe1766b24b248a87b04b8', 'width': 1200}, 'variants': {}}]} | ||
How to enable long-term memory in LLMs? | 39 | Let's say I have multiple conversations with an LLM stored somewhere; are there any resources/approaches to enable long-term memory in the LLM? Ideally you'd just store the entire conversation history and feed it in as a prompt, but that doesn't seem feasible given the limited context windows of most models.
Does anyone have any smart ways to do this? Or is there any literature out there on priming an LLM to "remember" a particular user?
Would appreciate any insights! | 2023-08-09T21:20:18 | https://www.reddit.com/r/LocalLLaMA/comments/15mrx2n/how_to_enable_longterm_memory_in_llms/ | Ok_Coyote_8904 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mrx2n | false | null | t3_15mrx2n | /r/LocalLLaMA/comments/15mrx2n/how_to_enable_longterm_memory_in_llms/ | false | false | self | 39 | null |
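One lightweight pattern (a sketch of a hybrid approach; word overlap is an illustrative stand-in for real embedding similarity): keep the last few turns verbatim for short-term coherence, and recall older turns only when they look relevant to the incoming message.

```python
from collections import Counter

class ConversationMemory:
    """Short-term: last `recent` turns verbatim. Long-term: recall older
    turns whose wording overlaps the incoming message."""

    def __init__(self, recent=4, recall=2):
        self.turns, self.recent, self.recall = [], recent, recall

    def add(self, speaker, text):
        self.turns.append((speaker, text))

    def _overlap(self, a, b):
        # Count distinct shared words; a production system would use
        # cosine similarity over embeddings instead.
        return len(Counter(a.lower().split()) & Counter(b.lower().split()))

    def context_for(self, message):
        recent = self.turns[-self.recent:]
        older = self.turns[:-self.recent]
        recalled = sorted(older, key=lambda t: self._overlap(t[1], message),
                          reverse=True)[:self.recall]
        recalled = [t for t in recalled if self._overlap(t[1], message) > 0]
        return "\n".join(f"{s}: {t}" for s, t in recalled + recent)
```

Replacing `_overlap` with embedding similarity and backing `self.turns` with a vector database gives the usual production version; periodic summarization of old turns is another common complement.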
Do you guys think the 34b LLaMA 2 model has been cancelled? | 117 | It's been a while, and Meta has not said anything about the 34b model from the original LLaMA 2 paper. The fine-tuned instruction model did not pass their "safety" metrics, so they decided to take time to "red team" the 34b model. However, that was the chat version of the model, not the base one, and they didn't even bother to release the base 34b model, which is a shame since the base model theoretically has nothing to do with it.
I've seen a lot of wealthy users enjoying the 70b model, people moving to the budget option (13b), and people accepting the painful performance of running everything off the CPU... and no real demand for the 30b model, which used to be the "affordable" power-user option for LLaMA 1.
Do you guys think Meta gave up on it? I am under the impression that this is the case, as there is no mention of it on the official LLaMA repository. Releasing the base model (not fine-tuned) would be trivial for them, so maybe they think there is an underlying problem with the base model and will hardly bother to retrain the whole thing.
What do you think? | 2023-08-09T21:14:39 | https://www.reddit.com/r/LocalLLaMA/comments/15mrrnm/do_you_guys_think_the_34b_llama_2_model_has_been/ | hellninja55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mrrnm | false | null | t3_15mrrnm | /r/LocalLLaMA/comments/15mrrnm/do_you_guys_think_the_34b_llama_2_model_has_been/ | false | false | self | 117 | null |