title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Bright Eye: free mobile IOS app that generates text and art! | 1 | 2023-08-13T18:56:17 | https://v.redd.it/30ad0zhhcxhb1 | AI4MI | /r/LocalLLaMA/comments/15q6zx1/bright_eye_free_mobile_ios_app_that_generates/ | 1970-01-01T00:00:00 | 0 | {} | 15q6zx1 | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/30ad0zhhcxhb1/DASHPlaylist.mpd?a=1694631382%2CYzZiMDIwM2Q1YmZlOTVmMTExNGQ3NDE1YWM4NjAyODc1Zjc3ZmM2ZGRkYTcxNDE4MzcyYWYwZTYwZjg2MDM0OA%3D%3D&v=1&f=sd', 'duration': 84, 'fallback_url': 'https://v.redd.it/30ad0zhhcxhb1/DASH_720.mp4?source=fallback', 'height': 1280, 'hls_url': 'https://v.redd.it/30ad0zhhcxhb1/HLSPlaylist.m3u8?a=1694631382%2CZTY2OTRiNjg2MzBiZGU4YjNlN2VmMzlmNWVhMjQyYmEzNWM1NzEwNGM5MWU4OTIwYzIzNzYwYjc2ZGQ5OTY5Mg%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/30ad0zhhcxhb1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 592}} | t3_15q6zx1 | /r/LocalLLaMA/comments/15q6zx1/bright_eye_free_mobile_ios_app_that_generates/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=108&crop=smart&format=pjpg&auto=webp&s=b90bf8bbc0220225066c52d2e68c8cb25eeaf420', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=216&crop=smart&format=pjpg&auto=webp&s=1ff593595727c346802557f609c3620c47641ff2', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=320&crop=smart&format=pjpg&auto=webp&s=51d0ba0838a5e275dffe70c1a8c443bb5db280f2', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?width=640&crop=smart&format=pjpg&auto=webp&s=3a708ff7ed6078b26ef2bf89318e54d65c2c68d7', 'width': 640}], 'source': {'height': 1792, 'url': 'https://external-preview.redd.it/bDBsdWI2YmhjeGhiMXJosUbBtQYMk-MO6hKQEDCiuHUuB_ymhUsOgHgLkPFr.png?format=pjpg&auto=webp&s=96f4ddbfc6bc6886a3631a30d19ec9d778bf9192', 'width': 828}, 'variants': {}}]} | ||
How important is choosing an embedding model? | 25 | Does it really matter which embedding model I choose for RAG? So far, I've been blindly picking the top overall MTEB models from hugging face. I have no knowledge of what any of the benchmarks mean, but it doesn't seem like the models differ much in performance. | 2023-08-13T18:23:32 | https://www.reddit.com/r/LocalLLaMA/comments/15q66z3/how_important_is_choosing_an_embedding_model/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q66z3 | false | null | t3_15q66z3 | /r/LocalLLaMA/comments/15q66z3/how_important_is_choosing_an_embedding_model/ | false | false | self | 25 | null |
Thoughts on having a MAC MINI Powered local server setup? | 2 | I was looking into building a local server LLM setup and was considering going with two used MAC MINIs or a MAC Studio.
Is there a better price to performance ratio that I can achieve, or any other thoughts about this approach? | 2023-08-13T18:12:08 | https://www.reddit.com/r/LocalLLaMA/comments/15q5wsg/thoughts_on_having_a_mac_mini_powered_local/ | Gravy_Pouch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5wsg | false | null | t3_15q5wsg | /r/LocalLLaMA/comments/15q5wsg/thoughts_on_having_a_mac_mini_powered_local/ | false | false | self | 2 | null |
How should I chunk text from a textbook for the best embedding results? | 4 | My guess is that I should follow the natural structure of the textbook and chunk my text by chapter, section, subsection, etc while retaining the relevant metadata. The problem is that I have no idea how to do that lol.
​
Can someone tell me a better way to chunk a textbook or give me the basic guidelines so I can ask ChatGPT? | 2023-08-13T17:56:58 | https://www.reddit.com/r/LocalLLaMA/comments/15q5j48/how_should_i_chunk_text_from_a_textbook_for_the/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5j48 | false | null | t3_15q5j48 | /r/LocalLLaMA/comments/15q5j48/how_should_i_chunk_text_from_a_textbook_for_the/ | false | false | default | 4 | null |
has anyone been able to create production level model? | 8 | has anyone been able to create production level model? I have been researching on the LLaMa and others and unfortunately everyone is complaining.
I have a bunch of documents lets say 10000 pages and i want to the model to be an expert in it and answer questions which chatpdf will not be able to answer. What are my options? is it possible to train the model on all pages and not having to use search and feed in the LLM? | 2023-08-13T17:56:48 | https://www.reddit.com/r/LocalLLaMA/comments/15q5iz2/has_anyone_been_able_to_create_production_level/ | affilitebabra998 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q5iz2 | false | null | t3_15q5iz2 | /r/LocalLLaMA/comments/15q5iz2/has_anyone_been_able_to_create_production_level/ | false | false | self | 8 | null |
LLM Hardware Setup vs Speed Question | 8 | I've been using SillyTavern for a while and although I'm happy with the quality of the outputs, it is just too slow on my old GPU or laptop. I'm curious how worth it an investment into a 3090 or 4090 would be. I'm only using a model like this:
[mythomax-l2-13b.ggmlv3.q4\_K\_M.bin](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML/blob/main/mythomax-l2-13b.ggmlv3.q4_K_M.bin)
so it is quantized and has 13 billion parameters. If any of you have experience with this, I'd love to learn more about how fast a model like this would run and what setup you have respectively. Thanks for the help. | 2023-08-13T17:35:24 | https://www.reddit.com/r/LocalLLaMA/comments/15q4zrd/llm_hardware_setup_vs_speed_question/ | Dramatic_Road3570 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q4zrd | false | null | t3_15q4zrd | /r/LocalLLaMA/comments/15q4zrd/llm_hardware_setup_vs_speed_question/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'oL3fMXfQ2MO77UAAm5ordmM6HOjTmLuuyhcTIG7-kag', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=108&crop=smart&auto=webp&s=abd8b47541465ae92daa7d48de36f185ad7df83e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=216&crop=smart&auto=webp&s=bf56208def4d316d27b536a50605345f5ed8100a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=320&crop=smart&auto=webp&s=6595de4493b07b69df5b749a6260560ad0ddf080', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=640&crop=smart&auto=webp&s=a29443628b49f33d3a6398595400da4d7e4b2be3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=960&crop=smart&auto=webp&s=24353a123ef4c28e1a1d115e5de2ee27a89d903d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?width=1080&crop=smart&auto=webp&s=226d229b2113866132cda77a23d6fd3c74f7c3d5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/UbmQkF4DAuEHoyRsqvf8GLRZ4kaTHo08gPbJsHhpsuE.jpg?auto=webp&s=088c47855c8d6415b20b4ede92e4f71d19d4a859', 'width': 1200}, 'variants': {}}]} |
How should I preprocess my text to optimize text embeddings? | 6 | Pretty much the title.
I've heard of preprocessing strategies such as lowercasing, stop word removal, stemming, lemmatization, punctuation removal, special characters removal, and regular expression removal. However, many of these strategies seem like they might remove semantically relevant information about the text. For example, wouldn't lowercasing, lemmatization, punctuation removal, and stemming get rid of important grammatical information that the embedding model could use to more accurately vectorize text?
My guess is that I should try to preprocess my text using the same methods used in the embedding model's training data. What do y'all think? | 2023-08-13T17:35:02 | https://www.reddit.com/r/LocalLLaMA/comments/15q4ze6/how_should_i_preprocess_my_text_to_optimize_text/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q4ze6 | false | null | t3_15q4ze6 | /r/LocalLLaMA/comments/15q4ze6/how_should_i_preprocess_my_text_to_optimize_text/ | false | false | self | 6 | null |
Llama for document matching? | 1 | [removed] | 2023-08-13T16:34:07 | https://www.reddit.com/r/LocalLLaMA/comments/15q3fg1/llama_for_document_matching/ | BoxLazy8046 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q3fg1 | false | null | t3_15q3fg1 | /r/LocalLLaMA/comments/15q3fg1/llama_for_document_matching/ | false | false | self | 1 | null |
Complete noob: chatting with a set of local documents? | 1 | Hi, I discovered the LocalGPT and Chatdocs projects on github a while ago and really liked the idea of "chatting" with my growing pdf library and receive answers.
However I'm very new to the LM/AI world so I hope you don't mind my question... What would be the easiest option to run something like that locally? Would my current laptop (amd 6800HS + Nvidia RTX 4060, 32gb ram) be enough, or do I need better hardware, different models, etc?
Thanks in advance | 2023-08-13T16:15:46 | https://www.reddit.com/r/LocalLLaMA/comments/15q2y80/complete_noob_chatting_with_a_set_of_local/ | TheGlobinKing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2y80 | false | null | t3_15q2y80 | /r/LocalLLaMA/comments/15q2y80/complete_noob_chatting_with_a_set_of_local/ | false | false | self | 1 | null |
Noob question: How do I make use of all my VRAM with llama.cpp in oobabooga webui? | 3 | I have two GPUs with 12GB VRAM each. Offloading 28 layers, I get almost 12GB usage on one card, and around 8.5GB on the second, during inference. Setting n-gpu-layers any higher gives me an out of memory error. How can I make use of that remaining 3.5 GB on the second GPU? | 2023-08-13T16:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/15q2vql/noob_question_how_do_i_make_use_of_all_my_vram/ | Acceptable-Trade-46 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2vql | false | null | t3_15q2vql | /r/LocalLLaMA/comments/15q2vql/noob_question_how_do_i_make_use_of_all_my_vram/ | false | false | self | 3 | null |
Can someone recommend a basic setup for a 4090 and 128 RAM? | 25 | I would like to have something similar to ChatGPT running locally. Its mostly used to improve emails and social media posts. My GPU is often executing 3ds max rendering and SD imagery, so I can't have something that will remain loaded in the VRAM.
I've got Oogabooga running a while ago, and its OK, but the interface and configuration is confusing.
I like the idea of performing these tasks offline and have tried, but so many models and specific tasks for coding, I get bogged down in technical jargon. Thank you! | 2023-08-13T15:53:50 | https://www.reddit.com/r/LocalLLaMA/comments/15q2e8c/can_someone_recommend_a_basic_setup_for_a_4090/ | Sweet_Baby_Moses | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q2e8c | false | null | t3_15q2e8c | /r/LocalLLaMA/comments/15q2e8c/can_someone_recommend_a_basic_setup_for_a_4090/ | false | false | self | 25 | null |
How to get up to speed? | 16 | There is a knowledge pyramid here, and some people who know everything there is to know about LLaMA (the ones who made it) at the top, and me at the bottom. I’m ignorant.
Is there a single book that I can read that will get me to the level of the pyramid, not at the tip top, but the level where one is generally proficient at installing, using, training and deploying LLaMA models?
I’m a CS student about to graduate and none of my classes touched on LLMs. | 2023-08-13T14:39:39 | https://www.reddit.com/r/LocalLLaMA/comments/15q0lmw/how_to_get_up_to_speed/ | Overall-Importance54 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q0lmw | false | null | t3_15q0lmw | /r/LocalLLaMA/comments/15q0lmw/how_to_get_up_to_speed/ | false | false | self | 16 | null |
Newbie is confused about how to train Llama 2, needs hand-holding. | 1 | [removed] | 2023-08-13T14:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/15q0iyc/newbie_is_confused_about_how_to_train_llama_2/ | emotionalHunterEx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q0iyc | false | null | t3_15q0iyc | /r/LocalLLaMA/comments/15q0iyc/newbie_is_confused_about_how_to_train_llama_2/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'z2HdRfGrX_QS4_TnwDeHjTgrpOd2uGmfmEZQf63iZWI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=108&crop=smart&auto=webp&s=d840bf220765e7b6df8c36771f071c82dc53eee4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=216&crop=smart&auto=webp&s=714db9b135c12543746691b8a956acfd07122580', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=320&crop=smart&auto=webp&s=e1a8f89ae830c69fa429ef112b425aba1b64bdf2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=640&crop=smart&auto=webp&s=31e2c79449868e179793a1f2d70f5d78de751d08', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=960&crop=smart&auto=webp&s=262b4daf154aadda8f746529eb973650ecbe9e01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?width=1080&crop=smart&auto=webp&s=700bfff52f422ffd0ff53c1ea12551bbdee98a62', 'width': 1080}], 'source': {'height': 1012, 'url': 'https://external-preview.redd.it/E9s1YS_pvBGEZmSqZIuanbwW6PusBWiPmN9jS6rO-xo.jpg?auto=webp&s=c2f80796e75ceb2043e71b915e84ad78ae348afa', 'width': 2024}, 'variants': {}}]} |
Fine-tune the llama 2 via SFT and DPO | 11 | Finally, I managed to get out from my addiction to Diablo 4 and found some time to work on the llama2 :p. Here is the repo containing the scripts for my experiments with fine-tuning the llama2 base model for my grammar corrector app. So far, the performance of llama2 13b seems as good as llama1 33b. However, I'm not really like with the results after applying DPO alignment that aligns with human preferences. somehow it makes the model's output kind of off-purpose.
[https://github.com/mzbac/llama2-fine-tune](https://github.com/mzbac/llama2-fine-tune)
Hope the code can help you guys to try out the DPO on your custom dataset. | 2023-08-13T14:23:06 | https://www.reddit.com/r/LocalLLaMA/comments/15q07a1/finetune_the_llama_2_via_sft_and_dpo/ | mzbacd | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15q07a1 | false | null | t3_15q07a1 | /r/LocalLLaMA/comments/15q07a1/finetune_the_llama_2_via_sft_and_dpo/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'mzlpUxLxgdIHkm_czLL4hXE5jvTQpS4GujfRXdQg7Rs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=108&crop=smart&auto=webp&s=5a504b706b5c91366e8aee7f19537543552d16d3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=216&crop=smart&auto=webp&s=3d2f21e0187dc1d935e559a05bacf9fb66c5e2c5', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=320&crop=smart&auto=webp&s=f597a8e18bef463fd1d11ce3c759dd45e25897ec', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=640&crop=smart&auto=webp&s=290285f6edfacdb69f799b68d5cc7154fd3ff9bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=960&crop=smart&auto=webp&s=e0be13ab7222ba907ee6d05cc8c8e66ae9441b01', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?width=1080&crop=smart&auto=webp&s=aa83b23eaf5a4d1555d5ae90d8f9eb0adf3bb299', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wQBnWuG_Z6oeYg-plsjYEpuj9i9AjJQZeMgT-ntqr7k.jpg?auto=webp&s=fc5f4772ce55a86ec85a0f93d86993a8b574c751', 'width': 1200}, 'variants': {}}]} |
Why I am disappointed by Llama(2) | 1 | [removed] | 2023-08-13T13:16:16 | https://www.reddit.com/r/LocalLLaMA/comments/15pynzd/why_i_am_disappointed_by_llama2/ | SecretOk9644 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pynzd | false | null | t3_15pynzd | /r/LocalLLaMA/comments/15pynzd/why_i_am_disappointed_by_llama2/ | false | false | self | 1 | null |
Run LLama-2 13B, very fast, Locally on Low-Cost Intel ARC GPU | 1 | 2023-08-13T13:12:59 | https://youtu.be/FRWy7rzOsRs | reps_up | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 15pylcd | false | {'oembed': {'author_name': 'AI Tarun', 'author_url': 'https://www.youtube.com/@aitarun', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/FRWy7rzOsRs?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Run LLama-2 13B, very fast, Locally on Low Cost Intel's ARC GPU , iGPU and on CPU"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/FRWy7rzOsRs/hqdefault.jpg', 'thumbnail_width': 480, 'title': "Run LLama-2 13B, very fast, Locally on Low Cost Intel's ARC GPU , iGPU and on CPU", 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15pylcd | /r/LocalLLaMA/comments/15pylcd/run_llama2_13b_very_fast_locally_on_lowcost_intel/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'kSQFhAvJQ8IEQAJTvXBRh3sOntWdaff5gmY-OsSQGe4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=108&crop=smart&auto=webp&s=64c55e32f22dc19fd9f0597cb11f0b6632a4a104', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=216&crop=smart&auto=webp&s=c4136fab824134b5de8a3b9f7b5627e87a5212e5', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?width=320&crop=smart&auto=webp&s=8aeef0d151332450e6191ad298e16c1b9effc776', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/9H6PqHNdncp-vBqqNSRqidLtx5P9xZUWHWZAbBgcmdk.jpg?auto=webp&s=85c6941ee292b62efe6c79e36356fae4a14a2047', 'width': 480}, 'variants': {}}]} | |
Llama 2 70B's Response when asked about Llama | 5 | I was thinking to write an article and was bit lazy when I was almost near to finishing it. So thought why not ask the model itself what to write about it. And this is what happened when asked it to tell some key points about Llama(it's kinda more focusing on that Leak part)
Website: chat.nbox.ai | 2023-08-13T13:07:09 | Automatic-Net-757 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15pygpf | false | null | t3_15pygpf | /r/LocalLLaMA/comments/15pygpf/llama_2_70bs_response_when_asked_about_llama/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'FkiVgLhTOmKn-35ats9vfpKz-c4Thw4THbwzbCONhXI', 'resolutions': [{'height': 46, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=108&crop=smart&auto=webp&s=2ff51fb3c7861979a3c9e98e9aef2e565d6cd005', 'width': 108}, {'height': 92, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=216&crop=smart&auto=webp&s=9acb87d6522fe9eb37f026d5af4d1e8fcf0e26fb', 'width': 216}, {'height': 136, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=320&crop=smart&auto=webp&s=4ac09c12e0150daddd7caa0d4e0580fcd9486d52', 'width': 320}, {'height': 272, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=640&crop=smart&auto=webp&s=8d53d9c891b1404b68880f6f3a86115910b21ee0', 'width': 640}, {'height': 408, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=960&crop=smart&auto=webp&s=0480f2dd0f0192bee5d32827da1dd4390e267c17', 'width': 960}, {'height': 460, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?width=1080&crop=smart&auto=webp&s=3223d4df8463720c0daa6cfa955c8cc837c4bc3d', 'width': 1080}], 'source': {'height': 460, 'url': 'https://preview.redd.it/8bwm63akmvhb1.jpg?auto=webp&s=7129965339b1a7813a6db3d11ad3f31bbee314b9', 'width': 1080}, 'variants': {}}]} | ||
🧠💻Using Code to Boost ChatGPT's Thinking: Can We Teach the LLAMA Model to Do the Same? Share Your Thoughts! | 10 | Hello :)
It seems that chatgpt is more intelligent if it leverages coding to help it think:
[https://chat.openai.com/share/af8e1bdb-b0cc-4e04-b539-6546e67e35c1](https://chat.openai.com/share/af8e1bdb-b0cc-4e04-b539-6546e67e35c1)
[https://chat.openai.com/share/972c2129-3614-40c5-a133-2403ce7bc9b2](https://chat.openai.com/share/972c2129-3614-40c5-a133-2403ce7bc9b2)
​
**Is there a way to tell a LLAMA model to write code to help it think?**
​
thanks :) | 2023-08-13T12:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/15py0no/using_code_to_boost_chatgpts_thinking_can_we/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15py0no | false | null | t3_15py0no | /r/LocalLLaMA/comments/15py0no/using_code_to_boost_chatgpts_thinking_can_we/ | false | false | self | 10 | null |
A local alternative to the code-search-ada-code-001? | 1 | Does anyone know the local alternative to the code-search-ada-code-001 or any embedding generator for the source code?
Thanks! | 2023-08-13T12:28:05 | https://www.reddit.com/r/LocalLLaMA/comments/15pxmrl/a_local_alternative_to_the_codesearchadacode001/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pxmrl | false | null | t3_15pxmrl | /r/LocalLLaMA/comments/15pxmrl/a_local_alternative_to_the_codesearchadacode001/ | false | false | self | 1 | null |
I am confused. So many models to choose from. | 161 | I have been looking at different models from hugginh face and is getting ovewhelmed by all these different models that I am unable to differentiate them.
Airoboros, Guanaco, Vicuna, Orca, Wizard, Platypus, Beluga, Chronos, Hermes, LlongMa, etc.
I mean what are the differences between them? They seem to all strive to become all around AI models and the differences are too technical for me to understand. Is there an easier way to differentiate them to know which one is better? Or do I really have to try them one by one?
May aim is simple. I am looking for a 13B llama-2 based GGML model (q4\_k\_s preferrably) for a simple AI assistant with tweaked personality of my choice (I use oobabooga character chat settings). Nothing extremely hard but I want my AI to be consistent to the context assigned to them while being an AI assistant (ie: tsundere or mischievous personality etc). Any insight to those who are more experienced is greatly appreciated. Thank you! | 2023-08-13T12:10:48 | https://www.reddit.com/r/LocalLLaMA/comments/15px9x1/i_am_confused_so_many_models_to_choose_from/ | Spirited_Employee_61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15px9x1 | false | null | t3_15px9x1 | /r/LocalLLaMA/comments/15px9x1/i_am_confused_so_many_models_to_choose_from/ | false | false | self | 161 | null |
"Lost and Llama-less: Join the Search for the Missing LLAMA - A Heartfelt Plea for Help in The Great LLAMA Hunt" | 0 | "Lost and Llama-less: Join the Search for the Missing LLAMA - A Heartfelt Plea for Help in The Great LLAMA Hunt"
So as a llama-less lost newbie in the world of AI who is hunting/searching for his llama for a long time , I have a simple question. How can I get this llama on my poor pc, without spending a guizilian dollars on new hardware?
1.How can I run LLama2 13b model on my R5 5600G?
2.I don't have a discrete GPU, not planning on getting one any soon.
3.How much RAM I need? Currently I got 16GB, I am fine with buying more. I can tolerate getting more RAM.
4.Please give me some steps to follow or a link, even if it's not detailed, but providing some steps will help me start from there.
5. Thank you in advance 🙂
- Sakamoto, the man pursuing the llama dream for weeks, and still hunting.
Ain't no llama gonna defeat me, am gonna get this llama asap 😎 | 2023-08-13T11:40:59 | https://www.reddit.com/r/LocalLLaMA/comments/15pwp53/lost_and_llamaless_join_the_search_for_the/ | SakamotoKyu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pwp53 | false | null | t3_15pwp53 | /r/LocalLLaMA/comments/15pwp53/lost_and_llamaless_join_the_search_for_the/ | false | false | default | 0 | null |
is there any model/prompt that can generate r/HFY style stories? | 1 | [removed] | 2023-08-13T10:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/15pvlja/is_there_any_modelprompt_that_can_generate_rhfy/ | happydadinau | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pvlja | false | null | t3_15pvlja | /r/LocalLLaMA/comments/15pvlja/is_there_any_modelprompt_that_can_generate_rhfy/ | false | false | self | 1 | null |
EE_Ion my attempt to an English to Spanish translator | 25 | I am frustrated on how wrong Google Translate or Deepl are in certain situations, mostly when dealing with technical, specialised terms or when dealing with all kind of markup (html, xml, Markdown) or named placeholders (Hello ${name} - it is clear that I do not want name to be translated).
So, I was thinking that I need a way to be able to translate in context. I curated a dataset of 250 000 examples with in context translations and I started to fine tune Llama-1 and 2.
After many tries and failures with 7B models I managed to get promising results on a Llama-2 13B model. It is still a long way to go, but it is good enough for an alpha release.
So, I present to you: **EE\_Ion 13B English to Spanish in context translator.** [https://huggingface.co/iongpt/EE\_Ion\_en\_es-v1\_0\_alpha-fp16/settings](https://huggingface.co/iongpt/EE_Ion_en_es-v1_0_alpha-fp16/settings)
It is able to respond in the first block with the actual translation, but it starts spitting garbage after that. For me it is usable because I am using a script to translate apps so I am just ignoring anything after the the first block.
I also did a 4bit GPTQ quant, but that is performing badly (it is not always responding with the correct translation in the first block), probably my quant script was off.
My plan is to continue fine tuning this one and after it works flawlessly to move to other languages. I am already collecting data for the French and German datasets
I am not sure if anyone else is interested in this so I am posting this here to understand if there is a need for something like this. | 2023-08-13T10:28:44 | https://www.reddit.com/r/LocalLLaMA/comments/15pvfqg/ee_ion_my_attempt_to_an_english_to_spanish/ | Ion_GPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pvfqg | false | null | t3_15pvfqg | /r/LocalLLaMA/comments/15pvfqg/ee_ion_my_attempt_to_an_english_to_spanish/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
what are the different types of training/finetuning we do? | 8 | trying to make sense of the different training and finetuning options that are usually suggested.
- we can train on raw text. this is mostly to complete stubs or starting passages. but i think this is mostly used for pretraining or is this idea also helpful in finetuning?
- we can collect many prompt/response pairs from a GPT4 , keep high quality ones and finetune on those.
Also finetuning requires we do not disturb too many parameters, so we use lora. I've also seen qlora mentioned, is that when we have quantized models?
Are there other ways of finetuning that exist or are applicable in specific cases? | 2023-08-13T09:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/15puei4/what_are_the_different_types_of/ | olaconquistador | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15puei4 | false | null | t3_15puei4 | /r/LocalLLaMA/comments/15puei4/what_are_the_different_types_of/ | false | false | self | 8 | null |
[Critique this idea] distilling from a larger model to a smaller one | 8 | Hello,
I have been wondering about the application of knowledge distillation from a stronger to a smaller model.
The crux of the idea is to collect responses from the stronger model on a lot of prompts. these may be standalone or also be augmented with retrievals.
These prompt+response pairs are then used to perform a round of training/finetuning on a weaker model.
This idea may be applied to, for e.g., distilling responses from 70b to 13b, or from unquantized to quantized versions. Further, probably an approach like qlora will be needed to allow this operation to happen on typical hardware, so it's "strength" or "impact" is to be expected to be similar to finetuning.
An argument against this could be made that the weaker models are already at "capacity" with what they've learnt, but a counter to that is in other ML domains ( at least how i understand it) distilling from a stronger model has been able to take a weaker model beyond its vanilla performance. Another loose argument could be that finetuning on domain specific questions can pull the weaker model to give better performance on the domain at the cost of general performance, which is also what we expect in general finetuning as well.
Would love to get people's opinions on this. any pitfalls you anticipate? has this already been done? | 2023-08-13T09:18:34 | https://www.reddit.com/r/LocalLLaMA/comments/15pu9rd/critique_this_idea_distilling_from_a_larger_model/ | T_hank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pu9rd | false | null | t3_15pu9rd | /r/LocalLLaMA/comments/15pu9rd/critique_this_idea_distilling_from_a_larger_model/ | false | false | self | 8 | null |
How to fine-tune btlm 3b | 1 | [removed] | 2023-08-13T08:11:26 | https://www.reddit.com/r/LocalLLaMA/comments/15pt64k/how_to_finetune_btlm_3b/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pt64k | false | null | t3_15pt64k | /r/LocalLLaMA/comments/15pt64k/how_to_finetune_btlm_3b/ | false | false | self | 1 | null |
Whisper ccp GGML Quantized? | 26 | https://github.com/ggerganov/whisper.cpp
I tried this out when it was first released, and I was looking forward to q4 versions of the "large" model being released.
https://huggingface.co/ggerganov/whisper.cpp/tree/main
Given its a GGML model, I thought I could use quantize.exe, but I'm getting an error when I try to run it on the whisper models.
Whisper v2 models have been released as well, but don't have corresponding GGML models:
https://huggingface.co/openai/whisper-large-v2/tree/main
Are there any repos where I can get 1) whispercpp GGML quantized and 2) whispercpp v2 GGML ? | 2023-08-13T07:47:41 | https://www.reddit.com/r/LocalLLaMA/comments/15psrtg/whisper_ccp_ggml_quantized/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15psrtg | false | null | t3_15psrtg | /r/LocalLLaMA/comments/15psrtg/whisper_ccp_ggml_quantized/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'xakWJimd33OFeE8FWiBtxQS91zTgXEV6RUNxWdzm62Y', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=108&crop=smart&auto=webp&s=8d42dd9100bbdc2edde65dc3abbd30b2aaa8cbdc', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=216&crop=smart&auto=webp&s=f1a2d34385f5584cd8fc7e5cc2fbb992be921503', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=320&crop=smart&auto=webp&s=59a1a96a9da29f68c162532a0ce542d6e629224d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=640&crop=smart&auto=webp&s=8fd95c995b5a899d724fa260b810b9cbc2609c03', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=960&crop=smart&auto=webp&s=8332634a9d923a986621ae45c99b4634017c4eaa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?width=1080&crop=smart&auto=webp&s=90445891ce31b4cef62c63ebaf9629a9480ff806', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/dl2tmLDKJP2aIEd7gsoGGIYV81l6UBGBVebn9C7JS74.jpg?auto=webp&s=2b0dc6b19f7b2d2b10ded3abce89062ede2d04f3', 'width': 1280}, 'variants': {}}]} |
where is the code | 0 | i dont get it, it said that its open source, but where is the source code of it? Is [init.py](https://init.py), [generation.py](https://generation.py), [model.py](https://model.py), [tokenizer.py](https://tokenizer.py) the source code in github? i really dont think it would make a whole llm with just 4 python files | 2023-08-13T07:35:49 | https://www.reddit.com/r/LocalLLaMA/comments/15pskyg/where_is_the_code/ | bull_shit123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pskyg | false | null | t3_15pskyg | /r/LocalLLaMA/comments/15pskyg/where_is_the_code/ | false | false | self | 0 | null |
Anyone went to prod with LLAMA-70B? Without quantization? | 17 | I am aiming at implementing LLAMA-V2 on prod with full precision (the problem is here the quality of the output). Are you aware of any resources what tricks and tweaks should I follow? We will use p4dn instances, and maybe fine tune some features also… | 2023-08-13T07:32:45 | https://www.reddit.com/r/LocalLLaMA/comments/15psj9h/anyone_went_to_prod_with_llama70b_without/ | at_nlp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15psj9h | false | null | t3_15psj9h | /r/LocalLLaMA/comments/15psj9h/anyone_went_to_prod_with_llama70b_without/ | false | false | self | 17 | null |
Making an app for GPT and llama | 1 | [removed] | 2023-08-13T06:51:42 | https://www.reddit.com/r/LocalLLaMA/comments/15pru0g/making_an_app_for_gpt_and_llama/ | Ok-Face3238 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pru0g | false | null | t3_15pru0g | /r/LocalLLaMA/comments/15pru0g/making_an_app_for_gpt_and_llama/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'IJqeWGcgeQRTo0q7zvjRC-E6cTtR03hgdDsQVwQnFEQ', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=108&crop=smart&auto=webp&s=758f56d3751ab0cd454d1f17f976d90ec666a701', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=216&crop=smart&auto=webp&s=d07698a3cfd19f2df4d189ccb354ecbdabb79221', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=320&crop=smart&auto=webp&s=918a2f462d5111a000e890b0e98a2ba75a49f939', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=640&crop=smart&auto=webp&s=b1527f91fe66c024dfdedf8565569b0d52cb2b80', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=960&crop=smart&auto=webp&s=b99652a7432ddecea17383f243fd1c86450dd78e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?width=1080&crop=smart&auto=webp&s=1024375d1bf439afd7bfe9206e4fb15473b05796', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/ks9LCB9XkN4FBZ_RN29YpWBLMijFAnFY1DfXi9pHgbE.jpg?auto=webp&s=4b8497256d0fd51d8b679772d0e4c95648d9feea', 'width': 1200}, 'variants': {}}]} |
LLama2 with GeForce 1080 8Gb | 11 | Hi. I am trying to run LLama2 on my server which has mentioned nvidia card. It's a simple hello world case you can [find here](https://huggingface.co/blog/llama2). However I am constantly running into memory issues:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 7.92 GiB total capacity; 7.12 GiB already allocated; 241.62 MiB free; 7.18 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I tried
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
but same effect. Is there anything I can do? | 2023-08-13T06:28:22 | https://www.reddit.com/r/LocalLLaMA/comments/15prfwe/llama2_with_geforce_1080_8gb/ | vonGlick | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15prfwe | false | null | t3_15prfwe | /r/LocalLLaMA/comments/15prfwe/llama2_with_geforce_1080_8gb/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'urd-gOpHx6DzqXeQqsy2yaeJA0EJHFkUW198WyZ0Q3A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=108&crop=smart&auto=webp&s=3a8143bf595d2a1bee3d138841856378eb2e0030', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=216&crop=smart&auto=webp&s=b2a753604d8f09eca2670fe6aa3e3d68577676b7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=320&crop=smart&auto=webp&s=4d730223a776274cf6188d25e0d0f65f9ac64601', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=640&crop=smart&auto=webp&s=cdcc131f68e029b2b0c16d30dea4d25aac49879f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=960&crop=smart&auto=webp&s=2b0b7a4430320f0e902efa0cc656d9422388c6ba', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?width=1080&crop=smart&auto=webp&s=5510c87c7c86d94614d6999ee5c231cae5686436', 'width': 1080}], 'source': {'height': 1160, 'url': 'https://external-preview.redd.it/APagEKjViS_LhDH7O1czDc3SAIFNKdhuvgRCTbfDjH0.jpg?auto=webp&s=328b1af048abef43ece61400b0e074f168198bf7', 'width': 2320}, 'variants': {}}]} |
Training LLM on Call Center Data | 7 | So, I have a call center and I am looking to test automating those calls. I wanted to know the way to go about it. I have call recordings available. What model would be the best to be used as a call center agent? And what would be the go about it? Any suggestions would be appreciated. | 2023-08-13T05:52:04 | https://www.reddit.com/r/LocalLLaMA/comments/15pqsws/training_llm_on_call_center_data/ | nolovenoshame | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pqsws | false | null | t3_15pqsws | /r/LocalLLaMA/comments/15pqsws/training_llm_on_call_center_data/ | false | false | self | 7 | null |
Multi GPU performance | 9 | I've read all the posts I could find on this but haven't seen many actual numbers. Has anyone run a 33b model with multiple 8-16gb GPUs? If yes, what kind of t/s are you able to get? I'm getting 3-4 t/s at 2k context and am wondering if it's worth adding something like a P100 or 3060. I know a 3090 would be much better but if I can get decent performance without it that would be preferable.
I also have an 8gb 5700xt - I'm assuming I can't use it with the 4070 but lmk if I'm wrong. I thought I saw someone say they got it working on linux but I can't find the post.
I'd appreciate any insight anyone has, even if you don't have specific numbers. Thanks in advance.
13700
4070ti 12gb
32gb DDR5 5600
Windows 10
​
oobabooga
llama.cpp
n-gpu-layers: 36
threads: 9 | 2023-08-13T04:38:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ppgz8/multi_gpu_performance/ | alyssa1055 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ppgz8 | false | null | t3_15ppgz8 | /r/LocalLLaMA/comments/15ppgz8/multi_gpu_performance/ | false | false | self | 9 | null |
5X speed boost on oobabooga when using a set seed. | 2 | I was quite surprised to find that while playing around with the settings I got a 5x speed boost on exllama if i had a set seed. I went from 6.6 tk/s to 35.2 tk/s. not sure why this would have that great of an impact but it did. | 2023-08-13T03:27:26 | https://www.reddit.com/r/LocalLLaMA/comments/15po3ex/5x_speed_boost_on_oobabooga_when_using_a_set_seed/ | BackyardAnarchist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15po3ex | false | null | t3_15po3ex | /r/LocalLLaMA/comments/15po3ex/5x_speed_boost_on_oobabooga_when_using_a_set_seed/ | false | false | self | 2 | null |
LLM trained on fiction / literature? | 6 | Is there an LLM that was trained on literature specifically? To use as an editor / corrector for example? | 2023-08-13T03:04:09 | https://www.reddit.com/r/LocalLLaMA/comments/15pnmny/llm_trained_on_fiction_literature/ | myreptilianbrain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pnmny | false | null | t3_15pnmny | /r/LocalLLaMA/comments/15pnmny/llm_trained_on_fiction_literature/ | false | false | self | 6 | null |
Weird llama.cpp failure | 1 | I've been very happily using llama.cpp for inference with various GGML-formatted models for months, and just yesterday my model-downloader script finished pulling down TheBloke's starcoderplus-GGML, yaay! I was looking forward to playing with it over the weekend.
When I ran the model card's example prompt through it, though, llama.cpp's main failed to load the model, claiming unexpected end of file:
ttk@kirov:/home/ttk/tools/ai$ llama.cpp.git/main -m models.local/starcoderplus.ggmlv3.q4_1.bin -n 300 -p "<fim_prefix>def print_hello_world():\n <fim_suffix>\n print('Hello world'3:30 tomorrow')<fim_middle>"
main: build = 978 (f64d44a)
main: seed = 1691890406
llama.cpp: loading model from models.local/starcoderplus.ggmlv3.q4_1.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model 'models.local/starcoderplus.ggmlv3.q4_1.bin'
main: error: unable to load model
I "git pull"'d to update my copy of llama.cpp, recompiled main, and there was no change. I can still load and use other 4-bit quantized GGML models, like guanaco-7b.
I checked the sha256 checksum of the model file, and it matches the checksum in the huggingface repo's LFS pointer, so it's a faithful copy of what's on huggingface:
ttk@kirov:/raid/models$ shasum -a 256 starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin
78c612e4ebd7a49de32b085dc7b05afca88c132f63a7231e037dbcc175bd9b3e starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin
ttk@kirov:/raid/models$ grep sha256 starcoderplus-GGML.git/starcoderplus.ggmlv3.q4_1.bin.orig
oid
sha256:78c612e4ebd7a49de32b085dc7b05afca88c132f63a7231e037dbcc175bd9b3e
Here's a full transcript of updating llama.cpp, rebuilding main, and re-running the prompt, also "uname -a" output:
http://ciar.org/h/llamacpp_fail.txt
Has anyone else been using this model successfully?
u/The-Bloke does this make any sense to you?
That machine has 32GB of RAM, which should be plenty for CPU inference. | 2023-08-13T02:38:32 | https://www.reddit.com/r/LocalLLaMA/comments/15pn3mx/weird_llamacpp_failure/ | ttkciar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pn3mx | false | null | t3_15pn3mx | /r/LocalLLaMA/comments/15pn3mx/weird_llamacpp_failure/ | false | false | self | 1 | null |
Core Dumped error when loading model KoboldCPP | 1 | [https://github.com/YellowRoseCx/koboldcpp-rocm](https://github.com/YellowRoseCx/koboldcpp-rocm)I
dentified as LLAMA model: (ver 5)
Attempting to Load...
\---
Using automatic RoPE scaling (scale:1.000, base:10000.0)
System Info: AVX = 1 | AVX2 = 1 | AVX512 = 1 | AVX512\_VBMI = 1 | AVX512\_VNNI = 1 | FMA = 1 | NEON = 0 | ARM\_FMA = 0 | F16C = 1 | FP16\_VA = 0 | WASM\_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
llama.cpp: loading model from /home/??????/Downloads/chronos-hermes-13b-v2.ggmlv3.q4\_0.bin
llama\_model\_load\_internal: format = ggjt v3 (latest)
llama\_model\_load\_internal: n\_vocab = 32032
llama\_model\_load\_internal: n\_ctx = 512
llama\_model\_load\_internal: n\_embd = 5120
llama\_model\_load\_internal: n\_mult = 6912
llama\_model\_load\_internal: n\_head = 40
llama\_model\_load\_internal: n\_head\_kv = 40
llama\_model\_load\_internal: n\_layer = 40
llama\_model\_load\_internal: n\_rot = 128
llama\_model\_load\_internal: n\_gqa = 1
llama\_model\_load\_internal: rnorm\_eps = 5.0e-06
llama\_model\_load\_internal: n\_ff = 13824
llama\_model\_load\_internal: freq\_base = 10000.0
llama\_model\_load\_internal: freq\_scale = 1
llama\_model\_load\_internal: ftype = 2 (mostly Q4\_0)
llama\_model\_load\_internal: model size = 13B
llama\_model\_load\_internal: ggml ctx size = 0.11 MB
ggml\_init\_cublas: found 2 CUDA devices:
Device 0: AMD Radeon RX 6900 XT, compute capability 10.3
Device 1: AMD Radeon Graphics, compute capability 10.3
llama\_model\_load\_internal: using CUDA for GPU acceleration
ggml\_cuda\_set\_main\_device: using device 0 (AMD Radeon RX 6900 XT) as main device
llama\_model\_load\_internal: mem required = 594.09 MB (+ 400.00 MB per state)
llama\_model\_load\_internal: allocating batch\_size x (640 kB + n\_ctx x 160 B) = 360 MB VRAM for the scratch buffer
llama\_model\_load\_internal: offloading 40 repeating layers to GPU
llama\_model\_load\_internal: offloading non-repeating layers to GPU
llama\_model\_load\_internal: offloading v cache to GPU
llama\_model\_load\_internal: offloading k cache to GPU
llama\_model\_load\_internal: offloaded 43/43 layers to GPU
llama\_model\_load\_internal: total VRAM used: 7656 MB
llama\_new\_context\_with\_model: kv self size = 400.00 MB
Segmentation fault (core dumped)
​ | 2023-08-13T01:42:10 | https://www.reddit.com/r/LocalLLaMA/comments/15plxzx/core_dumped_error_when_loading_model_koboldcpp/ | meutron | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15plxzx | false | null | t3_15plxzx | /r/LocalLLaMA/comments/15plxzx/core_dumped_error_when_loading_model_koboldcpp/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'P1smCGPSCZsuOp4Te2lbGteOQjrcfy6j7e0DMacnxKM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=108&crop=smart&auto=webp&s=52647eb7a82946dce5c2d509054ecbb7810f6f41', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=216&crop=smart&auto=webp&s=0e72e3d00429f8008d201dae83463b5705d30124', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=320&crop=smart&auto=webp&s=71132008d17895a8669da7174fed7ef454375a2c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=640&crop=smart&auto=webp&s=e2fc632147f5039fb503e911d0c3b67035b2352d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=960&crop=smart&auto=webp&s=996682acaa0c315cd34ecc5fb89a8827eab4b7cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?width=1080&crop=smart&auto=webp&s=a22baaca9f63519f77be4ea5d6adfa4540e5a101', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ERCN-X7o2afa0tRgW-JG4YIt_pVv54p7aikWScYJaSA.jpg?auto=webp&s=58ac0ff38047da84b93a0bcb7d27310dd8739429', 'width': 1200}, 'variants': {}}]} |
Improving the speed of a GGML model running on GPU | 5 | I am using Vicuna 1.5 13b quantized to 8 bits in llama.cpp. All layers have been offloaded to GPU. I had tried earlier with the 5 bit quantized model but its performance was lacking so using the 8 bit one now. To get longer answers, I also increased the max_tokens to 1000 from 250.
Have noticed significant slowdowns when increasing the max_tokens. Is it due to the autoregressive nature of the generation, where as its output becomes larger, it has to consume a larger amount of text to produce the next token?
I've tried model_n_batch=1024 to see if larger number of parallel tokens helps improve speed. I am seeing a plateau here where the same value that worked for the 5 bit model continues to work well here, with higher values not being helpful.
Any other settings that might be helpful here? | 2023-08-13T00:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/15pktxn/improving_the_speed_of_a_ggml_model_running_on_gpu/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pktxn | false | null | t3_15pktxn | /r/LocalLLaMA/comments/15pktxn/improving_the_speed_of_a_ggml_model_running_on_gpu/ | false | false | self | 5 | null |
🎨🦙I Finetuned LLAMA2 on SD Prompts | 1 | 2023-08-13T00:19:45 | https://youtu.be/dg_8cGzzfY4 | ImpactFrames-YT | youtu.be | 1970-01-01T00:00:00 | 0 | {} | 15pk70f | false | {'oembed': {'author_name': 'ImpactFrames', 'author_url': 'https://www.youtube.com/@impactframes', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/dg_8cGzzfY4?feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="🎨👩🏻\u200d🎨LLM for SD prompts IF_PromptMKR_GPTQ 🦙🦙"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/dg_8cGzzfY4/hqdefault.jpg', 'thumbnail_width': 480, 'title': '🎨👩🏻\u200d🎨LLM for SD prompts IF_PromptMKR_GPTQ 🦙🦙', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_15pk70f | /r/LocalLLaMA/comments/15pk70f/i_finetuned_llama2_on_sd_prompts/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'SywhsScbWVzef9Co4jGVFSa8xdCQa_H3Msft-PCJa7U', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=108&crop=smart&auto=webp&s=940d2de0c93e4cb904320a27e6c83cdb7b9bba6e', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=216&crop=smart&auto=webp&s=7005cebecc21f2bde630792644df96c9a0570387', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?width=320&crop=smart&auto=webp&s=2daf4d6b308174b862835f9845936239b60ddde2', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/fPSOIbLjC6K4ofTIWKe38yt5mBIcjAwaenmxgeXwmxI.jpg?auto=webp&s=25bf8b2cc21013da8fc57b4a66972af1de2515cc', 'width': 480}, 'variants': {}}]} | ||
EverythingLM-13b-16k: New uncensored model trained on experimental new dataset | 72 | [https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-16k)
Trained on a LIMA style dataset of only 1k samples. The dataset combines principles from WizardLM (evol-instruct), and Orca (systems prompts & CoT). From my testing the model performs well, however treat this as a preview model. I have a lot of future plans for better models.
GPTQ's & GGML's are available thanks to TheBloke, links are on the HF page. The ggml's are buggy and is an issue I am working on, so use GPTQ's if you can for now. | 2023-08-13T00:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/15pk0ia/everythinglm13b16k_new_uncensored_model_trained/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pk0ia | false | null | t3_15pk0ia | /r/LocalLLaMA/comments/15pk0ia/everythinglm13b16k_new_uncensored_model_trained/ | false | false | self | 72 | {'enabled': False, 'images': [{'id': 'AZvlMlPQKij9jyNTa1Fec2KKfNfs6cOECEgEvphnk_Y', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=108&crop=smart&auto=webp&s=e58720b4e47f2e35477d17c5adc1942ac5689792', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=216&crop=smart&auto=webp&s=b0f7e87fbfeb088221eaa0cac0e6f6d0b277e5c5', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=320&crop=smart&auto=webp&s=0d148d927c6bd6ca47e16f0598e50543d70cddaf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=640&crop=smart&auto=webp&s=fcedf0cedfbf36cd68cf1834484c37ccf69d7067', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=960&crop=smart&auto=webp&s=33e6962e4bada6896d2d8b5f07b7de8d968bb76c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?width=1080&crop=smart&auto=webp&s=dbd8c63454f58dd69485f6240a886839885f6c30', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Lou7aTiqZBNYBeGTMfRLKyiMBSV-O7CJfqv2dwQ2efQ.jpg?auto=webp&s=2911b4d72de75af0c0da931754af9ffd81ea9c2a', 'width': 1200}, 'variants': {}}]} |
Running Llama Faster | 5 | I am currently trying to run llama-7b-chat but the 8bit version on GPU and it is taking about 20 seconds to load a response each time is there anyway to make this faster? | 2023-08-13T00:03:27 | https://www.reddit.com/r/LocalLLaMA/comments/15pjtnd/running_llama_faster/ | Grand-Garage-6479 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pjtnd | false | null | t3_15pjtnd | /r/LocalLLaMA/comments/15pjtnd/running_llama_faster/ | false | false | self | 5 | null |
Vicuna on AMD APU via Vulkan & MLC | 28 | After much trial and error I got this working so thought I'd jot down some notes. Both for myself & perhaps it helps others (esp since AMD APU LLMs is not something I've seen on here).
On a 4700U (AMD Radeon RX Vega 7) so we're talking APU on a low TDP processor...and passively cooled in my case. Unsurprisingly it's not winning the speed race:
>Statistics: prefill: 7.5 tok/s, decode: 2.2 tok/s
...but this is a headless server so the GPU part of APU is literally idle 24/7. Free performance haha.
----------
**Includes some really ugly hacks because I have no idea what I'm doing :p You've been warned.**
Also, this is on proxmox. If you're on vanila debian/ubuntu chances are you'll need less hacky stuff. Hope I got everything...pulled this out of cli history that had lots of noise from trial & error.
----------
Check that we've got the APU listed:
apt install lshw -y
lshw -c video
OpenCL install:
apt install ocl-icd-libopencl1 mesa-opencl-icd clinfo -y
clinfo
Mesa drivers:
apt install libvulkan1 mesa-vulkan-drivers vulkan-tools
Vulkan SDK. It seems to require specifically the SDK. Just Vulkan didn't work for me. pytorch couldn't pick it up.
apt update
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
wget -qO - http://packages.lunarg.com/lunarg-signing-key-pub.asc | apt-key add -
wget -qO /etc/apt/sources.list.d/lunarg-vulkan-focal.list http://packages.lunarg.com/vulkan/lunarg-vulkan-focal.list
apt update
apt upgrade -y
apt install vulkan-sdk
If you're lucky that'll just worked. For me I get that did not work. I was missing libjsoncpp1_1.7.4 which I just installed as a deb. qt5-default metapackage I could get installed at all (likely due to proxmox) due to vulkancapsviewer module refusing to install. I won't need that so just installed everything in the meta package except that:
echo "vulkancapsviewer" >> dont-want.txt
apt-cache depends vulkan-sdk | awk '$1 == "Depends:" {print $2}' | grep -vFf dont-want.txt
apt install vulkan-headers libvulkan-dev vulkan-validationlayers vulkan-validationlayers-dev vulkan-tools lunarg-via lunarg-vkconfig lunarg-vulkan-layers spirv-headers spirv-tools spirv-cross spirv-cross-dev glslang-tools glslang-dev shaderc lunarg-gfxreconstruct dxc spirv-reflect vulkan-extensionlayer vulkan-profiles volk vma
Check if it worked:
vulkaninfo
To get pytorch to pick up vulkan we need to recompile it with vulkan.
git clone https://github.com/pytorch/pytorch.git
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python3 setup.py install
The github version didn't compile for me. So had to edit the code. Specifically:
/root/pytorch/aten/src/ATen/native/vulkan/impl/Arithmetic.cpp
Around line 10 the case statement needed a default case:
default:
// Handle any other unspecified cases
throw std::invalid_argument("Invalid OpType provided");
After that the above compile line worked. This one:
USE_VULKAN=1 USE_VULKAN_SHADERC_RUNTIME=1 USE_VULKAN_WRAPPER=0 python3 setup.py install
That means vulkan-tools shows, but wasn't enough to get pytorch pick up vulkan.
import torch
print(torch.is_vulkan_available())
If everything worked then you'll get a TRUE.
You'll likely also need change the amount of memory allocated to GPU in your bios. In my case that was called UMA Frame buffer. Mine seems to be limited to 8GB much to my dismay (was hoping 16gb).
You can check that it worked via:
clinfo | grep Global
Alternative check htop...the total memory shown will have reduced.
Next I installed MLC-AI [here](https://mlc.ai/package/). Installed the CPU package.
Next tried their MLC [chat app](https://mlc.ai/mlc-llm/docs/get_started/try_out.html). The default llama2 model was using vulkan but generating gibberish (?!?). Switched to mlc-chat-vicuna-v1-7b-q3f16_0 instead and now it works. :)
System automatically detected device: vulkan
Using model folder: /root/dist/prebuilt/mlc-chat-vicuna-v1-7b-q3f16_0
Using mlc chat config: /root/dist/prebuilt/mlc-chat-vicuna-v1-7b-q3f16_0/mlc-chat-config.json
Using library model: /root/dist/prebuilt/lib/vicuna-v1-7b-q3f16_0-vulkan.so | 2023-08-12T23:15:30 | https://www.reddit.com/r/LocalLLaMA/comments/15pipso/vicuna_on_amd_apu_via_vulkan_mlc/ | AnomalyNexus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pipso | false | null | t3_15pipso | /r/LocalLLaMA/comments/15pipso/vicuna_on_amd_apu_via_vulkan_mlc/ | false | false | self | 28 | null |
Does local llama2 remember all the conversations and make it my customised assistant? | 9 | I'm thinking to setup Llama2 in my local machine and make all the personal related conversations in one chat session. Will Llama2 remember all the history conversations and response based on it? Not sure if it any limitations on how long and how many the conversations history will keep. | 2023-08-12T20:15:50 | https://www.reddit.com/r/LocalLLaMA/comments/15peebg/does_local_llama2_remember_all_the_conversations/ | newfire1112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15peebg | false | null | t3_15peebg | /r/LocalLLaMA/comments/15peebg/does_local_llama2_remember_all_the_conversations/ | false | false | self | 9 | null |
what does a loss of 1e+9 mean? | 6 | I'm trying to finetune llama2-7B and my loss [appears to be out of control](https://i.imgur.com/0N8Momf.png).
but. what does that actually mean? Pausing the training to test some output, the LLM seems coherent and picked up some of the style of my training data. Nothing seems "broken" other than this number. | 2023-08-12T19:28:09 | https://www.reddit.com/r/LocalLLaMA/comments/15pd8sx/what_does_a_loss_of_1e9_mean/ | scibot9000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pd8sx | false | null | t3_15pd8sx | /r/LocalLLaMA/comments/15pd8sx/what_does_a_loss_of_1e9_mean/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'GfNyyU8vCykXPHu-Ru2Rd0wbbiID_z4JTvgy_P-lN7A', 'resolutions': [{'height': 78, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=108&crop=smart&auto=webp&s=0fb6fe69febc016423d368e20d64547c2792a47c', 'width': 108}, {'height': 156, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=216&crop=smart&auto=webp&s=a30f80c0631bd9c111bf9a64341b1b2473c1f885', 'width': 216}, {'height': 232, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?width=320&crop=smart&auto=webp&s=4acffa0e0bbfd4b7ef06da4947fcc0c3e6f2056e', 'width': 320}], 'source': {'height': 247, 'url': 'https://external-preview.redd.it/wgsyrWfRsstkwqmJ1M_CGDIGLvohvbOG5cMOp21xt4M.png?auto=webp&s=e7c67769be4cf7c6341beccb0af19878662f73c9', 'width': 340}, 'variants': {}}]} |
Current best codebase for pretraining a model from scratch? | 4 | Hello, does anyone know if there is a codebase that supports pretraining with FlashAttention2, grouped query attention, and rotary embeddings? | 2023-08-12T19:02:17 | https://www.reddit.com/r/LocalLLaMA/comments/15pcm26/current_best_codebase_for_pretraining_a_model/ | ZealousidealBlock330 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pcm26 | false | null | t3_15pcm26 | /r/LocalLLaMA/comments/15pcm26/current_best_codebase_for_pretraining_a_model/ | false | false | self | 4 | null |
Adding LLaMa2.c support for Web with GGML.JS | 16 | Hey guys!
ggml.js is a JavaScript framework that lets you to power web application with Language Models (or LLM). The model runs on browser using WebAssembly and currently is supports GGML models with addition to....
In my latest release of **ggml.js,** I've added support for Karapathy's [llama2.c](https://github.com/karpathy/llama2.c) model.
You can head over to the demo to try out the llama2.c tinystories example.
LLaMa 2 Demo: [https://rahuldshetty.github.io/ggml.js-examples/llama2\_tinystories.html](https://rahuldshetty.github.io/ggml.js-examples/llama2_tinystories.html)
Documentation: [https://rahuldshetty.github.io/ggml.js](https://rahuldshetty.github.io/ggml.js)
​ | 2023-08-12T18:39:55 | https://www.reddit.com/r/LocalLLaMA/comments/15pc2d3/adding_llama2c_support_for_web_with_ggmljs/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pc2d3 | false | null | t3_15pc2d3 | /r/LocalLLaMA/comments/15pc2d3/adding_llama2c_support_for_web_with_ggmljs/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'fuysvROS0w0fAkvWAFuBmJ507qgm68vfA5btZZybPNs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=108&crop=smart&auto=webp&s=f4de47905326b71d5b4b0299156cd8429590f373', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=216&crop=smart&auto=webp&s=e6f6c866c0cfbfed175cee14fdc88d1a02e2e1c9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=320&crop=smart&auto=webp&s=6d8044d7c02ecb0e1568350f64a8a6d3f202c406', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=640&crop=smart&auto=webp&s=de7fafe23a18cea71d9d219f3f9938caeac6b346', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=960&crop=smart&auto=webp&s=aa822fa312021e92d38ec49fcc9ffa9e71653768', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?width=1080&crop=smart&auto=webp&s=09f5be1febd7ff5911ae5c49113a09bcbcc24193', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/WfgawjqoQqEbzX0z_FYPx2fmJzXwwlH872oISV17XgE.jpg?auto=webp&s=3b73c6953b00ba46f7d35881aa0ceec5d9d71c25', 'width': 1200}, 'variants': {}}]} |
Welp. Since they didn't recommend CoT with simple math questions... Temperature 0. | 42 | 2023-08-12T18:39:39 | bot-333 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15pc25a | false | null | t3_15pc25a | /r/LocalLLaMA/comments/15pc25a/welp_since_they_didnt_recommend_cot_with_simple/ | false | false | 42 | {'enabled': True, 'images': [{'id': 'k38h6gvybYGIWWXIL4SDjxqZlHNC_XNf5aJ5H0DQhnY', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=108&crop=smart&auto=webp&s=20015fe1881bc46a5358e02e47b08e455fb3e005', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=216&crop=smart&auto=webp&s=09d69491adf272012917cc3d8117af1ffa8f41eb', 'width': 216}, {'height': 165, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=320&crop=smart&auto=webp&s=4bde54536c9368c9e61cbdfd8eeb515288b9d107', 'width': 320}, {'height': 330, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=640&crop=smart&auto=webp&s=695812dde1a7363efd1de36685367a7a20c792fb', 'width': 640}, {'height': 495, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=960&crop=smart&auto=webp&s=0c35751d488ae8f411f8500e4b6268cef40b72fa', 'width': 960}, {'height': 557, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?width=1080&crop=smart&auto=webp&s=46d690b654c409b4b2601f5a659ad4a575af67f8', 'width': 1080}], 'source': {'height': 1360, 'url': 'https://preview.redd.it/ul48vqow4qhb1.png?auto=webp&s=122debe661a7d30dbc1236f1e33f36eff63505c7', 'width': 2634}, 'variants': {}}]} | |||
LlongOrca-7b-16k is here! and some light spoilers! :D | 102 | Today we are releasing LlongOrca-7B-16k!
​
This 7B model is our first long context release, able to handle 16,000 tokens at once!
We've done this while achieving >99% the performance of the best 7B models available today (which are all limited to 4k tokens).
​
[https://huggingface.co/Open-Orca/LlongOrca-7B-16k](https://huggingface.co/Open-Orca/LlongOrca-7B-16k)
​
This release is trained on a curated filtered subset of most of our GPT-4 augmented data. It is the same subset of our data as was used in our OpenOrcaxOpenChat-Preview2-13B model.
​
This release reveals that stacking our training on an existing long context fine-tuned model yields significant improvements to model performance. We measured this with BigBench-Hard and AGIEval results, finding \~134% of the base Llongma2-16k model's performance on average. As well, we've found that it may be the first 7B model to score over 60% on SAT English evaluation, more than a 2X improvement over base Llama2-7B!
​
We did this training as part of testing integration of OpenChat's MultiPack algorithm into the Axolotl trainer. MultiPack achieves 99.85% bin-packing efficiency on our dataset. This has significantly reduced training time, with efficiency improvement of 3-10X over traditional methods.
​
We have this running unquantized on fast GPUs for you to play with now in your browser:
[https://huggingface.co/spaces/Open-Orca/LlongOrca-7B-16k](https://huggingface.co/spaces/Open-Orca/LlongOrca-7B-16k)
(the preview card below is erroneously showing the name of our Preview2 release, but rest assured the link is to the LlongOrca-7B-16k space)
​
Many thanks to Enrico Shippole, emozilla, and kaiokendev1 for the fine work on creating the LlongMA-2-7b-16k model this was trained on top of!
​
We are proud to be pushing the envelope of what small models that can run easily on modest hardware can achieve!
​
Stay tuned for another big announcement from our Platypus-wielding friends Ariel Lee, ColeJHunter, Natanielruizg very soon too!
follow along at our development server, and pitch in if you want to learn more about our many other projects (seriously some of them are wild) all the links can be found at AlignmentLab.ai | 2023-08-12T18:15:48 | https://www.reddit.com/r/LocalLLaMA/comments/15pbhcx/llongorca7b16k_is_here_and_some_light_spoilers_d/ | Alignment-Lab-AI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pbhcx | false | null | t3_15pbhcx | /r/LocalLLaMA/comments/15pbhcx/llongorca7b16k_is_here_and_some_light_spoilers_d/ | false | false | spoiler | 102 | {'enabled': False, 'images': [{'id': 'PWRlymRVhoVc55SaWi7XBaBFOCAk_F49maYO8ReHxgI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=108&crop=smart&auto=webp&s=42732f8fb985f6329580bdd8134286909b29cd19', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=216&crop=smart&auto=webp&s=07ad9899bd66cdfd4d56977e5c5745614225a84d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=320&crop=smart&auto=webp&s=0f529a1cf996943a5f2c29a0872f6794221f57ac', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=640&crop=smart&auto=webp&s=80190b09a0118c4fb2485dc7b971d549cb0a848c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=960&crop=smart&auto=webp&s=071dcc3b70a192b2c578fda6f607369cf89969b3', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=1080&crop=smart&auto=webp&s=1abd1be3f076a6d5e4dd850c82386a115e1c9abe', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?auto=webp&s=66ef275763679aa6ec227c3073c7457deea2601c', 'width': 1200}, 'variants': {'obfuscated': {'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=b84124f80ba5c86e0326fd2adaae1abac73724f0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=33dc7fc50fbf3151e26d7768e29d71e934214d31', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=e87c3eb9a39e5bcf7a049da1dbbff0e3fc43a033', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=aaded894198ce3d29a24eb010ae8a0e5a480119e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=56b877cfeb43d8212c6fd540c074af071acfe08b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=9a718f565c796190b35c865ca8180395894101d2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Aya8zwiQM2diaj_795gT-JC_sIEmOyT-YVokDCcK_Ik.jpg?blur=40&format=pjpg&auto=webp&s=c49a7ac9e33140791419c619edb4bfb4315a23b0', 'width': 1200}}}}]} |
I think I'm ready to call llama2 almost unusable because of the repitition thing | 89 | Anyone else? It's like a carrot on a stick, because obviously there are many aspects where it shows that it's much better than llama1. But then the repetition destroys pretty much any use case.
And this is not just about getting stuck in a loop, repeating messages in a conversation. You ask it to do one thing, and it makes mistakes, and you can't tell it to fix them. It will just be stuck because apparently the previous response is SO sticky. Like, it formatted something with the wrong brackets. It is, imho, impossible for deterministic llama2 models to correct that after being told.
Same goes for describing some syntax that the model can use. You know how powerful examples are. But you can't use them. It will be completely stuck repeating the examples no matter how much time you spend explaining that it must not use the example input and must think of original input. I have even tried formulating all these explanations without negations, but it still does not work at all.
In case you have less of these problems, it is probably due to temperature. But the temp-0 response is just the actual, real quality the model produces, you can't really fix anything with randomness around a wrong target. It will just never become a solution that does not "sometimes" require regeneration, at least.
Idk. I don't even feel like I have to say what finetune I tried most of this stuff with. It's just llama-2, even if some are better at getting around that.
Oh and as bonus observation: I changed from q4_1 to q5_1 and I think the impact of quantization is largely talked down or overlooked. To some extent, it was almost like talking to a different model. I think there's a lot going on, even if those perplexity scores don't move that much. I once suggested that maybe a more efficient model means quantization is more harmful? Just thought it was a good time to repeat that.
Anyway, with the better version of the model, I had *more* problems with the repetition. Seems to have eliminated some errorous, temperature-like fuzzing that the quantization causes, so I ran into more such problems, just like when you reduce the temperature.
Kay, thanks for listening to my rant. I tried to make it somewhat constructive. If anyone does a bit more complex things with llama2 and has some tips&tricks to share to combat all that, it would be very much appreciated.
For completeness, my latest observations are from airoboros 2.0 13B with ggml K quants. Tried up to q_6_K. But I really don't blame airoboros since I have still gotten the best results with that model so far (also trying m2.0). It is smart enough to maybe get away without examples, but that doesn't really fix it until after the first usage either. | 2023-08-12T17:21:10 | https://www.reddit.com/r/LocalLLaMA/comments/15pa5zd/i_think_im_ready_to_call_llama2_almost_unusable/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15pa5zd | false | null | t3_15pa5zd | /r/LocalLLaMA/comments/15pa5zd/i_think_im_ready_to_call_llama2_almost_unusable/ | false | false | self | 89 | null |
For researchers, and model trainers | 1 | [removed] | 2023-08-12T17:11:41 | https://www.reddit.com/r/LocalLLaMA/comments/15p9xyj/for_researchers_and_model_trainers/ | JaysonGent | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p9xyj | false | null | t3_15p9xyj | /r/LocalLLaMA/comments/15p9xyj/for_researchers_and_model_trainers/ | false | false | self | 1 | null |
Clarify the issues of WizardMath, and share official online demos. | 25 |
Thanks for your attention of WizardMath!
We share you two online demos of WizardMath **7B** V1.0 model.
7B D-1: **http://777957f.r10.cpolar.top**
7B D-2: **http://2be2671b.r10.cpolar.top**
🚫For the **simple** math questions (such as **1+1=?**), we do **NOT** recommend to use the CoT prompt.
We will update more demos of **70B and 13B** tomorrow and please refer to (https://github.com/nlpxucan/WizardLM/tree/main/WizardMath) for ***the latest URLs***.
We welcome everyone to use your professional and difficult instructions to evaluate WizardMath, and show us examples of poor performance and your suggestions.
❗❗❗ ***Note****: Please use the* ***same systems prompts strictly*** *with us, and we do not guarantee the accuracy of the* ***quantified versions****.*
​
For WizardMath, the prompts should be as following:
***Default version:***
*"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n\\n### Instruction:\\n{instruction}\\n\\n### Response:"*
***CoT Version:***(❗For the \*\*simple\*\* math questions, we do NOT recommend to use the CoT prompt.)
*"Below is an instruction that describes a task. Write a response that appropriately completes the request.\\n\\n### Instruction:\\n{instruction}\\n\\n### Response: Let's think step by step."*
​
https://preview.redd.it/fz8txy8okphb1.png?width=1994&format=png&auto=webp&s=e987ccc2510e62684accf25419bfae38175b56a6
*Processing img t17ylw8okphb1...*
https://preview.redd.it/b93nyx8okphb1.png?width=1920&format=png&auto=webp&s=568a3d98f4a1d054e1f6f371fdb40d739d63a519 | 2023-08-12T16:50:55 | https://www.reddit.com/r/LocalLLaMA/comments/15p9gfl/clarify_the_issues_of_wizardmath_and_share/ | ApprehensiveLunch453 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p9gfl | false | null | t3_15p9gfl | /r/LocalLLaMA/comments/15p9gfl/clarify_the_issues_of_wizardmath_and_share/ | false | false | 25 | null | |
what is the best prompt on making realistic person because it not going the way I want LMAO | 1 | [removed] | 2023-08-12T16:10:08 | Small_Platypus4165 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15p8hgh | false | null | t3_15p8hgh | /r/LocalLLaMA/comments/15p8hgh/what_is_the_best_prompt_on_making_realistic/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'YVww1Bm93nA5gFqkOYOnzxr8-W3rH3I6IneySZkRuRc', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=108&crop=smart&auto=webp&s=636d786e9c17e60c277e7e4674dad6fe7619756a', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=216&crop=smart&auto=webp&s=3efacaab302871e6fc09434280d9a2e53bdb8cdb', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=320&crop=smart&auto=webp&s=1e71ce5fea746fb9c1640cbf2290b9b0044be266', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=640&crop=smart&auto=webp&s=5e2a872fdf7005a5412f4801e39707cdb856f048', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=960&crop=smart&auto=webp&s=951300887ef28ee2771977833caa020ea226ce8a', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/344f50gaephb1.png?width=1080&crop=smart&auto=webp&s=44d200880940e777fc41ad45a5b44691ff5d1541', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://preview.redd.it/344f50gaephb1.png?auto=webp&s=a1cea85d922a2f554143afe1be20de81cf700153', 'width': 1920}, 'variants': {}}]} | ||
Expected inference speed? | 3 | What does token/seconds speeds actually translate into when doing inference? Let's say I'd like to use it to write summaries of documents, going from say 3000 tokes to a 2-300 token summary.
Is the math as simple as (tokens in + tokens out) / token speed?
For the given example with a 10t/s system, will I spend 300 seconds waiting, and then 20-30 seconds looking at text streaming back?
Or is it just (load-input-tokens-time-if-so-how-much?) + 20-30 seconds inference time?
I've only used chat GPT, but would like to invest in some hardware for local experimentation. But unsure of what to expect. | 2023-08-12T16:03:01 | https://www.reddit.com/r/LocalLLaMA/comments/15p8baa/expected_inference_speed/ | gradientdancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p8baa | false | null | t3_15p8baa | /r/LocalLLaMA/comments/15p8baa/expected_inference_speed/ | false | false | self | 3 | null |
What's the best (and cheap) way to try out all the new LLMs on cloud services. | 23 | I want to try out the LLMs but do not have proper infrastructure. So i thought to use AWS or Azure or some other cloud service. What are the CPU , GPU and RAM requirements I need to run any of the 70B LLMs? | 2023-08-12T15:39:04 | https://www.reddit.com/r/LocalLLaMA/comments/15p7qrh/whats_the_best_and_cheap_way_to_try_out_all_the/ | timedacorn369 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p7qrh | false | null | t3_15p7qrh | /r/LocalLLaMA/comments/15p7qrh/whats_the_best_and_cheap_way_to_try_out_all_the/ | false | false | self | 23 | null |
Is there some place where I can use the uncensored version online rather than locally? | 1 | [removed] | 2023-08-12T14:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/15p68v1/is_there_some_place_where_i_can_use_the/ | MasterDisillusioned | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p68v1 | false | null | t3_15p68v1 | /r/LocalLLaMA/comments/15p68v1/is_there_some_place_where_i_can_use_the/ | false | false | self | 1 | null |
Tried to deploy vicuna on sage-maker (aws) but got some errors | 2 | ​
[\(the gpu\)](https://preview.redd.it/a984ai0ovohb1.png?width=1366&format=png&auto=webp&s=0a092af062b73c8a5a532dca3277abde3398610b)
​
​
UnexpectedStatusException: Error hosting endpoint huggingface-pytorch-tgi-inference-2023-08-12-14-06-11-491: Failed. Reason: The primary container for production variant AllTraffic did not pass the ping health check. Please check CloudWatch logs for this endpoint..
​
​
Any help is welcomed! | 2023-08-12T14:27:28 | https://www.reddit.com/r/LocalLLaMA/comments/15p60nk/tried_to_deploy_vicuna_on_sagemaker_aws_but_got/ | Expensive_Breakfast6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p60nk | false | null | t3_15p60nk | /r/LocalLLaMA/comments/15p60nk/tried_to_deploy_vicuna_on_sagemaker_aws_but_got/ | false | false | 2 | {'enabled': False, 'images': [{'id': 'kvOxhBrkQsKDZDFDpUmXfe7SlhsRzIUjJ-pIjzJq6lw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=108&crop=smart&auto=webp&s=e2a18922dbc730b6fdcf2fa2806081ee67323147', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=216&crop=smart&auto=webp&s=47833fabdb1b07432ec38465c2868cfcc0ff8eec', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=320&crop=smart&auto=webp&s=7786d030771a94f116ff44d423690b0bac3f1a9f', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=640&crop=smart&auto=webp&s=ff70e12afb698dbc4860d2bd8cbb12fb4f132456', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=960&crop=smart&auto=webp&s=f2a00fd59d26a3aa6a80ea3f57de6fead76cc76b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?width=1080&crop=smart&auto=webp&s=4d3a603acb8f5a1a1eeeebc4744f58d9f7203ede', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/eizCkhAyEN6JvHyiRHnz3ZmSkHFzVKbxGhsnXE-5uOs.jpg?auto=webp&s=c78228e082b96ec2c3ddfef949be0a0337917f84', 'width': 1200}, 'variants': {}}]} | |
When starting LoRA training, first steps already showing very low losses, is that right? | 14 | Hello everyone.
I'm tryng to use [axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) for my LoRA training instead of oobabooga. I've prepared my 30MB dataset as completion raw corpus format in JSONL. I'm using **meta-llama/Llama-2-13b-hf** for my LoRA training.
Here some \`yml\` configs:
load_in_8bit: true
load_in_4bit: false
strict: false
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
bf16: true
fp16: false
tf32: false
I'm using **RTX A6000** for my LoRA training. And at first training steps its started to output strange results like this:
{'loss': 1.6568, 'learning_rate': 2e-05, 'epoch': 0.01}
{'loss': 1.6157, 'learning_rate': 4e-05, 'epoch': 0.02}
{'loss': 1.6146, 'learning_rate': 6e-05, 'epoch': 0.03}
{'loss': 1.6502, 'learning_rate': 8e-05, 'epoch': 0.04}
{'loss': 1.8111, 'learning_rate': 0.0001, 'epoch': 0.04}
{'loss': 1.8191, 'learning_rate': 0.00012, 'epoch': 0.05}
{'loss': 1.688, 'learning_rate': 0.00014, 'epoch': 0.06}
{'loss': 1.503, 'learning_rate': 0.00016, 'epoch': 0.07}
{'loss': 1.8784, 'learning_rate': 0.00018, 'epoch': 0.08}
{'loss': 1.5776, 'learning_rate': 0.0002, 'epoch': 0.09}
{'loss': 1.7116, 'learning_rate': 0.00019999535665248002, 'epoch': 0.1}
{'loss': 1.6978, 'learning_rate': 0.0001999814270411335, 'epoch': 0.11}
{'loss': 1.5436, 'learning_rate': 0.000199958212459561, 'epoch': 0.12}
{'loss': 1.5556, 'learning_rate': 0.00019992571506363, 'epoch': 0.13}
{'loss': 1.6217, 'learning_rate': 0.00019988393787127441, 'epoch': 0.13}
{'loss': 1.5164, 'learning_rate': 0.0001998328847622148, 'epoch': 0.14}
Which is very strange. At **0.01** epoch it shows it has very low losses. When I've been using oobabooga, it started from about **3** then wen down to **1.4.**
Is that even right? Does that mean that LoRA is successfully trained on fraction of epoch? | 2023-08-12T12:04:25 | https://www.reddit.com/r/LocalLLaMA/comments/15p2v7r/when_starting_lora_training_first_steps_already/ | DaniyarQQQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p2v7r | false | null | t3_15p2v7r | /r/LocalLLaMA/comments/15p2v7r/when_starting_lora_training_first_steps_already/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'EO6qVfOQXm2_-d9cG85lSO-sJ2QZ2XZUzLO4YrGnUZ0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=108&crop=smart&auto=webp&s=967b806868da1f8b68e1d466ba68230b80437ff9', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=216&crop=smart&auto=webp&s=f00227225acdb9efbb994870d05b3a7242553633', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=320&crop=smart&auto=webp&s=82b34cbb10cf089230703c29486f4f648abf0741', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=640&crop=smart&auto=webp&s=1bfd3c08b17cc0648cdae5edc50b2911e7528e80', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=960&crop=smart&auto=webp&s=c020393ef732eec1eea41e977b9cb8432c1e9884', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?width=1080&crop=smart&auto=webp&s=62b97811afd2153004ff121449be77bf2c9020b2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/D5bgRLgemo9x3MWYuYNoAOz1bI-0ZAJapSfkBQ_xOTI.jpg?auto=webp&s=e8d3e829f033b4b832150f61c48b2db95d475b25', 'width': 1200}, 'variants': {}}]} |
Let my character remember the conversation | 25 | My progress since last week has been towards creating my friendly chat partner, personal assistant.
A quick recap of what it can do:
* import tavernai (webp) characters
* listen to my voice, and reply in SAPI5 voices
* communicate with different languages models at the same time, with different roles (RP, summarization), different sizes (22b, 13b, 7b)
* chat rooms are ~~simple text files~~ in Trilium notes
​
There are hierarchical note taking tools that are used to keep personal notes. Why not use it to keep my personal chat logs.
​
Keep chat logs in a note taking tool (Trilium)
* save chat log as notes
* each chat character has a directory named after them
* the chat long-term memories come from the notes in directory (inner note)
​
[ The character's past memories are written on a white background.](https://preview.redd.it/f5h47fpgpnhb1.png?width=1522&format=png&auto=webp&s=d48972040c26ad7ed3bd1da61814d447d731477b)
​
The documents (notes) are not only stored in the Trilium app, but also in a separate area (space)
* store short dialogues (question - answer)
* index dialogues for fast retrieval
* search by synonyms (by matrix operations)
​
versus Traditional text search
* exact match
* fuzzy search
​
These dialogues can be
* a simple chat log,
* a knowledge base in Q&A format
​
The character can remember the summary of the conversation, but not the whole text. If I have a question to the character, the search is done in the whole text! Most people are unable to recall what exactly was said, the same applies to a virtual character.
​
[The memories of several characters from the week](https://preview.redd.it/uhgbt8jiqnhb1.png?width=973&format=png&auto=webp&s=39cf7c5f923ac52651c441ed06d5e38e167643b4)
Every time I chat with a character, the conversation is saved in a daily note. The number of notes and, thus the number of conversations is unlimited, the character is capable of recalling events from days ago.
There are no server requirements to run the chat, besides the lightweight koboldcpp. LTM can be done using any generic, popular math library.
It could be further improved by sending different questions (rp, math, finance, dba) to any language model - based on their specialization. | 2023-08-12T11:04:47 | https://www.reddit.com/r/LocalLLaMA/comments/15p1q7d/let_my_character_remember_the_conversation/ | justynasty | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p1q7d | false | null | t3_15p1q7d | /r/LocalLLaMA/comments/15p1q7d/let_my_character_remember_the_conversation/ | false | false | 25 | null | |
how to make wizardmath-70b-v1.0.ggmlv3.q8_0.bin correctly answer this puzzle? | 4 | [https://paste.c-net.org/PlungedLackey](https://paste.c-net.org/PlungedLackey)
It fails to see the relevance of Bob participating in both marathons.
Is there a technique to make the model answer correctly?
Is there an alternative model more suited to this problem? thanks
my previous attempts with other models:
[https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how\_to\_make\_the\_models\_like/](https://www.reddit.com/r/LocalLLaMA/comments/15dfzag/how_to_make_the_models_like/) | 2023-08-12T10:56:37 | https://www.reddit.com/r/LocalLLaMA/comments/15p1kgd/how_to_make_wizardmath70bv10ggmlv3q8_0bin/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p1kgd | false | null | t3_15p1kgd | /r/LocalLLaMA/comments/15p1kgd/how_to_make_wizardmath70bv10ggmlv3q8_0bin/ | false | false | self | 4 | null |
New to this, need some questions answered. | 8 | Hello! I'm pretty new to all of this, so I'm sorry if anything I say sounds stupid. I recently downloaded oobabooga and got georgesung\_llama2\_7b\_chat\_uncensored running successfully. However, I have a few questions:
1. What model loader should I be using and what are the main differences between them?
2. How do I offload the model to the GPU? I notice the model has a cpu tag in the model options. Does this mean its already running on the GPU by default unless I check the cpu option?
3. How do I get the model to really take advantage of my computer's resources? At least while running the model mentioned above, I've noticed my computer's fans don't really spin up. I know that its a small model, but I can't help but feel that I could be running it faster. I have a pretty beefy computer (i9-13900k, 64GB DDR5, RTX 4080) and want to make sure that I'm making the most of my hardware.
4. Given my hardware, what would you say is the most advanced language model that I could run?
I'm using Windows btw.
I'm really looking forward to getting into this! Language models are very fascinating to me and I'm very interested in learning more about them! | 2023-08-12T09:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/15p0e2b/new_to_this_need_some_questions_answered/ | BombTime1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15p0e2b | false | null | t3_15p0e2b | /r/LocalLLaMA/comments/15p0e2b/new_to_this_need_some_questions_answered/ | false | false | self | 8 | null |
SILO Language Models: Isolating Legal Risk In a Nonparametric Datastore (a model that lets owners remove their data) | 5 | 2023-08-12T09:14:47 | https://twitter.com/ssgrn/status/1689256059234361344 | saintshing | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15ozsmo | false | {'oembed': {'author_name': 'Suchin Gururangan', 'author_url': 'https://twitter.com/ssgrn', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Feel risky to train your language model on copyrighted data?<br><br>Check out our new LM called SILO✨, with co-lead <a href="https://twitter.com/sewon__min?ref_src=twsrc%5Etfw">@sewon__min</a><br><br>Recipe: collect public domain & permissively licensed text data, fit parameters on it, and use the rest of the data in an inference-time-only datastore. <a href="https://t.co/PqlqtbIFIS">pic.twitter.com/PqlqtbIFIS</a></p>— Suchin Gururangan (@ssgrn) <a href="https://twitter.com/ssgrn/status/1689256059234361344?ref_src=twsrc%5Etfw">August 9, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/ssgrn/status/1689256059234361344', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15ozsmo | /r/LocalLLaMA/comments/15ozsmo/silo_language_models_isolating_legal_risk_in_a/ | false | false | 5 | {'enabled': False, 'images': [{'id': 'LmWeAOeip9W2tpSy2skNoH72_0V4VYugfoVxJcuLXi8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/StLqUbdnDlSYajXN6uu44lefCAFNGSLN3kLPmQKLemQ.jpg?width=108&crop=smart&auto=webp&s=d9c24c04b01b732ad79419c855c336af6d7469d5', 'width': 108}], 'source': {'height': 70, 'url': 'https://external-preview.redd.it/StLqUbdnDlSYajXN6uu44lefCAFNGSLN3kLPmQKLemQ.jpg?auto=webp&s=804eadaefa46a344578aaf99d382e2764a86602c', 'width': 140}, 'variants': {}}]} | ||
What is the best API right now for self-hosted LLM usage? | 5 | I want to deploy a model and chat with it via messenger or some other interface and maybe give access to other people. Ideally API should have authorisation (ooga does not as I understand). | 2023-08-12T08:44:32 | https://www.reddit.com/r/LocalLLaMA/comments/15oz9iz/what_is_the_best_api_right_now_for_selfhosted_llm/ | InkognetoInkogneto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oz9iz | false | null | t3_15oz9iz | /r/LocalLLaMA/comments/15oz9iz/what_is_the_best_api_right_now_for_selfhosted_llm/ | false | false | self | 5 | null |
Are there any models which do something similar to Sudowrite? | 3 | Lately I started writing and while being pretty fluent in English, it’s not my mother tongue so on a long and creative texts it shows, I simply don’t know some phrases, expressions and I find myself repeating things I know multiple times, using same words when there are other synonyms. I found it pretty useful for descriptions, I could input my rough vision of what I imagined and it wrote some pretty good paragraphs or sentences.
Are there any models I could run on my i7 6700k, 32GB of RAM and 3060 Ti that would do similar things to what Sudowrite is doing? Of course I don’t expect perfectly similar alternative, but for example I input large parts of my text and then ask to rewrite certain paragraphs with prompts or something like that. | 2023-08-12T07:34:31 | https://www.reddit.com/r/LocalLLaMA/comments/15oy1oa/are_there_any_models_which_do_something_similar/ | JozoBozo121 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oy1oa | false | null | t3_15oy1oa | /r/LocalLLaMA/comments/15oy1oa/are_there_any_models_which_do_something_similar/ | false | false | self | 3 | null |
Google search extension for the webui | 17 | I stumbled upon this great extension for text-generation-webui.
All you have to do is install and start your question with “search X”.
The context will be shown on the console and the answer should be based on the google search result.
This is not my project but I have been using it for a couple of days with great success!
Would recommend you guys trying it.
https://github.com/simbake/web_search | 2023-08-12T07:16:31 | https://www.reddit.com/r/LocalLLaMA/comments/15oxqas/google_search_extension_for_the_webui/ | iChrist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oxqas | false | null | t3_15oxqas | /r/LocalLLaMA/comments/15oxqas/google_search_extension_for_the_webui/ | false | false | self | 17 | {'enabled': False, 'images': [{'id': 'KkOZ34ewH8CkmAZoKJ9_9OFtSBEZZoiE4Nj2KvRoQ54', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=108&crop=smart&auto=webp&s=020a6a4f13a947d8a648ebd3c72b1555c72c8420', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=216&crop=smart&auto=webp&s=1546b33e34521c67a406d4060fdf19b964a68d54', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=320&crop=smart&auto=webp&s=a9bc6ec116104a5f816003ccf30aed67da2cb0a2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=640&crop=smart&auto=webp&s=2851a65c169291965b51d935290cedd52e8f4f7a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=960&crop=smart&auto=webp&s=a9c6ea5a3a41fc27d70968ff849c1844abc5f6e4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?width=1080&crop=smart&auto=webp&s=8a74de8888a05640d99ad21055242b29971f3fe7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/pNWJFJ5QLoHsUUHYj0KTByurJDwhZbGrXkXxsTeJcYg.jpg?auto=webp&s=26028ecea4458f6b21ba2049b621b47e99dd0a46', 'width': 1200}, 'variants': {}}]} |
Unleash the Power of LLMs in Your Telegram Bot on a Budget | 1 | Interested in supercharging your Telegram bot with large language models (LLMs)? Here's a concise guide:
* **Introduction**: Harness LLMs like llama2-chat and vicuna. The bot is hosted on Amazon's free-tier EC2, with LLM inference on Beam Cloud.
* **Telegram Bot Setup**: Initiate with u/botfather on Telegram, get your token, and start a conversation with your bot.
* **Hosting**: Deploy on Amazon’s free-tier EC2 instance. The guide provides steps from EC2 setup to bot launch.
* **LLM Integration**: Beam Cloud, an affordable choice, is used for LLM inference. The bot taps into langchain and huggingface.
🔗 [**GitHub Repo**](https://github.com/ma2za/telegram-llm-guru) 🔗 [**Full Medium Article**](https://medium.com/@saverio3107/crafting-a-cost-effective-llm-powered-telegram-bot-a-step-by-step-guide-4d1e760e7eec) 🔗 [**Join Medium for More Updates**](https://medium.com/@saverio3107/membership)
Dive in, experiment, and enhance your Telegram bot's capabilities! Feedback and insights are welcome. 🚀 | 2023-08-12T06:18:02 | https://www.reddit.com/r/LocalLLaMA/comments/15owpt8/unleash_the_power_of_llms_in_your_telegram_bot_on/ | Xavio_M | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15owpt8 | false | null | t3_15owpt8 | /r/LocalLLaMA/comments/15owpt8/unleash_the_power_of_llms_in_your_telegram_bot_on/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WQaQz-DVrgtgbDxYcPOn0564CHaCPQWuay69Tl4JfVA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=108&crop=smart&auto=webp&s=1a1c62cfebf549c00745e3b0bec276fee8698bc4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=216&crop=smart&auto=webp&s=51a855dd70a18369401098f32e7b92460c4c4144', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=320&crop=smart&auto=webp&s=4fbb3d6cab69e2b3cfb7b4aa4d373445c4edd898', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=640&crop=smart&auto=webp&s=92c9fac434252b7e584d9149d9b68e831c8c6a19', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=960&crop=smart&auto=webp&s=c6dc21f88a39e4d4d7ff855b433df7791e74682e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?width=1080&crop=smart&auto=webp&s=d4c32ae1e8625aaa249c9e49806aae9635a57efe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xg9gt9rvQpSPMckyHJPtPHLl_IzUpU1f421xgoTXTiM.jpg?auto=webp&s=77a8a02ac93ca90d3a1a5266971608bc94875092', 'width': 1200}, 'variants': {}}]} |
Error when attempting to train raw data | 1 | [removed] | 2023-08-12T05:29:20 | https://www.reddit.com/r/LocalLLaMA/comments/15ovuax/error_when_attempting_to_train_raw_data/ | Lower_Spasm | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ovuax | false | null | t3_15ovuax | /r/LocalLLaMA/comments/15ovuax/error_when_attempting_to_train_raw_data/ | false | false | self | 1 | null |
Llama 2 chatbot performance for multiple users | 6 | Im thinking about hosting a local llama 2 chat chat using vector embedding internally within my company. Inference was avg about 20 seconds per query on v100. If I put a front end and allow multiple users to query it in simultaneously:
1) is this even possible or would the queries be queued
2) if it was possible to inference multiple requests at once, would performance take a proportionate hit? | 2023-08-12T05:06:36 | https://www.reddit.com/r/LocalLLaMA/comments/15ovffk/llama_2_chatbot_performance_for_multiple_users/ | godspeedrebel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ovffk | false | null | t3_15ovffk | /r/LocalLLaMA/comments/15ovffk/llama_2_chatbot_performance_for_multiple_users/ | false | false | self | 6 | null |
How to use QLora for LLaMa 2? | 6 | Hello ! I want to use QLora for fine-tune LLaMA-2 70b (maybe 13b). And i don't know how to use qlora.
*"python qlora.py --model\_name\_or\_path TheBloke/**llama-2-70b.ggmlv3.q4\_K\_M**.bin*
*--dataset my-data"*
It's correct command for use ?
| 2023-08-12T03:57:03 | https://www.reddit.com/r/LocalLLaMA/comments/15ou360/how_to_use_qlora_for_llama_2/ | Alex_Strek | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ou360 | false | null | t3_15ou360 | /r/LocalLLaMA/comments/15ou360/how_to_use_qlora_for_llama_2/ | false | false | self | 6 | null |
Cloud GPU Quotas | 10 | I've been running various different LLMs on the cloud and have been able to run 7 and 13 billion parameter models with ease. I am working on a small personal project with no strong commercial value. However, 30 billion parameter models require about 40 GB GPU space. There are two problems: Llambda allows you to allocate machines but almost never has any available. AWS, Azure, and Paper space have appropriate machines (apparently) but have quotas that limit you from ever creating a machine with the correct metrics. AWS is a little confusing: as with many clouds they ask you to file tickets to increase your quota. After filing around 6 tickets, it's clear they won't increase my limit beyond 20 GB GPU.
Does anyone know of a cloud that allows you to actually allocate large (40+GB) GPU machines easily? Thanks | 2023-08-12T03:43:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ottvi/cloud_gpu_quotas/ | Pristine_Drag_5695 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ottvi | false | null | t3_15ottvi | /r/LocalLLaMA/comments/15ottvi/cloud_gpu_quotas/ | false | false | self | 10 | null |
Introducing YourChat: A multi-platform LLM chat client that supports the APIs of llama.cpp and text-generation-webui. | 7 | Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp.
​
Features:
\* Subscription Links: Our distinctive feature allows you to consolidate your services into a single shareable link. Share your LLM with your team or friends.
\* Multi-Platform: YourChat is available on Windows, MacOS, Android, and iOS, ensuring a seamless experience whether you're on mobile or desktop.
\* Built-In Prompts: Channel creativity using integrated prompts sourced from github.com/f/awesome-chatgpt-prompts.
​
API Extensiveness:
\* text-generation-webui
\* llama.cpp
\* GPT Compatible API (for third-party OpenAI-like APIs)
\* OpenAI API (not available on apple app store version)
​
Some Screenshots:
[Chat with preset prompt](https://preview.redd.it/qibwbf2xmlhb1.png?width=2000&format=png&auto=webp&s=6c1974842e4bea3a6f062859764f69496713a5f6)
[Completion Mode](https://preview.redd.it/9am6zi2xmlhb1.png?width=2000&format=png&auto=webp&s=8be7c17c4d5b853c957bc9eb1ef6ee99288647f5)
[Download LLMs with subscription URL](https://preview.redd.it/dudsfg2xmlhb1.png?width=2000&format=png&auto=webp&s=80a8800e4b3da36ad15e7eb4eabc92c132870898)
​
Download:
Play Store: [https://play.google.com/store/apps/details?id=app.yourchat](https://play.google.com/store/apps/details?id=app.yourchat)
App Store: [https://apps.apple.com/app/yourchat/id6449383819](https://apps.apple.com/app/yourchat/id6449383819)
Desktop Version: [https://yourchat.app/download](https://yourchat.app/download) | 2023-08-12T03:34:50 | https://www.reddit.com/r/LocalLLaMA/comments/15otnrb/introducing_yourchat_a_multiplatform_llm_chat/ | constchar_llc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15otnrb | false | null | t3_15otnrb | /r/LocalLLaMA/comments/15otnrb/introducing_yourchat_a_multiplatform_llm_chat/ | false | false | 7 | null | |
Introducing YourChat: A multi-platform LLM chat client that supports the APIs of llama.cpp and text-generation-webui | 1 | Introducing YourChat: A multi-platform LLM chat client that supports the APIs of text-generation-webui and llama.cpp.
​
Features:
\* Subscription Links: Our distinctive feature allows you to consolidate your services into a single shareable link. Share your LLM with your team or friends.
\* Multi-Platform: YourChat is available on Windows, MacOS, Android, and iOS, ensuring a seamless experience whether you're on mobile or desktop.
\* Built-In Prompts: Channel your creativity using our integrated prompts sourced from [github.com/f/awesome-chatgpt-prompts](https://github.com/f/awesome-chatgpt-prompts).
​
API Extensiveness:
\* text-generation-webui
\* llama.cpp
\* GPT Compatible API (for third-party OpenAI-like APIs)
\* OpenAI API (not available on apple app store)
​
Some Screenshots:
​
[Chat with preset prompt](https://preview.redd.it/ne8npkz4mlhb1.png?width=2000&format=png&auto=webp&s=4bc91195e3853d5abe3c247c960e229c18b685dc)
[Completion Mode](https://preview.redd.it/aa3cb605mlhb1.png?width=2000&format=png&auto=webp&s=475fcd45e8e7e3702518d4a0ef79190c881a5c6a)
[Download LLMs with subscription URL](https://preview.redd.it/gucvdmz4mlhb1.png?width=2000&format=png&auto=webp&s=df32be4b9c8170f9d3c69d49411234ecb3fc186d)
​
Download:
Play Store: [https://play.google.com/store/apps/details?id=app.yourchat](https://play.google.com/store/apps/details?id=app.yourchat)
App Store: [https://apps.apple.com/app/yourchat/id6449383819](https://apps.apple.com/app/yourchat/id6449383819)
Desktop Version: [https://yourchat.app/download](https://yourchat.app/download) | 2023-08-12T03:27:39 | https://www.reddit.com/r/LocalLLaMA/comments/15otin3/introducing_yourchat_a_multiplatform_llm_chat/ | constchar_llc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15otin3 | false | null | t3_15otin3 | /r/LocalLLaMA/comments/15otin3/introducing_yourchat_a_multiplatform_llm_chat/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'iPAxVKyDrGKh6Fy565L9IfeVxI98eU-gpQ8iVdEZJnI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=108&crop=smart&auto=webp&s=ab3f6d772980f572178a1d5757d0e6f5bea255f3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=216&crop=smart&auto=webp&s=332fe311b43f5a5f859d9279c9def2d00eb06023', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=320&crop=smart&auto=webp&s=3f6dd960c376db66dd213d024ae3b3c93a679fbd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=640&crop=smart&auto=webp&s=f658714715aa641c6a36b35f078c1668d9c8c74a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=960&crop=smart&auto=webp&s=4e23c5c8e4c70238b19c50294757ab09e4eadc2a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?width=1080&crop=smart&auto=webp&s=a0b345ce1215db3acfdb307d3e49649d3de40dbe', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/yFtD8XmoiZBwgJ25h84reJvMOtINt19tk9nnYO8iIQE.jpg?auto=webp&s=b9e5c5ed6eb2186b6e5d914c915197dd7df3de6a', 'width': 1200}, 'variants': {}}]} | |
How to measure effective context length? | 7 | I'd like to verify how much text llms can actually consider while giving responses. for this i came up with 2 experiments:
- give a piece of text with a certain amount of words, and have the llm respond to the query: "what is the first line of the text? what is the last line of the text?" | 2023-08-12T02:08:18 | https://www.reddit.com/r/LocalLLaMA/comments/15orwir/how_to_measure_effective_context_length/ | nlpllama | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orwir | false | null | t3_15orwir | /r/LocalLLaMA/comments/15orwir/how_to_measure_effective_context_length/ | false | false | self | 7 | null |
I have tuned llama2 7B and openai davinci for text generation, is there a way i can compare the results of both. | 1 | [removed] | 2023-08-12T01:51:35 | https://www.reddit.com/r/LocalLLaMA/comments/15orjkh/i_have_tuned_llama2_7b_and_openai_davinci_for/ | mrtac96 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orjkh | false | null | t3_15orjkh | /r/LocalLLaMA/comments/15orjkh/i_have_tuned_llama2_7b_and_openai_davinci_for/ | false | false | self | 1 | null |
I been recently wondering, is there any way to train an LLM to output something specific. Kinda like a rating system (can be as simple as thumbs up or thumbs down) or is that what LORAs are? | 1 | I'm still really new to this so forgive me if it's a silly question. | 2023-08-12T01:45:06 | https://www.reddit.com/r/LocalLLaMA/comments/15orelr/i_been_recently_wondering_is_there_any_way_to/ | VirylLucas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15orelr | false | null | t3_15orelr | /r/LocalLLaMA/comments/15orelr/i_been_recently_wondering_is_there_any_way_to/ | false | false | self | 1 | null |
I'm trying to get TheBloke_airoboros-33B-GPT4-2.0-GPTQ to create an accurate list of modern science fiction books, and just... I just... | 34 | 2023-08-12T01:27:29 | CatastrophicallyEmma | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15or13q | false | null | t3_15or13q | /r/LocalLLaMA/comments/15or13q/im_trying_to_get_thebloke_airoboros33bgpt420gptq/ | false | false | 34 | {'enabled': True, 'images': [{'id': 'nhPGyOPLreXGcl0PkmXeulWfEvVfyVPv6JDhG8P4UZ0', 'resolutions': [{'height': 142, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=108&crop=smart&auto=webp&s=2ef2b16e278a32b318572f54a215747385a1d1ef', 'width': 108}, {'height': 284, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=216&crop=smart&auto=webp&s=33351d7e5de5e385095993150698d6d68726c9a7', 'width': 216}, {'height': 421, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=320&crop=smart&auto=webp&s=be3831565d3e7f9fd9148adf25cf26507479c1cb', 'width': 320}, {'height': 843, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=640&crop=smart&auto=webp&s=be6b5c1ae09e2fc7cb38faec85b63cd836292de2', 'width': 640}, {'height': 1264, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=960&crop=smart&auto=webp&s=eead07270d7b71b88f02a937162a659e3c9b01db', 'width': 960}, {'height': 1422, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?width=1080&crop=smart&auto=webp&s=834e4b96aa08846246f5627eff87cad8ca4721f8', 'width': 1080}], 'source': {'height': 1432, 'url': 'https://preview.redd.it/z5nh22eq0lhb1.png?auto=webp&s=816af06ea9734807d6c17a7a1a08cdeb66568a44', 'width': 1087}, 'variants': {}}]} | |||
Are there any good fantasy writing Lora for llama 2 or otherwise? | 0 | Looking for some lora to work as a prose enhancer that doesn't shy away from nsfw (violence) or even some sex scenes.
Thanks in advance! | 2023-08-12T01:04:50 | https://www.reddit.com/r/LocalLLaMA/comments/15oqjaq/are_there_any_good_fantasy_writing_lora_for_llama/ | Squeezitgirdle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oqjaq | false | null | t3_15oqjaq | /r/LocalLLaMA/comments/15oqjaq/are_there_any_good_fantasy_writing_lora_for_llama/ | false | false | self | 0 | null |
Our Workflow for a Custom Question-Answering App | 85 | Live demoed our MVP custom answering app today. It’s a Falcon-7b model fine tuned on an instruction set generated from one of the military services’ doctrine and policies. That’s then pointed at a vector database with the same publications indexed via Lama index, with engineering to force answers from context only, and set to "verbose" (links to the context chunks).
Our workflow:
1. Collected approx 4k unclassified/non-CUI pubs from one of the services.
2. Chunked each document into 2k tokens, and then ran them up against Davinci in our Azure enclave, with prompts generating questions.
3. Re-ran the same chunks to generate answers to those questions
4. Collated Q&A to create an instruct dataset (51k) in the target domain's discourse.
5. LoRA fine-tuned Falcon-7b on the Q&A dataset
6. Built a vector database (Chroma DB) on the same 4k publications
7. Connected a simple web UI to Llama-Index that passes natural language questions as vectors to the vector DB, returns 4-nearest neighbor chunks ("context") and the question to fine-tuned LLM.
8. Prompt includes language forcing the LLM to answer from context only.
9. Llama-Index returns the answer to the UI, along with link to the hosted context chunks.
The one thing we are still trying to improve is alignment training--currently Llama-Index and the prompt engineering keep it on rails but natively the model can be pretty toxic or dangerous. | 2023-08-11T23:41:01 | https://www.reddit.com/r/LocalLLaMA/comments/15oome9/our_workflow_for_a_custom_questionanswering_app/ | Mbando | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oome9 | false | null | t3_15oome9 | /r/LocalLLaMA/comments/15oome9/our_workflow_for_a_custom_questionanswering_app/ | false | false | self | 85 | null |
Documentation based qa | 3 | 2023-08-11T23:16:59 | https://huggingface.co/Arc53/docsgpt-7b-falcon | ale10xtu | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15oo2a0 | false | null | t3_15oo2a0 | /r/LocalLLaMA/comments/15oo2a0/documentation_based_qa/ | false | false | 3 | {'enabled': False, 'images': [{'id': '9xTJELL1YL4PyriMXYWRWD3cUAbTClyF15_unUlVjVQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=108&crop=smart&auto=webp&s=398d02814010f50239d36285cce603a9956e5ce6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=216&crop=smart&auto=webp&s=c613c8979bcf43402af4901fdc8156a3f611c490', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=320&crop=smart&auto=webp&s=670b9c1adbc0fed8074ee29e2bd406b0b7020aa1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=640&crop=smart&auto=webp&s=69cf0de3bac96a35ffb4bd30aae6064bffe844ec', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=960&crop=smart&auto=webp&s=f868a22c69d74d6e6c59860eccef9f753299edc1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?width=1080&crop=smart&auto=webp&s=a52c4898cf5d426d686010532a09d408d73000b7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/MAdI_F3Sjyc65tBJxktzGh8eySyEKgp7Np0BU1nEI_o.jpg?auto=webp&s=e4f15d7baf297e601bd2eb8e04bc505d16cb0b28', 'width': 1200}, 'variants': {}}]} | ||
If i hosted a chatgpt like website running a uncensored model on my rtx 3080 at home is that legal? | 5 | For people without expensive computers to have access to a uncensored model. i can probably make it work selfhosted at home as any gpu cloud is very pricey | 2023-08-11T22:51:06 | https://www.reddit.com/r/LocalLLaMA/comments/15onfng/if_i_hosted_a_chatgpt_like_website_running_a/ | jptboy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15onfng | false | null | t3_15onfng | /r/LocalLLaMA/comments/15onfng/if_i_hosted_a_chatgpt_like_website_running_a/ | false | false | self | 5 | null |
strange behavior with newhope.ggmlv3.q4_K_S | 1 | I've been testing the [newhope.ggmlv3.q4\_K\_S](https://huggingface.co/TheBloke/NewHope-GGML) by theBloke and it's been acting super weird and im not sure if its just poor parameters, system prompt, the 4-bit quant or the model just sucks in general?
https://preview.redd.it/u02b8s4r1khb1.png?width=1544&format=png&auto=webp&s=ab570dc78f71f75015d8be09437a3c00ba436acf
\`sys prompt\`: " You are a gifted python developer. Provide ALL your scripts in within single python markdown block. Ensure they are executable. Be efficient with compute. Maintain clear communication and a friendly demeanor. Use emojis occasionally." lol
core parameters: \` {"-c", "2048", "-ngl", "200"}
\`inference params\`:
\`\`\`python
const params = signal({
temperature: 0.7,
repeat\_last\_n: 256, *// 0 = disable penalty, -1 = context size*
repeat\_penalty: 1.18, *// 1.0 = disabled*
top\_k: 40, *// <= 0 to use vocab size*
top\_p: 0.5, *// 1.0 = disabled*
tfs\_z: 1.0, *// 1.0 = disabled*
typical\_p: 1.0, *// 1.0 = disabled*
presence\_penalty: 0.0, *// 0.0 = disabled*
frequency\_penalty: 0.0, *// 0.0 = disabled*
mirostat: 0, *// 0/1/2*
mirostat\_tau: 5, *// target entropy*
mirostat\_eta: 0.1, *// learning rate*
})
\`\`\`
​ | 2023-08-11T22:27:18 | https://www.reddit.com/r/LocalLLaMA/comments/15omu0u/strange_behavior_with_newhopeggmlv3q4_k_s/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15omu0u | false | null | t3_15omu0u | /r/LocalLLaMA/comments/15omu0u/strange_behavior_with_newhopeggmlv3q4_k_s/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'q933B8y48VjFiOf9DmJnoMHpcG_sNy-2VRxGOgaJblE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=108&crop=smart&auto=webp&s=1e1b58069998283803dc36c46425e88e56cf1aad', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=216&crop=smart&auto=webp&s=4a7fd6d93595a17e59a3dd39c70220d208194ab2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=320&crop=smart&auto=webp&s=5c92b1f14dc65191a09dee27407ceb263fc26d9d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=640&crop=smart&auto=webp&s=c7825f39480c3777d49bfafcc1cd46037abf08d9', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=960&crop=smart&auto=webp&s=0643c3d28f88e1385ce223fc02c9be1231643d39', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?width=1080&crop=smart&auto=webp&s=574b545858aa29fabbf723dff3f1b20629d71794', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/NbPMAtdGD0wzNzeW6bfXQxUamK3UR4zTtCBokSuyWyY.jpg?auto=webp&s=8155d622d2491334a597038a984aa27b45c66213', 'width': 1200}, 'variants': {}}]} | |
Trouble Running Llama-2 70B on HPC with Limited GPUs - Need Help! | 3 | I'm utilizing Llama-2 on a high-performance computing (HPC) setup and dispatching tasks through SLURM. I managed to run the Llama 7B model, but I ran into problems with the 70B variant.
It seems that the Llama-2 70B model anticipates 8 distinct GPUs, given its MP configuration of 8, implying nproc\_per\_node = 8 in torchrun settings. However, my HPC only allows 4 GPUs per node.
Does anyone know if running the 70B model is feasible under this constraint? I think I might be missing a workaround, especially since the HPC boasts high-end GPUs like the A100. For clarity, here's the SLURM configuration I'm deploying.
​
\#!/bin/bash
\#SBATCH --job-name=llama\_chat\_run # Change the job name to something more descriptive
\#SBATCH --nodes=1
\#SBATCH --cpus-per-task=4
\#SBATCH --ntasks-per-node=1
\#SBATCH --mem=16GB
\#SBATCH --gres=gpu:1
\#SBATCH --time=2:00:00 # Extend runtime based on your expectations
\#SBATCH --output=llama\_chat\_run.%j.out # Optional: name the output file to reflect the job
\#SBATCH --error=llama\_chat\_run.%j.err # Optional: name the error file to reflect the job
​
module purge;
module load anaconda3/2020.07;
export OMP\_NUM\_THREADS=$SLURM\_CPUS\_PER\_TASK;
source /share/apps/anaconda3/2020.07/etc/profile.d/conda.sh;
conda activate ./penv;
export PATH=./penv/bin:$PATH;
​
\# Use torchrun command instead of the python command
torchrun --nproc\_per\_node 1 example\_chat\_completion.py \\
\--ckpt\_dir llama-2-70b-chat/ \\
\--tokenizer\_path tokenizer.model \\
\--max\_seq\_len 512 --max\_batch\_size 6
​
​
​ | 2023-08-11T22:12:55 | https://www.reddit.com/r/LocalLLaMA/comments/15omgnq/trouble_running_llama2_70b_on_hpc_with_limited/ | MasterJaguar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15omgnq | false | null | t3_15omgnq | /r/LocalLLaMA/comments/15omgnq/trouble_running_llama2_70b_on_hpc_with_limited/ | false | false | self | 3 | null |
How to use multiple GPUs on different systems? | 1 | I want to use my Gaming Laptop with an 8GB 3080, and one other system with an 8GB RX580. I don't know what the performance hit would be. TBH I just want to do it for the sake of it! I'm pretty new to running LLMs, so some explanation would be really helpful!
Thankyou! | 2023-08-11T20:28:25 | https://www.reddit.com/r/LocalLLaMA/comments/15ojptj/how_to_use_multiple_gpus_on_different_systems/ | KvAk_AKPlaysYT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ojptj | false | null | t3_15ojptj | /r/LocalLLaMA/comments/15ojptj/how_to_use_multiple_gpus_on_different_systems/ | false | false | self | 1 | null |
Encourage Your Workplace to Host its Own LLMs | 121 | [ChatGPT fever spreads to US workplace, sounding alarm for some](https://www.reuters.com/technology/chatgpt-fever-spreads-us-workplace-sounding-alarm-some-2023-08-11/)
The biggest issue with the ubiquitous use of ChatGPT in the workplace is that all of the information gets leaked. Most corporations who are interested in making money off their ideas should be interested in keeping those ideas largely quiet. This is not widely appreciated by employers today.
You think you may care about NSFW work or censorship. Think about how much Siemens will care once they realize their workers are divulging their trade secrets.
If we can a reasonable number of corporations to start running internal LLMs, the size of the addressable market for LLMs will grow exponentially. It will be fantastic for LLaMA and home LLM use orthogonally.
Please advocate for local LLM use at work. Thank you for your time. | 2023-08-11T20:18:04 | https://www.reddit.com/r/LocalLLaMA/comments/15ojg5c/encourage_your_workplace_to_host_its_own_llms/ | friedrichvonschiller | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ojg5c | false | null | t3_15ojg5c | /r/LocalLLaMA/comments/15ojg5c/encourage_your_workplace_to_host_its_own_llms/ | false | false | self | 121 | {'enabled': False, 'images': [{'id': '67c-IjzWz8qmTmo-aRRdu58s6Tmf9wruf18jvgnZz0w', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=108&crop=smart&auto=webp&s=c0022678e0ff8b9660760f7d8383f89338d69f0f', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=216&crop=smart&auto=webp&s=ae94d7e0f34a909ea0a594bae054b56a5802a21f', 'width': 216}, {'height': 167, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=320&crop=smart&auto=webp&s=71f85e485328bf95a39bf1b2d2413550faf66815', 'width': 320}, {'height': 334, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=640&crop=smart&auto=webp&s=d8ba89c96f3a4c336b059eed7cffc634b0eca737', 'width': 640}, {'height': 502, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=960&crop=smart&auto=webp&s=727363fa992caac483cbaa04f7b22b95469f969b', 'width': 960}, {'height': 565, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?width=1080&crop=smart&auto=webp&s=45f2e263d547168fdc730e1f99703c9b28972403', 'width': 1080}], 'source': {'height': 628, 'url': 'https://external-preview.redd.it/2kMixBo1N7fReiDsbtxGR48VdJ3f9OeQ0gdMGmMxz7c.jpg?auto=webp&s=b6e5440e0dca913a3dff038c1edfc3e4384fac67', 'width': 1200}, 'variants': {}}]} |
Preventing LLAMA from hallucinating responses. | 22 | So we are using LLAMA is a typical RAG scenario, give it some context and ask it a question. What i have found is, no matter how much i yell at it in the prompt, for certain questions, it always gives the wrong, hallucinated answer, even if the right answer is in the document inside.
For example, the document would be like:
Student A has score 100
Student B has score 95
Student C has score 99
(very very simplified, in reality these are all chunks - about 200 tokens - and there are a dozen chunks)
LLAMA will always answer Student B wrong. All others are right - just one is wrong. GPT3/4 do not have this problem, although GPT3 did once in a while, and on the same student. Which is bizarre.
GPT 4 however, 100% of time is correct.
​
Second category - if you ask it score history for example - when did student x score decrease - it will bring up a place where the score increased and write about it as if it decreased. Which is non sensical. GPT3 does the same thing. GPT4 will tell you - It does not appear the score for this student decreased.
What is GPT 4 doing? How can I help llama or even gpt3 do something like this? Is this just RLHF? Am i stuck using gpt4 for this?
​
​ | 2023-08-11T20:09:17 | https://www.reddit.com/r/LocalLLaMA/comments/15oj83h/preventing_llama_from_hallucinating_responses/ | Alert_Record5063 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oj83h | false | null | t3_15oj83h | /r/LocalLLaMA/comments/15oj83h/preventing_llama_from_hallucinating_responses/ | false | false | self | 22 | null |
Platypus models | 12 | 2023-08-11T19:19:39 | https://twitter.com/natanielruizg/status/1690048207030493189 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15ohxvx | false | {'oembed': {'author_name': 'Nataniel Ruiz', 'author_url': 'https://twitter.com/natanielruizg', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">We are 🔥super excited🔥 to release the Platypus family of finetuned LLMs 🥳🥳. Platypus achieves the top score in the Hugging Face Open LLM Leaderboard 🏆! The main focus of our work is to achieve cheap, fast and powerful refinement of base LLMs.<br>page: <a href="https://t.co/QHJ6kDoCYa">https://t.co/QHJ6kDoCYa</a> <a href="https://t.co/MOSiflQLDU">pic.twitter.com/MOSiflQLDU</a></p>— Nataniel Ruiz (@natanielruizg) <a href="https://twitter.com/natanielruizg/status/1690048207030493189?ref_src=twsrc%5Etfw">August 11, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/natanielruizg/status/1690048207030493189', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15ohxvx | /r/LocalLLaMA/comments/15ohxvx/platypus_models/ | false | false | 12 | {'enabled': False, 'images': [{'id': 'vVURZSvN8RjsPJbzUNoyI4xYt1V2yGYOSNsvuKbFmaQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/RgulP3L45yR0BcX2I4VThtZt02S3L7XjFZ8D3rEGKo4.jpg?width=108&crop=smart&auto=webp&s=62a0e833e88283fd675c0789e2ffad916cb1b1f3', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/RgulP3L45yR0BcX2I4VThtZt02S3L7XjFZ8D3rEGKo4.jpg?auto=webp&s=cb48fa81b98753463cf4c36e24d2b3eda10da9a1', 'width': 140}, 'variants': {}}]} | ||
Is it possible to use multiple GPUs of different generations on a single PC? | 5 | I recently swapped out my 1070 with a 3060 to load 13b 4-bit models, and it's working like a charm, but I would like to try for more. I am on a budget, so I am hesitant to buy another 3060 if it's not necessary. Would it be possible to put the 1070 back in my machine, and use both GPUs at once?
I currently have the drivers for the 3060 installed, and I heard that it's not possible to install two different sets of Geforce drivers on one machine. Would the 1070 function with the 3060 drivers for the purpose of loading bigger models through exllama? Would my machine recognize it, and would I benefit from it? | 2023-08-11T19:17:34 | https://www.reddit.com/r/LocalLLaMA/comments/15ohw0g/is_it_possible_to_use_multiple_gpus_of_different/ | Zugzwang_CYOA | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ohw0g | false | null | t3_15ohw0g | /r/LocalLLaMA/comments/15ohw0g/is_it_possible_to_use_multiple_gpus_of_different/ | false | false | self | 5 | null |
What the fuck is wrong with WizardMath??? | 262 | 2023-08-11T18:48:55 | bot-333 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15oh576 | false | null | t3_15oh576 | /r/LocalLLaMA/comments/15oh576/what_the_fuck_is_wrong_with_wizardmath/ | false | false | 262 | {'enabled': True, 'images': [{'id': 'xbug615PVicVm3MHsJ-wILaxpgeEIJx5h-4v1PZoxAQ', 'resolutions': [{'height': 44, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=108&crop=smart&auto=webp&s=a81fb1bfbf27bf6a8d9bd4458ae4eb8578d42dbf', 'width': 108}, {'height': 89, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=216&crop=smart&auto=webp&s=ebf65c1ae8ef5de63d96926611fb6a2eb07813b4', 'width': 216}, {'height': 131, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=320&crop=smart&auto=webp&s=1ace9e8d2f4bbd82cf42b2bd4eb879fc51075174', 'width': 320}, {'height': 263, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=640&crop=smart&auto=webp&s=ecf1d96609e661cc91206f3bdb7659b8ef039c1d', 'width': 640}, {'height': 395, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=960&crop=smart&auto=webp&s=d725b6df4e5816a9e32e914c41255f2d48c1dcf2', 'width': 960}, {'height': 445, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?width=1080&crop=smart&auto=webp&s=bed9bb6b349616ac22f527e979dc29b936b51d68', 'width': 1080}], 'source': {'height': 716, 'url': 'https://preview.redd.it/47823tkm1jhb1.png?auto=webp&s=12513bec3bda3b09f781799a2ea95a1454181bc4', 'width': 1736}, 'variants': {}}]} | |||
Thinking about purchasing a 4090 for KoboldCPP... Got some questions. | 9 | So currently I'm using a 5600g with 32GB of ram and a 12GB 3060 on Linux.
What I would like to do is try and find a 24GB-ish LLM model that excels at collaborative story writing that I can run on my present hardware (doesn't matter how slow it is) just to get an idea of what improvements the 4090 would give me. My main goal is to have coherent generation and have the model stay on track and produce few anomalies.
It would be amazing if you could make two suggestions for me:
1. Which model I should use (something that a 4090 can fully utilize but will also be "usable" on my present hardware).
2. What command launch options I should use. For example: I don't particularly understand what ropeconfig is and am confused why some of us set context size as a launch option when you can set it in the interface.
Thanks for reading and any replies.
What an awesome community this is turning out to be. Very happy to be here with y'all.
Cheers. | 2023-08-11T18:47:00 | https://www.reddit.com/r/LocalLLaMA/comments/15oh3ie/thinking_about_purchasing_a_4090_for_koboldcpp/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oh3ie | false | null | t3_15oh3ie | /r/LocalLLaMA/comments/15oh3ie/thinking_about_purchasing_a_4090_for_koboldcpp/ | false | false | self | 9 | null |
Out of memory using multiple GPUs | 2 | I have an EC2 p2.8xlarge instance running on AWS with 8x Nvidia K80 GPUs, each with 12 GB VRAM for a total of 96 GB. I am trying to run LLaMA 2, and have tried both 7B and 70B. If I run it with 7B, I get the error `loading checkpoint for MP=1 but world size is 8`, and with 70B, `torch.cudatorch.OutOfMemoryError: Tried to allocate 448.00 MiB (GPU 7; 11.17 GiB total capacity, 10.21 GiB already allocated; 324.19 MiB free; 10.62 GiB reserved in total by PyTorch)`. How can I spread the memory across all GPUs? | 2023-08-11T18:28:31 | https://www.reddit.com/r/LocalLLaMA/comments/15ogmc9/out_of_memory_using_multiple_gpus/ | EffectiveFood4933 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ogmc9 | false | null | t3_15ogmc9 | /r/LocalLLaMA/comments/15ogmc9/out_of_memory_using_multiple_gpus/ | false | false | self | 2 | null |
New Model RP Comparison/Test (7 models tested) | 68 | This is a follow-up to my previous post here: [Big Model Comparison/Test (13 models tested) : LocalLLaMA](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)
Here's how I evaluated these (same methodology as before) for their role-playing (RP) performance:
- Same (complicated and limit-testing) long-form conversation with all models, [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend, [KoboldCpp](https://github.com/LostRuins/koboldcpp) backend, GGML q5\_K\_M, Deterministic generation settings preset, [Roleplay instruct mode preset](https://www.reddit.com/r/LocalLLaMA/comments/15mu7um/sillytaverns_roleplay_preset_vs_modelspecific/), > 22 messages, going to full 4K context, noting especially good or bad responses.
So here's the list of models and my notes plus my very personal rating (➕ = worth a try, - ➖ disappointing, ❌ = unusable):
- ➕ **[huginnv1.2](https://huggingface.co/TheBloke/huginnv1.2-GGML)**: Much better than the previous version (Huginn-13B), very creative and elaborate, focused one self-made plot point early on, nice writing and actions/emotes, repetitive emoting later, redundant speech/actions (says what she's going to do and then emotes doing it), missed important detail later and became nonsensical because of that. More creative but less smart than other models.
- ➖ **[MythoMix-L2-13B](https://huggingface.co/TheBloke/MythoMix-L2-13B-GGML)**: While other models often went too fast, this one needed a bit of coaxing to proceed, got confused about who's who and anatomy, mixing up people and instructions, wrote what User does, actions switched between second and third person. But good actions and descriptions, and believable and lively characters, and no repetition/looping all the way to full 4K context and beyond! **Only gets a ➖ instead of a ➕ because there's already a successor, [MythoMax-L2-13B-GGML](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML), which I like even more!**
- ➕ **[MythoMax-L2-13B](https://huggingface.co/TheBloke/MythoMax-L2-13B-GGML)**: Started talking/acting as User (had to use non-deterministic preset and enable "Include Names" for the first message)! While other models often went too fast, this one needed a bit of coaxing to proceed, got confused about who's who and anatomy, mixing up people and instructions, mentioned scenario being a simulation. But nice prose and excellent writing, and no repetition/looping all the way to full 4K context and beyond! **This is my favorite of this batch! I'll use this a lot more from now on, right now it's my second favorite model next to my old favorite [Nous-Hermes-Llama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)!**
- ➖ **[orca_mini_v3_13B](https://huggingface.co/TheBloke/orca_mini_v3_13B-GGML)**: Repeated greeting message verbatim (but not the emotes), talked without emoting, spoke of agreed upon parameters regarding limits/boundaries, terse/boring prose, had to ask for detailed descriptions, description was in past tense, speech within speech, wrote what User does, got confused about who's who and anatomy, became nonsensical later. **May be a generally smart model, but apparently not a good fit for roleplay!**
- ➖ **[Stable-Platypus2-13B](https://huggingface.co/TheBloke/Stable-Platypus2-13B-GGML)**: Extremely short and terse responses (despite Roleplay preset!), had to ask for detailed descriptions, got confused about who's who and anatomy, repetitive later. But good and long descriptions when specifically asked for! **May be a generally smart model, but apparently not a good fit for roleplay!**
- ❌ **[vicuna-13B-v1.5-16K](https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GGML)**: Confused about who's who from the start, acted and talked as User, repeated greeting message verbatim (but not the very first emote), normal afterwards (talks and emotes and uses emoticons normally), but mentioned boundaries/safety multiple times, described actions without doing them, needed specific instructions to act, switched back from action to description in the middle of acting, repetitive later, some confusion. Seemed less smart (grammar errors, mix-ups), but great descriptions and sense of humor, but broke down completely within 20 messages (> 4K tokens)! **SCALING ISSUE (despite using `--contextsize 16384 --ropeconfig 0.25 10000`)?**
- ❌ **[WizardMath-13B-V1.0](https://huggingface.co/TheBloke/WizardMath-13B-V1.0-GGML)**: Ends every message with "The answer is: ", making it unsuitable for RP! So I instead did some logic tests - unfortunately it failed them all ("Sally has 3 brothers...", "What weighs more, two pounds of feathers or one pound of bricks?", and "If I have 3 apples and I give two oranges...") even with "Let's think step by step." added.
Looking forward to your comments, especially if you have widely different experiences, so I may go back to retest some models with different settings... | 2023-08-11T18:17:40 | https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ogc60 | false | null | t3_15ogc60 | /r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/ | false | false | self | 68 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]} |
Access to my server with a httpRequest or other | 2 | My model is runing on localhost:7860
I want to access it, I have tryed with python
import requests
request = {'prompt': 'hi', 'max\_new\_tokens': 4096}
r = requests.post(url='[http://localhost:7860/api/v1/generate](http://localhost:7860/api/v1/generate)', json=request)
print(r.json())
I have on request reply : detail:not found or detail:method not allowed
What's wrong?
CG. | 2023-08-11T18:01:38 | https://www.reddit.com/r/LocalLLaMA/comments/15ofwpo/access_to_my_server_with_a_httprequest_or_other/ | ppcfbadsfree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ofwpo | false | null | t3_15ofwpo | /r/LocalLLaMA/comments/15ofwpo/access_to_my_server_with_a_httprequest_or_other/ | false | false | self | 2 | null |
LLM, Semantic search and large volume of documents | 1 | Hello,
I know that this question was probably asked a few times but I really cannot decide on the best approach and could use your help.
We have hundreds of thousands of documents and we want to create a "chatbot" that could possibly answer questions that can only be found in those documents. Now, the documents can be very very similar but contain data for different dates (textual data).
Would a vector database using semantic search work? And then passing the result to an LLM? (Llama 2) Or is there a better approach these days? Currently thinking of running milvus as a vector db and connecting that to an LLM via langchain. Any guidance, recommendations, suggestions are highly appreciated!
We do have the resources to host an LLM and a vector db. | 2023-08-11T17:58:26 | https://www.reddit.com/r/LocalLLaMA/comments/15oftk7/llm_semantic_search_and_large_volume_of_documents/ | Mayloudin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oftk7 | false | null | t3_15oftk7 | /r/LocalLLaMA/comments/15oftk7/llm_semantic_search_and_large_volume_of_documents/ | false | false | self | 1 | null |
How to get the answer from local llama2 and send it to my app? | 1 | I'm building a software that needs to use information from Llama2 query.
I'm using oobabooga. Is there a way to do this with it?
Do you need to code your own local API? How do you guys retrieve the information from the chat? | 2023-08-11T17:48:25 | https://www.reddit.com/r/LocalLLaMA/comments/15ofjzj/how_to_get_the_answer_from_local_llama2_and_send/ | ppcfbadsfree | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ofjzj | false | null | t3_15ofjzj | /r/LocalLLaMA/comments/15ofjzj/how_to_get_the_answer_from_local_llama2_and_send/ | false | false | self | 1 | null |
Anyone got TextGen/LlamaCPP working with Metal for new GGML models and Llama2? | 2 | I'm getting constant errors/crashes, even though I updated Torch to nightly and rebuilt the LlamaCPP wheel with Metal. | 2023-08-11T17:16:57 | https://www.reddit.com/r/LocalLLaMA/comments/15oeqaf/anyone_got_textgenllamacpp_working_with_metal_for/ | -becausereasons- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oeqaf | false | null | t3_15oeqaf | /r/LocalLLaMA/comments/15oeqaf/anyone_got_textgenllamacpp_working_with_metal_for/ | false | false | self | 2 | null |
llama | 0 | 2023-08-11T17:16:23 | https://teesdesk-us.shop/limited-edition-273 | AnneCampbell54 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 15oepre | false | null | t3_15oepre | /r/LocalLLaMA/comments/15oepre/llama/ | false | false | 0 | null | ||
Does HF inference endpoint work? | 2 | I’ve been trying to deploy LLMs using HF inference endpoint (e.g. stablecode-instruct-alpha-3b, llama2, etc) but the deployment fails all the time, yet I’ve been charged for an hour while it remains in “installing” state. It never becomes “ready for inference” with the form to query model, I’ve tried them with various CPU/GPU configs.
Is there any rocket science behind the deployment?
And the support via email — it’s so weird. | 2023-08-11T16:42:15 | https://www.reddit.com/r/LocalLLaMA/comments/15odu23/does_hf_inference_endpoint_work/ | Greg_Z_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15odu23 | false | null | t3_15odu23 | /r/LocalLLaMA/comments/15odu23/does_hf_inference_endpoint_work/ | false | false | self | 2 | null |
Can a team of 10-20 people access a Llama 2 model deployed in a local server with medium requirements? | 38 | I'm planning on spending $3-5k on a local server with Llama v2 deployed on it, such as a team of 10-20 people can each access the inference from their own computers whenever they please. Since I'm not really an infra guy, I have questions on how to approach this. I guess that while someone is running a query nobody else can run theirs until the first one is complete, correct? Is there any easy way to run Llama locally in a way where multiple people can access it synchronously? | 2023-08-11T16:41:57 | https://www.reddit.com/r/LocalLLaMA/comments/15odtsn/can_a_team_of_1020_people_access_a_llama_2_model/ | Heco1331 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15odtsn | false | null | t3_15odtsn | /r/LocalLLaMA/comments/15odtsn/can_a_team_of_1020_people_access_a_llama_2_model/ | false | false | self | 38 | null |
PrivateGPT example with Llama 2 Uncensored | 21 | 2023-08-11T16:30:51 | https://github.com/jmorganca/ollama/tree/main/examples/privategpt | helloPenguin006 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15odjmy | false | null | t3_15odjmy | /r/LocalLLaMA/comments/15odjmy/privategpt_example_with_llama_2_uncensored/ | false | false | 21 | {'enabled': False, 'images': [{'id': 'qWYf_hGwsFfjEOzHhraYQjkUJJlsotgW5CofgR3t1f4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=108&crop=smart&auto=webp&s=cc5a7d81b1db7f17d71cab5a1a022c483ba8d216', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=216&crop=smart&auto=webp&s=d28ec690d6c6c33b274bf2dde3c7f27ead4be5bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=320&crop=smart&auto=webp&s=30fa83699356cf1925298ab5259a705b8f24ccee', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=640&crop=smart&auto=webp&s=294fd6b1499ee75bc2808308b8a406a1361b2611', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=960&crop=smart&auto=webp&s=21dae982cf5c0af79b36998e542094273e9f52b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?width=1080&crop=smart&auto=webp&s=89265375793472b434f3b2c71bf5f029ca9c1d5b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/YHtfBPBmwuqL4EW9yt8j6W7UxDd5zR0pFNarkurUzC0.jpg?auto=webp&s=2f0d610fee1af34ec04e0abed720ec9c5180e0c8', 'width': 1200}, 'variants': {}}]} | ||
New model and new app - Layla | 1 | [removed] | 2023-08-11T15:39:15 | https://www.reddit.com/r/LocalLLaMA/comments/15oc8rp/new_model_and_new_app_layla/ | Tasty-Lobster-8915 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oc8rp | false | null | t3_15oc8rp | /r/LocalLLaMA/comments/15oc8rp/new_model_and_new_app_layla/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'WcDPzOnQZ3t8b1fwQPJ1k01l878a2HIs1GCu8CJR5Wc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=108&crop=smart&auto=webp&s=94f499cdd8453f6de73be6128e8745af9395e73a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=216&crop=smart&auto=webp&s=fc6609edde1d19bad64a317c91df2995357948cc', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=320&crop=smart&auto=webp&s=fa369a7fe8db0d33cdeacdf39a268a07a308c1c1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=640&crop=smart&auto=webp&s=b5ecbc1e760ea8a7663379961b869a67fa6d308d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=960&crop=smart&auto=webp&s=2c75459f65f93f055b4ad48195bd3825f3a32c66', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?width=1080&crop=smart&auto=webp&s=8aff96d05385988fabfdbdecfa12e2cac1a3f579', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/okf3dEs9GIyG6sSxiAzvSv19-OLWzycgPVIcM18CaSQ.jpg?auto=webp&s=fbf3159bd48475d763aeb36af21027a0c93108c3', 'width': 1200}, 'variants': {}}]} |
ChatGPT and its Doppelgangers: A Study on the Limits of Model Imitation | 7 | I found an [interesting study](https://arxiv.org/abs/2305.15717) discussing ChatGPT "imitation models" like Alpaca and Vicuna. Here are the bullet points:
* Emerging method involves finetuning weaker language models on outputs from stronger models, like ChatGPT, to imitate their capabilities using open-source models.
* Research involved finetuning various LMs to mimic ChatGPT using different model sizes, data sources, and imitation data amounts.
* Initial findings showed the imitation models were good at following instructions and were rated similarly to ChatGPT by crowd workers.
* Targeted automatic evaluations revealed imitation models failed to bridge the capability gap between the base LM and ChatGPT, especially in tasks not prevalent in imitation data.
* Imitation models effectively mimic ChatGPT's style but fall short in factuality.
* Conclusion: Model imitation is not the best approach due to the capabilities gap. Emphasis should be on improving base LMs instead of trying to imitate proprietary systems.
What are your thoughts on this? Do you agree with their conclusion? | 2023-08-11T15:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/15oc8ji/chatgpt_and_its_doppelgangers_a_study_on_the/ | DecipheringAI | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15oc8ji | false | null | t3_15oc8ji | /r/LocalLLaMA/comments/15oc8ji/chatgpt_and_its_doppelgangers_a_study_on_the/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.