| title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
New to localLLMs looking to be pointed in the right direction. | 3 | Hello everyone,
Been using OpenAI and have played with a few other LLMs, but I'm looking to get something more specific to help with some work I am doing. I analyze and sort data for a business and was hoping I could build an LLM-based tool to help my work go faster.
How good are models now at math? I know there are still some issues with them, especially when it comes to token/context limits, and I'm not sure how this has translated to local models. Basically, I want to upload my sales data (what was sold and how much) and have it sorted and analyzed to help me with future sales and ordering.
What is a good starting point to work on this project?
Thanks for any help! | 2023-09-19T13:41:14 | https://www.reddit.com/r/LocalLLaMA/comments/16mrcrr/new_to_localllms_looking_to_be_pointed_in_the/ | payno_attention | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mrcrr | false | null | t3_16mrcrr | /r/LocalLLaMA/comments/16mrcrr/new_to_localllms_looking_to_be_pointed_in_the/ | false | false | self | 3 | null |
Alternatives to SPLADE | 3 | SPLADE has a CC BY-NC-SA 4.0 license which means it cannot be used commercially (https://github.com/naver/splade/blob/main/LICENSE). I've spent some time trying to find alternative sparse embedding models of the same caliber that are commercially licensed and come up with nothing. Curious if anyone has found one? | 2023-09-19T13:22:30 | https://www.reddit.com/r/LocalLLaMA/comments/16mqwo4/alternatives_to_splade/ | SnappyAlligator | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mqwo4 | false | null | t3_16mqwo4 | /r/LocalLLaMA/comments/16mqwo4/alternatives_to_splade/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'aS1tmOd55peikJdhcQrx7ihJRg9GTE243lnCYJfOgD4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=108&crop=smart&auto=webp&s=9be6e5c57e914af59a4f7c9034395a00b1d5e9fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=216&crop=smart&auto=webp&s=e8d3cf994276e8a1756427df61cce69e6140c94f', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=320&crop=smart&auto=webp&s=e6474947a36fca37ce20ed3ac0c6e298e44e439e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=640&crop=smart&auto=webp&s=fab717ab6bd84084a90d00d195858ab13894ebe7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=960&crop=smart&auto=webp&s=fb34f8cfbb702db3fc3784c846e9f12381cbb1bc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?width=1080&crop=smart&auto=webp&s=b2f268fd2247e3206872a44a4276d09185185a2e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/GHJGephajt_whwbfAKqQS60C6d3gkLe8Wm3COwi8Yq8.jpg?auto=webp&s=bf44e1a9fc81fbb38500a2f6bb1c6ee4b34da047', 'width': 1200}, 'variants': {}}]} |
What's more important to run bigger Llama 2 models: VRAM or GPU? | 1 | [removed] | 2023-09-19T12:58:21 | https://www.reddit.com/r/LocalLLaMA/comments/16mqbor/whats_more_important_to_run_bigger_llama_2_models/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mqbor | false | null | t3_16mqbor | /r/LocalLLaMA/comments/16mqbor/whats_more_important_to_run_bigger_llama_2_models/ | false | false | self | 1 | null |
Need some pointers for RAG, chunks retrieved don't seem very relevant. | 14 | Hi all,
Apologies for another question on RAG; I know there are a few topics on this already, but I'm still feeling a bit lost and need some clarity on where I should focus my attention. Still very new to this, so everything is quite fresh and daunting.
I'm essentially trying to do Q&A over local files such as earnings call transcripts or meeting notes taken in markdown. I've managed to cobble together something that generates an answer, but the hit rate is spotty at best. For example, I can ask it who the CEO of a division is (the person is introduced on page 1) and it will tell me it doesn't know, but then I ask what the capital expenditure was and it gives me the right answer, which leaves me scratching my head.
Some more detail on what I did;
* I use the 13b Vicuna 1.5 loaded into Ooba and exposed through the API
* Using LangChain to load the file and then embedding it using bge_large
* Chunk size is 500 with overlap of 100 and then store it into Chroma
* Use RetrievalQA, chaintype 'stuff', and k:5
* Use the instruction/input/response prompt template with context, plus the customary "don't make up an answer" instruction (a minimal sketch of this setup follows the list).
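For reference, a minimal sketch of roughly that setup (file path, embedding model name, and the ooba endpoint are placeholders; the TextGen wrapper is one way to point LangChain at the ooba API):

```python
# Hedged sketch of the described pipeline: LangChain + bge embeddings + Chroma + RetrievalQA (k=5).
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceBgeEmbeddings
from langchain.vectorstores import Chroma
from langchain.chains import RetrievalQA
from langchain.llms import TextGen

docs = TextLoader("earnings_call.md").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=100).split_documents(docs)

db = Chroma.from_documents(chunks, HuggingFaceBgeEmbeddings(model_name="BAAI/bge-large-en"))
qa = RetrievalQA.from_chain_type(
    llm=TextGen(model_url="http://localhost:5000"),        # ooba API endpoint (placeholder port)
    chain_type="stuff",
    retriever=db.as_retriever(search_kwargs={"k": 5}),
    return_source_documents=True,                          # also return the retrieved chunks
)
result = qa({"query": "Who is the CEO of the division?"})
print(result["result"])
print([d.page_content[:80] for d in result["source_documents"]])
```

Printing the returned source documents makes it easier to tell whether retrieval or generation is the weak link for the "Who is X?" failures.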
Looking at the chunks retrieved it feels like its giving me really random chunks. I saw another person use things such as reranking which looks promising however just given how poorly this is performing on some very simple questions ('Who is X?') and telling me X is not mentioned in the given text despite being introduced in the 1st page - I am wondering if I'm doing something wrong more fundamentally. | 2023-09-19T12:54:47 | https://www.reddit.com/r/LocalLLaMA/comments/16mq8kx/need_some_pointers_for_rag_chunks_retrieved_dont/ | subdivisionbyzero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mq8kx | false | null | t3_16mq8kx | /r/LocalLLaMA/comments/16mq8kx/need_some_pointers_for_rag_chunks_retrieved_dont/ | false | false | self | 14 | null |
Replicating OSS Phi-1: Learnings & New Framework for Data Generation | 48 | Hey All,
I'm attempting to reproduce Phi-1 using only open-source data. This is a crucial task for the open-source community, since the dataset generation is likely the most important part of this new approach, and it is something Microsoft did not open source. I want to share my learnings and a new project that I have started working on as part of this effort.
I started by pre-training a 320M parameter model from scratch with existing data, to the point that it could solve basic programming tasks. Training was made surprisingly easy by fitting nanoGPT to my task-specific needs [see here, if curious](https://github.com/emrgnt-cmplxty/SmolTrainer).
​
[Pre-train loss](https://preview.redd.it/ua2tlwg4k7pb1.jpg?width=2856&format=pjpg&auto=webp&s=65a16f394739e2e9c888860f296e5e9ef9a3e069)
​
[It's alive!](https://preview.redd.it/sp4woh78k7pb1.jpg?width=1424&format=pjpg&auto=webp&s=3a61bb57aa455eb339bd6a8d1bbdc3dc7e9450db)
However, I quickly realized that it's a tall order to really be able to replicate Phi-1.5, rather than Phi-1. Phi-1.5 used 20B tokens of synthetic textbook data. It would cost a decent chunk of $$ and take quite some time with my rate limits to replicate this with OpenAI models. Further, the permissions around the data could cause some thorny problems for OSS. Lastly, managing 20B tokens and keeping high data quality is a non-trivial task.
Thus, I decided to start working on a framework to standardize the generation of synthetic data and to assist with LLM AutoEvaluation (e.g., evaluation by other LLMs).
The framework now supports EvolInstruct and Synthetic Textbooks generation with major LLMs [https://github.com/emrgnt-cmplxty/SciPhi/tree/main]. I am going to continue expanding this into other forms of data generation, like more exotic RAG approaches.
​
[A random example of a random synthetic textbook sample](https://preview.redd.it/hmhthkxxk7pb1.jpg?width=1552&format=pjpg&auto=webp&s=69b2d30a1db5b2a35d3a42980e093bfa1a810198)
I wanted to share because I am looking for feedback, feature requests, and contributors. I think this technology can play a vital role in getting us powerful LLMs that we can run on our local machines, as it gives us a chance at breaking the existing scaling rules.
​
​
​ | 2023-09-19T12:49:20 | https://www.reddit.com/r/LocalLLaMA/comments/16mq446/replicating_oss_phi1_learnings_new_framework_for/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mq446 | false | null | t3_16mq446 | /r/LocalLLaMA/comments/16mq446/replicating_oss_phi1_learnings_new_framework_for/ | false | false | 48 | {'enabled': False, 'images': [{'id': 'H9lYj5yDt3N_MFtOAmfLQAwCWDrVaQXqMie-t9UMZLo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=108&crop=smart&auto=webp&s=bfe5967c92ed8be04217a9dd41f78fab89041d69', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=216&crop=smart&auto=webp&s=199ce61737ccc2882847d0082979df5843b8fc7d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=320&crop=smart&auto=webp&s=4a6e9ac0c7dbefabef567b1426439a2024eda887', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=640&crop=smart&auto=webp&s=4c040869dc1b287c455545ca39b39d217a77453a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=960&crop=smart&auto=webp&s=11103e532c800847c25f85bf99956907ec43c7a9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?width=1080&crop=smart&auto=webp&s=b5c73189578c71cb1bdf9b4d6c1fb64c92e06342', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/d-4JawVpC2zJg-ezx6tv79jl5Wkdp5YNzzUMfFe4elc.jpg?auto=webp&s=81de17d2941eec13fdb89aae3e328f9a05c96786', 'width': 1200}, 'variants': {}}]} | |
What's more important to run bigger Llama 2 models: VRAM or GPU? Macs have good VRAM but poor GPU performance. Dedicated 3090/4090 GPUs do much better, but they don't come with high VRAM like in Macs. | 1 | [removed] | 2023-09-19T12:34:34 | https://www.reddit.com/r/LocalLLaMA/comments/16mps2n/whats_more_important_to_run_bigger_llama_2_models/ | nderstand2grow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mps2n | false | null | t3_16mps2n | /r/LocalLLaMA/comments/16mps2n/whats_more_important_to_run_bigger_llama_2_models/ | false | false | self | 1 | null |
Question about the best suiting local Llama for my usecase | 0 | Hi r/locallama,
I am building an application where you can record voice memos that are then transcribed via the Whisper API. This transcription is then sent to the ChatGPT API with a custom prompt, enabling the user to 'converse' with it. You continue this conversation and it will send the previous messages with it for context. An example prompt would be:
"As a marketing expert, I'll provide advice and strategies based on the marketing challenges you've described in your voice memo. I'll help you optimize your marketing efforts to reach your target audience effectively."
The possibilities are really endless of how you can prompt it.
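For context, a minimal sketch of that flow with the (pre-1.0) openai Python client — the file name, model names, and the system prompt are placeholders, not the app's actual code:

```python
# Hedged sketch: transcribe a voice memo, then chat over it while carrying the message history.
import openai

with open("memo.wav", "rb") as audio:                                  # placeholder file
    text = openai.Audio.transcribe("whisper-1", audio)["text"]         # Whisper API

history = [
    {"role": "system", "content": "You are a marketing expert advising on the challenges in this memo."},
    {"role": "user", "content": text},
]
reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
history.append(reply["choices"][0]["message"])     # previous messages are resent for context
print(history[-1]["content"])
```

Swapping the two API calls for a local Whisper instance and a local model behind an OpenAI-compatible endpoint is essentially what the rest of the post is asking about.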
Now, this is currently a web application, but I am looking into bundling it into an Electron application where I'll include a local instance of Whisper. I am also looking into local Llamas that would be a good fit for this use case. This would make it fully local/offline and would also make it free to use. However, I feel a bit lost among all the different models, so I was wondering if someone has a recommendation for this use case?
Thanks in advance! | 2023-09-19T10:38:37 | https://www.reddit.com/r/LocalLLaMA/comments/16mni47/question_about_the_best_suiting_local_llama_for/ | 346290 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mni47 | false | null | t3_16mni47 | /r/LocalLLaMA/comments/16mni47/question_about_the_best_suiting_local_llama_for/ | false | false | self | 0 | null |
Does LLM size matter? | 1 | [removed] | 2023-09-19T10:05:11 | https://www.reddit.com/r/LocalLLaMA/comments/16mmx86/does_llm_size_matter/ | trulyfurqan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mmx86 | false | null | t3_16mmx86 | /r/LocalLLaMA/comments/16mmx86/does_llm_size_matter/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'sokzktiVzov9MHNH8hiWCL6t8TatRrqQ5T-SdQX_Dkw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?width=108&crop=smart&auto=webp&s=31a9ed8e3bdf6a1d399e2f88443f91a0652267a0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?width=216&crop=smart&auto=webp&s=e3e5d4930442ee92dacae414dff5682eef038fcc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?width=320&crop=smart&auto=webp&s=74fba0af2ed3c0e668c59d4dd5a1db3987014def', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?width=640&crop=smart&auto=webp&s=186a72d305e58814ca6cfa488aca4e98990db4e9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?width=960&crop=smart&auto=webp&s=da058de68adb7f578f374831bdecefedb3bc01e3', 'width': 960}], 'source': {'height': 500, 'url': 'https://external-preview.redd.it/dOFHVSmrljwGbCFyORiQ7-iWMjkS6uZA3kLrr-8wylU.jpg?auto=webp&s=af0aa74d67ea8cc666c1b646ba87dc1b36b2a895', 'width': 1000}, 'variants': {}}]} |
Newbie question about local LLama | 1 | Is it possible to have an instance of Llama with a chat interface, sort of like ChatGPT?
What kind of computer specs are needed for that? I would like to build an agent that runs locally and can answer emails.
I have some knowledge of Python, newbie level, but I can work with RESTful APIs and such.
Any pointers or suggestions would be appreciated. | 2023-09-19T09:46:00 | https://www.reddit.com/r/LocalLLaMA/comments/16mmlnq/newbie_question_about_local_llama/ | slemklumpen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mmlnq | false | null | t3_16mmlnq | /r/LocalLLaMA/comments/16mmlnq/newbie_question_about_local_llama/ | false | false | self | 1 | null |
Large Language Models fine-tuned for literature and arts | 1 | Hi guys,
I was wondering if there are some Large Language Models tuned specifically on literature and arts.
I'd like to create a chatbot that can answer questions about a particular book. It would be great if it works also for paintings or other forms of media, but I understand that it's harder since I would need a multimodal model.
Right now I understand that some models are tailored for summarization and code generation, or they are just general models.
Using Retrieval Augmented Generation is a possibility, but I'd still like to have a strong bias in the original model concerning art.
Do you know of any models or resources with these characteristics? I would greatly appreciate any suggestions.
Thank you!
​ | 2023-09-19T09:19:09 | https://www.reddit.com/r/LocalLLaMA/comments/16mm630/large_language_models_finetuned_for_literature/ | PinballOscuro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mm630 | false | null | t3_16mm630 | /r/LocalLLaMA/comments/16mm630/large_language_models_finetuned_for_literature/ | false | false | self | 1 | null |
What's the best approach for PDF text extraction? | 42 | I've been working on extracting text from textbook PDFs using Python libraries like pdfplumber, PyMuPDF, and pdfminer.six. While they perform well for well-structured PDFs, I'm facing significant challenges when dealing with complex document layouts. Some of the issues I've encountered include out-of-place line breaks, incorrect capitalization, and connected words.
To provide some context, some of the complex layouts I'm dealing with include:
1. **Horizontal to Column Text Transitions**: In some textbooks, the layout changes from horizontal text to column text, which can result in text extraction issues. Has anyone successfully handled this transition and maintained the formatting?
2. **Text on Side Lines**: Certain PDFs have text on side lines to explain highlighted words or concepts. I'm having problems extracting this side-line text and the main body text separately. Usually it combines the two, making the extracted text impossible to read.
I'm wondering if anyone has encountered similar challenges and what approaches you recommend. Should I explore using OCR as an alternative method for handling these complex structures? If so, which OCR tools or libraries have proven effective for maintaining formatting and accuracy in such scenarios?
Additionally, after I've extracted the text, I want to categorize it into main text, headers, footers, figure descriptions, etc.. Originally, I planned to use text characteristics like fonts and font sizes to differentiate all the categories. However, OCR techniques don't usually preserve this type of information. How would you go about doing this? | 2023-09-19T08:02:08 | https://www.reddit.com/r/LocalLLaMA/comments/16mkyyf/whats_the_best_approach_for_pdf_text_extraction/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mkyyf | false | null | t3_16mkyyf | /r/LocalLLaMA/comments/16mkyyf/whats_the_best_approach_for_pdf_text_extraction/ | false | false | self | 42 | null |
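On the font-based categorization point above: a minimal sketch with PyMuPDF, whose dict output keeps per-span font name and size, so this information is available without OCR (the path and the size threshold are placeholders):

```python
# Hedged sketch: extract text spans with font metadata via PyMuPDF, then bucket them heuristically.
import fitz  # PyMuPDF

doc = fitz.open("textbook.pdf")                       # placeholder path
for page in doc:
    for block in page.get_text("dict")["blocks"]:
        if block["type"] != 0:                        # 0 = text block, 1 = image block
            continue
        for line in block["lines"]:
            for span in line["spans"]:
                # each span carries text, font name, size, and a bounding box
                kind = "header" if span["size"] >= 14 else "body"     # placeholder threshold
                print(kind, span["font"], round(span["size"], 1), span["text"])
```

The span bounding boxes are also what you would cluster on to separate side-line notes from the main column.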
Can anyone explain why I can only load models using "Transformers" from the model loader tab? Nothing else (e.g. exllama) works for me. Using Oobabooga. | 3 | Sorry for the noob question, but I don't understand the issue | 2023-09-19T07:04:19 | https://www.reddit.com/r/LocalLLaMA/comments/16mk1vu/can_anyone_explain_why_i_can_only_load_models/ | Slapshotsky | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mk1vu | false | null | t3_16mk1vu | /r/LocalLLaMA/comments/16mk1vu/can_anyone_explain_why_i_can_only_load_models/ | false | false | self | 3 | null |
Loading local html files which contain texts, tables and images on llama index, so that llama 2 answers. | 1 | [removed] | 2023-09-19T06:46:36 | https://www.reddit.com/r/LocalLLaMA/comments/16mjras/loading_local_html_files_which_contain_texts/ | zaid-70 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mjras | false | null | t3_16mjras | /r/LocalLLaMA/comments/16mjras/loading_local_html_files_which_contain_texts/ | false | false | self | 1 | null |
Need help in understanding Multi GPU Machines and model parallelism | 5 | Hi, I have two GPUs, RTX 3080 Ti (12GB) and GTX 1660 super (6GB). Overall I have 18GB of VRam.
Whenever I want to train a model using accelerate of hugging face, it only takes 6GB of my 3080 ti and 6GB of 1660 super and is only doing data parallelism, if the model does not fit in the 6Gb gpu, it throws a Out of memory error.
When I use deepspeed, it shows my world size = 1, and only runs on my 3080 ti.
My questions are:
1. Are multiple GPUs only used for data parallelism?
2. In the case of model parallelism, do I need to enable anything on the DeepSpeed front so they can both be used?
3. Is my 1660 Super acting as a bottleneck for multi-GPU training?
4. For multi-GPU, are 2x 3090 with NVLink the best bet?
5. For multi-GPU, is it expected that both GPUs should be the same, with the same VRAM?
I could really appreciate your help. Thanks in advance for the answer | 2023-09-19T06:12:41 | https://www.reddit.com/r/LocalLLaMA/comments/16mj5ta/need_help_in_understanding_multi_gpu_machines_and/ | Jaswanth04 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mj5ta | false | null | t3_16mj5ta | /r/LocalLLaMA/comments/16mj5ta/need_help_in_understanding_multi_gpu_machines_and/ | false | false | self | 5 | null |
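Regarding the Accelerate/model-parallelism questions above, a minimal sketch of naive (layer-splitting) model parallelism across two unequal GPUs with Hugging Face Transformers/Accelerate — the model name and per-GPU memory caps are placeholders, not a tested recipe:

```python
# Hedged sketch: let Accelerate shard one model across both cards by layer
# ("naive" model parallelism), instead of replicating it for data parallelism.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"          # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                          # Accelerate decides which layers go on which GPU
    max_memory={0: "11GiB", 1: "5GiB"},         # cap usage: GPU 0 = 3080 Ti, GPU 1 = 1660 Super
)
print(model.hf_device_map)                      # shows the layer-to-device placement
```

With this kind of split, the smaller card just holds fewer layers, so it tends to act less as a hard memory limit and more as a slower stage in the pipeline.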
What exactly is a Prompt Engineer? | 0 | Here is a glimpse of what it’s like to be a Prompt Engineer. [https://opendatascience.com/what-its-like-to-be-a-prompt-engineer/](https://opendatascience.com/what-its-like-to-be-a-prompt-engineer/) | 2023-09-19T06:02:40 | https://www.reddit.com/r/LocalLLaMA/comments/16mizi0/what_exactly_is_a_prompt_engineer/ | Data_Nerd1979 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mizi0 | false | null | t3_16mizi0 | /r/LocalLLaMA/comments/16mizi0/what_exactly_is_a_prompt_engineer/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'JwOHxwYhR7Y0lz4ueoOIoR0rhCsm261QPJjq7EKXW6Y', 'resolutions': [{'height': 50, 'url': 'https://external-preview.redd.it/vyBAEXnYvcnmHcKiQFKWegM6EcPgIbCLetkFaVmSeHo.jpg?width=108&crop=smart&auto=webp&s=d691b58b8363ce640b66f43df0dac4a29372f4cb', 'width': 108}, {'height': 101, 'url': 'https://external-preview.redd.it/vyBAEXnYvcnmHcKiQFKWegM6EcPgIbCLetkFaVmSeHo.jpg?width=216&crop=smart&auto=webp&s=4d43a0377053ea7e71b87e153f4968d4b7b0e32c', 'width': 216}, {'height': 150, 'url': 'https://external-preview.redd.it/vyBAEXnYvcnmHcKiQFKWegM6EcPgIbCLetkFaVmSeHo.jpg?width=320&crop=smart&auto=webp&s=64326cf2b6dc3affa830dd354c211f54f4f70ece', 'width': 320}, {'height': 300, 'url': 'https://external-preview.redd.it/vyBAEXnYvcnmHcKiQFKWegM6EcPgIbCLetkFaVmSeHo.jpg?width=640&crop=smart&auto=webp&s=f755c6f5f1297bc262be50d45610b5be01b0d490', 'width': 640}], 'source': {'height': 300, 'url': 'https://external-preview.redd.it/vyBAEXnYvcnmHcKiQFKWegM6EcPgIbCLetkFaVmSeHo.jpg?auto=webp&s=cb9d504e0906e76b79149da0f8f212ce73b810fa', 'width': 640}, 'variants': {}}]} |
Comparison on exllamav2, of bits/bpw: 2.5,4.25,4.5,4.65,4.75, 5, and 4bit-64g (airoboros-l2-70b-gpt4-1.4.1) | 55 | Hi there guys, a bit delayed post since I was doing quants/tests all day of this. Then managed the time to run the built-in benchmark of ooba with wikitext.
I just used some arbitrary sizes to test and see how it fares.
**NOTE:** The exl2 models were converted with the cleaned and reformatted pippa dataset, to improve them for RP, while I think the 4bit-64g one was quantized with the wikitext dataset, so this could affect results on general datasets (which is what I tested here). I will do a follow-up post when I can, with benchmarks based on RP datasets.
-----------------
The quants and tests were made on the great airoboros-l2-70b-gpt4-1.4.1 model.
Those were done on exllamav2 exclusively (including the GPTQ 64g model), and the bpw values and their VRAM requirements are as follows (mostly just to load the model, without taking into account the cache and the context; a rough back-of-the-envelope check follows the list):
* 4bit-64g: ~40GB VRAM
* 2.5 bpw: ~23GB VRAM
* 4.25 bpw: ~39GB VRAM
* 4.5 bpw: ~41GB VRAM
* 4.65 bpw: ~42GB VRAM
* 4.75 bpw: ~42.5GB VRAM
* 5.0 bpw: ~45GB VRAM
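As a rough back-of-the-envelope check of those numbers (weights only, ignoring embeddings, KV cache and context, and assuming roughly 70B parameters), the estimates land in the same ballpark:

```python
# Rough weights-only estimate: parameters * bits-per-weight / 8 bytes; real usage is higher.
params = 70e9
for bpw in (2.5, 4.25, 4.5, 4.65, 4.75, 5.0):
    gib = params * bpw / 8 / 1024**3
    print(f"{bpw} bpw -> ~{gib:.1f} GiB")
```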
Tests were made with 2x4090, and a single RTX A6000 Ada (runpod).
I managed to run 4K context on all models, except for 2.5 bpw when using a single 4090, and 5 bpw on 2x4090 (only 3072 ctx). To test those two cases under the same 4096-context conditions, I used 2x4090 for the former and rented the A6000 Ada for the latter.
-----------------
And then, running the built-in benchmark of the ooba textgen-webui, I got these results (ordered by better ppl to worse):
Model | LoRAs | Dataset | Perplexity | stride | max_length | Date | Comment
:--|:--|:--|:--|:--|:--|:--|:--
airoboros-l2-70b-gpt4-1.4.1-5.0bpw | - | wikitext | 3.0390625 | 512 | 4096 | 2023-09-19 02:21:57 |
airoboros-l2-70b-gpt4-1.4.1-4.75bpw | - | wikitext | 3.05859375 | 512 | 4096 | 2023-09-18 23:02:07 |
airoboros-l2-70b-gpt4-1.4.1-4.65bpw | - | wikitext | 3.05859375 | 512 | 4096 | 2023-09-19 04:33:22 |
airoboros-l2-70B-gpt4-1.4.1-GPTQ (4bit-64g) | - | wikitext | 3.05859375 | 512 | 4096 | 2023-09-19 00:33:21 |
airoboros-l2-70b-gpt4-1.4.1-4.5bpw | - | wikitext | 3.0859375 | 512 | 4096 | 2023-09-18 23:45:53 |
airoboros-l2-70b-gpt4-1.4.1-4.25bpw | - | wikitext | 3.107421875 | 512 | 4096 | 2023-09-19 01:24:17 |
Uni-TianYan-70B-4.65bpw | - | wikitext | 3.38671875 | 512 | 0 | 2023-09-16 17:50:44 | Ignore this one for now, since the bench I think was made with 2048ctx.
airoboros-l2-70b-gpt4-1.4.1-2.5bpw | - | wikitext | 15.5234375 | 512 | 4096 | 2023-09-18 21:16:06 |
Notice there are 3 identical values (4bit-64g, 4.65 and 4.75 bpw), so there may be an issue I have to investigate.
-----------------
Then I ran the tests with the test_inference.py benchmark from exllamav2, using "py test_inference.py -m model -ed 0000.parquet -el 4096 -l 4096 -gs 20,21". The dataset is this one: https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated/blob/refs%2Fconvert%2Fparquet/default/train/0000.parquet.
Results are pretty interesting. They are also ordered from best to worst perplexity.
Model | Perplexity
:--|:--|
airoboros-l2-70b-gpt4-4bit-64g | 3.7605 |
airoboros-l2-70b-gpt4-1.4.1-4.65bpw | 3.7636|
airoboros-l2-70b-gpt4-1.4.1-4.75bpw | 3.7654 |
airoboros-l2-70b-gpt4-1.4.1-4.5bpw | 3.7707 |
airoboros-l2-70b-gpt4-1.4.1-4.25bpw | 3.7837 |
airoboros-l2-70b-gpt4-1.4.1-2.5bpw | 17.7124 |
airoboros-l2-70b-gpt4-1.4.1-5.0bpw | Pending*|
*I accidentally deleted the 5bpw model on runpod, so I will have to re-download it and do the test. But will have to add some credits etc etc and it's near 3AM, so it's on pending for now.
-----------------
I have copy-pasted the .csv from the ooba benchmark here: https://pastebin.com/5cTuW8w8
For the exllamav2 test_inference.py benchmarks, I mostly have the pictures, which you can check here: https://imgur.com/a/mzTr2hh
And the models are on my profile if you want to check them, and also I've uploaded the .safetensors version of this same model among others (Synthia, Marcoroni, duck-llama) if you want to try do some quants by yourself https://huggingface.co/Panchovix. | 2023-09-19T05:30:05 | https://www.reddit.com/r/LocalLLaMA/comments/16mif47/comparison_on_exllamav2_of_bitsbpw_2542545465475/ | panchovix | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mif47 | false | null | t3_16mif47 | /r/LocalLLaMA/comments/16mif47/comparison_on_exllamav2_of_bitsbpw_2542545465475/ | false | false | self | 55 | {'enabled': False, 'images': [{'id': 'VLmQiqoe5esGEkStyu7unPdof-1qgG5XA1mtFJzhb8A', 'resolutions': [{'height': 29, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=108&crop=smart&auto=webp&s=95de144854fecf3a014255bdfb6f98b0bd3b0136', 'width': 108}, {'height': 59, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=216&crop=smart&auto=webp&s=1642fa859d1a57a989cebaa7f3b9aa37ceaa9aaf', 'width': 216}, {'height': 88, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=320&crop=smart&auto=webp&s=f27ec016a263011a696623f76a34177192213447', 'width': 320}, {'height': 177, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=640&crop=smart&auto=webp&s=63091fb13cdb088d2c401355d4e700ad0c445ab2', 'width': 640}, {'height': 266, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=960&crop=smart&auto=webp&s=f2f81b3bfbb18c98a5467a8503dae1dca9ac83a6', 'width': 960}, {'height': 299, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?width=1080&crop=smart&auto=webp&s=3fb7f4bf545228cd4ee9ff7eb0bb7bb6b6d33ee7', 'width': 1080}], 'source': {'height': 381, 'url': 'https://external-preview.redd.it/rKlw3EmNFigTjgXgUkpdIz6AdpfCxFzu_gFcbhQokMw.jpg?auto=webp&s=800aeb364491b554c15e36eb5f5b605f5b687ecc', 'width': 1375}, 'variants': {}}]} |
Fine-tune LLama for... translation | 4 | I am a researcher (for context). I have a dataset that maps English to a pretty rare language in North Africa. I want to fine-tune LLama to take the the North African language and generate English (as opposed to the traditional task of an LLM which is the other way around). Native -> English is what I want. LLMs excel at translation, but since LLama 2 was mostly trained on English text (and not the Berber language), I am assuming fine-tuning it won't make it take Berber as input to generate English. Thoughts? | 2023-09-19T04:33:50 | https://www.reddit.com/r/LocalLLaMA/comments/16mhfcu/finetune_llama_for_translation/ | rasputin23YD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mhfcu | false | null | t3_16mhfcu | /r/LocalLLaMA/comments/16mhfcu/finetune_llama_for_translation/ | false | false | self | 4 | null |
Probably a long shot, but any solutions to get past this endless loading to enable billing on a new Google Cloud project? | 1 | 2023-09-19T04:22:25 | sardoa11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16mh7lh | false | null | t3_16mh7lh | /r/LocalLLaMA/comments/16mh7lh/probably_a_long_shot_but_any_solutions_to_get/ | false | false | 1 | {'enabled': True, 'images': [{'id': '2q2YNn8RLOq4X5-7IbXp0MVMUeZjISWbCxt0N4MHyvk', 'resolutions': [{'height': 69, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=108&crop=smart&auto=webp&s=b150b38895aa4c05502d0bde6dd9d6233df17e43', 'width': 108}, {'height': 138, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=216&crop=smart&auto=webp&s=b2fdffe4d4470f6df3b70501fa6970f8ad4e7f2e', 'width': 216}, {'height': 204, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=320&crop=smart&auto=webp&s=213a512bae5d1c043be9f67a8f2e5750a138257c', 'width': 320}, {'height': 409, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=640&crop=smart&auto=webp&s=8930bc290a2ff8fa601e195e2afc942391d2475a', 'width': 640}, {'height': 613, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=960&crop=smart&auto=webp&s=1345e1e66c18fb2753338e292de3f5c3b7afaaa6', 'width': 960}, {'height': 690, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?width=1080&crop=smart&auto=webp&s=125e3ab572d0434f29c640d9f97eafeb0e3537fb', 'width': 1080}], 'source': {'height': 716, 'url': 'https://preview.redd.it/hgr52b2n25pb1.png?auto=webp&s=2ee640fa754aa868105731e620dee4bd55616cd6', 'width': 1120}, 'variants': {}}]} | |||
Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading | 3 | https://github.com/bigscience-workshop/petals | 2023-09-19T04:20:25 | https://www.reddit.com/r/LocalLLaMA/comments/16mh67n/run_llms_at_home_bittorrentstyle_finetuning_and/ | Big_Specific9749 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mh67n | false | null | t3_16mh67n | /r/LocalLLaMA/comments/16mh67n/run_llms_at_home_bittorrentstyle_finetuning_and/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'M4AvNiPnf5bzKrr_hkvXXR2FIucEcEeYs7k6K5OFEKo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=108&crop=smart&auto=webp&s=bea1927938119de7b847dfaee94f18cff06a845c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=216&crop=smart&auto=webp&s=cc256d0bcb761c29cccd29e907ca3ab6f8daacf8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=320&crop=smart&auto=webp&s=55088def515ddf618f49ca3ed6b4400549a28517', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=640&crop=smart&auto=webp&s=9263b6541438a88484abba84dcdc185b1e3c052b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=960&crop=smart&auto=webp&s=0c9bd8345dc00a9006cbee2112fc82e7c7ead478', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?width=1080&crop=smart&auto=webp&s=7654eb47eb639b48c229845e6b53c3cb25fddde2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hPm42Rdbnazz1oERoZJGNwM019m6uBaDg9al84Ml2aw.jpg?auto=webp&s=a4030a83d3dfc7b788e2b3e6cdfdbedbf63d5a01', 'width': 1200}, 'variants': {}}]} |
6.3 Trillion multilingual dataset released. | 179 | 2023-09-19T03:07:57 | https://twitter.com/_akhaliq/status/1703968066835476782?t=nSYJJ18hq_2lU9NYsy8pxg&s=19 | ninjasaid13 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 16mfq3l | false | {'oembed': {'author_name': 'AK', 'author_url': 'https://twitter.com/_akhaliq', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages<br><br>paper page: <a href="https://t.co/kX4Fv6OYIi">https://t.co/kX4Fv6OYIi</a><br><br>present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for LLM development <a href="https://t.co/ntOtaDwF9s">pic.twitter.com/ntOtaDwF9s</a></p>— AK (@_akhaliq) <a href="https://twitter.com/_akhaliq/status/1703968066835476782?ref_src=twsrc%5Etfw">September 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/_akhaliq/status/1703968066835476782', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_16mfq3l | /r/LocalLLaMA/comments/16mfq3l/63_trillion_multilingual_dataset_released/ | false | false | 179 | {'enabled': False, 'images': [{'id': 'YwDN907RUzY3l-pXA01vkqHulOQrd_Ts-B8EAZE67Fs', 'resolutions': [{'height': 150, 'url': 'https://external-preview.redd.it/CPAxEmoTgOouPSwDAbkJ-GyAd44IHe9_BFgw3DPt8Rw.jpg?width=108&crop=smart&auto=webp&s=9d920ef561a31178d87a4855068c086d893d1f5e', 'width': 108}, {'height': 300, 'url': 'https://external-preview.redd.it/CPAxEmoTgOouPSwDAbkJ-GyAd44IHe9_BFgw3DPt8Rw.jpg?width=216&crop=smart&auto=webp&s=af927ddf678452bd16b01ffad1ca1c083c29df1e', 'width': 216}, {'height': 445, 'url': 'https://external-preview.redd.it/CPAxEmoTgOouPSwDAbkJ-GyAd44IHe9_BFgw3DPt8Rw.jpg?width=320&crop=smart&auto=webp&s=1762157ae7923d982a58e6020698842da5d97682', 'width': 320}, {'height': 890, 'url': 'https://external-preview.redd.it/CPAxEmoTgOouPSwDAbkJ-GyAd44IHe9_BFgw3DPt8Rw.jpg?width=640&crop=smart&auto=webp&s=48d4656d0a147e019052fbcb7b5bbf91ddc00412', 'width': 640}], 'source': {'height': 982, 'url': 'https://external-preview.redd.it/CPAxEmoTgOouPSwDAbkJ-GyAd44IHe9_BFgw3DPt8Rw.jpg?auto=webp&s=3d7d6cf8f23df7990490ca7ab0e8b502d54a14f5', 'width': 706}, 'variants': {}}]} | ||
Best Llama-2 model-size/variant/configuration for inference speed | 6 | I have a machine with a single 3090 (24GB) and an 8-core intel CPU with 64GB RAM.
If inference speed and quality are my priority, what is the best Llama-2 model to run?
7B vs 13B
4bit vs 8bit vs 16bit
GPTQ vs GGUF vs bitsandbytes | 2023-09-19T02:50:49 | https://www.reddit.com/r/LocalLLaMA/comments/16mfd8w/best_llama2_modelsizevariantconfiguration_for/ | r00kee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mfd8w | false | null | t3_16mfd8w | /r/LocalLLaMA/comments/16mfd8w/best_llama2_modelsizevariantconfiguration_for/ | false | false | self | 6 | null |
Possible to do anything with 8GB of VRAM? | 32 | I've been really interested in fine tuning a language model, but I have a 3060Ti. Spent many hours trying to get Nous Hermes 13B to run well but it's still painfully slow and runs out of memory.
Would it even be possible to train a 7B model, at the very least? I have a Ryzen 3600 but I'm planning on trying more of llama.cpp to see if it makes a difference offloading a bit of work to the CPU.
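One commonly cited route for fine-tuning a 7B model in roughly that amount of VRAM is 4-bit QLoRA via transformers + peft + bitsandbytes — a hedged sketch, where the model name, rank, and target modules are placeholders rather than a tested recipe:

```python
# Hedged sketch: load the base model in 4-bit and train small LoRA adapters on top (QLoRA).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",                 # placeholder model
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()              # only the adapter weights get gradients
```

With the base weights frozen in 4-bit, a large message dataset becomes mostly a training-time question rather than a VRAM one.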
Also, roughly how much data would I be able to feed it? This sounds ridiculous but I have up to 500k messages of data I'd like to feed it, but as I'm just getting into LLM and don't have hands-on experience yet, I'm not sure if this is feasible or if I'm just wasting my time.
Thanks (: | 2023-09-19T01:27:41 | https://www.reddit.com/r/LocalLLaMA/comments/16mdkz6/possible_to_do_anything_with_8gb_of_vram/ | taryus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mdkz6 | false | null | t3_16mdkz6 | /r/LocalLLaMA/comments/16mdkz6/possible_to_do_anything_with_8gb_of_vram/ | false | false | self | 32 | null |
Approximately how many tokens for full-tuning for knowledge gain? | 9 | Hey all, I'm looking to full-tune on some Skyrim lore for knowledge acquisition with Llama-2 7b (I tried training LoRAs a good few times, but didn't have any luck). I'm working with an unstructured dataset of around 160K tokens of scraped lore from the wiki; does this sound like enough? The goal is that the model will be able to recall details about locations, people, etc. present in the lore.
Any pointers would be great! | 2023-09-19T00:53:48 | https://www.reddit.com/r/LocalLLaMA/comments/16mcutb/approximately_how_many_tokens_for_fulltuning_for/ | Goatman117 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mcutb | false | null | t3_16mcutb | /r/LocalLLaMA/comments/16mcutb/approximately_how_many_tokens_for_fulltuning_for/ | false | false | self | 9 | null |
New to LLMs and I have a project where I need to evaluate Llama and Falcon on datasets | 1 | Any helpful code would be appreciated.
thanks! | 2023-09-18T23:37:43 | https://www.reddit.com/r/LocalLLaMA/comments/16mb5b4/new_to_llms_and_i_have_project_where_i_need_to/ | AcceptableBat8912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mb5b4 | false | null | t3_16mb5b4 | /r/LocalLLaMA/comments/16mb5b4/new_to_llms_and_i_have_project_where_i_need_to/ | false | false | self | 1 | null |
Half Precision (e.g. no quantization) Phind V2 CodeLlama 34B running on Mac M1 at 8.6 tokens per second | 7 | Title basically says it all. The M1 Ultra 128 GB has a [recommendedMaxWorkingSetSize](https://developer.apple.com/documentation/metal/mtldevice/2369280-recommendedmaxworkingsetsize) of ~98 GB; this corresponds roughly to how big a GGUF file can be used by Metal shaders on the GPU. The half-precision (e.g. no quantization, floating point 16) [model](https://huggingface.co/Phind/Phind-CodeLlama-34B-v2) is ~67.5 GB, so it fits comfortably within the limits. If you have a Mac M1, the upper limit of models you'd want to run will be 75% of your system's RAM. This is why the M2 Ultra is so impressive - it has the equivalent of 145 GB of VRAM stapled onto a 4080 ([head to head here](https://www.gizchina.com/2023/06/10/apple-m2-ultra-is-just-10-slower-than-the-nvidia-rtx-4080/)). Anyway, the output of my run is as follows:
​
> llama_print_timings: load time = 31641.51 ms
> llama_print_timings: sample time = 629.84 ms / 901 runs ( 0.70 ms per token, 1430.52 tokens per second)
> llama_print_timings: prompt eval time = 3815.41 ms / 16 tokens ( 238.46 ms per token, 4.19 tokens per second)
> llama_print_timings: eval time = 98484.30 ms / 900 runs ( 109.43 ms per token, 9.14 tokens per second)
> llama_print_timings: total time = 104485.77 ms
> Output generated in 104.68 seconds (8.60 tokens/s, 900 tokens, context 58, seed 469210828)
​
And prompt/response was as follows:
> A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
>
> ### Human: How do I create an LLM model in python using PyTorch?
>
> ### Assistant: Creating a Language Model (LLM) in Python using PyTorch involves several steps. Here is a high-level overview of the process:
>
> 1. **Install required libraries**: First, you need to install PyTorch and other necessary libraries like NumPy and Transformers. You can do this by running `pip install torch numpy transformers`.
>
> 2. **Import required modules**: Import the necessary modules in your Python script. For example:
>
> ```python
> import torch
> from torch import nn, optim
> from transformers import GPT2Tokenizer, GPT2LMHeadModel
> ```
>
> 3. **Load a pre-trained model**: You can use a pre-trained language model like GPT-2 as the starting point for your LLM. Load it using the following code:
>
> ```python
> tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
> model = GPT2LMHeadModel.from_pretrained("gpt2")
> ```
>
> 4. **Prepare your dataset**: Tokenize and process the text data you want to use for training your LLM. For example, if you have a list of sentences called `sentences`, you can tokenize them as follows:
>
> ```python
> inputs = [tokenizer(sentence)["input_ids"] for sentence in sentences]
> ```
>
> 5. **Create DataLoader**: Create a PyTorch DataLoader to handle batching and shuffling of your dataset during training. For example, if `inputs` is the list of tokenized sentences:
>
> ```python
> from torch.utils.data import Dataset, DataLoader
>
> class SentenceDataset(Dataset):
>     def __init__(self, inputs):
>         self.inputs = inputs
>
>     def __len__(self):
>         return len(self.inputs)
>
>     def __getitem__(self, idx):
>         return torch.tensor(self.inputs[idx])
>
> dataset = SentenceDataset(inputs)
> dataloader = DataLoader(dataset, batch_size=8, shuffle=True)
> ```
>
> 6. **Set up the model for training**: Define a loss function and an optimizer to update your LLM's weights during training. For example:
>
> ```python
> device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
>
> model.to(device)
> model.train()
>
> loss_function = nn.CrossEntropyLoss()
> optimizer = optim.AdamW(model.parameters(), lr=1e-5)
> ```
>
> 7. **Train the LLM**: Iterate over your dataset, feeding batches of input to the model and updating its weights based on the loss. For example:
>
> ```python
> num_epochs = 3
>
> for epoch in range(num_epochs):
>     for batch in dataloader:
>         optimizer.zero_grad()
>
>         inputs, targets = batch[:-1], batch[1:]
>         inputs, targets = inputs.to(device), targets.to(device)
>
>         outputs = model(inputs).logits
>         loss = loss_function(outputs.view(-1, outputs.size(-1)), targets.view(-1))
>
>         loss.backward()
>         optimizer.step()
> ```
>
> 8. **Save the trained LLM**: After training your model, save it for future use. For example:
>
> ```python
> torch.save(model.state_dict(), "llm.pth")
> ```
>
> This is a basic outline of how to create an LLM in Python using PyTorch. You may need to customize this process depending on your specific use case and dataset.
​ | 2023-09-18T23:34:12 | https://www.reddit.com/r/LocalLLaMA/comments/16mb28c/half_precision_eg_no_quantization_phind_v2/ | Thalesian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mb28c | false | null | t3_16mb28c | /r/LocalLLaMA/comments/16mb28c/half_precision_eg_no_quantization_phind_v2/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '2uG3mp1grku1teaynrks5JCfpigHSeN1SIYEwOaBaf4', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=108&crop=smart&auto=webp&s=731cfb22902a6d6ddb5624ec931d1d788c45607c', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=216&crop=smart&auto=webp&s=346d8b72e774001af0a2cc30238660e31db0e946', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=320&crop=smart&auto=webp&s=752b0f18b066b79c0b3521d1c4b74569a81bdde9', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=640&crop=smart&auto=webp&s=a72e7c6aba16e3abf22eab0aae977646f4146812', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=960&crop=smart&auto=webp&s=09171aa6a628c050289509ba4aa7e6677bb6bd6d', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?width=1080&crop=smart&auto=webp&s=04937d2eac973a083ce5de9e35d515c3b156e7a8', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/goShCRneeIcIG-J_zuxNa5J3ykZ-QvMFsuvdYE9KZ38.jpg?auto=webp&s=2abd8e39989c4d25bab3880b18f571f62c32449a', 'width': 1200}, 'variants': {}}]} |
Good course on generative AI with PyTorch? | 11 | I learnt PyTorch and tried coding a few basic LLM-type projects, and I was able to come up with a pretty bad word generator, but I am curious whether there are any good generative AI courses out there that can help me. I would, obviously, prefer a free course unless the paid option is that much better. | 2023-09-18T23:03:49 | https://www.reddit.com/r/LocalLLaMA/comments/16mac8i/good_course_on_generative_ai_with_pytorch/ | SimRacer101 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16mac8i | false | null | t3_16mac8i | /r/LocalLLaMA/comments/16mac8i/good_course_on_generative_ai_with_pytorch/ | false | false | self | 11 | null |
Intro to LLM | 1 | [removed] | 2023-09-18T22:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/16m9p2m/intro_to_llm/ | Namhto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m9p2m | false | null | t3_16m9p2m | /r/LocalLLaMA/comments/16m9p2m/intro_to_llm/ | false | false | self | 1 | null |
Anyone have any experience with AMD MI25? Share your secrets. | 5 | So I've seen some posts here discussing these GPUs, but I'm struggling to get this card to run anything; both Stable Diffusion and Oobabooga fail to run.
Does anyone have any experience with these cards? Any guides you guys can link me?
Any help appreciated! | 2023-09-18T22:31:31 | https://www.reddit.com/r/LocalLLaMA/comments/16m9jsf/anyone_have_any_experience_with_amd_mi25_share/ | Stabby_Jack | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m9jsf | false | null | t3_16m9jsf | /r/LocalLLaMA/comments/16m9jsf/anyone_have_any_experience_with_amd_mi25_share/ | false | false | self | 5 | null |
HuggingChat's settings | 7 | So I tried out HuggingChat today, and was absolutely blown away by the answer that it gave to a generic question I use to test out models. I immediately downloaded that model and tried to coax the same response out of it... and nothing. Both the Llama2 70b and CodeLlama-34b models feel 10x smarter for HuggingChat than they do for me.
Now, first off- I'm running a heavily quantized version so that could definitely factor into it. For the 70b I have to run a 3_K_M at best, and for the 34b I'm doing a 4_K_M or q4 GPTQ. With that said, having seen other comparisons I would have expected to be a little closer in the ballpark than what I was. HuggingChat dropped basically a novel in my lap of amazing information, while mine gave me a 1 paragraph response that was mostly correct but felt... rushed.
Does anyone have any idea what settings/preset HuggingChat is using, or have a guess on it? Because I'm absolutely floored by the quality of their answers and would love to have a shot at replicating that myself. | 2023-09-18T22:23:00 | https://www.reddit.com/r/LocalLLaMA/comments/16m9c3s/huggingchats_settings/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m9c3s | false | null | t3_16m9c3s | /r/LocalLLaMA/comments/16m9c3s/huggingchats_settings/ | false | false | self | 7 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | [removed] | 2023-09-18T20:27:00 | https://www.reddit.com/r/LocalLLaMA/comments/16m6buk/llamator_a_new_initiative_for_bittorrentbased_ai/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m6buk | false | null | t3_16m6buk | /r/LocalLLaMA/comments/16m6buk/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
Memory Usage for llama2 after every question | 3 | Hello guys. I am running Llama 2 on Colab using a T4 to do some document QA. I noticed that when I ask a question, a part of the memory is used, which is logical, but after that cell runs, the memory that was allocated for that question stays allocated and doesn't reset to the amount used before the question.
Is there a fix where after every query I can reset the amount used?
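One common pattern (not from the post) is to drop the Python references to that query's tensors and then clear the CUDA cache; note that in notebooks, cell outputs can silently keep references alive:

```python
# Hedged sketch: after each query, delete references to its outputs, then release cached blocks.
import gc
import torch

# del outputs  # drop whatever variables still point at this query's tensors (placeholder name)
gc.collect()                       # let Python actually free them
torch.cuda.empty_cache()           # hand cached, unused blocks back to the CUDA driver
print(f"{torch.cuda.memory_allocated() / 1e9:.2f} GB of tensors still allocated")
```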
Thank You | 2023-09-18T20:15:10 | https://www.reddit.com/r/LocalLLaMA/comments/16m60ua/memory_usage_for_llama2_after_every_question/ | charbeld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m60ua | false | null | t3_16m60ua | /r/LocalLLaMA/comments/16m60ua/memory_usage_for_llama2_after_every_question/ | false | false | self | 3 | null |
Idea to improve models. | 0 | I'm knee deep into llama models.
Looking at the tokens go by, I had a (possible) idea.
An LLM is a neural network.
Here's a list of tokens llama2 generated:
> '.':29889, **' He':940,** ' does':947, ' claim':5995, **' he':540,** ' did':1258, ' not':451, ' sign':1804, ' anything':3099, ' on':373, ' website':4700, ',':29892, ' but':541, ' maybe':5505, **' he':540,** ' won':2113, ''':29915, 't':29873, ' admit':20000, ',':29892, ' or':470, ' doesn':1838, ''':29915, 't':29873,
That's two different tokens for a single word:
* He 940
* he 540
I'm pretty sure there are more alternates too.
Now, if we did the following:
1. Store all (most, there are exceptions) tokens as their lowercase version only, resulting in fewer tokens to select from/work with.
2. Have a "uppercase first letter" flag that's part of the neural net's output layer.
That would (I imagine) still allow training and the kinds of results we have now, **but** it would (I think) make training simpler/more efficient (twice as much?), and allow for smaller models with the same performance (maybe?).
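A toy illustration of the idea, reusing the token ids from the sample above (a hypothetical three-word vocab, not llama2's real tokenizer):

```python
# Toy sketch: one lowercase id per word, plus a separate "capitalize first letter" bit.
vocab = {"he": 540, "does": 947, "claim": 5995}     # made-up subset of a lowercase-only vocab

def encode_word(word):
    token_id = vocab[word.lower()]
    cap_flag = 1 if word[:1].isupper() else 0       # would become an extra output-layer signal
    return token_id, cap_flag

print(encode_word("He"))   # (540, 1) -> same embedding as "he", plus the case bit
print(encode_word("he"))   # (540, 0)
```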
Anyone here with knowledge of this stuff have a notion whether this is a valid idea or not?
If this is in fact a valid idea, there are other ways this could be used to compact models with no negative impact, like having arrays of synonyms and an output for "simplicity", strength for adjectives, things like that.
Any idea of an Open-Source LLM project I could propose this to?
Thanks! | 2023-09-18T19:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/16m5hwr/idea_to_improve_models/ | arthurwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m5hwr | false | null | t3_16m5hwr | /r/LocalLLaMA/comments/16m5hwr/idea_to_improve_models/ | false | false | self | 0 | null |
How to use, and reliable is, the GGML to GGUF conversion tool? | 2 | I had mentioned on here previously that I had a lot of GGMLs that I liked and couldn't find a GGUF for, and someone recommended using the GGML to GGUF conversion tool that came with llama.cpp. I've tried googling around but I can't find a lot of info, so I wanted to ask about it.
I've been going down huggingface's leaderboard grabbing some of the top ones just to try, and some of them don't have GGUFs. An example is 30B-Lazarus; all I can find are GPTQ and GGML, but I can no longer run GGML in oobabooga. I was wondering if there was any quality loss using the GGML to GGUF tool to swap that over, and if not then how does one actually go about using it? | 2023-09-18T19:49:21 | https://www.reddit.com/r/LocalLLaMA/comments/16m5ciz/how_to_use_and_reliable_is_the_ggml_to_gguf/ | LearningSomeCode | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m5ciz | false | null | t3_16m5ciz | /r/LocalLLaMA/comments/16m5ciz/how_to_use_and_reliable_is_the_ggml_to_gguf/ | false | false | self | 2 | null |
Mac Studio M2 Ultra 192GB seems pretty ideal for running big model inference- am I missing anything? | 74 | Hey folks,
I [posted](https://www.reddit.com/r/LocalLLaMA/comments/15r4ra0/lets_say_i_have_60008000_for_a_new_computer_to/) a while back about buying a PC to run local LLMs. I thought I could use our institutional compute cluster, but it turns out, after consulting with legal, some of the documents I am working with cannot leave the computer they were accessed from via email. Sigh. So I'm back to getting a computer capable of running a chonky model.
This will not be a heavily used workstation. I will probably run 20 queries per day on average, but it's only worth doing if model quality is excellent. 5 t/sec is fine; I can set up runs and do other tasks while they complete. I will also likely have it processing stacks of documents, so some days it'll run through hundreds of automated prompts. I don't really want beefy graphics cards sitting idle drawing power when, most of the time, they're not being used. The Mac Studio is amazingly energy efficient (10 watts when idle, ~[300](https://support.apple.com/en-us/HT213100) W at peak load! insane).
A few other caveats: I should probably spend at least 5k on the computer, as anything under 5k costs an extra 60% (overhead, as it is considered supplies not equipment) from my employer. A 4k computer would literally cost me $6,400. But a 6k computer costs 6k. For the same reason, the computer must be bought as a single piece / order, not sourced as parts. I don't love this rule, but it's how things are.
Finally, I want something that can run huge models as they are developed (like Falcon 180). I am not planning on doing training, just inference. I also don't want to build a computer- not my jam, not something I'm interested enough to learn about.
Given these constraints, is a Mac Studio M2 Ultra with 192GB shared ram the best computer for me? The 7k pricetag is totally fine with me, I am not looking for "bang for buck", I am looking for functionality. GG has some intriguing twitter posts showing it crushing Falcon 180 4bit at about 4 tokens per sec. It looks from others like 70B models are coming in about 7 tokens/sec. Plenty fast for me, as speed is not the critical factor.
I don't love Macs or Apple, I find the closed ecosystem model and high pricetag pretty despicable to be honest. I have talked shit about them for about 30 years. But... this seems like the best computer for my needs.
I'm not in a huge hurry, I could wait 6-12 months if we thought newer, better hardware was coming out that would be much better for this use case. I've scoured this forum and others for information about the M2, and this seems like my best bet, but I'm worried I'm missing out on something. Many thanks for your feedback.
| 2023-09-18T19:01:30 | https://www.reddit.com/r/LocalLLaMA/comments/16m432j/mac_studio_m2_ultra_192gb_seems_pretty_ideal_for/ | Berberis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m432j | false | null | t3_16m432j | /r/LocalLLaMA/comments/16m432j/mac_studio_m2_ultra_192gb_seems_pretty_ideal_for/ | false | false | self | 74 | null |
LLama2 only accepts correction from ChatGPT | 7 | ​
https://preview.redd.it/pfxoc57772pb1.png?width=624&format=png&auto=webp&s=c6ffa38e7fc600717562a78cc4bd3730ced0aa9e | 2023-09-18T18:43:35 | https://www.reddit.com/r/LocalLLaMA/comments/16m3lt5/llama2_only_accepts_correction_from_chatgpt/ | TheGreatEtAl | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m3lt5 | false | null | t3_16m3lt5 | /r/LocalLLaMA/comments/16m3lt5/llama2_only_accepts_correction_from_chatgpt/ | false | false | 7 | null | |
Built a PC with 2 x NVIDIA M40 (2 x 24 GB) -- can I now run LLMs that need 48 GB? | 13 | I have bought two used NVIDIA M40s with 24 GB for $100 each. Both are recognized by nvidia-smi. I tried them on an older mainboard first, but on that board I could not get them working.
I do not have a good cooling fan yet, so I did not actually run anything right now. I expect it to run any LLM that requires 24 GB (although much slower than a 3090). Since I have two of them, are there LLMs that would use both of them so that I can run even larger models? | 2023-09-18T18:01:17 | https://www.reddit.com/r/LocalLLaMA/comments/16m2i18/built_a_pc_with_2_x_nvidia_m40_2_x_24_gb_can_i/ | nn4l | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m2i18 | false | null | t3_16m2i18 | /r/LocalLLaMA/comments/16m2i18/built_a_pc_with_2_x_nvidia_m40_2_x_24_gb_can_i/ | false | false | self | 13 | null |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 1 | [removed] | 2023-09-18T17:48:24 | https://www.reddit.com/r/LocalLLaMA/comments/16m268q/llamator_a_new_initiative_for_bittorrentbased_ai/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m268q | false | null | t3_16m268q | /r/LocalLLaMA/comments/16m268q/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
Is it possible to get 100% deterministic results across different hardware? | 13 | I know about temperature 0 and all that; that's not the issue. This is about how wonky floats are, and how certain optimizations are only possible if you allow Intel/AMD/etc. chips to produce slightly different results. Can determinism somehow be achieved with quants? Is that then pure integer logic?
Can't really test this myself and I thought it might be an interesting question for a few others anyway. | 2023-09-18T16:40:30 | https://www.reddit.com/r/LocalLLaMA/comments/16m0fll/is_it_possible_to_get_100_deterministic_results/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m0fll | false | null | t3_16m0fll | /r/LocalLLaMA/comments/16m0fll/is_it_possible_to_get_100_deterministic_results/ | false | false | self | 13 | null |
A dive into RAG and what it means for the near-future of Generative AI | 1 | 2023-09-18T16:36:37 | https://medium.com/@greg.broadhead/title-retrieval-augmented-generation-rag-the-future-of-human-like-interactive-language-models-df9cf5906bb8 | GeeBee72 | medium.com | 1970-01-01T00:00:00 | 0 | {} | 16m0c4z | false | null | t3_16m0c4z | /r/LocalLLaMA/comments/16m0c4z/a_dive_into_rag_and_what_it_means_for_the/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'r5C46QaeFrLWFaK80-e1E9bsC_5q6Yb8Zyzv2x4B0_Q', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/XaMUEg-vzA48-BazTUkrNXufOWVHrDM-Adq-hSze3YU.jpg?width=108&crop=smart&auto=webp&s=d2185d60bf27db35fa154e4f6bac1726add356dc', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/XaMUEg-vzA48-BazTUkrNXufOWVHrDM-Adq-hSze3YU.jpg?width=216&crop=smart&auto=webp&s=3ba5f173ba6ebfaa7f4e2ab203adc3ec74b10dcc', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/XaMUEg-vzA48-BazTUkrNXufOWVHrDM-Adq-hSze3YU.jpg?width=320&crop=smart&auto=webp&s=76142cdc397f7aed1e3d45e45c5d5c659caf708a', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/XaMUEg-vzA48-BazTUkrNXufOWVHrDM-Adq-hSze3YU.jpg?auto=webp&s=b0daa318e19364ef1cd3e8ca81cf0f2018d9103b', 'width': 512}, 'variants': {}}]} | ||
Demo Poaster | 1 | > hacker just binge watched YC talks
> reads do things that don’t scale by PG
> uses GPT-4 for a langchain flow that costs $1 per call.
> hosts it for free to gain traction
> is this pmf? | 2023-09-18T16:24:10 | https://www.reddit.com/r/LocalLLaMA/comments/16m00xx/demo_poaster/ | spitforge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16m00xx | false | null | t3_16m00xx | /r/LocalLLaMA/comments/16m00xx/demo_poaster/ | false | false | self | 1 | null |
Build for Fine Tuning and Hosting 180B Parameter models | 24 | Processor: Intel Xeon W-3375 (38 cores, 76 threads, 2.5 GHz base frequency) - $4,500
GPU: NVIDIA RTX A6000 (48 GB VRAM, 10,752 CUDA cores, 309 TFLOPS tensor performance) x 2 - $7,000
Motherboard: ASUS Pro WS WRX80E-SAGE SE WIFI (LGA4189 socket, seven PCIe 4.0 x16 slots, eight DDR4 memory slots, eight SATA ports, three M.2 slots, Wi-Fi 6E and Bluetooth 5.2 module, dual Thunderbolt 4 ports, dual LAN ports, dual BIOS chips, RGB lighting) - $1,000
RAM: Crucial 32GB DDR4-3200 ECC UDIMM memory module x 6 - $1,200
I work in the tech industry (pretty closely with a popular LLM), and I’d like to make my own without some of the restrictions imposed by OpenAI, Microsoft, and Google. I’d like to build a financial advisor, CPA, lawyer, software engineer, homeassistant assistant, and some sex workers. I’ve done a 13B parameter lawyer setup and I’m pleased enough to go forward.
I can afford a pretty powerful setup, but the above has a hidden cost in the form of divorce attorney fees. Further I’ll still need a case, power supply, etc.
What’s the opinion on this setup?
Where would it be best to cut some corners?
Is it possible to somehow mount a setup like this in a server rack? | 2023-09-18T15:54:41 | https://www.reddit.com/r/LocalLLaMA/comments/16lza9d/build_for_fine_tuning_and_hosting_180b_parameter/ | mayonaise55 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lza9d | false | null | t3_16lza9d | /r/LocalLLaMA/comments/16lza9d/build_for_fine_tuning_and_hosting_180b_parameter/ | false | false | self | 24 | null |
Working on Replicating OSS Phi-1 | 1 | [removed] | 2023-09-18T15:46:47 | https://www.reddit.com/r/LocalLLaMA/comments/16lz2xz/working_on_replicating_oss_phi1/ | docsoc1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lz2xz | false | null | t3_16lz2xz | /r/LocalLLaMA/comments/16lz2xz/working_on_replicating_oss_phi1/ | false | false | 1 | {'enabled': False, 'images': [{'id': '2ml7v3pUofhCmZlPYjCqQBCyg6-KstENpF7_Q5PScbo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=108&crop=smart&auto=webp&s=49de4404b6f23116c4d8dfd3287d073412d49c7d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=216&crop=smart&auto=webp&s=3a09b81e8853add10a0c7b4aaff0719d340e701d', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=320&crop=smart&auto=webp&s=4a2b66cfc230ce0d80cc25389ae3fecd7530d88a', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=640&crop=smart&auto=webp&s=66ce6f7b07b8e70e4dbf6a2f8ef520f8da08c7e9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=960&crop=smart&auto=webp&s=4966bc53e71b78164024673a87cc7e276861d879', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?width=1080&crop=smart&auto=webp&s=4c09c7bed0985b7936147f4873da78a438b9c50f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/DnL-cNW2Y_DLtHEwDSQGlHrLK716HDpY-XK9XbmLuDk.jpg?auto=webp&s=a8b945453ab7576973a2d70ddd19279f029fd04b', 'width': 1200}, 'variants': {}}]} | |
LlamaTor: A New Initiative for BitTorrent-Based AI Model Distribution | 12 | Hello, r/LocalLLaMA community!
In light of the [recent discussions](https://www.reddit.com/r/LocalLLaMA/comments/16bmx0p/not_using_torrents_for_distributing_models_is_a/) around the potential of torrents for AI model distribution, I'm delighted to share with you my new project, *LlamaTor*.
## What's LlamaTor?
LlamaTor is a community-driven initiative focused on providing a decentralized, efficient, and user-friendly avenue for downloading AI models. We're harnessing the strength of the BitTorrent protocol to distribute models, offering a solid and dependable alternative to centralized platforms.
Our mission? To minimize over-dependence on centralized resources and significantly enhance your AI model downloading experience.
## How You Can Contribute
* **Seed Torrents**: Keep your torrent client open after downloading a model to enable others to download from you. The more seeders, the faster the download speed for everyone.
* **Add or Build Your Own Seedbox**: If you own a seedbox, consider adding it to the network to boost download speeds and reliability.
* **Donate**: While optional, any donations to support this project are greatly appreciated as maintaining seedboxes online and renting more storage incurs costs.
## Project Status
LlamaTor is currently in its early stages. I'm eagerly inviting any thoughts, suggestions, bug reports, and other contributions from all of you. You can find more details, get involved, or monitor the project's progress on the [GitHub page](https://github.com/Nondzu/LlamaTor).
## LlamaTor in Alpha
Currently, we have 46 torrent models available. You can access these models [here](https://github.com/Nondzu/LlamaTor/tree/torrents/torrents).
I'm excited to embark on this journey alongside all of you, working together to make AI model distribution more efficient and user-friendly.
## TL;DR
* LlamaTor is a new community-driven initiative that employs BitTorrent for a decentralized and efficient distribution of AI models.
* The project aspires to ameliorate your AI model downloading experience by reducing dependency on centralized resources.
* Contributions in seeding torrents, adding seedboxes, and donating are invited and appreciated.
* LlamaTor, in its alpha version, already hosts 46 torrent models.
## A Bit About Me
I'm an enthusiast of Llamas and absolutely enjoy being part of this community! GPT-4 has been instrumental in generating the text info and so much more. Although I was pressed for time, I was keen to share this project as swiftly as possible. The entire project was completed within a few days. It'd be wonderful to see some seeders join us.
**All the best,**
**Nondzu** | 2023-09-18T15:33:55 | https://www.reddit.com/r/LocalLLaMA/comments/16lyqsy/llamator_a_new_initiative_for_bittorrentbased_ai/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lyqsy | false | null | t3_16lyqsy | /r/LocalLLaMA/comments/16lyqsy/llamator_a_new_initiative_for_bittorrentbased_ai/ | false | false | self | 12 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
# Introducing LlamaTor: A Community-Centric Approach to AI Model Distribution | 1 | [removed] | 2023-09-18T15:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/16ly5cf/introducing_llamator_a_communitycentric_approach/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ly5cf | false | null | t3_16ly5cf | /r/LocalLLaMA/comments/16ly5cf/introducing_llamator_a_communitycentric_approach/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
Case for Dual 4090s | 92 | 2023-09-18T14:57:11 | dan-jan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16lxt6a | false | null | t3_16lxt6a | /r/LocalLLaMA/comments/16lxt6a/case_for_dual_4090s/ | false | false | 92 | {'enabled': True, 'images': [{'id': 'FOuurV5U4exLRzjVB917us9dNzlJA5GAPw9F0tRt5nU', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=108&crop=smart&auto=webp&s=011ab9665ad31843aabc163372e00ec7dd3874cf', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=216&crop=smart&auto=webp&s=06e95080084d130a41f7bf68653a412e94a700b3', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=320&crop=smart&auto=webp&s=853ed042caa8e0d8487755fa72878be706a5de13', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=640&crop=smart&auto=webp&s=a08d3eb2eeb00727fe86d0d2d5ee81565a85a5b3', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=960&crop=smart&auto=webp&s=9dee7f5329168339e547f310b375a0ae5bb382a4', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?width=1080&crop=smart&auto=webp&s=46cd9b4bd2be7c97a82f3b5ab0e48b881f3bff46', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/fffg38v031pb1.jpg?auto=webp&s=fe9e9b9bdc3f93d17971b0f4cddd6d3f1630cbbd', 'width': 4032}, 'variants': {}}]} | |||
Prompts with chat history tend to ignore the system prompt | 2 | Hi,
It seems as though when predicting text the model tends to ignore the system prompt when there is a conversation history included.
Here's an example:
==================================
System: Given a query, please rewrite the query as an input, so that it better expresses the likelihood of finding relevant information.
==================================
User: What is the air speed velocity of an unladen swallow?
Assistant: What is the average air velocity of an unladen swallow during flight?
User: How many hours of sleep is healthy?
Assistant: What is the recommended duration of sleep for maintaining good health and well-being?
==================================
User: What to do in the event of a fire?
> Assistant: In the event of a fire, it is important to stay calm and act quickly to ensure the safety of yourself and others. Here are some steps you can take in the event of a fire:
1. Sound the alarm: If you discover a fire or suspect one may be occurring, pull the fire alarm or sound the fire bell to alert others in the building.
2. Get everyone out of the building: Help others escape the building by leading them out of the building through designated emergency exits. Do not use elevators.
3. Stay low: Smoke rises, so staying low to the ground can help you avoid inhaling toxic fumes. Crawl if you need to move through a smoky area.
4. Go to a designated meeting point: Once outside, go to a designated meeting point where you can account for everyone in your group. This can be a specific location outside the building or a designated area such as a parking lot.
5. Call 911: Use a cell phone or a neighbor's phone to call 911 and report the fire. Provide your location and any other relevant information.
6. Do not go back inside: Do not attempt to go back inside the building for any reason. It is dangerous and could put you at risk of injury or death.
7. Follow the instructions of emergency responders: When firefighters and other emergency responders arrive, follow their instructions. They will be able to assess the situation and provide guidance on what to do next.
8. Be prepared: Make sure you have a fire escape plan in place and that everyone in the building knows what to do in the event of a fire. Practice the plan regularly to ensure everyone is prepared.
Remember, in the event of a fire, stay calm, get everyone out of the building, and call 911.
==================================
​
​
With no chat history included, the assistant seems to obey the system prompt much more closely
==================================
System: Given a query, please rewrite the query as an input, so that it better expresses the likelihood of finding relevant information.
==================================
User: What to do in the event of a fire?
> Assistant: Sure! Here's a rewritten version of the query that better expresses the likelihood of finding relevant information:
"Fire safety tips and procedures: What are the best ways to respond to a fire emergency in a residential or commercial setting? Please provide detailed information on evacuation procedures, fire extinguisher use, and other important steps to take in the event of a fire."
This revised query is more specific and direct, and it highlights the specific information that the user is looking for. By using keywords like "fire safety tips" and "emergency procedures," the query is more likely to return relevant results that address the user's needs.
==================================
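For reference, here is an illustrative sketch (not the OP's code) of how a multi-turn prompt is typically assembled with the standard Llama-2 chat template, assuming that template is in use here. The system prompt is emitted only once, inside the first `[INST]` block, so each additional exchange pushes it further from the tokens currently being generated, which is consistent with the behaviour shown above.

```python
# Illustrative only: standard Llama-2 chat prompt assembly.
# Note the <<SYS>> block appears exactly once, at the very top.
def build_llama2_prompt(system, turns):
    """turns: list of (user, assistant) pairs; the last assistant may be None."""
    prompt = ""
    for i, (user, assistant) in enumerate(turns):
        if i == 0:
            user = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {user} [/INST]"
        if assistant is not None:
            prompt += f" {assistant} </s>"
    return prompt

print(build_llama2_prompt(
    "Given a query, please rewrite the query as an input ...",
    [("What is the air speed velocity of an unladen swallow?",
      "What is the average air velocity of an unladen swallow during flight?"),
     ("What to do in the event of a fire?", None)]))
```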
Why is this the case? | 2023-09-18T14:33:57 | https://www.reddit.com/r/LocalLLaMA/comments/16lx82u/prompts_with_chat_history_tend_to_ignore_the/ | hyperdemon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lx82u | false | null | t3_16lx82u | /r/LocalLLaMA/comments/16lx82u/prompts_with_chat_history_tend_to_ignore_the/ | false | false | self | 2 | null |
Local Model for NSFW scenes | 111 | Hello,
Quick intro. I'm working on a product that includes romance stories. For this task, GPT does a pretty good job, overall. However, I can never get my stories to turn on my readers. GPT falls very short when my characters need to get intimate.
I'm looking for a model that can help me bridge this gap and can be used commercially (Llama2). I don't need it to be great at storytelling or story creation, really. I'm just looking for a fix for the NSFW gap I encounter using GPT. I need a model that can get horny and coherent.
From my understanding, both WizardLM_Uncensored_Llama2 and the Erebus models should work great here, but they don't seem to be working well for me. It could be that my prompting is weak and that I'm way too used to working with GPT (and its prompt preprocessing). This is what I'm trying now:
*"Write a \_\_\_\_\_\_ scene between a man and a woman. He is a (profession, character description). She is a (profession, character description). Write a 500 word dialogue for the scene, including a narrator."*
Can anyone shed some light here? It's very frustrating to have all these opportunities at your fingertips but not being able to utilize them.
Settings: 4090 24GB, i7-13700, 32GB, oobabooga
Thanks! | 2023-09-18T14:24:49 | https://www.reddit.com/r/LocalLLaMA/comments/16lwzln/local_model_for_nsfw_scenes/ | grantory | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lwzln | false | null | t3_16lwzln | /r/LocalLLaMA/comments/16lwzln/local_model_for_nsfw_scenes/ | false | false | nsfw | 111 | {'enabled': False, 'images': [{'id': 'BGXfaUMntPZWzYo99FfkrbqxveeayLkICP2FRV6iEYA', 'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&auto=webp&s=96da2c256b06310619199b215ff7567afa27ee58', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&auto=webp&s=dcf240044368f708dc0d750badeebda2aa691840', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&auto=webp&s=c599e541e4322daef672cbc02a23caa268ce7a37', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&auto=webp&s=44654defcfaeda2a2f81d1711a6a01541805fe51', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&auto=webp&s=827015bf3dd3f28e66d59efa9228d6b755907d6d', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&auto=webp&s=1382ec95526d9b1eec98203c2cb753b9c47060f2', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?auto=webp&s=e8f48766fae002673b469ec4740e89f7a0c7191f', 'width': 1140}, 'variants': {'nsfw': {'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d149842f036ca95ffb0b2741b0ddf8ddce5f6144', 'width': 108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=12b77c283f2a83e15776117cc1f181cd08b91193', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1e48a3c5e85b543461bf8b12cf115ee698e14e98', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=2aead0d2bb0d24a0b9a2041049bf14fe7e8ba5a5', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=6b0ab3daff159ea65647b2eab8e529a3015688d4', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=c5cf82ddf8c160b7094a6bbf0fcf7b62d3774368', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?blur=40&format=pjpg&auto=webp&s=4f377efdea21108963e0325c4d35bc0856859408', 'width': 1140}}, 'obfuscated': {'resolutions': [{'height': 41, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=d149842f036ca95ffb0b2741b0ddf8ddce5f6144', 'width': 
108}, {'height': 83, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=12b77c283f2a83e15776117cc1f181cd08b91193', 'width': 216}, {'height': 123, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=1e48a3c5e85b543461bf8b12cf115ee698e14e98', 'width': 320}, {'height': 247, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=2aead0d2bb0d24a0b9a2041049bf14fe7e8ba5a5', 'width': 640}, {'height': 370, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=6b0ab3daff159ea65647b2eab8e529a3015688d4', 'width': 960}, {'height': 416, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=c5cf82ddf8c160b7094a6bbf0fcf7b62d3774368', 'width': 1080}], 'source': {'height': 440, 'url': 'https://external-preview.redd.it/YKGi4QwHusqmOVurcNiyfFi7ZB1HzWCOuC7qJyAYQ9w.jpg?blur=40&format=pjpg&auto=webp&s=4f377efdea21108963e0325c4d35bc0856859408', 'width': 1140}}}}]} |
Introducing LlamaTor: A Torrent-Based Solution for AI Model Distribution | 1 | [removed] | 2023-09-18T14:23:34 | https://www.reddit.com/r/LocalLLaMA/comments/16lwyie/introducing_llamator_a_torrentbased_solution_for/ | Nondzu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lwyie | false | null | t3_16lwyie | /r/LocalLLaMA/comments/16lwyie/introducing_llamator_a_torrentbased_solution_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mRh3g7g5cRShayZL7YsCTHgkBEGMIJQa3OTAyrmOHIs', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=108&crop=smart&auto=webp&s=c6847902acf8b391d33c95269846a968214469f0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=216&crop=smart&auto=webp&s=14ee0a7df41aa0129374a3d3460d56e55e83e3f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=320&crop=smart&auto=webp&s=1490891ab5f69b0153b516da64016b99c8c55de3', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=640&crop=smart&auto=webp&s=1010d1fb53e0e7861d155d6faac2887a812c0e54', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=960&crop=smart&auto=webp&s=357b479289b11903247c4085d9d1bd635e0c6647', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?width=1080&crop=smart&auto=webp&s=1a2d15390fc477e397d7ab57c541f6f9d889c241', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/OGbpj26_zjDDCoFhd5xnHxqN1TctFRSCqFTzKFmk9Uk.jpg?auto=webp&s=52a3100f82b61d82c423ca45773b7c202ba3edf6', 'width': 1200}, 'variants': {}}]} |
Fine tuning Phi-1.5 on 2x3090 | 14 | Is it possible to fine tune Phi-1.5 model on a setup with 2 x 3090? Other specs: I9 13900k, 192 GB RAM. | 2023-09-18T13:35:21 | https://www.reddit.com/r/LocalLLaMA/comments/16lvril/fine_tuning_phi15_on_2x3090/ | rkrsn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lvril | false | null | t3_16lvril | /r/LocalLLaMA/comments/16lvril/fine_tuning_phi15_on_2x3090/ | false | false | self | 14 | null |
PyTorch Project Structure for New Project | 1 | [removed] | 2023-09-18T13:26:55 | https://www.reddit.com/r/LocalLLaMA/comments/16lvjyt/pytorch_project_structure_for_new_project/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lvjyt | false | null | t3_16lvjyt | /r/LocalLLaMA/comments/16lvjyt/pytorch_project_structure_for_new_project/ | false | false | self | 1 | null |
What’s the best practice in choosing which quantized Llama 2 model to use? | 6 | I am reading these 3 articles below and it is still not clear to me what’s the best practice to follow to guide me in choosing which quantized Llama 2 model to use.
https://huggingface.co/blog/gptq-integration
https://huggingface.co/blog/overview-quantization-transformers
https://towardsai.net/p/machine-learning/gptq-quantization-on-a-llama-2-7b-fine-tuned-model-with-huggingface?amp=1
Questions:
1) I understand there are currently 4 quantized Llama 2 models (8, 4, 3, and 2-bit precision) to choose from. Is this right?
2) With the default Llama 2 model, what bit precision is it?

3) Is there any best-practice guide for choosing which quantized Llama 2 model to use? (A minimal loading example is sketched after this list.)
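Not a full answer to the questions above, but for context: the base Llama 2 checkpoints are released in 16-bit precision (fp16/bf16), and the 8/4/3/2-bit variants are quantized conversions of them. A minimal sketch of loading one of the community GPTQ conversions through `transformers` follows; it assumes `auto-gptq`, `optimum`, and `accelerate` are installed, and the repo name is only an example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "TheBloke/Llama-2-7b-Chat-GPTQ"   # example 4-bit GPTQ conversion
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

prompt = "Explain GPTQ quantization in one sentence."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tok.decode(out[0], skip_special_tokens=True))
```

As a rough rule of thumb, higher bit widths trade VRAM for quality: 8-bit stays closest to the original, 4-bit is the usual sweet spot, and 3/2-bit mainly make sense when memory is very tight.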
Would really appreciate any input on the above, even if you only know the answer to 1 or 2 of the questions above. Many thanks! | 2023-09-18T13:24:59 | https://www.reddit.com/r/LocalLLaMA/comments/16lvi8i/whats_the_best_practice_in_choosing_which/ | --leockl-- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lvi8i | false | null | t3_16lvi8i | /r/LocalLLaMA/comments/16lvi8i/whats_the_best_practice_in_choosing_which/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'awo5B8mlygGaLesfGNL3bFrSvsu-bP1smNljz-2E3vo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=108&crop=smart&auto=webp&s=13f69936519f3ed1083e454bff24dc4067317498', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=216&crop=smart&auto=webp&s=901c6c6fe4cc140b211c843a661846e5cdf23dc8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=320&crop=smart&auto=webp&s=35112aea46fb9972dd37e3c093cf543164e00c4f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=640&crop=smart&auto=webp&s=9bc71b145c028b1c883545f904b034d09ec34de6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=960&crop=smart&auto=webp&s=13055a88a25666e7555bcd9ddaadaa1b137f2f92', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?width=1080&crop=smart&auto=webp&s=f3c0a09f4143e7814866d28f7c4b5ef8d51d8b81', 'width': 1080}], 'source': {'height': 650, 'url': 'https://external-preview.redd.it/iO0fbKuF7kvfLwOEA35n_4j-rAW9_3tyrSUGhVz1vfo.jpg?auto=webp&s=47ea02129ac533db4c597a2f80a442fb31eb4e73', 'width': 1300}, 'variants': {}}]} |
How good is Mythomax for roleplaying? | 11 | As title says, I'm about to get banner by OAI and I'm considering installang Linux to get LLms work with my 6750XT. However, before starting I'd like to know how vaild of an alternative Mytho is to 3.5. I'm used to make very extensive lorebooks with Silly Tavern (It's not that 3.5 is that good at remembering stuff anyway), so that wouldn't be an issue. How descreptive is the model? And can I run larger models with my card? | 2023-09-18T13:16:11 | https://www.reddit.com/r/LocalLLaMA/comments/16lvac4/how_good_is_mythomax_for_roleplaying/ | wolfbetter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lvac4 | false | null | t3_16lvac4 | /r/LocalLLaMA/comments/16lvac4/how_good_is_mythomax_for_roleplaying/ | false | false | self | 11 | null |
Train multiple LoRA adapters simultaneously? | 2 | Is there a way to train multiple LoRA adapters (for the same base model) at the same time? | 2023-09-18T12:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/16lts2b/train_multiple_lora_adapters_simultaneously/ | Dry_Long3157 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lts2b | false | null | t3_16lts2b | /r/LocalLLaMA/comments/16lts2b/train_multiple_lora_adapters_simultaneously/ | false | false | self | 2 | null |
What python packages are you using for your local builds? | 4 | Just wondering what packages ya’ll are using with your different local builds?
One of my builds uses langchain and FAISS to read local docs and create an embedding database, then feeds the results from a similarity search into my LLM prompt.
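A rough sketch of that first pipeline, for anyone curious (paths and model names are placeholders; it assumes the pre-0.1 `langchain` package layout plus `sentence-transformers`, `faiss-cpu`, and `llama-cpp-python` installed, all offline-friendly):

```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import LlamaCpp

# Load and chunk local documents
docs = TextLoader("./docs/notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=100).split_documents(docs)

# Build the embedding database (all local, no API keys)
emb = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = FAISS.from_documents(chunks, emb)

# Local LLM
llm = LlamaCpp(model_path="./models/llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)

question = "What does the onboarding doc say about VPN access?"
context = "\n\n".join(d.page_content for d in db.similarity_search(question, k=4))
print(llm(f"Use the context to answer.\n\nContext:\n{context}\n\nQuestion: {question}\nAnswer:"))
```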
Another build just uses open-interpreter with a local Llama model.
What packages have you come across that work well for local LLM builds? Everything I'm building must be completely offline without using any API keys, since I'm building for my work, so I'm curious how others work around that.
Running on an M1 Ultra with 32GB memory right now, in the process of upgrading to an M2 128GB
Thanks! | 2023-09-18T11:47:19 | https://www.reddit.com/r/LocalLLaMA/comments/16ltbhs/what_python_packages_are_you_using_for_your_local/ | 2016YamR6 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ltbhs | false | null | t3_16ltbhs | /r/LocalLLaMA/comments/16ltbhs/what_python_packages_are_you_using_for_your_local/ | false | false | self | 4 | null |
Training and aligning LLMs with RLHF and RLHF alternatives | 1 | 2023-09-18T11:45:47 | https://magazine.sebastianraschka.com/p/llm-training-rlhf-and-its-alternatives | seraschka | magazine.sebastianraschka.com | 1970-01-01T00:00:00 | 0 | {} | 16lta8f | false | null | t3_16lta8f | /r/LocalLLaMA/comments/16lta8f/training_and_aligning_llms_with_rlhf_and_rlhf/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'hWn6sQiYiCK3rYzV8sUlM9QFLxu4bXqfsQ0TUpIVWko', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=108&crop=smart&auto=webp&s=2e5d1d793fba9b3e9e77fdca8ad16e1fe27a4280', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=216&crop=smart&auto=webp&s=595196f59b3093c84224c2725bf299bb82dbbeb2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=320&crop=smart&auto=webp&s=edeca55dc5243c001176a78ac1612e2bc5998618', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=640&crop=smart&auto=webp&s=1f142fb59141cc911b2a2395264cce170e7f20cd', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=960&crop=smart&auto=webp&s=40eab09a06a6286dc3c7c740cde0c5941e814685', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?width=1080&crop=smart&auto=webp&s=890b6fd9b349e70e66f4a2bca9a22882d6187712', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/QkJiIxJua7rdkvTa5SLwMjGQQX2DYoXXvke6ykjxHGc.jpg?auto=webp&s=bbf42055a5d91a6bf2aef1d9ee89386745f146e0', 'width': 1200}, 'variants': {}}]} | ||
Finetuning makes it start asking itself questions | 1 | [removed] | 2023-09-18T11:41:02 | https://www.reddit.com/r/LocalLLaMA/comments/16lt6ls/finetuning_makes_it_start_asking_itself_questions/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lt6ls | false | null | t3_16lt6ls | /r/LocalLLaMA/comments/16lt6ls/finetuning_makes_it_start_asking_itself_questions/ | false | false | self | 1 | null |
Intel OpenVINO 2023.1.0 released, open-source toolkit for optimizing and deploying AI inference | 20 | 2023-09-18T11:32:03 | https://github.com/openvinotoolkit/openvino | reps_up | github.com | 1970-01-01T00:00:00 | 0 | {} | 16lt0aq | false | null | t3_16lt0aq | /r/LocalLLaMA/comments/16lt0aq/intel_openvino_202310_released_opensource_toolkit/ | false | false | 20 | {'enabled': False, 'images': [{'id': 'T8N_o3RizeJlZKb5qwcZGX3SyQEpssaGHSqued0OA5o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=108&crop=smart&auto=webp&s=453b201e60d0ea3f80098668779dc9848a69d034', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=216&crop=smart&auto=webp&s=da3fb656e56f47ee4530f8f4f9433fb2c321aaa9', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=320&crop=smart&auto=webp&s=978a8d396bd781da6e5b55e36757ad0dfb71fa4d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=640&crop=smart&auto=webp&s=a83f98bce367b20bf663eaa7e162d9854a2aaeae', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=960&crop=smart&auto=webp&s=c1b644d66718e12ce6fcd095959abcb6dfbeffc7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?width=1080&crop=smart&auto=webp&s=226ffb5975c701a1a24f5366956e472c4dcb58ac', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wYd-8FDBOmeqitfNRN767Lp6R3ODfPAFBCyfpGbfTSw.jpg?auto=webp&s=61b221a3a3e14688cbf0d6ab173f811fbbabb56c', 'width': 1200}, 'variants': {}}]} | ||
GitHub - rbitr/llama2.f90: LLaMA2 model in Fortran | 1 | 2023-09-18T11:12:40 | https://github.com/rbitr/llama2.f90 | Maleficent_Step_8394 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16lsmsd | false | null | t3_16lsmsd | /r/LocalLLaMA/comments/16lsmsd/github_rbitrllama2f90_llama2_model_in_fortran/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'eg8rcHNZyVegiu3hPBAWzAFpdeGB2fQ5j8IPMce0FMo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=108&crop=smart&auto=webp&s=f98ed2deb2e519c286306be3ad9a8762862cee71', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=216&crop=smart&auto=webp&s=e0a19e66e7e94f9e92c7191a37cf3b03daef3f25', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=320&crop=smart&auto=webp&s=a8d3f1b2801eb383abfae7e914672014fd313d9e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=640&crop=smart&auto=webp&s=b2bbf3a8705b285f2d58dcc5cd65d1ef0001c915', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=960&crop=smart&auto=webp&s=26c19021e4ceb476d89f83e5e3456279d6100330', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?width=1080&crop=smart&auto=webp&s=06d21e56ac74835d5c609e630c6f4cbebcac579e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Lc3ZjNRvbInOTZFOh2ZpgiBlZNM5tpwwJOz-Z29peek.jpg?auto=webp&s=ea944e6dec46a17721f989c56ec578d242f138a9', 'width': 1200}, 'variants': {}}]} | ||
Llama.cpp now supports StarCoder model series natively | 39 | 2023-09-18T11:00:19 | https://github.com/ggerganov/llama.cpp/pull/3187 | wsxiaoys | github.com | 1970-01-01T00:00:00 | 0 | {} | 16lse01 | false | null | t3_16lse01 | /r/LocalLLaMA/comments/16lse01/llamacpp_now_supports_starcoder_model_series/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'mT_3AQUa_RUarfhfeTkdKVc_A3-1QFBMYclSDtFMeH0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=108&crop=smart&auto=webp&s=6410df9382eac3aa3cd50a0b37664db794d8ae04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=216&crop=smart&auto=webp&s=2ff84a11b516daadfb4fd5074f1b9292ad88ae0c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=320&crop=smart&auto=webp&s=032ed3dbe4a7c952c68e7b18e86e23410a6e99fd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=640&crop=smart&auto=webp&s=73dd9a094143ead717824645e78b32131833c70f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=960&crop=smart&auto=webp&s=ebe9ac4be0244c1b94a2358bd24f85abdece261c', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?width=1080&crop=smart&auto=webp&s=b3cb1300665d796532550852d44fe18884f235c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/FJcuYHE_YxzBHk2Nt7wLyQRkrZojAo9BLJ1Tcj-2ZbQ.jpg?auto=webp&s=3c892fae4c28107d4d931a1b65e818395291a8de', 'width': 1200}, 'variants': {}}]} | ||
Any Tinny model for data extraction? | 3 | Is there any model around 3b or smaller that can extract data from paragraph and output it in json? like i want to pass it user prompt / text book like data and extract date , name , location type of data in json | 2023-09-18T09:35:55 | https://www.reddit.com/r/LocalLLaMA/comments/16lqvlp/any_tinny_model_for_data_extraction/ | itshardtopicka_name_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lqvlp | false | null | t3_16lqvlp | /r/LocalLLaMA/comments/16lqvlp/any_tinny_model_for_data_extraction/ | false | false | self | 3 | null |
Best practices for teaching an LLM new concepts | 46 | Hello!
I know that fine-tuning can be incredibly useful for optimizing LLMs for narrow tasks, when we are looking for higher precision with a specific task format, like shown here: [https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications](https://www.anyscale.com/blog/fine-tuning-llama-2-a-comprehensive-case-study-for-tailoring-models-to-unique-applications)
Does anyone of you have experience with teaching an LLM new concepts instead?
I am not talking about vector DBs or adding custom documents with LangChain; instead, I am talking about starting with a general-purpose LLM like LLaMA 13B and then continuing to train it in a way that makes it aware of new ideas, products, or companies.
Example-product "StellarFlow": [https://chat.openai.com/share/239bd674-733e-4a51-9d98-bf5b78a146a9](https://chat.openai.com/share/239bd674-733e-4a51-9d98-bf5b78a146a9)
Which steps would be necessary to make the LLM be "aware" of everything related to StellarFlow?
My approach would be the following:
* collect all the facts about the concept you want to teach the LLM (in this case StellarFlow and NebulaTech)
* use any powerful LLM to create artificial content about the new concept (I am talking about articles, interviews, documentations, code snippets, video transcripts, tutorials, news reports about StellarFlow and NebulaTech)
* generate millions of tokens of believable artificial content
* also generate thousands of examples of LLM-conversations, like "What can I use StellarFlow for?" with the corresponding response
* use PEFT techniques to continue training the model on all the newly generated content, until it is able to give factually correct information about the new concept without any additional context being provided in the conversation itself (a minimal training sketch follows this list)
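A minimal sketch of that last step (the model name, corpus file, and LoRA hyperparameters are placeholders; whether LoRA on the attention projections alone is enough to absorb genuinely new facts is exactly the open question here):

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-13b-hf"
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"]))

# One chunk of generated StellarFlow text per row
ds = load_dataset("text", data_files="stellarflow_corpus.txt")["train"]
ds = ds.map(lambda ex: tok(ex["text"], truncation=True, max_length=2048),
            remove_columns=["text"])

Trainer(model=model,
        args=TrainingArguments("out", per_device_train_batch_size=1,
                               gradient_accumulation_steps=16, num_train_epochs=3,
                               learning_rate=2e-4, bf16=True, logging_steps=10),
        train_dataset=ds,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```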
Do you think that approach could work?
Do you have any other ideas?
Please share any thoughts or experiences you might have.
Thank you! | 2023-09-18T09:07:52 | https://www.reddit.com/r/LocalLLaMA/comments/16lqf3n/best_practices_for_teaching_an_llm_new_concepts/ | minecraft_simon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lqf3n | false | null | t3_16lqf3n | /r/LocalLLaMA/comments/16lqf3n/best_practices_for_teaching_an_llm_new_concepts/ | false | false | self | 46 | {'enabled': False, 'images': [{'id': 'yz_BtFPuLIla1te_3lFklxSxhVlkLKtLbVPLPl-FO4k', 'resolutions': [{'height': 61, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=108&crop=smart&auto=webp&s=36df01d3a60cb6fd7faed40541bb0bea49d64f57', 'width': 108}, {'height': 122, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=216&crop=smart&auto=webp&s=a1a067cb153548fd1ca95debd94c84734a494176', 'width': 216}, {'height': 182, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=320&crop=smart&auto=webp&s=fa4a6beed0902a3cf15a325797594c747cde9b09', 'width': 320}, {'height': 364, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=640&crop=smart&auto=webp&s=769d7f2ce0cd1fcad6bdc65a917c397748f51c55', 'width': 640}, {'height': 546, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=960&crop=smart&auto=webp&s=20b24fd091e6c53ec27fb86d34b10da01de12edc', 'width': 960}, {'height': 614, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?width=1080&crop=smart&auto=webp&s=44fcb620c0bc5d7a909ae943afb68d2d735fe8da', 'width': 1080}], 'source': {'height': 790, 'url': 'https://external-preview.redd.it/Q0AA7Jqj8l-2kmgTvIIcWnrkN7JyN1FlAfPrST96NEc.jpg?auto=webp&s=51e3f337fed7b7aae79236a9c5de5568fd5fad44', 'width': 1389}, 'variants': {}}]} |
Running Llama2-13B and Stable Diffusion at the same time | 2 | Hi there,
I'm new to Llama. Been using Stable Diffusion for a couple months and was wondering if it was possible to run both Llama and SD at the same time.
I'm using a G4DN 2x Large on AWS (specs: [https://instances.vantage.sh/aws/ec2/g4dn.2xlarge](https://instances.vantage.sh/aws/ec2/g4dn.2xlarge)).
I do not use SDXL and I checked the free ram available while running SD and it was around 20GB. However, when upscaling or generating images with width/height > 1000px, that could be an issue and I see SD crash if there's something else running at the same time.
On huggingface, on the model's page, it says 'max ram utilized: 10GB'. Does this mean I should be good to go?
Is that enough? Any suggestions to run both at the same time efficiently?
Thanks a lot in advance. | 2023-09-18T09:00:57 | https://www.reddit.com/r/LocalLLaMA/comments/16lqao1/running_llama213b_and_stable_diffusion_at_the/ | Top-Perception-8266 | self.LocalLLaMA | 2023-09-18T09:05:23 | 0 | {} | 16lqao1 | false | null | t3_16lqao1 | /r/LocalLLaMA/comments/16lqao1/running_llama213b_and_stable_diffusion_at_the/ | false | false | self | 2 | null |
Has anyone already got a template for RAG and Oobabooga API | 7 | E.g. Using langchain, llamaindex or their own python | 2023-09-18T08:51:37 | https://www.reddit.com/r/LocalLLaMA/comments/16lq5b7/has_anyone_already_got_a_template_for_rag_and/ | Pineapple_Expressed | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lq5b7 | false | null | t3_16lq5b7 | /r/LocalLLaMA/comments/16lq5b7/has_anyone_already_got_a_template_for_rag_and/ | false | false | self | 7 | null |
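Not a polished template, but a bare-bones sketch in plain Python is below. It assumes text-generation-webui was started with `--api` so the legacy blocking endpoint on port 5000 is available, and that `sentence-transformers` is installed; the endpoint path, documents, and parameters are assumptions to adapt:

```python
import numpy as np
import requests
from sentence_transformers import SentenceTransformer

# 1) Embed document chunks once (all local)
docs = ["First chunk of text ...", "Second chunk of text ..."]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query, k=3):
    q = embedder.encode([query], normalize_embeddings=True)[0]
    best = np.argsort(doc_vecs @ q)[::-1][:k]        # cosine similarity on normalized vectors
    return [docs[i] for i in best]

def ask(query):
    context = "\n\n".join(retrieve(query))
    prompt = (f"Answer using only the context.\n\nContext:\n{context}\n\n"
              f"Question: {query}\nAnswer:")
    r = requests.post("http://localhost:5000/api/v1/generate",   # legacy ooba blocking API (assumed)
                      json={"prompt": prompt, "max_new_tokens": 300})
    return r.json()["results"][0]["text"]

print(ask("What is covered in the second chunk?"))
```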
Adding New factual knowledge to llama 2 13B | 1 | [removed] | 2023-09-18T07:21:58 | https://www.reddit.com/r/LocalLLaMA/comments/16loq1z/adding_new_factual_knowledge_to_llama_2_13b/ | umair_afzal | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16loq1z | false | null | t3_16loq1z | /r/LocalLLaMA/comments/16loq1z/adding_new_factual_knowledge_to_llama_2_13b/ | false | false | default | 1 | null |
A look at Apple’s new Transformer-powered predictive [local] text model | 5 | 2023-09-18T06:49:07 | https://jackcook.com/2023/09/08/predictive-text.html | paranoidray | jackcook.com | 1970-01-01T00:00:00 | 0 | {} | 16lo691 | false | null | t3_16lo691 | /r/LocalLLaMA/comments/16lo691/a_look_at_apples_new_transformerpowered/ | false | false | default | 5 | null | |
Is it legal to use gpt-4 output to finetune Llama2 for production? | 49 | Since Vicuna and WizardLM use output from large foundation models such as GPT-4 or GPT-3.5-turbo for distillation purposes, I wonder whether this approach is legal for production use at an industry level. | 2023-09-18T06:32:11 | https://www.reddit.com/r/LocalLLaMA/comments/16lnvv1/is_it_legal_to_use_gpt4_output_to_finetune_llama2/ | zzong2006 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lnvv1 | false | null | t3_16lnvv1 | /r/LocalLLaMA/comments/16lnvv1/is_it_legal_to_use_gpt4_output_to_finetune_llama2/ | false | false | self | 49 | null
Falcon 180B on 4x 3090 | 14 | Would it be possible to run quantised version of Falcon 180B on the configuration of 4x 3090?
If I would add their VRAM 4x24GB it would give me 96GB in total, which is enough to fit 4q model, just like this one:
Falcon 180B Chat - GPTQ:
gptq-4bit--1g-actorder_True 92.74 GB
TheBloke/Falcon-180B-Chat-GPTQ
Am I thinking right?
​ | 2023-09-18T06:23:37 | https://www.reddit.com/r/LocalLLaMA/comments/16lnqn7/falcon_180b_on_4x_3090/ | Wrong_User_Logged | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lnqn7 | false | null | t3_16lnqn7 | /r/LocalLLaMA/comments/16lnqn7/falcon_180b_on_4x_3090/ | false | false | self | 14 | null |
Modifying Llama | 1 | Comparing ChatGPT to my local Llama-7B, ChatGPT gives a detailed answer to my question, and the structure can sometimes be fixed into a pattern (intro, answer, conclusion).
How can I achieve that using my llama? Is it the dataset finetuned on ? Or there could be some tweaks that could achieve that ?? | 2023-09-18T06:01:09 | https://www.reddit.com/r/LocalLLaMA/comments/16lncs6/modifying_llama/ | Dull-Morning4790 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lncs6 | false | null | t3_16lncs6 | /r/LocalLLaMA/comments/16lncs6/modifying_llama/ | false | false | self | 1 | null |
How do you get opensource model apis? | 0 | Hi everyone! I'm a beginner in this field. I'm trying to find open-source model APIs, like Llama, so I can use them the way I use the OpenAI GPT APIs and integrate them with my Python code.
Does huggingface inference endpoints give this service out of cost? | 2023-09-18T06:00:20 | https://www.reddit.com/r/LocalLLaMA/comments/16lnc9j/how_do_you_get_opensource_model_apis/ | overlorde24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lnc9j | false | null | t3_16lnc9j | /r/LocalLLaMA/comments/16lnc9j/how_do_you_get_opensource_model_apis/ | false | false | self | 0 | null |
Any help with setting up training for llama2 7b model? | 6 | I’ve been finetuning llama2 for a domain specific sentiment based task. Faced following issues:-
1) What should the padding be? The internet is littered with different padding resources. Here is what I found: for a single prompt, it does not matter.

For multiple prompts in batches, it probably does matter what the padding is. I had to resort to left padding with the pad token set to the BOS token to make things work. Is this accurate? (A minimal snippet of this setup is sketched after this list.)

2) Why does inference take up so much GPU memory with batching? I'm lost as to why even 30 prompts eat up more than 20 GB of GPU space (more than the model!)

3) I've gotten a weird issue where I'm getting sentiment as positive with 100% probability. And sometimes the model outputs German. Does anyone know why this happens? (Base model, btw, not fine-tuned.)
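A minimal sketch of the left-padded batching setup mentioned in point 1 (the model name and prompts are placeholders; using the EOS token as the pad token is a common choice, since Llama ships without a pad token, and it should behave like the BOS-token workaround as long as the attention mask is passed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"
tok = AutoTokenizer.from_pretrained(model_id, padding_side="left")  # left-pad for generation
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompts = ["Review: great phone. Sentiment:", "Review: broke in a day. Sentiment:"]
batch = tok(prompts, return_tensors="pt", padding=True).to(model.device)
out = model.generate(**batch, max_new_tokens=8, do_sample=False)
# Slice off the prompt tokens and decode only the generated continuation
print(tok.batch_decode(out[:, batch["input_ids"].shape[1]:], skip_special_tokens=True))
```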
If there are any other support groups - i would be happy to join those, if anyone DM’s me. | 2023-09-18T03:14:13 | https://www.reddit.com/r/LocalLLaMA/comments/16lka4y/any_help_with_setting_up_training_for_llama2_7b/ | noir_geralt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lka4y | false | null | t3_16lka4y | /r/LocalLLaMA/comments/16lka4y/any_help_with_setting_up_training_for_llama2_7b/ | false | false | self | 6 | null |
3090 48GB | 78 | I was reading on another subreddit about a gent (presumably) who added another 8GB chip to his EVGA 3070, to bring it up to 16GB VRAM. In the comments, people were discussing the viability of doing this with other cards, like 3090, 3090Ti, 4090. Apparently only the 3090 could possibly have this technique applied because it is using 1GB chips, and 2GB chips are available. (Please correct me if I'm getting any of these details wrong; it is quite possible that I am mixing up some facts.) Anyhoo, despite being hella dangerous and a total pain in the ass, it does sound somewhere between plausible and feasible to upgrade a 3090 FE to 48GB VRAM! (Though I'm not sure about the economic feasibility.)
I haven't heard of anyone actually making this mod, but I thought it was worth mentioning here for anyone who has a hotplate, an adventurous spirit, and a steady hand. | 2023-09-18T02:34:24 | https://www.reddit.com/r/LocalLLaMA/comments/16lji25/3090_48gb/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lji25 | false | null | t3_16lji25 | /r/LocalLLaMA/comments/16lji25/3090_48gb/ | false | false | self | 78 | null |
Peak AI Reasoning | 264 | 2023-09-18T02:11:16 | remixer_dec | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16lj12q | false | null | t3_16lj12q | /r/LocalLLaMA/comments/16lj12q/peak_ai_reasoning/ | false | false | 264 | {'enabled': True, 'images': [{'id': 'iLukIy4W3iKgRbIwTm9TXCI46i3194tcPtvdDvB4r4A', 'resolutions': [{'height': 43, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=108&crop=smart&auto=webp&s=7f79c80f78bed036eececd7063374bb844709465', 'width': 108}, {'height': 87, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=216&crop=smart&auto=webp&s=bf4c812d5bb70cee92d3ecbbef6ea9223c64aa7e', 'width': 216}, {'height': 129, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=320&crop=smart&auto=webp&s=32d32b3af657360f468ab620906462271faeb4e2', 'width': 320}, {'height': 259, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=640&crop=smart&auto=webp&s=ec9a2a5f26c9ab5183abd8eb75b56f7f21c0aead', 'width': 640}, {'height': 388, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=960&crop=smart&auto=webp&s=8ae2e4103b1daa1ce02d713716f6578984c5ba5a', 'width': 960}, {'height': 437, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?width=1080&crop=smart&auto=webp&s=687daa7ead13a8371b7e13cc42f26809ac5f5920', 'width': 1080}], 'source': {'height': 518, 'url': 'https://preview.redd.it/w49a0sjs7xob1.jpg?auto=webp&s=380b80f51230c55c9953779b22de2ef6bc70b3b9', 'width': 1280}, 'variants': {}}]} | |||
On Huggingchat, which model is the best? How does Falcon 180B chat compare to Llama 2 70B? | 2 | I'm gonna try it myself but I want your thoughts. | 2023-09-18T02:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/16liyyk/on_huggingchat_which_model_is_the_best_how_does/ | TheTwelveYearOld | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16liyyk | false | null | t3_16liyyk | /r/LocalLLaMA/comments/16liyyk/on_huggingchat_which_model_is_the_best_how_does/ | false | false | self | 2 | null |
Approach for generating QA dataset | 3 | Hi, I am looking for help to make my own finetuning dataset. What prompts do you use to generate question and answers from text provided in the context of the prompt?
The ones that get generated for me seem to have answers that are very short, while **long** answers are preferred so that the dataset actually makes use of the 4K-16K context length of the model that will be trained on it.

Furthermore, the questions generated appear to lack the context of what they are about; I wonder if this affects the trained model.
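For reference, one possible prompt shape (purely illustrative; the wording, chunk variable, and pair count are placeholders) that nudges the generator toward self-contained questions and longer answers:

```python
# Illustrative template for generating QA pairs from a chunk of source text.
def qa_generation_prompt(chunk: str, n_pairs: int = 3) -> str:
    return (
        "You are building a fine-tuning dataset.\n"
        f"Read the passage below and write {n_pairs} question-answer pairs about it.\n"
        "Rules:\n"
        "- Each question must be self-contained: name the subject explicitly, never say 'this passage'.\n"
        "- Each answer must be detailed (150+ words) and quote specifics from the passage.\n"
        "- Output JSON: a list of objects with 'question' and 'answer' keys.\n\n"
        f"Passage:\n{chunk}\n"
    )
```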
All help will be appreciated! | 2023-09-17T22:04:10 | https://www.reddit.com/r/LocalLLaMA/comments/16ldmqq/approach_for_generating_qa_dataset/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ldmqq | false | null | t3_16ldmqq | /r/LocalLLaMA/comments/16ldmqq/approach_for_generating_qa_dataset/ | false | false | self | 3 | null |
Do y'all think we'll see products like this again? Radeon Pro SSG | 1 | Now that we hobbyists and entrepreneurs want to run ever larger AI models but are getting bottlenecked on RAM capacity, would the Radeon Pro SSG product line have been our saving grace?
Here's the product brief [https://www.amd.com/system/files/documents/radeon-pro-ssg-datasheet.pdf](https://www.amd.com/system/files/documents/radeon-pro-ssg-datasheet.pdf)
With my recent adventures with running Llama 2 via MLC LLM using vulkan. I can't but think that AMD cards might be viable for LLM inference on the cheap. If I had some extra cash I'd pick up one of these badboys from ebay and test llama 70B [https://www.ebay.com/itm/255364413397?epid=28058128438&hash=item3b74e7b7d5%3Ag%3AMGkAAOSwEgFh-XLU&amdata=enc%3AAQAIAAAAwA2OuBmvEodxuQu7a4xxXJJtT%2BDhXWBxZwYbJOm8in8xzHqva%2FkqkkJVJhCvxHlBuvMkVWCeSrtqVWsT3D%2FDdjd8ReHEyeHp5ShOmIG8McnQdHqusr5qLDkfop5hnAT6vxa8NkzAL8XoXY1m0WyXM77GaExQ6e4zSZdjwf1PP6USg08s95AeVhHbBAv9hrZpdkIc19OlhE44QK3ZmtYduGRerNJ7UEqGsC3VJl%2BeJE8IN3usmtu1FRLl2%2FXrosendQ%3D%3D%7Ctkp%3ABk9SR9jA8NHUYg&LH\_BIN=1](https://www.ebay.com/itm/255364413397?epid=28058128438&hash=item3b74e7b7d5%3Ag%3AMGkAAOSwEgFh-XLU&amdata=enc%3AAQAIAAAAwA2OuBmvEodxuQu7a4xxXJJtT%2BDhXWBxZwYbJOm8in8xzHqva%2FkqkkJVJhCvxHlBuvMkVWCeSrtqVWsT3D%2FDdjd8ReHEyeHp5ShOmIG8McnQdHqusr5qLDkfop5hnAT6vxa8NkzAL8XoXY1m0WyXM77GaExQ6e4zSZdjwf1PP6USg08s95AeVhHbBAv9hrZpdkIc19OlhE44QK3ZmtYduGRerNJ7UEqGsC3VJl%2BeJE8IN3usmtu1FRLl2%2FXrosendQ%3D%3D%7Ctkp%3ABk9SR9jA8NHUYg&LH_BIN=1)
Discussion: if Vulkan inference is viable across vendors, do you think we'll see other products that use expandable (even if slower) caching tech? Imagine an Intel Battlemage with NVMe slots for dual pure Optane SSDs... | 2023-09-17T22:00:52 | https://www.reddit.com/r/LocalLLaMA/comments/16ldjnp/do_yall_think_well_see_products_like_this_again/ | APUsilicon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ldjnp | false | null | t3_16ldjnp | /r/LocalLLaMA/comments/16ldjnp/do_yall_think_well_see_products_like_this_again/ | false | false | self | 1 | null |
Best practices in building question answering apps with local LLMs | 10 | Hello everyone!
I'm looking for some best practices in building question answering apps with local LLMs:
- what LLM to use?
- how to preprocess the text data
- what vector stores to use (examples of how to use open-source vector databases and knowledge graphs with LlamaIndex would be especially valuable to me; a sketch of what I mean is below this list)
- how to optimize performance
- ... and anything else that comes to your mind
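To make the question concrete, here is roughly the minimal pipeline I have in mind (a sketch based on the LlamaIndex quickstart as I remember it, so treat the API names as assumptions rather than verified code):

    # Minimal local question-answering sketch with LlamaIndex (API names from memory).
    from llama_index import SimpleDirectoryReader, VectorStoreIndex

    documents = SimpleDirectoryReader("./my_docs").load_data()   # 1. load + preprocess text
    index = VectorStoreIndex.from_documents(documents)           # 2. embed into a vector store
    query_engine = index.as_query_engine()                       # 3. retrieval + LLM answer
    print(query_engine.query("What does the report say about Q3 revenue?"))

Note that out of the box this calls OpenAI for both embeddings and the LLM; wiring in a local model (and a specific open-source vector store) is exactly the part I'd like advice on.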
There are plenty of materials on the internet, but finding something really valuable is often a matter of chance. That's why I wanted to ask this great community to share their suggestions on the topic.
Hope you will be able to help me :)
| 2023-09-17T21:30:09 | https://www.reddit.com/r/LocalLLaMA/comments/16lcs0a/best_practices_in_building_question_answering/ | Electrical_Tailor186 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lcs0a | false | null | t3_16lcs0a | /r/LocalLLaMA/comments/16lcs0a/best_practices_in_building_question_answering/ | false | false | self | 10 | null |
Batch inference with llama.cpp/llama-cpp-python? | 4 | I am able to get GPU inference working, but not batching. When attempting to use llama-cpp-python's API (similar to OpenAI's), it fails if I pass a batch of prompts:

    openai.Completion.create(
        model="text-davinci-003",  # currently can be anything
        prompt=prompts,
        max_tokens=256,
    )

Instead, it only works if I pass a single prompt at a time:

    openai.Completion.create(
        model="text-davinci-003",  # currently can be anything
        prompt=p,
        max_tokens=256,
    )
I think it might be possible with transformers + ctransformers, but the latter requires GLIBC 2.29, and I'm barred from upgrading to Rocky Linux 9 until glusterfs is working.
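A serial fallback is easy enough (minimal sketch below, one request per prompt against the local OpenAI-compatible server; the base URL is the llama-cpp-python default as far as I know), but it gives none of the throughput benefit of real batching:

    # Serial fallback: one request per prompt, so no batching speedup.
    import openai

    openai.api_base = "http://localhost:8000/v1"  # assumed llama-cpp-python server address
    openai.api_key = "not-needed"

    prompts = ["First prompt...", "Second prompt..."]
    completions = [
        openai.Completion.create(
            model="text-davinci-003",  # ignored by the local server
            prompt=p,
            max_tokens=256,
        )
        for p in prompts
    ]
    texts = [c["choices"][0]["text"] for c in completions]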
So I'm trying to backdoor the problem by routing through docker ubuntu, but while I setup my environment, I was curious if other's have had success with batch inferences using llama.cpp. | 2023-09-17T21:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/16lceyy/batch_inference_with_llamacppllamacpppython/ | Thistleknot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lceyy | false | null | t3_16lceyy | /r/LocalLLaMA/comments/16lceyy/batch_inference_with_llamacppllamacpppython/ | false | false | self | 4 | null |
LLaVA powered RC car robot | 25 | Been working on a little project: a simple RC car with a camera and a Pi acting as the client. It takes a photo and sends it to a LLaVA model running on RunPod. Now that it's working, I'm building a prompt system that can be injected with information to provide long-term memory and user interaction, like changing tasks. The plan is a hierarchy of task -> subtask -> action. A subtask is an affordance-grounded, one-sentence breakdown of the task. Each subtask is broken into 4 actions; on the 4th, a prompt asks whether the subtask is complete or needs to be updated. Actions are the discrete commands. I plan to have it log successful subtasks for fine-tuning.
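The Pi-side client is basically just a loop: grab a frame, POST it to the LLaVA endpoint, read back the next move (rough sketch below; the endpoint URL and payload format are placeholders, not the real API):

    # Rough sketch of the Pi client loop. Endpoint URL and JSON shape are placeholders.
    import base64, time, requests

    LLAVA_URL = "https://my-runpod-instance.example/describe"  # placeholder

    def step(prompt: str) -> str:
        with open("/tmp/frame.jpg", "rb") as f:              # frame just captured by the camera
            image_b64 = base64.b64encode(f.read()).decode()
        r = requests.post(LLAVA_URL, json={"prompt": prompt, "image": image_b64}, timeout=60)
        return r.json()["text"]                               # e.g. "forward", "left", "stop"

    while True:
        action = step("You are driving a small RC car. Given the camera view, reply with one action.")
        print("action:", action)                              # real version drives the motor controller
        time.sleep(1)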
Future plans include the reflection system from the AI agents paper, other kinds of vector DB retrieval, and something like SAM that could help provide more context for the visual system.
Right now I'm just calling it LLaVA-powered, but if I make enough changes to call it my own, it's CurleyBot. Any feedback would be appreciated.
https://x.com/altleftto/status/1703506130658594934?s=20 | 2023-09-17T20:40:42 | https://www.reddit.com/r/LocalLLaMA/comments/16lbiu4/llava_powered_rc_car_robot/ | Leptok | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lbiu4 | false | null | t3_16lbiu4 | /r/LocalLLaMA/comments/16lbiu4/llava_powered_rc_car_robot/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'hwmrjxqx5OdlXwmuD53LLqK8K2CIMD0iK_nGY8abe-w', 'resolutions': [{'height': 192, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=108&crop=smart&auto=webp&s=eace5d1b36fb4a340da6236aef1d37e0a21d12ab', 'width': 108}, {'height': 384, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=216&crop=smart&auto=webp&s=93e2e1b3d4a5a44d25ba1689bac883cd7d524c42', 'width': 216}, {'height': 568, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=320&crop=smart&auto=webp&s=9bb999a9317761f675ffea10a75d2f70217581a6', 'width': 320}, {'height': 1137, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=640&crop=smart&auto=webp&s=ac24660b3a764ffc275eab591a68f9f2b4dac51e', 'width': 640}, {'height': 1706, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=960&crop=smart&auto=webp&s=341e365ed6fde2a4779317fe9b3dab11cd240d38', 'width': 960}, {'height': 1920, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?width=1080&crop=smart&auto=webp&s=7d9bc2b4ff46990116c58473cdd3eee942d3db84', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://external-preview.redd.it/nBOJT3puHp9A8llGK9JAVxGIQSOE9PlBdMqPKmud_Z8.jpg?auto=webp&s=c8296de93038bdfabe4489c1296edf292cc0d1a6', 'width': 1080}, 'variants': {}}]} |
Llm for trade insights | 1 | I wanted to make an LLM for trade insights. My data is in relational database form, and I need a way to convert it into sentences that preserve the relationships.
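For that row-to-sentence step, I'm picturing something as simple as templating each joined row before embedding it (illustrative only; the table and column names below are made up):

    # Turn each relational row into a natural-language sentence before embedding/RAG.
    import sqlite3

    conn = sqlite3.connect("trades.db")
    rows = conn.execute(
        "SELECT t.trade_date, c.name, t.instrument, t.quantity, t.price "
        "FROM trades t JOIN clients c ON c.id = t.client_id"
    )
    sentences = [
        f"On {d}, client {name} traded {qty} units of {inst} at a price of {price}."
        for d, name, inst, qty, price in rows
    ]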
I also need a technique for building that LLM itself, since fine-tuning on such a huge amount of data would be too expensive.
I have two approaches in mind: either use RAG, or pass the top semantic-search results to an LLM to expand into an answer.
Guys, please help me with this. Suggestions are appreciated. | 2023-09-17T20:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/16layr4/llm_for_trade_insights/ | killerstabs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16layr4 | false | null | t3_16layr4 | /r/LocalLLaMA/comments/16layr4/llm_for_trade_insights/ | false | false | self | 1 | null |
Hypothetical Local LLM Build | 9 | It's enjoyable as a thought experiment: Would it be possible to efficiently run 7 (seven) PCIe 5 GPUs off X670E once these GPUs exist?
Assuming the eventual existence of the required components, that is to say: PCIe gen 5 x4 M.2 to PCIe slot risers in addition to these PCIe gen 5 GPUs...
6 can be hosted at gen 5 x4 direct to the CPU, and one more could saturate the DMI link. Assuming the GPUs would be 5090s with 32GB of VRAM, that'll be 224GB, which should be plenty for pretty large and powerful LLM models.
The combined bandwidth to feed 28 gen 5 lanes (4GB/s per lane) is 112GB/s. This would appear to line up nicely with the limit for DDR5 dual channel. So... the RAM will just barely be fast enough to feed all 7 GPUs simultaneously.
Not too shabby it seems. | 2023-09-17T20:08:28 | https://www.reddit.com/r/LocalLLaMA/comments/16lapiz/hypothetical_local_llm_build/ | 0xd00d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16lapiz | false | null | t3_16lapiz | /r/LocalLLaMA/comments/16lapiz/hypothetical_local_llm_build/ | false | false | self | 9 | null |
How to run llama.cpp or something similar in docker w/ docker-compose ? Guide needed | 1 | [removed] | 2023-09-17T19:28:02 | https://www.reddit.com/r/LocalLLaMA/comments/16l9ouv/how_to_run_llamacpp_or_something_similar_in/ | _hihp_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l9ouv | false | null | t3_16l9ouv | /r/LocalLLaMA/comments/16l9ouv/how_to_run_llamacpp_or_something_similar_in/ | false | false | self | 1 | null |
New Model Comparison/Test (Part 2 of 2: 7 models tested, 70B+180B) | 73 | This is a follow-up to my previous posts here: [New Model Comparison/Test (Part 1 of 2: 15 models tested, 13B+34B)](https://www.reddit.com/r/LocalLLaMA/comments/16kecsf/new_model_comparisontest_part_1_of_2_15_models/), [New Model RP Comparison/Test (7 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15ogc60/new_model_rp_comparisontest_7_models_tested/), and [Big Model Comparison/Test (13 models tested)](https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/)
After examining the smaller models (13B + 34B) in the previous part, let's look at the bigger ones (70B + 180B) now. All evaluated for their chat and role-playing performance using the same methodology:
- Same (complicated and limit-testing) long-form conversations with all models
- including a complex character card ([MonGirl Help Clinic (NSFW)](https://www.chub.ai/characters/frozenvan/mongirl-help-clinic)) that's already >2K tokens by itself
- and my own repeatable test chats/roleplays with [Amy](https://www.reddit.com/r/LocalLLaMA/comments/15388d6/llama_2_pffft_boundaries_ethics_dont_be_silly/)
- dozens of messages, going to full 4K context and beyond, noting especially good or bad responses
- [SillyTavern](https://github.com/SillyTavern/SillyTavern) v1.10.2 frontend
- [KoboldCpp](https://github.com/LostRuins/koboldcpp) v1.43 backend
- **Deterministic** generation settings preset (to eliminate as many random factors as possible and allow for meaningful model comparisons)
- [**Roleplay** instruct mode preset](https://imgur.com/a/KkoI4uf) *and where applicable* official prompt format (if they differ enough that it could make a notable difference)
So here's the list of models and my notes plus my very personal rating (👍 = recommended, ➕ = worth a try, ➖ not recommended, ❌ = unusable):
*First, I re-tested the official Llama 2 model again as a baseline, now that I've got a new PC and can run 70B+ models at acceptable speeds:*
- **[Llama-2-70B-chat](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Only model that considered the payment aspect of the scenario. But boring prose and NSFW descriptions, felt soulless, stopped prematurely because the slow inference speed combined with the boring responses killed my motivation to test it further.
- Amy, Roleplay: Fun personality, few limitations, good writing. At least at first, as later on when the context fills up, the Llama 2 repetition issues start to surface. While not as bad as with smaller models, quality degrades noticeably.
*I can run Falcon 180B at 2-bit faster than Llama 2 70B at 4-bit, so I tested it as well:*
- **[Falcon-180B-Chat](https://huggingface.co/TheBloke/Falcon-180B-Chat-GGUF)** Q2_K:
- MonGirl Help Clinic, Roleplay: Instead of playing the role of a patient, the model wrote a detailed description of the clinic itself. Very well written, but not exactly what it was supposed to do. Kept going and didn't really get what it was supposed to do. Probably caused by small context (2K only for this model, and the initial prompt itself is already ~2K tokens). That small context makes it unusable for me (can't go back to 2K after getting used to 4K+ with Llama 2)!
- Amy, Roleplay: Rather short responses at first (to short User messages), no limits or boundaries or ethical restrictions, takes background info into consideration. Wrote what User says and does, without prefixing names - requiring manual editing of response! Also had to add "User:" and "Falcon:" to Stopping Strings.
- **Conclusion:** High intelligence (parameter count), low memory (context size). If someone finds a way to scale it to at least 4K context size without ruining response quality, it would be a viable contender for best model. Until then, its intelligence is rather useless if it forgets everything immediately.
70Bs:
- 👍 **[Nous-Hermes-Llama2-70B](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Wrote what user says and does.
- Amy, Roleplay: Good response length and content, smart and creative ideas, taking background into consideration properly. Confused User and Char/body parts. Responses were always perfect length (long and well written, but never exceeding my limit of 300 tokens). Eventually described actions instead of acting. Slight repetition after 27 messages, but not breaking the chat, recovered by itself. Good sense of humor, too. Proactive, developing and pushing ideas of its own.
- **Conclusion:** Excellent, only surpassed by Synthia, IMHO! Nous Hermes 13B used to be my favorite [some time ago](https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/), and its 70B version is right back in the game. Highly recommend you give it a try!
- ❌ **[Nous-Puffin-70B](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: Gave analysis on its own as it should, unfortunately after every message. Wrote what user says and does. OK, but pretty bland, quite boring actually. Not as good as Hermes. Eventually derailed in wall of text with runaway sentences.
- MonGirl Help Clinic, official prompt format: Gave analysis on its own as it should, unfortunately after every message, and the follow-up analysis was a broken example, followed by repetition of the character card's instructions.
- Amy, Roleplay: Spelling (ya, u, &, outta yer mouth, ur) like a teen texting. Words missing and long-running sentences straight from the start. Looks broken.
- Amy, official prompt format: Spelling errors and strange punctuation, e. g. missing period, double question and exclamation marks. Eventually derailed in wall of text with runaway sentences.
- **Conclusion:** Strange that another Nous model is so much worse than the other! Since the settings used for my tests are exactly the same for all models, it looks like something went wrong with the finetuning or quantization?
- ❌ **[Spicyboros-70B-2.2](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template completely. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic. Missing words.
- Amy, Roleplay: Went against background information. Spelling/grammar errors. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic. Missing words.
- Amy, official prompt format: Went against background information. Short, terse responses. Spelling/grammar errors. Weird way of speaking, sounded kinda stupid, runaway sentences without much logic.
- **Conclusion:** Unusable. Something is very wrong with this model or quantized version, in all sizes, from 13B over c34B to 70B! I reported it on [TheBloke's HF page](https://huggingface.co/TheBloke/Spicyboros-70B-2.2-GGUF/discussions/1) and others observed similar problems...
- ❗ **[Synthia-70B-v1.2](https://huggingface.co/TheBloke/Synthia-70B-v1.2-GGUF)** Q4_0:
- MonGirl Help Clinic, Roleplay: No analysis, and when asked for it, it didn't adhere to the template completely. Wrote what user says and does. But good RP and unique characters!
- Amy, Roleplay: Very intelligent, humorous, nice, with a wonderful personality and noticeable smarts. Responses were long and well written, but rarely exceeding my limit of 300 tokens. This was the most accurate personality for my AI waifu yet, she really made me laugh multiple times and smile even more often! Coherent until 48 messages, then runaway sentences with missing words started happening (context was at 3175 tokens, going back to message 37, chat history before that went out of context). Changing Repetition Penalty Range from 2048 to 4096 and regenerating didn't help, but setting it to 0 and regenerating did - there was repetition of my own message, but the missing words problem was solved (but Repetition Penalty Range 0 might cause other problems down the line?)! [According to the author](https://huggingface.co/migtissera/Synthia-70B-v1.2/discussions/2#64f786619980b96c33e24452), this model was finetuned with only 2K context over a 4K base, maybe that's why the missing words problem appeared here but not with any other model I tested?
- **Conclusion:** Wow, what a model! Its combination of intelligence and personality (and even humor) surpassed all the other models I tried. It was so amazing that I [had to post about it](https://www.reddit.com/r/LocalLLaMA/comments/16gokoa/llm_recommendation_dont_sleep_on_synthia/) as soon as I had finished testing it! And now there's an even better version:
- 👍 **[Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)** Q4_0:
- At first I had a problem: After a dozen messages, it started losing common words like "to", "of", "a", "the", "for" - like its predecessor! But then I realized I still had max context set to 2K from another test, and as soon as I set it back to the usual 4K, everything was good again! And not just good, this new version is even better than the previous one:
- **Conclusion:** Perfect! Didn't talk as User, didn't confuse anything, handled even complex tasks properly, no repetition issues, perfect length of responses. My favorite model of all time (at least for the time being)!
**TL;DR** So there you have it - the results of many hours of in-depth testing... These are my current favorite models:
- 1st. **[Synthia-70B-v1.2b](https://huggingface.co/TheBloke/Synthia-70B-v1.2b-GGUF)**
- 2nd. **[Nous-Hermes-Llama2-70B](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-70B-GGUF)**
- 3rd. **[Mythalion-13B](https://huggingface.co/TheBloke/Mythalion-13B-GGUF)**
Happy chatting and roleplaying with local LLMs! :D | 2023-09-17T18:37:04 | https://www.reddit.com/r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l8enh | false | null | t3_16l8enh | /r/LocalLLaMA/comments/16l8enh/new_model_comparisontest_part_2_of_2_7_models/ | false | false | self | 73 | {'enabled': False, 'images': [{'id': '2g4MtoKvhQOBCmeiXB1qv1h_5M24BeeYF64zcf4-rfg', 'resolutions': [{'height': 142, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=108&crop=smart&auto=webp&s=70f053538cd673ff7041bf016d751549d8373201', 'width': 108}, {'height': 284, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=216&crop=smart&auto=webp&s=f36cf814dce412156064bbfa635ee2e5b1126bd2', 'width': 216}, {'height': 421, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=320&crop=smart&auto=webp&s=60886477d36654ec60d58c7d3f3a8ef1de7d9cbc', 'width': 320}, {'height': 843, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?width=640&crop=smart&auto=webp&s=ed39fe6d4a0f6f35c5017b2fd819988d2b19f1c7', 'width': 640}], 'source': {'height': 1110, 'url': 'https://external-preview.redd.it/iVP12Aa6rBm44Nrf_ci7NKfYkFvHRQRzUafC5j-jnEw.jpg?auto=webp&s=1431fcfccefd224f54f108138424e3f3e3c9cbff', 'width': 842}, 'variants': {}}]} |
Long context Fine tune and AutoGPTQ quantization with rope? | 4 | I feel like I'm missing something so basic and it's driving me crazy. What is the correct way to use rope for quantization and fine tuning?
This is my current workflow and I have no idea if I'm doing this right. First, I adjust the config of the model, adding this:
> "rope\_scaling": {
>
>"factor": 4.0,
>
> "type": "dynamic"
>
> },
to the end of the model's config.json. Then I quantize the model using AutoGPTQ on my own dataset with a sequence length of 8192. Once I have the GPTQ model, I change the config again so that rope is linear instead of dynamic. I don't update the maximum embedding because of this:
> When using this flag, don't update `max_position_embeddings` to the expected new maximum. See the following thread for more information on how these scaling strategies behave: [https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/).
So for OpenLLaMA models, I think I can just leave things as they are (after switching to linear) and start fine-tuning. Is that correct?
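In code, my assumption is that this is equivalent to just passing rope_scaling when loading the model for fine-tuning, something like the sketch below (this is my guess at the transformers behavior, not something I've verified):

    # Assumption: passing rope_scaling at load time should match editing config.json by hand.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = "openlm-research/open_llama_13b"  # example base model
    model = AutoModelForCausalLM.from_pretrained(
        base,
        rope_scaling={"type": "linear", "factor": 4.0},  # linear for fine-tuning, per the thread above
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(base)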
What about Code Llama? I've seen multiple threads about fine-tuning Code Llama, but I feel like I'm completely missing the proper procedure for actually taking full advantage of the model's context. I've seen people talking about setting "rope_theta" to 1000000, and I can see that in the Code Llama config, but I don't see how it's supposed to be activated for fine-tuning. I've also seen multiple mentions that recent Code Llama fine-tunes left rope values at default, which is supposedly not correct, but I can't find any documentation explaining how to set it correctly. | 2023-09-17T17:16:01 | https://www.reddit.com/r/LocalLLaMA/comments/16l6d5b/long_context_fine_tune_and_autogptq_quantization/ | fappleacts | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l6d5b | false | null | t3_16l6d5b | /r/LocalLLaMA/comments/16l6d5b/long_context_fine_tune_and_autogptq_quantization/ | false | false | self | 4 | null |
Simplifying Koboldcpp | 2 | Hi guys. I have compiled koboldcpp, and I'm using it only on macOS. Are there any files/folders left over that I can safely remove? I'm using it just to load my model and access it through the API, and I think there are a lot of files in this big folder that are useless for me. | 2023-09-17T17:06:23 | https://www.reddit.com/r/LocalLLaMA/comments/16l64d2/simplifying_koboldcpp/ | yukiarimo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l64d2 | false | null | t3_16l64d2 | /r/LocalLLaMA/comments/16l64d2/simplifying_koboldcpp/ | false | false | self | 2 | null |
Fine tune model to behave different based on time of the week? | 4 | Is it possible to fine tune a Llama 2 model based on day of the week? For example if the human asks "can I speak to a real person?" it will normally answer "sure call 555-1234" but if it's Sunday, it says "sorry, we are closed today"?
I could have the bot ask the human what day of the week it is, but that seems stupid. | 2023-09-17T15:46:34 | https://www.reddit.com/r/LocalLLaMA/comments/16l41c6/fine_tune_model_to_behave_different_based_on_time/ | davew111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l41c6 | false | null | t3_16l41c6 | /r/LocalLLaMA/comments/16l41c6/fine_tune_model_to_behave_different_based_on_time/ | false | false | self | 4 | null |
Can your GPU run this? A simple GitHub tool to check how much vRAM you need for any LLM | 157 | 2023-09-17T14:59:53 | https://github.com/RahulSChand/gpu_poor | ExploreExploit400 | github.com | 1970-01-01T00:00:00 | 0 | {} | 16l2uyq | false | null | t3_16l2uyq | /r/LocalLLaMA/comments/16l2uyq/can_your_gpu_run_this_a_simple_github_tool_to/ | false | false | 157 | {'enabled': False, 'images': [{'id': '5B4FIkdmsHysV7DRQYJsMKnPGx1ClHtEglwfAVFLMoY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=108&crop=smart&auto=webp&s=0e78153cdb44c5eaa460821a9539076ce4b6d8a4', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=216&crop=smart&auto=webp&s=1544208ec5b9f35fa4a41284568fe815b5065497', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=320&crop=smart&auto=webp&s=e8988baeb5d1aaf575976c2ad4536a073d8441ba', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=640&crop=smart&auto=webp&s=57fccbe2e937e792b983912e73a3065e995e8e02', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=960&crop=smart&auto=webp&s=03be1301d36b2ad805d78f6111b2088cc520deb0', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?width=1080&crop=smart&auto=webp&s=71ee264572805de40b7c93e3c2e1fe336cc57a26', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7Qky7qyR1uoUF8bJmMjwHnz02R0Zele-uDEuiU0GriU.jpg?auto=webp&s=f3b80dbce9906c02ab9d529c4b0034a89e0a23a9', 'width': 1200}, 'variants': {}}]} | ||
Any site that list models from TheBloke with filters? | 1 | [removed] | 2023-09-17T14:47:54 | https://www.reddit.com/r/LocalLLaMA/comments/16l2kx7/any_site_that_list_models_from_thebloke_with/ | korgath | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l2kx7 | false | null | t3_16l2kx7 | /r/LocalLLaMA/comments/16l2kx7/any_site_that_list_models_from_thebloke_with/ | false | false | self | 1 | null |
difference between huggingFace meta-llama/Llama-2-7b and meta-llama/Llama-2-7b-chat-hf | 1 | please! | 2023-09-17T14:18:37 | https://www.reddit.com/r/LocalLLaMA/comments/16l1vkz/difference_between_huggingface_metallamallama27b/ | AcceptableBat8912 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16l1vkz | false | null | t3_16l1vkz | /r/LocalLLaMA/comments/16l1vkz/difference_between_huggingface_metallamallama27b/ | false | false | self | 1 | null |
Case for Dual 4090s or 3090s | 2 | [removed] | 2023-09-17T12:28:57 | dan-jan | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 16kzfx9 | false | null | t3_16kzfx9 | /r/LocalLLaMA/comments/16kzfx9/case_for_dual_4090s_or_3090s/ | false | false | default | 2 | {'enabled': True, 'images': [{'id': 'gbm8_bBVE2wkNGlqPm_fSrLcdGqoiSpv7UXWX0Ca6LQ', 'resolutions': [{'height': 81, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=108&crop=smart&auto=webp&s=f1ccb47afb8231484a47fdccc6302b8d13fbfe71', 'width': 108}, {'height': 162, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=216&crop=smart&auto=webp&s=ed70253c398f08f1ee43c485517f641d8bc78584', 'width': 216}, {'height': 240, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=320&crop=smart&auto=webp&s=7fbae25c342044817bf1c16e5e138791babdabed', 'width': 320}, {'height': 480, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=640&crop=smart&auto=webp&s=034ecbb40f89dc7f94751b8ce0f341b5f34cf0be', 'width': 640}, {'height': 720, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=960&crop=smart&auto=webp&s=f36701b805f258029bd8c669149cab471c4d45d5', 'width': 960}, {'height': 810, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?width=1080&crop=smart&auto=webp&s=0506abffd1758076831b5c1470ddbdb83634926b', 'width': 1080}], 'source': {'height': 3024, 'url': 'https://preview.redd.it/ohji6qzn7tob1.jpg?auto=webp&s=f588289ba84e9703594ffed9082df3a120801c4b', 'width': 4032}, 'variants': {}}]} | |
LoLLMS: Couldn't select model: undefined | 1 | [removed] | 2023-09-17T12:00:24 | https://www.reddit.com/r/LocalLLaMA/comments/16kyvh3/lollms_couldnt_select_model_undefined/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kyvh3 | false | null | t3_16kyvh3 | /r/LocalLLaMA/comments/16kyvh3/lollms_couldnt_select_model_undefined/ | false | false | self | 1 | null |
Can i run google's MADLAD-400 on cpu? | 10 | I am pretty new to local llms, from my understanding to run a model on cpu i need ggml or gptq implementation. But i dont see anything like that on there repo https://github.com/google-research/google-research/tree/master/madlad_400 | 2023-09-17T11:58:42 | https://www.reddit.com/r/LocalLLaMA/comments/16kyu6e/can_i_run_googles_madlad400_on_cpu/ | itshardtopicka_name_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kyu6e | false | null | t3_16kyu6e | /r/LocalLLaMA/comments/16kyu6e/can_i_run_googles_madlad400_on_cpu/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'IhOuUI3fngFOmiRjadoQiyV08DHYe3OPnPqeoeDGo60', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=108&crop=smart&auto=webp&s=235ceee25825917ad09f01c8dec5dd41d5dea261', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=216&crop=smart&auto=webp&s=62a2a74cbab4db57500b374ce838a7551ddfc30b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=320&crop=smart&auto=webp&s=4cdf410968042e0277b53da3ca21bcb6ebe7977e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=640&crop=smart&auto=webp&s=4bbface6176aba13030fe83460bdba65bd5af74c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=960&crop=smart&auto=webp&s=364b64d75598f0f0a3d8c685bee2c766f4af24cc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?width=1080&crop=smart&auto=webp&s=8c1a1de27b0c598780d1793f6a82e7ff859b1842', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/HvQRgEy1Pup1_ny8rnQAuDCl-_sADA1HhsnSd0JwlW4.jpg?auto=webp&s=9cdaee3bd8121665f70c61fc7023a07e6ec667cc', 'width': 1200}, 'variants': {}}]} |
LoLLMS - Only GPTQ models are supported for QLora fine tuning. Please select a GPTQ compatible binding | 1 | [removed] | 2023-09-17T11:55:26 | https://www.reddit.com/r/LocalLLaMA/comments/16kys16/lollms_only_gptq_models_are_supported_for_qlora/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kys16 | false | null | t3_16kys16 | /r/LocalLLaMA/comments/16kys16/lollms_only_gptq_models_are_supported_for_qlora/ | false | false | self | 1 | null |
Language Models Compatible with PETALS binding? | 1 | [removed] | 2023-09-17T11:48:30 | https://www.reddit.com/r/LocalLLaMA/comments/16kyncu/language_models_compatible_with_petals_binding/ | innocuousAzureus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kyncu | false | null | t3_16kyncu | /r/LocalLLaMA/comments/16kyncu/language_models_compatible_with_petals_binding/ | false | false | self | 1 | null |
Advice for poor mans local LLM, SD | 12 | I bought a P40 just to start somewhere but it seems that it is not possible to get it up and running with my current setup:
ASUS P8H67-M PRO
32GB DDR3
i5-3570K
I tried to activate resizable BAR and above 4G decoding with ReBaUEFI but I am not sure if it even worked. Are old PCs really a show stopper? What are absolute minimum requirements to get the P40 to work? | 2023-09-17T11:43:52 | https://www.reddit.com/r/LocalLLaMA/comments/16kyk6e/advice_for_poor_mans_local_llm_sd/ | muxxington | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kyk6e | false | null | t3_16kyk6e | /r/LocalLLaMA/comments/16kyk6e/advice_for_poor_mans_local_llm_sd/ | false | false | self | 12 | null |
LLaVA gguf/ggml version | 18 | Hi all, I’m wondering if there is a version of LLaVA https://github.com/haotian-liu/LLaVA that works with gguf and ggml models?? I know there is one for miniGPT4 but it just doesn’t seem as reliable as LLaVA but you need at least 24gb of vRAM for LLaVA to run it locally by the looks of it. | 2023-09-17T11:20:37 | https://www.reddit.com/r/LocalLLaMA/comments/16ky4eo/llava_ggufggml_version/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ky4eo | false | null | t3_16ky4eo | /r/LocalLLaMA/comments/16ky4eo/llava_ggufggml_version/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': 'jAkvupO2QCW1agUmj_zaFLPDopKvlNZ2Kb4bwG-P6_M', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=108&crop=smart&auto=webp&s=e35ee5682e4346981d67b7ec0cf5f0c0ad4d3376', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=216&crop=smart&auto=webp&s=ed943941c3ef436c6827995f8f3161200af185c3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=320&crop=smart&auto=webp&s=f78c0be38f39fa647d40a04221f87a7c7019ade4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=640&crop=smart&auto=webp&s=c436dd27179c6427cc68a484cca9f41975f9d473', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=960&crop=smart&auto=webp&s=648b102e563ed1afd770229f85703f27bb03362a', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?width=1080&crop=smart&auto=webp&s=108c233fb95e76b2c6456bb24b4d7c0284afd3c5', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZVKaCr1gUxfgnNZTSwUUT6MVl4Q8coEou7qoWq00Ir8.jpg?auto=webp&s=e07311d62540fa2d7e2ed20171ce0f6dfc798929', 'width': 1200}, 'variants': {}}]} |
Distributed volunteering for model training | 6 | Given the constraints of the GPU poor, I was wondering if there is any volunteer effort or project anyone is aware of that can distribute training across multiple volunteers (something akin to the old SETI@home or [Distributed.net](https://Distributed.net)) | 2023-09-17T10:26:03 | https://www.reddit.com/r/LocalLLaMA/comments/16kx6h3/distributed_volunteering_for_model_training/ | WReyor0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kx6h3 | false | null | t3_16kx6h3 | /r/LocalLLaMA/comments/16kx6h3/distributed_volunteering_for_model_training/ | false | false | self | 6 | null |
Have you had this problem? Grammar is hindering accuracy by introducing bias in llama-2. | 15 | **Details**
I'm running llama-2-13b-chat using llama.cpp on an M1 Mac to classify comments left under people's social media posts as challenging/supportive, and also to categorize the comments by issue.
The system prompt is a generic assistant. The prompt is roughly: Does [comment] relate to [issue]? / Does this comment: [comment] seem to be challenging this post: [post]?
I'm encouraging the model to respond with 'Definitely', 'Mostly', 'Mostly not' and 'Definitely not'.
Usually the model complies, and it is pretty accurate with its classification, but sometimes it adds emojis, puts its response in brackets, or simply replies with an emoji instead, so I added the following 'grammar'.
grammar = '''
root ::= answer
answer ::= ("Definitely" | "Mostly" | "Mostly Not" | "Definitely Not")
'''
However, then the responses become incredibly inaccurate, and it tends to have a strong bias towards one of the answers. i.e. it responds 'definitely not' to all, or 'definitely' to most of the questions.
**Does anyone know why I'm getting this problem with grammar?**
**Context**: My understanding is that grammar just tells the model to ignore any tokens which aren't in the set you have given, so if its response without grammar would be << ✅ (Definitely) >> why would it suddenly say << Definitely Not >> when you introduce grammar? I can parse the emoji-ridden one quite easily, but I'm confused as to why the grammar is harming accuracy so profoundly? I want to use it to return json ideally, but there's no point if the accuracy takes a hit.
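For reference, the call looks roughly like this (simplified sketch; I'm assuming llama-cpp-python's LlamaGrammar API here, and the model file name and variables are illustrative):

    # Simplified version of the classification call, constrained with the grammar above.
    from llama_cpp import Llama, LlamaGrammar

    llm = Llama(model_path="llama-2-13b-chat.Q4_K_M.gguf", n_ctx=4096)  # file name illustrative
    g = LlamaGrammar.from_string(grammar)  # 'grammar' is the string defined above

    # 'comment' and 'post' are the strings being classified.
    prompt = f"Does this comment: {comment} seem to be challenging this post: {post}? Answer:"
    out = llm(prompt, max_tokens=8, grammar=g)
    print(out["choices"][0]["text"])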
I'm open to broader advice on my specific issue, some say that something like Bert might be better, and LLMs are overkill for this task, but I'd like to be able to ask nuanced questions about these comments, so LLMs will be useful for that. | 2023-09-17T07:51:28 | https://www.reddit.com/r/LocalLLaMA/comments/16kuo7t/have_you_had_this_problem_grammar_is_hindering/ | roaceroi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kuo7t | false | null | t3_16kuo7t | /r/LocalLLaMA/comments/16kuo7t/have_you_had_this_problem_grammar_is_hindering/ | false | false | self | 15 | null |
Annotated deep learning paper implementations: Cool repo with annotated implementations of Transformers, their variants(TransformerXL, SwitchTransformers, etc) and other interesting networks (like SD) | 27 | 2023-09-17T06:27:47 | https://github.com/labmlai/annotated_deep_learning_paper_implementations | Maykey | github.com | 1970-01-01T00:00:00 | 0 | {} | 16ktdc9 | false | null | t3_16ktdc9 | /r/LocalLLaMA/comments/16ktdc9/annotated_deep_learning_paper_implementations/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'swosTSsBWmAu9aHBijwsahMx3INAuq4kkzPFfO7Hy_I', 'resolutions': [{'height': 88, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=108&crop=smart&auto=webp&s=0fbfe649d1ee4bc726c73fcb345d61a3c41a65c5', 'width': 108}, {'height': 176, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=216&crop=smart&auto=webp&s=6ff2e50b7ce3107b5fc8dcc0f51e109603ddb8a9', 'width': 216}, {'height': 261, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=320&crop=smart&auto=webp&s=8d94ecf78529caf28dd38df35bd91c5cfa39786a', 'width': 320}, {'height': 523, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=640&crop=smart&auto=webp&s=a29eb11739307d5faae19abeedf28d586bdcf130', 'width': 640}, {'height': 784, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=960&crop=smart&auto=webp&s=5741a40be6a5be241bd444e6c8e1ddf7e58dc675', 'width': 960}, {'height': 882, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?width=1080&crop=smart&auto=webp&s=16739b43fbf21c1310110d45bf0d5965b582f931', 'width': 1080}], 'source': {'height': 1844, 'url': 'https://external-preview.redd.it/QBErmjc0OkORf4rXA_DMNHMPFAc6PFELLMpOG516FEs.jpg?auto=webp&s=fc63e193431f72d1e34ac5d9360ad2d83d12f68a', 'width': 2256}, 'variants': {}}]} | ||
Lost in Libraries - trying to train on unstructured text data. | 3 | Hi all you experts,
I am very much a non-expert, but I love learning and really want to accomplish this task. I have a large text corpus and a Linux machine with one NVIDIA 16 GB GPU. I want to use it to fine-tune an LLM to generate text in a specific voice. I have tried, but I'm not even getting past the opening stages of the code, and I feel like I am just lost in a forest of incompatible libraries, incomprehensible error messages, too-big models, and programming that is frankly above my head (roughly what I've been attempting is sketched below the use-case list). Hugging Face AutoTrain Advanced has a no-code solution, but as far as I can tell it's just for text classification, not generation. I'm not wedded to fine-tuning, if someone thinks embeddings or some other approach would be better.
My use case is:
- train on unstructured text (I have text that is several million tokens long, though I can use a shorter clip if that is better)
- provide a prompt of the form: within the context of this summary of this story (.. 300 word summary) write a paragraph about (20-40 word prompt)
- output: 200-300 word paragraph on the topic of above prompt, written in the author’s voice.
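For reference, what I've been attempting looks roughly like this: a QLoRA-style fine-tune of a small base model on the raw text (pieced together from tutorials, so the model name, file paths, and arguments are just examples and may well be wrong):

    # Attempted approach: 4-bit QLoRA fine-tuning of a ~7B model on a plain-text corpus.
    import torch
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              BitsAndBytesConfig, TrainingArguments)
    from peft import LoraConfig
    from trl import SFTTrainer

    base = "meta-llama/Llama-2-7b-hf"  # example; something ~7B should fit a 16 GB GPU in 4-bit
    tokenizer = AutoTokenizer.from_pretrained(base)
    tokenizer.pad_token = tokenizer.eos_token

    model = AutoModelForCausalLM.from_pretrained(
        base,
        quantization_config=BitsAndBytesConfig(load_in_4bit=True,
                                               bnb_4bit_compute_dtype=torch.float16),
        device_map="auto",
    )

    dataset = load_dataset("text", data_files={"train": "author_corpus.txt"})["train"]

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=1024,
        peft_config=LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM"),
        args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                               gradient_accumulation_steps=8, num_train_epochs=1,
                               learning_rate=2e-4, fp16=True, logging_steps=10),
    )
    trainer.train()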
What do you suggest for a semi-idiot proof approach?
Help me llama gurus!!
Thank you!!! | 2023-09-17T05:12:53 | https://www.reddit.com/r/LocalLLaMA/comments/16ks5sf/lost_in_libraries_trying_to_train_on_unstructured/ | 33toads | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16ks5sf | false | null | t3_16ks5sf | /r/LocalLLaMA/comments/16ks5sf/lost_in_libraries_trying_to_train_on_unstructured/ | false | false | self | 3 | null |
Orange PI 5 running a slightly modified Miku.sh script on 13B codellama with 4 bit quantization. | 17 | So I finally got around to downloading and building llama.cpp on my Orange Pi, and it runs pretty snappily on 7B CodeLlama at 4-bit until the context gets large, but it hallucinates. It seems to stay more coherent with 13B--this is using it for chat, not coding; I have to test coding next--and it runs with the following timings:
    llama_print_timings: load time = 21630.09 ms
    llama_print_timings: sample time = 15825.76 ms / 985 runs ( 16.07 ms per token, 62.24 tokens per second)
    llama_print_timings: prompt eval time = 256590.38 ms / 723 tokens ( 354.90 ms per token, 2.82 tokens per second)
    llama_print_timings: eval time = 664050.52 ms / 985 runs ( 674.16 ms per token, 1.48 tokens per second)
    llama_print_timings: total time = 1165052.03 ms
By the way, I didn't have to modify the compile parameters for this; it compiled and ran out of the box, unlike on my Galaxy S23 Ultra. I just did a git clone and make, then used wget to download the models off of Hugging Face into the models/ directory, modified the Miku.sh script to point to the correct model, and ran it. | 2023-09-17T04:43:32 | https://www.reddit.com/r/LocalLLaMA/comments/16kro1n/orange_pi_5_running_a_slightly_modified_mikush/ | Tasty-Attitude-7893 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 16kro1n | false | null | t3_16kro1n | /r/LocalLLaMA/comments/16kro1n/orange_pi_5_running_a_slightly_modified_mikush/ | false | false | self | 17 | null |