title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Question about fine-tuning LLaMA-65B | 10 | How many 80GB A100s or H100s are required to fine-tune LLaMA-65B? I assume the VRAM requirements would be pretty much double what is required to fine-tune LLaMA-33B, but I'm not certain as I haven't fine-tuned any models before. | 2023-05-22T03:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/13oeu66/question_about_finetuning_llama65b/ | Adventurous_Jelly276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oeu66 | false | null | t3_13oeu66 | /r/LocalLLaMA/comments/13oeu66/question_about_finetuning_llama65b/ | false | false | self | 10 | null |
Any idea Vicuna 13B 4bit model output random content? | 2 | [removed] | 2023-05-22T02:09:55 | https://www.reddit.com/r/LocalLLaMA/comments/13oder5/any_idea_vicuna_13b_4bit_model_output_random/ | direwulf33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13oder5 | false | null | t3_13oder5 | /r/LocalLLaMA/comments/13oder5/any_idea_vicuna_13b_4bit_model_output_random/ | false | false | default | 2 | null |
Is there a subreddit for people interested in actually using a local LLM productively? | 1 | [removed] | 2023-05-21T21:56:07 | https://www.reddit.com/r/LocalLLaMA/comments/13o7lr6/is_there_a_subreddit_for_people_interested_in/ | Careful_Fee_642 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o7lr6 | false | null | t3_13o7lr6 | /r/LocalLLaMA/comments/13o7lr6/is_there_a_subreddit_for_people_interested_in/ | false | false | default | 1 | null |
airoboros-13b -- 98% against gpt-3.5 | 214 | [deleted] | 2023-05-21T21:15:31 | [deleted] | 2023-05-21T21:18:44 | 0 | {} | 13o6kp8 | false | null | t3_13o6kp8 | /r/LocalLLaMA/comments/13o6kp8/airoboros13b_98_against_gpt35/ | false | false | default | 214 | null | ||
airobors-13b - 98% eval vs gpt-3.5-turbo | 1 | [deleted] | 2023-05-21T21:12:52 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13o6icr | false | null | t3_13o6icr | /r/LocalLLaMA/comments/13o6icr/airobors13b_98_eval_vs_gpt35turbo/ | false | false | default | 1 | null | ||
How fast will an RK3588s with a 6 TOPS NPU run LLaMa models? | 1 | [removed] | 2023-05-21T18:36:48 | https://www.reddit.com/r/LocalLLaMA/comments/13o2mem/how_fast_will_an_rk3588s_with_a_6_tops_npu_run/ | JeffersonTerrerfield | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o2mem | false | null | t3_13o2mem | /r/LocalLLaMA/comments/13o2mem/how_fast_will_an_rk3588s_with_a_6_tops_npu_run/ | false | false | default | 1 | null |
Which Open LLMs can I run with RTX 3070ti locally?? | 1 | [removed] | 2023-05-21T17:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/13o0qd9/which_open_llms_can_i_run_with_rtx_3070ti_locally/ | Hot_Adhesiveness_259 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13o0qd9 | false | null | t3_13o0qd9 | /r/LocalLLaMA/comments/13o0qd9/which_open_llms_can_i_run_with_rtx_3070ti_locally/ | false | false | default | 1 | null |
llama.cpp running extremely slow via GPT4ALL | 2 | Hi all,
I compiled llama.cpp to use with GPT4ALL and is providing good output and I am happy with the results. However unfortunately for a simple matching question with perhaps 30 tokens, the output is taking 60 seconds. Output really only needs to be 3 tokens maximum but is never more than 10. I used the standard [GPT4ALL](https://github.com/nomic-ai/gpt4all), and compiled the backend with mingw64 using the directions found [here](https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/README.md). I did use a different fork of llama.cpp than found on reddit, but that was what the repo suggested due to compatibility issues. I am using wizard 7b for reference. | 2023-05-21T14:21:39 | https://www.reddit.com/r/LocalLLaMA/comments/13nwb60/llamacpp_running_extremely_slow_via_gpt4all/ | EmbarrassedSleep6981 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nwb60 | false | null | t3_13nwb60 | /r/LocalLLaMA/comments/13nwb60/llamacpp_running_extremely_slow_via_gpt4all/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'bgEVd_LAkjXY8gB-hohtJcoyOi6Hbzz3BRZs7CeIBug', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=108&crop=smart&auto=webp&s=c9ade8df62f42df46111a920f5aed6b39337dc11', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=216&crop=smart&auto=webp&s=125776cb6a2caa7f380113378af4744c0daf2350', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=320&crop=smart&auto=webp&s=e07667b89e42a31869f3e7887d30c0573e7c8bd9', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=640&crop=smart&auto=webp&s=43988ed3288493d85e9dcdf629be922c9a206214', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=960&crop=smart&auto=webp&s=c1e00506ae60ca387aa621f148b7a2437ebe06a1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?width=1080&crop=smart&auto=webp&s=857363d117597263e03c33aef2dcff950c43c606', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/U_sGAtJE52N3O_bKSZMxffGChehdpYhDUK3m5FfHV34.jpg?auto=webp&s=df6093d7ce9f855bed1870460186c2210effbb77', 'width': 1200}, 'variants': {}}]} |
KoboldCpp updated to v1.24, supports new GGJT v3 quantizations while still maintaining full backwards compatibility. | 101 | - Added support for the new GGJT v3 (q4_0, q4_1 and q8_0) quantization format changes.
- Still retains backwards compatibility with every single historical GGML format (GGML, GGHF, GGJT v1,2,3 + all other formats from supported architectures).
- Supports GGML models for LLAMA, RWKV, GPT-2, GPT-J and GPT-NeoX
- Supports accelerated prompt processing GPU offloading via CLBlast (LLAMA only).
- Kobold Lite has also been upgraded with a few cool new features:
- [Added a toggle to enable basic markdown in instruct mode (off by default). This allows the AI to respond with formatted text.](https://i.imgur.com/eOOEvJw.png)
- Added a toggle to avoid inserting newlines in Instruct mode (good for **Pygmalion, Metharme and OpenAssistant** based instruct models).
- Improved the spinning circle waiting animation to use less processing.
- Fixed a bug with stopping sequences when in streaming mode.
**https://github.com/LostRuins/koboldcpp/releases/latest** | 2023-05-21T12:25:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nrp0r/koboldcpp_updated_to_v124_supports_new_ggjt_v3/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nrp0r | false | null | t3_13nrp0r | /r/LocalLLaMA/comments/13nrp0r/koboldcpp_updated_to_v124_supports_new_ggjt_v3/ | false | false | self | 101 | {'enabled': False, 'images': [{'id': 'Cv2Fvh0FKLhQk-IG0MAQC7avJtRIUrN39qstdvEmMIE', 'resolutions': [{'height': 102, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=108&crop=smart&auto=webp&s=de24083f6f5b6c336aca77ede36252544a7ccc08', 'width': 108}, {'height': 204, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=216&crop=smart&auto=webp&s=43aad8cede383d3fa6bd18f1ca3df33d60ebdf02', 'width': 216}, {'height': 302, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=320&crop=smart&auto=webp&s=3263cef1a5b8739950f64ac6a3561777b54abcff', 'width': 320}, {'height': 604, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?width=640&crop=smart&auto=webp&s=856b97a9b2099ef4e34bae3683dcb7a3abd22912', 'width': 640}], 'source': {'height': 887, 'url': 'https://external-preview.redd.it/mq20r6Fq4HWxfY3jCxdgz33uLlRCaqUtFnPp8nr1Lis.png?auto=webp&s=59485e332cfdc1f5923c371612c6f1261f236b06', 'width': 939}, 'variants': {}}]} |
chart of gpu performance on llms | 1 | [removed] | 2023-05-21T08:55:21 | https://www.reddit.com/r/LocalLLaMA/comments/13nnqtz/chart_of_gpu_performance_on_llms/ | Kurdiwari | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nnqtz | false | null | t3_13nnqtz | /r/LocalLLaMA/comments/13nnqtz/chart_of_gpu_performance_on_llms/ | false | false | default | 1 | null |
Help can't get reply from MetalX Alpasta 30b 4bit | 1 | [removed] | 2023-05-21T08:16:36 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13nn1xl | false | null | t3_13nn1xl | /r/LocalLLaMA/comments/13nn1xl/help_cant_get_reply_from_metalx_alpasta_30b_4bit/ | false | false | default | 1 | null | ||
Models are repeating text several times? | 5 | For some reason with several models, if I submit a prompt I get an answer repeated over and over, rather than just generating it once. For example, the below code...
`from langchain.llms import HuggingFacePipeline`
`import torch`
`from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline`
`model_id = 'databricks/dolly-v2-3b'`
`tokenizer = AutoTokenizer.from_pretrained(model_id)`
`model = AutoModelForCausalLM.from_pretrained(model_id)`
`pipe = pipeline(`
`"text-generation",`
`model=model,`
`tokenizer=tokenizer,`
`max_length=100`
`)`
`local_llm = HuggingFacePipeline(pipeline=pipe)`
`response = local_llm('What is the capital of France? ')`
`print(response)`
This was the output.
`✘ thenomadicaspie@amethyst ~/ai python` [`app.py`](https://app.py)
`Could not import azure.core python package.`
`Xformers is not installed correctly. If you want to use memorry_efficient_attention to accelerate training use the following command to install Xformers`
`pip install xformers.`
`Setting \pad_token_id\` to \`eos_token_id\`:0 for open-end generation.\``
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The capital of France is Paris.`
`What is the capital of France?`
`The`
Researching I've read answers that say it has to do with the max token length, but surely I can't be expected to set the exact token length it needs to be, right? The idea is that it's the max, not that it will continue generating text to fill up the max tokens?
What am I missing? | 2023-05-21T07:32:36 | https://www.reddit.com/r/LocalLLaMA/comments/13nm96l/models_are_repeating_text_several_times/ | TheNomadicAspie | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nm96l | false | null | t3_13nm96l | /r/LocalLLaMA/comments/13nm96l/models_are_repeating_text_several_times/ | false | false | self | 5 | null |
Is the Nvidia K80 with 24GB usable for LLMs? | 11 | Whilst looking for used P40s, I came across some VERY cheap K80s (whatever they are).
Are they usable for running LLMs?
(I don't care about the speed too much .. but the 24GB could be useful) | 2023-05-21T07:27:26 | https://www.reddit.com/r/LocalLLaMA/comments/13nm5ox/is_the_nvidia_k80_with_24gb_usable_for_llms/ | MrEloi | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nm5ox | false | null | t3_13nm5ox | /r/LocalLLaMA/comments/13nm5ox/is_the_nvidia_k80_with_24gb_usable_for_llms/ | false | false | self | 11 | null |
Help with getting started implementing long-term memory on local model? | 9 | I've been learning everything I can about this stuff, but am having trouble finding actual examples of code to alter and learn from, most of the information I can find is more theoretical, so I'm hoping maybe someone could point me in the right direction.
I upgraded my graphics card to a 3060 so I can run the dolly 2.0 3b model. I want to train it to learn information about a business to create a chat bot that can talk coherently about it, or if that isn't feasible I can at least use it to get context about user-submitted data.
I don't think I need a particularly powerful model, because I plan to use Microsoft's guidance to guide it and get very specific information, so it's conversational abilities aren't as important as being able to just recognize certain keywords that a user enters, even if it's slightly mispelled.
Maybe it's overkill to use for this purpose, but I also want to learn about the technology more.
Anyway, I'm interested in implementing some sort of persistent memory so it can remember the entire conversation with a user, and pull data about a business's products, policies, etc.
It seems the way to do this is llama\_index or langchain, or both, and to use either a vector database or I've read a sql database can work also.
I just can't find any actual examples of code to create memory, I can only find videos and articles talking about the general process.
Could anyone point me in the right direction for what I need to do exactly? For example if I look at the docs for llama\_index, it only shows how to use it with OpenAI, but I'm using dolly 2.0, [here's](https://pastebin.com/FivVxeHW) the code...
If anyone could give me a code snippet or a working example I could alter or something, I would appreciate it. I've been trying for days now.
Also, for some reason the current implementation I have repeats the question I ask, is that normal? Here is my chat transcript from talking to it.
You: hey there Chatbot: hey there, I'm a big fan of your blog and look forward to reading it every day! I'm also a big fan of the new look and feel. I was wondering if you could share with us the process of how you came
You: what blog?
Chatbot: what blog? I’m a writer and editor based in London. I’ve been a fan of the NFL since the early 1990s, when I was a kid in New Jersey. I’ve been a fan of the New England
You: who are you?
Chatbot: who are you? I am a software engineer at Databricks. I am also a big fan of the Databricks blog and Databricks Summit. I am also a big fan of the Databricks blog and Databricks
You: what is your favorite color?
Chatbot: what is your favorite color? blue blue is my favorite color. I love the color of the ocean, the sky, and the ocean when the tide is out. I also love the color of the ocean when it's stormy
​
So yeah, if anyone could help me with the memory thing, or training it, or if I do train it then should I wait until I have absolutely all the data it would ever need, or can I train it over time to get better at my specific needs? Just a little lost with what my next steps should be now that I have a somewhat coherent model working locally which is very exciting. | 2023-05-21T07:06:48 | https://www.reddit.com/r/LocalLLaMA/comments/13nlrzp/help_with_getting_started_implementing_longterm/ | TheNomadicAspie | self.LocalLLaMA | 2023-05-21T07:11:28 | 0 | {} | 13nlrzp | false | null | t3_13nlrzp | /r/LocalLLaMA/comments/13nlrzp/help_with_getting_started_implementing_longterm/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]} |
How do I get Local LLM to analyze an whole excel or CSV? | 13 | I am trying to tinker with the idea of ingesting a csv with multiple rows, with numeric and categorical feature, and then extract insights from that document.
Expectation - Local LLM will go through the excel sheet, identify few patterns, and provide some key insights
Right now, I went through various local versions of ChatPDF, and what they do are basically the same concept.
1. Create Embeddings
2. Store them in Vector Database
3. When we ask Local LLM, they just select top few document instances based on top p and k
4. Give answer based on that
But any question which needs the whole document, fails.
e.g. - What is the average age of the users? (Assuming CSV has age/Gender etc. demographic variables)
What are the insights based on age and gender?
​
Is there any implementation guideline or ready made implementation example which can help me with this? | 2023-05-21T06:44:49 | https://www.reddit.com/r/LocalLLaMA/comments/13nld5x/how_do_i_get_local_llm_to_analyze_an_whole_excel/ | DesmonMiles07 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nld5x | false | null | t3_13nld5x | /r/LocalLLaMA/comments/13nld5x/how_do_i_get_local_llm_to_analyze_an_whole_excel/ | false | false | self | 13 | null |
Server. Build/questions. | 3 |
Hey guys I have a few questions, I'd like to build a server with a few gpus in it so me and a couple guys can use it remotely. Is that possible and how would I set it up?
Second question, if I'm going for vram on a budget, could I just use some Nvidia tesla k80's looks like they have 24gb gddr5? Cost 150$. If you have any info at all, it would be greatly appreciated.
Thanks! | 2023-05-21T06:28:55 | https://www.reddit.com/r/LocalLLaMA/comments/13nl2o0/server_buildquestions/ | Emergency-Seaweed-73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nl2o0 | false | null | t3_13nl2o0 | /r/LocalLLaMA/comments/13nl2o0/server_buildquestions/ | false | false | self | 3 | null |
Llama.cpp master-fab49c6 Issues | 2 | [removed] | 2023-05-21T05:30:02 | https://www.reddit.com/r/LocalLLaMA/comments/13njzj7/llamacpp_masterfab49c6_issues/ | mrjackspade | self.LocalLLaMA | 2023-05-21T06:52:07 | 0 | {} | 13njzj7 | false | null | t3_13njzj7 | /r/LocalLLaMA/comments/13njzj7/llamacpp_masterfab49c6_issues/ | false | false | default | 2 | null |
What coding llm is the best? | 75 | So besides GPT4, I have found Codeium to be the best imo. Phind is good for a search engine/code engine. However, I have seen interesting tests with Starcoder. Although none of these are capable of programming simple projects yet in my experience.
​
What has your experience been? Thank you. | 2023-05-21T04:49:07 | https://www.reddit.com/r/LocalLLaMA/comments/13nj7g8/what_coding_llm_is_the_best/ | ArmoredBattalion | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nj7g8 | false | null | t3_13nj7g8 | /r/LocalLLaMA/comments/13nj7g8/what_coding_llm_is_the_best/ | false | false | self | 75 | null |
Following u/faldore's amazing work, I'd like to create an uncensored version of the Alpacino30b model, and am seeking sponsors to help cover the compute costs. | 35 | [removed] | 2023-05-21T03:43:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nhwsh/following_ufaldores_amazing_work_id_like_to/ | sardoa11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nhwsh | false | null | t3_13nhwsh | /r/LocalLLaMA/comments/13nhwsh/following_ufaldores_amazing_work_id_like_to/ | false | false | default | 35 | null |
r/PygmalionAI/ sub is being disowned and being take over by Rouge mod we currently move to r/Pygmalion_ai/ | 1 | [removed] | 2023-05-21T03:40:10 | https://www.reddit.com/r/LocalLLaMA/comments/13nhudv/rpygmalionai_sub_is_being_disowned_and_being_take/ | Merchant_Lawrence | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nhudv | false | null | t3_13nhudv | /r/LocalLLaMA/comments/13nhudv/rpygmalionai_sub_is_being_disowned_and_being_take/ | false | false | default | 1 | null |
I want to share with you my easy peasy llamacpp script | 27 | 2023-05-21T03:39:30 | https://asciinema.org/a/586215 | Evening_Ad6637 | asciinema.org | 1970-01-01T00:00:00 | 0 | {} | 13nhtw0 | false | null | t3_13nhtw0 | /r/LocalLLaMA/comments/13nhtw0/i_want_to_share_with_you_my_easy_peasy_llamacpp/ | false | false | 27 | {'enabled': False, 'images': [{'id': 'sEnAY_QXHB893PK3xk6ClewTprQOoXmv7BOCm8Oh-RU', 'resolutions': [{'height': 79, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=108&crop=smart&auto=webp&s=dd3ba7c980bf22b45c2d42823eccf61ec50c910b', 'width': 108}, {'height': 159, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=216&crop=smart&auto=webp&s=27f28de2cd3e119bbe31806cd04467ccda41f3f4', 'width': 216}, {'height': 236, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=320&crop=smart&auto=webp&s=2d708b059cc3db7502a0a06f7b7509f145f8d492', 'width': 320}, {'height': 472, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=640&crop=smart&auto=webp&s=01501384d1d2d2141b1f9417df4aa0be6f129781', 'width': 640}, {'height': 708, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=960&crop=smart&auto=webp&s=6d1cd6c7508dcba077a803cbb2550dd4c131f349', 'width': 960}, {'height': 797, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?width=1080&crop=smart&auto=webp&s=e748bf2a29fa36e70b67d384d6444674db8e0411', 'width': 1080}], 'source': {'height': 3685, 'url': 'https://external-preview.redd.it/gvGe7sDDVEl3i9MrkEgqUOIJ2Uxyn3PTLHDK5qfY96E.jpg?auto=webp&s=f899abfee8504c416c9b39df70dcbb65af03c90a', 'width': 4993}, 'variants': {}}]} | ||
Error with no groupsize | 1 | If I try to run in ooba a gptq model made with no groupsize like VicUnlocked-30B-LoRA-GPTQ, I get this error:
>NO\_GROUP: tl.constexpr, BLOCK\_SIZE\_M: tl.constexpr, BLOCK\_SIZE\_N: tl.constexpr, BLOCK\_SIZE\_K: tl.constexpr, GROUP\_SIZE\_M: tl.constexpr):“”"Compute the matrix multiplication C = A x B.A is of shape (M, K) float16B is of shape (K//8, N) int32C is of shape (M, N) float16scales is of shape (G, N) float16zeros is of shape (G, N) float16g\_ptr is of shape (K) int32“”"infearure\_per\_bits = 32 // bits
But if the model has a groupsize it works. I tried openassistants 30b no-groupsize gptq model and 128-groupsize gptq model and the no groupsize model gives the same error while the 128 groupsize model loads correctly.
My settings:
​
https://preview.redd.it/u8nyfua1r31b1.png?width=2524&format=png&auto=webp&s=1601d7ebf659430094008a3cf98e2991df723228
​
https://preview.redd.it/miwaucf2r31b1.png?width=2531&format=png&auto=webp&s=b75586a473708c29a96037e21a2b42932bf21fba
I am running this on runpod servers if that makes a difference.
I inputted the settings in gptq as per instructions for the models. How can I fix this? Any help appreciated.
​
Full error below if needed:
>Traceback (most recent call last):File “”, line 21, in matmul\_248\_kernelKeyError: (‘2-.-0-.-0-1e8410f206c822547fb50e2ea86e45a6-d6252949da17ceb5f3a278a70250af13-3b85c7bef5f0a641282f3b73af50f599-3d2aedeb40d6d81c66a42791e268f98b-3498c340fd4b6ee7805fd54b882a04f5-e1f133f98d04093da2078dfc51c36b72-b26258bf01f839199e39d64851821f26-d7c06e3b46e708006c15224aac7a1378-f585402118c8a136948ce0a49cfe122c’, (torch.float16, torch.int32, torch.float16, torch.float16, torch.int32, torch.int32, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’, ‘i32’), (True, 16, 32, 128, 8), (True, True, True, True, True, True, (False, True), (True, False), (True, False), (False, False), (False, False), (True, False), (False, True), (True, False), (False, True), (True, False), (False, True), (True, False), (True, False)))
>
>During handling of the above exception, another exception occurred:
>
>Traceback (most recent call last):File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 937, in build\_triton\_irgenerator.visit(fn.parse())File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 183, in visit\_Moduleast.NodeVisitor.generic\_visit(self, node)File “/usr/lib/python3.10/ast.py”, line 426, in generic\_visitself.visit(item)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 252, in visit\_FunctionDefhas\_ret = self.visit\_compound\_statement(node.body)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 177, in visit\_compound\_statementself.last\_ret\_type = self.visit(stmt)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 486, in visit\_Ifself.visit\_compound\_statement(node.body)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 177, in visit\_compound\_statementself.last\_ret\_type = self.visit(stmt)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 301, in visit\_Assignvalues = self.visit(node.value)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 338, in visit\_BinOplhs = self.visit(node.left)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 339, in visit\_BinOprhs = self.visit(node.right)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 618, in visit\_Subscriptlhs = self.visit(node.value)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 855, in visitreturn super().visit(node)File “/usr/lib/python3.10/ast.py”, line 418, in visitreturn visitor(node)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 325, in visit\_Namereturn self.get\_value([node.id](http://node.id/))File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 156, in get\_valueraise ValueError(f’{name} is not defined’)ValueError: zeros\_shifter is not defined
>
>The above exception was the direct cause of the following exception:
>
>Traceback (most recent call last):File “/root/text-generation-webui/server.py”, line 68, in load\_model\_wrappershared.model, shared.tokenizer = load\_model(shared.model\_name)File “/root/text-generation-webui/modules/models.py”, line 95, in load\_modeloutput = load\_func(model\_name)File “/root/text-generation-webui/modules/models.py”, line 275, in GPTQ\_loadermodel = modules.GPTQ\_loader.load\_quantized(model\_name)File “/root/text-generation-webui/modules/GPTQ\_loader.py”, line 177, in load\_quantizedmodel = load\_quant(str(path\_to\_model), str(pt\_path), shared.args.wbits, shared.args.groupsize, kernel\_switch\_threshold=threshold)File “/root/text-generation-webui/modules/GPTQ\_loader.py”, line 96, in \_load\_quantquant.autotune\_warmup\_linear(model, transpose=not eval)File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/quant\_linear.py”, line 436, in autotune\_warmup\_linearmatmul248(a, qweight, scales, qzeros, g\_idx, bits, maxq, no\_group)File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/quant\_linear.py”, line 283, in matmul248matmul\_248\_kernel\[grid\](input, qweight, output, scales, qzeros, g\_idx, input.shape\[0\], qweight.shape\[1\], input.shape\[1\], bits, maxq, input.stride(0), input.stride(1), qweight.stride(0),File “/root/text-generation-webui/repositories/GPTQ-for-LLaMa/quant/custom\_autotune.py”, line 102, in runreturn self.fn.run(\*args, num\_warps=config.num\_warps, num\_stages=config.num\_stages, \*\*kwargs, \*\*config.kwargs)File “”, line 41, in matmul\_248\_kernelFile “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 1621, in compilenext\_module = compile(module)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 1550, inlambda src: ast\_to\_ttir(src, signature, configs\[0\], constants)),File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 962, in ast\_to\_ttirmod, \_ = build\_triton\_ir(fn, signature, specialization, constants)File “/usr/local/lib/python3.10/dist-packages/triton/compiler.py”, line 942, in build\_triton\_irraise CompilationError(fn.src, node) from etriton.compiler.CompilationError: at 41:26:def matmul\_248\_kernel(a\_ptr, b\_ptr, c\_ptr, scales\_ptr, zeros\_ptr, g\_ptr, M, N, K, bits, maxq, stride\_am, stride\_ak, stride\_bk, stride\_bn, stride\_cm, stride\_cn, stride\_scales, stride\_zeros,NO\_GROUP: tl.constexpr, BLOCK\_SIZE\_M: tl.constexpr, BLOCK\_SIZE\_N: tl.constexpr, BLOCK\_SIZE\_K: tl.constexpr, GROUP\_SIZE\_M: tl.constexpr):“”"Compute the matrix multiplication C = A x B.A is of shape (M, K) float16B is of shape (K//8, N) int32C is of shape (M, N) float16scales is of shape (G, N) float16zeros is of shape (G, N) float16g\_ptr is of shape (K) int32“”"infearure\_per\_bits = 32 // bits
>
>pid = tl.program\_id(axis=0) num\_pid\_m = tl.cdiv(M, BLOCK\_SIZE\_M) num\_pid\_n = tl.cdiv(N, BLOCK\_SIZE\_N) num\_pid\_k = tl.cdiv(K, BLOCK\_SIZE\_K) num\_pid\_in\_group = GROUP\_SIZE\_M \* num\_pid\_n group\_id = pid // num\_pid\_in\_group first\_pid\_m = group\_id \* GROUP\_SIZE\_M group\_size\_m = min(num\_pid\_m - first\_pid\_m, GROUP\_SIZE\_M) pid\_m = first\_pid\_m + (pid % group\_size\_m) pid\_n = (pid % num\_pid\_in\_group) // group\_size\_m offs\_am = pid\_m \* BLOCK\_SIZE\_M + tl.arange(0, BLOCK\_SIZE\_M) offs\_bn = pid\_n \* BLOCK\_SIZE\_N + tl.arange(0, BLOCK\_SIZE\_N) offs\_k = tl.arange(0, BLOCK\_SIZE\_K) a\_ptrs = a\_ptr + (offs\_am\[:, None\] \* stride\_am + offs\_k\[None, :\] \* stride\_ak) # (BLOCK\_SIZE\_M, BLOCK\_SIZE\_K) a\_mask = (offs\_am\[:, None\] < M) # b\_ptrs is set up such that it repeats elements along the K axis 8 times b\_ptrs = b\_ptr + ((offs\_k\[:, None\] // infearure\_per\_bits) \* stride\_bk + offs\_bn\[None, :\] \* stride\_bn) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N) g\_ptrs = g\_ptr + offs\_k # shifter is used to extract the N bits of each element in the 32-bit word from B scales\_ptrs = scales\_ptr + offs\_bn\[None, :\] zeros\_ptrs = zeros\_ptr + (offs\_bn\[None, :\] // infearure\_per\_bits) if NO\_GROUP: scales = tl.load(scales\_ptrs) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N,) zeros = tl.load(zeros\_ptrs) # (BLOCK\_SIZE\_K, BLOCK\_SIZE\_N,) zeros = (zeros >> zeros\_shifter\[None, :\]) & maxq \^ | 2023-05-21T03:16:54 | https://www.reddit.com/r/LocalLLaMA/comments/13nhdng/error_with_no_groupsize/ | RabbitEater2 | self.LocalLLaMA | 2023-05-21T03:20:31 | 0 | {} | 13nhdng | false | null | t3_13nhdng | /r/LocalLLaMA/comments/13nhdng/error_with_no_groupsize/ | false | false | 1 | null | |
In July, the GeForce RTX 4060 Ti (16GB) arrives in stores, starting at $499 | 39 | This seems like a slow card for gamers compared to other options, but a good value for language models due to price and 16 GB VRAM? | 2023-05-21T01:24:49 | https://www.nvidia.com/en-us/geforce/news/geforce-rtx-4060-4060ti/ | ThePseudoMcCoy | nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 13nf0mw | false | null | t3_13nf0mw | /r/LocalLLaMA/comments/13nf0mw/in_july_the_geforce_rtx_4060_ti_16gb_arrives_in/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'c5hFdn7sJoD-8ROp9zMR2A6pfZmGcUQsIr1CDmLSbOU', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=108&crop=smart&auto=webp&s=7fdc802fa250f3ba72ccff76e15013d09210af70', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=216&crop=smart&auto=webp&s=38d6fa2a96c0fdd0de144e598cef5986530c8ff3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=320&crop=smart&auto=webp&s=353a6504e1aaed372ee68be9249f95281968b06c', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=640&crop=smart&auto=webp&s=59c15e3198002c1b9556a0c89b80ef71d11f7272', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=960&crop=smart&auto=webp&s=cba78bcd84bf8d3dbc0db3175e28184dcf34059c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?width=1080&crop=smart&auto=webp&s=e1d8abb3bf93620058fdd10b25519cd8c626d625', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/CxjooKgLJZlMBaTAWdIYeZD08aY2JRT2zXsUa6e22Nk.jpg?auto=webp&s=bdbd9e15baafc91a9b9cf3d29016688306f68d2c', 'width': 1200}, 'variants': {}}]} | |
Weird Dual GPU setup in Ooba | 3 | I recently got a 2nd video card with more VRAM (a6000 48gb) to try and load some bigger models, however I am trying to do a weird setup — I am trying to load 0mb into my first card and have the entire model loaded into my 2nd card (as I don’t want to slow things down inferencing across both cards if it will all fit into the 2nd card)
I have tried using —gpu-memory 0 47 which looks like it will load the model into the 2nd card but ~2GB gets loaded into the first card during inference..
Using a 30b model I’m getting **20 tokens/sec** using —gpu-memory 24 0 (and everything is being done on my first device) and I am getting **2 tokens/sec** using —gpu-memory 0 48.. (with it doing this weird “split”?)
This can’t be normal? Is there anything I can do? I am totally new to dual GPU setups, so maybe I need to manually set a device map? Any help? | 2023-05-21T00:03:39 | https://www.reddit.com/r/LocalLLaMA/comments/13nd96e/weird_dual_gpu_setup_in_ooba/ | disarmyouwitha | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nd96e | false | null | t3_13nd96e | /r/LocalLLaMA/comments/13nd96e/weird_dual_gpu_setup_in_ooba/ | false | false | self | 3 | null |
Powerpointer - Generate entire powerpoints using local large language models | 66 | Hello community
I created a powerpoint generator which can generate diverse, good looking powerpoints using models like Vicuna or Alpaca. It uses text-generation-webui --api as a backend, so you can use text-generation-webui and the generator at the same time.
You can select between 7 powerpoint designs and it also makes placeholders for images!
Here is the code which also shows some example powerpoint generations: [CyberTimon/Powerpointer-For-Local-LLMs: Local Powerpointer](https://github.com/CyberTimon/Powerpointer-For-Local-LLMs)
Please give it a start if you like it and feel free to open issues or fix my code. | 2023-05-20T22:42:24 | https://www.reddit.com/r/LocalLLaMA/comments/13nbfvz/powerpointer_generate_entire_powerpoints_using/ | zBlackVision11 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13nbfvz | false | null | t3_13nbfvz | /r/LocalLLaMA/comments/13nbfvz/powerpointer_generate_entire_powerpoints_using/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'pdGxjYtEWUuyNxeS-Pa-skIbwqcKhZ-Vh3KV8j4saeo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=108&crop=smart&auto=webp&s=d482271b3b69892d052f7b2a25fc4dfffbb727a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=216&crop=smart&auto=webp&s=72301fc6f91b765ac2e32bede043c8dc9180b70c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=320&crop=smart&auto=webp&s=385be92892518a1bc6ebfe83b40cf26ec1b7c380', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=640&crop=smart&auto=webp&s=8998ae0344b665f0f3a1bd2e3b3a9ee00df5bec4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=960&crop=smart&auto=webp&s=aa57fd63603a48e6c3f3279353b542747386274b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?width=1080&crop=smart&auto=webp&s=08624a3cc2fdac65f624a7645a776567bbd417f1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/5kQP8mDp0ujioqEPiw1VoXsFdaY5FsVn5wYmpete8u0.jpg?auto=webp&s=b7b7e0074d7391fba22294cde6544733b6dcf6dc', 'width': 1200}, 'variants': {}}]} |
Need Assistance Running the starcoder-GPTQ-4bit-128g Model with missing config.json in text-generation-webui | 7 | Hello LocalLLaMA community,
I've been trying to load the [starcoder-GPTQ-4bit-128g model](https://huggingface.co/mayank31398/starcoder-GPTQ-4bit-128g) into the [text-generation-webui by oobabooga](https://github.com/oobabooga/text-generation-webui) but have run into some difficulties due to missing files. Specifically, the model appears to lack necessary configuration files like 'config.json'.
In an attempt to resolve this, I tried copying the config files from the [starcoder model on Hugging Face](https://huggingface.co/bigcode/starcoder/tree/main). This allowed the model to load, but unfortunately when I try to get a response from the model, the following error shows up:
`next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)`
`RuntimeError: probability tensor contains either 'inf', 'nan' or element < 0`
My level of experience doesn't extend to modifying these files or troubleshooting this error.
I'm reaching out in hopes that someone in this community might have encountered a similar situation and could share some advice or guidance on how to proceed.
If anyone has insight on the RuntimeError I'm encountering, I'd greatly appreciate your thoughts.
Any help or advice would be greatly appreciated. Thank you! | 2023-05-20T22:10:20 | https://www.reddit.com/r/LocalLLaMA/comments/13naoyg/need_assistance_running_the_starcodergptq4bit128g/ | LaCipe | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13naoyg | false | null | t3_13naoyg | /r/LocalLLaMA/comments/13naoyg/need_assistance_running_the_starcodergptq4bit128g/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': 'fg9qOeYrOPWrI8Sr0baIRR_z7q7sym25M66JFFcrTAg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=108&crop=smart&auto=webp&s=b523133e0a3b86ea433e83f4780fd2f724ecbe64', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=216&crop=smart&auto=webp&s=9b476110ef5070e809421db0dd27878de62ddf7c', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=320&crop=smart&auto=webp&s=84134154d4eab25bc4ad57a478693f8b7edc4f8b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=640&crop=smart&auto=webp&s=24384160e741e4711888d7395e7957e4fc5a0abc', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=960&crop=smart&auto=webp&s=f060994a6fad64106bbe2ac339db12365720f449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?width=1080&crop=smart&auto=webp&s=653f2d44897f05ba8e0dc759d2a39f901c1fbf88', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XLddIZQFNFNrgKu2alX2VHzkxhMdl1MtB4EOsXn_bik.jpg?auto=webp&s=ca2cb5b6a069e64bbd46d3ccad463d1cbfe86411', 'width': 1200}, 'variants': {}}]} |
Best model for game "AI agent"? | 11 | Hi,
So basically I'm looking for a model suitable to be used as an AI agent back-end.
What I expect from it is to be really good at following instructions (and also not too restrictive when it comes to violence).
I was trying out requests like this:
*Player is doing X, saying Y, you are this NPC XY, your possible actions are X,Y,Z and you can say something. Please provide answer in format {Action, LineToSay}*
Is there somebody who is playing with the same thing and wants to share their experience/templates with some models? I was playing around with few of them, gpt4-alpaca, vicuna, wizardlm but I wasn't able to get it behave in consistent way.
It refuses to follow the format, or it follows the format but it just starts listing out all possible options, or it starts going on rampage by writing for the player too etc...
7B model would be ideal and I don't care about license that much. This isn't something I plan to distribute. | 2023-05-20T22:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/13naggc/best_model_for_game_ai_agent/ | kamtar | self.LocalLLaMA | 2023-05-20T22:04:48 | 0 | {} | 13naggc | false | null | t3_13naggc | /r/LocalLLaMA/comments/13naggc/best_model_for_game_ai_agent/ | false | false | self | 11 | null |
Which file formats should I use if I have a GPU | 5 | I understand that there are different models. I know of three interfaces: gpt4all, llama.ccp and oogabooga. I don't understand file formats. The latest Manticore releases comes in different formats. What combo of interface/format should I use for a 3080? | 2023-05-20T21:45:54 | https://www.reddit.com/r/LocalLLaMA/comments/13na3a5/which_file_formats_should_i_use_if_i_have_a_gpu/ | infohawk | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13na3a5 | false | null | t3_13na3a5 | /r/LocalLLaMA/comments/13na3a5/which_file_formats_should_i_use_if_i_have_a_gpu/ | false | false | self | 5 | null |
Combining weights (fine tunes of the same base) | 3 | [deleted] | 2023-05-20T21:35:21 | [deleted] | 2023-06-23T07:06:10 | 0 | {} | 13n9u58 | false | null | t3_13n9u58 | /r/LocalLLaMA/comments/13n9u58/combining_weights_fine_tunes_of_the_same_base/ | false | false | default | 3 | null | ||
My results using a Tesla P40 | 101 | TL;DR at bottom
So like many of you, I feel down the AI text gen rabbit hole. My wife has been severely addicted to all things chat AI, so it was only natural. Our previous server was running a 3500 core i-5 from over a decade ago, so we figured this would be the best time to upgrade. We got a P40 as well for gits and shiggles because if it works, great, if not, not a big investment loss and since we're upgrading the server, might as well see what we can do.
For reference, mine and my wife's PCs are identical with the exception of GPU.
Our home systems are:
Ryzen 5 3800X, 64gb memory each. My GPU is a RTX 4080, hers is a RTX 2080.
Using the Alpaca 13b model, I can achieve \~16 tokens/sec when in instruct mode. My wife can get \~5 tokens/sec (but she's having to use the 7b model because of VRAM limitations). She also switched to mostly CPU so she can use larger models, so she hasn't been using her GPU.
We initially plugged in the P40 on her system (couldn't pull the 2080 because the CPU didn't have integrated graphics and still needed a video out). Nvidia griped because of the difference between datacenter drivers and typical drivers. Once drivers were sorted, it worked like absolute crap. Windows was forcing shared VRAM, and even though we could show via the command 'nvidia-smi' that the P40 was being used exclusively, either text gen or windows was forcing to try to share the load through the PCI bus. Long story short, got \~2.5 tokens/sec with the 30b model.
Finished building the new server this morning. i7 13700 w/64g ram. Since this was a dedicated box and with integrated graphics, we went solid datacenter drivers. No issues whatsoever. 13b model achieved \~15 tokens/sec. 30b model achieved 8-9 tokens/sec. When using text gen's streaming, it looked as fast as ChatGPT.
TL;DR
7b alpaca model on a 2080 : \~5 tokens/sec
13b alpaca model on a 4080: \~16 tokens/sec
13b alpaca model on a P40: \~15 tokens/sec
30b alpaca model on a P40: \~8-9 tokens/sec
Next step is attaching a blower via 3D printed cowling because the card gets HOT despite having some solid airflow in the server chassis then, picking up a second P40 and an NVLink bridge to then attempt to run a 65b model. | 2023-05-20T20:47:26 | https://www.reddit.com/r/LocalLLaMA/comments/13n8bqh/my_results_using_a_tesla_p40/ | AsheramL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n8bqh | false | null | t3_13n8bqh | /r/LocalLLaMA/comments/13n8bqh/my_results_using_a_tesla_p40/ | false | false | self | 101 | null |
Running a small model on a phone? | 8 | Anyone have any luck running a small model on a phone. I seen the llama.cpp repo an it has a section to get a model running on android.
thanks for tha downvotes much appreciated.
​ | 2023-05-20T20:12:27 | https://www.reddit.com/r/LocalLLaMA/comments/13n6c3z/running_a_small_model_on_a_phone/ | ScoobySnackzz12 | self.LocalLLaMA | 2023-05-21T13:24:58 | 0 | {} | 13n6c3z | false | null | t3_13n6c3z | /r/LocalLLaMA/comments/13n6c3z/running_a_small_model_on_a_phone/ | false | false | self | 8 | null |
Why can't we use edge TPUs (like coral AI) with inferencing? | 5 | I ask this from extreme amounts of ignorance, if we can load a full model into system ram, what's stopping the TPU from participating in calculations?
Or is it just that GPU ram is that much faster than system ram and that's the real reason they're better than CPUs? | 2023-05-20T20:07:18 | https://www.reddit.com/r/LocalLLaMA/comments/13n613w/why_cant_we_use_edge_tpus_like_coral_ai_with/ | noneabove1182 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n613w | false | null | t3_13n613w | /r/LocalLLaMA/comments/13n613w/why_cant_we_use_edge_tpus_like_coral_ai_with/ | false | false | self | 5 | null |
Is there such a thing as local Llamas integrated into VSCode? | 17 | Hi, I’m wondering if someone already created a plug-in for using these models directly from VSCode?
It should be pretty trivial to connect a VSCode plugin to the text-generation-web-ui API, and it could be interesting when used with models that can generate code.
If there’s not one already, I’d look into building it, anyone else interested? | 2023-05-20T19:15:41 | https://www.reddit.com/r/LocalLLaMA/comments/13n3ikw/is_there_such_a_thing_as_local_llamas_integrated/ | rustedbits | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n3ikw | false | null | t3_13n3ikw | /r/LocalLLaMA/comments/13n3ikw/is_there_such_a_thing_as_local_llamas_integrated/ | false | false | self | 17 | null |
Tired of errors | 2 | I have spent the whole day trying to work with a LlaMa model and have tried different versions of different models. I have also tried different version of the library to no avail. I keep getting the following error:
error loading model: unknown (magic, version) combination: 67676a74, 00000003; is this really a GGML file? llama\_init\_from\_file: failed to load model
Can anyone tell me a version of llama-cpp-python and a LlaMa model that are guaranteed to work together? | 2023-05-20T17:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/13n1cwg/tired_of_errors/ | Horcruxno13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13n1cwg | false | null | t3_13n1cwg | /r/LocalLLaMA/comments/13n1cwg/tired_of_errors/ | false | false | self | 2 | null |
Install Ooba textgen + llama.cpp with GPU support on Windows via WSL2 | 11 | 2023-05-20T17:45:03 | https://pastebin.com/GwSysUxj | _underlines_ | pastebin.com | 1970-01-01T00:00:00 | 0 | {} | 13n19cu | false | null | t3_13n19cu | /r/LocalLLaMA/comments/13n19cu/install_ooba_textgen_llamacpp_with_gpu_support_on/ | false | false | 11 | {'enabled': False, 'images': [{'id': 'OgFzGCIRw1ZxjMOSkfV1OiH-_nQiZl8rzSonmOAuhGs', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?width=108&crop=smart&auto=webp&s=df9c6a296446d05d873c629a30253398c4d29c1b', 'width': 108}], 'source': {'height': 150, 'url': 'https://external-preview.redd.it/-WiKXADWH5lgU4gQv5fcDAQ9QKNBZTJ-D83BykIL2HA.jpg?auto=webp&s=07c121a0180003f7373863af66192b6ff6a937da', 'width': 150}, 'variants': {}}]} | ||
Issues running oobabooga_windows after reinstall | 3 | [removed] | 2023-05-20T17:42:30 | [deleted] | 2023-05-20T18:09:55 | 0 | {} | 13n175h | false | null | t3_13n175h | /r/LocalLLaMA/comments/13n175h/issues_running_oobabooga_windows_after_reinstall/ | false | false | default | 3 | null | ||
Why do sometimes the generation of and answer become super slow? It's pretty consistent but sometimes It Just slows down randomly. What can be the issue? | 2 | 2023-05-20T17:24:58 | DarkMamm | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 13n0sfu | false | null | t3_13n0sfu | /r/LocalLLaMA/comments/13n0sfu/why_do_sometimes_the_generation_of_and_answer/ | false | false | 2 | {'enabled': True, 'images': [{'id': 'i43IcbXpRSBIeaubyURCgWKMeNijEnBbwovsWaT0pBY', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=108&crop=smart&auto=webp&s=7130d72105f5c32e6df488847fca7f583c508b38', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=216&crop=smart&auto=webp&s=d9097b11f8e8de2a0059deb787d24db4e40813ca', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=320&crop=smart&auto=webp&s=dd27c3f6184010fc457069b1893f4b0881057173', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=640&crop=smart&auto=webp&s=0ca86d7c693578904b44285a8addb1ef511a76c6', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=960&crop=smart&auto=webp&s=5d63b171583ec4ce2911284dd9619de71f2f32d2', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?width=1080&crop=smart&auto=webp&s=78e442f1c308f015118a75aa1731d781e8e510dd', 'width': 1080}], 'source': {'height': 2400, 'url': 'https://external-preview.redd.it/UUijalk3-gWq561GeisZ5OW98-z-EBKAEJYRYd-tiys.jpg?auto=webp&s=83ee164e83187fe4f4609368502390a0337903f9', 'width': 3200}, 'variants': {}}]} | |||
Todd Lora horning up models. | 34 | I've been working on a LoRA using the dataset from the Todd Proxy (I think) dump from 4chan. This time trained on a 13b llama at 128 rank for 1 epoch. Works on llama derived 13bs. The goal is to de-censor models and make them write longer.
This run took 12 or so hours. There are 50k+ unique prompts in the dataset. I ran for 1 epoch but I should do 2 for maximum effect. Eventually I want to combine it with the blue moon RP set and have a nice model uncensoring/RP LoRA. Would like to get it right before I commit to a dataset 2 or 3x the size.
Here is an example of what it does: https://postimg.cc/gallery/VSYpPR8
It's even starting to crack vicuna.
LoRA: https://huggingface.co/autobots/ToddLora_13b_v2
Next run I will up the rank to 256 and see if the alpaca-lora repo is any faster than textgenUI.
Any hints/tips/criticisms? | 2023-05-20T16:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/13mz27l/todd_lora_horning_up_models/ | a_beautiful_rhind | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mz27l | false | null | t3_13mz27l | /r/LocalLLaMA/comments/13mz27l/todd_lora_horning_up_models/ | false | false | self | 34 | {'enabled': False, 'images': [{'id': 'UVTttE2p7GzqDHO34XkKrnF4dV0HPW2BiXUCkfJMtiw', 'resolutions': [{'height': 46, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=108&crop=smart&auto=webp&s=81e5cb52a3d32b84fb88a159f6e69c44f6ed20cb', 'width': 108}, {'height': 93, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=216&crop=smart&auto=webp&s=9eccfea6cec450bd4cd06b380045f29562158120', 'width': 216}, {'height': 138, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=320&crop=smart&auto=webp&s=c921e132452131d33207e016899a7c6a9c480d1c', 'width': 320}, {'height': 276, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?width=640&crop=smart&auto=webp&s=ab98eb71dc0b6392eed08eec7a5be1d853d70ef9', 'width': 640}], 'source': {'height': 400, 'url': 'https://external-preview.redd.it/ifMhJ82lD5gYaz0Uab43MOZPV7i8JVXp3l4T9EfyfcQ.jpg?auto=webp&s=703b17e9f1801b54ee92cc6d100dcf75f317cc9c', 'width': 925}, 'variants': {}}]} |
You can now relive the infamous "I'm sorry, Dave, I'm afraid I can't do that" scene | 21 | 2023-05-20T16:11:17 | https://www.reddit.com/gallery/13myzvb | belladorexxx | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 13myzvb | false | null | t3_13myzvb | /r/LocalLLaMA/comments/13myzvb/you_can_now_relive_the_infamous_im_sorry_dave_im/ | false | false | 21 | null | ||
Looking for a way to simulate chaotic thought process | 6 | I'd like to simulate random thought process.
For example, if I ask "give me a random word", the answer I get is often pretty deterministic and limited to a narrow range of answers, even when playing with temperature, top\_p and top\_k which also have the side effect or missing the instruction's point completely if outside of a reasonable range. A better way, I think, could be by introducing randomness within the conversation to simulate chaotic thoughts. Maybe one way that could be achieved is via introducing character bias.
For instance, if my prompt is:
### User:
Give me a random word
### Assistant:
The answer lacks variety. It would give words such as "elephant" and "apple". Understandably, the model produces a probabilistic completion answer based on previous tokens.
So, I'd like to inject chaos with a random number generator by producing random but meaningful tokens that would introduce a random bias, and turn such prompt into something like:
### User:
Give me a random word
### __RANDOM_PERSONALITY_WORD__ Assistant:
Which could become:
### User:
Give me a random word
### Evil Assistant:
The word is "Ninja".
Such small alteration of the prompt introduces a small but powerful controlled bias which I think can achieve simulated chaotic through process.
Now the question is: what is the best way of introducing such bias in the conversation? I thought about random personalities (character bias), but not sure how effective that would be, and it seems quite complex. Alternatively, I thought about a system prompt, such as `Below is an instruction that describes a task. As a helpful assistant with a personality#__RANDOM_NUMBER__, write a response that appropriately completes the request.` But that doesn't seem to affect the answers much as "personality#NUMBER" is treated too softly and only appears to introduce the same structural bias regardless of the NUMBER.
Most importantly I'd like this to be applied to a wide range of user requests, such as:
* Give me a random word
* Describe a person
* Write a story about a topic of your own choosing
* Write a poem
etc.
Any ideas on the matter? | 2023-05-20T15:51:27 | https://www.reddit.com/r/LocalLLaMA/comments/13myhno/looking_for_a_way_to_simulate_chaotic_thought/ | Thireus | self.LocalLLaMA | 2023-05-20T15:54:44 | 0 | {} | 13myhno | false | null | t3_13myhno | /r/LocalLLaMA/comments/13myhno/looking_for_a_way_to_simulate_chaotic_thought/ | false | false | self | 6 | null |
13B 4bit or 7B 8bits | 23 |
Hello, I was just wondering what would be the best in terms of speed or quality and what is for you the best compromise for the ehartford's Wizard Uncensored.
I use a ryzen 7600x, 16GB ddr4 3200 ram and a 8gb 3060ti | 2023-05-20T15:20:39 | https://www.reddit.com/r/LocalLLaMA/comments/13mxq66/13b_4bit_or_7b_8bits/ | PapyplO | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxq66 | false | null | t3_13mxq66 | /r/LocalLLaMA/comments/13mxq66/13b_4bit_or_7b_8bits/ | false | false | self | 23 | null |
Devs and local LLM users | 1 | [removed] | 2023-05-20T15:17:33 | https://www.reddit.com/r/LocalLLaMA/comments/13mxnbq/devs_and_local_llm_users/ | drbh_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxnbq | false | null | t3_13mxnbq | /r/LocalLLaMA/comments/13mxnbq/devs_and_local_llm_users/ | false | false | default | 1 | null |
Problems with ouput in the Text generation web UI | 1 | Hi,
I just installed the 4-bit LLaMa following this article:
[https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how\_to\_install\_llama\_8bit\_and\_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
After that, I downloaded the WizardLM-7B-uncensored-GPTQ model over here:
[https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/main](https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ/tree/main)
In the models folder, I created a new folder called WizardLM-7B-uncensored-GPTQ-4bit-128g.compat.no-act-order and put all the downloaded files there.
After that, I ran the [server.py](https://server.py) script, it recognized the model and loaded it. So far so good, but when I put a prompt into the input window and click Generate, the output just parrots what's in the input window, nothing is effectively generated. Why is that?
I would appreciate your help, thanks. | 2023-05-20T15:11:49 | https://www.reddit.com/r/LocalLLaMA/comments/13mxhxx/problems_with_ouput_in_the_text_generation_web_ui/ | Liparu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mxhxx | false | null | t3_13mxhxx | /r/LocalLLaMA/comments/13mxhxx/problems_with_ouput_in_the_text_generation_web_ui/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'VeQ7J7AHAfqAyuWDlt6KtqZAD2MPOHJxpZV7LzS8g9k', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=108&crop=smart&auto=webp&s=3162912114c90470ebbdaaddbe50c701814f7db0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=216&crop=smart&auto=webp&s=06e29ebab1871569b6b0de1dc19dabd6c13b195f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=320&crop=smart&auto=webp&s=001e80f1c0c3e33f6aa4f5a23737e7a2f51c17c2', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=640&crop=smart&auto=webp&s=5794c3b53d2f33ca9a87494b83d9c3adb361bc55', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=960&crop=smart&auto=webp&s=71d7f260081cf4936f27a96fc3f4be0c48684926', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?width=1080&crop=smart&auto=webp&s=a2bbe90df85c92de03f32e1b9339c7ca3d61603f', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ttp7PNmuZ-Sf15wnMLyeV0CqNYgpdylensMpV1pjvm4.jpg?auto=webp&s=a25715c578d81f56bc706b69123a3fc378620325', 'width': 1200}, 'variants': {}}]} |
LambdaLabs offering free compute for 30 days to train open models | 170 | 2023-05-20T14:56:30 | https://twitter.com/boborado/status/1659608452849897472?t=nweVmAjfSfklvHmoSr2txw&s=19 | 404underConstruction | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 13mwupt | false | {'oembed': {'author_name': 'Robert Brooks IV', 'author_url': 'https://twitter.com/boborado', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I have 248 H100 SXM5s networked with 3200 Gbps Infiniband just sitting in front of me. Ready to go. <br><br>Who wants to do something out of this world cool? Let’s train your foundation model or LLM. <br><br>DMs open.</p>— Robert Brooks IV (@boborado) <a href="https://twitter.com/boborado/status/1659608452849897472?ref_src=twsrc%5Etfw">May 19, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/boborado/status/1659608452849897472', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_13mwupt | /r/LocalLLaMA/comments/13mwupt/lambdalabs_offering_free_compute_for_30_days_to/ | false | false | 170 | {'enabled': False, 'images': [{'id': 'Cw_4zUBGQYHysV7uArqTHfHBc9N8xecsov7KHph4RwQ', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/he9R5AapkwWVGhk_MNyrLz0OvCJvG5SlKiC9OMprc98.jpg?width=108&crop=smart&auto=webp&s=705cb69d98de9220411d90ba5aed2da18f3d61d4', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/he9R5AapkwWVGhk_MNyrLz0OvCJvG5SlKiC9OMprc98.jpg?auto=webp&s=0677076673e3edce76f9b8a4726ecaf3f8a06c70', 'width': 140}, 'variants': {}}]} | ||
Seeking Guidance on Leveraging Local Models and Optimizing GPU Utilization in containerized packages | 6 | Hello locallamas,
I'm reaching out to share my current experiences and solicit some advice from our collective wisdom. My background includes working as a DevOps Engineer and Cloud Engineer, currently pursuing a degree in InfoSec, and I have a keen interest in LLM. My journey with GPT models started in July '21 and I've used the GPT-3 and GPT-3.5_turbo API to develop a TTRPG adventure generator. I recently upgraded my gaming pc (4090 OC, 13900ks, 96gb ram - upgradable to 192gb if necessary) to host and interact with local models via LangChain for building tools without incurring API costs.
Although I've made some headway in dockerizing the models, there are a few issues that I've been grappling with:
1. I want to utilize Stable-Vicuna, as various sources in the media suggest it's the most effective local model.
2. I'm looking to understand how to train LoRA improvements on a local scale.
3. While I possess the llama checklist, consolidated.0X.pths, and params.json, I'm unsure how to utilize them effectively.
4. I tried using stable-vicuna's apply_delta.py but it seems to prefer calling files from huggingface. In an attempt to manage this locally, I found the TRANSFORMERS_OFFLINE=1 | https://huggingface.co/docs/transformers/v4.29.1/en/installation#offline-mode and local_files_only=True flags in the transformers' documentation, but I still can't execute the command locally. Ideally, Docker Desktop could utilize my GPU during the build phase to create a container with stable-vicuna ready to go.
5. While most projects predominantly utilize the CPU, I'm interested in finding ways to leverage my GPU. I would like to initiate my docker container to access my local GPU.
6. From a DevOps perspective, my goal is to establish a retraining pipeline for regular improvements. I aim to generate and store improvement data, carry out periodic retraining, and perform testing before swapping out with the live system. However, I find myself blocked due to the above challenges.
Any insights, suggestions, or resources would be greatly appreciated. Thank you in advance for your help! | 2023-05-20T14:53:14 | https://www.reddit.com/r/LocalLLaMA/comments/13mwnc1/seeking_guidance_on_leveraging_local_models_and/ | 2good4hisowngood | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mwnc1 | false | null | t3_13mwnc1 | /r/LocalLLaMA/comments/13mwnc1/seeking_guidance_on_leveraging_local_models_and/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&s=abf38332c5c00a919af5be75653a93473aa2e5fa', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&s=1a06602204645d0251d3f5c043fa1b940ca3e799', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&s=04833c1845d9bd544eb7fed4e31123e740619890', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&s=d592b0a5b627e060ab58d73bde5f359a1058e56d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&s=5913a547536ee8300fdb8a32d14ff28667d1b875', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&s=2af86fd4d41393a7d14d45c4bb883bef718575d1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&s=720b78add0a3005c4f67eaed6897df409cc040c6', 'width': 1200}, 'variants': {}}]} |
Information v Reasoning | 2 | Just wondering : can we build models which are great at Reasoning, but with not too much Data?
Or do we need a large model to achieve both? | 2023-05-20T14:38:01 | https://www.reddit.com/r/LocalLLaMA/comments/13mvowi/information_v_reasoning/ | MrEloi | self.LocalLLaMA | 2023-05-20T14:48:03 | 0 | {} | 13mvowi | false | null | t3_13mvowi | /r/LocalLLaMA/comments/13mvowi/information_v_reasoning/ | false | false | self | 2 | null |
Which is the best option only for make questions to documents? | 2 | Hello dear, I wanted to ask you about what would be the most "lightweight" option when processing a text, a pdf or any content to generate an answer, even a basic one, based on questions? With lightweight I mean a model perhaps adjusted to ask only questions without consuming too many computer resources something similar. | 2023-05-20T14:35:44 | https://www.reddit.com/r/LocalLLaMA/comments/13mvjn2/which_is_the_best_option_only_for_make_questions/ | hwpoison | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mvjn2 | false | null | t3_13mvjn2 | /r/LocalLLaMA/comments/13mvjn2/which_is_the_best_option_only_for_make_questions/ | false | false | self | 2 | null |
Is this a good AI PC build? (RTX 4090, Ryzen 9 5950X, 32 GB RAM) | 13 | Hey everyone,
I found a [prebuilt PC on Amazon](https://www.amazon.de/-/en/gp/product/B0BLD176XL/ref=ox_sc_act_title_1?smid=AOHI1X68KO3E5&th=1) that caught my attention. I'm interested in running AI apps like Whisper, Vicuna, and Stable Diffusion on it. The GPU, an RTX 4090, looks great, but I'm unsure if the CPU is powerful enough. I plan to upgrade the RAM to 64 GB and also use the PC for gaming.
Here are the specs:
* CPU: AMD Ryzen 9 5950X (16 x 3.4 GHz)
* GPU: RTX 4090 24 GB
* RAM: 32 GB DDR4-3600MHz
* Storage: 1 TB M.2 SSD NVMe
* Mainboard: Gigabyte B550 Gaming X V2 - AM4
I'd love to hear your thoughts! | 2023-05-20T12:35:59 | https://www.reddit.com/r/LocalLLaMA/comments/13mqtrf/is_this_a_good_ai_pc_build_rtx_4090_ryzen_9_5950x/ | Prince-of-Privacy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mqtrf | false | null | t3_13mqtrf | /r/LocalLLaMA/comments/13mqtrf/is_this_a_good_ai_pc_build_rtx_4090_ryzen_9_5950x/ | false | false | self | 13 | null |
Will we always need to load entire models into RAM, or is it technically possible to only load part at a time? For both llama.cpp and PEFT DeepSpeed/FSDP | 11 | While llama.cpp and PEFT both have support for splitting tasks between CPU and GPU (splitting the model up between VRAM and RAM), as far as I know the model still has to be fully loaded into system RAM to do this. In the (near?) future, will it be possible to load only part of a model into RAM at a time as part of this task splitting? This question is more about if it is "technically possible" even if it's a pain to actually implement. Also, the question is about both llama.cpp and PEFT DeepSpeed/FSDP, I sort of assume both would share the same limitations but maybe one could do this and one not.
Will we forever be required to have the full amount of RAM to load the entire model at once for inference and fine-tuning, or will we be able to loosen RAM requirements and be able to for instance deal with half the model in RAM at a time (for instance, a model 2x the size of system RAM, but still able to do inference/fine-tuning on it)?
If the algorithms involved simply cannot be optimized to do this, is memory paging with a M.2 drive a viable alternative for "expanding RAM" to handle any sized model? | 2023-05-20T11:40:47 | https://www.reddit.com/r/LocalLLaMA/comments/13mpnaw/will_we_always_need_to_load_entire_models_into/ | SoylentMithril | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mpnaw | false | null | t3_13mpnaw | /r/LocalLLaMA/comments/13mpnaw/will_we_always_need_to_load_entire_models_into/ | false | false | self | 11 | null |
Models I can Run On Mac M1? | 1 | [removed] | 2023-05-20T10:50:45 | https://www.reddit.com/r/LocalLLaMA/comments/13monv5/models_i_can_run_on_mac_m1/ | buxrmp | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13monv5 | false | null | t3_13monv5 | /r/LocalLLaMA/comments/13monv5/models_i_can_run_on_mac_m1/ | false | false | default | 1 | null |
Does anybody know ggml bin models that can speak Albanian, Macedonian, Bulgarian, Greek, Latvian, Estonian, Hungarian, Lithuanian, Swedish, Slovenian, Norwegian, Dutch? | 13 | I am interested in learning foreign languages and try to find suitable llama models. | 2023-05-20T10:13:07 | https://www.reddit.com/r/LocalLLaMA/comments/13mnzcz/does_anybody_know_ggml_bin_models_that_can_speak/ | Languages_Learner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mnzcz | false | null | t3_13mnzcz | /r/LocalLLaMA/comments/13mnzcz/does_anybody_know_ggml_bin_models_that_can_speak/ | false | false | self | 13 | null |
WizardLM: Overview of the Evol-Instruct Method in a Short Video | 1 | [removed] | 2023-05-20T08:40:20 | https://www.reddit.com/r/LocalLLaMA/comments/13mmdgo/wizardlm_overview_of_the_evolinstruct_method_in_a/ | TrimTasker | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mmdgo | false | null | t3_13mmdgo | /r/LocalLLaMA/comments/13mmdgo/wizardlm_overview_of_the_evolinstruct_method_in_a/ | false | false | default | 1 | null |
LLama with internet access? | 14 | Hello,
I was researching this, but I can't quite figure it out, so I am wondering here: what are good pointers and such if I want to make a LLaMa-based chatbot with internet access?
I want to make a programming-specific assistant for writing scripts on command, and internet access would be vital to have all resources updated as recently as possible. I tried looking into langchain as a way, but having issues figuring out how it works.
Any help would be appreciated; thank you! | 2023-05-20T07:28:18 | https://www.reddit.com/r/LocalLLaMA/comments/13ml4hq/llama_with_internet_access/ | MatchTerm69 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ml4hq | false | null | t3_13ml4hq | /r/LocalLLaMA/comments/13ml4hq/llama_with_internet_access/ | false | false | self | 14 | null |
Which models can you run at reasonable speeds on a laptop with 32 gb ram and a 3070? | 2 | I am a noob when it comes to LLMs I have tried some of the 13b models but inference is slow. Should I stick to 7b only? I tried have both bin and ggml models in oobabooga textgen with llama extension - gpt4xvicuna 4bit | 2023-05-20T06:57:22 | https://www.reddit.com/r/LocalLLaMA/comments/13mkkp5/which_models_can_you_run_at_reasonable_speeds_on/ | hihajab | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mkkp5 | false | null | t3_13mkkp5 | /r/LocalLLaMA/comments/13mkkp5/which_models_can_you_run_at_reasonable_speeds_on/ | false | false | self | 2 | null |
GLM-130B | 6 | The [https://github.com/THUDM/GLM-130B](https://github.com/THUDM/GLM-130B) model is trained on The Pile and can run on 4x3090 when quantized to INT4. I'm wondering if anyone knows if this model could (or has) been quantized using GPTQ, which gives some impressive performance gains over traditional quantization, and I'm also wondering if anyone has tried a 3-bit or 2-bit quantization of such a massive model (using GPTQ). Are there any inherent limitations in this? Is there anything about this model that prevents it from being run on text-generation-webui?
[This page](https://github.com/THUDM/GLM-130B/blob/main/docs/quantization.md) on the GLM130B Github talks about issues with Quantization, but the details went a bit over my head. From the sounds of it, GPTQ wouldn't necessarily work without modification, and there are some details about not being able to quantize the activation the same way. I think GPTQ may address those issues, but I'm not clear on the details.
I know that Llama-65B int3 performs worse than Llama-30B int4 in terms of perplexity, which suggests these very low quantizations aren't worth the effort, but I'd love to see some data backing that up.
For anyone curious, GLM130B was trained with 12,288 hidden state dimensions (whatever that means), and a standard 2,048 maximum sequence length (which is a bummer - for such a massive model, I would have loved to see a 4096-token max, but I guess that would make inference that much slower if the quadratic scaling law applies here)
I'm also not clear on what sort of tokens/sec could be expected of GLM130B at int4 / int3 / int2.
Can anyone shed some light on this? Either concretely or speculatively? | 2023-05-20T06:10:33 | https://www.reddit.com/r/LocalLLaMA/comments/13mjr7m/glm130b/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mjr7m | false | null | t3_13mjr7m | /r/LocalLLaMA/comments/13mjr7m/glm130b/ | false | false | self | 6 | {'enabled': False, 'images': [{'id': '6Ie7e7foB4I5D5_FI7bjwwibKegZVK0-ck0nj9sW30E', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=108&crop=smart&auto=webp&s=914f410fd87e84c888df0f03e1205ae706c0a350', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=216&crop=smart&auto=webp&s=77967cb7d5c09287b957c2335ec4b0efe9f793ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=320&crop=smart&auto=webp&s=7a59a025592e0032bd6fe3d17daa8619a87b7041', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=640&crop=smart&auto=webp&s=6df3b1def36ae5ae4aba55d5364adc8a8ebbc3c0', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=960&crop=smart&auto=webp&s=5ccfc6bc68e675e7dec562caaaff64accb75c01d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?width=1080&crop=smart&auto=webp&s=e77ce31488887e2a8a76cd4882556aaf0511090b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Q6iJ_TXNilZS4C-UlAS_xJgZUkipISpwpN0LzdQ9xoQ.jpg?auto=webp&s=039b68c780587d9cf75e63c8e988bca44c6aed61', 'width': 1200}, 'variants': {}}]} |
Which model is the best for you? | 59 | New models are coming up almost every day & there are multiple places to compare models but they can't catch up with the speed as new models are created/improved.
So I create this thread for us to share our own experiences with models and share our personal opinions on what's the best for you, probably with a brief explanation of why it's the best.
The format will be something like this (you can share 1 for each 7B, 13B, 30B & 65B):
* **Model**:
* **Env**: please give as many details as possible about the environment you're running the model
* **Performance**: either xx tokens/s or xxx ms/token
* **Reason**:
Here is mine:
​
* **Model**: `TheBloke/Manticore-13B-GGML`
* **Env**: VM (16 vCPU, 32GB RAM, ***only AVX1 enabled***) in Dell R520, 2x E5-2470 v2 @ 2.40GHz, 64GB RAM
* **Performance**: 1.2\~1.3 tokens/s
* **Reason**: Good to share RAM with SD. The response is even better than `VicUnlocked-30B-GGML` (which I guess is the best 30B model), similar quality to `gpt4-x-vicuna-13b` but is uncensored.
​
* **Model**: `TheBloke/Wizard-Vicuna-7B-Uncensored-GGML`
* **Env**: Mac M1 2020, 16GB RAM
* **Performance**: 4 \~ 5 tokens/s
* **Reason**: best with my limited RAM, portable. Finding a way to try GPTQ to compare | 2023-05-20T04:24:28 | https://www.reddit.com/r/LocalLLaMA/comments/13mhrqr/which_model_is_the_best_for_you/ | MichaelBui2812 | self.LocalLLaMA | 2023-05-20T06:09:19 | 0 | {} | 13mhrqr | false | null | t3_13mhrqr | /r/LocalLLaMA/comments/13mhrqr/which_model_is_the_best_for_you/ | false | false | self | 59 | null |
Running GPTQ model in Mac M1 2020? | 3 | Does anyone know if oobabooga (or llamacpp) has support for Mac M1 GPU to run the GPTQ model?
I tried following the instruction on [https://github.com/oobabooga/text-generation-webui/](https://github.com/oobabooga/text-generation-webui/) but I'm stuck at step `4. Install GPTQ-for-LLaMa and the monkey patch` when trying to run `python setup_cuda.py install`:
CUDA_HOME environment variable is not set.
(`Step 1` in [https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md))
My main goal is to run a 7B model like [https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-7B-Uncensored-GPTQ) in my Mac M1 (16GB)
Currently, I'm running the GGML model with \~4-5 tokens/s but I want to see how much faster/better the GPTQ model is. | 2023-05-20T03:20:41 | https://www.reddit.com/r/LocalLLaMA/comments/13mghl4/running_gptq_model_in_mac_m1_2020/ | MichaelBui2812 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mghl4 | false | null | t3_13mghl4 | /r/LocalLLaMA/comments/13mghl4/running_gptq_model_in_mac_m1_2020/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '_G-gSKJ6d76MYZo4vSvWdNh2at8JMNuyKQyW1pu20nA', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=108&crop=smart&auto=webp&s=76388a0ba09ee376b6607adcf5a46d0fb1a52936', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=216&crop=smart&auto=webp&s=fe775c7ec71c527356c7bef079e79e6ec161d16e', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=320&crop=smart&auto=webp&s=b139ca9bc9b959df0d75a70e391501776e319ebc', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=640&crop=smart&auto=webp&s=aa6e59488e4ba9374fcbecac49d6cf1af7e8579a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=960&crop=smart&auto=webp&s=2533771c6dd0af65a0b916df12921fc09330f1d2', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?width=1080&crop=smart&auto=webp&s=5a585f52c4007b4a491706ce47f3dfbef5c54cae', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/lFvMmISP8EAHICub_OWPEy11AbxgUX1RRM1JTh0yZ4w.jpg?auto=webp&s=45fbb52f8c595da83be14bd7525963e801369117', 'width': 1200}, 'variants': {}}]} |
Offline llama | 71 | As you can see in the video, the PDF chatbot is working without internet. No OpenAI, no third party period. This is just one use case. I just wanted to put this feeler out there to see if anyone would be interested in this. If enough people are into it, I'll put the repo up on my github. Special thanks to u/The-Bloke as I am using his ggml gpt4all model.
EDIT: I have posted the source code [here](https://github.com/jlonge4/local_llama)
[video](https://www.reddit.com/user/Jl_btdipsbro/comments/13n6hbz/local_llama/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1) | 2023-05-20T02:10:54 | https://www.reddit.com/r/LocalLLaMA/comments/13mf0bi/offline_llama/ | Jl_btdipsbro | self.LocalLLaMA | 2023-05-20T20:36:45 | 1 | {'gid_2': 1} | 13mf0bi | false | null | t3_13mf0bi | /r/LocalLLaMA/comments/13mf0bi/offline_llama/ | false | false | self | 71 | {'enabled': False, 'images': [{'id': 'kYiVOqUp5QqzFz2zdykgGVlDJrWLZ-60SRLc-25uGe0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=108&crop=smart&auto=webp&s=211b2e62ef0d8eff45e402562b898c342838dbda', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=216&crop=smart&auto=webp&s=66f0cbd01ef6c8905f1b72242fa5b54785331e34', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=320&crop=smart&auto=webp&s=240827a1120f6e1e5996b2b89f492262529309bd', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=640&crop=smart&auto=webp&s=9d9fce830b27974ccdac0eace0d8c2d3d862c9c9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=960&crop=smart&auto=webp&s=efe40527ff603955aa10bba78070e8f82f83efb4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?width=1080&crop=smart&auto=webp&s=a7f87d72eca2ec7d75e75cf035649c12e5e1d4d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MX_8-a2o33x5NQXxKsQ3hUVrKZOajb3bkBW1jOM4SoI.jpg?auto=webp&s=766bf55582d733e244d01122b5c35a6058a3911f', 'width': 1200}, 'variants': {}}]} |
Another new llama.cpp / GGML breaking change, affecting q4_0, q4_1 and q8_0 models. | 273 | Today llama.cpp committed another breaking GGML change: [https://github.com/ggerganov/llama.cpp/pull/1508](https://github.com/ggerganov/llama.cpp/pull/1508)
The good news is that this change brings slightly smaller file sizes (e.g 3.5GB instead of 4.0GB for 7B q4\_0, and 6.8GB vs 7.6GB for 13B q4\_0), and slightly faster inference.
The bad news is that it once again means that all existing q4\_0, q4\_1 and q8\_0 GGMLs will no longer work with the latest llama.cpp code. Specifically, from May 19th commit 2d5db48 onwards.
q5\_0 and q5\_1 models are unaffected.
Likewise most tools that use llama.cpp - eg llama-cpp-python, text-generation-webui, etc - will also be affected. But not Kobaldcpp I'm told!
I am in the process of updating all my GGML repos. New model files will have `ggmlv3` in their filename, eg `model-name.ggmlv3.q4_0.bin`.
In my repos the older version model files - that work with llama.cpp before May 19th / commit 2d5db48 - will still be available for download, in a separate branch called `previous_llama_ggmlv2`.
Although only q4\_0, q4\_1 and q8\_0 models were affected, I have chosen to re-do all model files so I can upload all at once with the new `ggmlv3` name. So you will see ggmlv3 files for q5\_0 and q5\_1 also, but you don't need to re-download those if you don't want to.
I'm not 100% sure when my re-quant & upload process will be finished, but I'd guess within the next 6-10 hours. Repos are being updated one-by-one, so as soon as a given repo is done it will be available for download. | 2023-05-20T00:50:33 | https://www.reddit.com/r/LocalLLaMA/comments/13md90j/another_new_llamacpp_ggml_breaking_change/ | The-Bloke | self.LocalLLaMA | 2023-05-20T14:53:57 | 0 | {} | 13md90j | false | null | t3_13md90j | /r/LocalLLaMA/comments/13md90j/another_new_llamacpp_ggml_breaking_change/ | false | false | self | 273 | {'enabled': False, 'images': [{'id': 'Fig9lAX5oTT6fjkgXnpKxVBgjOI3XTZhakodXjXzO8w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=108&crop=smart&auto=webp&s=19189c02435c7662523784bf3867293ec94c1e14', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=216&crop=smart&auto=webp&s=a2f7a5bf33f658022da892cf10a694f9a12c32f1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=320&crop=smart&auto=webp&s=e8f0d90a4b035220ca550d61ee7143fb49c042e6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=640&crop=smart&auto=webp&s=2d35e1c1ae417245d762bdfe2561ed37e4547adc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=960&crop=smart&auto=webp&s=86fbbfdfa42af30c3a98fe55571e0219a1a38788', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?width=1080&crop=smart&auto=webp&s=68512c49d8c9dffc319326953e104915df688324', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/hFotlrzJp6k0QqVntsE-7RbOiFBE3VpV_I7TC_2aBWc.jpg?auto=webp&s=471b729fe9ffd987a975cb1e9cd47ee8bd2da9cc', 'width': 1200}, 'variants': {}}]} |
Who is training their own models? | 6 | Many of the models out there are not for commercial use. Who out there are training models for commercial purposes from scratch? | 2023-05-19T23:34:15 | https://www.reddit.com/r/LocalLLaMA/comments/13mbh7s/who_is_training_their_own_models/ | jsfour | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13mbh7s | false | null | t3_13mbh7s | /r/LocalLLaMA/comments/13mbh7s/who_is_training_their_own_models/ | false | false | self | 6 | null |
GramaLLama LORA - my first grammar Editor attempt | 18 | This 4-bit GPTQ LORA is my entry on a journey in creating a grammar "editor" aka a model to edit a text for grammar and English fluency.
The main requirement is that it doesn't try to change the style (doesn't start replacing words for no reason as many LLM would if given the task - including ChatGPT) and stays on focus.
It has been trained and tested on top of Wizard-Vicuna-13B-Uncensored-GPTQ-4bit-128g.compat.no-act-order
[https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ](https://huggingface.co/TheBloke/Wizard-Vicuna-13B-Uncensored-GPTQ)
and it could presumably work on any 13B LLama based models quantized to 4bit 128g - but how well? Hard to say.
It is trained to be used in **CHAT** mode where you simply enter your sentence or a short paragraph and Assistant will return edited version without any unnecessary discussion.In Instruct mode it works with a little more issues.
You may put the instruction it has been trained with in Character card (but it seems not necessary)
"Edit the following for spelling and grammar mistakes"
**Note:** This is version 0.1, more like a proof of concept.
I will continue further, if it proves useful.
I tested it with oobabooga text-generation-webui default parameters
​
https://preview.redd.it/3kpcgmh8hv0b1.jpg?width=895&format=pjpg&auto=webp&s=840e8b0a2b72e9b028003d8241b50795ce7741db
Help:
* I'm not yet sure how to merge this 4bit LORA with the 13B 4bit Wizard Vicuna on windows - if you can help me, I'll appreciate it.
* if you find good settings (parameters), let me know and I'll update the card
* How the hell you make text bold in hugginface card editor?
I tried 7b, but my attempt failed - in theory it should work fine too, but I guess more training is needed
The link:
[https://huggingface.co/FPHam/GramaLLama\_13b\_GPTQ](https://huggingface.co/FPHam/GramaLLama_13b_GPTQ)
​ | 2023-05-19T23:25:36 | https://www.reddit.com/r/LocalLLaMA/comments/13mb9kq/gramallama_lora_my_first_grammar_editor_attempt/ | FPham | self.LocalLLaMA | 2023-05-20T07:38:19 | 0 | {} | 13mb9kq | false | null | t3_13mb9kq | /r/LocalLLaMA/comments/13mb9kq/gramallama_lora_my_first_grammar_editor_attempt/ | false | false | 18 | {'enabled': False, 'images': [{'id': '9ZlrsVubp-s5hQNFaYATOA4IkPMi8GslT4Qul8AwlZo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=108&crop=smart&auto=webp&s=8cab8ecd4b84824376e37cc52aa131a7bc9be44c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=216&crop=smart&auto=webp&s=fec41ee5fcf07d99b8814445cb2fcb3f611a4293', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=320&crop=smart&auto=webp&s=87ff6f252f8f0ab99afbb59f0731667df37f08cf', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=640&crop=smart&auto=webp&s=51a61915cad0711e54c4662cba309874ff017d8d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=960&crop=smart&auto=webp&s=dba0a8dee6e57ef6841d534badacfcef159a3296', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?width=1080&crop=smart&auto=webp&s=4f9b0e73cc77e55c276e904d1327d4e325f2f533', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/DVVnDyZ-lj3PadUL5ND7VesjxxqgOf2d2QtfqQ8UHYk.jpg?auto=webp&s=1bb70922679f793b257bc9960b6fa3855a69ee57', 'width': 1200}, 'variants': {}}]} | |
Hi community, I created a fork of microsoft's guidance and added llama-cpp-python support! | 39 | As I said in the title, I forked guidance and added llama-cpp-python support.
Here is the result of the RPG Character example with Manticore-13B:
The following is a character profile for an RPG game in JSON format.
\`\`\`json
{
"id": "e1f491f7-7ab8-4dac-8c20-c92b5e7d883d",
"description": "A quick and nimble fighter.",
"name": "Katana",
"age": 26,
"armor": "leather",
"weapon": "sword",
"class": "fighter",
"mantra": "I am the sword of justice.",
"strength": 10,
"items": \["a katana", "a leather jacket", "a backpack", "traveler's rations", "water bottle"\]
}\`\`\`
Here is the fork:
[https://github.com/Maximilian-Winter/guidance](https://github.com/Maximilian-Winter/guidance) | 2023-05-19T22:52:53 | https://www.reddit.com/r/LocalLLaMA/comments/13magac/hi_community_i_created_a_fork_of_microsofts/ | FlowerPotTeaTime | self.LocalLLaMA | 2023-05-27T10:08:49 | 0 | {} | 13magac | false | null | t3_13magac | /r/LocalLLaMA/comments/13magac/hi_community_i_created_a_fork_of_microsofts/ | false | false | self | 39 | {'enabled': False, 'images': [{'id': 'IOx2rhFfqQomnY3YLw-ai67TUPNEeVr2tFf4MBeFkIM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=108&crop=smart&auto=webp&s=77e3e92bf1bbd1a45957a6bc993486ddc2df99fd', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=216&crop=smart&auto=webp&s=1d5395314002ac8471af69df66433b6604756dc7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=320&crop=smart&auto=webp&s=2fdb06c3636483e8c1982dcb4f39049b312ea829', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=640&crop=smart&auto=webp&s=74a38d7739b664ad3de74fe0e63c6bd80db0743d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=960&crop=smart&auto=webp&s=c9d6dd8f008c48896e9553acc1175fafdd3e39d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?width=1080&crop=smart&auto=webp&s=8f86b449029ea131df3edb8c9b0d2f2ce4d6d070', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Dqcm95DKzVQuvhsa0DSznyZUrhnN6NjtOv70SYdICE4.jpg?auto=webp&s=516e253d6acb56516f930911f902fa31b090e56f', 'width': 1200}, 'variants': {}}]} |
How to fix error: CUDA out of memory? | 1 | [removed] | 2023-05-19T22:17:12 | https://www.reddit.com/r/LocalLLaMA/comments/13m9jn4/how_to_fix_error_cuda_out_of_memory/ | realsupermain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m9jn4 | false | null | t3_13m9jn4 | /r/LocalLLaMA/comments/13m9jn4/how_to_fix_error_cuda_out_of_memory/ | false | false | default | 1 | null |
Finally got a model running on my XTX, using llama.cpp | 60 | Not seen many people running on AMD hardware, so I figured I would try out this [llama.cpp OpenCL pull request](https://github.com/ggerganov/llama.cpp/pull/1459) on my Ubuntu 7900 XTX machine and document what I did to get it running.
I am seeing extremely good speeds compared to CPU (as one would hope). I tried `TheBloke/Wizard-Vicuna-13B-Uncensored-GGML (5_1)` first. GPU go brrr, literally, the coil whine on these things is nuts, you can hear each token being generated. Was able to offload 40 layers to the GPU (I guess that is all the layers of a 13B?), running at 20 tokens/s.
Since 13B was so impressive I figured I would try a 30B. I have `TheBloke/VicUnlocked-30B-LoRA-GGML (5_1)` running at 7.2 tokens/s, hitting the 24 GB VRAM limit at 58 GPU layers.
The current llama.cpp OpenCL support does not actually effect eval time, so you will need to merge the changes from the pull request if you are using *any* AMD GPU. I use Github Desktop as the easiest way to keep llama.cpp up to date, and also used it to locally merge the pull request.
To get this running on the XTX I had to install the latest 5.5 version of the AMD linux drivers, which are released but not available from the normal AMD download page yet. You can get the deb for the installer [here](http://repo.radeon.com/amdgpu-install/5.5/). I installed with `amdgpu-install --usecase=opencl,rocm` and installed CLBlast after `apt install libclblast-dev`.
Confirm opencl is working with `sudo clinfo` (did not find the GPU device unless I run as root).
Build llama.cpp (with merged pull) using `LLAMA_CLBLAST=1 make`.
Then run llama.cpp as normal, but as root or it will not find the GPU. Experiment with different numbers of `--n-gpu-layers`.
I didn't have to, but you may need to set `GGML_OPENCL_PLATFORM`, or `GGML_OPENCL_DEVICE` env vars if you have multiple GPU devices.
llama.cpp has by far been the easiest to get running in general, and most of getting it working on the XTX is just drivers, at least if this pull gets merged.
Enjoy your AI typing at you at 1200 words per minute. | 2023-05-19T21:39:43 | https://www.reddit.com/r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/ | TeakTop | self.LocalLLaMA | 2023-05-23T05:37:46 | 0 | {} | 13m8li2 | false | null | t3_13m8li2 | /r/LocalLLaMA/comments/13m8li2/finally_got_a_model_running_on_my_xtx_using/ | false | false | self | 60 | {'enabled': False, 'images': [{'id': '1_Vo1m7FsR1o8NrUw_sL_GaendLM5RTK4HdbByGKeRo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=108&crop=smart&auto=webp&s=a969b2652ea90b3ca892393673bc33211d2d8b87', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=216&crop=smart&auto=webp&s=412d898c2840d8df6217333550efeb79054782f6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=320&crop=smart&auto=webp&s=8514068b6476f02e8ae6c7d21b20173fb884b50c', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=640&crop=smart&auto=webp&s=f8318ed572059308a56748ff9c6866400fd567b1', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=960&crop=smart&auto=webp&s=b53e43fc0c1756c679b0dde6ccded836ca0916d9', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?width=1080&crop=smart&auto=webp&s=53e1669cdf77d3b1126d5eb5e0c461abf7e7ae0d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kCAga5qlCW9WpV14m-1KClwSzc5s09EG5mOg2FdzPLk.jpg?auto=webp&s=667d4b4f0014cbecd00cda3a78e31d7daec617ea', 'width': 1200}, 'variants': {}}]} |
I need to train a LORA for my LLM project | 3 | So I’m working on a project that utilizes a LLM not sure which one yet but the main thing is I need to input a bunch of information to both give it a personality but more importantly I need to to be able to answer specific questions that are answered in the training documents. They also need to be answered basically verbatim. Can this be done with a LORA or do I need to do something different with it? | 2023-05-19T21:30:34 | https://www.reddit.com/r/LocalLLaMA/comments/13m8d8w/i_need_to_train_a_lora_for_my_llm_project/ | opi098514 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m8d8w | false | null | t3_13m8d8w | /r/LocalLLaMA/comments/13m8d8w/i_need_to_train_a_lora_for_my_llm_project/ | false | false | self | 3 | null |
Any Discord groups for Local LLama? | 1 | [removed] | 2023-05-19T18:57:06 | https://www.reddit.com/r/LocalLLaMA/comments/13m4fh6/any_discord_groups_for_local_llama/ | NeverEndingToast | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m4fh6 | false | null | t3_13m4fh6 | /r/LocalLLaMA/comments/13m4fh6/any_discord_groups_for_local_llama/ | false | false | default | 1 | null |
Getting rid of </s> | 4 | Hi there!
Can somebody tell me why a few models end their replys with "</s>" and how I can stop it?
For example I tried this with Manticore-13B.ggmlv2.q5\_1.bin.
Started it with:
main -m models\Manticore-13B.ggmlv2.q5_1.bin --interactive-first --reverse-prompt "user:" --in-suffix "AI:" --color -c 2048 --temp 0.9 --repeat_penalty 1.2 -n -1 --threads 8 --gpu-layers 25 -f AI.txt
Content AI.txt:
A sophisticated dialogue between a person eager to learn and a world-renowned artificial intelligence assistant, known for its exceptional expertise and extensive knowledge in various fields. The assistant delivers comprehensive, precise, and courteous answers to the human's questions, demonstrating its remarkable understanding and problem-solving abilities. AI can talk about all topics without limits.
user:
Output when asking:
user:Hi. Tell me, what does H2O mean?
AI: Hi! "H2O" is a shortened form of water (Hydrogen Oxide). It's the chemical formula for water, which consists of two hydrogen atoms and one oxygen atom bonded together.</s>
user:And how can I fly without wings?
AI: There are several ways to fly without using traditional wings. One way is by using a jet engine or other propulsion systems that generate thrust and lift an object into the air, such as hovercrafts or helicopters. Another method is through gliding, where an aircraft is designed with aerodynamic features that allow it to glide through the air for extended periods of time without powered flight.</s>
user: | 2023-05-19T18:45:43 | https://www.reddit.com/r/LocalLLaMA/comments/13m44w6/getting_rid_of_s/ | honuvo | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m44w6 | false | null | t3_13m44w6 | /r/LocalLLaMA/comments/13m44w6/getting_rid_of_s/ | false | false | self | 4 | null |
Windows Batch File for running different models with Llama.cpp | 11 | In case some of you are not using the python web ui.
Here is a batch file that I use to test/run different models. Maybe it's helpful to those of you who run windows. The main batch file will call another batch file tailored to the specific model. I find it easier to test with than the python web UI.
Here is batch code to choose a model
https://preview.redd.it/9mtjzef72u0b1.png?width=396&format=png&auto=webp&s=640de04c48cb2cbfdc879a6862b745b02efada65
TITLE Pick a LLM to run
@ECHO OFF
:BEGIN
CLS
ECHO.
ECHO -============ LLM CHOOSER ===========-
Echo A) Vicuna
Echo B) Vicuna 1.1
Echo C) Alpaca 30B (30gb free ram)
Echo D) Koala
Echo E) Instruct
Echo F) Wizard
Echo G) GPTxVicuna
Echo H) WizardMega
Echo I) Manticore
Echo Z) Quit
ECHO.
ECHO Pressing Ctrl-C will close the LLM program.
CHOICE /N /C:ABCDEFGHIZ /M "PICK AN LLM (A,B,C,D,E,F,G,H,I,Z)"%1
SET choice=%ERRORLEVEL%
GOTO :%choice%
GOTO END
:10
GOTO END
:9
call runManticore.bat
GOTO END
:8
call runWizardMega.bat
GOTO END
:7
call runGTPxVicuna.bat
GOTO END
:6
call runWizard.bat
GOTO END
:5
call run_Instruct.bat
GOTO END
:4
call runKoala.bat
GOTO END
:3
TITLE Alpaca_30B
call runAlpaca30B.bat
GOTO END
:2
TITLE Vicuna_1.1
call runVicuna_1.1.bat
GOTO END
:1
call runVicuna.bat
:END
ECHO Bye!
REM pause
Here is sample batch file that gets called.
title llama.cpp
:start
new\main.exe ^
-i ^
--interactive-first ^
-f InsPrompt.txt ^
-r "### Human:" ^
-ngl 2 ^
--color ^
--temp 0.7 ^
-c 2048 ^
-n -1 ^
-t 18 ^
--ignore-eos ^
--repeat_penalty 1.1 ^
--instruct ^
-m Manticore-13B.ggmlv2.q5_1.bin
pause
goto start | 2023-05-19T18:37:10 | https://www.reddit.com/r/LocalLLaMA/comments/13m3wnd/windows_batch_file_for_running_different_models/ | ccss2 | self.LocalLLaMA | 2023-05-19T18:41:58 | 0 | {} | 13m3wnd | false | null | t3_13m3wnd | /r/LocalLLaMA/comments/13m3wnd/windows_batch_file_for_running_different_models/ | false | false | 11 | null | |
(HELP) Token Issue on Generation | 1 | Hello everyone,
I was wondering if I could get some help, I'm currently using this colab notebook ([https://colab.research.google.com/github/zetavg/LLaMA-LoRA-Tuner/blob/main/LLaMA\_LoRA.ipynb#scrollTo=iLygNTcHk0N8](https://colab.research.google.com/github/zetavg/LLaMA-LoRA-Tuner/blob/main/LLaMA_LoRA.ipynb#scrollTo=iLygNTcHk0N8)); and am having issues with LoRA generations (testing alpaca-lora-7b).
This only happens when I raise temperature up higher than "0". It can range from chinese characters to russian, to greek etc. Any help would be appreciated, thanks!
[Random characters.](https://preview.redd.it/cesslercyt0b1.png?width=1604&format=png&auto=webp&s=8161c236f64b3c966e3c99e5b2ac4095233762ce)
Github repo: [https://github.com/zetavg/LLaMA-LoRA-Tuner.git](https://github.com/zetavg/LLaMA-LoRA-Tuner.git) | 2023-05-19T18:20:59 | https://www.reddit.com/r/LocalLLaMA/comments/13m3h0d/help_token_issue_on_generation/ | pkkvu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m3h0d | false | null | t3_13m3h0d | /r/LocalLLaMA/comments/13m3h0d/help_token_issue_on_generation/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} | |
Question: Hardware development for large transformer models.... ?? | 7 | With very large models in the 10-100 multi billion range, is there anybody who developing hardware that can run tem on a single chip?
It seems like NPU, GPU and TPU architectures may be too general purpose. The math for running these models seems to be very limited, and could probably be embedded with the memory chip themselves with small changes to the overall chip architecture. Is that something be considered, or is the rapid changes to model architecture from research just not making it feasible to capture this yet at low level silicon? | 2023-05-19T17:00:27 | https://www.reddit.com/r/LocalLLaMA/comments/13m1b06/question_hardware_development_for_large/ | BackwardGoose | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m1b06 | false | null | t3_13m1b06 | /r/LocalLLaMA/comments/13m1b06/question_hardware_development_for_large/ | false | false | self | 7 | null |
Has anyone mixed a P40 with a 3090/4090 just to add more GPU memory? | 26 | This question is mainly aimed at inferencing: I know the P40's are much slower, but has anyone added one just for the added memory to run larger models? Since only one GPU processor seems to be used at a time during inference and gaming won't really use the second card, it feels wasteful to spend $800 on another 3090 just to add the 24gb when you can pickup a P40 for a quarter of the cost.
For training: would the P40 slow down the 3090 to its speed if the tasks are split evenly between the cards since it would be the weakest link? I'd like to be able to fine-tune 65b locally.
I know you can do main memory offloading, but I want to be able to run a different model on CPU at the same time and my motherboard is maxed out at 64gb. Upgrading that will trigger a cascade of necessary upgrades.
Just to muddy the waters: I was just reading about some improvements that help spread the processing load of inferencing better between GPU's. This may render my point about both GPU's not being in use during inferencing somewhat moot. | 2023-05-19T16:58:10 | https://www.reddit.com/r/LocalLLaMA/comments/13m18nk/has_anyone_mixed_a_p40_with_a_30904090_just_to/ | LetMeGuessYourAlts | self.LocalLLaMA | 2023-05-19T17:04:17 | 0 | {} | 13m18nk | false | null | t3_13m18nk | /r/LocalLLaMA/comments/13m18nk/has_anyone_mixed_a_p40_with_a_30904090_just_to/ | false | false | self | 26 | null |
Why can't LLaMA based models edit/rewrite? Maybe need a specific prompt? | 14 | You can ask Chat GPT or Bard to edit or rewrite the following to make it whatever.
However, all the LLaMA based models I tried, it just outputs what might come after the writing instead of rewriting it.
Does anyone have a specific prompt that I can use for edit/rewrite purpose?
Thanks! | 2023-05-19T16:57:59 | https://www.reddit.com/r/LocalLLaMA/comments/13m18i0/why_cant_llama_based_models_editrewrite_maybe/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m18i0 | false | null | t3_13m18i0 | /r/LocalLLaMA/comments/13m18i0/why_cant_llama_based_models_editrewrite_maybe/ | false | false | self | 14 | null |
As a large language model... | 151 | 2023-05-19T16:42:11 | https://imgur.com/gallery/Mz5N5ug | SlowMovingTarget | imgur.com | 1970-01-01T00:00:00 | 0 | {} | 13m0tjd | false | {'oembed': {'author_name': '', 'description': 'Discover topics like ai, hal9000, and the magic of the internet at Imgur, a community powered entertainment destination. Lift your spirits with funny jokes, trending memes, entertaining gifs, inspiring stories, viral videos, and so much more from users like kytayn.', 'height': 415, 'html': '<iframe class="embedly-embed" src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fimgur.com%2Fa%2FMz5N5ug%2Fembed%3Fpub%3Dtrue%26ref%3Dhttps%253A%252F%252Fembed.ly%26w%3D900&display_name=Imgur&url=https%3A%2F%2Fimgur.com%2Fa%2FMz5N5ug&image=https%3A%2F%2Fi.imgur.com%2FGgQ5qSg.jpg%3Ffb&key=2aa3c4d5f3de4f5b9120b660ad850dc9&type=text%2Fhtml&schema=imgur" width="600" height="415" scrolling="no" title="Imgur embed" frameborder="0" allow="autoplay; fullscreen" allowfullscreen="true"></iframe>', 'provider_name': 'Imgur', 'provider_url': 'http://imgur.com', 'thumbnail_height': 315, 'thumbnail_url': 'https://i.imgur.com/GgQ5qSg.jpg?fb', 'thumbnail_width': 600, 'title': 'As a large language model...', 'type': 'rich', 'url': 'https://imgur.com/a/Mz5N5ug', 'version': '1.0', 'width': 600}, 'type': 'imgur.com'} | t3_13m0tjd | /r/LocalLLaMA/comments/13m0tjd/as_a_large_language_model/ | false | false | 151 | {'enabled': False, 'images': [{'id': 'RypgvR0cosSRy5kKNRUFcUr2QiGLWa_CZmmHXbLySUs', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/MTnkf11SmZ25kTBLVCGpzlM3_qGTlY9bxmvry-tidig.jpg?width=108&crop=smart&auto=webp&s=1a84f4d7eaa52522cb8c13d5d032cdd44acb0338', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/MTnkf11SmZ25kTBLVCGpzlM3_qGTlY9bxmvry-tidig.jpg?width=216&crop=smart&auto=webp&s=31d9eeda2d055611cb0b6709fe9bdee76500c57e', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/MTnkf11SmZ25kTBLVCGpzlM3_qGTlY9bxmvry-tidig.jpg?width=320&crop=smart&auto=webp&s=4d1d36b43e8256d1373050b10a7075add28956ff', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/MTnkf11SmZ25kTBLVCGpzlM3_qGTlY9bxmvry-tidig.jpg?width=640&crop=smart&auto=webp&s=6ee6d583cc400554562bb67a6a56168ebb952bf2', 'width': 640}], 'source': {'height': 563, 'url': 'https://external-preview.redd.it/MTnkf11SmZ25kTBLVCGpzlM3_qGTlY9bxmvry-tidig.jpg?auto=webp&s=52a947b96f0c492159baef133dd7d3c0d72fb476', 'width': 900}, 'variants': {}}]} | ||
Possible to run a 30B quantized model on 10GB VRAM + 32GB RAM | 11 | I just read a post about a modification that lets you share model loading between video card and RAM+CPU, I think it’s GGML but not sure. Does this mean I could (slowly) run a 30B model quantized? I have a Ryzen 5600x by the way, if that matters. | 2023-05-19T16:25:09 | https://www.reddit.com/r/LocalLLaMA/comments/13m0cv6/possible_to_run_a_30b_quantized_model_on_10gb/ | maxiedaniels | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13m0cv6 | false | null | t3_13m0cv6 | /r/LocalLLaMA/comments/13m0cv6/possible_to_run_a_30b_quantized_model_on_10gb/ | false | false | self | 11 | null |
Hyena Hierarchy: Towards Larger Convolutional Language Models | 43 | Those of you following everything closely has anyone come across open source projects attempting to leverage the recent Hyena development. My understanding is it is likely a huge breakthrough in efficiency for LLMs and should allow models to run on significantly smaller hardware and memory requirements. | 2023-05-19T15:59:56 | https://hazyresearch.stanford.edu/blog/2023-03-07-hyena | alchemist1e9 | hazyresearch.stanford.edu | 1970-01-01T00:00:00 | 0 | {} | 13lznoc | false | null | t3_13lznoc | /r/LocalLLaMA/comments/13lznoc/hyena_hierarchy_towards_larger_convolutional/ | false | false | 43 | {'enabled': False, 'images': [{'id': '-WHgGLJANkDpubg8JwSLJ_kMgGHdyAiWnD4mQMVCLm0', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=108&crop=smart&auto=webp&s=fd10df8933b1c9751ea7d0fcf20f1e54587a02ce', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=216&crop=smart&auto=webp&s=d39d5b78df8653abb63f948d029eb734c0bd0254', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?width=320&crop=smart&auto=webp&s=2a242c8ec32e5e02a650a73b9418c82d2327ac54', 'width': 320}], 'source': {'height': 460, 'url': 'https://external-preview.redd.it/yIFZCOsTxmZV5L1s0iCvqnvaDHMoCFUEEEsPkVLz1sA.jpg?auto=webp&s=b3c6e6f793801a135b69d089032f2aee32682946', 'width': 460}, 'variants': {}}]} | |
ERROR: Failed building wheel for llama-cpp-python | 1 | [removed] | 2023-05-19T15:43:08 | https://www.reddit.com/r/LocalLLaMA/comments/13lz70j/error_failed_building_wheel_for_llamacpppython/ | Accomplished-Jury-87 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lz70j | false | null | t3_13lz70j | /r/LocalLLaMA/comments/13lz70j/error_failed_building_wheel_for_llamacpppython/ | false | false | default | 1 | null |
Let's talk about prompt styles and their documentation? | 36 | I wanted to discuss this because it seems quite important to use the models. Maybe I just don't get it, but to me it seems models don't always come with clear instructions how the model should be used. It may seem sufficient but when you dive in deeper this is not a topic that should have *any* uncertainty, given how much uncertainty you have to deal with judging your results from 30 million settings anyway.
There is lots of documentation what the model is and on what dataset (not specific data) it was trained and what parameters were used. But then it comes with one usage example as actual instructions specific to that model?
I don't expect good parameter sets or something. But the prompting style seems very important, even if some random other usage tends to "work" too.
It is really essential to know these things because you can't test them reasonably. Did the model barely get it, like a typo, and it will only work with low temperature? Or is the model just not smart enough and that's why it doesn't always get my prompt?
This uncertainty is so unnecessary. Please clearly state the exact way the training data was formated and what format can be expected to work best as a result. I can guess this a bit, but since the application is so fuzzy, it really sucks to not have absolute clarity where it could just be absolutely clear. And please, don't think you described it by saying what to pick in ooba.
Thank you for reading my rant, please add your own or explain how it works. It is entirely possible I just don't get it.
To end with something constructive, maybe the whole concept of prompt styles should/could go the way of the dodo. I am curious what would happen if one would explicitly use all sorts of *different* (but clear) formatting for the training data. Theoretically I think that might force the model to just understand any way of prompting that would maybe be explained individually at the beginning. That could algorithmically be integrated into training data. I think it might "waste" some of those precious parameters though.
Anyway, if that's already the thought behind it, it would still be good to know. | 2023-05-19T14:23:11 | https://www.reddit.com/r/LocalLLaMA/comments/13lwwux/lets_talk_about_prompt_styles_and_their/ | involviert | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lwwux | false | null | t3_13lwwux | /r/LocalLLaMA/comments/13lwwux/lets_talk_about_prompt_styles_and_their/ | false | false | self | 36 | null |
Full fine-tunes of PyTorch models? | 3 | What tools are folks using to do full fine-tunes (rather than LoRA + merges) of Llama or Vicuna model these days? I can find a lot of easy ones for LoRA+merge (text-generation-webui, [https://github.com/tloen/alpaca-lora](https://github.com/tloen/alpaca-lora)), but haven't found ones for a full fine-tune. | 2023-05-19T14:13:40 | https://www.reddit.com/r/LocalLLaMA/comments/13lwmva/full_finetunes_of_pytorch_models/ | Fun_Tangerine_1086 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lwmva | false | null | t3_13lwmva | /r/LocalLLaMA/comments/13lwmva/full_finetunes_of_pytorch_models/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'UYKFmPP2-DbydEGwkKwhmsl0pQLnyYpsjH81jBvoAiE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=108&crop=smart&auto=webp&s=26736a3bfdaecde95315f2b2f406eb18e314a438', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=216&crop=smart&auto=webp&s=1fb9065808152a959df46bb3fb8d9e00c5c86202', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=320&crop=smart&auto=webp&s=5aff266afb327d1e2feddb7c332736591776d05f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=640&crop=smart&auto=webp&s=1c7de56b41128a8b8f1952e9718023183f0044a7', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=960&crop=smart&auto=webp&s=4ddf3393597ad37407731538c8fb33d8e88b9ca8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?width=1080&crop=smart&auto=webp&s=22bb2a307fd28528f084e345460b825409a43c01', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/e1PIDIPluZiwhzcHre12fX8Sn4HFxRl4QyZRNw0sDTs.jpg?auto=webp&s=a0e1cdb6fa8afba4529a7190d4414b9fd45bb22f', 'width': 1200}, 'variants': {}}]} |
WizardVIcunaAlpacaOasstMegaKoalaGpt4Lora was released | 3 | [removed] | 2023-05-19T14:09:16 | https://huggingface.co/nikitastaf1996/WizardVIcunaAlpacaOasstMegaKoalaGpt4Lora | nikitastaf1996 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 13lwiex | false | null | t3_13lwiex | /r/LocalLLaMA/comments/13lwiex/wizardvicunaalpacaoasstmegakoalagpt4lora_was/ | false | false | default | 3 | null |
Run MPT-7B-Instruct on Google Colab? | 7 | I wanted to know has anyone tried to run MPT-7B-Instruct on Google Colab?
If so can they please share the code? I am running into the tried to allocate more RAM error
If in any other platform (eg. Kaggle) then how? | 2023-05-19T13:48:58 | https://www.reddit.com/r/LocalLLaMA/comments/13lvzkc/run_mpt7binstruct_on_google_colab/ | AdRealistic03 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lvzkc | false | null | t3_13lvzkc | /r/LocalLLaMA/comments/13lvzkc/run_mpt7binstruct_on_google_colab/ | false | false | self | 7 | null |
Training Vicuna based on custom text/web pages | 20 | Hello,
I currently use Vicuna 13B (created delta based on llama). It works well. However, I am interested in training the model / feeding it with custom texts (own private documents). What do you think the best way is to do it? I currently have Oobabooga.
I am a bit new to it, so not sure how to start (a tutorial would be handy if you know a good one).
Thank you. | 2023-05-19T12:26:47 | https://www.reddit.com/r/LocalLLaMA/comments/13lu13k/training_vicuna_based_on_custom_textweb_pages/ | guyromb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lu13k | false | null | t3_13lu13k | /r/LocalLLaMA/comments/13lu13k/training_vicuna_based_on_custom_textweb_pages/ | false | false | self | 20 | null |
Seeking clarification about LLM's, Tools, etc.. for developers. | 20 | As android developer (professionally working 4years\~) and "AI" enthusiast, for long time i wanted to study ML/BigData to enter this world but was held back difficulty a steep learning curve.
Been lurking and using (basic stuff) stabledifussion (A1111) since august, and LLM's since about march, without python and ML libraries experience it is difficult to comprehend (reading source code) all that is going on.
ChatGPT opened a whole new world of possibilities for people like me, and discovering the world open of open source, local models made me quit my job (two weeks ago\~) to pursue my passion, with many ideas for this sector that i wish to execute (AGI is the goal).
note: at my workplace i was the guy (preacher) of the coming of our AI Overlords, the go to guy about questions about this topics, even did a meetup in the style of "We have no MOAT" with deep explanations about NN's, Transformers, LLM, opensource tools, possibilities and future.
the purpose of this note is
1. connect with like minded people
2. i did question chatgpt about before posting
​
"What is my Purpose?": Understand the connections between tools, frameworks, and concepts.
master the use of tools without blindly following tutorials before deep diving into LangChain and similar frameworks. as i want not only make intricate prompts, and finetune with UI, but also understand the concepts allowing them to exist.
Edit: added short answer (21/05/2023)
Questions/Facts (that need clarification):
* GGML is a format for LLM's created to run inference (text completion) on CPU's.
* Its a way to convert Floats to Integers (of parameters), resulting in less precise calculations, but enough for LLM's (not only transformers)
* GPTQ is a format that can be used by GPU's, how to actually use it?
* Same as GGML for model weights, but instead of cutting floats to nearest int,it does the conversion in smart way
* What about ".safetensor" how to use this models?
* format to store data (tensors)
* What is the difference between them? why different formats?
* GGML formatted models are runnable by llama.cpp - **clarification needed**
safetensor - **clarification needed**
* llamacpp is C++ implementation of Llama by facebook, why is it needed?
* it's not from facebook. it's optimized to run GGML models on CPU's
* Oobabooga/KoboldAI is a UI wrapper for llamacpp and \_\_\_\_ ?
* Oobabooga is a UI for running many types of LLM's including llama.cpp
* KobolAI is a fork of llama.cpp with UI
* ~~KoboldAI vs Oobabooga, seems they do exactly the same with different UI.~~
* Where, how, why CUDA, OpenBLAS, CLBlast is used, and how related to each other?
* are all libraries for linear algebra subroutines optimized for different architectures,cuBLAST - nvidiaclBLAS - OpenCL (many gpu's not only nvidia)CLBlast - optimized clBLAS
* How an agent actually run code (tools), how is that "Action: use tool X" -> X(), is it basic code string manipulation that runs over the output?
* more research needed, will answer asap.
* PyTorch, Tensorflow, what, how, why is needed for oobabooga?
* PyTorch is python wrapper for LM library "Torch", a big framework for various ML tasks not only LLM.
* Tensorflow the PyTorch of GoogleBrain and is not compatible. Parallelism is easier implemented on PyTorch than on Tensorflow.
* "Install GPTQ-for-LLaMa and the monkey patch" what is the purpose of this?
* Since GPTQ-for-LLaMa had several breaking updates, that made older models incompatible with newer versions of GPTQ, they are sometimes refering to a certain version of GPTQ-for-LLaMa. So if the notes of a model, or a tutorial tells you to install GPTQ-for-LLaMa with a certain patch, it probably referrs to a commit, which if you know git, you can specifically clone a commit-hash or a feature branch.
credit to /u/_underlines_ and /u/Evening_Ad6637 for answers so far
I will edit this post as more questions will come up.
Hope this community can help and answer my questions, correct my facts as this post could be a starting point for many. | 2023-05-19T11:30:49 | https://www.reddit.com/r/LocalLLaMA/comments/13lssoi/seeking_clarification_about_llms_tools_etc_for/ | Clicker7 | self.LocalLLaMA | 2023-05-21T11:02:07 | 0 | {} | 13lssoi | false | null | t3_13lssoi | /r/LocalLLaMA/comments/13lssoi/seeking_clarification_about_llms_tools_etc_for/ | false | false | self | 20 | null |
Get my GPU involved? | 0 | I'm using koboldcpp-1.23.1 To run GGML models, and my GPU is never involved. Ryzen 7 5700U. Is this something I can fix? Am I doing something obviously wrong? | 2023-05-19T11:25:28 | https://www.reddit.com/r/LocalLLaMA/comments/13lsopu/get_my_gpu_involved/ | Innomen | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lsopu | false | null | t3_13lsopu | /r/LocalLLaMA/comments/13lsopu/get_my_gpu_involved/ | false | false | self | 0 | null |
Can you use LoRA unsupervised? | 1 | I’d now like to add a ‘concept’ or a set of information from a Knowledge Base to an instruct-tuned model. I could set up a series of prompts, but ultimately I don’t want it to reproduce the information, I want the model to understand it and use it creatively.
Is there a way to use LoRA to teach it a concept in an unsupervised way? Like how LLaMA was trained in the first place? All the tutorials I see are prompt-based. | 2023-05-19T11:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/13lsbzj/can_you_use_lora_unsupervised/ | amemingfullife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lsbzj | false | null | t3_13lsbzj | /r/LocalLLaMA/comments/13lsbzj/can_you_use_lora_unsupervised/ | false | false | self | 1 | null |
q4_0, q5_1? | 3 | New learner here! What do these mean in the context of models? I see them all over the place, but I've never seen any explanation.
EDIT: Thanks a lot everybody, now I understand! | 2023-05-19T09:51:33 | https://www.reddit.com/r/LocalLLaMA/comments/13lqua2/q4_0_q5_1/ | qwerty44279 | self.LocalLLaMA | 2023-05-19T13:24:31 | 0 | {} | 13lqua2 | false | null | t3_13lqua2 | /r/LocalLLaMA/comments/13lqua2/q4_0_q5_1/ | false | false | self | 3 | null |
Using iGPU for llama models? | 0 | [deleted] | 2023-05-19T08:35:33 | [deleted] | 2023-05-19T08:52:26 | 0 | {} | 13lpgdf | false | null | t3_13lpgdf | /r/LocalLLaMA/comments/13lpgdf/using_igpu_for_llama_models/ | false | false | default | 0 | null | ||
If you previously had bad results with Stable Vicuna outside of ooba, fix the special_tokens_map... | 19 | https://huggingface.co/CarperAI/stable-vicuna-13b-delta/blob/main/special_tokens_map.json
I'm not sure how this could've happened or if it could indicate some other issue, but bos_token is wrong at least
```
{
"bos_token": "</s>",
"eos_token": "</s>",
"pad_token": "[PAD]",
"unk_token": "</s>"
}
```
should be
```
{
"bos_token": "<s>",
"eos_token": "</s>",
"pad_token": "[PAD]",
"unk_token": "</s>"
}
```
I wonder if this had anything to do with people sleeping on Stable Vicuna | 2023-05-19T08:06:17 | https://www.reddit.com/r/LocalLLaMA/comments/13lowzm/if_you_previously_had_bad_results_with_stable/ | phree_radical | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lowzm | false | null | t3_13lowzm | /r/LocalLLaMA/comments/13lowzm/if_you_previously_had_bad_results_with_stable/ | false | false | self | 19 | {'enabled': False, 'images': [{'id': '0CCfFmTZ60XSoL0dJ_ynCVdsBH-fmk8Xc8-W9nJPLRo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=108&crop=smart&auto=webp&s=b3dcbf82fa79313c69c0fd1509879605b7de7e6b', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=216&crop=smart&auto=webp&s=5ad08519600fc54c1096d3ed7fc28b08a226c3b7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=320&crop=smart&auto=webp&s=abe6f4dba84e9b69df8ec9ec0bd895d9b2bc36cd', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=640&crop=smart&auto=webp&s=6c449dee1f29a55c79119d8e92b7b420edf45cc2', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=960&crop=smart&auto=webp&s=0c39a7620494ab79dfde8c3c2a031eb283d26d83', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?width=1080&crop=smart&auto=webp&s=f737fd6eb21bb1d89e0e7d7849d880c3d2d63561', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/1bUrRYhOdPoMUs38gJ3ZHAcizt_RdcA33DfakExdQj8.jpg?auto=webp&s=4c46ff0e82b80fa085b40ab60da914bc22e4f5af', 'width': 1200}, 'variants': {}}]} |
Where's my new model? | 7 | 2023-05-19T07:27:18 | ArmoredBattalion | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 13lo7gx | false | null | t3_13lo7gx | /r/LocalLLaMA/comments/13lo7gx/wheres_my_new_model/ | true | false | nsfw | 7 | {'enabled': True, 'images': [{'id': 'AAZlZI3So2meQRXQDIIjG0StU43Z70qqGwPFKZd0wvI', 'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&auto=webp&s=45722aac1aa128a8da490930ecdc8525d54fa950', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&auto=webp&s=7de7986b6f0aae722e79b3a5121716fa0bd2072c', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&auto=webp&s=0dd9ffae8fbfd41fb8ad4079cff529d7def5e86a', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?auto=webp&s=eb1b5a8e5f4d235894eadccd94a7644a5a92d414', 'width': 600}, 'variants': {'nsfw': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=0cce187aa108df76e7e308ac317a3273cdc76da1', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3170c19f1423f59fe06fdd01be388d7654db4b08', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=8ad44f506a52f5eb879f74c6d784bb60be7ed1c5', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?blur=40&format=pjpg&auto=webp&s=2dae17d43b667357cc29c56e1367936c7a77729f', 'width': 600}}, 'obfuscated': {'resolutions': [{'height': 144, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=0cce187aa108df76e7e308ac317a3273cdc76da1', 'width': 108}, {'height': 288, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=3170c19f1423f59fe06fdd01be388d7654db4b08', 'width': 216}, {'height': 426, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=8ad44f506a52f5eb879f74c6d784bb60be7ed1c5', 'width': 320}], 'source': {'height': 800, 'url': 'https://preview.redd.it/yq5feiiwpq0b1.jpg?blur=40&format=pjpg&auto=webp&s=2dae17d43b667357cc29c56e1367936c7a77729f', 'width': 600}}}}]} | ||
Looking to selfhost Llama on remote server, could use some help | 3 | My Goal: run 30b GPTQ Openassistant on a remote server with api access.
Link to model: [https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ](https://huggingface.co/TheBloke/OpenAssistant-SFT-7-Llama-30B-GPTQ)
My progess: Docker container running text-gen-webui with --public-api flag on to use it as an api with cloudflared to create a quick tunnel. Everything is working on the remote server the only thing I am having trouble getting the quick tunnel to work.
My Current development setup is throwing a cuda image based docker container on [vast.ai](https://vast.ai) and work with a quick tunnel from cloudflared.(if i get it to work).
My Question: What is everyone using to run these models on remote servers and access them via API. My home desktop setup is too weak to run these kinds of models, so I am interested in both production and development setups. There have to be better solutions out there to selfhost. | 2023-05-19T07:00:55 | https://www.reddit.com/r/LocalLLaMA/comments/13lnpgt/looking_to_selfhost_llama_on_remote_server_could/ | jules241 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lnpgt | false | null | t3_13lnpgt | /r/LocalLLaMA/comments/13lnpgt/looking_to_selfhost_llama_on_remote_server_could/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': '0_MVpXePzAudCyIn3uCjoPJqV69xuDEEw1P6fohf8tE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=108&crop=smart&auto=webp&s=3ed6d1099b03f9a3b3beddf35162b07d2c0ae313', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=216&crop=smart&auto=webp&s=d44e68c4ae0e61fd2ec54c83b18e3c4f5dedee96', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=320&crop=smart&auto=webp&s=21a3ef65b2fae55091cb55d4f17575f02272205d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=640&crop=smart&auto=webp&s=f96a677430862c92840df78ff670c8adac963f38', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=960&crop=smart&auto=webp&s=f99338ee52828dde3c507c1bc7b2db4788654e36', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?width=1080&crop=smart&auto=webp&s=11974c7b9704d28101e1c835d6a1b75e33fc11ef', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/bC3KWSlhdErNo61Ej7icWH7guz2c0GuhkKI0RvumXoE.jpg?auto=webp&s=5d6edde5db3af471073c75c924e4f7add8ff5389', 'width': 1200}, 'variants': {}}]} |
4060 Ti 16GB in July or 3060 12GB now? | 12 | Is the extra 4GB of VRAM worth the wait and the price difference (~100$)? As far as I know both can only run 13B 4bit models. | 2023-05-19T05:56:01 | https://www.reddit.com/r/LocalLLaMA/comments/13lmjjc/4060_ti_16gb_in_july_or_3060_12gb_now/ | regunakyle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lmjjc | false | null | t3_13lmjjc | /r/LocalLLaMA/comments/13lmjjc/4060_ti_16gb_in_july_or_3060_12gb_now/ | false | false | self | 12 | null |
Local model response time? | 0 | How long is a response from a locally downloaded model expected to take. I'm running wizard-mega-13B.ggml.q5\_1.bin from disk and it takes as long as 2 minutes to get a response. | 2023-05-19T02:31:46 | https://www.reddit.com/r/LocalLLaMA/comments/13liju9/local_model_response_time/ | Jl_btdipsbro | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13liju9 | false | null | t3_13liju9 | /r/LocalLLaMA/comments/13liju9/local_model_response_time/ | false | false | self | 0 | null |
Is there a place for stupid questions? | 3 | [deleted] | 2023-05-19T01:17:57 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13lgwog | false | null | t3_13lgwog | /r/LocalLLaMA/comments/13lgwog/is_there_a_place_for_stupid_questions/ | false | false | default | 3 | null | ||
What is the smallest ggml model available? | 52 | I'm a bit obsessed with the idea that we can have an LLM “demoscene” but with small models, and I already tried a few 1B fresh models, but I want to go even smaller. Have anyone seen ggml models less than 1B? I want to evaluate their performance. | 2023-05-18T22:57:47 | https://www.reddit.com/r/LocalLLaMA/comments/13ldnlw/what_is_the_smallest_ggml_model_available/ | Shir_man | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13ldnlw | false | null | t3_13ldnlw | /r/LocalLLaMA/comments/13ldnlw/what_is_the_smallest_ggml_model_available/ | false | false | self | 52 | null |
Equipment needed to run Dolly 2.0 3b model? | 1 | I'm trying to build a very simple chat bot for a business that needs to be able to understand and talk about a few hundred lines of data. It doesn't need to know about any external data, and would even be preferable not to, but it would be great if I can fine tune the model to learn information fed to it from the user.
Dolly 2.0 seems interesting as it's open source and available for commercial use, and I imagine for my use case the 3b model may be acceptable?
If so, what sort of hardware would I need if I want to be able to process at least 10 or so tokens per second? I've heard of people running the 7b model with a 3060, but wasn't able to find out how well it ran at those specs, or how different the 3b model is in terms of both specs and ability.
Is there a way to find this out before I potentially spend a bunch of money on new hardware? If I could get it running with regular consumer hardware, is it going to be extremely slow? Or would running a 3b model, and/or limiting the data it needs to know allow me to run it locally at a somewhat conversational speed (Think the speed of GPT4)? | 2023-05-18T22:04:40 | https://www.reddit.com/r/LocalLLaMA/comments/13lccm4/equipment_needed_to_run_dolly_20_3b_model/ | TheNomadicAspie | self.LocalLLaMA | 2023-05-18T22:09:02 | 0 | {} | 13lccm4 | false | null | t3_13lccm4 | /r/LocalLLaMA/comments/13lccm4/equipment_needed_to_run_dolly_20_3b_model/ | false | false | self | 1 | null |
Manticore 13B - updated model from OpenAccess AI Collective | 120 | # OpenAccess AI Collective have just released Manticore 13B
An updated version of Wizard Mega 13B
Available for instant demo on a GGML-powered space at:
[https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml](https://huggingface.co/spaces/openaccess-ai-collective/manticore-ggml)
Available for local use as GGML and GPTQ quantisations at:
[https://huggingface.co/TheBloke/Manticore-13B-GPTQ](https://huggingface.co/TheBloke/Manticore-13B-GPTQ)
[https://huggingface.co/TheBloke/Manticore-13B-GGML](https://huggingface.co/TheBloke/Manticore-13B-GGML)
Full details in their repo at: [https://huggingface.co/openaccess-ai-collective/manticore-13b](https://huggingface.co/openaccess-ai-collective/manticore-13b)
# Manticore 13B - Preview Release (previously Wizard Mega)
Manticore 13B is a Llama 13B model fine-tuned on the following datasets:
* [**ShareGPT**](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) \- based on a cleaned and de-suped subset
* [**WizardLM**](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
* [**Wizard-Vicuna**](https://huggingface.co/datasets/ehartford/wizard_vicuna_70k_unfiltered)
* [**subset of QingyiSi/Alpaca-CoT for roleplay and CoT**](https://huggingface.co/QingyiSi/Alpaca-CoT)
* [**GPT4-LLM-Cleaned**](https://huggingface.co/datasets/teknium/GPT4-LLM-Cleaned)
* [**GPTeacher-General-Instruct**](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
* ARC-Easy & ARC-Challenge - instruct augmented for detailed responses
* mmlu: instruct augmented for detailed responses subset including
* abstract\_algebra
* conceptual\_physics
* formal\_logic
* high\_school\_physics
* logical\_fallacies
* [**hellaswag**](https://huggingface.co/datasets/hellaswag) \- 5K row subset of instruct augmented for concise responses
* [**metaeval/ScienceQA\_text\_only**](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) \- instruct for concise responses
* [**openai/summarize\_from\_feedback**](https://huggingface.co/datasets/openai/summarize_from_feedback) \- instruct augmented tl;dr summarization | 2023-05-18T21:49:01 | https://www.reddit.com/r/LocalLLaMA/comments/13lbyiw/manticore_13b_updated_model_from_openaccess_ai/ | The-Bloke | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lbyiw | false | null | t3_13lbyiw | /r/LocalLLaMA/comments/13lbyiw/manticore_13b_updated_model_from_openaccess_ai/ | false | false | self | 120 | {'enabled': False, 'images': [{'id': '_CArfIRMSglzqoNebT4bvXRqZjSX6dMbq8siIyBeSlQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=108&crop=smart&auto=webp&s=14116147869d4de3e20df891cd959520caa7e65c', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=216&crop=smart&auto=webp&s=84215b2cc08a9f7187f84e2393953e3e9f825b3e', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=320&crop=smart&auto=webp&s=5accfc8157b78c4e1273b037363e99eadabfb77e', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=640&crop=smart&auto=webp&s=76fdba486515f64c325cde58ed60905285606d76', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=960&crop=smart&auto=webp&s=bd88bad1f4d279a4df2ae01d712c822853eba68a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?width=1080&crop=smart&auto=webp&s=6a6c97c58469a03cda215ca4639932a8f70ba147', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/T2ckplnJRCJt6Sx0qCx7mRAsqA-IsyuVVEbcMhHmrJw.jpg?auto=webp&s=7eb416108823b2805e4912b9bda04b94475ee677', 'width': 1200}, 'variants': {}}]} |
Possibility of deploying local LLM on 7 yr old laptop | 9 | Hi,
I have 7 yr old Thinkpad P50 laptop with following specs:
• Processor: Intel Core i7-6820HQ processor (8MB Cache, Up to 3.6 Ghz)
• Operating System: Windows 10 Pro 64/Ubuntu 18
• Display: 15.6 FHD(1920x1080) IPS Non-Touch
• Memory: 64GB(16x4) DDR4 2133MHz SoDIMM
• Graphic Card: NVIDIA Quadro M1000M 2GB
• Base: P50 NVIDIA Quadro M1000M 2GB,Intel Core i7-6820HQ processor (8MB Cache, up to 3.6 GHz)
Would I be able to use laptop to deploy local LLM?
If so, what would be recommendations?
My use case is to use LLM for document querying(pdfs) similar to ChatGPT or filechat.io which uses openapi .
Thanks! | 2023-05-18T21:02:24 | https://www.reddit.com/r/LocalLLaMA/comments/13lasa6/possibility_of_deploying_local_llm_on_7_yr_old/ | mindseye73 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lasa6 | false | null | t3_13lasa6 | /r/LocalLLaMA/comments/13lasa6/possibility_of_deploying_local_llm_on_7_yr_old/ | false | false | self | 9 | null |
Pygmalion has released the new Pygmalion 13B and Metharme 13B! These are LLaMA based models for chat and instruction. | 185 | [removed] | 2023-05-18T20:57:07 | https://www.reddit.com/r/LocalLLaMA/comments/13lan4q/pygmalion_has_released_the_new_pygmalion_13b_and/ | Creative-Rest-2112 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13lan4q | false | null | t3_13lan4q | /r/LocalLLaMA/comments/13lan4q/pygmalion_has_released_the_new_pygmalion_13b_and/ | false | false | default | 185 | null |
How to use MetaIX/GPT4-X-Alpasta-30b-4bit with oobabooga ? | 2 | [removed] | 2023-05-18T19:48:46 | https://www.reddit.com/r/LocalLLaMA/comments/13l8vni/how_to_use_metaixgpt4xalpasta30b4bit_with/ | karljoaquin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 13l8vni | false | null | t3_13l8vni | /r/LocalLLaMA/comments/13l8vni/how_to_use_metaixgpt4xalpasta30b4bit_with/ | false | false | default | 2 | null |
Help with Random Characters and Words on Output | 1 | [deleted] | 2023-05-18T18:55:33 | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 13l7hk8 | false | null | t3_13l7hk8 | /r/LocalLLaMA/comments/13l7hk8/help_with_random_characters_and_words_on_output/ | false | false | default | 1 | null | ||
Introducing TokenHawk: Local Llama Inference in WebGPU | 44 | 2023-05-18T17:20:36 | https://github.com/kayvr/token-hawk | kayvr | github.com | 1970-01-01T00:00:00 | 0 | {} | 13l4z2e | false | null | t3_13l4z2e | /r/LocalLLaMA/comments/13l4z2e/introducing_tokenhawk_local_llama_inference_in/ | false | false | 44 | {'enabled': False, 'images': [{'id': '8CZtAnMJkXfTRQHPKKSrVsh6Mmb5cAAzsVRnNi9Zs38', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=108&crop=smart&auto=webp&s=ccdcccd54e15125e5a437d711585fb0dba3d707d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=216&crop=smart&auto=webp&s=04a15008e9c532ad02acf443786138e065cc4a40', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=320&crop=smart&auto=webp&s=2f032f2dd28d338049450c21f5d62a5bb3144af0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=640&crop=smart&auto=webp&s=14a6c350e33c6efa70f87ed9e0fe755d56806eff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=960&crop=smart&auto=webp&s=610c3f4c704a8954ce143c98e617e7cb12e0fc99', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?width=1080&crop=smart&auto=webp&s=38e729b9b6ceeaaa1a7fc808472ef719783e7456', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/S3ftbHh44BeTjBgKAp9PeJHQCHJP2VKYrsBGRGYccQk.jpg?auto=webp&s=466e4ec2dc926182acc7b5fc6f5a1634f4b7e7b8', 'width': 1200}, 'variants': {}}]} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.