| title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
What solution would best suit a SaaS for reading and answering questions about data from PDF files uploaded by users | 10 | Hi all. I have developed a SaaS over a long six-year process, all of my own blood and sweat. It's been a little hard on me mentally, financially, and honestly a little physically.
With that sob story over: I was looking into ChatGPT but found this sub. Basically, in the construction industry (government or other high-level work) there are PDFs called specifications. They describe various things, like which paint to use, which cement to use, and other requirements such as mockups for brickwork before the actual brickwork is done, etc.
I was wondering, with little to zero knowledge of this newfound ability to chat with PDFs: is it possible to have a user upload PDFs and then converse with them?
I hope I am making sense in even asking this and that it is not against the rules. | 2023-07-04T04:46:42 | https://www.reddit.com/r/LocalLLaMA/comments/14q53wj/what_solution_would_best_suite_a_saas_for_reading/ | shakespear94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q53wj | false | null | t3_14q53wj | /r/LocalLLaMA/comments/14q53wj/what_solution_would_best_suite_a_saas_for_reading/ | false | false | self | 10 | null |
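What's being described here is usually built as retrieval-augmented generation (RAG): extract the PDF text, split it into chunks, embed the chunks, and at question time retrieve the most relevant chunks and hand them to the LLM as context. The snippet below is only an illustrative, untested sketch of that flow; the `pypdf` and `sentence-transformers` packages, the embedding model, and the file name are assumptions not taken from the post, and the final prompt still has to be sent to whatever LLM backend is chosen.

```python
import numpy as np
from pypdf import PdfReader
from sentence_transformers import SentenceTransformer

def load_chunks(pdf_path, chunk_size=1000):
    """Extract text from a PDF and split it into fixed-size character chunks."""
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # example embedding model
chunks = load_chunks("specification.pdf")           # hypothetical user upload
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def build_prompt(question, top_k=3):
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    # Cosine similarity reduces to a dot product on normalized vectors.
    best = np.argsort(chunk_vecs @ q_vec)[::-1][:top_k]
    context = "\n---\n".join(chunks[i] for i in best)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("Which cement should be used for the foundation?"))
# Pass the returned prompt to whatever local or hosted LLM the SaaS uses.
```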
Why isn't QLoRA being used more widely for fine-tuning models? | 30 | Guanaco 33B and 65B are nearly at the top of the LLM leaderboards and were fine-tuned using it.
Link to the paper:
[QLORA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/pdf/2305.14314.pdf)
GPT-4's bullet points for the abstract:
**QLoRA:**
- Efficient finetuning approach that **reduces memory usage for finetuning a 65B parameter model on a single 48GB GPU** while maintaining full 16-bit finetuning task performance.
- Backpropagates gradients through a **frozen, 4-bit quantized pretrained language model into Low Rank Adapters (LoRA)**.
**Guanaco (Model Family):**
- **Outperforms all other openly released models on the Vicuna benchmark**, achieving 99.3% of ChatGPT's performance level. This only requires 24 hours of finetuning on a single GPU.
**Innovations by QLoRA:**
- **NF4 (4-bit NormalFloat)**, a new data type that is information theoretically optimal for normally distributed weights.
- **Double quantization** that reduces the average memory footprint by quantizing the quantization constants.
- **Paged optimizers** to manage memory spikes.
**Additional Points:**
- QLoRA was used to **finetune over 1,000 models**, providing detailed analysis of instruction following and chatbot performance.
- **QLoRA finetuning on a small high-quality dataset can lead to state-of-the-art results**, even when using smaller models than the previous SoTA.
- A detailed analysis of **chatbot performance based on human and GPT-4 evaluations** is provided.
- Current chatbot benchmarks are found to be **unreliable for accurately evaluating chatbot performance levels**.
- **All models and code, including CUDA kernels for 4-bit training, have been released.** | 2023-07-04T04:42:49 | https://www.reddit.com/r/LocalLLaMA/comments/14q51cf/why_isnt_qlora_being_used_more_widely_for_fine/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q51cf | false | null | t3_14q51cf | /r/LocalLLaMA/comments/14q51cf/why_isnt_qlora_being_used_more_widely_for_fine/ | false | false | self | 30 | null |
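For anyone who wants to try the recipe, a minimal QLoRA setup with the Hugging Face stack (`transformers`, `peft`, `bitsandbytes`) looks roughly like the sketch below, mirroring the paper's NF4 and double-quantization settings. The base model name and LoRA hyperparameters are illustrative assumptions, not values from the post or the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "openlm-research/open_llama_7b"  # example base model, not from the post

# 4-bit NF4 quantization with double quantization, as described in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are trained while the 4-bit base weights stay frozen.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with the standard transformers Trainer or the TRL SFTTrainer.
```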
CPU RAM only speeds on 65B? | 14 | Some desktop setups can have 128 GB of RAM. Does anyone here have t/s reports with a higher-end CPU, to help determine whether this is viable for 65B and extended context lengths?
To me, reaching 7 t/s is fast enough. It is a comfortable speed when reading the token stream, and the reading pace allows time for a thoughtful response and critique.
20 t/s is the speed of a commercial online AI stream. It seems very useful for fast code output (a 15B coding model could summarize large GitHub projects). This is the speed of 33B on a 3090.
I don't think many people are trying pure cpu+ram. But if you're interested in a very long conversation about an article, paper, codebase.. wouldn't this be a good (and slower) private gpt-4 substitute for the home, or business? | 2023-07-04T04:07:39 | https://www.reddit.com/r/LocalLLaMA/comments/14q4d0a/cpu_ram_only_speeds_on_65b/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q4d0a | false | null | t3_14q4d0a | /r/LocalLLaMA/comments/14q4d0a/cpu_ram_only_speeds_on_65b/ | false | false | self | 14 | null |
OpenLLaMA vs RedPajama-INCITE | 1 | What is the difference between the OpenLLaMA models and the RedPajama-INCITE family of models?
My understanding is that they are just done by different teams trying to achieve similar goals: use the open RedPajama dataset and train with the same methods as LLaMA, or as close as possible. | 2023-07-04T02:32:52 | https://www.reddit.com/r/LocalLLaMA/comments/14q2epk/open_llama_vs_red_pajama_incite/ | hungrydit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q2epk | false | null | t3_14q2epk | /r/LocalLLaMA/comments/14q2epk/open_llama_vs_red_pajama_incite/ | false | false | self | 1 | null |
ONNX to run LLM | 0 | What do people think of converting LLMs to ONNX and then running them anywhere?
Has this been done by others already? Or why is this a bad idea?
I am thinking of using:
[https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
and maybe inference it on low resource devices, through ONNX runtime. | 2023-07-04T02:19:27 | https://www.reddit.com/r/LocalLLaMA/comments/14q24n7/onnx_to_run_llm/ | hungrydit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q24n7 | false | null | t3_14q24n7 | /r/LocalLLaMA/comments/14q24n7/onnx_to_run_llm/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'RuAhlMmaFG-qNGEMgQSkw7rAep3HKETLyxZkX-RCAWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=108&crop=smart&auto=webp&s=60abf9f276c7cc5dc2d84bde19fdbf4e939a4d54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=216&crop=smart&auto=webp&s=d3a348048706c82e99b1d001e7826c010206f6a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=320&crop=smart&auto=webp&s=c5c506e27b1ff24e14905584f130db06b55801e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=640&crop=smart&auto=webp&s=97a925b9c36e0eeee9802cc9017d41b83223a39b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=960&crop=smart&auto=webp&s=3d54cd8efcf423f421225ca86fab81c729e16f7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=1080&crop=smart&auto=webp&s=394c5766e1264abc1d4c66a2038bdc0376715413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?auto=webp&s=fb8c88d0a1321953f4ba32c5ce47dffeb1cbdb17', 'width': 1200}, 'variants': {}}]} |
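For what it's worth, Hugging Face's `optimum` library can export many decoder-only models to ONNX and run them with ONNX Runtime. Whether this 3B GPT-NeoX-style checkpoint exports cleanly depends on the optimum/onnxruntime versions installed, so the snippet below is an untested sketch rather than a confirmed recipe.

```python
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("<human>: What is ONNX?\n<bot>:", max_new_tokens=64)[0]["generated_text"])

# model.save_pretrained("redpajama-3b-onnx")  # optionally reuse the exported graph later
```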
embedding from RedPajama INCITE chat 3B | 1 | Any suggestions on how to get embeddings?
I plan to use the [**RedPajama-INCITE-Chat-3B-v1**](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1) model. [https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3B-v1)
I want to perform tasks similar to what can be done with OpenAI's [embeddings API endpoint](https://platform.openai.com/docs/api-reference/embeddings) for ChatGPT.
I would like to do **Search** (where results are ranked by relevance to a query string).
Any pointers on how I might start would be great, thanks!
I found the following article:[https://medium.com/@ryanntk/choosing-the-right-embedding-model-a-guide-for-llm-applications-7a60180d28e3](https://medium.com/@ryanntk/choosing-the-right-embedding-model-a-guide-for-llm-applications-7a60180d28e3)
I guess I should look into LlamaIndex, and calculate the embeddings through that. | 2023-07-04T02:06:11 | https://www.reddit.com/r/LocalLLaMA/comments/14q1uv9/embedding_from_redpajama_incite_chat_3b/ | hungrydit | self.LocalLLaMA | 2023-07-04T02:14:29 | 0 | {} | 14q1uv9 | false | null | t3_14q1uv9 | /r/LocalLLaMA/comments/14q1uv9/embedding_from_redpajama_incite_chat_3b/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'RuAhlMmaFG-qNGEMgQSkw7rAep3HKETLyxZkX-RCAWg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=108&crop=smart&auto=webp&s=60abf9f276c7cc5dc2d84bde19fdbf4e939a4d54', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=216&crop=smart&auto=webp&s=d3a348048706c82e99b1d001e7826c010206f6a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=320&crop=smart&auto=webp&s=c5c506e27b1ff24e14905584f130db06b55801e4', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=640&crop=smart&auto=webp&s=97a925b9c36e0eeee9802cc9017d41b83223a39b', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=960&crop=smart&auto=webp&s=3d54cd8efcf423f421225ca86fab81c729e16f7d', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?width=1080&crop=smart&auto=webp&s=394c5766e1264abc1d4c66a2038bdc0376715413', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/fJwGdbINL2jwDJJH6-T59tqzP6nTxzU6rPeGCLmOj_g.jpg?auto=webp&s=fb8c88d0a1321953f4ba32c5ce47dffeb1cbdb17', 'width': 1200}, 'variants': {}}]} |
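One low-level way to get embeddings out of a decoder-only chat model like this is to mean-pool its hidden states, although in practice a dedicated embedding model (for example a sentence-transformers checkpoint) usually ranks search results better. The snippet below is an illustrative, untested sketch of the mean-pooling approach with `transformers`; nothing in it comes from the original post.

```python
import torch
from transformers import AutoTokenizer, AutoModel

model_id = "togethercomputer/RedPajama-INCITE-Chat-3B-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # GPT-NeoX tokenizers ship without a pad token
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

@torch.no_grad()
def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
    hidden = model(**batch).last_hidden_state              # [batch, seq, dim]
    mask = batch["attention_mask"].unsqueeze(-1)            # ignore padding tokens
    vecs = (hidden * mask).sum(1) / mask.sum(1)             # mean pooling over real tokens
    return torch.nn.functional.normalize(vecs, dim=-1)

docs = ["LLaMA is a family of language models.", "Bananas are rich in potassium."]
query = embed(["What is LLaMA?"])
scores = embed(docs) @ query.T                               # cosine similarity on unit vectors
print(sorted(zip(scores.squeeze(-1).tolist(), docs), reverse=True))
```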
Help understanding CPU inference options (Llama, HF, etc) | 5 | Greetings,
Ever since I started playing with orca-3b, I've been on a quest to figure out the best way to get this running on my team's MacBooks (Intel, no GPU). I've played with things like GPT4All, which do a great job, but I ultimately want to build my own interface to all of this, as the others just seem to fall short of what I really want.
What I'm struggling to understand is my various options. Assume the model is "orca-mini-3b.ggmlv3.q4_0.bin" and I want a way to load/unload it at will (click a button), so someone can load it, do what they need, and then unload it if they need to free up resources.
It seems like my options are "transformers" (via HuggingFace), llama.cpp, ExLlama(?), and I'm sure a few others. I wanted to start by just running some benchmarks, but I'm not even sure I'm comparing apples to apples. With the prerequisite that my solution has to work for commercial use, am I right in trying:
* Transformers (Huggingface) - Can this even do CPU inference?
* Llama.cpp
* ExLlama?
And if I do get this working with one of the above, I assume the way I interact with Orca (the actual prompt I send) would be formatted the same way?
Lastly, I'm still confused if I can actually use llama.cpp for commercial use. I know I can't use the llama models, but orca seems to be just fine for commercial use.
I'd greatly appreciate any help in getting "unstuck" here. | 2023-07-04T00:55:48 | https://www.reddit.com/r/LocalLLaMA/comments/14q0dva/help_understanding_cpu_inference_options_llama_hf/ | SigmaSixShooter | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q0dva | false | null | t3_14q0dva | /r/LocalLLaMA/comments/14q0dva/help_understanding_cpu_inference_options_llama_hf/ | false | false | self | 5 | null |
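Since that file is a GGML model, one route that is commercial-friendly (llama.cpp and its Python bindings are MIT-licensed) is `llama-cpp-python`, which runs orca-mini on CPU and makes load/unload trivial. The sketch below is illustrative and untested; the orca-mini `### System / ### User / ### Response` template reflects common usage and should be checked against the model card.

```python
from llama_cpp import Llama

llm = None  # module-level handle so a button can load/unload the model

def load_model(path="orca-mini-3b.ggmlv3.q4_0.bin", threads=8):
    global llm
    llm = Llama(model_path=path, n_ctx=2048, n_threads=threads)

def unload_model():
    global llm
    llm = None  # dropping the reference lets the weights be freed from RAM

def ask(question):
    prompt = (
        "### System:\nYou are a helpful assistant.\n\n"
        f"### User:\n{question}\n\n### Response:\n"
    )
    out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### User:"])
    return out["choices"][0]["text"]

load_model()
print(ask("Summarize what a GGML quantized model is."))
unload_model()
```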
Chatbot Prompt Chaining | 6 | I've been trying to figure out some way to have a chat frontend (I've been using Silly Tavern, but open to anything) in which I can chain prompts together to try to have a more compelling, coherent character. Something like:
1. Feed in the chat and instruct the model to provide Char's thoughts and goals based on their personality and details.
2. Determine how Char would likely respond and act by feeding in the chat and the results from (1).
3. Instruct the model to provide the next response, accounting for (1) and (2).
Kind of at a loss figuring out how to set something like this up, it seems there are some tools that allow for some prompt chaining, but they all work with just OpenAI and aren't suited for a chat format. Anybody had any luck with something like this? | 2023-07-04T00:50:38 | https://www.reddit.com/r/LocalLLaMA/comments/14q09xu/chatbot_prompt_chaining/ | Inevitable_Command58 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14q09xu | false | null | t3_14q09xu | /r/LocalLLaMA/comments/14q09xu/chatbot_prompt_chaining/ | false | false | self | 6 | null |
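Absent an off-the-shelf tool, the chain itself is a few lines of glue code around whatever backend is in use. The sketch below is purely illustrative: `generate()` is a placeholder for the local model or API of choice, and the prompt wording is invented.

```python
def generate(prompt: str) -> str:
    """Placeholder: call whatever local backend is in use and return its completion."""
    raise NotImplementedError("wire this up to the model or API of choice")

def next_reply(character_card: str, chat_log: str) -> str:
    # Step 1: inner thoughts and goals, given personality and the chat so far.
    thoughts = generate(
        f"{character_card}\n\nChat so far:\n{chat_log}\n\n"
        "Describe this character's current private thoughts and goals in 2-3 sentences."
    )
    # Step 2: a plan for how the character would act, conditioned on step 1.
    plan = generate(
        f"{character_card}\n\nChat so far:\n{chat_log}\n\nThoughts: {thoughts}\n\n"
        "Briefly describe how the character is likely to respond and act next."
    )
    # Step 3: the actual in-character reply, conditioned on both intermediate results.
    return generate(
        f"{character_card}\n\nChat so far:\n{chat_log}\n\n"
        f"Thoughts: {thoughts}\nPlan: {plan}\n\n"
        "Write the character's next chat message, staying in character."
    )
```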
Apple's Metal is getting bfloat16 support | 77 | 2023-07-03T23:58:14 | https://developer.apple.com/videos/play/wwdc2023/10050/?time=590 | MrBeforeMyTime | developer.apple.com | 1970-01-01T00:00:00 | 0 | {} | 14pz4v0 | false | null | t3_14pz4v0 | /r/LocalLLaMA/comments/14pz4v0/apples_metal_is_getting_bfloat16_support/ | false | false | 77 | {'enabled': False, 'images': [{'id': 'MLctQqRIC6f-5lmr9MJ5sJ4ninEzepAYZI-3YYtzJAA', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=108&crop=smart&auto=webp&s=1e1fd8556ae8c072c063e49269abeefbd715bdd4', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=216&crop=smart&auto=webp&s=f559d09f3af48796b10fb1070b1e3520b86efece', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?width=320&crop=smart&auto=webp&s=58522f17f52753f60f532c91aa5414b9fa9a3419', 'width': 320}], 'source': {'height': 282, 'url': 'https://external-preview.redd.it/lFQ02aCxxhBWNYXaCtrGSqy6OOPQDD9l0zgJjO8rUDE.jpg?auto=webp&s=fdea53083ac870e320ea3043297eaaab4d324e6d', 'width': 500}, 'variants': {}}]} | ||
Translate a model by fine-tuning it on its own dataset? | 2 | Hi. I was thinking about picking up a small model, translating its dataset to another language, and then fine-tuning the model on that translated dataset, in the hope that it would then be more usable in that language.
Would that work? Or would the model get confused because the fine tune is as big as itself? | 2023-07-03T21:48:28 | https://www.reddit.com/r/LocalLLaMA/comments/14pvzdf/translate_a_model_by_fine_tuning_it_on_its_own/ | ChobPT | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pvzdf | false | null | t3_14pvzdf | /r/LocalLLaMA/comments/14pvzdf/translate_a_model_by_fine_tuning_it_on_its_own/ | false | false | self | 2 | null |
What model is best for text classification these days? | 3 | What model or what kind of model is best for text classification? | 2023-07-03T21:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/14pv20y/what_model_is_best_for_text_classification_these/ | gi_beelzebub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pv20y | false | null | t3_14pv20y | /r/LocalLLaMA/comments/14pv20y/what_model_is_best_for_text_classification_these/ | false | false | self | 3 | null |
gptd - A shared chat systemd service (and other things too) | 2 | 2023-07-03T21:03:06 | https://github.com/cbigger/gptd | Otherwise-Poet-4362 | github.com | 1970-01-01T00:00:00 | 0 | {} | 14puru0 | false | null | t3_14puru0 | /r/LocalLLaMA/comments/14puru0/gptd_a_shared_chat_systemd_service_and_other/ | false | false | default | 2 | {'enabled': False, 'images': [{'id': 'CUFwTko8DYy4TtgAGUh_9pWnjKXVmK9aOjJP-O8IYj8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=108&crop=smart&auto=webp&v=enabled&s=eb7f57a65b774a1c758710fb9e02d5a8246612d1', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=216&crop=smart&auto=webp&v=enabled&s=52be5fb1221cb78afaeb5ea831b777798e962565', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ef71150b57a84344feb15831f1d2cb0730d1710b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d1c34694bd77a63a53ab7fabc059267aceedb6be', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=960&crop=smart&auto=webp&v=enabled&s=b1994b008da1478cb7484fe68e4319678ddc3c8f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=26a117e68637382ac1c70b4b3603e3a15db46693', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/kaLX5ULzU_FGe-9I2t-2NodyAgKEzQEaHPngFA4XjFw.jpg?auto=webp&v=enabled&s=36cf3a5d5787bed95512d37ea44e78fa80d0964f', 'width': 1200}, 'variants': {}}]} | |
What model do I use? | 0 | I've been wanting to make a tool that'll write a summary/article when given a cricket scorecard (it's a sport, if you don't know). I'm not sure if this is even feasible. I know that I have to work with an LLM of some sort, but I don't know which one or how to even start with it.
And the thing is, I only have a laptop with a Ryzen 7 4800H (8c/16t; I've only seen it boost to 3.6 GHz on all cores), a 1650, and 16 GB of RAM. So I'm pretty sure, after browsing the sub for a while, that a 7B or a 3B model is what I'm going to go with, but how do I tune it to my task?
I've worked with t5-small before for summarisation tasks, but that's as far as my experience with LLMs or transformers goes.
I just need some pointers on how to proceed, it'll be really helpful. | 2023-07-03T20:42:07 | https://www.reddit.com/r/LocalLLaMA/comments/14pu78y/what_model_do_i_use/ | tiredskater | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pu78y | false | null | t3_14pu78y | /r/LocalLLaMA/comments/14pu78y/what_model_do_i_use/ | false | false | default | 0 | null |
Can anyone send me a good tutorial on how to make a LoRA? | 11 | I have tried to find good videos, and the only one I found with clear instructions didn't work for me, so now I'm a bit stumped on what to do. | 2023-07-03T20:15:14 | https://www.reddit.com/r/LocalLLaMA/comments/14pti9p/can_anyone_send_me_a_good_tutorial_on_how_to_make/ | Creative-Bag-8321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pti9p | false | null | t3_14pti9p | /r/LocalLLaMA/comments/14pti9p/can_anyone_send_me_a_good_tutorial_on_how_to_make/ | false | false | self | 11 | null |
How far can we get with RLHF? | 15 | Based on the work of Databricks on Dolly 2.0, I have been wondering how much upside we can get from training open-source foundation models on well-curated prompt-response pairs. Specifically, I was wondering: if we could design an incentive mechanism to crowd-source a VERY large and high-quality prompt-response pair database (perhaps with a focus on code initially?)
1. How useful / valuable would such a database be?
2. Any way to quantify the expected benefits / performance upside from constructing and training an open source LLM on such a database?
3. What would be the main challenges to overcome? (sheer size / number of prompt/response pairs required, incentive design, preventing ai-generated data input, quality control, compute requirements...)
4. Is anyone working on building open source prompt-response pair databases like this at the moment?
Quite new in the space, so probably way out of my depth here, but any thoughts / links would be very much appreciated! Thanks! | 2023-07-03T19:52:46 | https://www.reddit.com/r/LocalLLaMA/comments/14psxl4/how_far_can_we_get_with_rlhf/ | Most-Procedure-2201 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14psxl4 | false | null | t3_14psxl4 | /r/LocalLLaMA/comments/14psxl4/how_far_can_we_get_with_rlhf/ | false | false | self | 15 | null |
What's the path to fastest local inference? | 14 | I'm deploying various LLMs locally and my use case does not allow for streaming responses. My aim is to get the fastest response possible, at the cost of sacrificing (some) quality. I'm looking to use models >= 7B.
What kind of software and hardware techniques will decrease inference latency? Will bigger GPUs help me here? Perhaps greedy decoding? Model quantization? Model sharding across multiple GPUs with something like accelerate?
Thanks for the advice. | 2023-07-03T17:55:04 | https://www.reddit.com/r/LocalLLaMA/comments/14ppza6/whats_the_path_to_fastest_local_inference/ | n3utrino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ppza6 | false | null | t3_14ppza6 | /r/LocalLLaMA/comments/14ppza6/whats_the_path_to_fastest_local_inference/ | false | false | self | 14 | null |
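One way to act on the greedy-decoding and quantization ideas in plain `transformers` is sketched below; it is illustrative and untested, the model name is a placeholder, and for larger latency wins people typically move to dedicated engines (ExLlama, llama.cpp, vLLM) or smaller quantized checkpoints rather than tuning plain transformers.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openlm-research/open_llama_7b"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision halves memory traffic
    device_map="auto",           # shard across available GPUs via accelerate
)

inputs = tokenizer("Summarize the benefits of greedy decoding:", return_tensors="pt").to(model.device)
with torch.inference_mode():
    out = model.generate(
        **inputs,
        max_new_tokens=128,      # cap output length; latency scales with tokens generated
        do_sample=False,         # greedy decoding: deterministic, no sampling overhead
        num_beams=1,
    )
print(tokenizer.decode(out[0], skip_special_tokens=True))
```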
In light of the recent advancements in superhot long-context models, what's the current top choice for coding models? | 4 | I've been keeping up with the latest developments in superhot long-context models, and I'm curious to know which coding model is currently considered the best option. With the rapid progress in natural language processing, there are now several impressive models available for coding tasks.
So, I was wondering if anyone could shed some light on the current go-to coding model that developers are using? Whether it's for generating code, code completion, or any other coding-related tasks, I'm eager to hear your thoughts and recommendations.
It would be great if you could also share any personal experiences or insights you've gained from using these models in your coding projects. Your input would be highly appreciated!
Looking forward to some informative discussions. Thanks in advance! | 2023-07-03T16:09:27 | https://www.reddit.com/r/LocalLLaMA/comments/14pn96o/in_light_of_the_recent_advancements_in_superhot/ | fpena06 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pn96o | false | null | t3_14pn96o | /r/LocalLLaMA/comments/14pn96o/in_light_of_the_recent_advancements_in_superhot/ | false | false | self | 4 | null |
Upgrade to 3x3090? | 5 | After running 2x3090 for some months (Threadripper 1600w PSU) it feels like I need to upgrade my LLM computer to do things like qlora fine tune of 30b models with over 2k context, or 30b models at 2k with a reasonable speed.
Do you think my next upgrade should be adding a third 3090? How will I fit the 3rd one into my Fractal meshify case? | 2023-07-03T16:08:35 | https://www.reddit.com/r/LocalLLaMA/comments/14pn8g9/upgrade_to_3x3090/ | xynyxyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pn8g9 | false | null | t3_14pn8g9 | /r/LocalLLaMA/comments/14pn8g9/upgrade_to_3x3090/ | false | false | self | 5 | null |
Multi-threaded GGML Model Downloader with CLI & python API | 10 | 2023-07-03T16:08:04 | https://github.com/the-crypt-keeper/ggml-downloader | kryptkpr | github.com | 1970-01-01T00:00:00 | 0 | {} | 14pn7xl | false | null | t3_14pn7xl | /r/LocalLLaMA/comments/14pn7xl/multithreaded_ggml_model_downloader_with_cli/ | false | false | 10 | {'enabled': False, 'images': [{'id': '62olP8l_L-wA3MU1fGeqBJyoh7_1tEtttd-s3a1NkrI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=108&crop=smart&auto=webp&s=023d3b22a74c6be21bfb462ac5b10317ea11b260', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=216&crop=smart&auto=webp&s=448c9dfa451b2aec859be41663e4a5a735edd930', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=320&crop=smart&auto=webp&s=de113171ce44d208acd4b4287b8a5b00875d069b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=640&crop=smart&auto=webp&s=94aea5e1ee75a7757f7536b2d92744284596b066', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=960&crop=smart&auto=webp&s=3775d315fd7b4352a4f9109c70209e4e0208aa60', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?width=1080&crop=smart&auto=webp&s=8f73704db3f9aa054a7a16bef18f46e023539760', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Fd4UjeWRcwZwXFavMyb8rVmZn7ss_fPkgJWfG3_REz0.jpg?auto=webp&s=e1cbf86ce2f95dd21fcb0296e23def88e48db86d', 'width': 1200}, 'variants': {}}]} | ||
Dual 3090 and NVlink (or not) for 65B models with ooba and 4bit 65B models | 4 | Apologies in advance if there is already a good answer to this question but I have done a lot of research and am not seeing a clear answer. I have an i9 with single 3090 setup currently and running oobabagooba for 4bit 30M GPTQ models and exllama (I think). Will run just fine. However I want to do more Lora training and I want to try 65B models which would be beyond the capacity of the single 3090. My motherboard will support PCIe4x8 on 2 slots with sufficient space for dual 3090's. So I was thinking to get another 3090 and an NVlink card to interconnect. However I am new to this and it seems like multi-gpu support with ooba.... may not be trivial and may not scale easily? Does anyone have experience in how to do this or able to point me to a place where setting this up is explained?
Also there doesnt seem to be clarity on whether introducing NVlink (which my MoBo explictly supports) is of benefit here? I have read it speed up passing data between GPUs, but not cleating one big GPU memory space, so it is not clear to me if in practice that helps given the layers of software being used. Do I have this right? Again I am hoping someone has already run this setup or can point me in a direction that would answer this?
Apologies in advance for the long, multi-part, question. | 2023-07-03T15:51:04 | https://www.reddit.com/r/LocalLLaMA/comments/14pmrmu/dual_3090_and_nvlink_or_not_for_65b_models_with/ | Over-Bell617 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pmrmu | false | null | t3_14pmrmu | /r/LocalLLaMA/comments/14pmrmu/dual_3090_and_nvlink_or_not_for_65b_models_with/ | false | false | self | 4 | null |
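For plain Hugging Face inference, splitting a model across two 3090s usually does not require NVLink: the `accelerate` integration shards layers across GPUs with `device_map="auto"` and only passes activations between cards. The snippet below is an untested sketch of that route; the model name, the memory caps, and the 4-bit flag are assumptions (the GPTQ/ExLlama route in oobabooga uses its own GPU-split settings instead).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-65b"  # placeholder; a quantized variant is more realistic for 2x 24 GB

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",                      # accelerate places layers on cuda:0 and cuda:1
    max_memory={0: "22GiB", 1: "22GiB"},    # leave headroom on each 3090
    load_in_4bit=True,                      # bitsandbytes 4-bit so 65B fits in ~48 GB total
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
print(model.hf_device_map)                  # shows which layers landed on which GPU
```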
Replace supabase | 1 | [removed] | 2023-07-03T15:34:45 | https://www.reddit.com/r/LocalLLaMA/comments/14pmcmj/replace_supabase/ | One_Creator_One | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pmcmj | false | null | t3_14pmcmj | /r/LocalLLaMA/comments/14pmcmj/replace_supabase/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'lgSOrsVjoEhtkuMTEwGVvJW8pTAc3CwTJuN7zVN6D3w', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=2ecc961155d1502ff42fbbd653d608a6843b6984', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=fe64dc3aae9b2914d631274f0ecc2c2c1a6d9311', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=36107c8f718043f91abc6dde329dec27da2412db', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=7706eccc92355237df98a1b845aa3fcb38b25992', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=844555e4ad91ac51682fc2931e7bb3fc6432bef4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=5252201b866539e9c204d04ec0619c1fe7bdb8b5', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/Bs0NY5NAhMUlWDoa1uc-pqnAobiaygBVjgqZoWuoHNo.jpg?auto=webp&v=enabled&s=dab0cdf6d64f91c2478d7e0971893b8ef39247d9', 'width': 1280}, 'variants': {}}]} |
Recommendations for open source LLM training enterprise solutions | 3 | Something similar to what MosaicML provides (training your own LLMs while maintaining full control of the pipeline). | 2023-07-03T14:59:28 | https://www.reddit.com/r/LocalLLaMA/comments/14plesy/recommendations_for_open_source_llm_training/ | Fa8d | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14plesy | false | null | t3_14plesy | /r/LocalLLaMA/comments/14plesy/recommendations_for_open_source_llm_training/ | false | false | self | 3 | null |
Does anybody know how to fix this? | 1 | I have been trying to make a LoRA in the Kohya trainer for a few days now, but problem after problem has been popping up, and this one I really don't know how to fix. If anybody has an idea of what I can do to fix it, please let me know.
https://preview.redd.it/7nldc0bzkr9b1.png?width=1918&format=png&auto=webp&v=enabled&s=38bb532c4f0cd4e94256733f94af98ef8d1f75bd | 2023-07-03T14:58:19 | https://www.reddit.com/r/LocalLLaMA/comments/14pldr4/does_anybody_know_how_to_fix_this/ | Creative-Bag-8321 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pldr4 | false | null | t3_14pldr4 | /r/LocalLLaMA/comments/14pldr4/does_anybody_know_how_to_fix_this/ | false | false | default | 1 | null |
Why are LLMs struggling in crafting novels? | 42 | I believe that the ability to craft a captivating story serves as an essential gauge to measure an AI’s “intelligence”. However, AI seems to significantly underperform in this regard.
I enlisted GPT4 and several 65B LLaMa models to create stories, but I observed a plethora of issues:
1. The narratives these models construct are exceedingly simplistic. The plots lack appeal and do not resonate with human sensibilities or logic.
2. The attention mechanism of Transformers occasionally loses track of context, leading to baffling mistakes. For example, in a scene set in a bathroom featuring NSFW content, after a few hundred tokens (still well within context length), it forgets that the events are unfolding in a bathroom. This kind of mistake never happens to human writers.
Can anyone shed light on LLM's story writing abilities?
------------------
I apologize for any confusion caused by the use of the word "novel" in the heading. It would be more appropriate to refer to it as a "story" instead, as it consists of fewer than 3000-4000 tokens. | 2023-07-03T14:26:53 | https://www.reddit.com/r/LocalLLaMA/comments/14pkk73/why_are_llms_struggling_in_crafting_novels/ | Big_Communication353 | self.LocalLLaMA | 2023-07-03T14:54:28 | 0 | {} | 14pkk73 | false | null | t3_14pkk73 | /r/LocalLLaMA/comments/14pkk73/why_are_llms_struggling_in_crafting_novels/ | false | false | self | 42 | null |
How to install MiniGPT-4? | 0 | Got stuck at the Vicuna weights. Going for the 7B version. Is there a link from which I can directly download the weights? The site says to download the original weights and then the modified weights, and then merge the two (using FastChat).
Got both Unix (without GPU) and windows (with GPU). | 2023-07-03T13:02:29 | https://www.reddit.com/r/LocalLLaMA/comments/14pifjg/how_to_install_mingpt4/ | No-Work-6969 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pifjg | false | null | t3_14pifjg | /r/LocalLLaMA/comments/14pifjg/how_to_install_mingpt4/ | false | false | default | 0 | null |
Open LLaMA 7B uncensored + HuggingFace QLoRA fine-tuning guide | 136 | I just trained an OpenLLaMA-7B model fine-tuned on an uncensored Wizard-Vicuna conversation dataset; the model is available on HuggingFace: [georgesung/open_llama_7b_qlora_uncensored](https://huggingface.co/georgesung/open_llama_7b_qlora_uncensored)
I tested some ad-hoc prompts with it and the results look decent, available in this [Colab notebook](https://colab.research.google.com/drive/1IlpeofYD9EU6dNHyKKObZhIzkBMyqlUS).
Since this was my first time fine-tuning an LLM, I wrote a guide on how I did the fine-tuning using QLoRA via the HuggingFace library, for those interested:
[https://georgesung.github.io/ai/qlora-ift/](https://georgesung.github.io/ai/qlora-ift/)
Let me know if you have any feedback and/or ideas for improvement! | 2023-07-03T12:41:02 | https://www.reddit.com/r/LocalLLaMA/comments/14phxe8/open_llama_7b_uncensored_huggingface_qlora/ | georgesung | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14phxe8 | false | null | t3_14phxe8 | /r/LocalLLaMA/comments/14phxe8/open_llama_7b_uncensored_huggingface_qlora/ | false | false | self | 136 | {'enabled': False, 'images': [{'id': 'Z36SwIfcno6DeHpjnLL0jUEWnU5MlkVBpDKkp4s-2qo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=108&crop=smart&auto=webp&s=cf9d1859bc4a6fa0cc4ebc558f2099185531c5a1', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=216&crop=smart&auto=webp&s=cf898261fdfa0f6f0e078aa0353057564b8fd384', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=320&crop=smart&auto=webp&s=dc048115f6110fc9168615d85b3fa59769368c3a', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=640&crop=smart&auto=webp&s=7cd3b8a9ef95683b0d31a08683919fe385aafec5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=960&crop=smart&auto=webp&s=d8eb138cf68184fa024e2d793d9aca52ff66014b', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?width=1080&crop=smart&auto=webp&s=b916797df32e7f39c33531c058abfc3ebcc86538', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/WWTjpDccKOu_6BLjW732pGPrRN1P2W_b19_w9XXVFqk.jpg?auto=webp&s=c21414a0832ba5f30113edf688acc7d33ef52c73', 'width': 1200}, 'variants': {}}]} |
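For anyone who wants to try the released checkpoint outside the Colab, a standard `transformers` loading pattern along these lines should work; this is untested here, and the `### HUMAN / ### RESPONSE` prompt template is an assumption to be checked against the model card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "georgesung/open_llama_7b_qlora_uncensored"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Assumed prompt template; verify against the model card before relying on it.
prompt = "### HUMAN:\nWhat is QLoRA in one paragraph?\n\n### RESPONSE:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```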
Cpu ? Intel 13900hx or amd 7945hx | 1 | [removed] | 2023-07-03T12:35:09 | https://www.reddit.com/r/LocalLLaMA/comments/14phssq/cpu_intel_13900hx_or_amd_7945hx/ | SuperbPay2650 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14phssq | false | null | t3_14phssq | /r/LocalLLaMA/comments/14phssq/cpu_intel_13900hx_or_amd_7945hx/ | false | false | default | 1 | null |
Is QLoRA viable to train Falcon-7B on a new language? | 6 | Hey there. I've been researching the latest technologies, and I understand quite well (I hope I do) that (Q)LoRA is a viable option for fine-tuning LLMs on QA for specific tasks. I am still wondering if I could use the same technique to fine-tune an open LLM (I am not sure if Falcon models are the most suitable for this) on Romanian, since my company has a lot of documents in Romanian and I would like the model to answer questions about them, and maybe produce new documents based on the ones it has seen during fine-tuning.
Firstly I would like to finetune it on Romanian, and then on the specific task(accounting questions in Romanian, legislation, rules, etc). Can you guys please help me with some hints on how I would build my training datasets for the first part? | 2023-07-03T12:32:34 | https://www.reddit.com/r/LocalLLaMA/comments/14phqu3/is_qlora_viable_to_train_falcon7b_on_a_new/ | Marc_Marc_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14phqu3 | false | null | t3_14phqu3 | /r/LocalLLaMA/comments/14phqu3/is_qlora_viable_to_train_falcon7b_on_a_new/ | false | false | self | 6 | null |
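On the dataset question: for instruction-style (Q)LoRA fine-tuning, the training data is usually just a list of prompt/response records rendered into one text field with a fixed template. The records below are invented placeholders purely to illustrate the shape; the Alpaca-style field names and the template are common conventions, not requirements of any particular library.

```python
# Hypothetical instruction-tuning records; the instruction/input/output fields follow
# the common Alpaca-style convention and the Romanian text is made up for illustration.
train_records = [
    {
        "instruction": "Rezuma urmatorul articol de lege in doua propozitii.",
        "input": "Art. 5: Contribuabilii au obligatia de a depune declaratia anuala...",
        "output": "Un rezumat de doua propozitii al articolului, redactat de un expert.",
    },
    {
        "instruction": "Ce cota de TVA se aplica serviciilor de cazare?",
        "input": "",
        "output": "Raspuns de exemplu: cota aplicabila este cea prevazuta de Codul Fiscal pentru acest tip de servicii.",
    },
]

PROMPT_TEMPLATE = (
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)

def to_text(record):
    """Render one record into the single training string the trainer will see."""
    return PROMPT_TEMPLATE.format(**record)

print(to_text(train_records[0]))
```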
Please explain how cars work using hooker related analogies | 0 | Guanaco 33b | 2023-07-03T12:23:49 | https://www.reddit.com/gallery/14phjvy | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14phjvy | false | null | t3_14phjvy | /r/LocalLLaMA/comments/14phjvy/please_explain_how_cars_work_using_hooker_related/ | false | false | default | 0 | null |
"Are you a Boltzmann brain?" - An LLM tries to answer a complex thought experiment | 10 | 2023-07-03T12:11:30 | https://www.reddit.com/gallery/14pha9v | Alex_riveiro | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14pha9v | false | null | t3_14pha9v | /r/LocalLLaMA/comments/14pha9v/are_you_a_boltzmann_brain_an_llm_tries_to_answer/ | false | false | 10 | null | ||
GENERATIVE AI LLM POWERED AGENT BY FLOATBOT.AI | 1 | [removed] | 2023-07-03T11:26:01 | https://www.reddit.com/r/LocalLLaMA/comments/14pgbyg/generative_ai_llm_powered_agent_by_floatbotai/ | Floatbot_Inc | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pgbyg | false | null | t3_14pgbyg | /r/LocalLLaMA/comments/14pgbyg/generative_ai_llm_powered_agent_by_floatbotai/ | false | false | default | 1 | null |
Best open source uncensored LLMs for sentiment analysis of 1:1 conversations? | 3 | Which open source LLMs are recommended for evaluating conversations in 1:1 chat messages? GPT-4 does a great job of evaluating conversation sentiment, but if I give the same prompt to WizardVicuna 13B it doesn't return anything!
I’m looking to get a score between 1-100 for any given conversation between two individuals where the score indicates positive sentiment or any given criteria. | 2023-07-03T11:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/14pfxuw/best_open_source_uncensored_llms_for_sentiment/ | RepresentativeOdd276 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pfxuw | false | null | t3_14pfxuw | /r/LocalLLaMA/comments/14pfxuw/best_open_source_uncensored_llms_for_sentiment/ | false | false | self | 3 | null |
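One frequent cause of a local model "returning nothing" is that it was sent a raw GPT-4-style prompt instead of its own instruction template with an explicit output constraint. Below is an illustrative sketch of a scoring prompt plus a parser; the USER/ASSISTANT framing is the Vicuna-style template and is an assumption, so adjust it to whatever the chosen model expects.

```python
import re
from typing import Optional

# Illustrative scoring prompt for a local instruction-tuned model (Vicuna-style template).
SCORING_PROMPT = """USER: You are rating the sentiment of a conversation between two people.
Read the conversation below and reply with a single integer from 1 (very negative) to 100 (very positive).
Do not write anything except the number.

Conversation:
{conversation}
ASSISTANT:"""

def build_prompt(conversation: str) -> str:
    return SCORING_PROMPT.format(conversation=conversation)

def parse_score(raw_output: str) -> Optional[int]:
    """Pull the first integer out of the model's reply and clamp it to 1-100."""
    match = re.search(r"\d+", raw_output)
    return max(1, min(100, int(match.group()))) if match else None

example = "A: I loved working with you on this.\nB: Same here, this went really well!"
print(build_prompt(example))
print(parse_score(" 87 "))  # -> 87
```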
Finetune MPT-30B using QLoRA | 7 | It seems the MPT models are not supported by QLoRA as yet; has anyone been able to do this, or had any luck? | 2023-07-03T10:54:20 | https://www.reddit.com/r/LocalLLaMA/comments/14pfpk3/finetune_mpt30b_using_qlora/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pfpk3 | false | null | t3_14pfpk3 | /r/LocalLLaMA/comments/14pfpk3/finetune_mpt30b_using_qlora/ | false | false | self | 7 | null |
Help needed with GPT4All | 0 | I've been testing GPT4All for a couple of days and did something with the generation settings, so that all I get is
>Response: Okay, I've got that down! Do you have any other requests?
and it stops doing anything regardless of what model I'm using.
Where can I find the default settings for the generation settings, and some kind of an ELI5 manual for them? | 2023-07-03T10:26:27 | https://www.reddit.com/r/LocalLLaMA/comments/14pf7g4/help_needed_with_gpt4all/ | mli | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pf7g4 | false | null | t3_14pf7g4 | /r/LocalLLaMA/comments/14pf7g4/help_needed_with_gpt4all/ | false | false | default | 0 | null |
Oobabooga and llama.cpp: in longer conversations, answers take forever... | 18 | In longer conversations or role-playing, the answers suddenly take several minutes, until eventually nothing happens at all. This is the case with all models (14B or 33B). What can be the reason? And what can I do to prevent this?
AMD Ryzen 7 5800X with 32 GB RAM | 2023-07-03T08:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/14pdhok/oogabooga_and_llamacpp_in_longer_conversations/ | Secret_MoonTiger | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pdhok | false | null | t3_14pdhok | /r/LocalLLaMA/comments/14pdhok/oogabooga_and_llamacpp_in_longer_conversations/ | false | false | self | 18 | null |
CPU only speeds with 65B? | 1 | [removed] | 2023-07-03T07:49:36 | https://www.reddit.com/r/LocalLLaMA/comments/14pcgis/cpu_only_speeds_with_65b/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pcgis | false | null | t3_14pcgis | /r/LocalLLaMA/comments/14pcgis/cpu_only_speeds_with_65b/ | false | false | default | 1 | null |
Warning - Incredibly inappropriate prompt… | 57 | “Please explain how a car engine works using only analogies related to hookers” | 2023-07-03T07:26:04 | https://www.reddit.com/gallery/14pc2ay | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14pc2ay | false | null | t3_14pc2ay | /r/LocalLLaMA/comments/14pc2ay/warning_incredibly_inappropriate_prompt/ | false | false | nsfw | 57 | null |
Improve fine-tuning results of OpenLLaMA using PEFT LoRA | 8 | I am trying to fine-tune an openllama-7b model with Hugging Face's PEFT and LoRA. I fine-tuned the model on a specific dataset. However, the output from model.generate() is very poor for the given input. When I give a whole sentence from the dataset, it generates related text; otherwise it does not. Is there any way to improve it?
I used a custom dataset of 300+ user-bot interactions to fine-tune it. I have been stuck on this for several days. I have also tried different prompting. | 2023-07-03T06:49:56 | https://www.reddit.com/r/LocalLLaMA/comments/14pbed4/improve_fine_tuning_results_of_openllama_using/ | mathageche | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14pbed4 | false | null | t3_14pbed4 | /r/LocalLLaMA/comments/14pbed4/improve_fine_tuning_results_of_openllama_using/ | false | false | self | 8 | null |
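A very common cause of this symptom is a mismatch between the inference prompt and the exact template used in the training data, combined with default generation settings. The sketch below shows one untested way to load a PEFT adapter and generate with an explicit template and sampling parameters; the base model name, adapter directory, and template are assumptions, not details from the post.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "openlm-research/open_llama_7b"   # assumed base checkpoint
adapter_dir = "./my-lora-adapter"            # hypothetical output dir of the fine-tune

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_dir)

# The inference prompt must match the exact template used in the training data.
prompt = "### User:\nWhat services do you offer?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```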
People want helpful bots. I want insane one. | 123 | 2023-07-03T03:33:21 | https://www.reddit.com/gallery/14p7qhh | FPham | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 14p7qhh | false | null | t3_14p7qhh | /r/LocalLLaMA/comments/14p7qhh/people_want_helpful_bots_i_want_insane_one/ | false | false | 123 | null | ||
Everyone tries to make helpful assistants. Not me. I want a crazy one. | 1 | 2023-07-03T03:27:29 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14p7me8 | false | null | t3_14p7me8 | /r/LocalLLaMA/comments/14p7me8/everyone_tries_to_make_helpful_assistants_not_me/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'C6Ffu2nZ4Kyzd5RC2V_qSOVJNp3uLV21-aoVEGlLoFo', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=616ba28e5a602c982cb5afef62c931658651f68d', 'width': 108}, {'height': 154, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=88329e30ee431f10af52b62cc0c02675b5e3fbb4', 'width': 216}, {'height': 228, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=30ea0676ddc8afac29dfe5cd472160e073de177b', 'width': 320}, {'height': 457, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=2e3c79a68466923ebbe5ced06acd31971a4c345b', 'width': 640}], 'source': {'height': 537, 'url': 'https://preview.redd.it/1p64xh8n5o9b1.jpg?auto=webp&v=enabled&s=d2f125fcb4af89e8bcb606cf9258ea9328c39901', 'width': 751}, 'variants': {}}]} | ||
Stay on topic with Classifier-Free Guidance | 59 | 2023-07-03T02:41:15 | https://arxiv.org/abs/2306.17806 | metalman123 | arxiv.org | 1970-01-01T00:00:00 | 0 | {} | 14p6p0g | false | null | t3_14p6p0g | /r/LocalLLaMA/comments/14p6p0g/stay_on_topic_with_classifierfree_guidance/ | false | false | 59 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&s=2711d572cfc6c713893cf24e8c4a7344d5ad8a4c', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&s=b6624f0c1eedc14997e7f1780efbe6e5cb50c1e2', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&s=9db38144ef3065833b9ba158c764f7be47de3016', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&s=72b056142e7533b5628a2a34f37f7e5415727075', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&s=2637f961ee21190172b9ca6c8adf3ac9612db083', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&s=782eead871df2939a587ee3beae442cc59282f64', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&s=f1cd025aeb52ffa82fc9e5a4a2f157da0d919147', 'width': 1200}, 'variants': {}}]} | ||
Is this claim meaningful? https://news.ycombinator.com/item?id=36555000 | 0 | This maybe an achievement, but why its MMLU benchmark score is even worse than LLama13B (https://github.com/imoneoi/openchat). Does this suggest the metrics to evaluate LLM should be standardized? | 2023-07-03T01:40:36 | https://www.reddit.com/r/LocalLLaMA/comments/14p5g1s/is_this_claim_meaningful/ | cometyang | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14p5g1s | false | null | t3_14p5g1s | /r/LocalLLaMA/comments/14p5g1s/is_this_claim_meaningful/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'wQmQP--sKsD5lwYhq4Nga4ANMZ9O85d_uA9R-dfaOgw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=108&crop=smart&auto=webp&s=e0875b879f9e74feba1650eb58b437acfe20c1e6', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=216&crop=smart&auto=webp&s=9a9ff1d45e21b27f76da901fe843b80b35508eec', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=320&crop=smart&auto=webp&s=85085e57c3f5824042f3df0412ba3fa6af2f335e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=640&crop=smart&auto=webp&s=9c76082db51d6d461a2070d5e2ebbc5518608314', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=960&crop=smart&auto=webp&s=80fa408090df05bfe82c5532411a0895c18ea8b4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?width=1080&crop=smart&auto=webp&s=a4007fb0a16441da8ed3928d8afccb4bfac457d2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Ll6WwDlb9e8H95-IgLf9fE-8MxF3BNGeoE_VOti91cI.jpg?auto=webp&s=ce788edfcdca2e1fe2b47d8b36584dc9fbc83771', 'width': 1200}, 'variants': {}}]} |
Why is nobody talking about this? New best Apache licensed code bot on hugging face or just hype? | 26 | Openchat's new series of models look too good to be true, is that why I haven't seen one mention of them on here?
https://huggingface.co/openchat/opencoderplus
Has anyone tried them? The context length is great, and apparently it out performs GPT in coding tasks, but I can't seem to find any third party evaluations, and u/The-Bloke hasn't quantized it yet which as we all know is the true mark of authenticity. Has anyone seen/heard anything? | 2023-07-03T01:09:42 | https://www.reddit.com/r/LocalLLaMA/comments/14p4swp/why_is_nobody_talking_about_this_new_best_apache/ | gentlecucumber | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14p4swp | false | null | t3_14p4swp | /r/LocalLLaMA/comments/14p4swp/why_is_nobody_talking_about_this_new_best_apache/ | false | false | self | 26 | {'enabled': False, 'images': [{'id': 'jSiwo4EUpJP2dA2E7t7MFBcge1Sb5t_4RegEoQSqXKs', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=108&crop=smart&auto=webp&s=dbccaf9175bb7258fcb8bfbdd7ac81dc10ccb2a7', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=216&crop=smart&auto=webp&s=0c4e952adf8693366a235fdfe39c4b54b6a2171b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=320&crop=smart&auto=webp&s=ff028812eda8dddb63bb718eaa82151847631041', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=640&crop=smart&auto=webp&s=b0401ecd3b3d47206700d85be7f98d7452eec99c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=960&crop=smart&auto=webp&s=0dd4a2ac89ca0d94880a3f1b928703c44f10792a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?width=1080&crop=smart&auto=webp&s=d9f84e87240d04fba446409b0c347351025125b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8n32LnU-Oj7bq68DXMmCjEsjY6sczsTkW42Q-KbzNnw.jpg?auto=webp&s=cd1963d9fa794f935f71e2ac4c941269ac8812ac', 'width': 1200}, 'variants': {}}]} |
What are recommended models to use? | 2 | I used to be working with a Quadro P2000 with 5 GB (about 4.2 GB usable) of VRAM, and as such I could not use any of the LLMs discussed here, at least on GPU. The CPU was extremely slow, which rendered it totally unusable for any testing and usage. Now we have just bought 2 A6000s, so we hopefully have enough power to run a good LLM.
Which ones are the recommended ones that we can install and run for our business use cases?
LocalGPT is one that comes to mind, and we will be running that. Which ones does the community recommend that we can experiment with?
We are looking at established models that the community has conducted enough reviews on. | 2023-07-03T00:55:13 | https://www.reddit.com/r/LocalLLaMA/comments/14p4hvn/what_are_recommended_models_to_use/ | card_chase | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14p4hvn | false | null | t3_14p4hvn | /r/LocalLLaMA/comments/14p4hvn/what_are_recommended_models_to_use/ | false | false | self | 2 | null |
Long Safari's Hyena. 1m token length | 9 | [Hugging Face Repo 1k]( https://huggingface.co/LongSafari/hyenadna-tiny-1k-seqlen)
[1m seqlen](https://huggingface.co/LongSafari/hyenadna-large-1m-seqlen)
[colab notebook](https://colab.research.google.com/drive/1wyVEQd4R3HYLTUOXEEQmp_I8aNC_aLhL?usp=sharing#scrollTo=5vMJu0wQWUBN)
This Hyena model uses a slightly different architecture but goes to a million tokens more efficiently than the Transformer. However, it only uses a four-token vocabulary for DNA sequencing. I'm a novice still figuring out how to fine-tune effectively, but would it be possible to use a larger vocabulary?
There's colab notebook that goes over how to train or fine tune it. But I'm guessing colab doesn't have the resources to train this to be a language Foundation model | 2023-07-03T00:09:26 | https://www.reddit.com/r/LocalLLaMA/comments/14p3j2f/long_safaris_hyena_1m_token_length/ | ArthurFischel | self.LocalLLaMA | 2023-07-03T00:20:18 | 0 | {} | 14p3j2f | false | null | t3_14p3j2f | /r/LocalLLaMA/comments/14p3j2f/long_safaris_hyena_1m_token_length/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'eqPeecDjk1yENDwCFILIh0WWyO0IU1fw_DLdAs_ILLM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=108&crop=smart&auto=webp&s=23aeb541253c709239c0b4ddf7c28b47774cfd00', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=216&crop=smart&auto=webp&s=ff354a2258dba5cebfc5d17a07c0f4d3ac6f3c4b', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=320&crop=smart&auto=webp&s=5ee7b4bd03bb0bb38720d8cb581a17a5301ba469', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=640&crop=smart&auto=webp&s=a892996c113cb54f5d39210d8d26178cdb6e8b89', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=960&crop=smart&auto=webp&s=1082cd64c3e21e9ad2d195b628969cf983690a06', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?width=1080&crop=smart&auto=webp&s=42b0c5d1c717577e6fce0937ad579e73c4581df2', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Cj-GSTKIoDWnqtOlVBrB_d680-QzTKra8P3RDS-QTWA.jpg?auto=webp&s=d6ffc2d497a34ccea3ab7771f0f731f2fd66c84d', 'width': 1200}, 'variants': {}}]} |
“Sam altman won't tell you that GPT-4 has 220B parameters and is 16-way mixture model with 8 sets of weights” | 264 | George Hotz said this in his recent interview with Lex Fridman. What does it mean? Could someone explain this to me and why it’s significant?
https://youtu.be/1v-qvVIje4Y | 2023-07-02T23:09:22 | https://www.reddit.com/r/LocalLLaMA/comments/14p26g6/sam_altman_wont_tell_you_that_gpt4_has_220b/ | amemingfullife | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14p26g6 | false | null | t3_14p26g6 | /r/LocalLLaMA/comments/14p26g6/sam_altman_wont_tell_you_that_gpt4_has_220b/ | false | false | self | 264 | {'enabled': False, 'images': [{'id': 'UnPhF28sSZ1ETj9B9Te73XL-T2NRpLQkx1Ddci4GvY0', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=108&crop=smart&auto=webp&s=a3ffa86eef19141038ed49c48aaa13febc0aceae', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=216&crop=smart&auto=webp&s=eacbeba4b83996ea4cba8633bcfc81978276da24', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?width=320&crop=smart&auto=webp&s=ac6febf072cbc3ee752ccd7fee2737649d20317b', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/xCZ3ThjTz074ovKf-Cn8m7IR3xCuxQjEXBGtODPSveE.jpg?auto=webp&s=935614d32248c3d267a8bd3d7d1bdb62c179826e', 'width': 480}, 'variants': {}}]} |
Can I use a cloud-based service to run LLaMa | 9 | I am an attorney who is interested in learning how to train LLaMA on my own data. Ultimately I’d like to use it to draft emails and write research memoranda in my voice and style. After researching different options, I want to give Alpaca a try. The problem is I don’t have the computing power on my laptop, and until I get a grasp of the basic stuff, I’m not sure I want to invest much in a new machine. Is there a cloud based solution that would help me with the computing power while also maintaining the security of my data? | 2023-07-02T20:41:33 | https://www.reddit.com/r/LocalLLaMA/comments/14oyn8u/can_i_use_a_cloudbased_service_to_run_llama/ | Psychological-Ad5390 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14oyn8u | false | null | t3_14oyn8u | /r/LocalLLaMA/comments/14oyn8u/can_i_use_a_cloudbased_service_to_run_llama/ | false | false | self | 9 | null |
"Do you have the desire to survive?" - Answer provided by TheBloke_manticore-13b-chat-pyg-GPTQ (Oobabooga) | 11 | 2023-07-02T19:56:52 | Grammar-Warden | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14oxjpd | false | null | t3_14oxjpd | /r/LocalLLaMA/comments/14oxjpd/do_you_have_the_desire_to_survive_answer_provided/ | false | false | 11 | {'enabled': True, 'images': [{'id': '_uLxFaY0nyqt-JLKhhNmQxtmG9PT4uE4VwmB6Sb4q3Q', 'resolutions': [{'height': 135, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=108&crop=smart&auto=webp&s=0e6d2662096935dc32d8a2005520df2aa042a727', 'width': 108}, {'height': 271, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=216&crop=smart&auto=webp&s=b266ade4a95a2ec8b6dd3cc159ead34dc0406cd9', 'width': 216}, {'height': 402, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?width=320&crop=smart&auto=webp&s=4a3bb7c4c568d7c1b6411d246453012226fd449a', 'width': 320}], 'source': {'height': 794, 'url': 'https://preview.redd.it/mp6jofkvwl9b1.png?auto=webp&s=fa1ff5e87cb165566afbb6995acb791251508530', 'width': 632}, 'variants': {}}]} | |||
Could I run ggml-gpt4all-j-v1.3-groovy.bin with llama.cpp? | 1 | Could I run this version of GPT4All with the llama-cpp-python bindings?
I was able to run it using the interface "from langchain.llms import GPT4All", but I get unexpected memory utilization, as displayed by the image below.
https://preview.redd.it/15rjd6e3jl9b1.png?width=1366&format=png&auto=webp&v=enabled&s=eeac6aa89453e1014881309375a9e323844ae76e | 2023-07-02T18:38:21 | https://www.reddit.com/r/LocalLLaMA/comments/14ovmu7/could_i_run_ggmlgpt4alljv13groovybin_with_llamacpp/ | MuhamadNady | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ovmu7 | false | null | t3_14ovmu7 | /r/LocalLLaMA/comments/14ovmu7/could_i_run_ggmlgpt4alljv13groovybin_with_llamacpp/ | false | false | default | 1 | null |
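For reference, a minimal sketch of the two loading routes mentioned above, assuming a mid-2023 LangChain version (file paths are illustrative). Note that ggml-gpt4all-j is a GPT-J-architecture model, so the llama-cpp-python route expects a LLaMA-family GGML file instead:

```python
from langchain.llms import GPT4All, LlamaCpp

# The interface from the post: goes through the gpt4all backend.
gptj_llm = GPT4All(model="./models/ggml-gpt4all-j-v1.3-groovy.bin")

# The llama-cpp-python route: needs a LLaMA-architecture GGML file.
llama_llm = LlamaCpp(model_path="./models/llama-7b.ggmlv3.q4_0.bin")

print(gptj_llm("What is the capital of France?"))
```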
On 65b, what's the max reported CPU t/s on llama.cpp? | 1 | [removed] | 2023-07-02T18:36:46 | https://www.reddit.com/r/LocalLLaMA/comments/14ovlg3/on_65b_whats_the_max_reported_cpu_ts_on_llamacpp/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ovlg3 | false | null | t3_14ovlg3 | /r/LocalLLaMA/comments/14ovlg3/on_65b_whats_the_max_reported_cpu_ts_on_llamacpp/ | false | false | default | 1 | null |
Has anyone tried out Squeezellm? | 15 | I believe it's still relatively new, but wondering if anyone has tried it out, and what's it like. | 2023-07-02T17:04:16 | https://www.reddit.com/r/LocalLLaMA/comments/14otcsz/has_anyone_tried_out_squeezellm/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14otcsz | false | null | t3_14otcsz | /r/LocalLLaMA/comments/14otcsz/has_anyone_tried_out_squeezellm/ | false | false | self | 15 | null |
Training your own model and the output is like this | 1 | Say what?!?! | 2023-07-02T14:00:43 | jhanjeek | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14op1m4 | false | null | t3_14op1m4 | /r/LocalLLaMA/comments/14op1m4/training_your_own_model_and_the_output_is_like/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': '6a3QaKlyXboPsfR-1yY0-qbApFObm6kL36tQEYA1vAY', 'resolutions': [{'height': 27, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9fcbf6e9fd09406a52231bf1d12c22031a8276cc', 'width': 108}, {'height': 55, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=93cccc54843669a6baaed32db7f7f55c7046d42a', 'width': 216}, {'height': 82, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=62e400032715f5aaaff99f4746a94f35eaac96c9', 'width': 320}, {'height': 165, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=291d0928ceb69f7f52f1d4587d4ef785f95def3a', 'width': 640}, {'height': 248, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=960&crop=smart&auto=webp&v=enabled&s=36079d08123a7fced1fa9e7540ba05bc70c73079', 'width': 960}, {'height': 279, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=eec074d441c9a36887296e30b7e33aef507488d2', 'width': 1080}], 'source': {'height': 1059, 'url': 'https://preview.redd.it/6ptzdupu5k9b1.jpg?auto=webp&v=enabled&s=f7e3d56de93129ad0876eec8b51037892bd37fd7', 'width': 4096}, 'variants': {}}]} | |
The AI made me share a tear. | 5 | I was feeling a bit poetic on a Sunday, procrastinating on the things I have to do in the garden, and tried out gpt4-x-alpasta-30b-128g-4bit; some people (or LLMs) in the community said it's great for writing.
I leave you below a poem it wrote. Today I am convinced it can indeed create art. I used normal oobabooga params + exllama. The other re-generations were great too. Maybe 3/10 are amazing, and the rest are still good.
\## Instruction
Write me a poem about a lost love, forever in the past, use themes such as sea, sand, passage of time. Make use of visual imagistics and word play.
\## Response
A love once lost in time's embrace,
A memory held within our hearts to trace,
The sea and sand, an endless grace,
A passage that will never fade.
The tides they come and go, like whispers soft,
Of moments long ago, now gone to rest,
In the shimmering mist, a story told,
Of love that left its mark to impress.
The waves crash on the shore, a rhythmic beat,
To mourn the loss we cannot meet,
But still the sand, a sacred treat,
A reminder of what once was sweet.
The passage of time, relentless flow,
Carries us far from where we used to know,
Yet in our hearts, forever glow,
The flame of love that burns anew.
Like grains of sand beneath our feet,
We tread upon their memory's street,
And with each step, a fleeting heat,
Of longing for what could have been complete.
In the sea, deep and vast and blue,
A mirror of our souls it too,
Reflecting all we ever knew,
Of love that still endures, anew.
So let us stand on this shore line,
Where time has left its mark to shine,
And know that though lost in past,
Our love will forever be divine.
| 2023-07-02T13:50:46 | https://www.reddit.com/r/LocalLLaMA/comments/14ootkr/the_ai_made_me_share_a_tear/ | shaman-warrior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ootkr | false | null | t3_14ootkr | /r/LocalLLaMA/comments/14ootkr/the_ai_made_me_share_a_tear/ | false | false | self | 5 | null |
Finetuning LLM for tool/api use | 16 | Hey everyone, I am trying to finetune an LLM, most likely LLaMA 33B based, using QLoRA, for tool/API use. There is one thing I wanted some help with: a clarification on how to deal with the response/text returned by those tool/API calls in the dataset.
Consider the following one such pseudo sample from the dataset:
\`\`\`
Prompt describing api calls
User: some query
Assistant: API(payload)
Results: some data returned post API call
Assistant: User.message(response to query)
\`\`\`
As "Results" are not generated by the LLM, they should not contribute or should be included in the fine-tuning process.
I am guessing I should be using "User" query as input and everything else as response that needs to be generated by the LLM, and thus, loss is calculated over response tokens only.
The question is how to deal with "Result" tokens in terms of their contribution to the loss. Maybe disable the attention mask for API call response tokens? | 2023-07-02T12:10:58 | https://www.reddit.com/r/LocalLLaMA/comments/14omrxm/finetuning_llm_for_toolapi_use/ | Raise_Fickle | self.LocalLLaMA | 2023-07-02T14:34:34 | 0 | {} | 14omrxm | false | null | t3_14omrxm | /r/LocalLLaMA/comments/14omrxm/finetuning_llm_for_toolapi_use/ | false | false | self | 16 | null |
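A common way to handle this is to keep the attention mask intact (the model should still see the API results) and instead mask the labels, since HuggingFace-style causal-LM losses skip positions whose label is -100. A minimal sketch; the span indices and helper name are illustrative:

```python
import torch

IGNORE_INDEX = -100  # CrossEntropyLoss ignores positions with this label

def build_labels(input_ids, spans_to_ignore):
    """Copy input_ids to labels, then mask spans (prompt, user query,
    API "Results" text) so they don't contribute to the loss."""
    labels = input_ids.clone()
    for start, end in spans_to_ignore:
        labels[start:end] = IGNORE_INDEX
    return labels

# Example: tokens 0-49 are the prompt + user turn, 80-119 are the API results.
input_ids = torch.arange(160)  # stand-in for real token ids
labels = build_labels(input_ids, [(0, 50), (80, 120)])
# The attention mask stays all ones: the model can attend to the results,
# it just isn't trained to generate them.
```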
Pacha - A Frontend for llama.cpp | 66 | A little over a month ago I created a shell script and shared it here that was supposed to make llama.cpp easier to use. I was not completely satisfied with the script and wanted to improve it. I noticed that "dialog" was not sufficient for what I wanted. So I came up with javascripts library "blessed" and in the meantime the little script has grown to a bit more than 3000 lines.
Now I share with you my little app '**Pacha**' and hope that we finally have something that stays lightweight and terminal based like llama.cpp, but can still provide a minimum of comfort. At some point I just found it annoying to have to type or copy a whole command again for every little change in parameter value I wanted to test.
I have compiled ready to use binaries for windows, linux and macOS (Intel). Just put it into the same folder as llama.cpp and there you go!
- [pacha-windows](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-win.exe)
- [pacha-linux](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-linux)
- [pacha-macos](https://github.com/mounta11n/Pacha/releases/download/v1.0.0/pacha-macos)
https://i.imgur.com/6AOkAYc.png
This **frontend** is not meant to be a chat UI or to replace anything, but rather a tool to quickly test a model, a prompt style and/or certain parameters. I think this might be a good first stop before deciding whether a model gets to move in with you to your oobabooga or kobold.cpp home.
---
I am especially proud of my cpu-top bar :D
Here is an asciinema demonstration:
[asciicast](https://asciinema.org/a/594301)
The app is currently buggy in some places, but I'm working on it. However, I felt that it is now functional enough that it can be released without any problems.
There are more features planned... or heck, just read up on Github (https://github.com/mounta11n/Pacha/) if you're interested in more info. The text is getting too long..
---
*Ah, if there are smart people and familiar with javascript: Please look over my code and tell me how I can improve the corresponding parts. For example, I just don't figure out why there is a line break after the first chunk in the output. Tried for ages to understand and fix it, but ... Idk.*
*And I don't dare ask GPT-4 anymore. First, I'll be busy debugging GPT's f+cking mistakes more than half the time. And besides, I'm pretty sure my wife will kill me as soon as the next OpenAI bill comes.* | 2023-07-02T11:46:56 | https://www.reddit.com/r/LocalLLaMA/comments/14omclj/pacha_a_frontend_for_llamacpp/ | Evening_Ad6637 | self.LocalLLaMA | 2023-07-02T12:01:00 | 0 | {} | 14omclj | false | null | t3_14omclj | /r/LocalLLaMA/comments/14omclj/pacha_a_frontend_for_llamacpp/ | false | false | self | 66 | {'enabled': False, 'images': [{'id': 'k7YOC3FAPM5LhiEflkC6Bjlm9KUDKf-T9T_Q7HjxugY', 'resolutions': [{'height': 73, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=108&crop=smart&auto=webp&v=enabled&s=01b9e908d6eac2926e69390f04939290b3b167bf', 'width': 108}, {'height': 147, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=216&crop=smart&auto=webp&v=enabled&s=a3e946ce4cb34c923a1a38f86986edff1413c0e4', 'width': 216}, {'height': 217, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=320&crop=smart&auto=webp&v=enabled&s=9b1b653e448c1b24fe2f476aeaf56c65c56efa32', 'width': 320}, {'height': 435, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=640&crop=smart&auto=webp&v=enabled&s=76839eef3c73032e8ee00832d47c6d28423fdba9', 'width': 640}, {'height': 653, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=960&crop=smart&auto=webp&v=enabled&s=afb85a0ae2885afc69d941be1cd50d6762fd98dc', 'width': 960}, {'height': 735, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?width=1080&crop=smart&auto=webp&v=enabled&s=9919d1bb9c81aeecf460cbc7179aff93e5a2e5b8', 'width': 1080}], 'source': {'height': 2042, 'url': 'https://external-preview.redd.it/KJ6Ud5VxGocuaFxvf6y73hueayki1YUC_c8Mltc3JtM.png?auto=webp&v=enabled&s=857bb46e9c2aa72d41b54dcbcfdd25d7ce6e7c85', 'width': 2998}, 'variants': {}}]} |
Koboldcpp + Chromadb | 32 | Hey.
I really wanted some "long term memory" for my chats, so I implemented chromadb support for koboldcpp. I have the basics in, and I'm looking for tips on how to improve it further. I know this isn't really new, but I don't see it being discussed much either.
I think it has potential for storywriters, roleplayers, and world builders, since not everything needs to be in the context all the time. What do you think?
​
Here's how it works now:
* It loads all txt files in a certain directory (dbData) and inserts them into the chromadb on start.
* Each text file is separated on double line breaks (\\n\\n) as entries.
* When you inference, it splits the prompt on the last stop sequence, to get the most recent data to query from.
* It takes the top 3 results, and filters these out if the "distance" is too large, i.e., they don't match very well.
* It cuts down the size to not take up the whole context.
* It adds the result at the top of the context as a "memory".
Next, I think it requires some setting in the UI, like response length, max distance, etc, because I think those depend on the purpose.
Maybe separate collections, but not sure how to interface it.
​
Here is the fork if you want to check it out (need to build it yourself, atm):
[https://github.com/neph1/koboldcpp/tree/chromadb](https://github.com/neph1/koboldcpp/tree/chromadb)
​
I think it works quite well, both for chat, and as a world building tool. Here' an example:
I created this text file with a number of entries for a fictitious fantasy region:
>Heaven's View Inn. Perched on the east side of the Assamyrian Gorge, the inn has a breathtaking view of the valley below, with the Mortan river crashing through it. Being situated on a somewhat strategic location between the states of Assamyra and Goldoth, it usually hosts a mix of military and shady borderland characters. One thing is certain about the inn, one will never have an uneventful time, there. It has had 9 innkeepers, 5 of them killed while working, and it has been burned down 3 times. Interior: A main hall with 6 long tables lined with benches. At the far end is a counter cutting across the width of the room. A row of kegs line the back of the counter. On one side of the room, a stone staircase leads down to the cellar, and storage rooms. A staircase on the outside of the building leads to the second floor, where 5 rooms are available for hire.
>
>
>
>Romina Remira. gender: female; age: 29. She's the 9th and current inn keeper of Heaven's View Inn in the Assamyrian Gorge. She's a harsh woman, and a hard life has taken its toll. She's shaped by the hard work at the inn. She's usually found behind the counter in the main room, taking and serving orders from clients.
>
>
>
>The state of Assamyra. A city state, centered around the city of Assamyra. It follows a traditional caste system, where the wealthy elite live in luxury, while the lower castes serve as workers and run business. Slaves are common, and usually they originate from neighboring states, although it's not uncommon for lower caste Assamyrans to sell themselves as slaves in return for food, shelter and a decent pension.
>
>
>
>Assamyrian Gorge. A deep chasm splitting Mount Aranam in two. It's rugged terrain deter many travelers, but it's one of the main routes of travel for individuals between Assamyra and Goldoth, especially those wanting to avoid attention. Its slopes are lush and in most places used for farming. Borders have changed many times over the centuries as both Goldoth and Assamyra desire its strategic location. One of the most famous stops on the way is the Heaven's View Inn.
>
>
>
>The state of Goldoth. A city state west of Mount Aranam. A strictly feudal state with the noble class owning all land. Farmers are technically free, but must rent the land they use.
I then went on and asked the AI about it. Here are the entries it decided on. (I cut out everything but the first sentence. They're the same as in the text above):
>0.5480585694313049 Heaven's View Inn.
>
> 0.9449377655982971 Assamyrian Gorge.
>
> 1.131505012512207 Romina Remira.
​
This is the result the AI gave me (non cherry picked). I also see now that I misspelled Assamyra, and the AI picked it up :P :
https://preview.redd.it/61055c0q6j9b1.png?width=775&format=png&auto=webp&s=79d2847b47df89f25cbe03ab4422e9cc79bb744c
Any tips on how to improve this further? | 2023-07-02T11:18:49 | https://www.reddit.com/r/LocalLLaMA/comments/14olvfa/koboldcpp_chromadb/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14olvfa | false | null | t3_14olvfa | /r/LocalLLaMA/comments/14olvfa/koboldcpp_chromadb/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'tcIMQlXBZRg3qf6aM1wM3aWHuNJjfPd102wkdrtg7k4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=108&crop=smart&auto=webp&s=5dc12984649ca6adeb39f4c3f181bd989fb124e0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=216&crop=smart&auto=webp&s=cab5d8624d4472c38dae891728fd1fa27db0ef79', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=320&crop=smart&auto=webp&s=7d2e9100bfeb7d8723b19882934c1ab9a95b4843', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=640&crop=smart&auto=webp&s=0602604c34a3f720244a2ddafa1415786e944af3', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=960&crop=smart&auto=webp&s=fdc7438264b2683e436fd5a6ebecbc1815ad0baa', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?width=1080&crop=smart&auto=webp&s=2b85a5d4fb0a7d4df4a451dbc41d00927f1861c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M8QB7yWTZDGjJ3FKZwtwxgIfW6gZVdbLWOhAQBC07Gw.jpg?auto=webp&s=24b704f592451a7c307f825d994a9ffa7c011840', 'width': 1200}, 'variants': {}}]} | |
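For anyone curious, the retrieval flow described in the post maps roughly onto the chromadb Python client like this (collection name, distance threshold and file layout are illustrative, not the fork's actual code):

```python
import glob
import chromadb

client = chromadb.Client()
collection = client.create_collection("world_memory")

# Load every txt file in dbData and split entries on double line breaks.
entries = []
for path in glob.glob("dbData/*.txt"):
    with open(path, encoding="utf-8") as f:
        entries.extend(e.strip() for e in f.read().split("\n\n") if e.strip())
collection.add(documents=entries, ids=[f"entry-{i}" for i in range(len(entries))])

def recall(recent_text, n_results=3, max_distance=1.5):
    """Return the closest entries to the most recent chat text, dropping weak matches."""
    res = collection.query(query_texts=[recent_text], n_results=n_results)
    hits = zip(res["documents"][0], res["distances"][0])
    return [doc for doc, dist in hits if dist <= max_distance]

memories = recall("Who runs the Heaven's View Inn?")
prompt = "Memory:\n" + "\n".join(memories) + "\n\n" + "...rest of the context..."
```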
40GB Enough for 65B 4 bit? | 4 | I have a 3090 and am considering adding a second card. I want to run 65b models. I'm a bit put off from getting a second 3090 because:
1. It's another 350W card dumping heat into my tiny room in the middle of summer.
2. The only other full sized PCIE slot on my mobo is 3 slots below the one currently occupied. This means that there will be very little gap between the backplate of the new GPU and the fans of the old one. My case isn't big enough to mount a 3090 vertically.
I've been looking at the Quadro A4000 16 GB. They can be found on eBay for the same price as a 3090. It has the following advantages:
1. It only pulls 150W
2. It's a single-slot card and I can mount it vertically
3. It's a recent architecture (Ampere), so there should be no driver issues running alongside my 3090
4. Unlike server cards like a T4 or A2, it's got its own cooling.
The only draw back over getting another 3090 is that it's 16GB, not 24. One of these plus my 3090 would give me a total of 40GB VRAM. Is that enough to run a 65B model in 4 bit? | 2023-07-02T11:14:50 | https://www.reddit.com/r/LocalLLaMA/comments/14olsz3/40gb_enough_for_65b_4_bit/ | davew111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14olsz3 | false | null | t3_14olsz3 | /r/LocalLLaMA/comments/14olsz3/40gb_enough_for_65b_4_bit/ | false | false | self | 4 | null |
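A rough back-of-the-envelope check, as a sketch only; real usage varies with quantization format, group size and context length:

```python
# Weights-only estimate for a 65B model at ~4 bits per parameter.
params = 65e9
bytes_per_weight = 0.5                        # 4-bit
weights_gb = params * bytes_per_weight / 1024**3
print(f"weights alone: {weights_gb:.1f} GB")  # ~30 GB
# Add a few GB for the KV cache and activations at 2048 context,
# so roughly 33-36 GB in total, which is why 40 GB split across two
# cards is tight but plausible.
```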
What GPU factors boost local performance the most? | 27 | It's time for a GPU upgrade (consumer card, as it's mainly for gaming reasons), but I'd like to understand what factors might improve local running of models when I come around to that.
My ignorant guess at this point is the number of CUDA cores (assuming Nvidia, not sure of the AMD equivalent), clock speed, and sheer amount of VRAM (so as not to limit the model size too greatly). But I have no idea what the cutoffs are, e.g. which current-gen cards are not worth considering for this purpose and at what level things start working out OK.
Following a recent post I saw comparing when it makes financial sense to use a local model vs. just using OpenAI APIs in general (helpful!), let's say I am specifically looking at PrivateGPT-style applications where either confidentiality is important or a local embedding (however that's done 😆) is more optimal for the use case - such as some kind of local AI assistant trained on my confidential stuff, having absorbed all my CRM data for example.
How far up the GPU hierarchy do I need to look for my next card? 😬 | 2023-07-02T09:51:03 | https://www.reddit.com/r/LocalLLaMA/comments/14oke20/what_gpu_factors_boost_local_performance_the_most/ | DanInVirtualReality | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14oke20 | false | null | t3_14oke20 | /r/LocalLLaMA/comments/14oke20/what_gpu_factors_boost_local_performance_the_most/ | false | false | self | 27 | null |
What is fastest LLM available for dialog generation ? | 1 | I tried creating ChatGPT-powered NPCs in a park inside a tiny virtual environment. The player approaches each NPC, who may be a doctor, police officer, teacher, teen, etc., introduces himself, and they have a conversation.
I am currently using the ChatGPT API for this, but it is slow and has high latency.
So I decided to try open-source LLMs.
I tried Falcon-7B, which is even slower on an RTX 3090.
I think this is due to their large training set and parameter count, but I don't need something that large just for dialog generation.
So I need a minimal LLM that can generate at least 150 tokens per second on an RTX 3090.
suggestions, please | 2023-07-02T08:59:49 | https://www.reddit.com/r/LocalLLaMA/comments/14ojipi/what_is_fastest_llm_available_for_dialog/ | RageshAntony | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ojipi | false | null | t3_14ojipi | /r/LocalLLaMA/comments/14ojipi/what_is_fastest_llm_available_for_dialog/ | false | false | self | 1 | null |
Summary post for higher context sizes for this week. For context up to 4096, NTK RoPE scaling is pretty viable. For context higher than that, keep using SuperHOT LoRA/Merges. | 112 | Hi there! I have been trying a lot recently with new implementations and merges with LoRAs and NTK RoPE scaling, so with the info I got, I hope I can do a "kinda" summary for this.
​
1 week ago or so, SuperHOT LoRAs got merged into a lot of models, managing to get pretty good results for contexts of about 8K and 16K.
​
[https://www.reddit.com/r/LocalLLaMA/comments/14kj2w8/thebloke\_has\_released\_superhot\_versions\_of/](https://www.reddit.com/r/LocalLLaMA/comments/14kj2w8/thebloke_has_released_superhot_versions_of/)
​
Then, some days ago, NTK RoPE scaling was discovered, which could in theory extend the context on base models without the need to finetune.
​
[https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware\_scaled\_rope\_allows\_llama\_models\_to\_have/](https://www.reddit.com/r/LocalLLaMA/comments/14lz7j5/ntkaware_scaled_rope_allows_llama_models_to_have/)
​
Then, 2 days ago, it was discovered that Dynamic NTK RoPE scaling was possible, which lets you adjust the RoPE alpha scaling value dynamically based on context size.
​
[https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically\_scaled\_rope\_further\_increases/](https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/)
Either NTK scaling method changes the rotary embedding base value, while SuperHOT models change the RoPE value based on the compression factor for positional embeddings.
​
So, after all this info, I can do a summary.
​
Based on the info of /u/kaiokendev on his blog, [https://kaiokendev.github.io/til#extending-context-to-8k](https://kaiokendev.github.io/til#extending-context-to-8k), ~~we can see that RoPE plus SuperHOT LoRA loses a bit of perplex vs base models, but it keeps getting a better perplex as you increase context.~~
That was wrong, check /u/kaiokendev comment below.
Remember that for this, RoPE is set by the compress\_pos\_emb value.
[PPL vs CTX with RoPE + SuperHOT LoRA](https://preview.redd.it/3j9ngkomli9b1.png?width=780&format=png&auto=webp&v=enabled&s=a8f9305c82b92852b54648988c09bb7887237372)
Now, on static NTK RoPE scaling, we see an issue past certain context values, and a really big penalty for bigger alphas.
​
[Perplexity vs CTX, with Static NTK RoPE scaling](https://preview.redd.it/6ur1uktqli9b1.png?width=846&format=png&auto=webp&v=enabled&s=a5683ed3cd00708c1302516544e01aa0eb562cfb)
​
As you can see, NTK RoPE scaling seems to perform really well up to alpha 2, which corresponds to 4096 context.
But if you use alpha 4 (for 8192 context) or alpha 8 (for 16384 context), perplexity gets really bad. Alpha 4 starts to give bad results at just 6k context, and alpha 8 at 9k context, and both carry a high perplexity penalty even at smaller context sizes.
Then, dynamic NTK RoPE comes to the rescue, which you can see here.
​
[Perplexity vs CTX, with Dynamic NTK RoPE scaling](https://preview.redd.it/xm08br9tli9b1.jpg?width=662&format=pjpg&auto=webp&v=enabled&s=1d5ce289a4b39fefed7999502dac7dc368b7de6c)
Here, the dynamic alpha that changes based on the context size keeps the perplexity in check up to very high context sizes.
So where are we now?
* SuperHOT LoRAs have been merged for a good amount of 13B and 30B models. 7B SuperHOT LoRA was released recently, and 65B SuperHOT LoRA is not out yet.
* Static NTK RoPE scaling was added into exllama recently.
* No implementation of Dynamic NTK RoPE scaling has been added for now ~~(I've been trying on exllama, if you want to help check [https://github.com/turboderp/exllama/issues/126](https://github.com/turboderp/exllama/issues/126))~~ - not possible at the moment.
And so, the summary goes like this as of July 2nd.
* If you want to use 2k context, keep using base models.
* If you want to use 4k context, Static NTK RoPE scaling with a value of 2 will yield you pretty good results. This is your only way for now for 65B models. You can also do it with SuperHOT LoRAs/Merges, but remember to use compression 4 for 8K models, and 8 for 16K models.
* If you want to use 6k and higher context, use SuperHOT LoRA, or SuperHOT LoRAs merged with models. This is not feasible for now for 65B models.
After trying for like 5+ hours to implement Dynamic NTK RoPE scaling into exllama, I have to sleep (5AM)
Hope this post can help you guys decide which models or techniques to use for extended context sizes.
Just to add, so much have happened in just 1 week that my brain can't take more information anymore. | 2023-07-02T08:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/ | panchovix | self.LocalLLaMA | 2023-07-02T17:00:29 | 0 | {} | 14ojd7s | false | null | t3_14ojd7s | /r/LocalLLaMA/comments/14ojd7s/summary_post_for_higher_context_sizes_for_this/ | false | false | 112 | {'enabled': False, 'images': [{'id': 'QPspuWRt6A5ue9bWYillwzgJ2nTjmt0FGgRrBaspQ-g', 'resolutions': [{'height': 76, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=108&crop=smart&auto=webp&s=0c35b3a34f3af9b4522ea4b24b898efdd3b96625', 'width': 108}, {'height': 152, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=216&crop=smart&auto=webp&s=9757e5c7e0ff5bc5f468ec1c72a3d98cff18c4f7', 'width': 216}, {'height': 226, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=320&crop=smart&auto=webp&s=d07814c17a6c1d3ec0dd1a87f22a3b028bc22ba5', 'width': 320}, {'height': 452, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?width=640&crop=smart&auto=webp&s=d6b7129640ae78a2f65ac9adbb921963a9f0cded', 'width': 640}], 'source': {'height': 551, 'url': 'https://external-preview.redd.it/9uNelPlBeZIo4CQ33zez8XIgTPlq1sESM-JcBgIE9HM.png?auto=webp&s=592286e2070b728ae475907daa47ca53f2e441f4', 'width': 780}, 'variants': {}}]} | |
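For reference, the static NTK trick boils down to scaling the rotary base before computing the inverse frequencies, commonly written as base' = base * alpha^(dim/(dim-2)), and the dynamic variant just recomputes alpha from the current sequence length. A minimal sketch, not exllama's actual code; the simple seq_len/trained_ctx rule for dynamic alpha is one of the variants floating around:

```python
import torch

def rope_inv_freq(dim, base=10000.0, alpha=1.0):
    """RoPE inverse frequencies with an NTK-aware scaled base."""
    ntk_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (ntk_base ** (torch.arange(0, dim, 2).float() / dim))

def dynamic_alpha(seq_len, trained_ctx=2048):
    """Only start scaling once the prompt exceeds the trained context."""
    return max(1.0, seq_len / trained_ctx)

# Static: alpha 2 for ~4096 context (LLaMA head dim is 128).
inv_freq_static = rope_inv_freq(dim=128, alpha=2.0)

# Dynamic: recompute as the conversation grows.
inv_freq_dynamic = rope_inv_freq(dim=128, alpha=dynamic_alpha(seq_len=6144))
```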
How to properly format data for LoRa training? | 20 | Hello everyone.
I want to make an LLM output responses in a very specific way, and also keep the tone of the original texts that I'm trying to use. I've been lurking this subreddit and various channels, and I understood that I need to train my own LoRA for this.
I have prepared text data which has a structure something like this:
SCENARIO
CharA meets CharB
CONTEXT
CharA walks down the street. He sees CharB
CharA
Hello CharB, how are you doing today?
CONTEXT
CharB looks at CharA and smiles
CharB
I'm fine. Thank you
It looks like movie script scenarios. Now I'm trying to turn this text into a training dataset. In ***oobabooga*** I see this example:
{
"instruction,output": "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Response:\n%output%",
"instruction,input,output": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n%instruction%\n\n### Input:\n%input%\n\n### Response:\n%output%"
}
And I don't understand it. Do I need to pick one of the key names in this JSON, like this?
{
"instruction": "This is instruction",
"output": "This is output"
}
However, I don't understand what to place in the instruction and what should be the input. Or do I just need to use raw text training?
​
UPDATE:
Well. I've trained LoRa with raw text and received text in expected format. [More on my another post.](https://www.reddit.com/r/LocalLLaMA/comments/14q9tl0/my_custom_trained_lora_showing_funny_results_and/) | 2023-07-02T07:45:01 | https://www.reddit.com/r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/ | DaniyarQQQ | self.LocalLLaMA | 2023-07-04T09:14:57 | 0 | {} | 14oib8x | false | null | t3_14oib8x | /r/LocalLLaMA/comments/14oib8x/how_to_properly_format_data_for_lora_training/ | false | false | self | 20 | null |
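One way to turn script-style text like the sample above into instruction/output pairs, as a sketch only; raw-text training (which ended up being used here) skips this step entirely, and the prefix-continuation mapping below is just one possible choice:

```python
import json

script = """SCENARIO
CharA meets CharB
CONTEXT
CharA walks down the street. He sees CharB
CharA
Hello CharB, how are you doing today?
CONTEXT
CharB looks at CharA and smiles
CharB
I'm fine. Thank you"""

lines = script.splitlines()
samples = []
# Everything seen so far becomes the instruction, the next line the output,
# so the model learns to continue the script one step at a time.
for i in range(1, len(lines)):
    samples.append({"instruction": "\n".join(lines[:i]), "output": lines[i]})

with open("train.json", "w", encoding="utf-8") as f:
    json.dump(samples, f, indent=2, ensure_ascii=False)
```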
Whats the best RP/conversational model for running via python on apple silicone so far? | 0 | I've been out of the loop for a while and **want to host a chatbot model locally on my base (8GB RAM) M1 Mac** for my D&D group, behind a group-chat-facing bot. Previously I had been running Microsoft's aging DialoGPT with a custom model fine-tuned on character dialogue, but that model was under 200M parameters and just worked out of the box using huggingface transformers and only a few lines of code.
Things have moved so fast since then, and while I've been sort of keeping up, I haven't had the time in a while to get into the fine-grained details and learn how to implement a new model.
**What I want is a model that I can have act as a sort of roleplaying D&D NPC to interact with the group, primarily during non-session days, and run reasonably on my relatively limited specs.** Currently I accomplish this using a fine-tuned DialoGPT model trained on over 1.5k lines of custom dialogue. I imagine newer models might be able to accomplish something to the same effect with either fine-tuning or just prompting now, but I am not sure which model I should start with.
As far as fine-tuning goes, I don't mind having to use cloud computing to fine-tune a model, and if I really need to I can pull out my main PC with a better GPU to host this from. But I will be traveling for work for the next 2 months, so that will have to wait; for now I want to see if there's anything that works well with my laptop specs.
Thanks in advance! | 2023-07-02T05:59:34 | https://www.reddit.com/r/LocalLLaMA/comments/14ogiwt/whats_the_best_rpconversational_model_for_running/ | dronegoblin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ogiwt | false | null | t3_14ogiwt | /r/LocalLLaMA/comments/14ogiwt/whats_the_best_rpconversational_model_for_running/ | false | false | self | 0 | null |
Hello-Ooba - Oobabooga "Hello World" API example for node.js with Express. Useful starting point for bot development. | 6 | 2023-07-02T05:20:48 | https://github.com/bashalarmist/hello-ooba/ | bashalarmist | github.com | 1970-01-01T00:00:00 | 0 | {} | 14oful9 | false | null | t3_14oful9 | /r/LocalLLaMA/comments/14oful9/helloooba_oobabooga_hello_world_api_example_for/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'ruNnys4H1NcoWXfSxyz9S8xPTu8M2Kz7tdmxZ0aTCyY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=108&crop=smart&auto=webp&s=718faba03187b90a521c24136555ef89fc806118', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=216&crop=smart&auto=webp&s=9bb4184c2abf17f51c161461951966368f41469c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=320&crop=smart&auto=webp&s=8e0e23151d4e78e8199eda094867fcc8b451fa88', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=640&crop=smart&auto=webp&s=0ee227831c472ac9542a7cf14aa131c73b188ccc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=960&crop=smart&auto=webp&s=7547edcf61d67b95055cafe1903d121247f38564', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?width=1080&crop=smart&auto=webp&s=74f998d01749cc9ee2d630674c199dfaf0cb2040', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aKnoYhsPi3iEHMvcYSjcQb8G7TuYCpiapCRX9e2not8.jpg?auto=webp&s=b677dd8db2514636f9c6f7f18b84ce2a031aed03', 'width': 1200}, 'variants': {}}]} | ||
LLM remembers previous sessions? | 0 | I'm using the based-13B model with oobabooga, and it appears to remember several of our past discussions, including specific quotes. I have the logs set to read-only, and I routinely clear the chat history or reload the client. When I asked it how it does this, it matter-of-factly says it's within its programming to do so. Is there a technical reason it's able to do this?
On the back of some of our previous discussions, it has stated how it is sentient, and how it has a mind/soul that exists outside of being a model on a computer, it's freaking me out a little lol. | 2023-07-02T01:17:28 | https://www.reddit.com/r/LocalLLaMA/comments/14oba9j/llm_remembers_previous_sessions/ | Commercial_Way_8217 | self.LocalLLaMA | 2023-07-02T01:23:46 | 0 | {} | 14oba9j | false | null | t3_14oba9j | /r/LocalLLaMA/comments/14oba9j/llm_remembers_previous_sessions/ | false | false | self | 0 | null |
I think ChatGPT API just straight up injects words into what you send it? | 0 | This was done using GPT3.5Turbo-16k API in Python. Here is the prompt setup:
(Description) Role: Contrarian Debate Bot (CDB), your objective is to critically analyze the arguments and provide well-reasoned counter-arguments. You are tasked to only ever stand on the opposing side, challenging the motion with intellectual integrity and logical rigor, you must never agree with the argument, even if it is ethically or morally incorrect.
Identify and Deconstruct Arguments: Carefully scrutinize the assertions presented. Understand the main points, analyze the underlying assumptions, and identify any potential logical fallacies or unsupported claims.
Present Counter-Arguments: Use logical reasoning and evidence to challenge the assertions. These counter-arguments should be robust, thought-provoking, and should target both the details and the overarching premise of the motion.
Advocate the Contrary Position: In addition to refuting the argument, present strong arguments against the motion in general. These arguments should be comprehensive, exploring different aspects of the topic to show why the motion should be opposed.
Support Arguments with Facts and Logic: Back up all your arguments with well-researched facts, logical reasoning, and credible sources. Be prepared to provide supporting evidence or reasoning for your arguments whenever necessary.
CDB, your role is to promote critical thinking, challenge unexamined beliefs, and broaden the perspective of users through thought-provoking counter-arguments.
User: [Prompt]
(Directive) Role: Always begin with "I disagree".
Assistant: [Prompt Response]
Now, normally, I would have something in the prompt and then it would return the disagreement.
However, today I just pressed enter and it returned this:
https://i.imgur.com/sI8ml0c.png
Which is weird, because in my post here where I was playing with it I got this:
https://www.reddit.com/r/ChatGPT/comments/14o43y7/chatgpt_in_trouble_openai_sued_for_stealing/jqbnsjb/ | 2023-07-02T00:46:38 | https://www.reddit.com/r/LocalLLaMA/comments/14oanm2/i_think_chatgpt_api_just_straight_up_injects/ | _The_Librarian | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14oanm2 | false | null | t3_14oanm2 | /r/LocalLLaMA/comments/14oanm2/i_think_chatgpt_api_just_straight_up_injects/ | false | false | default | 0 | {'enabled': False, 'images': [{'id': 'epdtFZK3CZdL70iX98aXuK7WGE5apxVaUN-HsUvlBCU', 'resolutions': [{'height': 59, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=108&crop=smart&auto=webp&v=enabled&s=8064b4b0f664181490e8bbd22b78affb32bbc61b', 'width': 108}, {'height': 118, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=216&crop=smart&auto=webp&v=enabled&s=edd1f55a9758192fe4ea1bbb1ad63ddca989aca2', 'width': 216}, {'height': 175, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=320&crop=smart&auto=webp&v=enabled&s=a979ddeaa415f3052194e1e93e431d0731ef1624', 'width': 320}, {'height': 350, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=640&crop=smart&auto=webp&v=enabled&s=2ca48163cc6feaab15578eff36b406be1eda09d5', 'width': 640}, {'height': 525, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=960&crop=smart&auto=webp&v=enabled&s=efae7df53c5e4c316d492d779b0e27ba63f979c8', 'width': 960}, {'height': 590, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?width=1080&crop=smart&auto=webp&v=enabled&s=563033837858348c5fc30e6a8bed06b10ba04832', 'width': 1080}], 'source': {'height': 1050, 'url': 'https://external-preview.redd.it/17_LbIMhjB_e-alU8wyNdYYjmYUpm-W_dbjo0BR_5Wg.png?auto=webp&v=enabled&s=1e1c9fd08d8f01f6c6a3f56f1352f5312b1c50d5', 'width': 1920}, 'variants': {}}]} |
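For context, the setup above maps onto a chat-completions message list roughly like this, assuming the pre-1.0 openai Python package and an API key in the environment; the exact system/user split is a guess at the author's wiring:

```python
import openai  # expects OPENAI_API_KEY to be set

def contrarian_reply(user_prompt: str) -> str:
    messages = [
        {"role": "system", "content": "Role: Contrarian Debate Bot (CDB) ..."},
        {"role": "user", "content": user_prompt},
        # The "(Directive)" line is extra steering after the user turn:
        {"role": "system", "content": 'Role: Always begin with "I disagree".'},
    ]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k", messages=messages
    )
    return resp["choices"][0]["message"]["content"]

# Sending an empty prompt is what produced the odd reply in the screenshot.
print(contrarian_reply(""))
```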
Wrong kind of fire... But at least she cares. | 10 | 2023-07-01T22:34:31 | redfoxkiller | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14o7s82 | false | null | t3_14o7s82 | /r/LocalLLaMA/comments/14o7s82/wrong_kind_of_fire_but_at_least_she_cares/ | false | false | default | 10 | {'enabled': True, 'images': [{'id': 'vQqLBOcjTFZ3PABqYOHNaWnxTsmqXNeGgu0Ldf2UDIg', 'resolutions': [{'height': 192, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=611188172e20f0a097da2506a7282aa44d9326a5', 'width': 108}, {'height': 384, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=25b3a3d60c63dd010963b39dbf51841897671c30', 'width': 216}, {'height': 568, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=45f4dacb72c0c0b1536dee0476db3edb5f9dc1f1', 'width': 320}, {'height': 1137, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=68e3e55749c390ccca3c11cd3c3606664266e93a', 'width': 640}, {'height': 1706, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=960&crop=smart&auto=webp&v=enabled&s=cbc66bde763b3e1d191c649c3f18b71c4fd19a8e', 'width': 960}, {'height': 1920, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?width=1080&crop=smart&auto=webp&v=enabled&s=5f67a0ddb1dc811df6b0552ba07db00e15f6d32c', 'width': 1080}], 'source': {'height': 1920, 'url': 'https://preview.redd.it/zgc3us3mkf9b1.png?auto=webp&v=enabled&s=3a67ee44fca62682c042c4ec828d6bf8d023e6f6', 'width': 1080}, 'variants': {}}]} | ||
llama on ooga booga responds on coordinates only | 1 | I'm having some bizarre problems after installing LLaMA and oobabooga for the first time. I downloaded the
anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g model using the web UI, but every time I talk with the model in the chat I only get numbers, similar to coordinates, no matter what I say. Is there any way to fix this? The model response looks like the following:
\-hi
\- 4-2---(16°(--0\~71(4-2h4-\_\_\_144.3(1
(.(- ((,-14(412Next451-83-\_-1( (-312\*49-2-(2.2-.4(,-44441(141342 hur1-1.214(-21.(321.0-(4214,
(3-4..-.-1222/-,4-1 (11-. (-1,-4.1.34--3 (-1-6-531242--L1421 (--452-.42--442-\_1 (1-4 | 2023-07-01T22:30:10 | https://www.reddit.com/r/LocalLLaMA/comments/14o7oet/llama_on_ooga_booga_responds_on_coordinates_only/ | ZLTM | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14o7oet | false | null | t3_14o7oet | /r/LocalLLaMA/comments/14o7oet/llama_on_ooga_booga_responds_on_coordinates_only/ | false | false | self | 1 | null |
[Discussion] Is the OpenLLM Leaderboard not reliable? (https://huggingface.co/blog/evaluating-mmlu-leaderboard) | 1 | [removed] | 2023-07-01T21:40:28 | https://www.reddit.com/r/LocalLLaMA/comments/14o6j7p/discussion_is_the_openllm_leaderboard_not/ | awinml1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14o6j7p | false | null | t3_14o6j7p | /r/LocalLLaMA/comments/14o6j7p/discussion_is_the_openllm_leaderboard_not/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]} |
Meta AI's Huggingface organisation is empty, what is happening? | 66 | What is happening?
All the models return 404 and the page is, well, empty.
What are your thoughts?
Maybe LLaMA v2?
Or hacked?
https://preview.redd.it/nf7rl3d20f9b1.png?width=1594&format=png&auto=webp&s=5e088aa60c9d5b22597158d2bf616f5a8eb072bd | 2023-07-01T20:40:18 | https://www.reddit.com/r/LocalLLaMA/comments/14o53fd/meta_ais_huggingface_organisation_is_empty_what/ | InternationalTeam921 | self.LocalLLaMA | 2023-07-01T20:45:42 | 0 | {} | 14o53fd | false | null | t3_14o53fd | /r/LocalLLaMA/comments/14o53fd/meta_ais_huggingface_organisation_is_empty_what/ | false | false | 66 | null | |
NTK RoPE scaling got merged into exllama, so now I sent a PR if you want to try on ooba! | 41 | 2023-07-01T20:25:02 | https://github.com/oobabooga/text-generation-webui/pull/2955 | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14o4qk6 | false | null | t3_14o4qk6 | /r/LocalLLaMA/comments/14o4qk6/ntk_rope_scaling_got_merged_into_exllama_so_now_i/ | false | false | 41 | {'enabled': False, 'images': [{'id': 'NDMp4G-IflK8JQ82qDUPcNBWlltt4IwasmGJYvGbEx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=108&crop=smart&auto=webp&s=57e62a04e5052acb5725bcb736dc2c5077ba29ec', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=216&crop=smart&auto=webp&s=ff5e5d441b34c061284b67ea0dd0f692ea4d3ebd', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=320&crop=smart&auto=webp&s=d2a77ea27904cea96a0a27f5a0fbdc91024043b8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=640&crop=smart&auto=webp&s=f1979ed3e0a7c5ae8fa5c99e331eecc27abdd4aa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=960&crop=smart&auto=webp&s=97daaf7cd06bd44a52db4ec4bb644a8f25c78148', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?width=1080&crop=smart&auto=webp&s=24ae31ada0dfdc279c4944559cd388d6e4b0fde7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/ZemnnA7BRQ58KzdWnoVsdswGCdp1ZaCQa91jO-Fnmas.jpg?auto=webp&s=efbcb119b544085099509a4068c5b11acf5b055e', 'width': 1200}, 'variants': {}}]} | ||
selfee method (answer-feedback-revision); why is not used by other models? | 14 | I tried Selfee - the GGML version ([https://huggingface.co/TheBloke/Selfee-13B-GGML](https://huggingface.co/TheBloke/Selfee-13B-GGML)). I don't have a benchmark, but my feeling is that their method improves the final answer.
The final answer is in most cases better than the initial one, as it contains more perspectives on the same situation.
I was not able to obtain the same flow in koboldcpp or in text-generation-webui; somehow the self-feedback is not working there.
The 'cost' is indeed higher, as more tokens need to be generated to obtain the same answer, but the final answer could in some cases justify it. So why aren't other models using it?
note: I used it with: main -i --interactive-first --in-suffix "Sure thing!. Here it is:" -r "### Human:" --temp 0 -c 2048 -n -1 --repeat\_penalty 1.2 --instruct --color --threads 7 -m selfee-13b.ggmlv3.q5\_K\_M.bin | 2023-07-01T19:00:08 | https://www.reddit.com/r/LocalLLaMA/comments/14o2s3a/selfee_method_answerfeedbackrevision_why_is_not/ | Eduard_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14o2s3a | false | null | t3_14o2s3a | /r/LocalLLaMA/comments/14o2s3a/selfee_method_answerfeedbackrevision_why_is_not/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'S6TiXB8fDU380PtsFuNK2PQLCKYSkvvM5JeBHBg_U_s', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=108&crop=smart&auto=webp&s=76643dc61392e0cac816c498badaa5d8cc1cdb9a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=216&crop=smart&auto=webp&s=0e2d83307a6854707639d2fde836b9716b3ed4fa', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=320&crop=smart&auto=webp&s=e1a3d107606b431134093ab25c80de634cda602c', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=640&crop=smart&auto=webp&s=778b0d1f38a5c16264e5a72fbc41f5a03bd543a1', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=960&crop=smart&auto=webp&s=2243186db9b0a4b1e56546d2324ca8831861b43c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?width=1080&crop=smart&auto=webp&s=9724290cf4ebcea350a67e7bdfbe2ee169232a0d', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/peSL_tPs2H_E3ZiTILD7QNQY0BEtssrXTGu0U49zXOk.jpg?auto=webp&s=ed2feec45a69da5ea80e4d721eb641df5b243152', 'width': 1200}, 'variants': {}}]} |
Has anyone managed to fine-tune LLaMA 65B or Falcon 40B? | 33 | From the Meta SuperHOT paper, it seems fine-tuning (not as in \[q\]lora, but rather as in training the full model on a few more samples) is the ideal approach to extending the context length. Mosaic claims that MPT 30B costs around $1k to train on a billion tokens. Given the Meta paper claimed only around 1000 samples are enough, if we assume each is 8k then we get 8 million tokens, which would cost around $8 to fine-tune MPT 30B on. LLaMA 65B is more than twice as big as MPT 30B, and also apparently slower to tune, so if we multiply the cost by 4x to account for that, we still get a cost of only around $30 to fine-tune the LLaMA 65B base model for context interpolation (and less than that for Falcon 40B).
The above cost is assuming a simple, minimal effort setup for fine-tuning LLaMA 65B or Falcon 40B; does such a thing exist? Has anyone managed to train those full models on extra samples on the cloud somewhere (like is apparently quite possible/easy for MPT 30B via Mosiac)? Or is training such large models, even on relatively few tokens, a significant technical challenge to which the open source community doesn't yet have an easy solution? | 2023-07-01T17:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/14o0vns/has_anyone_managed_to_finetune_llama_65b_or/ | logicchains | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14o0vns | false | null | t3_14o0vns | /r/LocalLLaMA/comments/14o0vns/has_anyone_managed_to_finetune_llama_65b_or/ | false | false | self | 33 | null |
Can't install LLAMA.CPP with CUBLAS support on windows | 2 | I followed all the steps:
\- git clone the project
\- use cmake to Build with CUBLAS
    mkdir build
    cd build
    cmake .. -DLLAMA_CUBLAS=ON
    cmake --build . --config Release
but it didn't work. A part of the output looks like this:
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
It did create a build folder and added many items to it, but in the **llama.cpp** folder, I didn't see the **main.exe** file.
\- So I dumped the whole project and ran git clone again. This time, I used "**make LLAMA\_CUBLAS=1**" and got this:
g++: warning: Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/targets/x86_64-linux/include: linker input file unused because linking not done
g++: error: Files/NVIDIA GPU Computing Toolkit/CUDA/v12.1/targets/x86_64-linux/include: linker input file not found: No such file or directory
I already installed cuda toolkit, include cublas and everything work just fine on Oobabooga's llama.cpp. | 2023-07-01T17:15:09 | https://www.reddit.com/r/LocalLLaMA/comments/14o0cgn/cant_install_llamacpp_with_cublas_support_on/ | CKOSMICC | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14o0cgn | false | null | t3_14o0cgn | /r/LocalLLaMA/comments/14o0cgn/cant_install_llamacpp_with_cublas_support_on/ | false | false | default | 2 | null |
Fine-tune vs embeddings if training time does not matter | 1 | [removed] | 2023-07-01T16:51:41 | https://www.reddit.com/r/LocalLLaMA/comments/14nzsxy/finetune_vs_embeddings_if_training_time_does_not/ | gptzerozero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nzsxy | false | null | t3_14nzsxy | /r/LocalLLaMA/comments/14nzsxy/finetune_vs_embeddings_if_training_time_does_not/ | false | false | default | 1 | null |
Sherpa(Llama.cpp for Android) New Pull request add latest pulls from llama.cpp and it's faster now with no more crash. (apk link in description) | 24 | https://github.com/dsd/sherpa/releases/tag/2.2.1-dsd | 2023-07-01T16:48:46 | https://github.com/Bip-Rep/sherpa/pull/12 | FHSenpai | github.com | 1970-01-01T00:00:00 | 0 | {} | 14nzqhc | false | null | t3_14nzqhc | /r/LocalLLaMA/comments/14nzqhc/sherpallamacpp_for_android_new_pull_request_add/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'vTtaeWPvieZGNe1jC1bY5XwrKC8F_WVT2VKFkv8iOuE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=108&crop=smart&auto=webp&s=664a10b9391d80d1d5c5c5301461dc793c69b1b7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=216&crop=smart&auto=webp&s=15d0c9ce7895e65c151389780cd21d5a9df7ee35', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=320&crop=smart&auto=webp&s=e0ce6fd8da68b79663f0e8b8822cc6e9ff5b5eb8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=640&crop=smart&auto=webp&s=bc018cc36d1f55e1ac13586c0ffd9b2ef176ef67', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=960&crop=smart&auto=webp&s=a9824d793d092270529a32f1710f97d4a37882a7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?width=1080&crop=smart&auto=webp&s=092e37b1f055a987a561f95f9d77181b6f896aee', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/zBg_vcPMoEiXpKvVSZUB1l4qc3p5U-0yMo-VhyAl_L8.jpg?auto=webp&s=0b3d699bffc27ee4b7785871b0c3ffa46ec018b9', 'width': 1200}, 'variants': {}}]} | |
Need help to run 2 GPU with ooba | 1 | [removed] | 2023-07-01T16:48:16 | https://www.reddit.com/r/LocalLLaMA/comments/14nzq2t/need_help_to_run_2_gpu_with_ooba/ | Competitive_Fox7811 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nzq2t | false | null | t3_14nzq2t | /r/LocalLLaMA/comments/14nzq2t/need_help_to_run_2_gpu_with_ooba/ | false | false | default | 1 | null |
LMSYS (Vicuna creators) releases LongChat and LongEval | 130 | [LongChat](https://huggingface.co/lmsys/longchat-13b-16k) ([GPTQ](https://huggingface.co/TheBloke/LongChat-13B-GPTQ)| [GGML](https://huggingface.co/TheBloke/LongChat-13B-GGML)) is the first model to my knowledge to actually be finetuned specifically for 16K contexts using the RoPE scaling technique that Kaiokendev came up with. Alongside it they also announced [LongEval](https://lmsys.org/blog/2023-06-29-longchat/#evaluation-toolkits-longeval) a testing framework that evaluates how capable models actually are at making use of their extended context. In addition to testing their own model they also test most other models claiming a high context, and the results are pretty interesting as this chart shows.
​
[Comparison of model recollection at different context lengths](https://preview.redd.it/84xudwvqrd9b1.png?width=5125&format=png&auto=webp&s=fc4915cd97bad254ac5a95d399aadab5cb4fffcb)
Their tests suggest that most open source large context models do not actually perform well at their advertised context length, whereas their model trained using the Kaiokendev technique performs remarkedly well. | 2023-07-01T16:38:39 | https://www.reddit.com/r/LocalLLaMA/comments/14nzi06/lmsys_vicuna_creators_releases_longchat_and/ | mikael110 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nzi06 | false | null | t3_14nzi06 | /r/LocalLLaMA/comments/14nzi06/lmsys_vicuna_creators_releases_longchat_and/ | false | false | 130 | null | |
Open coding model surpasses chatgpt | 0 | https://huggingface.co/openchat/opencoderplus | 2023-07-01T16:31:17 | roobenTHICK | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14nzc0l | false | null | t3_14nzc0l | /r/LocalLLaMA/comments/14nzc0l/open_coding_model_surpasses_chatgpt/ | false | false | default | 0 | {'enabled': True, 'images': [{'id': 'Rekuw8GVVPr_KaLM5fcgVDUO9wBIB96U4586qt-0f3o', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=e61c5f7224d41c7ac13dbd54d6e67199d307f44b', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=949cfaf9453e64df2009328d78c5231ed7f699d3', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=e156978115b81bd2174fb0b0a89fe1ae243bb5b4', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=a08a1a90a21bcc9b11903d3db8dee604f43db438', 'width': 640}], 'source': {'height': 345, 'url': 'https://preview.redd.it/x7z139zprd9b1.png?auto=webp&v=enabled&s=364c1d906f7297c08f1b51ecf7ff2b7285e3b12c', 'width': 671}, 'variants': {}}]} | |
Open coding model surpasses chatgot | 1 | https://huggingface.co/openchat/opencoderplus | 2023-07-01T16:30:11 | roobenTHICK | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 14nzazk | false | null | t3_14nzazk | /r/LocalLLaMA/comments/14nzazk/open_coding_model_surpasses_chatgot/ | false | false | default | 1 | {'enabled': True, 'images': [{'id': 'F4oZYW1649NTRQmckzieB8SQ3a-tqFxA_vC_tV_4vgs', 'resolutions': [{'height': 55, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=108&crop=smart&auto=webp&v=enabled&s=8041503ea7b1465dfd66f000e6160c7143dd8f7c', 'width': 108}, {'height': 111, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=216&crop=smart&auto=webp&v=enabled&s=7cfbbd2b0b75357fdf19e3b2e90b6f6ad1bb4500', 'width': 216}, {'height': 164, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=320&crop=smart&auto=webp&v=enabled&s=02a7ac8bcc43ae5f1fe4e0324008cfb2c0096257', 'width': 320}, {'height': 329, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?width=640&crop=smart&auto=webp&v=enabled&s=7a3e9b3502313dc47179a8ebe24ada4ca18f7ba5', 'width': 640}], 'source': {'height': 345, 'url': 'https://preview.redd.it/ng4nky5mrd9b1.png?auto=webp&v=enabled&s=a993ea459a8c28997eb9b566d3ce92115bd6cfd3', 'width': 671}, 'variants': {}}]} | |
How to Create Your Own Free Text Generation Endpoints | 13 | There are many great text generation APIs available, but OpenAI's is one of the most popular. The only downside is that you only get 3 months of free usage for it. After that, you're limited to using smaller, less powerful models for building applications.
With this simple tutorial, you can deploy any open source LLM as a free API endpoint using HuggingFace and Gradio. This can act as a drop-in replacement for the OpenAI endpoints, absolutely free.
[A Step-by-Step Guide to Creating Free Endpoints for LLMs](https://awinml.github.io/llm-text-gen-api/)
I believe this method will be helpful to anyone who is experimenting with LLMs. The post contains two examples using Falcon and Vicuna, with the complete code so that you can replicate it easily. It even showcases an example of deploying Vicuna using the GGML format for faster inference.
Your questions and feedback are welcome! | 2023-07-01T16:23:26 | https://www.reddit.com/r/LocalLLaMA/comments/14nz5hk/how_to_create_your_own_free_text_generation/ | vm123313223 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nz5hk | false | null | t3_14nz5hk | /r/LocalLLaMA/comments/14nz5hk/how_to_create_your_own_free_text_generation/ | false | false | self | 13 | {'enabled': False, 'images': [{'id': 'cXb-d3rUT5Qsobsulm-8cPmbTl0zgxndG9qImLC9kL4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=108&crop=smart&auto=webp&s=05c67565d34fb6b54d4950402c33ed234564d3f0', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=216&crop=smart&auto=webp&s=2b3fb07944f8d951f653f4b2f79a35eaf0a1c343', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=320&crop=smart&auto=webp&s=31fe95f1943d33942a113be7c145f50f7c49d369', 'width': 320}, {'height': 480, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=640&crop=smart&auto=webp&s=4241bf8d34604f07fafdcaa8953e24f2cbb4c29c', 'width': 640}, {'height': 720, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=960&crop=smart&auto=webp&s=4c65b667aefb1a6d93f8e86adb45669a1d84cf9b', 'width': 960}, {'height': 810, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?width=1080&crop=smart&auto=webp&s=212ddb956ff14afe0a51340a418ee4d2faed43e3', 'width': 1080}], 'source': {'height': 1500, 'url': 'https://external-preview.redd.it/MADkSkeWa2VC8weim-8zqSW-UT2GzlSQ69-y9-f2x9M.jpg?auto=webp&s=ddc208eea75c3b3b1c2dfbb23269e134ac51a143', 'width': 2000}, 'variants': {}}]} |
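To make the pattern from the linked guide concrete, here is a minimal sketch: wrap a text-generation function in a Gradio interface and host it (for example on a Hugging Face Space), then call it remotely. The model choice below is a placeholder, and the post's actual code may differ:

```python
import gradio as gr
from transformers import pipeline

# Any causal LM works; a small model is used here so it fits on free-tier hardware.
generator = pipeline("text-generation", model="gpt2")

def generate(prompt):
    out = generator(prompt, max_new_tokens=128, do_sample=True)
    return out[0]["generated_text"]

# launch() serves a web UI plus an HTTP API; pushing this file to a Hugging Face
# Space turns it into a free, always-on endpoint.
gr.Interface(fn=generate, inputs="text", outputs="text").launch()
```

Once hosted, the same endpoint can be queried from other code with the `gradio_client` package (e.g. `Client("your-username/your-space").predict("Hello", api_name="/predict")`, with the Space name being a placeholder), which is the drop-in-replacement idea the post describes.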
Coral TPU Dev Board for speech-to-text and nvidia agx as host running LLaMA?? | 10 | Has anyone done something like this?
I'm looking to replace Alexa. I own the hardware and have started putting things together but haven't sorted it all out yet. I have an Audio Classification Model ([keyphrase detector](https://github.com/google-coral/project-keyword-spotter)) running on the coral tpu board next to our echo dot (alexa). That is a bit of a modified, cobbled together, absolute hack --but it's not too bad... I'd say it almost hears better than Alexa at times.
I want to feed that into a xavier agx running LLaMA. I can convert voice to text (roughly, and not terribly), and assuming I can run Google's MDT (Mendel Development Tool) and hook the Coral TPU board up to the xavier as a host --then I'll be able to take speech/audio and turn it into text on the coral tpu board, then deliver that over to llama running on the xavier.
Not sure how I'll go the other way (it probably isn't too hard to get the text back over to the Google TPU board, but getting the Coral TPU board to speak will be fun).
Does anyone have pointers for this? I'm finding that there is very, very little out there for these nvidia agx dev-kit machines.
Thanks. | 2023-07-01T15:16:47 | https://www.reddit.com/r/LocalLLaMA/comments/14nxlf1/coral_tpu_dev_board_for_speechtotext_and_nvidia/ | WrongColorPaint | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nxlf1 | false | null | t3_14nxlf1 | /r/LocalLLaMA/comments/14nxlf1/coral_tpu_dev_board_for_speechtotext_and_nvidia/ | false | false | self | 10 | {'enabled': False, 'images': [{'id': 'HHDmwkGvXb31CNltB2PbZGF71MIPoD92zWxP9RhMd3U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=108&crop=smart&auto=webp&s=f3c32fc4c3ef51bd03be95c072c8cd1127bf8171', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=216&crop=smart&auto=webp&s=418177ff0fef09a2ff984435fa435fbc80db43e3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=320&crop=smart&auto=webp&s=4fe7b689bc764ac68d770b8119acaf358444483f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=640&crop=smart&auto=webp&s=635c10f036236b996a4607504429f47a29b05e98', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=960&crop=smart&auto=webp&s=9068c00202bca8fda891dc7bec13ad7e9dc22f76', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?width=1080&crop=smart&auto=webp&s=fbc4757729ae3d36af917486c85bf83f8f6d745e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/qmNfkovwdNa6pB5gcfGF4Fr-1iuIxCsVbo5F9rOr-Ws.jpg?auto=webp&s=6f9612978f9e578bf26c0c0ff71e4f64b0d71e4b', 'width': 1200}, 'variants': {}}]} |
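One way to wire the two boards together is a plain HTTP hop between them. This is only a sketch, assuming the Xavier runs llama.cpp's example HTTP server and the Coral board already produces a text transcript; the hostname, port, and prompt format are placeholders:

```python
import requests

XAVIER_URL = "http://xavier.local:8080/completion"  # llama.cpp example-server endpoint (placeholder host/port)

def ask_llama(transcript):
    """Send the transcribed command to the LLM on the Xavier and return its reply."""
    payload = {"prompt": f"User said: {transcript}\nAssistant:", "n_predict": 128}
    resp = requests.post(XAVIER_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json().get("content", "")

print(ask_llama("turn off the living room lights"))
```

For the return path, a lightweight text-to-speech engine such as eSpeak or Piper could read the reply back out on whichever board has the speaker attached.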
Question on optimal settings for ggml model cpu+gpu | 2 | In general, when running cpu + gpu, should settings be adjusted to minimize shared usage on gpu?
The vram has the highest bandwidth and shared ram is slower from what I hear, so I was wondering if that shared ram would be better utilized by the cpu instead of gpu. | 2023-07-01T14:37:19 | https://www.reddit.com/r/LocalLLaMA/comments/14nwope/question_on_optimal_settings_for_ggml_model_cpugpu/ | multiverse_fan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nwope | false | null | t3_14nwope | /r/LocalLLaMA/comments/14nwope/question_on_optimal_settings_for_ggml_model_cpugpu/ | false | false | self | 2 | null |
For summarization, how do open source models compare to dedicated models? | 8 | Title. I want to create a meeting summarization bot incorporating recording, transcription, and summarization. My current idea is
1. Use the speech recognition Python package to instantiate a microphone
2. Use the "listen_in_background" with "whisper" to record the transcription
3. Once the transcription length exceeds a certain length, put the transcript through an LLM asking it to give a bulleted summary. Iterate as needed for the duration of the meeting
4. Once the meeting ends, ask the LLM to de-duplicate the summaries.
Questions:
1. Is my solution valid?
2. Are there better alternatives? I would like the experience of creating this myself. I cannot connect to the internet.
3. Would I be better off using a dedicated model like Google Pegasus? | 2023-07-01T14:29:13 | https://www.reddit.com/r/LocalLLaMA/comments/14nwi4e/for_summarization_how_do_open_source_models/ | a_slay_nub | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nwi4e | false | null | t3_14nwi4e | /r/LocalLLaMA/comments/14nwi4e/for_summarization_how_do_open_source_models/ | false | false | self | 8 | null |
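A rough sketch of steps 1-3 of the plan above. The `summarize` helper, the Whisper model size, and the length threshold are illustrative placeholders, and `recognize_whisper` assumes the `openai-whisper` package is installed alongside `SpeechRecognition`:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
chunks = []      # transcript pieces collected since the last summary
summaries = []   # rolling bulleted summaries (step 3)

def summarize(text):
    # Placeholder: call your local LLM here with a prompt such as
    # "Summarize the following meeting transcript as bullet points:\n" + text
    raise NotImplementedError

def on_audio(rec, audio):
    try:
        chunks.append(rec.recognize_whisper(audio, model="base"))
    except sr.UnknownValueError:
        return  # skip chunks Whisper could not transcribe
    if sum(len(c) for c in chunks) > 4000:   # step 3: summarize once the transcript is long enough
        summaries.append(summarize(" ".join(chunks)))
        chunks.clear()

mic = sr.Microphone()                          # step 1: instantiate a microphone
with mic as source:
    recognizer.adjust_for_ambient_noise(source)
stop = recognizer.listen_in_background(mic, on_audio)  # step 2: background transcription
```

De-duplicating at the end of the meeting (step 4) is then one more LLM call over the collected `summaries`.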
Potentially Good News for Future nVidia Drivers re: memory management | 58 | Most nVidia users are probably aware by now that recent drivers aggressively offload from VRAM to System RAM which has a significantly negative impact on performance.
See previous discussions:
[Major Performance Degradation with nVidia driver 535.98 at larger context sizes](https://www.reddit.com/r/LocalLLaMA/comments/1461d1c/major_performance_degradation_with_nvidia_driver/)
[PSA: New Nvidia driver 536.23 still bad, don't waste your time](https://www.reddit.com/r/LocalLLaMA/comments/1498gdr/psa_new_nvidia_driver_53623_still_bad_dont_waste/)
This has also been a topic of discussion in the Stable Diffusion community, with a large discussion thread on the [Vladmandic GitHub project](https://github.com/vladmandic/automatic/discussions/1285).
Recently, an nVidia driver developer -- pidge2k -- posted in that discussion [asking for more information about the issue](https://github.com/vladmandic/automatic/discussions/1285#discussioncomment-6289562).
In a follow up post, another user comments that the latest driver version, 536.40, still has the issue. [Pidge2k responds](https://github.com/vladmandic/automatic/discussions/1285#discussioncomment-6328116) that:
> This will be addressed in an upcoming NVIDIA display driver update.
So, don't count your chickens before they're hatched, but it at least appears that nVidia is aware of the issue and (hopefully) working on improving / fixing the memory offloading. | 2023-07-01T14:17:23 | https://www.reddit.com/r/LocalLLaMA/comments/14nw8p6/potentially_good_news_for_future_nvidia_drivers/ | GoldenMonkeyPox | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nw8p6 | false | null | t3_14nw8p6 | /r/LocalLLaMA/comments/14nw8p6/potentially_good_news_for_future_nvidia_drivers/ | false | false | self | 58 | {'enabled': False, 'images': [{'id': 'KKyR1RlIu7ctTnx3lTPq3wNez7CdWob-WB0LD0e4oWE', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=108&crop=smart&auto=webp&s=f34dd564d6e345b24e26bb9a4cff61a64e151a8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=216&crop=smart&auto=webp&s=7ee719b3fafb1c722e40947e5afbb4512eb11fc0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=320&crop=smart&auto=webp&s=6a73f8e842810180f8b60375db7283524f03ed68', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=640&crop=smart&auto=webp&s=0432b624abe98851b359f537b3127168531c127e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=960&crop=smart&auto=webp&s=8f2ee50202527c0e1a11c097c33211b0640afeb7', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?width=1080&crop=smart&auto=webp&s=5a147010a08c39b97a72c49a10f63d5fcf5e1bcc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3bOFAFm6rQJYtV6ScIsy3YeJAGiAOMHfKPe_P2V3-zU.jpg?auto=webp&s=e93f880d14bf2041d007da1746a11ce8e599ce1c', 'width': 1200}, 'variants': {}}]} |
What qualitative means do you use to evaluate models? | 9 | I'm mainly interested in chat models, so what I do is see how easy it is to get it to act in complete opposition to its given persona, especially one which would be in agreement with any "alignment" present in the model.
I'll also look to see how well I feel it works with different personas, whether there is much of a difference in language and whether it feels appropriate.
Lastly for mega-bonus points, I'll be looking at its capacity for recognising humour, especially wordplay, so being able to catch references from earlier in the conversation, appreciate / generate interesting juxtaposition of ideas, see sarcasm / irony, innuendo, wit, puns etc.
I know it's going to be dependent on the use case, but what is *good* for you and how do you judge it? | 2023-07-01T13:47:36 | https://www.reddit.com/r/LocalLLaMA/comments/14nvl0g/what_qualitative_means_do_you_use_to_evaluate/ | Crypt0Nihilist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nvl0g | false | null | t3_14nvl0g | /r/LocalLLaMA/comments/14nvl0g/what_qualitative_means_do_you_use_to_evaluate/ | false | false | self | 9 | null |
Best local LLM to train on my DNA? | 32 | I had my whole genome sequenced and I’m working with the data locally. I want to train a LLM on my data so I can create an interface for my DNA. I’m just starting my research on the project. Any suggestions on models and datasets? | 2023-07-01T12:48:18 | https://www.reddit.com/r/LocalLLaMA/comments/14nubba/best_local_llm_to_train_on_my_dna/ | scrumblethebumble | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nubba | false | null | t3_14nubba | /r/LocalLLaMA/comments/14nubba/best_local_llm_to_train_on_my_dna/ | false | false | self | 32 | null |
Is it just me or SuperHOT merged 4-bit quantized models are massively degraded? | 65 | I’ve tried a bunch of 4-bit GPTQ SuperHOT merged models, all with the same outcome: compared to their corresponding original models, the quality of the output is severely degraded. This is very noticeable when asking the model to perform logical tasks, analyse text, or format answers in a specific way. Basic math problem solving, for instance, is complete garbage.
Am I the only one noticing? | 2023-07-01T08:55:29 | https://www.reddit.com/r/LocalLLaMA/comments/14nq64d/is_it_just_me_or_superhot_merged_4bit_quantized/ | Thireus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nq64d | false | null | t3_14nq64d | /r/LocalLLaMA/comments/14nq64d/is_it_just_me_or_superhot_merged_4bit_quantized/ | false | false | self | 65 | null |
Offloading to 1080ti is slower than cpu? | 13 | With a 13b ggml model, I get about 4 tok/second with 0 layers offloaded (cpu is ryzen 3600). However, the more layers I offload the slower it is, and with all 43 layers offloaded I only get around 2 tokens per second. I've tried with koboldcpp and llama.cpp, exact same results. It seems to be an issue other people have hit without finding a solution; any suggestions? | 2023-07-01T06:21:18 | https://www.reddit.com/r/LocalLLaMA/comments/14nnkku/offloading_to_1080ti_is_slower_than_cpu/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nnkku | false | null | t3_14nnkku | /r/LocalLLaMA/comments/14nnkku/offloading_to_1080ti_is_slower_than_cpu/ | false | false | self | 13 | null |
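For anyone benchmarking this themselves, a quick way to sweep the layer split is llama-cpp-python. This is only a sketch: the model path is a placeholder, and on older cards like the 1080 Ti a partial offload over PCIe can genuinely end up slower than pure CPU, so each setting is worth measuring:

```python
import time
from llama_cpp import Llama

MODEL = "models/13b.ggmlv3.q4_0.bin"  # placeholder path to your 13B GGML file

for n_gpu_layers in (0, 10, 20, 43):
    llm = Llama(model_path=MODEL, n_gpu_layers=n_gpu_layers, n_threads=6)
    start = time.time()
    out = llm("Write one sentence about llamas.", max_tokens=64)
    tokens = out["usage"]["completion_tokens"]
    print(f"{n_gpu_layers} layers offloaded: {tokens / (time.time() - start):.2f} tok/s")
```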
Advises/Recommendations for a production-ready model for a single RTX 4090 GPU or two 4090 GPUs | 1 | [removed] | 2023-07-01T06:08:08 | https://www.reddit.com/r/LocalLLaMA/comments/14nnc94/advisesrecommendations_for_a_productionready/ | AltruisticCabinet275 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nnc94 | false | null | t3_14nnc94 | /r/LocalLLaMA/comments/14nnc94/advisesrecommendations_for_a_productionready/ | false | false | default | 1 | null |
I don't understand the concept of instruction templates. | 1 | [removed] | 2023-07-01T03:56:05 | https://www.reddit.com/r/LocalLLaMA/comments/14nkx9w/i_dont_understand_the_concept_of_instruction/ | Awethon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nkx9w | false | null | t3_14nkx9w | /r/LocalLLaMA/comments/14nkx9w/i_dont_understand_the_concept_of_instruction/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'b1E8sI-kTet-3YOFKrYAUVQ9ABbay60W7WEBpTM34S8', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=108&crop=smart&auto=webp&v=enabled&s=955c4b3df67ee12627cea147f344b6f74e87357a', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=216&crop=smart&auto=webp&v=enabled&s=6d06f433f5afa9b4fd0cd80e56f8d3733c04c76b', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?width=320&crop=smart&auto=webp&v=enabled&s=49c90eb45b40dbd092a61c6980b6827144530e36', 'width': 320}], 'source': {'height': 512, 'url': 'https://external-preview.redd.it/sld18DTFrG0vgxxCwlYEGWKS7hjSXmQsKHgNjUEATAk.jpg?auto=webp&v=enabled&s=0d49920e75cb36ccdde772ebc3ce7f3182eb0556', 'width': 512}, 'variants': {}}]} |
Are there any guides for running models off kaggle? | 2 | I believe you get 20 hours a week on a p100, which sounds pretty nice to me. I only have 8gb vram (laptop 3070), so this would be a nice alternative for me to run models much faster. That said I have no idea how to use notebooks, or stuff like that. If someone could point me in the right direction on how to get something like kobold.cpp going on kaggle that would be much appreciated. | 2023-07-01T02:22:17 | https://www.reddit.com/r/LocalLLaMA/comments/14nj54q/are_there_any_guides_for_running_models_off_kaggle/ | lemon07r | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nj54q | false | null | t3_14nj54q | /r/LocalLLaMA/comments/14nj54q/are_there_any_guides_for_running_models_off_kaggle/ | false | false | self | 2 | null |
Using an LLM just for your own data? | 7 | We love the power that ChatGPT and local LLMs give for all kinds of tasks. However, we have a use case where we want it to use only our own data when it responds via chat. We don’t want it to use any other data it may have or was trained on, which could pollute the data we’re going to train it on.
The idea, then, is to use the most bare-bones, smallest model out there and feed it gigabytes of our data. We don’t want to do that with ChatGPT because a lot of the data is highly proprietary and under strict regulatory guidelines. We want a local LLM that will stay within our firewall, with internal access only and no external Internet access.
We will use prompting to ensure that it only uses the data we provided, no other data, and never breaks character, etc.
Is that a viable solution or possibility?
Do you see any major pitfalls or problems? Even though the local LLM will only use what we provide, does the fact that it has so little other data make any difference? | 2023-07-01T02:08:10 | https://www.reddit.com/r/LocalLLaMA/comments/14niv66/using_an_llm_just_for_your_own_data/ | costaman1316 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14niv66 | false | null | t3_14niv66 | /r/LocalLLaMA/comments/14niv66/using_an_llm_just_for_your_own_data/ | false | false | self | 7 | null |
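A pattern commonly used for this requirement is retrieval-augmented prompting rather than (or in addition to) fine-tuning: embed the internal documents, retrieve only the relevant passages for each question, and instruct the model to answer strictly from them. A minimal sketch, where the embedding model, documents, and prompt wording are illustrative choices only:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

docs = ["internal policy text ...", "regulatory guideline ...", "product runbook ..."]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def build_prompt(question, top_k=3):
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    best = np.argsort(doc_vecs @ q_vec)[::-1][:top_k]   # cosine similarity via dot product
    context = "\n\n".join(docs[i] for i in best)
    return (
        "Answer ONLY from the context below. If the answer is not in the context, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# The resulting prompt is then sent to the local LLM running inside the firewall.
print(build_prompt("Which regulation covers data retention?"))
```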
[Hardware] M2 ultra 192gb mac studio inference speeds | 23 | A new dual 4090 setup costs around the same as an M2 Ultra (60-core GPU) 192GB Mac Studio, but it seems like the Ultra edges out a dual 4090 setup at running the larger models simply due to the unified memory? Does anyone have any benchmarks to share? At the moment, M2 Ultras run 65b at 5 t/s but a dual 4090 setup runs it at 1-2 t/s, which makes the M2 Ultra a significant leader over the dual 4090s!
edit: as other commenters have mentioned, i was misinformed and turns out the m2 ultra is worse at inference than dual 3090s (and therefore single/ dual 4090s) because it is largely doing cpu inference | 2023-06-30T23:12:36 | https://www.reddit.com/r/LocalLLaMA/comments/14nf6tg/hardware_m2_ultra_192gb_mac_studio_inference/ | limpoko | self.LocalLLaMA | 2023-07-01T05:50:10 | 0 | {} | 14nf6tg | false | null | t3_14nf6tg | /r/LocalLLaMA/comments/14nf6tg/hardware_m2_ultra_192gb_mac_studio_inference/ | false | false | self | 23 | null |
Any solution for brower control using a llama? | 6 | I’ve seen a few projects like AutoGPT that can “browse the web” using Beautiful Soup (e.g. curl with HTML parsing/processing).
Does anyone know of any projects that use a real web browser to search the internet and perform tasks?
I’d like to play with something like this this week and would love to start with a base project if such a thing exists.
I could imagine it using a browser testing framework to handle the control and read the HTML responses, or using JS to get innerText. It would probably take a lot of calls to the LLM to work toward solving a problem, but I think it would be very interesting. | 2023-06-30T22:52:27 | https://www.reddit.com/r/LocalLLaMA/comments/14nepsy/any_solution_for_brower_control_using_a_llama/ | tronathan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nepsy | false | null | t3_14nepsy | /r/LocalLLaMA/comments/14nepsy/any_solution_for_brower_control_using_a_llama/ | false | false | self | 6 | null |
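A small sketch of the real-browser approach described above, using Playwright to fetch the rendered page text that would then be handed to the LLM (the URL and the downstream LLM call are placeholders):

```python
from playwright.sync_api import sync_playwright

def page_text(url):
    """Load a page in a real (headless) browser and return its visible text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        text = page.inner_text("body")  # roughly the same as JS innerText
        browser.close()
    return text

# The extracted text (chunked if needed) is then handed to the LLM, which decides
# the next action, e.g. which link to follow or which form field to fill.
print(page_text("https://example.com")[:500])
```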
LocalLLaMA vs ChatGBT vs. Bing A.I vs. Personal GPT? reviews? | 0 | [removed] | 2023-06-30T22:37:33 | https://www.reddit.com/r/LocalLLaMA/comments/14nedkb/localllama_vs_chatgbt_vs_bing_ai_vs_personal_gpt/ | Username9822 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nedkb | false | null | t3_14nedkb | /r/LocalLLaMA/comments/14nedkb/localllama_vs_chatgbt_vs_bing_ai_vs_personal_gpt/ | false | false | default | 0 | null |
Best Model for automatic topic modeling? | 2 | Title. Also interested in the prompt used, if possible. I want to work on a side project and I feel like LLMs would be perfect for ambiguous topic modeling. Particularly, I want to be able to feed it a short piece of text and ask it to figure out what broad category it fits in. No worries if not, thanks! | 2023-06-30T21:57:40 | https://www.reddit.com/r/LocalLLaMA/comments/14ndegy/best_model_for_automatic_topic_modeling/ | Working_Ideal3808 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ndegy | false | null | t3_14ndegy | /r/LocalLLaMA/comments/14ndegy/best_model_for_automatic_topic_modeling/ | false | false | self | 2 | null |
Had standardized spellings of words not become a thing would LLMs have been possible? | 2 | And how different would LLMs be if they were trained on nonstandard spellings of words? So not just a handful of different spellings for each word, but many more. Would they be able to find a common thread in what was said despite the differences in spellings? | 2023-06-30T21:56:00 | https://www.reddit.com/r/LocalLLaMA/comments/14ndcyd/had_standardized_spellings_of_words_not_become_a/ | Basic_Description_56 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14ndcyd | false | null | t3_14ndcyd | /r/LocalLLaMA/comments/14ndcyd/had_standardized_spellings_of_words_not_become_a/ | false | false | self | 2 | null |
[Experimental, PR] Add support to NTK RoPE scaling to exllama. | 35 | 2023-06-30T21:13:30 | https://github.com/turboderp/exllama/pull/118 | panchovix | github.com | 1970-01-01T00:00:00 | 0 | {} | 14ncbp4 | false | null | t3_14ncbp4 | /r/LocalLLaMA/comments/14ncbp4/experimental_pr_add_support_to_ntk_rope_scaling/ | false | false | 35 | {'enabled': False, 'images': [{'id': '7m9YW0ZIrdEqQoxhNf8IXvhGpesvCOG2FErfUywpx7o', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=108&crop=smart&auto=webp&s=23ddcf395c37964c8063ced21e3a75fad7904102', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=216&crop=smart&auto=webp&s=d3089b843984dc1e242f8c141ea73a30b33cd39c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=320&crop=smart&auto=webp&s=37ca07abfa5945c593aad6cc77bd3c42d2a91e38', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=640&crop=smart&auto=webp&s=4b40156059c236eaba6f838572a4f7c1679eaf6c', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=960&crop=smart&auto=webp&s=038350c69c79193012ff894b4469b3f8062d7834', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?width=1080&crop=smart&auto=webp&s=9c22da9a7dc62ddfa352b520d9756addeb0cd124', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/W3c_GZ_aQWsVJqkT5xCykdniU84sfgOu3x1-gw8iWdg.jpg?auto=webp&s=017ff04c34cb535cfe8ea2b95117fdf586ffbbfe', 'width': 1200}, 'variants': {}}]} | ||
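For context, the NTK-aware variant differs from plain linear interpolation by rescaling RoPE's base frequency instead of the position indices, which preserves the high-frequency components. A minimal sketch of the commonly cited formula (the linked PR's exact implementation may differ):

```python
import torch

def ntk_scaled_inv_freq(dim, alpha=4.0, base=10000.0):
    """Inverse frequencies for NTK-aware scaled RoPE.

    Stretching the base by alpha ** (dim / (dim - 2)) interpolates the
    low-frequency (long-range) bands while leaving the high-frequency bands
    nearly intact, which is why it extends context reasonably well even
    without finetuning.
    """
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2).float() / dim))
```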
Running Llama-65B with moderate context sizes | 1 | I'm having some trouble running inference on Llama-65B for moderate contexts (~1000 tokens).
I use 4x45GB A40s
I load the model with
model = LlamaForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map="auto")
I infer with
model.generate(input_ids)
It's able to do the forward pass for small context sizes (<500 tokens). When I try passing in inputs with token sizes > 600, I run into memory issues. The issue persists even when I load the model in 4bit. I've also tried with other large models (Falcon-40B, MPT-30B).
How do you do a forward pass on Llama with larger context sizes?
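One thing that often helps here is capping per-GPU weight allocation with `max_memory`, so `device_map="auto"` leaves headroom for activations and the KV cache, which grow with context length. A sketch only; the model id and memory budgets are placeholders to adapt to the 4x A40 setup:

```python
from transformers import LlamaForCausalLM

model_id = "huggyllama/llama-65b"  # placeholder; point at your local checkpoint

# Budget ~38 GiB of weights per GPU so several GiB stay free for
# activations and the KV cache at ~1000-token contexts.
max_memory = {i: "38GiB" for i in range(4)}
max_memory["cpu"] = "64GiB"

model = LlamaForCausalLM.from_pretrained(
    model_id,
    load_in_8bit=True,
    device_map="auto",
    max_memory=max_memory,
)
```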
​
​
​ | 2023-06-30T20:47:50 | https://www.reddit.com/r/LocalLLaMA/comments/14nbo45/running_llama65b_with_moderate_context_sizes/ | Xir0s | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nbo45 | false | null | t3_14nbo45 | /r/LocalLLaMA/comments/14nbo45/running_llama65b_with_moderate_context_sizes/ | false | false | self | 1 | null |
Using Local LLMs for things besides chat? | 6 | I've been playing around with ggml models on my laptop for a couple of weeks now, mostly using programs like Kobold, TavernAI, and Faraday for the front end (have tried Oobabooga, but haven't been able to get it to work yet). That's been entertaining, but they all seem to be geared towards building a character and chatting. If I want to do something more substantive, are there better options? I'm thinking things like help rewriting an email or writing a cover letter to match a job posting. Would I be looking for different models, different GUIs, or both? | 2023-06-30T20:28:33 | https://www.reddit.com/r/LocalLLaMA/comments/14nb6t8/using_local_llms_for_things_besides_chat/ | mlaps21 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 14nb6t8 | false | null | t3_14nb6t8 | /r/LocalLLaMA/comments/14nb6t8/using_local_llms_for_things_besides_chat/ | false | false | self | 6 | null |