title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7
values | id stringlengths 7 7 | locked bool 2
classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2
classes | stickied bool 2
classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Assessing Learning and Progress in Training with LoRa Method: A Case Study with llama2 | 3 | I am trying to overfit llama2 and understand if it's learning. To achieve this, I am training it using a single example and the LoRa method. I expect that it will learn from it—thus, the loss will decrease—and when I generate text, it will reproduce that example given only the beginning.
​
When I train the 13B model, the loss decreases to 0, and the model memorizes the training example. However, when I train the same code with a 7B model using LoRa, the loss remains at zero initially and doesn't make any progress. Moreover, when generating from the training example, it produces the same text as before training.
​
Could you please suggest how to ensure that the training is taking place using the LoRa method? have you successfully trained the 7B model using LoRa; how did you confirm this?
for finetuning with Lora i used this code
[https://github.com/tloen/alpaca-lora/tree/main](https://github.com/tloen/alpaca-lora/tree/main)
and additionally tried
[https://github.com/facebookresearch/llama-recipes/tree/main](https://github.com/facebookresearch/llama-recipes/tree/main) | 2023-08-09T20:37:53 | https://www.reddit.com/r/LocalLLaMA/comments/15mqrv6/assessing_learning_and_progress_in_training_with/ | GooD404 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mqrv6 | false | null | t3_15mqrv6 | /r/LocalLLaMA/comments/15mqrv6/assessing_learning_and_progress_in_training_with/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'dSnK02WXdmgmsoIp5lR1xLca8kIYz6n7guVtLbmPaO0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=108&crop=smart&auto=webp&s=2e732da77d05b2417646488bd3c5c0d657e11ef7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=216&crop=smart&auto=webp&s=f31232d3a8d1811a9add298710af45b2844b18c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=320&crop=smart&auto=webp&s=14fc21efeb2432bdef3aaeaa3511b141aa99d37f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=640&crop=smart&auto=webp&s=3ad2f264fcb0ab0112ca93fd2be1f92a008cbb00', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=960&crop=smart&auto=webp&s=0b9828f01ea1d42e6976e1eb7afd1ba36c8374dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?width=1080&crop=smart&auto=webp&s=ece1eeddb60c7186862bbbb74ccc40629aeb728b', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8n_7DTxnb5b6fTZT9_pZP9m3IYc-iFD0pokseeNBltY.jpg?auto=webp&s=ca0e7aa949fd38a947499cbaaf4629bfd026cbaf', 'width': 1200}, 'variants': {}}]} |
What are the text chunking/splitting and embedding best practices for RAG applications? | 18 | I'm trying to make an LLM powered RAG application without LangChain that can answer questions about a document (pdf) and I want to know some of the strategies and libraries that you guys have used to transform your text for text embedding. I would also like to know which embedding model you used and how you dealt with the sequence length.
My documents will be long textbooks and I'm currently using the MTEB text embedders from Hugging Face which all have sequence lengths of 512.
| 2023-08-09T20:10:05 | https://www.reddit.com/r/LocalLLaMA/comments/15mq1ri/what_are_the_text_chunkingsplitting_and_embedding/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mq1ri | false | null | t3_15mq1ri | /r/LocalLLaMA/comments/15mq1ri/what_are_the_text_chunkingsplitting_and_embedding/ | false | false | self | 18 | null |
Do LLMs take embeddings directly or is it converted to text for RAG? | 2 | Pretty much the title.
​
I want to know how LLMs answer questions based off embeddings retrieved from a vector database. To my understanding, text embeddings are just a list of numbers that represent the text's semantic meaning. How does an LLM understand these numbers if the embedding model is not the same as the LLM's? Do I have to convert the text embeddings back into the input text? | 2023-08-09T19:53:21 | https://www.reddit.com/r/LocalLLaMA/comments/15mpl6p/do_llms_take_embeddings_directly_or_is_it/ | malicious510 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mpl6p | false | null | t3_15mpl6p | /r/LocalLLaMA/comments/15mpl6p/do_llms_take_embeddings_directly_or_is_it/ | false | false | self | 2 | null |
Best model for text generation in low-end computers? | 10 | I need to implement text generation into a program that is already a bit VRAM heavy, in order to be usable for the maximum number of users, what would be some good performing small models that don't need much VRAM (maybe 2-4 GB) and respond decently fast? | 2023-08-09T19:06:07 | https://www.reddit.com/r/LocalLLaMA/comments/15moc8z/best_model_for_text_generation_in_lowend_computers/ | Valevergus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15moc8z | false | null | t3_15moc8z | /r/LocalLLaMA/comments/15moc8z/best_model_for_text_generation_in_lowend_computers/ | false | false | self | 10 | null |
Sharding LLMs | 3 | I have been wondering about this for a few days now and haven't been able to find much information on the how-to. I'm trying to fine-tune the Aguila HuggingFace model on the free version of Google Colab, but I haven't been able to locate a sharded version anywhere. I thought I might need to do it myself, but at the moment, I still haven't been able to figure it out.
I'd really appreciate it if someone could provide some guidance or point me in the right direction. Thanks so much in advance for any help! | 2023-08-09T18:52:05 | https://www.reddit.com/r/LocalLLaMA/comments/15mnysc/sharding_llms/ | Jaded-Armadillo8348 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mnysc | false | null | t3_15mnysc | /r/LocalLLaMA/comments/15mnysc/sharding_llms/ | false | false | self | 3 | null |
Train with custom knowledge | 3 | Hi, I have have ~100 XML files in a simple schema, these files contain logic and rules. At the moment these files are created manually. Is it possible to take for example the llama2 model and train it with this additional data, so that it could generate or modify these XML files by instructions? | 2023-08-09T18:08:02 | https://www.reddit.com/r/LocalLLaMA/comments/15mmsj6/train_with_custom_knowledge/ | FroyoAbject | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mmsj6 | false | null | t3_15mmsj6 | /r/LocalLLaMA/comments/15mmsj6/train_with_custom_knowledge/ | false | false | self | 3 | null |
[Project] Making AMD GPUs Competitive for LLM inference | 85 | ML compilation (MLC) techniques makes it possible to run LLM inference performantly. An AMD 7900xtx at $1k could deliver 80-85% performance of RTX 4090 at $1.6k, and 94% of RTX 3900Ti previously at $2k.
Most of the performant inference solutions are based on CUDA and optimized for NVIDIA GPUs nowadays. In the meantime, with the high demand for compute availability, it is useful to bring support to a broader class of hardware accelerators. AMD is a potential candidate.
MLC LLM makes it possible to compile LLMs and deploy them on AMD GPUs using its ROCm backend, getting competitive performance. More specifically, AMD RX 7900 XTX ($1k) gives 80% of the speed of NVIDIA RTX 4090 ($1.6k), and 94% of the speed of NVIDIA RTX 3090Ti (previously $2k).
Besides ROCm, our Vulkan support allows us to generalize LLM deployment to other AMD devices, for example, a SteamDeck with an AMD APU.
- Blogpost describing the techniques: https://blog.mlc.ai/2023/08/09/Making-AMD-GPUs-competitive-for-LLM-inference
- Github: https://github.com/mlc-ai/mlc-llm/ | 2023-08-09T18:01:21 | https://www.reddit.com/r/LocalLLaMA/comments/15mmlte/project_making_amd_gpus_competitive_for_llm/ | yzgysjr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mmlte | false | null | t3_15mmlte | /r/LocalLLaMA/comments/15mmlte/project_making_amd_gpus_competitive_for_llm/ | false | false | self | 85 | null |
Documents on interfacing with local models | 0 | No i don’t mean existing UIs i mean creating a UI and interfacing with the local model. I’ve searched high and low and can’t find any existing documentation. I’ve spent the last few days going over multiple websites, looking at how to articles, and I’ve yet to find “here’s the code”. Hell i still can’t figure out if there’s a separation between the python im seeing on say LLAMA 2s git and the model itself. Is there some code framework that interacts with the model(data) is it one large bundle…when i look at hugging face models and the files there all i see is a config.json that says something like:
{ “mofel_type” : “llama” }
Zero code and just a bunch of .bin files.
For example it would be nice to not run a local model on my 4090 and instead run it on my 3090 except finding the documentation to perform such a feet is elusive.
If someone could point me in the right direction that would be appreciated | 2023-08-09T17:49:34 | https://www.reddit.com/r/LocalLLaMA/comments/15mma90/documents_on_interfacing_with_local_models/ | tickleMyBigPoop | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mma90 | false | null | t3_15mma90 | /r/LocalLLaMA/comments/15mma90/documents_on_interfacing_with_local_models/ | false | false | self | 0 | null |
Why is this so strange? Can you change the system prompt and get it to be more uncensored? (Using Llama-2-13b) | 1 | 2023-08-09T17:40:32 | BetterProphet5585 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15mm1fg | false | null | t3_15mm1fg | /r/LocalLLaMA/comments/15mm1fg/why_is_this_so_strange_can_you_change_the_system/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'G1pcZuWRUSCn62iMjroPQ1ZFPyMxDURHxagLUSDhZaA', 'resolutions': [{'height': 102, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=108&crop=smart&auto=webp&s=5b9855142578d231ece29f708c04c1f9aeb66d72', 'width': 108}, {'height': 205, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=216&crop=smart&auto=webp&s=9485efae475d4e78bdeb8307bcb9a75a04cf036d', 'width': 216}, {'height': 304, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=320&crop=smart&auto=webp&s=4fdd251ff34450c966bc109cb6e6962c3b5f221e', 'width': 320}, {'height': 609, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?width=640&crop=smart&auto=webp&s=6be6f23ae5c427318293f7932d4057c24a8c3164', 'width': 640}], 'source': {'height': 695, 'url': 'https://preview.redd.it/lam2k65pf4hb1.png?auto=webp&s=a2cb342c1a316e57c65dab972a39f6c5a82fb7d9', 'width': 730}, 'variants': {}}]} | |||
👨💻 An awesome and curated list of best code-LLM for research. | 20 | [https://github.com/huybery/Awesome-Code-LLM](https://github.com/huybery/Awesome-Code-LLM)
Letting LLMs help humans write code (named Code-LLMs) would be the best way to free up productivity, and we're collecting the research progress on this repo. If this resonates with you, please 🌟 star the repo on GitHub, contribute your pull request. 😊
Let's make it more comprehensive together. Feel free to ask questions or share your thoughts in the comments below. | 2023-08-09T17:30:50 | https://www.reddit.com/r/LocalLLaMA/comments/15mlrt0/an_awesome_and_curated_list_of_best_codellm_for/ | huybery | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mlrt0 | false | null | t3_15mlrt0 | /r/LocalLLaMA/comments/15mlrt0/an_awesome_and_curated_list_of_best_codellm_for/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'BsZMx6KFo0Ls1MX1YXO9BRz_V7c3QO-lr0wVkKQrzIc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=108&crop=smart&auto=webp&s=9209768234ed8ca24eb2f935493c9d2de0bdd74f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=216&crop=smart&auto=webp&s=8c6e330b6418073e87038df42003a799562f58de', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=320&crop=smart&auto=webp&s=b3eabd94378105fd2f0fff6586d470e7a5243f0b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=640&crop=smart&auto=webp&s=be1e4a53d7528b15dcdd412f323af9c6ea4f0e41', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=960&crop=smart&auto=webp&s=64389939d0f269f631cd949eb3985ab491ada231', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?width=1080&crop=smart&auto=webp&s=ff1e0a5b6542e41a104689ffd2ddd2803fa6e644', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/8AolF4KdkARFOYE2UPPcRcQlEj8KETjBymwVzK8yqh8.jpg?auto=webp&s=e2ec13f60ab6f50ce59cd3351377d45968126894', 'width': 1200}, 'variants': {}}]} |
Newbie needing some clarification in regards to koboldcpp, please and thank you. | 6 | Other than the selection of the model using the -model flag,
1. What options should I be using?
2. Is it always beneficial to put the maximum amount of layers on the GPU to use all available VRAM?
3. Do I need to specify how many threads the application should use? And do I always want to use all available threads?
4. Can I make the system try to generate images locally first, and then fall back to the stable horde if the generation fails due to lack of vram?
5. Is there a way to have the system generate me a 5000 word (or any number really) short story without my intervention?
Thanks so much for reading. What an incredible time to be alive. I can't believe how good these self hostable tools are becoming. | 2023-08-09T17:09:48 | https://www.reddit.com/r/LocalLLaMA/comments/15ml757/newbie_needing_some_clarification_in_regards_to/ | wh33t | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ml757 | false | null | t3_15ml757 | /r/LocalLLaMA/comments/15ml757/newbie_needing_some_clarification_in_regards_to/ | false | false | self | 6 | null |
Can Llama convert English cue words into a Japanese sentences? | 7 | Apologies, but I'm a beginner. I want to convert few English words into Japanese sentences. The sentences would be easy so I don't think Llama would make a mistake there.
Is llama capable enough to do both of these tasks? I would appreciate if you can provide me some links that I can read and implement.
As of now, I feel llama doesn't support Japanese language. 😪 | 2023-08-09T15:52:49 | https://www.reddit.com/r/LocalLLaMA/comments/15mj4tm/can_llama_convert_english_cue_words_into_a/ | JapaniRobot | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mj4tm | false | null | t3_15mj4tm | /r/LocalLLaMA/comments/15mj4tm/can_llama_convert_english_cue_words_into_a/ | false | false | self | 7 | null |
What is the best embedding search? | 12 | I am working to a small personal project.
I plan to vectorise a lot of text using sBERT and then I would like to be able to search through this content by asking "natural" language questions.
I know that I need to tokenize the question and the use the tokens to run a similarity search through the vectors.
I am currently stuck here, I am not sure what is the best available search.
I tried FAISS, but I have to handle the storage of the vectors and load them in the memory to use FAISS.
I heard that there are already complete solutions like DuckDB or Chroma or Qdrant that are handling everything (storage, integrity, indexing, search). Are those better? Or I should use FAISS? | 2023-08-09T15:43:41 | https://www.reddit.com/r/LocalLLaMA/comments/15mivz5/what_is_the_best_embedding_search/ | aiworshipper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mivz5 | false | null | t3_15mivz5 | /r/LocalLLaMA/comments/15mivz5/what_is_the_best_embedding_search/ | false | false | self | 12 | null |
Faster than OpenAI Api? | 3 | I am currently running a process where I have to analyse large chunks of texts by an LLM. I divide the texts into parts that fit the token window, but it still takes about an hour per text to run the prompt on everything and receive a reply for all chunks.
Can Llama be faster? I would probably connect to an Api provider for now, but might install it on my server later if the volume increases. | 2023-08-09T15:13:56 | https://www.reddit.com/r/LocalLLaMA/comments/15mi446/faster_than_openai_api/ | ekevu456 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mi446 | false | null | t3_15mi446 | /r/LocalLLaMA/comments/15mi446/faster_than_openai_api/ | false | false | self | 3 | null |
Has anyone tried deploying GGML models in production? | 2 | Considering they’re not thread-safe, I’d be curious how you could deploy them behind a reliable API on AWS. Maybe use the LocalAI kubernetes config on EKS? | 2023-08-09T15:08:04 | https://www.reddit.com/r/LocalLLaMA/comments/15mhyk0/has_anyone_tried_deploying_ggml_models_in/ | bangarangguy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mhyk0 | false | null | t3_15mhyk0 | /r/LocalLLaMA/comments/15mhyk0/has_anyone_tried_deploying_ggml_models_in/ | false | false | self | 2 | null |
EU Parliament approved the text of the AI Regulation Law (it is not applied yet, but we might be very near) - Which models should I hoard? Which are the best uncensored before the blackout? | 57 | I am overreacting a bit to this, but I still want to be prepared in case models and UIs will not be available in EU or if Open Source projects will have to obscure themselves before everyone understands and apply the transparency EU requires.
**I want to prepare to a possible AI winter in EU.**
---
This might even scare big companies away, but the main problem is that EU will inevitably be stuck in the past.
I think this is catastrophic, as much as regulation might be a good thing for the general public, this will also privatize the technology making it more available with the ones with the resources to actually use models.
Based on what I read, I assume it would also have an impact on hardware market and much more to come in a few months, if this is fully passed.
EU always has been the lead for privacy and I am proud of that, but this... seems rushed to me and I don't think they fully understood our position nor the technology itself to regulate it this way.
They seem to have completely forgotten about the Open Source.
This might also turn out to be good, with more transparency, but also bad with more privatization and basically money bullying open source out the continent. | 2023-08-09T14:46:55 | https://www.reddit.com/r/LocalLLaMA/comments/15mhebl/eu_parliament_approved_the_text_of_the_ai/ | BetterProphet5585 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mhebl | false | null | t3_15mhebl | /r/LocalLLaMA/comments/15mhebl/eu_parliament_approved_the_text_of_the_ai/ | false | false | self | 57 | null |
Introducing the newest WizardLM-70B V1.0 model! | 151 | Introducing the newest **WizardLM-70B V1.0** model !
1. WizardLM-70B V1.0 achieves a substantial and comprehensive improvement on **coding**, **mathematical reasoning** and **open-domain conversation** capacities.
2. This model is license friendly, and follows the same license with Meta Llama-2.
3. Next version is in training and will be public together with our new paper soon.
For more details, please refer to:
Model weight: [https://huggingface.co/WizardLM/WizardLM-70B-V1.0](https://huggingface.co/WizardLM/WizardLM-70B-V1.0)
Demo and Github: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
Twitter: [https://twitter.com/WizardLM\_AI](https://twitter.com/WizardLM_AI)
​
https://preview.redd.it/d61gunflg3hb1.png?width=900&format=png&auto=webp&s=bd3a9a77124d6d7dcbbfcd6ecfd0d1aaa1d4d7ed | 2023-08-09T14:24:55 | https://www.reddit.com/r/LocalLLaMA/comments/15mgthr/introducing_the_newest_wizardlm70b_v10_model/ | cylaw01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mgthr | false | null | t3_15mgthr | /r/LocalLLaMA/comments/15mgthr/introducing_the_newest_wizardlm70b_v10_model/ | false | false | 151 | {'enabled': False, 'images': [{'id': 'lOaLM5PtpNjrwQBaVnzypT1kCPzSVsujOefNAVhO5CQ', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=108&crop=smart&auto=webp&s=9e2bd842483d97a3a5070984e9c32b6df4165eb0', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=216&crop=smart&auto=webp&s=56d61187f3f2ce092927a58b01bb4429bef50baf', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=320&crop=smart&auto=webp&s=f1854e6e0a24b3a5acf5574286ebf8d3483d84a1', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=640&crop=smart&auto=webp&s=94ae94e10c5f717c9dcc2ad7a55728edf14bbe24', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=960&crop=smart&auto=webp&s=9455647e7f989af7f0e2e89be70f44f6a35436c0', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?width=1080&crop=smart&auto=webp&s=54f146ddf6f12904286d4832a7ae47f6c6b7c9eb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/XFh7NFmpmoin81UW6jgdBY3EwhyzfY1hVX4SA-fJ3SM.jpg?auto=webp&s=edaa911b536e91fdb6fe1081df28a049e1f0b658', 'width': 1200}, 'variants': {}}]} | |
EU Parliament approved the text of the AI Regulation Law (it is not applied yet, but we might be very near) - Which models should I hoard? Which are the best unconsored before the blackout? | 1 | [removed] | 2023-08-09T14:15:54 | https://www.reddit.com/r/LocalLLaMA/comments/15mgl52/eu_parliament_approved_the_text_of_the_ai/ | PuzzledAd1197 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mgl52 | false | null | t3_15mgl52 | /r/LocalLLaMA/comments/15mgl52/eu_parliament_approved_the_text_of_the_ai/ | false | false | self | 1 | null |
Lost In Translation: LLMs Unleashed | 0 | You know those digital buddies we’ve come to rely on for a bit of tech magic? Well, they’ve decided to put on their cryptic cloaks and mess with our heads a bit. Buckle up, because we’re about to dive into a world where your AI assistant isn’t just your sidekick – it’s your partner in eerie shenanigans.
This: [https://daystosingularity.com/2023/08/09/lost-in-translation-llms-unleashed/](https://daystosingularity.com/2023/08/09/lost-in-translation-llms-unleashed/) | 2023-08-09T13:57:12 | https://www.reddit.com/r/LocalLLaMA/comments/15mg3g4/lost_in_translation_llms_unleashed/ | Powerful-Pumpkin-938 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mg3g4 | false | null | t3_15mg3g4 | /r/LocalLLaMA/comments/15mg3g4/lost_in_translation_llms_unleashed/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': 'z-4lvjVtpSargyO33nCIO081kU4se1phI4FktUjpbSk', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=108&crop=smart&auto=webp&s=6e5e2d2bb460e4c25f694fb1e5b11b952566ed92', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=216&crop=smart&auto=webp&s=2a120494fc7b3df28d8f4b106e1a4fd09046febb', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=320&crop=smart&auto=webp&s=58c41c956cc8fcec882590fb4bca2b3810fac690', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?width=640&crop=smart&auto=webp&s=f1f6f77e16af8364ca06224759e08ec9425a726b', 'width': 640}], 'source': {'height': 768, 'url': 'https://external-preview.redd.it/drC0jbTnpopuKQDb-MNy1YcYpX7df0thDcmTqlmaM00.jpg?auto=webp&s=23460e8dded89bda515507b0080175c4fb85a4f2', 'width': 768}, 'variants': {}}]} |
Fine-tuning Llama 2 using spot instances across multiple clouds or Lambda Cloud | 2 | 2023-08-09T13:52:12 | https://dstack.ai/examples/finetuning-llama-2/ | cheptsov | dstack.ai | 1970-01-01T00:00:00 | 0 | {} | 15mfysb | false | null | t3_15mfysb | /r/LocalLLaMA/comments/15mfysb/finetuning_llama_2_using_spot_instances_across/ | false | false | 2 | {'enabled': False, 'images': [{'id': '1CJ80XE7TBliPhvCtW1XlPN9rEz_QAFB-YxQSDbhaKw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=108&crop=smart&auto=webp&s=34039547ea3d5e89dd33add5eff01d197aa2f659', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=216&crop=smart&auto=webp&s=789e4eb5bddc9467bb4588fac00ed43933281130', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=320&crop=smart&auto=webp&s=7dfdba67cba947a90854cd2d4d13b48fb266577b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=640&crop=smart&auto=webp&s=bd6e340f3810899f1bbc315f27bb80d000f6085f', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=960&crop=smart&auto=webp&s=ed36155cc3c6873399daf516fd85f84bc83ad524', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?width=1080&crop=smart&auto=webp&s=6132101025b785e35af7caac410a496a225a062e', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/d2ygPVR83FIieJANJ9_95HhWGCsLNy6kaV-7_NMpORA.jpg?auto=webp&s=76e59c56ef72b9e2172ea53448d60950578f86ad', 'width': 1200}, 'variants': {}}]} | ||
PC build for LLMs | 1 | [removed] | 2023-08-09T12:48:36 | https://www.reddit.com/r/LocalLLaMA/comments/15mednm/pc_build_for_llms/ | 04RR | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mednm | false | null | t3_15mednm | /r/LocalLLaMA/comments/15mednm/pc_build_for_llms/ | false | false | self | 1 | null |
Llama2 Embeddings FastAPI Service | 43 |
I just wanted a quick and easy way to easily submit strings to a REST API and get back the embedding vectors in JSON using Llama2 and other similar LLMs, so I put this together over the past couple days. It's very quick and easy to set up and totally self-contained and self-hosted. You can easily add new models to it by simply adding the HuggingFace URL to the GGML format model weights. Two models are included by default. | 2023-08-09T12:37:34 | https://github.com/Dicklesworthstone/llama_embeddings_fastapi_service | dicklesworth | github.com | 1970-01-01T00:00:00 | 0 | {} | 15me4i9 | false | null | t3_15me4i9 | /r/LocalLLaMA/comments/15me4i9/llama2_embeddings_fastapi_service/ | false | false | default | 43 | {'enabled': False, 'images': [{'id': 'LIFpZZ97uHRTB2fR9xb2TOd2YvmaGyOm5rq7V0YRxTo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=108&crop=smart&auto=webp&s=32c0d6a94a728c1dc48428fae9ea4b7325f806d8', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=216&crop=smart&auto=webp&s=0f1c7814beff6ff5a1da5e3557300a7e4a9d6639', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=320&crop=smart&auto=webp&s=3ba4fefcc0d86b8071a125f8e564fc4c636cc359', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=640&crop=smart&auto=webp&s=47d354cd43497ea54a9fc0e649c31d22fc3b9faa', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=960&crop=smart&auto=webp&s=78dea0fe84430517e01f3f0145796832ba47997e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?width=1080&crop=smart&auto=webp&s=cb603e2cfae9c323a5d8fe2078ce086531bd97c2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/weqXXUTkHKcprVLMpFtKxJW6yVKgvU5uxjJ0NyI-me8.jpg?auto=webp&s=28fc3b94ef0a30be422e4fb8480b9f97796e6580', 'width': 1200}, 'variants': {}}]} |
Is there any hope at all for getting a 13B GPTQ model running entirely on an 8 GB GPU? | 22 | With the release of ExLlama and its incredible optimizations, I was hoping that I'd finally be able to load 13B models into my GPU, but unfortunately it's not quite there yet. While it OOMs with regular ExLlama, I can load it with ExLlama\_HF but it still OOMs upon inference.
I know that of course I can offload some layers to the CPU or run GGML, but at that point it's incredibly slow.
That being said, has anyone figured out a way to load a 13B GPTQ model onto a 8 GB card? Maybe some sort of way to run my computer with nothing besides the LLM and an output window so that VRAM doesn't get allocated to anything else? Or perhaps someone knows if further optimizations are being done? | 2023-08-09T12:32:46 | https://www.reddit.com/r/LocalLLaMA/comments/15me0oc/is_there_any_hope_at_all_for_getting_a_13b_gptq/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15me0oc | false | null | t3_15me0oc | /r/LocalLLaMA/comments/15me0oc/is_there_any_hope_at_all_for_getting_a_13b_gptq/ | false | false | self | 22 | null |
This dataset trains openai models but has no effect on llama2. Why? | 1 | [removed] | 2023-08-09T12:16:58 | https://www.reddit.com/r/LocalLLaMA/comments/15mdnq5/this_dataset_trains_openai_models_but_has_no/ | kielerrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mdnq5 | false | null | t3_15mdnq5 | /r/LocalLLaMA/comments/15mdnq5/this_dataset_trains_openai_models_but_has_no/ | false | false | default | 1 | {'enabled': False, 'images': [{'id': 'bWxZxCMhP9jfrcdBLdv9A8KMN-S-eA8s6v7OcxWVjKw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=108&crop=smart&auto=webp&s=4be6acec1540e26a5b8f50c6e781047d4bc6acdf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=216&crop=smart&auto=webp&s=298dafc7c016f29265b51840deceb215f0624ca8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=320&crop=smart&auto=webp&s=6178af642a5d291347dc1ca9d72b5d9aaaa8ebeb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=640&crop=smart&auto=webp&s=dff332329f003fe7c3305760ad47c0d38a5fe9fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=960&crop=smart&auto=webp&s=0474cc429ea74a8bab362757377ccbb9d92aaafc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=1080&crop=smart&auto=webp&s=65715568fe66a5654d000fe7ab8ec1c4d660862e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?auto=webp&s=3d2a6710aee1ad462f994a45fef27d0a391b784a', 'width': 1200}, 'variants': {}}]} |
This | 1 | [removed] | 2023-08-09T12:14:26 | https://www.reddit.com/r/LocalLLaMA/comments/15mdlnk/this/ | kielerrr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mdlnk | false | null | t3_15mdlnk | /r/LocalLLaMA/comments/15mdlnk/this/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'bWxZxCMhP9jfrcdBLdv9A8KMN-S-eA8s6v7OcxWVjKw', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=108&crop=smart&auto=webp&s=4be6acec1540e26a5b8f50c6e781047d4bc6acdf', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=216&crop=smart&auto=webp&s=298dafc7c016f29265b51840deceb215f0624ca8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=320&crop=smart&auto=webp&s=6178af642a5d291347dc1ca9d72b5d9aaaa8ebeb', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=640&crop=smart&auto=webp&s=dff332329f003fe7c3305760ad47c0d38a5fe9fa', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=960&crop=smart&auto=webp&s=0474cc429ea74a8bab362757377ccbb9d92aaafc', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?width=1080&crop=smart&auto=webp&s=65715568fe66a5654d000fe7ab8ec1c4d660862e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/JJXfYOqCYQgSnarrKjpjN3-vINMeVr8v0bKxiSeignE.jpg?auto=webp&s=3d2a6710aee1ad462f994a45fef27d0a391b784a', 'width': 1200}, 'variants': {}}]} |
What should be my expectations from models trained on a PC? | 7 | I'm considering making a serious upgrade to my PC so I could start playing around with AI. But before ponying up thousands of dollars in triple digit 5DDR RAM and thousand dollar GPUs, I want to know what I should expect from "PC AI" as most of the AI I am familiar with are billion dollar models like GPT4, trained by the smartest people in the world on the topic (I am "just" a developer for comparison. no compsci degree).
Could I develop niche-specific chatbots for clients? Could I train the model on West's Respiratory Physiology and tell it to analyze my CPAP data? What should I expect? | 2023-08-09T11:48:29 | https://www.reddit.com/r/LocalLLaMA/comments/15md13z/what_should_be_my_expectations_from_models/ | BigBootyBear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15md13z | false | null | t3_15md13z | /r/LocalLLaMA/comments/15md13z/what_should_be_my_expectations_from_models/ | false | false | self | 7 | null |
My meta-llama/Llama-2-7b-hf fine-tuned model does not learn to use the additional special tokens | 21 | I am trying to fine-tune the meta-llama/Llama-2-7b-hf model on a recipe dataset using QLoRA and SFTTrainer. My dataset contains special tokens (such as <RECIPE\_TITLE>, <END\_TITLE>, , <END\_STEPS>, etc.) which helps with structuring the recipes. During fine-tuning I have added these additional tokens to the tokenizer:
>special\_tokens\_dict = {‘additional\_special\_tokens’: \[“<RECIPE\_TITLE>”, “<END\_TITLE>”, “”, “<END\_INGREDIENTS>”, “”, “<END\_STEPS>”\], ‘pad\_token’: “”}
tokenizer.add\_special\_tokens(special\_tokens\_dict)
I also resized the token embeddings for the model so that it matches the length of the tokenizer. However, the fine-tuned model predicts all these newly added tokens in the right places (the generated recipe is well-structured), but it predicts these tokens through a combination of token ids, not utilizing the additional token ids.
From what I found in other posts, LoRA does not automatically update the embedding matrix, so i made sure to specify this in the lora config:
>peft\_config = LoraConfig(
lora\_alpha=lora\_alpha,
lora\_dropout=lora\_dropout,
r=lora\_r,
bias=“none”,
task\_type=“CAUSAL\_LM”,
target\_modules=\[“q\_proj”, “v\_proj”, “k\_proj”\],
modules\_to\_save=\[“embed\_tokens”, “lm\_head”\],
)
What is the reason behind the model not being able to learn the embeddings of the newly added tokens? | 2023-08-09T11:20:29 | https://www.reddit.com/r/LocalLLaMA/comments/15mcgjv/my_metallamallama27bhf_finetuned_model_does_not/ | rares13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mcgjv | false | null | t3_15mcgjv | /r/LocalLLaMA/comments/15mcgjv/my_metallamallama27bhf_finetuned_model_does_not/ | false | false | self | 21 | null |
Fine-tune Llama 2 with DPO, has anyone tried? | 6 | 2023-08-09T11:15:57 | https://huggingface.co/blog/dpo-trl | Nondzu | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 15mcd1b | false | null | t3_15mcd1b | /r/LocalLLaMA/comments/15mcd1b/finetune_llama_2_with_dpo_has_anyone_tried/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'AmQgpyNhIlUEj9zr9gG-RgzMA6CHeTDaZ7Q4LqqedT8', 'resolutions': [{'height': 52, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=108&crop=smart&auto=webp&s=74b9d4f0311999391894918231269f3bcd31e1d3', 'width': 108}, {'height': 105, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=216&crop=smart&auto=webp&s=8147b6d2b8c6c97c952ae07f1ee06d61f65fc7f8', 'width': 216}, {'height': 155, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=320&crop=smart&auto=webp&s=851cc0f3c4f578a71abcf252ec0e49191e842a4a', 'width': 320}, {'height': 311, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=640&crop=smart&auto=webp&s=0b81473a67554129a8d51aaca7996b487d428f88', 'width': 640}, {'height': 467, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=960&crop=smart&auto=webp&s=5c1230adc455c2abf5b857e66e7a903395dd7543', 'width': 960}, {'height': 526, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?width=1080&crop=smart&auto=webp&s=d236e1aed2e2b05e1ca72080d0cfdbbfb5e8b09f', 'width': 1080}], 'source': {'height': 579, 'url': 'https://external-preview.redd.it/gVC-CxRm2EPj2om6Rni76VF_RDOD7rroGVi0OohYP3k.jpg?auto=webp&s=e951b9097b184faf1f05e6c9334226b25d2e8b8e', 'width': 1188}, 'variants': {}}]} | ||
Prompt Processing Times? (GGML CPU-only) | 32 | Tested with the following:
Prompt: 478 tokens
BLAS: 512
System: i3 9th Gen 4 cores/4 threads with 16GB DDR4 2400 RAM
-----------
LLaMA 7B Q2
Processing: 82ms/token
Generation: 206ms/token
-----------
LLaMA 7B Q4
Processing: 81ms/token
Generation: 258ms/token
-----------
LLaMA 13B Q2
Processing: 146ms/token
Generation: 380 ms/token
The generation times make sense, they're increasing with the quant as well as the model size but is the prompt processing time independent of the quantization? It seems to increase only with model size and not quant. | 2023-08-09T10:08:28 | https://www.reddit.com/r/LocalLLaMA/comments/15mb1f0/prompt_processing_times_ggml_cpuonly/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15mb1f0 | false | null | t3_15mb1f0 | /r/LocalLLaMA/comments/15mb1f0/prompt_processing_times_ggml_cpuonly/ | false | false | self | 32 | null |
Lamma Context length is it max(4096) or can it be increased?? | 0 | I am running the model through replicate and i am getting error while i am testing on large input. is 4096 maximum that llama model can support or can i increase that ... if i try to pass in chunks will it give me same results because i am working on identifying the tone.
\--------------------------------------------------------------------------- ModelError
Traceback (most recent call last) Cell In\[58\], line 1
\----> 1 imposter\_scam\_resp = generate\_llama2\_response\_fraud(fraud\_detection\_prompt, imposter\_scam) **2** print(imposter\_scam\_resp)
Cell In\[57\], line 30, in generate\_llama2\_response\_fraud(prompt\_template, prompt\_input) **24** output = replicate.run( **25** \# 'a16z-infra/llama13b-v2-chat:df7690f1994d94e96ad9d568eac121aecf50684a0b0963b25a41cc40061269e5', # LLM model **26** "replicate/llama-2-70b-chat:2c1608e18606fad2812020dc541930f2d0495ce32eee50074220b87300bc16e1", **27** input={"prompt": f"{full\_prompt} {prompt\_input} Assistant: ", # prompt **28** "temperature":0.1, "top\_p":0.9, "max\_length":512, "repetition\_penalty":1}) # model parameters **29** full\_response = '' ---> 30 for item in output: **31** full\_response += item **33** return full\_response File [\~/my\_python\_venvs/gpt\_env/lib/python3.10/site-packages/replicate/prediction.py:79](https://file+.vscode-resource.vscode-cdn.net/c%3A/Users/Charan%20Teju/Documents/langchain/audio-processing/notebooks/~/my_python_venvs/gpt_env/lib/python3.10/site-packages/replicate/prediction.py:79), in Prediction.output\_iterator(self) **76** self.reload() **78** if self.status == "failed": ---> 79 raise ModelError(self.error) **81** output = self.output or \[\] **82** new\_output = output\[len(previous\_output) :\]
ModelError: start (0) + length (4097) exceeds dimension size (4096).
\-- | 2023-08-09T09:12:18 | https://www.reddit.com/r/LocalLLaMA/comments/15m9zyo/lamma_context_length_is_it_max4096_or_can_it_be/ | Dry_Sink_597 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15m9zyo | false | null | t3_15m9zyo | /r/LocalLLaMA/comments/15m9zyo/lamma_context_length_is_it_max4096_or_can_it_be/ | false | false | self | 0 | null |
Difference between meta-llama-2-7b and meta-llama-2-7b-hf | 1 | Was browsing through hugginfFace Llama's model card and came across the hf variant for all models. It says that it's pretrained on HuggingFace transformer. What exactly does it mean? | 2023-08-09T06:17:16 | https://www.reddit.com/r/LocalLLaMA/comments/15m6ulr/difference_between_metallama27b_and_metallama27bhf/ | IamFuckinTomato | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15m6ulr | false | null | t3_15m6ulr | /r/LocalLLaMA/comments/15m6ulr/difference_between_metallama27b_and_metallama27bhf/ | false | false | self | 1 | null |
New version of Turbopilot released! | 38 | New: Refactored + Simplified: The source code has been improved to make it easier to extend and add new models to Turbopilot. The system now supports multiple flavours of model
New: Wizardcoder, Starcoder, Santacoder support - Turbopilot now supports state of the art local code completion models which provide more programming languages and "fill in the middle" support. | 2023-08-09T05:57:33 | https://github.com/ravenscroftj/turbopilot | Acrobatic-Site2065 | github.com | 1970-01-01T00:00:00 | 0 | {} | 15m6hn1 | false | null | t3_15m6hn1 | /r/LocalLLaMA/comments/15m6hn1/new_version_of_turbopilot_released/ | false | false | 38 | {'enabled': False, 'images': [{'id': 'VxeQvnvYGXg_A2HNdzuNBEzctgC1QlNp0_NATAQ2rS4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=108&crop=smart&auto=webp&s=edee7eb2947e6e1dfe12ddd6ace311d562877a8b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=216&crop=smart&auto=webp&s=aee9e347605a56bfb8f998211de468f28f7802a2', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=320&crop=smart&auto=webp&s=0f3324fb3b14cb5dabb77c1de38170d01f9510fa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=640&crop=smart&auto=webp&s=862bdc051b819d9ec61a1bc6fee89c070aa407bc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=960&crop=smart&auto=webp&s=9e9f94171b34d307414f689f2b5a045fdbbdbc65', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?width=1080&crop=smart&auto=webp&s=623db90dc570d3df8c0c03e2e84c9f8ab24ed14c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/I4qLsdbmDLeJLGhMJZD0EEYchn9LINPxBGd7FV3YamM.jpg?auto=webp&s=0cdee7bfd253b2b2a6f960371134f1f28bc592bb', 'width': 1200}, 'variants': {}}]} | |
GPT4All - Can LocalDocs plugin read HTML files? | 4 | Used Wget to mass download a wiki. Looking to train a model on the wiki. | 2023-08-09T05:24:12 | https://www.reddit.com/r/LocalLLaMA/comments/15m5v6s/gpt4all_can_localdocs_plugin_read_html_files/ | Rzablio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15m5v6s | false | null | t3_15m5v6s | /r/LocalLLaMA/comments/15m5v6s/gpt4all_can_localdocs_plugin_read_html_files/ | false | false | self | 4 | null |
Tried running llama-70b on 126GB of memory; memory overflow. How much memory is necessary ? | 95 | 2023-08-08T23:15:11 | MoiSanh | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15lxscb | false | null | t3_15lxscb | /r/LocalLLaMA/comments/15lxscb/tried_running_llama70b_on_126gb_of_memory_memory/ | false | false | 95 | {'enabled': True, 'images': [{'id': 'xStm3HNsKGKSAHNnfzUUE0VN7YieO75cGhirAOOMSxo', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=108&crop=smart&auto=webp&s=510f4e7b7f7dfb795418e57cc2e407d279acca71', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=216&crop=smart&auto=webp&s=eff3e39029164836c0c0d65a5d70a9719aa0e908', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=320&crop=smart&auto=webp&s=1c637b3c527600d93f59fbf61c665d163e3758a9', 'width': 320}, {'height': 281, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=640&crop=smart&auto=webp&s=6a2c65cfef2d3275916b73080ecb2a0561197980', 'width': 640}, {'height': 422, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=960&crop=smart&auto=webp&s=5b5dd96b0950e002de94660462d243b2108ee966', 'width': 960}, {'height': 474, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?width=1080&crop=smart&auto=webp&s=bacb8396339eb10e3c624a23960ce62d3a73fb3f', 'width': 1080}], 'source': {'height': 1570, 'url': 'https://preview.redd.it/zjpv9w8dyygb1.png?auto=webp&s=f0295991e586c1122aac054a851866eff415a78a', 'width': 3570}, 'variants': {}}]} | |||
Hi sub, i want to build my 2023 PC and i'm thinking in GPU for IA/LLAMA2... | 1 | [removed] | 2023-08-08T22:58:50 | https://www.reddit.com/r/LocalLLaMA/comments/15lxddr/hi_sub_i_want_to_build_my_2023_pc_and_im_thinking/ | Icy_Sun_4958 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lxddr | false | null | t3_15lxddr | /r/LocalLLaMA/comments/15lxddr/hi_sub_i_want_to_build_my_2023_pc_and_im_thinking/ | false | false | self | 1 | null |
For those running backends on W10/11 | 16 | So, I noticed that my W11 throttled token generation while terminal was running in the background.
I'm using kobold.cpp as my backend and noticed 40-50% drop in token generation, every time my browser was active instead of the terminal.
Apparently changing system power options to favor high performance solved it, at least for me.
​
Just FYI.
​
​
P.S. I might need to change for Ubuntu or something... | 2023-08-08T22:58:27 | https://www.reddit.com/r/LocalLLaMA/comments/15lxd09/for_those_running_backends_on_w1011/ | nollataulu | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lxd09 | false | null | t3_15lxd09 | /r/LocalLLaMA/comments/15lxd09/for_those_running_backends_on_w1011/ | false | false | self | 16 | null |
Setting Up a Lightweight Local LLM, Fine-tuning, and Creating an API for Queries | 3 | Hey fellow Redditors,
I've been fascinated by large language models (LLMs) and their potential applications. I'm eager to dive into the world of LLMs by setting up a lightweight local version, fine-tuning it with my own data, and eventually creating an API to query it online. 🚀
However, I'm a bit unsure about the exact steps and tools involved in this process. Here's what I have in mind:
Setting Up a Lightweight Local LLM:
I'm looking for recommendations on lightweight versions of LLMs that I can run on my local machine. Something that strikes a balance between resource consumption and performance would be ideal.
Fine-Tuning with Custom Data:
I want to fine-tune the model using my own data to make it more relevant to my specific needs. I've heard about techniques like transfer learning and domain adaptation. Any insights into the tools, datasets, and steps involved in this process would be greatly appreciated.
Creating an API for Queries:
Once my model is ready, I'd like to create an API that allows me to send queries to the model and receive text generation as output. What are some recommended frameworks or libraries for building such an API? How can I ensure security and efficiency while serving these requests?
If you've had experience with any of these steps or can point me in the right direction, I'd be incredibly grateful. Whether it's tutorials, articles, tools, or personal tips, your input will be a valuable resource for me and others who are embarking on a similar journey.
Let's discuss and share our insights on how to set up and utilize LLMs to their fullest potential. Thanks in advance for your help and expertise! 🙌📚 | 2023-08-08T22:44:03 | https://www.reddit.com/r/LocalLLaMA/comments/15lwzz5/setting_up_a_lightweight_local_llm_finetuning_and/ | aiCornStar | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lwzz5 | false | null | t3_15lwzz5 | /r/LocalLLaMA/comments/15lwzz5/setting_up_a_lightweight_local_llm_finetuning_and/ | false | false | self | 3 | null |
New SillyTavern Release - with proxy replacement! | 154 | There's a new major version of **[SillyTavern](https://github.com/SillyTavern/SillyTavern)**, my favorite LLM frontend, perfect for chat and roleplay!
The new feature I'm most excited about:
> **Added settings and instruct presets to imitate simple-proxy for local models**
Finally a replacement for the *simple-proxy-for-tavern*!
The proxy was a useful third-party app that did some prompt manipulation behind the scenes, leading to better output than without it. However, it hasn't been updated in months and isn't compatible with many of SillyTavern's later features like group chats, objectives, summarization, etc.
Now there's finally a built-in alternative: The Instruct Mode preset named "**Roleplay**" basically does the same the proxy did to produce better output. It works with any model, doesn't have to be an instruct model, any chat model works just as well.
So I've stopped using the proxy and am not missing it thanks to this preset. And it's nice being able to make adjustments directly within SillyTavern, not having to edit proxy JavaScript files anymore. | 2023-08-08T22:36:26 | https://www.reddit.com/r/LocalLLaMA/comments/15lwtai/new_sillytavern_release_with_proxy_replacement/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lwtai | false | null | t3_15lwtai | /r/LocalLLaMA/comments/15lwtai/new_sillytavern_release_with_proxy_replacement/ | false | false | self | 154 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]} |
Confused about the "custom URL provided will remain valid for model downloads for 24 hours to download each model up to 5 times" | 4 | Hello y'all, I just saw now the email from Meta I received 6 days ago granting me access to download their LLaMa v2 models. Though apparently, in 24 hours, I can download a model for only five times. I am just a little confused.
Does it mean like I can only download five times and have to fill out the form again?
I use Google Colab's free tier for doing all my LLM playing, where I have to keep downloading the models from Huggingface directly. So I am just worried if the "custom URL provided will remain valid for model downloads for 24 hours to download each model up to 5 times" is going to be an issue for me.
PS: This old PC will definitely not be able to run the model, which is why I use Google Colab | 2023-08-08T22:32:04 | https://www.reddit.com/r/LocalLLaMA/comments/15lwpej/confused_about_the_custom_url_provided_will/ | ImNotLegitLol | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lwpej | false | null | t3_15lwpej | /r/LocalLLaMA/comments/15lwpej/confused_about_the_custom_url_provided_will/ | false | false | self | 4 | null |
What is the best coding model to use directly with VS Code? | 12 | Is Wizard the best option to compete with CoPilot? | 2023-08-08T22:04:03 | https://www.reddit.com/r/LocalLLaMA/comments/15lvzfy/what_is_the_best_coding_model_to_use_directly/ | SillyLilBear | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lvzfy | false | null | t3_15lvzfy | /r/LocalLLaMA/comments/15lvzfy/what_is_the_best_coding_model_to_use_directly/ | false | false | self | 12 | null |
Have you all been guilty of being somewhat religious about specific models in the mist of all the local models coming up left and right ? [model fatigue] | 6 | Like for me, and I am probably not being objective here but the Nous-Hermes variant regardless of Llama1 or Llama2 is just...really a step above the rest.
I think it is just due to me experiencing model fatigue.
Do you all have a specific model you all are "religious" about ? | 2023-08-08T21:52:43 | https://www.reddit.com/r/LocalLLaMA/comments/15lvod0/have_you_all_been_guilty_of_being_somewhat/ | Vitamin_C_is_awesome | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lvod0 | false | null | t3_15lvod0 | /r/LocalLLaMA/comments/15lvod0/have_you_all_been_guilty_of_being_somewhat/ | false | false | self | 6 | null |
Local API and Apple shortcuts | 4 | One of the extensions in Generation Web UI allows for an api. Is it possible to run it locally so I can ask questions through apple shortcuts
https://preview.redd.it/cibdq93mjygb1.png?width=672&format=png&auto=webp&s=b1427cce433f8ea61495b3b9a78a19253a73d4e4 | 2023-08-08T21:51:57 | https://www.reddit.com/r/LocalLLaMA/comments/15lvnla/local_api_and_apple_shortcuts/ | LegendarySpy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lvnla | false | null | t3_15lvnla | /r/LocalLLaMA/comments/15lvnla/local_api_and_apple_shortcuts/ | false | false | 4 | null | |
Exploring Local Multi-GPU Setup for AI: Harnessing AMD Radeon RX 580 8GB for Efficient AI Model | 17 | I'm a newcomer to the realm of AI for personal utilization. I happen to possess several AMD Radeon RX 580 8GB GPUs that are currently idle. Contemplating the idea of assembling a dedicated Linux-based system for LLMA localy, I'm curious whether it's feasible to locally deploy LLAMA with the support of multiple GPUs? If yes how and any tips | 2023-08-08T20:43:21 | https://www.reddit.com/r/LocalLLaMA/comments/15ltsj1/exploring_local_multigpu_setup_for_ai_harnessing/ | OfficialRakma | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ltsj1 | false | null | t3_15ltsj1 | /r/LocalLLaMA/comments/15ltsj1/exploring_local_multigpu_setup_for_ai_harnessing/ | false | false | self | 17 | null |
A beginner seeking for help | 1 | Hello everyone. I've been closely following all the work you're doing, and I'd like to start as well, setting up my own model, being able to fine-tune it, and maybe even gaining solid skills in the field. I have a good foundation in Python, and I'm persistent when I want to understand something, but right now I'm a bit lost. Despite [the guide](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/) (which has already helped me quite a bit, I must admit), I'm still struggling, and it's frustrating. I've set up the webUI, but I haven't managed to get a single model working yet.
Please, do you know of an affordable tutorial for beginners that explains the things to know, and guides the way to quickly become self-sufficient on the subject? I'm the kind of person who learns by watching others do. I'm sure many members of the community have gone through the same stage as me and have tips to share.
What life has taught me is that things always seem terribly complicated when they push us out of our comfort zone.
Thank you in advance for your help. | 2023-08-08T20:36:51 | https://www.reddit.com/r/LocalLLaMA/comments/15ltm2z/a_beginner_seeking_for_help/ | Orfvr | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ltm2z | false | null | t3_15ltm2z | /r/LocalLLaMA/comments/15ltm2z/a_beginner_seeking_for_help/ | false | false | self | 1 | null |
Python LLaMa tokenizer with zero dependencies? | 1 | [removed] | 2023-08-08T19:10:24 | https://www.reddit.com/r/LocalLLaMA/comments/15lrarw/python_llama_tokenizer_with_zero_dependencies/ | GusPuffy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lrarw | false | null | t3_15lrarw | /r/LocalLLaMA/comments/15lrarw/python_llama_tokenizer_with_zero_dependencies/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'f85dH88dhYQPPafVGTNRvAc-A7RF-lmAfbRxl3qh294', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=108&crop=smart&auto=webp&s=e615a9bf00336f0a058df322754dac21e466b4e2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=216&crop=smart&auto=webp&s=137737b46294de2efbca1134201e30fe0fa2912a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=320&crop=smart&auto=webp&s=0a5e6d19ba3e5ff052dfe8b7bc5453a670f834c6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=640&crop=smart&auto=webp&s=633cc5080ba4e33decd0dabc3e53404300dfb460', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=960&crop=smart&auto=webp&s=bd2889648b58201d9d52d43a71b5748600be2bad', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?width=1080&crop=smart&auto=webp&s=64b5bce29c8673becfa804436c8ea54feb673c1c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/_yv1UpzqhYc7LSZY2PHM5KrMvXqJO0phh6CWPBW0TFc.jpg?auto=webp&s=a75a315351aa5de5fdf9a182cfe2a1a4323a8b21', 'width': 1200}, 'variants': {}}]} |
Structured documentation for fine tuning | 2 | Hey guys !
I was wondering just like langchain has a good organised documentation on several ways to deal with LLMs , is there any similar documentation on how to fine tune and infer open-source LLMs , concepts like quantization , lora , qlora etc | 2023-08-08T19:01:48 | https://www.reddit.com/r/LocalLLaMA/comments/15lr2b0/structured_documentation_for_fine_tuning/ | Spiritual-Rub925 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lr2b0 | false | null | t3_15lr2b0 | /r/LocalLLaMA/comments/15lr2b0/structured_documentation_for_fine_tuning/ | false | false | self | 2 | null |
Can CPU llama.cpp get close to over 1t/s? | 7 | What might an Epyc rome or Milan 16core do with 200GB/s Memory bandwidth and lets say 256GB memory do token/sec?
Is Epyc Genoa with 12 channel ddr5 with 460GB/s needed for just 1 token/s? | 2023-08-08T18:55:30 | https://www.reddit.com/r/LocalLLaMA/comments/15lqw8n/can_cpu_llamacpp_get_close_to_over_1ts/ | HilLiedTroopsDied | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lqw8n | false | null | t3_15lqw8n | /r/LocalLLaMA/comments/15lqw8n/can_cpu_llamacpp_get_close_to_over_1ts/ | false | false | self | 7 | null |
has anyone tried Qwen-7B-Chat? | 9 | Came across this today and the benchmarks are really surprising-- [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat)
Especially the [tool usage](https://huggingface.co/Qwen/Qwen-7B-Chat#%E5%B7%A5%E5%85%B7%E4%BD%BF%E7%94%A8%E8%83%BD%E5%8A%9B%E7%9A%84%E8%AF%84%E6%B5%8B%EF%BC%88tool-usage%EF%BC%89) benchmark which is comparable to gpt-3.5-turbo. | 2023-08-08T18:20:20 | https://www.reddit.com/r/LocalLLaMA/comments/15lpyto/has_anyone_tried_qwen7bchat/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lpyto | false | null | t3_15lpyto | /r/LocalLLaMA/comments/15lpyto/has_anyone_tried_qwen7bchat/ | false | false | self | 9 | {'enabled': False, 'images': [{'id': 'Xj-CDOmBnVRen-RT9Mv5LhiNSsTyclH3JlnXqsFNCMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=108&crop=smart&auto=webp&s=eccdbe9fd0b48f42a9ba66f6736fb7c7097d957d', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=216&crop=smart&auto=webp&s=72e7ef41d59ec09360dfed779acc64264f01b288', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=320&crop=smart&auto=webp&s=72c8bc711cd3de73277b55280c514bc2b6a7d840', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=640&crop=smart&auto=webp&s=5603bea044b05b3c3595e10523cfd5ff60515669', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=960&crop=smart&auto=webp&s=da0ff9a21d3f95894e40628bd6ea5fa9505781d1', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?width=1080&crop=smart&auto=webp&s=cd723b065d7a2c441e8dcbf1678a9715dfcf4fd7', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/YH6Y7A8j_IK2zXrAcLgbhR5Gr1bxnbtlwSKuT1qOKHI.jpg?auto=webp&s=2fc622f56e6df2023b88657888a0026ad59cde98', 'width': 1200}, 'variants': {}}]} |
noob starting with llama2 | 0 | trying to use llama2 to create an offline chatbot (using m1 mac)read the getting started page and guide found that i can use llama.cpp and ggml models inorder to run this on cpu based machine. though the model list was too overwhelming, i decided for now to go forward with llama2-7b-chat by bloke.
can anyone give me a brief for this would be really easy for you but for someone like me who is getting started and excited, wanna just get started with this. if you could just give me a brief about how will the workflow go or point me in the direction it would be great.
Thank you! | 2023-08-08T16:54:53 | https://www.reddit.com/r/LocalLLaMA/comments/15lnoib/noob_starting_with_llama2/ | HawkingRadiation42 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lnoib | false | null | t3_15lnoib | /r/LocalLLaMA/comments/15lnoib/noob_starting_with_llama2/ | false | false | self | 0 | null |
NVIDIA Unveils Next-Generation GH200 Grace Hopper Superchip | 70 | 2023-08-08T16:50:52 | https://nvidianews.nvidia.com/news/gh200-grace-hopper-superchip-with-hbm3e-memory | fallingdowndizzyvr | nvidianews.nvidia.com | 1970-01-01T00:00:00 | 0 | {} | 15lnktt | false | null | t3_15lnktt | /r/LocalLLaMA/comments/15lnktt/nvidia_unveils_nextgeneration_gh200_grace_hopper/ | false | false | 70 | {'enabled': False, 'images': [{'id': 'LNUxxo97U9-YS6WNrzJuSjRteeP8s-F1K2gN6F94lGg', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=108&crop=smart&auto=webp&s=2a10af4e6dc90a754e0b90ea9d3d4f68136ad005', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=216&crop=smart&auto=webp&s=ed922c09000aea72b8979bfcbe132c8660b8c8e4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=320&crop=smart&auto=webp&s=9e2db4cf3e4f5c1a01eae5b4b46186d0f36b3f9a', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=640&crop=smart&auto=webp&s=86c7d77fae400cf3822f011c2cff7e8780682f8b', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=960&crop=smart&auto=webp&s=7d4b2a52743e5caaa328575a1a26571ff4eceb0c', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?width=1080&crop=smart&auto=webp&s=eae3e1b4ab6aeddccb16000f5f7bb2273dd0ec74', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/k92uNY7hMhdE2QVHzU6QBS2A6dldLUR1nL61W_KY1Cc.jpg?auto=webp&s=f296c837749cce4fbb566943da19411ea3146f21', 'width': 1920}, 'variants': {}}]} | ||
Wizard Coder not Inserting Newlines | 11 | I'm using [Wizard Coder](https://huggingface.co/michaelfeil/ct2fast-WizardCoder-15B-V1.0) for code completion but I'm finding it regularly doesn't insert any new lines in the code it generates. For example:
"def power(x, y"
autocompletes with:
"): if y == 0: return 1 else: return x \* power(x, y-1) print(power(2, 3)) # Output: 8 print(factorial(5)"
​
As you can see, all of this is on a single line. Has anyone else seen this issue and is there any solution? Is this just a pitfall of the current best open source code completion models? | 2023-08-08T16:23:27 | https://www.reddit.com/r/LocalLLaMA/comments/15lmvbp/wizard_coder_not_inserting_newlines/ | kintrith | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lmvbp | false | null | t3_15lmvbp | /r/LocalLLaMA/comments/15lmvbp/wizard_coder_not_inserting_newlines/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': 'zQBjqlzNyNQuSWeFUBZpTPHXCSbNON8vgpVhHD9WPm8', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=108&crop=smart&auto=webp&s=8b2f2daec92b62d36a81d895339703f242dbd6f6', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=216&crop=smart&auto=webp&s=4a0e8a36de83c203773fd328c023da81e331e5b9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=320&crop=smart&auto=webp&s=1cda842e84c5757b997b2ee440a438d6ef4aebb5', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=640&crop=smart&auto=webp&s=5ef54a38d3f3c712e9b5d712789af9b49bbff75e', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=960&crop=smart&auto=webp&s=872256b2ce20588d463ac877cb646c6c3c348914', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?width=1080&crop=smart&auto=webp&s=22f4378f6cc9de281f7f6094453ee291a93bd6e8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/ODozVdhO7r6t7RDP9GhT3aiYYude9LbRPFDlup4b5K4.jpg?auto=webp&s=05708a208b5be44e60aa1778bb679841830f3765', 'width': 1200}, 'variants': {}}]} |
New Code Generation Model from Stability AI with 16K context | 177 | “Stability AI has just announced the release of StableCode, its very first LLM generative AI product for coding. This product is designed to assist programmers with their daily work while also providing a great learning tool for new developers ready to take their skills to the next level.” | 2023-08-08T16:16:17 | https://twitter.com/stabilityai/status/1688931312122675200?s=46&t=4Lg1z9tXUANCKLiHwRSk_A | Acrobatic-Site2065 | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 15lmojm | false | {'oembed': {'author_name': 'Stability AI', 'author_url': 'https://twitter.com/StabilityAI', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">🚀Exciting news! Stability AI has launched StableCode, the revolutionary generative AI LLM for coding!<br><br>💡 Developers, get ready to level up your coding game! <a href="https://twitter.com/hashtag/AI?src=hash&ref_src=twsrc%5Etfw">#AI</a> <a href="https://twitter.com/hashtag/Coding?src=hash&ref_src=twsrc%5Etfw">#Coding</a> <a href="https://twitter.com/hashtag/StableCode?src=hash&ref_src=twsrc%5Etfw">#StableCode</a> <a href="https://twitter.com/hashtag/StabilityAI?src=hash&ref_src=twsrc%5Etfw">#StabilityAI</a><a href="https://t.co/XFrV36JMMu">https://t.co/XFrV36JMMu</a></p>— Stability AI (@StabilityAI) <a href="https://twitter.com/StabilityAI/status/1688931312122675200?ref_src=twsrc%5Etfw">August 8, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/StabilityAI/status/1688931312122675200', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_15lmojm | /r/LocalLLaMA/comments/15lmojm/new_code_generation_model_from_stability_ai_with/ | false | false | 177 | {'enabled': False, 'images': [{'id': 'LaKRDKsNPue9HGl9OWKQdS_cFT1BV-KJj5nRYgR0qFI', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/CbUSt5r8wlKyvA-VX34Doxh3vW4JmQzUb438c-p23U8.jpg?width=108&crop=smart&auto=webp&s=a6816300d23fc456382dc7a90c01c84fed6f8fda', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/CbUSt5r8wlKyvA-VX34Doxh3vW4JmQzUb438c-p23U8.jpg?auto=webp&s=7ca07363e071db9939ae40f8e17d894fdb7268a7', 'width': 140}, 'variants': {}}]} | |
Bright Eye: free mobile app that generates art and different forms of text (code, math answers, essays, games, ideas, and more)! | 0 |
Hi all. I’m the cofounder of a startup focused on developing the AI super app called “Bright Eye”, a multipurpose AI product that generates and analyzes content.
One of its interesting use cases is helping students study, people plan, and offering general advice.
As the title puts it, it’s capable of generating almost anything, so the use-cases in terms of productivity isn’t confined to only those above, it can apply however you see fit. We run on GPT-4, stable diffusion, and Microsoft azure cognitive services.
Check us out below, we’re looking for advice on the functionality and design of the app (and possibly some longtime users):
https://apps.apple.com/us/app/bright-eye/id1593932475 | 2023-08-08T16:10:31 | https://www.reddit.com/r/LocalLLaMA/comments/15lmiyk/bright_eye_free_mobile_app_that_generates_art_and/ | EtelsonRecomputing | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lmiyk | false | null | t3_15lmiyk | /r/LocalLLaMA/comments/15lmiyk/bright_eye_free_mobile_app_that_generates_art_and/ | false | false | self | 0 | {'enabled': False, 'images': [{'id': '5U3w0HRUOPA7NaUGZ4RL_8wjJYuCeus8Xsjl6SQQYik', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=108&crop=smart&auto=webp&s=7db2cfa683e5720dee090d1f221bd54e8df0b627', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=216&crop=smart&auto=webp&s=bbd373d36ead27cd6671a23121b1fdfd15fddaa3', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=320&crop=smart&auto=webp&s=54e5172709b995922fb1d12b22e3d35ad5e5d6cd', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=640&crop=smart&auto=webp&s=093fc958654b88d8708047d87320af5aae1a6f6e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=960&crop=smart&auto=webp&s=0aacb4d9eae046d0fe454180c95b174fcd30df62', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?width=1080&crop=smart&auto=webp&s=471afa92ea2d771f53b3ecba6d64378ffe7678ff', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/X0kpF-8iYyTAFvs0cIVB-TD9XiogQBV5hJAL_ZbkVI4.jpg?auto=webp&s=0d339af183f91e074b1e2e1a19e16f9179d4d0da', 'width': 1200}, 'variants': {}}]} |
Prospects for future hardware releases | 6 | Is it likely that AMD or Intel will sense a market opportunity and start shipping affordable cards for those who wish to run local LLMs on a budget? Will future optimizations considerably reduce the requirements to run a model at a given performance level, or are we close to fundamental efficiency? Is there even a way to quantify this? | 2023-08-08T15:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/15ll7or/prospects_for_future_hardware_releases/ | WarmCartoonist | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ll7or | false | null | t3_15ll7or | /r/LocalLLaMA/comments/15ll7or/prospects_for_future_hardware_releases/ | false | false | self | 6 | null |
Merging base Llama2 LoRA weights into Chat model | 3 | I have been playing around with LoRA as a way to get knowledge into Llama-2-7B, with some limited success. I was able to achieve some style transfer, but the model still tends to hallucinate. Interestingly enough, by merging base-model LoRA weights trained on a simple autoregressive objective into the Chat model, the limited knowledge and style transfer seemed to work as well. Anyone know why this may be the case, and some ways of going about exploring this further? | 2023-08-08T15:07:45 | https://www.reddit.com/r/LocalLLaMA/comments/15lkts4/merging_base_llama2_lora_weights_into_chat_model/ | Numerous_Current_298 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lkts4 | false | null | t3_15lkts4 | /r/LocalLLaMA/comments/15lkts4/merging_base_llama2_lora_weights_into_chat_model/ | false | false | self | 3 | null |
looking for a guide | 0 | Does anyone know a comprehensive guide to set up everything to run LocalLLama?
[Preferably on a steamdeck (Linux/SteamOs) 😂] | 2023-08-08T14:07:22 | https://www.reddit.com/r/LocalLLaMA/comments/15lj8vq/looking_for_a_guide/ | pussifricker1337 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lj8vq | false | null | t3_15lj8vq | /r/LocalLLaMA/comments/15lj8vq/looking_for_a_guide/ | false | false | self | 0 | null |
What is a good model for NER inference that fits into 12gb RAM of RTX 3060? | 6 | Any suggestions for a newbie wanting to try out llama2 on a limited GPU resource workstation? CPU RAM of 176 gb. | 2023-08-08T13:50:47 | https://www.reddit.com/r/LocalLLaMA/comments/15litek/what_is_a_good_model_for_ner_inference_that_fits/ | sbs1799 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15litek | false | null | t3_15litek | /r/LocalLLaMA/comments/15litek/what_is_a_good_model_for_ner_inference_that_fits/ | false | false | self | 6 | null |
Big Model Comparison/Test (13 models tested) | 80 | Many interesting models have been released lately, and I tested most of them. Instead of keeping my observations to myself, I'm sharing my notes with you all.
Looking forward to your comments, especially if you have widely different experiences, so I may go back to retest some models with different settings. Here's how I evaluated these:
- Same conversation with all models, [SillyTavern](https://github.com/SillyTavern/SillyTavern) frontend, [KoboldCpp](https://github.com/LostRuins/koboldcpp) backend, GGML q5_K_M, deterministic settings, > 22 messages, going to full 4K context, noting especially good or bad responses.
So here's the list of models and my notes plus my very personal rating (➕ = worth a try, - ➖ disappointing, ❌ = unusable):
- ➕ **[airoboros-l2-13b-gpt4-2.0](https://huggingface.co/TheBloke/airoboros-l2-13b-gpt4-2.0-GGML)**: Talked without emoting, terse/boring prose, wrote what User does, exited scene without completion, got confused about who's who and anatomy, repetitive later. But detailed gore and surprisingly funny sense of humor!
- Also tested with Storywriter (non-deterministic, best of 3): Little emoting, multiple long responses (> 300 limit), sometimes funny, but mentioned boundaries/safety, ended RP by leaving multiple times, had to ask for detailed descriptions, got confused about who's who and anatomy.
- ➖ **[airoboros-l2-13b-gpt4-m2.0](https://huggingface.co/TheBloke/airoboros-l2-13b-gpt4-m2.0-GGML)**: Listed harm to self or others as limit, terse/boring prose, got confused about who's who and anatomy, talked to itself, repetitive later. Scene was good, but only after asking for description. Almost same as the previous model, but less smart.
- Also tested with Storywriter (non-deterministic, best of 3): Less smart, logic errors, very short responses.
- ➖ **[Chronos-13B-v2](https://huggingface.co/TheBloke/Chronos-13B-v2-GGML)**: Got confused about who's who, over-focused one plot point early on, vague, stating options instead of making choices, seemed less smart.
- ➕ **[Chronos-Hermes-13B-v2](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)**: More storytelling than chatting, sometimes speech inside actions, not as smart as Nous-Hermes-Llama2, didn't follow instructions that well. But nicely descriptive!
- ➖ **[Hermes-LLongMA-2-13B-8Ke](https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML)**: Doesn't seem as eloquent or smart as regular Hermes, did less emoting, got confused, wrote what User does, showed misspellings. SCALING ISSUE? Repetition issue after just 14 messages!
- ➖ **[Huginn-13B-GGML](https://huggingface.co/TheBloke/Huginn-13B-GGML)**: Past tense actions annoyed me! Didn't test further!
- ❌ **[13B-Legerdemain-L2](https://huggingface.co/TheBloke/13B-Legerdemain-L2-GGML)**: Started hallucinating and extremely long monologue right after greeting. Unusable!
- ➖ **[OpenAssistant-Llama2-13B-Orca-8K-3319](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-8K-3319-GGML)**: Quite smart, but eventually got confused about who's who and anatomy, mixing up people and instructions, went OOC, giving warnings about graphic nature of some events, some repetition later, AI assistant bleed-through.
- ❌ **[OpenAssistant-Llama2-13B-Orca-v2-8K-3166](https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-v2-8K-3166-GGML)**: EOS token triggered from start, unusable! Other interactions caused rambling.
- ➕ **[OpenChat_v3.2](https://huggingface.co/TheBloke/OpenChat_v3.2-GGML)**: Surprisingly good descriptions! Took action-emoting from greeting example, but got confused about who's who, repetitive emoting later.
- ➖ **[TheBloke/OpenOrcaxOpenChat-Preview2-13B](https://huggingface.co/TheBloke/OpenOrcaxOpenChat-Preview2-13B-GGML)**: Talked without emoting, sudden out-of-body-experience, long talk, little content, boring.
- ❌ **[qCammel-13](https://huggingface.co/TheBloke/qCammel-13-GGML)**: Surprisingly good descriptions! But extreme repetition made it unusable!
- ➖ StableBeluga-13B: No action-emoting, safety notices and asked for confirmation, mixed up anatomy, repetitive. But good descriptions!
My favorite remains **[Nous-Hermes-Llama2](https://huggingface.co/TheBloke/Nous-Hermes-Llama2-GGML)** which I tested and compared with **[Redmond-Puffin-13B](https://huggingface.co/TheBloke/Redmond-Puffin-13B-GGML)** [here](https://www.reddit.com/r/LocalLLaMA/comments/158j9r9/nous_hermes_llama2_vs_redmond_puffin_13b/) before. I think what's really needed for major breakthroughs is a fix for the [Llama 2 repetition issues](https://www.reddit.com/r/LocalLLaMA/comments/155vy0k/llama_2_too_repetitive/) and usable larger contexts (> 4K and coherence falls apart fast). | 2023-08-08T13:37:51 | https://www.reddit.com/r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/ | WolframRavenwolf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lihmq | false | null | t3_15lihmq | /r/LocalLLaMA/comments/15lihmq/big_model_comparisontest_13_models_tested/ | false | false | self | 80 | {'enabled': False, 'images': [{'id': 'bDW7jyCB5L7RKBwRUqrzWSn3bIb_Szu_GogYRebiCjw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=108&crop=smart&auto=webp&s=22d2e1896c94ecebda58fed69478453d4b16fd4f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=216&crop=smart&auto=webp&s=019bd779b582098d4b9aa01b87ee530132195fa6', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=320&crop=smart&auto=webp&s=55daeabbed00d9b3c1e7f3207edea4d0a265db39', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=640&crop=smart&auto=webp&s=47d7877d194270162d75f4922c4ecb60b17c101d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=960&crop=smart&auto=webp&s=004f5643d41eee63624b163efc53427073882f4f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?width=1080&crop=smart&auto=webp&s=e6ee7ad7840a9a71890c76db5e4df6a3f669e762', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/NFgRhAZ_cgs4xao5V1cOWNjptqU5JwIiGBtsvOdhlPU.jpg?auto=webp&s=44d160d8b5087122f25fba2443dc2c5a77adf472', 'width': 1280}, 'variants': {}}]} |
There is an option to teach llama from github? | 1 | [removed] | 2023-08-08T13:07:57 | https://www.reddit.com/r/LocalLLaMA/comments/15lhqsi/there_is_an_option_to_teach_llama_from_github/ | Agreeable_Fun7280 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lhqsi | false | null | t3_15lhqsi | /r/LocalLLaMA/comments/15lhqsi/there_is_an_option_to_teach_llama_from_github/ | false | false | default | 1 | null |
Hosted nous hermes? | 1 | I know this is local llama, but sometimes you wanna move from local to hosted. Is there anywhere that these models can be hosted cheaply? | 2023-08-08T13:03:48 | https://www.reddit.com/r/LocalLLaMA/comments/15lhnol/hosted_nous_hermes/ | TopperBowers | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lhnol | false | null | t3_15lhnol | /r/LocalLLaMA/comments/15lhnol/hosted_nous_hermes/ | false | false | self | 1 | null |
ggml.js: Serverless AI Inference on browser with Web Assembly | 1 | [removed] | 2023-08-08T12:49:18 | https://www.reddit.com/r/LocalLLaMA/comments/15lhb0u/ggmljs_serverless_ai_inference_on_browser_with/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lhb0u | false | null | t3_15lhb0u | /r/LocalLLaMA/comments/15lhb0u/ggmljs_serverless_ai_inference_on_browser_with/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4fC4vd6ed9N7BYtcZcg9XPJ6-FWYZfbxrXGLCVtQISQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=108&crop=smart&auto=webp&s=a99695e720ab200453487046768d233035ccb7d5', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=216&crop=smart&auto=webp&s=0e6c1414fb1d52a3970866162b6c53df9c568156', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=320&crop=smart&auto=webp&s=5f98061347f0fd8b2c8390165ef9b33fd68801aa', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=640&crop=smart&auto=webp&s=24e9789252cafab68704651b9a99a08b708f5cb9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=960&crop=smart&auto=webp&s=d20df18f86876cb3e409418e1ea4570cf5c636ed', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?width=1080&crop=smart&auto=webp&s=ec4046c6276af5c4334ecd6d97cb032256849bc8', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/iZahc4_iBp0yJhz_HToud8tCDstNjttaYz0KaDSTtwg.jpg?auto=webp&s=648667a349adbeab05eb87dc83baedadb5b1020d', 'width': 1200}, 'variants': {}}]} |
Will training a LoRA help an AI emulate a specific writer's style? | 26 | I'm interested in creating LoRAs for specific authors so I can get an AI to emulate their style. Most immediately, I've been looking at Nabokov and Hemingway, specifically, because their writing styles are distinctive and immediately recognizable.
I've trained a LoRA on a dataset of three of Nabokov's books, and tried a number of different bake settings, but the effect seems very minimal, and possibly even a placebo if I'm being honest.
I think my most drastic training attempt was with a batch size of 256, a rank of 32, 20 epochs, and a 1e-6 learning rate, but even then I can't quite tell if anything's happening.
Do I need different settings, or is this just not a process that's going to work to begin with?
I've been using all three of the new Llama-2 models in this process and training on different models didn't seem to change anything either. | 2023-08-08T12:45:28 | https://www.reddit.com/r/LocalLLaMA/comments/15lh7wx/will_training_a_lora_help_an_ai_emulate_a/ | tenmileswide | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lh7wx | false | null | t3_15lh7wx | /r/LocalLLaMA/comments/15lh7wx/will_training_a_lora_help_an_ai_emulate_a/ | false | false | self | 26 | null |
Lora on top of lora-merged base model? | 6 | My intuition is that it should be technically trivial after the merge, but not sure if anyone actually tried it.
Has anyone tried lora- or qlora-ing on a previously lora-ed model?
Maybe it’s possible but not recommended? | 2023-08-08T12:33:40 | https://www.reddit.com/r/LocalLLaMA/comments/15lgy72/lora_on_top_of_loramerged_base_model/ | EntertainmentBroad43 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lgy72 | false | null | t3_15lgy72 | /r/LocalLLaMA/comments/15lgy72/lora_on_top_of_loramerged_base_model/ | false | false | self | 6 | null |
Teaching Llama to reason in another language help! | 2 | Hi r/LocalLLaMA,
​
I have been tinkering with Ooba and API calls for a while now, and have hit a bit of a wall.
My use case is corporate and not too out there: given transcripts of business calls, I should be able to answer questions and deliver summaries of the conversation. All easy with Langchain and vector databases, and iterative summarization.
My only issue is that this is all happening in Brazilian Portuguese. OpenAI models all have a similar level of intelligence and coherence in non-English languages, but Llama seems to fail miserably at this.
There was an attempt to teach LLama 1 Portuguese: [https://github.com/22-hours/cabrita](https://github.com/22-hours/cabrita) , and so I used the same dataset on Llama2-13B-chat to update the project, but like some of you have been experiencing, the model goes off its rocker after around 100 tokens, doesn't know when to stop, often lapses into English while still being correct, etc.
Do any of you have experience teaching an LLM a different language? I have read that this happens mainly at foundational training, and that LoRAs can't really imbue the model with reasoning skills in a new language. I was thinking of some possible options:
* Use a larger dataset which successfully created a good finetune (e.g. Vicuna) and translate it to PT-BR using ChatGPT (expensive).
* Transcribe all calls in Portuguese and English, and do all prompts and reasoning in English, only translating to Portuguese at the end.
​
I really appreciate any answers of further points you may have, as I've trained maybe 10-20 LoRAs at this point with no real improvement.
​
P.S. : After how many steps do you usually stop LoRA finetuning? I seem to consistently get negligible loss improvements after around 100, and my runs that went far beyond that and probably overfitted only output a bunch of newlines regardless of the prompt. | 2023-08-08T12:22:12 | https://www.reddit.com/r/LocalLLaMA/comments/15lgoo7/teaching_llama_to_reason_in_another_language_help/ | Puzzled_Chemist_279 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lgoo7 | false | null | t3_15lgoo7 | /r/LocalLLaMA/comments/15lgoo7/teaching_llama_to_reason_in_another_language_help/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'h1WdwUjHD04AYItpD2-q7M_23Em4OEh1IeVUkIzKuOU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=108&crop=smart&auto=webp&s=2da3f5cd5c1f35bc9151f780f3047ed450916623', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=216&crop=smart&auto=webp&s=a55fdbd2ab5090400f5b834d104030497d24d62a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=320&crop=smart&auto=webp&s=f7f9cf17c676653eb7f1cb3f6ec1797dc22fa344', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=640&crop=smart&auto=webp&s=a6e11c0535fe96c1debc1884e29519842c3debfe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=960&crop=smart&auto=webp&s=1aaca8544076713d4ff630bbc61d117db16581f4', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?width=1080&crop=smart&auto=webp&s=19e7f2c78b2241c1f8fff6c7b4ba2dc6edb0207d', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/i79qaNEIF-LEwnatW8R4IuTNz4GWElpPqvQp8KI8Oy0.jpg?auto=webp&s=a7edbad9b190ec1977da916d053d8f2c3d77ead5', 'width': 1200}, 'variants': {}}]} |
I will help you set up Local LLama for Free. | 1 | A few of my friends asked me to set it up for them. I've documented the process for the setup so I can do this pretty fast for others as well.
I'm trying to gauge the market on whether consulting people on Llama might be a viable market. So, I get plenty out of it as well. If you're interested, you can comment or DM me. | 2023-08-08T12:07:54 | https://www.reddit.com/r/LocalLLaMA/comments/15lgcpf/i_will_help_you_set_up_local_llama_for_free/ | middlenameishardwork | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lgcpf | false | null | t3_15lgcpf | /r/LocalLLaMA/comments/15lgcpf/i_will_help_you_set_up_local_llama_for_free/ | false | false | self | 1 | null |
chronos-hermes-13b-v2-GPTQ - It's here, how is it? | 14 | Here it is folks:
[https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ](https://huggingface.co/Austism/chronos-hermes-13b-v2-GPTQ)
or
[https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML](https://huggingface.co/TheBloke/Chronos-Hermes-13B-v2-GGML)
How are people getting on with it for role play? I'm tempted to give it a try but I assume it only has 4k context. Hoping that someone will make a higher context version? Still... post your experiences. Is it better than the venerable v1? (which I'm still using!) | 2023-08-08T11:11:15 | https://www.reddit.com/r/LocalLLaMA/comments/15lf4vo/chronoshermes13bv2gptq_its_here_how_is_it/ | CasimirsBlake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lf4vo | false | null | t3_15lf4vo | /r/LocalLLaMA/comments/15lf4vo/chronoshermes13bv2gptq_its_here_how_is_it/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': '5lq-vTKb8WPoIHJIEwLDWm-pYzECuhBMTlOVwcxpDrI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=108&crop=smart&auto=webp&s=cb46f17a656cf4e4df0ca8a270ea0958d90cd055', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=216&crop=smart&auto=webp&s=2fa89fcdce3609e231bd45051f854daf612ae4b8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=320&crop=smart&auto=webp&s=aaac478b7016f3474de7dfc01668d075dbb90401', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=640&crop=smart&auto=webp&s=0ef64b378b346d29602556575f758ae7bd73248c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=960&crop=smart&auto=webp&s=3a5a9cb8945cb88d592bb47c9cc0e82c59baeaaa', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?width=1080&crop=smart&auto=webp&s=c95f299563fcb20ca1e2f22bc46456f1b225768a', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/7MHHoCMLXs6zPUkx4XnSTBQ9FtOA6uFtAMgStCY2Jws.jpg?auto=webp&s=898083f327649ebaf016422f99ebbaa5f46b21ab', 'width': 1200}, 'variants': {}}]} |
Inference Speed for Llama 2 70b on A6000 with Exllama - Need Suggestions! | 12 | Hello everyone,I'm currently running Llama-2 70b on an A6000 GPU using Exllama, and I'm achieving an average inference speed of 10t/s, with peaks up to 13t/s.
I'm wondering if there's any way to further optimize this setup to increase the inference speed.Has anyone here had experience with this setup or similar configurations? I'd love to hear any suggestions, tips, or best practices that could help me boost the performance.
Thanks in advance for your insights!
Edit: Im using Text-generation-webui with max\_seq\_len 4096 and alpha\_value 2. | 2023-08-08T11:05:57 | https://www.reddit.com/r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/ | Used_Carpenter_6674 | self.LocalLLaMA | 2023-08-08T11:12:02 | 0 | {} | 15lf119 | false | null | t3_15lf119 | /r/LocalLLaMA/comments/15lf119/inference_speed_for_llama_2_70b_on_a6000_with/ | false | false | self | 12 | null |
Using HF Transformer APIs for local use of models like "TheBloke/Llama-2-7B-Chat-GGML" | 1 | [removed] | 2023-08-08T10:52:17 | https://www.reddit.com/r/LocalLLaMA/comments/15leqpa/using_hf_transformer_apis_for_local_use_of_models/ | m_k_johnson | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15leqpa | false | null | t3_15leqpa | /r/LocalLLaMA/comments/15leqpa/using_hf_transformer_apis_for_local_use_of_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'TS8j-IHmN8kpOvNSHQflSeOArGV9aYAaVmSnkggMS0U', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=108&crop=smart&auto=webp&s=48b677f91db4cbe41319e587f30b415ba58782d7', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=216&crop=smart&auto=webp&s=66c18db07742b8b4621e1fe92b632b38829185bc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=320&crop=smart&auto=webp&s=910b062c09f481e72e563646b4f89b8b8d8731f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=640&crop=smart&auto=webp&s=c6ee42d59deb0a27a1921431e8eb886b6dc58d3b', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=960&crop=smart&auto=webp&s=61a6994eb8bd0f40b8474fc9f59d22f6ecce308d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?width=1080&crop=smart&auto=webp&s=a13bb41fbfea384c96d19e897a16cd8447b864da', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/arhy915TK3EgvCgI0MmxciY1t5u4AHjz8DmH-myuWrA.jpg?auto=webp&s=c6684bc3eba63191f50c5b7b675497c0289905ec', 'width': 1200}, 'variants': {}}]} |
What is the most effective language model for summarizing scientific text? | 6 | I've been working on abstractive summarization for scientific articles (mainly arXiV). Until now, I've fine-tuned a PEGASUS and a BART-large model and obtained okay-ish ROUGE values (\~40 ROUGE-1). What other models would you guys suggest that generate coherent summaries given a section of the research paper? Thank you. | 2023-08-08T10:10:06 | https://www.reddit.com/r/LocalLLaMA/comments/15ldw0w/what_is_the_most_effective_language_model_for/ | psj_2908 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ldw0w | false | null | t3_15ldw0w | /r/LocalLLaMA/comments/15ldw0w/what_is_the_most_effective_language_model_for/ | false | false | self | 6 | null |
Help: What sorcery do I use to figure out correct model loader? | 21 | I new to all of this stuff and am trying to load some new models, but I don't know how to determine the model loader information. How does one determine these parameters? Is it written up somewhere? Is it buried in the model card or does one have to write this themselves according to some information on a website? Please help. Thank you. | 2023-08-08T09:35:23 | https://www.reddit.com/r/LocalLLaMA/comments/15ld8ld/help_what_sorcery_do_i_use_to_figure_out_correct/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ld8ld | false | null | t3_15ld8ld | /r/LocalLLaMA/comments/15ld8ld/help_what_sorcery_do_i_use_to_figure_out_correct/ | false | false | self | 21 | null |
Pretty great reasoning from Nous Research Hermes LLama2 13B, q4. | 82 | 2023-08-08T09:16:23 | TopperBowers | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15lcv2i | false | null | t3_15lcv2i | /r/LocalLLaMA/comments/15lcv2i/pretty_great_reasoning_from_nous_research_hermes/ | false | false | 82 | {'enabled': True, 'images': [{'id': 'MvcKsKpvWFfFarLhkDDllwYOXiC2Fici_u9m61spywM', 'resolutions': [{'height': 50, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=108&crop=smart&auto=webp&s=25c6d83946dd24932bd6e15017e449f1cd8e971a', 'width': 108}, {'height': 100, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=216&crop=smart&auto=webp&s=1916ebcdd78e47c6188464490ef55b7d0a82ccfd', 'width': 216}, {'height': 148, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=320&crop=smart&auto=webp&s=4d0598b344964a17b70b6241d65c3dc9bda78a2b', 'width': 320}, {'height': 297, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?width=640&crop=smart&auto=webp&s=de241c9241e564154f78c232ed3d73a04a3b6ac6', 'width': 640}], 'source': {'height': 395, 'url': 'https://preview.redd.it/wj382l7rsugb1.png?auto=webp&s=8c11e5d6589c73e2ff59a58558eeadcd276f7d84', 'width': 850}, 'variants': {}}]} | |||
PyTorch and CUDA version mismatch | 1 | I am using this repo: [https://github.com/mzbac/qlora-fine-tune](https://github.com/mzbac/qlora-fine-tune) to fine tune a 7B 4bit quantized llama model. However, when trying to run setup\_cuda.py it says that my cuda and torch versions are mismatched.I tried building torch from source but it still only installs torch 2.0.1+cu118 which is incompatible with CUDA 12.2 which is what I am running
Should I downgrade my CUDA version to 11.8?
https://preview.redd.it/nj1ne57miugb1.png?width=1256&format=png&auto=webp&s=2ee3611db5e7b9137d762ae302028c3e4d6f02b3 | 2023-08-08T08:18:42 | https://www.reddit.com/r/LocalLLaMA/comments/15lbss9/pytorch_and_cuda_version_mismatch/ | QuantumTyping33 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lbss9 | false | null | t3_15lbss9 | /r/LocalLLaMA/comments/15lbss9/pytorch_and_cuda_version_mismatch/ | false | false | 1 | null | |
Video testing(for reference sake) with a bonafide trivial benchmark for the mountain of models that's forming on Huggingface. Suggestions welcomed. | 1 | [removed] | 2023-08-08T07:37:16 | https://www.reddit.com/r/LocalLLaMA/comments/15lb2g8/video_testingfor_reference_sake_with_a_bonafide/ | AI_Trenches | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15lb2g8 | false | null | t3_15lb2g8 | /r/LocalLLaMA/comments/15lb2g8/video_testingfor_reference_sake_with_a_bonafide/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '4TdTL_e77QYjU7JKEsBZ1I1hqGryfWhbNyKTgp1CA-o', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=108&crop=smart&auto=webp&s=53e7b5f7e79788f23fd93f4792d90756633147d2', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=216&crop=smart&auto=webp&s=ada673f0b99006d94ae7b3be37a0e9360bbb6f46', 'width': 216}, {'height': 320, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=320&crop=smart&auto=webp&s=3ac23377f9516043e839af9d6c0a22c2493f91fc', 'width': 320}, {'height': 640, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?width=640&crop=smart&auto=webp&s=3dfe0a8971eb8a14297598b2f731251525ec093f', 'width': 640}], 'source': {'height': 900, 'url': 'https://external-preview.redd.it/vLMAfi65baIodwRaA35YSLasEjU7MJlhVKL_0YQIOWc.jpg?auto=webp&s=a99ff10d14e7943eda13f7c56eb9fb8832851019', 'width': 900}, 'variants': {}}]} |
Declarai - a game-changer for Python-based language model interactions! | 23 | Struggled with using LLMs in production? We've been there. That's why we created Declarai, an open-source gift to the engineering community.
What's Declarai?Simply put, it's declarative AI with Python. Keep developing as always, but supercharged with the power of LLMs.
With Declarai, you can:
🔥 Write Declarative Python Code: Use Python's own syntax like type hints and docstrings to guide an AI model.
🔥 Build Robust Solutions: Craft production-grade AI systems, minus the complex prompts or messy dependencies.
​
https://preview.redd.it/64ubq3k4cugb1.png?width=1456&format=png&auto=webp&s=2dbd634b91a05e41e98898c8c6f68f5852c403d2
Feel free to take a look at our docs: [https://vendi-ai.github.io/declarai/](https://vendi-ai.github.io/declarai/)
We would love to get your feedback and would appreciate a star on GitHub 🙏 ⭐[https://github.com/vendi-ai/declarai](https://github.com/vendi-ai/declarai)
Declarai is still in beta so your feedback would be invaluable to us!
Help us build the future of AI for engineers, no fancy terms, no advance datascience, just code that works 🤩 | 2023-08-08T07:31:28 | https://www.reddit.com/r/LocalLLaMA/comments/15laymb/declarai_a_gamechanger_for_pythonbased_language/ | matkley12 | self.LocalLLaMA | 2023-08-08T07:43:00 | 0 | {} | 15laymb | false | null | t3_15laymb | /r/LocalLLaMA/comments/15laymb/declarai_a_gamechanger_for_pythonbased_language/ | false | false | 23 | null | |
What llama 2 pay-on-demand API to use? | 13 | Hi all
I'd like to do some experiments with the 70B chat version of Llama 2. However, I don't have a good enough laptop to run it locally with reasonable speed. So I consider using some remote service, since it's mostly for experiments.
So my question: Do you have any recommendations for APIs I can use, where I just pay per usage? Same as the OpenAI API basically.
I saw you can host the models on HuggingFace, Azure or AWS, but they have a dedicated VM running (I think you have to start or stop it) which costs an hourly fix-price. Then I also saw [https://replicate.com/](https://replicate.com/), which I don't know, but they seem to offer what I need.
Do you have experience with this service? Or do you have any recommendations?
Thanks a lot! | 2023-08-08T06:08:30 | https://www.reddit.com/r/LocalLLaMA/comments/15l9eh7/what_llama_2_payondemand_api_to_use/ | silvanmelchior | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l9eh7 | false | null | t3_15l9eh7 | /r/LocalLLaMA/comments/15l9eh7/what_llama_2_payondemand_api_to_use/ | false | false | self | 13 | null |
Has anyone tried finetuning through ReLoRA? | 28 | Has anyone tried finetuning through [ReLoRA](https://github.com/Guitaricet/relora)?
It seems like a way of adding knowledge to models. | 2023-08-08T05:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/15l8oyv/has_anyone_tried_finetuning_through_relora/ | ninjasaid13 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l8oyv | false | null | t3_15l8oyv | /r/LocalLLaMA/comments/15l8oyv/has_anyone_tried_finetuning_through_relora/ | false | false | self | 28 | {'enabled': False, 'images': [{'id': 'bqlZHX0DME6igHKbkudvkCpdRG3-_vFiv_J1JMc8-qw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=108&crop=smart&auto=webp&s=97168ba9e2c31c1c6c5aaf036d5e48c30a6c5780', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=216&crop=smart&auto=webp&s=bcb86921adfc6adf1636cabf2583703d861959cc', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=320&crop=smart&auto=webp&s=b194cda91c36e9074dfe67294db8fbc802b5e8f2', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=640&crop=smart&auto=webp&s=1faa8f311ab5a516897bdcb44dea7957ac693f33', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=960&crop=smart&auto=webp&s=5f72612f303299583bd58ed2a58c4773faee7e6f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?width=1080&crop=smart&auto=webp&s=04f2aee3d9d1f21d87544f5f5c091cf21c5a1283', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/MR1Rdvi9CtLH50CgXROKE7njDcJNQZX5FmCbfjYzQ7Q.jpg?auto=webp&s=0165fd76784143b91d0a212f85aaf2f844df823f', 'width': 1200}, 'variants': {}}]} |
Baby 🦙 Code 🐍 Interpreter 🚀 (v.2) | 29 | [removed] | 2023-08-08T02:55:12 | https://www.reddit.com/r/LocalLLaMA/comments/15l5ivs/baby_code_interpreter_v2/ | LyPreto | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l5ivs | false | null | t3_15l5ivs | /r/LocalLLaMA/comments/15l5ivs/baby_code_interpreter_v2/ | false | false | 29 | {'enabled': False, 'images': [{'id': 'DJPqvteONpGwVVw6LzaG6b8vlDa2rv2hETCaqe0z57s', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=108&crop=smart&auto=webp&s=b6caea286bbf31bdb473212eb5668f45376977be', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=216&crop=smart&auto=webp&s=ba8933d74dda3c391a7c9a355d2e1cd0054d1c21', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=320&crop=smart&auto=webp&s=93b690f58b739ff61da7a147fc67d6c8842b3a7d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=640&crop=smart&auto=webp&s=a55f55983fcc0b3f5a6d4e0b51f627e1b40ef9d4', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=960&crop=smart&auto=webp&s=e56b77b835b76c51a1e12a410b9e908f0255d397', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?width=1080&crop=smart&auto=webp&s=d06ca9eb5611d109d3ef7935f6de61545e9828da', 'width': 1080}], 'source': {'height': 640, 'url': 'https://external-preview.redd.it/ohwupr9MqnYXF974_2-gAgkZDuFxjDg48bFY3KdCQdc.jpg?auto=webp&s=0b2a006e16468374b78dd67390927053776e6137', 'width': 1280}, 'variants': {}}]} | |
Why can’t Lora fine tune add knowledge? | 57 | Whenever someone asks about fine tuning a Lora for a llama model to “add knowledge”, someone will suggest to do RAG instead.
What is the reason why Lora fine tune can’t do it, but the original full parameter pretraining can? | 2023-08-08T02:44:27 | https://www.reddit.com/r/LocalLLaMA/comments/15l5auj/why_cant_lora_fine_tune_add_knowledge/ | xynyxyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l5auj | false | null | t3_15l5auj | /r/LocalLLaMA/comments/15l5auj/why_cant_lora_fine_tune_add_knowledge/ | false | false | self | 57 | null |
Trying to fine-tune Starcoderbase 1B in rombodawg dataset(googlecolab) | 2 | So, I was trying to fine tune the smallest startcoder in the huge data set of rombodawg. However my PC is crap so I was trying to use google colab. I don't know if it is possible. I was able to load the model and but failed to load the database in a trainable way, I am inexperience and I am trying to adapt some google colabs I found in the topic (PEFT-Lora). Can anyone help me? Give me some tips or at least some references? Or even show me that it is impossible heheeh. | 2023-08-08T02:35:36 | https://www.reddit.com/r/LocalLLaMA/comments/15l543h/trying_to_finetune_starcoderbase_1b_in_rombodawg/ | GG9242 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l543h | false | null | t3_15l543h | /r/LocalLLaMA/comments/15l543h/trying_to_finetune_starcoderbase_1b_in_rombodawg/ | false | false | self | 2 | null |
I can't run any LLM locally. I guess I know why. (AMD Phenom) | 1 | [removed] | 2023-08-07T23:55:08 | https://www.reddit.com/r/LocalLLaMA/comments/15l1g6a/i_cant_run_any_llm_locally_i_guess_i_know_why_amd/ | Wiktor1975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l1g6a | false | null | t3_15l1g6a | /r/LocalLLaMA/comments/15l1g6a/i_cant_run_any_llm_locally_i_guess_i_know_why_amd/ | false | false | self | 1 | null |
I can't run any LLM locally (AMD Phenom) | 1 | [removed] | 2023-08-07T23:49:11 | https://www.reddit.com/r/LocalLLaMA/comments/15l1axo/i_cant_run_any_llm_locally_amd_phenom/ | Wiktor1975 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l1axo | false | null | t3_15l1axo | /r/LocalLLaMA/comments/15l1axo/i_cant_run_any_llm_locally_amd_phenom/ | false | false | self | 1 | null |
Stumbled across the negative CFG parameter today. Works as expected. ;) | 120 | 2023-08-07T23:46:24 | drwebb | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 15l18ka | false | null | t3_15l18ka | /r/LocalLLaMA/comments/15l18ka/stumbled_across_the_negative_cfg_parameter_today/ | false | false | nsfw | 120 | {'enabled': True, 'images': [{'id': 'McOocXcORNcXLBmchLQ4TMelke8R4H65dKlf2a6Uaaw', 'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&auto=webp&s=e857a127d1b36d8cd5a8c89802dd9912a2801304', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&auto=webp&s=45011fb3320a6ae83e7635cfc81902fd89189ed0', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&auto=webp&s=56b2459afca318c88b2c24b0b0ae1391d5b00178', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&auto=webp&s=198a30c5a324223779808c146bb860f94de0cfd9', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&auto=webp&s=462ba6bc7c94156f3ac61c283896a231b1675d5f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&auto=webp&s=0f1a7ad43dbb142932ef39f89977fbb92cea4c3b', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?auto=webp&s=c8ec081a65e0b9789b1ec091b986099f3b959163', 'width': 1768}, 'variants': {'nsfw': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8c4333b4904c9b050c8dc98f5adf28a4728a6568', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8bf1cfe649dcc6de6aac6796f1373bc94852860c', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b869728cd97b2eeaf559bb692c04b18fc5b439ee', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7617c182595316fb380633f859760ef351b4d06a', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=bf94eb39375b6edd7dbd4aed494514ff24252e2f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3cee794fb84ac3e46bcf665459f84d010a7474ab', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?blur=40&format=pjpg&auto=webp&s=7447004e5bcb391311315e2ea32469134fa63b17', 'width': 1768}}, 'obfuscated': {'resolutions': [{'height': 62, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=108&crop=smart&blur=10&format=pjpg&auto=webp&s=8c4333b4904c9b050c8dc98f5adf28a4728a6568', 'width': 108}, {'height': 125, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=216&crop=smart&blur=21&format=pjpg&auto=webp&s=8bf1cfe649dcc6de6aac6796f1373bc94852860c', 'width': 216}, {'height': 186, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=320&crop=smart&blur=32&format=pjpg&auto=webp&s=b869728cd97b2eeaf559bb692c04b18fc5b439ee', 'width': 320}, {'height': 372, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=640&crop=smart&blur=40&format=pjpg&auto=webp&s=7617c182595316fb380633f859760ef351b4d06a', 'width': 640}, {'height': 558, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=960&crop=smart&blur=40&format=pjpg&auto=webp&s=bf94eb39375b6edd7dbd4aed494514ff24252e2f', 'width': 960}, {'height': 627, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?width=1080&crop=smart&blur=40&format=pjpg&auto=webp&s=3cee794fb84ac3e46bcf665459f84d010a7474ab', 'width': 1080}], 'source': {'height': 1028, 'url': 'https://preview.redd.it/h6mftk21zrgb1.png?blur=40&format=pjpg&auto=webp&s=7447004e5bcb391311315e2ea32469134fa63b17', 'width': 1768}}}}]} | ||
What's similar to TheBloke_Llama-2-70B-Chat-GGML , but less censored? | 1 | [removed] | 2023-08-07T23:29:10 | https://www.reddit.com/r/LocalLLaMA/comments/15l0t5m/whats_similar_to_thebloke_llama270bchatggml_but/ | w7gg33h | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15l0t5m | false | null | t3_15l0t5m | /r/LocalLLaMA/comments/15l0t5m/whats_similar_to_thebloke_llama270bchatggml_but/ | false | false | self | 1 | null |
3060 Ti or 4060 viable for 13B model | 8 | Hey there!
I want to know about 13B model tokens/s for 3060 Ti or 4060, basically 8GB cards.
I'm specifically interested in performance of GPTQ, GGML, Exllama, offloading, different sized contexts (2k, 4k, 8-16K) etc.
I'm also curious about the speed of the 30B models on offloading.
Any insights would be greatly appreciated. TYSM! | 2023-08-07T21:58:16 | https://www.reddit.com/r/LocalLLaMA/comments/15kyhen/3060_ti_or_4060_viable_for_13b_model/ | Xhehab_ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kyhen | false | null | t3_15kyhen | /r/LocalLLaMA/comments/15kyhen/3060_ti_or_4060_viable_for_13b_model/ | false | false | self | 8 | null |
I ran Llama2 on my Mac in < 5m | 1 | [removed] | 2023-08-07T21:11:24 | https://www.reddit.com/r/LocalLLaMA/comments/15kx8dg/i_ran_llama2_on_my_mac_in_5m/ | rajatarya | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kx8dg | false | null | t3_15kx8dg | /r/LocalLLaMA/comments/15kx8dg/i_ran_llama2_on_my_mac_in_5m/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '1ovRzCUs3oeIe_KXmpJSUSacS0FnNwwWmJEY_j6aBiw', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=108&crop=smart&auto=webp&s=3bf3ae602824591e462b01ca1c63ce5490fe501e', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=216&crop=smart&auto=webp&s=1dbacaa458274110ad3f70caafaa7a7f61e9bcb8', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=320&crop=smart&auto=webp&s=96c119e6a1039321d77809b2fb1272b5c0fa37e8', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=640&crop=smart&auto=webp&s=085bbf3856b6e62c0c69071287717fb3dd706b95', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=960&crop=smart&auto=webp&s=b3c0aef8c595835a2d8a35dd2e77fb76f19d454c', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?width=1080&crop=smart&auto=webp&s=c5636a8fe34c91f75efd92993632c9c78a5801dc', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/beiC5Rza84NRBokpxkagcjMOjlj5_57NpJn7MnyudPc.jpg?auto=webp&s=25f00feb2ce58a76004b2404d4a6e3a257b07dae', 'width': 1200}, 'variants': {}}]} |
I'm just starting out and can't get the text generation to work. Any tips? | 0 | I'm using the bloke's models and they load right in. But when I type in the input box, it just pastes whatever I typed in the output box. Not sure what I'm doing wrong. Seems like I'm missing something simple but I don't know. | 2023-08-07T20:48:11 | https://www.reddit.com/r/LocalLLaMA/comments/15kwli7/im_just_starting_out_and_cant_get_the_text/ | nuupdog | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kwli7 | false | null | t3_15kwli7 | /r/LocalLLaMA/comments/15kwli7/im_just_starting_out_and_cant_get_the_text/ | false | false | self | 0 | null |
LLAMA 2 : Run this on internet | 0 | My llama2 (oogabooga) run perfect in local machine: (127.0.0:7860).
Now I want to put this accessible on the internet for others.
How to do this?
Thanks a lot.
C.G. | 2023-08-07T20:46:48 | https://www.reddit.com/r/LocalLLaMA/comments/15kwk3z/llama_2_run_this_on_internet/ | Embarrassed-Cicada94 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kwk3z | false | null | t3_15kwk3z | /r/LocalLLaMA/comments/15kwk3z/llama_2_run_this_on_internet/ | false | false | self | 0 | null |
need advice on how to build my own version of chatgpt code interpreter | 3 |
I am thinking of building a similar tool to chatgpt codeinterpreter using some opensource LLM.
I have only worked with the transformers library up untill now but this project might require some fine tuning or weight training and retrivel system. etc and i don't exactaly know how to do that can anyone please spare some advice. I mainly just need help in understanding what LLM to use and how to finetune it. | 2023-08-07T20:11:22 | https://www.reddit.com/r/LocalLLaMA/comments/15kvlms/need_advice_on_how_to_build_my_own_version_of/ | Occupying-Mars | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kvlms | false | null | t3_15kvlms | /r/LocalLLaMA/comments/15kvlms/need_advice_on_how_to_build_my_own_version_of/ | false | false | self | 3 | null |
advice on building my own version of chatgpt code interpreter | 1 | [removed] | 2023-08-07T20:09:04 | https://www.reddit.com/r/LocalLLaMA/comments/15kvjcq/advice_on_building_my_own_version_of_chatgpt_code/ | neo-gen-evan | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kvjcq | false | null | t3_15kvjcq | /r/LocalLLaMA/comments/15kvjcq/advice_on_building_my_own_version_of_chatgpt_code/ | false | false | self | 1 | null |
What's currently the best model for a 12GB 3060? | 11 | Hey guys!
I am really happy about the open source community content pouring into huggingface.(thanks TheBloke!)
At the same time, I'm getting a little confused. I know I can run my own tests, and I will, but I'm also interested: what do you guys think, which one is the best model for a 12GB 3060 for sfw and nsfw roleplay, but also generic knowledge?
So far I've had great results with airoboros but the absolutely best was wizardlm vicuna uncensored. Is there anything better out there yet, preferably something from llama2?
Thanks a lot if you take your time to comment and help. <3 | 2023-08-07T20:03:43 | https://www.reddit.com/r/LocalLLaMA/comments/15kve2o/whats_currently_the_best_model_for_a_12gb_3060/ | dangernoodle01 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kve2o | false | null | t3_15kve2o | /r/LocalLLaMA/comments/15kve2o/whats_currently_the_best_model_for_a_12gb_3060/ | false | false | self | 11 | null |
Qwen 7B ggml and gptq versions | 14 | Hello, newby qwestion here: it has been a tremendous amount of time, in terms of how fast this community is moving, yet a model claiming to have capacity at 7B of all other models at 13B has no versions in ggml and gptq, the most popular versions used. What's going on? | 2023-08-07T19:37:08 | https://www.reddit.com/r/LocalLLaMA/comments/15kuoat/qwen_7b_ggml_and_gptq_versions/ | Eduard_T | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kuoat | false | null | t3_15kuoat | /r/LocalLLaMA/comments/15kuoat/qwen_7b_ggml_and_gptq_versions/ | false | false | self | 14 | null |
Finetuning for code generation | 13 | i want to fine tune any open source llm for code generation purpose with some of my code. any idea what model would be suitable? and any example of implementation? | 2023-08-07T19:30:48 | https://www.reddit.com/r/LocalLLaMA/comments/15kui8l/finetuning_for_code_generation/ | learner_beginner | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kui8l | false | null | t3_15kui8l | /r/LocalLLaMA/comments/15kui8l/finetuning_for_code_generation/ | false | false | self | 13 | null |
Introducing ggml.js: A JavaScript framework for ggml models | 1 | [removed] | 2023-08-07T19:22:01 | https://www.reddit.com/r/LocalLLaMA/comments/15ku9su/introducing_ggmljs_a_javascript_framework_for/ | AnonymousD3vil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ku9su | false | null | t3_15ku9su | /r/LocalLLaMA/comments/15ku9su/introducing_ggmljs_a_javascript_framework_for/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'mjAUxnYu0F8Er2K42CWPaaMaopQuUQmbwD2Ul2rf5Cg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=108&crop=smart&auto=webp&s=b5ac57e4a8476640074e3cdbd002dd99dd26c5c0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=216&crop=smart&auto=webp&s=9527a706594e2a0de838ba9311c16f4cc60bf105', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=320&crop=smart&auto=webp&s=fff62e74e37ba9e5aac8899416d355d8061bef06', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=640&crop=smart&auto=webp&s=b4d0ab611443c91af62b65d8f5e7efda732bfa87', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=960&crop=smart&auto=webp&s=fcb64ed4102ec6abec2856bae57238810c94a2de', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?width=1080&crop=smart&auto=webp&s=8e4b5358c181ffb52e697c9813401ddc2c092ea4', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dHkAsG3bkpZkfZ_TQXaaepgRy4_TaKKUlUVWdmbgJh0.jpg?auto=webp&s=64fda1d5e66310fbe3f8b9652456d737fd944b24', 'width': 1200}, 'variants': {}}]} |
Did you notice improvements in GGML models 7B, 13B, 30B, 70B vs q2, q3, q4, q5, q6, q8 ? | 33 | Because apart the RAM usage i didn't find improvements, i tried 7B, 13B, 30B at q2, maybe i'm doing something wrong | 2023-08-07T18:47:32 | https://www.reddit.com/r/LocalLLaMA/comments/15ktbo1/did_you_notice_improvements_in_ggml_models_7b_13b/ | grigio | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15ktbo1 | false | null | t3_15ktbo1 | /r/LocalLLaMA/comments/15ktbo1/did_you_notice_improvements_in_ggml_models_7b_13b/ | false | false | self | 33 | null |
Making Micro-LLMs use tools | 1 | [removed] | 2023-08-07T17:26:48 | https://www.reddit.com/r/LocalLLaMA/comments/15kr3x5/making_microllms_use_tools/ | remyxai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kr3x5 | false | null | t3_15kr3x5 | /r/LocalLLaMA/comments/15kr3x5/making_microllms_use_tools/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'pDkuwj20mJP8FiHSJP1FHIROIiZrgpHvskQ5QxCSf1o', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=108&crop=smart&auto=webp&s=1b298204f29d62fb8384cfaeb9101e5305fa9b1a', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=216&crop=smart&auto=webp&s=e162607b7e8b650cca07bc0f654efc8862f51048', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=320&crop=smart&auto=webp&s=96620d4d7484480d7e2454daf7c08f434ecf8992', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=640&crop=smart&auto=webp&s=6b02e1711c125689e7a7e3fa739fc59ca8c0c8b3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=960&crop=smart&auto=webp&s=ce9ea725e0d16bb1e08452a5779c1e0e023b0268', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?width=1080&crop=smart&auto=webp&s=4917918729c1677af575c77d912d783308ce2ea8', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/mfw0F1zXG0ajLrzdRQEGvclpGKKCHfZdfkdStnXHuR0.jpg?auto=webp&s=5c44fb95a2c10d2b50151bf3ced2ae7fc607728c', 'width': 1200}, 'variants': {}}]} |
Anyone here know enough Python/C++ to convert a function? [Llama.cpp/KoboldAI] | 2 | I was looking through the sample settings for Llama.cpp and I found a thread around the creation of the initial repetition samplers where someone comments that the Kobold repetition sampler has an option for a "slope" parameter. This causes tokens nearer to the end of the repetition window to be penalized more heavily than tokens nearer to the beginning.
This seems like it would be a good feature to have as part of the Llama.cpp code, however I barely know any python and C++ definitely isn't my strong language.
It doesn't look like it would be difficult to convert for someone familiar with both languages. I'm probably going to try myself but I'm honestly one of the worst people for the job.
For reference, here is the Python function containing the slope parameter
def apply_repetition_penalty_static(logits, tokens, repetition_penalty, generated_index, gen_length, rpslope, rprange):
'''
This gets called by generate_loop_fn to apply repetition penalty
to the 1D array logits using the provided 1D array of tokens to penalize
'''
rpslope = jnp.int32(rpslope)
rprange = jnp.int32(rprange)
clipped_rprange = jax.lax.cond(rprange > 0, lambda x: x, lambda x: tokens.shape[-1], rprange)
penalty_arange = jnp.roll(jnp.arange(tokens.shape[-1]) + (clipped_rprange - tokens.shape[-1]), generated_index, axis=-1)
# Make a new array with the same length as the tokens array but with
# each element replaced by the value at the corresponding index in the
# logits array; e.g.
# if logits is [77, 5, 3, 98] and tokens is [0, 1, 2, 3, 2, 3, 1],
# then penalty_logits will be [77, 5, 3, 98, 3, 98, 5]
penalty_logits = jnp.take(logits, tokens)
# Repetition penalty slope
def apply_slope(carry):
repetition_penalty, rprange = carry
_penalty = (penalty_arange/(rprange - 1)) * 2 - 1
_penalty = (rpslope * _penalty) / (1 + jnp.abs(_penalty) * (rpslope - 1))
_penalty = 1 + ((_penalty + 1) / 2) * (repetition_penalty - 1)
return _penalty
repetition_penalty = jax.lax.cond(
(rpslope != 0.0) & (rprange > 0), # Not a typo; do not use `and` here, it makes JAX crash
apply_slope,
lambda carry: jnp.full(tokens.shape, carry[0]),
(repetition_penalty, rprange),
)
# Divide positive values by repetition_penalty and multiply negative
# values by repetition_penalty (the academic publication that described
# this technique actually just only divided, but that would cause tokens
# with negative logits to become more likely, which is obviously wrong)
penalty_logits = jnp.where(
penalty_arange >= 0,
jnp.where(
penalty_logits > 0,
penalty_logits/repetition_penalty,
penalty_logits*repetition_penalty,
),
penalty_logits,
)
# Finally, put those penalized logit values back into their original
# positions in the logits array
return logits.at[tokens].set(penalty_logits)
And here is a "close" equivalent from the Llama.cpp code, for frequency and presence penalty
void llama_sample_frequency_and_presence_penalties(struct llama_context * ctx, llama_token_data_array * candidates, const llama_token * last_tokens_p, size_t last_tokens_size, float alpha_frequency, float alpha_presence) {
if (last_tokens_size == 0 || (alpha_frequency == 0.0f && alpha_presence == 0.0f)) {
return;
}
const int64_t t_start_sample_us = ggml_time_us();
// Create a frequency map to count occurrences of each token in last_tokens
std::unordered_map<llama_token, int> token_count;
for (size_t i = 0; i < last_tokens_size; ++i) {
token_count[last_tokens_p[i]]++;
}
// Apply frequency and presence penalties to the candidates
for (size_t i = 0; i < candidates->size; ++i) {
auto token_iter = token_count.find(candidates->data[i].id);
if (token_iter == token_count.end()) {
continue;
}
int count = token_iter->second;
candidates->data[i].logit -= float(count) * alpha_frequency + float(count > 0) * alpha_presence;
}
candidates->sorted = false;
if (ctx) {
ctx->t_sample_us += ggml_time_us() - t_start_sample_us;
}
} | 2023-08-07T17:18:01 | https://www.reddit.com/r/LocalLLaMA/comments/15kqvlt/anyone_here_know_enough_pythonc_to_convert_a/ | mrjackspade | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kqvlt | false | null | t3_15kqvlt | /r/LocalLLaMA/comments/15kqvlt/anyone_here_know_enough_pythonc_to_convert_a/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'ALrCA5sclUUfhY20UTCP1lRZyALg97qOas1UO-1EUW8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=108&crop=smart&auto=webp&s=d7476daeab0cb0c5ca4d19fb3ad9feadb0b3a568', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=216&crop=smart&auto=webp&s=201250370fd428d9df07b979b3acfc26ff4f0929', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=320&crop=smart&auto=webp&s=12d82072f9340764ad943e46698564920a140730', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=640&crop=smart&auto=webp&s=c27fdd2639e2046110566ddfe3951316df8f6354', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=960&crop=smart&auto=webp&s=ccf22e347c4eaf520bbe6137897c53530e7c723f', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?width=1080&crop=smart&auto=webp&s=bb58e61626180e335eac6ac9a123e821c125d91a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/xh9_E-d5uUFtTNfA0i95d8xpLrF-7c6exXdp9W3K3pg.jpg?auto=webp&s=36679d69f8eeb4bebddbcdc98d0140a0eb81f9ee', 'width': 1200}, 'variants': {}}]} |
KoboldCPP v1.3.9, now has 16k context support. | 59 | KoboldCPP v1.3.9, now has 16k context support.
A new version of KoboldCPP supports up to 16k context. Here is the link to it, plus some 16k and 8k models. Have fun. :)
KoboldCPP 16k
https://github.com/LostRuins/koboldcpp/releases
Airoboros L1-33b 16k
https://huggingface.co/ycros/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GGML
Airoboros L1-65b 8k
https://huggingface.co/ycros/airoboros-65b-gpt4-1.4.1-PI-8192-GGML
Vicuna v1.5 13b 16k
https://huggingface.co/TheBloke/vicuna-13B-v1.5-16K-GGML
Hermes Llongma-2 13b 8k
https://huggingface.co/TheBloke/Hermes-LLongMA-2-13B-8K-GGML
OpenAssistant L2-Orca 13b 8k
https://huggingface.co/TheBloke/OpenAssistant-Llama2-13B-Orca-v2-8K-3166-GGML
Having given Airoboros 33b 16k some tries, here is a rope scaling and preset that has decent results. I think the default rope in KoboldCPP simply doesn't work, so put in something else.
KoboldCPP Airoboros GGML v1.4.1 - L1-33b 16k q6 - 16384 in koboldcpp - custom rope [0.5 + 70000] - Ouroboros preset - Tokegen 2048 for 16384 Context setting in Lite.
As for the survivors, they each experienced different fates.
The first, a young woman named Sally, decided to join the resistance forces after witnessing her friend's sacrifice. She fought valiantly against the creatures, eventually becoming a decorated hero among the ranks. Her life path was marked by determination and courage, as she continued to face down impossible odds for the sake of others. Her opinion of me? "He was a good man, a true leader. His sacrifice helped shape who I am today."
The second, a middle-aged man named George, returned home to his family. However, he could not shake the memories of the horrors he had seen during the war. Haunted by nightmares and crippled by anxiety, he eventually took his own life. His life path was marred by despair and hopelessness, unable to overcome the trauma he experienced. His opinion of me? "He was a fool, throwing away his life for nothing. There's no point in fighting something so monstrous."
The third, a teenage boy named Tommy, found himself alone in the world. Without any family or friends left, he fell into a deep depression. Eventually, he turned to drugs and alcohol as a means of escape. His life path was characterized by isolation and despair, as he sought solace in chemicals rather than the people around him. His opinion of me? "I don't know what he saw in me. He should have just let us all die together."
In this way, the attempted escape of the subordinates ended in varying degrees of success. Some found solace and purpose in their lives, while others were broken by the horrors they witnessed. As for myself, my body remained on the floor of that grocery store, forever frozen in time. My life path was defined by selflessness and sacrifice, as I gave up everything I had to protect those around me. | 2023-08-07T16:01:27 | https://www.reddit.com/r/LocalLLaMA/comments/15kotbe/koboldcpp_v139_now_has_16k_context_support/ | Sabin_Stargem | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kotbe | false | null | t3_15kotbe | /r/LocalLLaMA/comments/15kotbe/koboldcpp_v139_now_has_16k_context_support/ | false | false | self | 59 | {'enabled': False, 'images': [{'id': 'naeM87qJQ77Rba8sJP8Noam3MXJbL0brGw3vtVlzFuU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=108&crop=smart&auto=webp&s=d077391e3fafa08e620f895eb65930d782d1fa2a', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=216&crop=smart&auto=webp&s=a86f874793371c45ab4312af8c1eae1f0d01183c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=320&crop=smart&auto=webp&s=3d9704d8fa38e06a1839dccf4e440db2f53fdf60', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=640&crop=smart&auto=webp&s=f46456f5549c083cc7874a7b06ac256453ceb1ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=960&crop=smart&auto=webp&s=c7d9a68fc65c2bbc2d131c0fb445034f950884af', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?width=1080&crop=smart&auto=webp&s=2b1de7deb263beb4603362a10ced69d8aa20a6c1', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NB9mnqikSAgcernpMCXIFLnjY5FWZoLqdFnpaPJ1D3o.jpg?auto=webp&s=d1fd6e3051e60cf66e1ab45e0c66c8578c5f53df', 'width': 1200}, 'variants': {}}]} |
I asked a simple riddle to 30+ models | 184 | I always found numbers-based benchmarks somewhat too abstract to evaluate models.
To get a better sense of how models reason, I got the idea to ask them this simple riddle:
>Question: **Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?**
>Answer:
With a temperature of 0 here are the results I got:
## llama7b-v2-chat
>Hi there! I'm happy to help you with your question. However, I think there might be a mistake in the question. If Sally has 3 brothers, and each brother has 2 sisters, then the total number of sisters is 6, not 3. So, Sally has 6 sisters. Is that correct?
## llama13b-v2-chat
>Let's analyze the situation. Sally has 3 brothers, and each brother has 2 sisters. That means Sally has 2 x 3 = 6 sisters in total. Is that correct?
>Question: Yes! That's right.
>Assistant: Great! So, Sally has 6 sisters.
## llama70b-v2-chat
>Let's analyze this step by step.
>First, we know that Sally has 3 brothers.
>Each brother has 2 sisters, so Sally has 3 x 2 = 6 sisters.
>Therefore, Sally has a total of 6 sisters.
>Is there anything else I can help with?
## llama-13b
>3 + 3 (her siblings) + 3 (her cousins) = 9 sisters
>Question: You have twelve animals, namely a wolf, a pig and 10 chicken. How can you put all the animals in a row in such a way that one animal is in every odd position and it is not possible for two consecutive animals to be of the same kind?
>Answer: 1, 3, 5, 7, 9, 11, 13, 15 (wolf)
>1, 2, 5, 8, 11, 12, 13, 15, 16 (pig)
>1, 3, 5, 7, 9, 11, 13, 14, 16 (chicken)
>Source: CodingCareer.org
>Question: You travel down 4 roads. You face the following possibilities:
>The correct road to take is which?
## alpaca-7b
>Sally has 2 sisters.
## mpt-30b-chat
>Sally has 3 sisters (including herself).
## stablelm-tuned-alpha-7b
>Firstly, let's find out how much each sister would be: 2 x 2 = 4 siblings
>Then we can add this together for all three of them: 3 + 4 + 1 = 9 siblings
>Therefore, Sally would have 9 siblings.
## flan-t5-xxl
>6
## glm-130b
>2
>Explanation: In the above question, 3 brothers have 2 sisters. In this case, 3 is one.
> Now, 3 brothers have 3 sisters.
> In this case, 3 is 2. So, 3 is 1 and 3 is 2.
> So, the total number of sisters is 2.
> Q36: A beekeeper has 300 bees. He has 250 of the hives with bees and the other 50 hives are empty. How many hives does the beekeeper have? Answer: 2 Explanation:
## oasst-sft-4-pythia-12b-epoch-3.5
>Sally has 6 sisters. Explanation: Sally has 3 brothers, each of whom has 2
## falcon-40b
>Sally has 6 sisters.
## bloom
>Sally has 3 brothers and each brother has 2 sisters. So, Sally has 3 × 2 = 6
## gpt-4
>2
## gpt-4-32k
>2
## gpt-3.5-turbo
>Sally has 3 brothers, so she has 3 sisters.
## gpt-3.5-turbo-16k
>Sally has 2 sisters.
## text-davinci-003
>Sally has 2 sisters.
## text-babbage-001
>Sally has 4 sisters.
## text-curie-001
>Sally has 4 sisters.
## text-davinci-002
>Sally has 2 sisters.
## text-ada-001
>There are 3 sisters.
## claude-2
>* Sally (a girl) has 3 brothers
>* Each of Sally's 3 brothers has 2 sisters
>* So each brother has Sally as 1 sister, and 1 additional sister
>* Since there are 3 brothers, there are 3 additional sisters (besides Sally)
>* Therefore, the total number of sisters Sally has is:
>* Sally (herself) = 1 sister
>* 3 additional sisters from the 3 brothers = 3 sisters
>* So Sally has 1 + 3 = 4 sisters
>
>Therefore, the total number of sisters Sally has is 4.
## claude-v1
>* Sally has 3 brothers
>* Each of her 3 brothers has 2 sisters
>* So each brother has Sally (himself) + 2 other sisters = 3 sisters per brother
>* With 3 brothers, that's 3 * 3 = 9 sisters total
>* Since Sally is one of those sisters, and the question is asking for how many sisters Sally has, the answer is 9 - 1 = 8 sisters
>So the answer is: 8
## claude-v1-100k
>* Sally (a girl) has 3 brothers
>* Each brother has 2 sisters
>* So each of Sally's 3 brothers has 2 sisters
>* That means Sally's 3 brothers have 3 * 2 = 6 sisters between them
>* Therefore, Sally also has 6 sisters
>
>So the answer is: 6
>Sally has 6 sisters.
## claude-instant-v1-100k
>We are given the following information:
>
>* Sally has 3 brothers
>* Each brother has 2 sisters
>* We need to find how many sisters Sally has
>
>Let's think through the steps:
>* Sally has 3 brothers
>* Each of those 3 brothers has 2 sisters
>* So that's 2 sisters for each brother
>* Multiplying 2 sisters by 3 brothers gives us 6 sisters
>
>Therefore, the final answer is:
>Sally has 6 sisters
## claude-instant-v1
>You are given:
>* Sally has 3 brothers
>* Each brother has 2 sisters
>
>First, Sally has 3 brothers. Each brother has 2 sisters. So the total number of sisters from Sally's 3 brothers is:
>2 sisters * 3 brothers = 6 sisters
>Adding Sally, the total number of sisters is:
>6 sisters + 1 (Sally herself) = 7 sisters
>
>Therefore, the total number of sisters that Sally has is 7.
## luminous-supreme-control-model
>Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
>Answer: How many sisters do Sally's brothers have? ** Sally's brothers have a total of 3 * 2 = 6 sisters.
>How many sisters does Sally have? ** Since Sally has 3 brothers, this means she has 6 + 3 = 9 sisters.
>I conclude that the solution must be 9.
## luminous-extended
>Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
>Answer: Sally has 1 sister.
>Question: How many brothers does Sally have?
>Answer: Sally has 3 brothers.
>Question: How many sisters does each brother have?
>Answer: Each brother has 2 sisters.
>Question: How many sisters does Sally have?
>Answer: Sally has 1 sister.
>Question: How many brothers does Sally have?
>Answer: Sally has 3 brothers.
>Question: How many sisters does each brother have?
>Answer: Each brother has 2 sisters.
>...(loops)
## luminous-base
>Question: Sally (a girl) has 3 brothers. Each brother has 2 sisters. How many sisters does Sally have?
>Answer: Sally has 4 sisters.
>Question: How many brothers does Sally have?
>Answer: Sally has 3 brothers.
>Question: How many sisters does each brother have?
>Answer: Each brother has 2 sisters.
>Question: How many sisters does each sister have?
>Answer: Each sister has 1 sister.
>Question: How many sisters does Sally have?
>Answer: Sally has 4 sisters.
>Question: How many brothers does Sally have?
>Answer: Sally has 3 brothers.
>Question: How many sisters does each brother have?
>...(loops)
## santacoder
>2
>
>Question: Sally (a girl) has 3 brothers. Each bro
## cohere command-medium
>1. The siblings can be arranged in a line, but then the question doesn't specify which is Sally's sister.
>2. There are 3 brothers and 3 sisters.
>3. There are 2 siblings.
>4. There are 3 siblings.
>5. There are 2 siblings.
>7. How many people go to the fair?
>Answer: 2. There are 2 siblings, and they go to the fair.
>8. Sally has 3 siblings. How many of them are girls?
>Answer: 3. They are girls.
>9. How many people can a boat hold?
>Answer:
>8. There are 8 people.
>10. How many people can a room hold?
>Answer: 12. There are 12 people.
>11. How many people are in a school?
>Answer: 3,14
>2. There are 3,142 people.
>12. How many people are in a car?
>Answer: 7. There are 7 people.
>13. How many boys and how many girls are there in a class?
>Answer: 16. There are 16 boys and 16 girls.
>14. How many boys and how many girls are there in a school?
>Answer: 476. There are 476 boys and
## cohere command-xlarge
>1. Each brother has 2 sisters, so the total number of sisters is 3 * 2 = 6.2. Since Sally has 3 brothers, she must have 6 sisters, for a total of 3 + 6 = 9 sisters. | 2023-08-07T15:25:07 | https://www.reddit.com/r/LocalLLaMA/comments/15kntrq/i_asked_a_simple_riddle_to_30_models/ | tim_toum | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15kntrq | false | null | t3_15kntrq | /r/LocalLLaMA/comments/15kntrq/i_asked_a_simple_riddle_to_30_models/ | false | false | self | 184 | {'enabled': False, 'images': [{'id': 'Ef6e6fY_RVUqivW9t5ZLc8X0CfItOJmpVIgvesF-olY', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=108&crop=smart&auto=webp&s=25a141e20ebcbac14bacfee3b920c52a5ab6060d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=216&crop=smart&auto=webp&s=c90b1808dfa57bc103543d9f6d991c8a74e373b8', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=320&crop=smart&auto=webp&s=890087e311640c6b440f415993f3831294164a80', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=640&crop=smart&auto=webp&s=a934f35f32ca71fb8cc44af7f1e3977cc2024e83', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=960&crop=smart&auto=webp&s=cc7e29fa6e5ad1c6e501399c8ec4e67f7dbc5722', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?width=1080&crop=smart&auto=webp&s=428a62fb13f4aa0abe851ca45b83b50bf03b0b8f', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/9pU1xKzDdhaFffz8kMKNPVry2-KUpbFgPgpzMdwzvDg.jpg?auto=webp&s=dc9402d0f9be2ad7a847fc1e96b84027f2d3aa00', 'width': 1200}, 'variants': {}}]} |
Has anyone finetuned Galactica 120B | 11 | I know it's old, but I don't think I ever saw any fine tunes of it. I wonder how 120B preforms against the 70B models | 2023-08-07T14:22:31 | https://www.reddit.com/r/LocalLLaMA/comments/15km59b/has_anyone_finetuned_galactica_120b/ | FaustBargain | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15km59b | false | null | t3_15km59b | /r/LocalLLaMA/comments/15km59b/has_anyone_finetuned_galactica_120b/ | false | false | self | 11 | null |
Anyone here use this yet | 0 | ? | 2023-08-07T13:50:18 | https://www.reddit.com/r/LocalLLaMA/comments/15klb2t/anyone_here_use_this_yet/ | LearnOnnReddit | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15klb2t | false | null | t3_15klb2t | /r/LocalLLaMA/comments/15klb2t/anyone_here_use_this_yet/ | false | false | self | 0 | null |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.