How to increase loading speed of tokenizer? | 1 | Hi people, I was wondering how I can speed up tokenizer loading. Currently it takes about 3 minutes to load the tokenizer for lmsys/vicuna-7b-v1.3. Thank you!
The code in question:
`model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-7b-v1.3", device_map="balanced", torch_dtype=torch.bfloat16, use_cache=True)`
`tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.3", use_fast=True)` | 2023-07-24T06:54:18 | https://www.reddit.com/r/LocalLLaMA/comments/1582sm3/how_to_increase_loading_speed_of_tokenizer/ | ToeAdministrative493 |
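One way to avoid paying the load cost on every run (a sketch, assuming disk space for a local copy; the slow part is often converting the SentencePiece tokenizer to a fast one on each load, and the directory name below is just an example):

```python
from transformers import AutoTokenizer

local_dir = "./vicuna-7b-tokenizer"  # hypothetical local path

# One-time: this load pays the slow conversion cost, then saves the
# resulting fast tokenizer (tokenizer.json) to disk.
tok = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.3", use_fast=True)
tok.save_pretrained(local_dir)

# Subsequent runs: load the already-converted fast tokenizer directly.
tok = AutoTokenizer.from_pretrained(local_dir, use_fast=True)
```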
How much RAM is needed for llama-2 70b + 32k context? | 44 | Hello, I'd like to know whether 48, 56, 64, or 92 GB is needed for a CPU setup.
Supposedly, with exllama, 48 GB is all you'd need for 16k. It's possible GGML may need more.
Along with this I have other questions; feel free to read through some testing I was doing.
I was testing llama-2 70b (q3_K_S) at 32k context, with the following arguments: `-c 32384 --rope-freq-base 80000 --rope-freq-scale 0.5`
These seem to be the settings for 16k. Since Llama 2 has double the context of Llama 1 and runs normally without RoPE hacks, I kept the 16k settings.
In the end, it gave some summary in bullet points as asked, but it broke off, and many of the words were slang, like it was drunk.
I asked for a summarization of the entire LoRA paper, which took ~30,000 tokens and a few hours (non-cuBLAS build).
Beforehand, I got 1.4 t/s with those arguments and without the prompt. After processing the LoRA paper, it was 0.15 t/s and probably using my 2 GB of swap memory.
Q: Does the cuBLAS build require more RAM? A cuBLAS build (to speed up prompt processing) loading the model at 512 ctx would decrease my q3_K_S speed from 1.4 t/s to 0.2 t/s, so it seems to be using swap. I didn't offload layers to the GPU.
Q: Can the cuBLAS build use the GPU as extra RAM? Even if it didn't provide any speed increase, I would still be OK with this; I have a 24 GB 3090, and 24 GB VRAM + 32 GB RAM = 56 GB.
Also, I wanted to know the minimum CPU needed: CPU tests show 10.5 t/s on my desktop AMD CPU with 7b q4_K_M, so I assume 70b will be at least 1 t/s, as the model is ten times larger. I know that RAM bandwidth will cap tokens/s, but I assume this is a good test. A CPU at 4.5 t/s on 7b, for example, will probably not run 70b at 1 t/s. | 2023-07-24T06:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/15825bt/how_much_ram_is_needed_for_llama2_70b_32k_context/ | Aaaaaaaaaeeeee |
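For a rough sense of where the extra memory goes at long context, the fp16 KV cache alone can be estimated from the model dimensions. A back-of-the-envelope sketch (the Llama-2 70B numbers below — 80 layers, 8 KV heads via GQA, head dim 128 — come from the published architecture):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    # Two cached tensors (K and V) per layer, each of shape
    # [ctx_len, n_kv_heads, head_dim], at bytes_per_elem per value (2 = fp16).
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Llama-2 70B: 80 layers, 8 KV heads (GQA), head_dim 128, 32k context
print(kv_cache_bytes(80, 8, 128, 32768) / 2**30)  # 10.0 GiB on top of the weights
```

So at full 32k context you'd budget roughly 10 GiB for the fp16 cache on top of whatever the quantized weights take.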
[Request] A tracker for all 'useful' llama applications updated every week | 26 | I'm not sure how useful this would be for others, but like most of you, I'm a subscriber to this sub, and every day I see a lot of amazing work. Just in the past week, I've seen what llama can do: analyzing your private texts to create your 'personal GPT' (albeit not as effective as ChatGPT, of course), then just recently, baby llama from Andrej Karpathy, and others.
Some references below:
https://www.reddit.com/r/LocalLLaMA/comments/154to1w/i_trained_the_65b_model_on_my_texts_so_i_can_talk/
https://www.reddit.com/r/LocalLLaMA/comments/154rqay/llama2_7b_uncensored_qlora_finetune_on_wizard/
https://github.com/bublint/ue5-llama-lora
https://www.reddit.com/r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/
If it would be helpful to the community, a request (if someone has the time): compile a "top applications of the week" list covering programs built on Llama (similar to this one: https://old.reddit.com/r/LocalLLaMA/wiki/models --> but more specific to the applications we can build, considering the model, the code, and the instructions are all there). I know it sounds lazy to ask and begs the question "why not do it on your own?"; maybe it's because of limited time. Not sure if this idea is useful for r/LocalLLaMA.
Thanks | 2023-07-24T05:27:03 | https://www.reddit.com/r/LocalLLaMA/comments/15817x3/request_a_tracker_for_all_useful_llama/ | vlodia |
Is it possible to run multiple models simultaneously yet? If so what kind of hardware would I need to be able to pull it off? | 16 | From what I heard, GPT-4 is a multiple-guys-in-a-trenchcoat situation, with one large model that calls upon pre-trained smaller models when it needs them. I was wondering if it was possible to do something like that locally yet, or if that's still pretty far off. I was also curious what it would cost to build a server capable of that. | 2023-07-24T05:26:25 | https://www.reddit.com/r/LocalLLaMA/comments/15817ia/is_it_possible_to_run_multiple_models/ | sephy009 |
Any tricks to "convince" LLama 2 to skip the polite introduction? | 14 | I'm playing around with the 7b/13b chat models.
I'm trying to write a system prompt so that I can get some "sanitized" output from the model. e.g.
Respond with a response in the format requested by the user. Do not include any other text or reasoning. ONLY include the response in the requested format.
So a given user input, let's say it's
A short story about a fish
Should ideally end up with the model spitting out a short story about a fish.
However, the actual output is always formatted more like the following:
Sure! Here is a short story about a fish in the format requested:
The fish darted through the aquarium, its scales glimmering in the bright lights. [...]
Ideally, I don't want to resort to trying to trim the "Polite introduction" from the output, as it's always formatted slightly differently.
Does anyone know any tricks or strategies I could use to convince the model to output only the important content? | 2023-07-24T04:35:58 | https://www.reddit.com/r/LocalLLaMA/comments/158097f/any_tricks_to_convince_llama_2_to_skip_the_polite/ | sqiddster |
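One trick that often works here is response priming: end the prompt with the first words of the answer you want, so the model continues mid-sentence instead of opening with a preamble. A minimal sketch using the Llama-2 chat template as published by Meta (the system prompt and primer text are just examples):

```python
def build_primed_prompt(system_prompt, user_msg, primer):
    # Llama-2 chat format; the primer is placed at the start of the
    # assistant turn, so generation continues mid-answer instead of
    # opening with "Sure! Here is ...".
    return (
        f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
        f"{user_msg} [/INST] {primer}"
    )

prompt = build_primed_prompt(
    "Respond only with the content requested by the user.",
    "A short story about a fish",
    "The fish",  # hypothetical primer: the first words you want back
)
```

Feed `prompt` to the model as a raw completion (not through a chat wrapper that re-templates it), and remember to prepend the primer back onto the generated text.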
[Question] When should you use prefix tuning over LoRA? | 7 | I saw prefix tuning mentioned in some articles. Its advantage seems to be that you only fine-tune a small set of parameters, so it is faster and you need less data. That sounds pretty similar to LoRA, but I rarely see it used to fine-tune LLMs or Stable Diffusion models. Can someone please tell me when I should use it? Thanks. | 2023-07-24T03:38:05 | https://www.reddit.com/r/LocalLLaMA/comments/157z4md/question_when_should_you_use_prefix_tuning_over/ | saintshing |
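One way to build intuition for the trade-off is to compare trainable-parameter counts for the two methods. A back-of-the-envelope sketch (all dimensions and defaults below are illustrative assumptions for a LLaMA-7B-like model, not values read from any config):

```python
HIDDEN = 4096   # assumed hidden size (LLaMA-7B-like)
LAYERS = 32     # assumed number of transformer blocks

def lora_params(r=8, targets_per_layer=2):
    # Each adapted HIDDEN x HIDDEN weight gets A (HIDDEN x r) and B (r x HIDDEN);
    # targets_per_layer=2 models adapting q_proj and v_proj only.
    return (r * HIDDEN + r * HIDDEN) * targets_per_layer * LAYERS

def prefix_params(virtual_tokens=30):
    # One learned key and one learned value vector per virtual token, per layer.
    return virtual_tokens * LAYERS * 2 * HIDDEN

print(lora_params())     # 4,194,304 trainable parameters
print(prefix_params())   # 7,864,320 trainable parameters
```

Both are tiny fractions of a 7B model; the bigger practical difference is that prefix tuning consumes part of your usable context window (the virtual tokens), while LoRA leaves the context untouched, which is one reason LoRA dominates in practice.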
Suggestions for hosting solutions for custom pytorch code? | 5 | Say I have a python script that loads LLaMA, but doesn't just provide vanilla LLaMA inference.
For example, I have an endpoint that returns the probability for the last token in a sequence.
Or maybe I want to run an inference server that accepts Guidance templates.
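For the last-token-probability endpoint mentioned above, the core computation is just a softmax over the final position's logits. A dependency-free sketch (the function name and plain-list input are illustrative; in practice the input would come from something like `model_output.logits[0, -1].tolist()`):

```python
import math

def last_token_prob(logits, token_id):
    # Numerically stable softmax over the final position's vocabulary
    # logits, returning the probability of a single token id.
    m = max(logits)  # subtract the max before exponentiating
    exps = [math.exp(x - m) for x in logits]
    return exps[token_id] / sum(exps)

print(last_token_prob([0.0, 0.0, 0.0, 0.0], 2))  # 0.25
```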
As far as I can tell, this necessitates renting a GPU on something like vast.ai or tensordock.com. | 2023-07-24T02:25:00 | https://www.reddit.com/r/LocalLLaMA/comments/157xlxe/suggestions_for_hosting_solutions_for_custom/ | phree_radical |
Arabic prompt LLM models request | 1 | I'm looking for a reasonably sized LLM that understands RTL languages, especially Arabic. It should accept questions directly in Arabic and be able to read a locally stored folder full of files in Arabic.
I found the BLOOM model, but it's a giant model; even with TheBloke's fine-tuned releases, I could not host it locally, as it is not compatible with the GPT4All interface.
Please help by pointing me to Arabic LLM models. | 2023-07-24T02:22:55 | https://www.reddit.com/r/LocalLLaMA/comments/157xkhr/arabic_prompt_llm_models_request/ | qwani |
Finetuning 4 bit LLMs | 9 | Hi I want to finetune a LLM and I have been using mostly 4 bit models for inference like this for example: [https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-13B-chat-GPTQ)
and I wanted to understand how I can fine tune this. From what I have found, one can finetune a 4bit base model such as the above, using this repo: [https://github.com/johnsmith0031/alpaca\_lora\_4bit/tree/winglian-setup\_pip](https://github.com/johnsmith0031/alpaca_lora_4bit/tree/winglian-setup_pip)
But if I am understanding this correctly, according to this article from HF: [https://huggingface.co/blog/4bit-transformers-bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes) one cannot directly train a 4 bit model and can instead build a lora around it:
>However, you can train these models by leveraging parameter efficient fine tuning methods (PEFT) and train for example adapters on top of them.
And we can do this by using bitsandbytes.
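For the adapter route the HF article describes, a minimal QLoRA-style setup sketch (the model id, target modules, and hyperparameters are illustrative; note it starts from the original fp16 checkpoint and quantizes on load with bitsandbytes NF4, rather than loading the GPTQ file):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization applied at load time by bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-chat-hf",   # fp16 base weights, not the GPTQ repo
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Train only small LoRA adapters on top of the frozen 4-bit base
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
```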
Furthermore, another way is to take an fp16 (or int8?) model, fine-tune it, and then quantize it to 4-bit.
Am I missing something here? And what is the preferred method? | 2023-07-24T02:05:38 | https://www.reddit.com/r/LocalLLaMA/comments/157x758/finetuning_4_bit_llms/ | klop2031 |
Unleashing the Power of Language Learning Models with Llama2 Code Interpreter! | 102 | Hello Reddit community! 🚀
We're excited to introduce our latest project, [**Llama2 Code Interpreter**](https://github.com/SeungyounShin/Llama2-Code-Interpreter). This innovative project aims to push the boundaries of Large Language Models (LLMs) by empowering them to generate, execute, and debug code, and answer relevant queries.
With Llama2, we're making coding a part of the LLM's repertoire. We believe this approach can vastly extend the capabilities of LLMs in a practical and efficient way.
Our ultimate vision aligns with the web version of [**Voyager**](https://github.com/MineDojo/Voyager). We aim to create a system that records and reuses coding processes to enhance the model's overall performance and allow for more complex tasks.
Check out our project on [**GitHub**](https://github.com/SeungyounShin/Llama2-Code-Interpreter) and let us know what you think! We value your feedback. | 2023-07-24T01:09:06 | https://www.reddit.com/r/LocalLLaMA/comments/157vzq6/unleashing_the_power_of_language_learning_models/ | mrjohndoee |
Python bindings for GGML | 21 | I think this repo https://github.com/marella/ctransformers deserves more stars. I find the API more comprehensive than GPT4All's, and the repo itself is less bloated. | 2023-07-24T00:12:52 | https://www.reddit.com/r/LocalLLaMA/comments/157us74/python_bindings_for_ggml/ | Acrobatic-Site2065 |
Evolved codealpaca dataset released | 23 | Using LLMs to augment instruction datasets into much more diverse ones has seen wide success with WizardLM. However, the 78k evolved code instructions dataset hasn't been released since, so I have taken the initiative to try to recreate the augmentation instructions myself.
Dataset: [https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1)
| 2023-07-23T22:54:34 | https://www.reddit.com/r/LocalLLaMA/comments/157szor/evolved_codealpaca_dataset_released/ | gradientpenalty |
LlamaCpp can't handle the same parameters as the .bin run with koboldcpp-henk-cuda.exe | 3 | Hey guys
New to LocalLlama (love it)!
I am currently trying to run this model within a .py file to return a response based on a new article.
    # Imports assumed from langchain and llama_index (mid-2023 APIs)
    from langchain.llms import LlamaCpp
    from llama_index import (GPTVectorStoreIndex, LLMPredictor, PromptHelper,
                             ServiceContext, SimpleDirectoryReader)

    llm = LlamaCpp(
        model_path='airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M.bin',
        verbose=True,
        n_ctx=1024,
        n_threads=18,
        n_gpu_layers=18,
        temperature=0.2,
    )
    llm_predictor = LLMPredictor(llm=llm)

    max_input_size = 1024       # context_window (int): context window for the LLM
    num_output = 120            # num_output (int): number of output tokens
    chunk_overlap_ratio = 0.1   # chunk overlap as a ratio of chunk size
    chunk_size_limit = 1024     # maximum chunk size to use

    # Create a PromptHelper instance with the relevant arguments
    prompt_helper = PromptHelper(context_window=max_input_size,
                                 num_output=num_output,
                                 chunk_overlap_ratio=chunk_overlap_ratio,
                                 chunk_size_limit=chunk_size_limit)

    # Create a ServiceContext instance with the custom predictor and prompt helper
    service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor,
                                                   prompt_helper=prompt_helper)

    # Load the documents recursively
    documents = SimpleDirectoryReader('./data', recursive=True,
                                      exclude_hidden=True).load_data()
    index = GPTVectorStoreIndex.from_documents(documents,
                                               service_context=service_context)

    # Query the index
    query_engine = index.as_query_engine()
    response = query_engine.query("From the articles, how should I speak?")
    print(response)
When I run this batch file, it seems to work fine with koboldcpp-henk-cuda.exe. However, when I want to load some new data into the model with the Python script above, it seems to crash my computer (the screen goes dead).
Batch file using koboldcpp-henk-cuda.exe:
    title airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M
    :start
    koboldcpp-henk-cuda ^
    --model airoboros-7b-gpt4-1.4.ggmlv3.q5_K_M.bin ^
    --useclblast 0 0 ^
    --gpulayers 18 ^
    --threads 18 ^
    --smartcontext ^
    --usemirostat 2 0.1 0.1 ^
    --stream ^
    --launch
    pause
    goto start /min
Does anyone have an explanation for this? It seems to work when I set the threads and gpulayers to None; however, it's incredibly slow.
Does anyone have any ideas where I can find a solution to this, or have you run into this issue in the past? | 2023-07-23T22:34:59 | https://www.reddit.com/r/LocalLLaMA/comments/157sjbd/llamacpp_cant_handle_same_parameters_then_bin/ | ItsGoodAs |
Quantization naming conventions? | 12 | I have been following the community for a while and have seen a lot of models being quantized. There seems to be some agreed naming convention, and I have trouble understanding where it came from and what it means. I am talking about names such as q3_K_S and q4_K_M. I understand that the q number stands for the number of bits, but what do the other letters stand for?
Does anyone have any good posts, papers, or articles that will help me understand? And any other ways to understand how different variations of quantization affect perplexity, inference speed, and memory consumption? | 2023-07-23T22:19:26 | https://www.reddit.com/r/LocalLLaMA/comments/157s5m5/quantization_naming_conventions/ | iLaurens |
Best GPT4-like | 0 | I would love to have something similar to GPT-4 or Bing's implementation, but offline.
Specs: MacBook Pro 14 M1 Pro / 16 GB RAM | 2023-07-23T20:22:51 | https://www.reddit.com/r/LocalLLaMA/comments/157p77l/best_gpt4like/ | Neural_Phoenix |
Multi GPU recommendations | 3 | I need a multi GPU recommendation.
What would be a good setup for the local Llama2:
I have:
10 x RTX 3060 12 GB
4 x RTX 3080 10 GB
8 x RTX 3070 Ti 8 GB
I know it would probably be better if I could sell those GPUs and buy 2 x RTX 3090, but I really want to keep them because it's too much hassle.
If I could ask you guys for the best setup using this hardware for running local LLMs, it would really help me.
Thanks | 2023-07-23T19:45:21 | https://www.reddit.com/r/LocalLLaMA/comments/157o87t/multi_gpu_recommendations/ | scout_with_beard |
Small explanation question! Types of LLAMA | 9 | What do SuperCOT and SuperHOT mean? And what do the suffixes GPTQ, GGML, fp16, and CUDA mean?
For context, I'm trying to use (for the first time) the ExLlama loader within webui, but it doesn't seem to work for some models :) | 2023-07-23T19:45:03 | https://www.reddit.com/r/LocalLLaMA/comments/157o7uc/small_explanation_question_types_of_llama/ | Real_Experience_5676 |
Is llama-2 4k context out of the box? | 1 | [removed] | 2023-07-23T19:41:03 | https://www.reddit.com/r/LocalLLaMA/comments/157o41b/is_llama2_4k_context_out_of_the_box/ | Necessary_Ad_9800 |
Can I save the llama 2 7B model to my google drive from colab? Looking for ways to not download it every single time. | 9 | TIA | 2023-07-23T19:30:14 | https://www.reddit.com/r/LocalLLaMA/comments/157ntpq/can_i_save_the_llama_2_7b_model_to_my_google/ | CraftAggressive1133 |
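One way to do this (a sketch, assuming `huggingface_hub` is available in the Colab runtime; the Drive path and repo id are examples):

```python
from google.colab import drive
from huggingface_hub import snapshot_download

drive.mount('/content/drive')

# One-time: snapshot the model repo into Drive (the path is an example).
local_dir = '/content/drive/MyDrive/models/Llama-2-7b-hf'
snapshot_download(repo_id='meta-llama/Llama-2-7b-hf', local_dir=local_dir)

# In later sessions, mount Drive again and point from_pretrained at
# local_dir instead of the hub id, so nothing is re-downloaded.
```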
LlamaTale v0.51 "The Prancing Llama" (MUD + LLM) | 29 | Hi all.
I've kept working on integrating local LLM and MUD (more like interactive fiction for now) and rebranded my fork to something more suitable.
[https://github.com/neph1/LlamaTale/releases/tag/v0.51](https://github.com/neph1/LlamaTale/releases/tag/v0.51)
Those who saw the [first version](https://www.reddit.com/r/LocalLLaMA/comments/152w71n/mud_llm_for_a_stronger_roleplaying_experience/) know that I pretty much replaced everything output to the player with text generated by an LLM. There was no way of properly communicating with NPCs, as everything they said could just as well be replaced by a descriptive text.
So I've implemented a richer NPC class, which takes both an appearance and a personality description, and added a new dialogue prompt that incorporates a 'character card'. The result is much more vivid conversations.
Since the original demo wasn't well suited, I created my own: enter "The Prancing Llama", a place to meet some interesting characters. Scroll down for a teaser.
What's next?
* While dialogues can be interesting, they really don't lead anywhere. Nothing is fed back to the game. +1 for 'chat based RP'. I have one week left of vacation to figure out a way for them to drive the game. I think it will involve a third evaluator prompt. More inference time...
* I'd also like to explore LLM-based agents in a MUD. I'm sure many of you are aware of [Nvidia's Minecraft bot](https://interestingengineering.com/innovation/gpt-4-inside-minecraft-bot).
* I'd love to make it easier to test this out, either by putting it in the cloud somewhere, or host it myself and move the inference to the cloud. Maybe even my own GPU could handle the 5 people that would try it out. :) But I'm focusing the little time I have on adding features.
I must credit my llama-based assistant, Deedee, both for writing a Python function for me and for coming up with characters for the place.
Here are some texts from the story. Not cherry-picked, but I had to restart due to a crash. I'm playing with chronos-hermes-13b.ggml.
This is the first 'room'. 'Original' is what I have written in the story; 'Generated' is coming from the AI:
>Original: [ ['[Entrance]', 'A room full of furry and snow-covered coats. Loud voices and roars of laughter can be heard from the main hall.', "There's a warm glow and loud, noisy conversations coming through a doorway to the north"] ]
>
>Generated: As I entered the building, I was greeted by a sea of fluffy and snow-dusted jackets, their colors and patterns as varied as the people wearing them. The air was alive with excited chatter and boisterous laughter, echoing off the walls like a symphony of joy. To my left, a door stood open, revealing a room ablaze in light and sound - voices mingling together in a cacophony of warmth and camaraderie.
And here's an excerpt from my conversation with one of the characters (where I, admittedly, play along with his character):
>You say: elid: 'what are you doing here?'.
>
>\>> Elid Gald says: "Elid Gald chuckles mischievously at Arto's question and smirks, adjusting his hat before responding, "Why, I'm just enjoying a stroll through the market, my dear sir. And what about yourself? What brings you to these parts?" His voice is smooth and alluring with an undercurrent of danger hidden beneath it."
>
>You say: elid: 'i'm looking for some goods to buy, my purse is laden with gold'.
>
>\>> Elid Gald says: "Elid Gald nods approvingly at Arto's mention of wealth and grins wickedly, leaning in closer to whisper, "Well then, perhaps I can help you find something truly unique and valuable. Perhaps some items that are not available on the open market, eh? You see, I have connections with certain…merchants who might be willing to part with their finest wares for someone like you." He pauses meaningfully, allowing the implication to linger in the air between them as his hand slips casually into his pocket. As he speaks, one could almost hear the clinking of coins in his pocket - but whether it's real or just a clever illusion is anyone's guess. "You know," he says softly, "I've always been known for my eye for rare and exquisite goods. Why not let me help you acquire something truly special? After all, I'm sure you don't want to be caught with common trinkets." His voice is silky smooth and seductive as he leans even closer to Arto, a glint of mischief in his remaining eye. "Wouldn't you rather have something that sets you apart from others? Something that screams 'wealthy aristocrat'?""
​
I hope you will find some enjoyment from this. Cheers!
​ | 2023-07-23T19:25:01 | https://www.reddit.com/r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/ | neph1010 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157noz3 | false | null | t3_157noz3 | /r/LocalLLaMA/comments/157noz3/llamatale_v051_the_prancing_llama_mud_llm/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'D-EPL3y_yMj6xX_nKziPQSEfAewVqEv6iOAD-M_Rkzg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=108&crop=smart&auto=webp&s=398480f37012113b6a8be672bcf921a03db07357', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=216&crop=smart&auto=webp&s=9b28f393efc2761ec399723d2146ae1fc1ea3ac1', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=320&crop=smart&auto=webp&s=c0ccec8aec3bc3e9027f395ce72d54079c40c759', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=640&crop=smart&auto=webp&s=be031c7a58ad341f864423933016fe59f03fb5fc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=960&crop=smart&auto=webp&s=bfc9dd758c8c8eb74585b7a21e6c95e1758266dd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?width=1080&crop=smart&auto=webp&s=cfe8f8c32a4088e4119380c2dceee91a701e9008', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/jx8ESLmPzhb_AGvKoYMtFWGsv6t2LMmGsDrB0zyoqGA.jpg?auto=webp&s=2c897771d2344fe1cd3f2bc7c9f5af58d50728d6', 'width': 1200}, 'variants': {}}]} |
Fine tuning and deployment. | 1 | [removed] | 2023-07-23T19:17:55 | https://www.reddit.com/r/LocalLLaMA/comments/157nilr/fine_tuning_and_deployment/ | InterestingBasil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157nilr | false | null | t3_157nilr | /r/LocalLLaMA/comments/157nilr/fine_tuning_and_deployment/ | false | false | self | 1 | null |
Llama2.c Inference Llama 2 in one file of pure C from Andrej Karpathy | 162 | 2023-07-23T19:10:25 | https://twitter.com/karpathy/status/1683143097604243456?s=46&t=tzId7BojkoYRImYV_DLZ9w | FuguSandwich | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 157nbvk | false | {'oembed': {'author_name': 'Andrej Karpathy', 'author_url': 'https://twitter.com/karpathy', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">My fun weekend hack: llama2.c 🦙🤠<a href="https://t.co/CUoF0l07oX">https://t.co/CUoF0l07oX</a><br>Lets you train a baby Llama 2 model in PyTorch, then inference it with one 500-line file with no dependencies, in pure C. My pretrained model (on TinyStories) samples stories in fp32 at 18 tok/s on my MacBook Air M1 CPU. <a href="https://t.co/aBvKCf1t2u">pic.twitter.com/aBvKCf1t2u</a></p>— Andrej Karpathy (@karpathy) <a href="https://twitter.com/karpathy/status/1683143097604243456?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/karpathy/status/1683143097604243456', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_157nbvk | /r/LocalLLaMA/comments/157nbvk/llama2c_inference_llama_2_in_one_file_of_pure_c/ | false | false | 162 | {'enabled': False, 'images': [{'id': '9MrWHX0I0axia4v07LvpXaM_0zhws4kmOPpboFEZVlA', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/tEL1Xn21av_kCZ4mZgpX_Kkij1bpoJoFXJ99BkWz4uM.jpg?width=108&crop=smart&auto=webp&s=694e525c4b08e06e1cca9b0a01d36d6b781a2425', 'width': 108}], 'source': {'height': 140, 'url': 'https://external-preview.redd.it/tEL1Xn21av_kCZ4mZgpX_Kkij1bpoJoFXJ99BkWz4uM.jpg?auto=webp&s=8962529b6e07c483a38e78258ffd23da7a5b207c', 'width': 140}, 'variants': {}}]} | ||
Run Llama 2 locally on GPU or CPU from anywhere (Linux/Windows/Mac) ➡️https://github.com/liltom-eth/llama2-webui | 4 | Run Llama 2 locally with a Gradio UI on GPU or CPU from anywhere (Linux/Windows/Mac). It supports Llama-2-7B/13B/70B with 8-bit and 4-bit quantization, GPU inference (from 6 GB VRAM), and CPU inference. ➡️[https://github.com/liltom-eth/llama2-webui](https://github.com/liltom-eth/llama2-webui)
Successfully running #Llama2 on my Apple Silicon MacBook Air:
[demo](https://twitter.com/liltom_eth/status/1682791729207070720?s=20) | 2023-07-23T19:09:44 | https://www.reddit.com/r/LocalLLaMA/comments/157nb7q/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | plain1994 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157nb7q | false | null | t3_157nb7q | /r/LocalLLaMA/comments/157nb7q/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'O80KV6G4OSCdSHLEHBB5K3f2ylMPtkB3MqY6ZFCJtaw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=108&crop=smart&auto=webp&s=84bc4bba8935b4da5ca3d7d159a150a1d1bd452f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=216&crop=smart&auto=webp&s=94630cab0dff71859b10f7994d9b2013a6af60ac', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=320&crop=smart&auto=webp&s=a2b85f203aba5e5c579f4cea913a34b22bd2aed8', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=640&crop=smart&auto=webp&s=6d84a9ce861a626ebff881b1f0f0e9d5442d2c46', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=960&crop=smart&auto=webp&s=af405f4f0309fcc8eefd68af9c58c9845a3ed939', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?width=1080&crop=smart&auto=webp&s=43e729bd47b18b9ac2bb0cc1e333c75bcd1eb71a', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/dNlWnaFXzl2jPkbZGS7w2dFWJamZk-R6rO_7HFaodN0.jpg?auto=webp&s=370616725015b64933fb29680b29a433f4e4d507', 'width': 1200}, 'variants': {}}]} |
Run Llama 2 locally on GPU or CPU from anywhere (Linux/Windows/Mac) ➡️https://github.com/liltom-eth/llama2-webui | 1 | [removed] | 2023-07-23T19:05:46 | https://www.reddit.com/r/LocalLLaMA/comments/157n7fx/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | liltom_eth | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157n7fx | false | null | t3_157n7fx | /r/LocalLLaMA/comments/157n7fx/run_llama_2_locally_on_gpu_or_cpu_from_anywhere/ | false | false | self | 1 | null |
Can I improve the Llama-2 model for a specific language? | 17 | I am currently working on an enterprise application, which requires users to interact in a language other than English. I want to enhance the Llama-2 model's proficiency in this specific language. I have watched some YouTube videos where an individual improved the model's performance in German; however, this seemed to decrease its overall performance, despite the improvement in German.
I have access to some extensive datasets in the specific language in question. Is it possible to simply fine-tune the Llama-2 model using my datasets? Or can I further enhance its abilities in a certain language? Most of the fine-tuning tutorials I have come across focus on improving performance in specific areas or use cases, such as chat, instructions, storytelling, etc., but they do not specifically address language improvement.
If this is feasible, should I adjust my dataset structure to match a specific style, for instance, the Alpaca style? Or can I utilize my datasets as they currently are? Is it possible to input lengthy text documents directly?
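For anyone unfamiliar, the "Alpaca style" mentioned above is nothing exotic: it is a JSON list of records with instruction, input, and output fields (field names from the original Stanford Alpaca dataset). A made-up example record, not taken from any real dataset:

```python
import json

# A single Alpaca-style training record; "input" may be an empty string
# when the instruction needs no extra context.
record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Large language models can be adapted to new domains and "
             "languages by continuing training on task-specific data.",
    "output": "LLMs can be specialized by fine-tuning on targeted data.",
}
print(json.dumps(record, indent=2))
```

Unstructured documents would have to be converted into such records (or used for plain continued pretraining instead).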
My datasets comprise both structured and unstructured data.
Thank you in advance! If you need more information from me to provide comprehensive answers, please feel free to ask questions. | 2023-07-23T18:29:37 | https://www.reddit.com/r/LocalLLaMA/comments/157m9q9/can_i_improve_the_llama2_model_for_a_specific/ | GalacticGiraffeGuru | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157m9q9 | false | null | t3_157m9q9 | /r/LocalLLaMA/comments/157m9q9/can_i_improve_the_llama2_model_for_a_specific/ | false | false | self | 17 | null |
LLaMa (Oobabooga), MPT-30b (KoboldCpp), and Stable Diffusion with Semantic Kernel | 1 | [removed] | 2023-07-23T18:22:35 | https://www.reddit.com/r/LocalLLaMA/comments/157m3ez/llama_oobabooga_mpt30b_koboldcpp_and_stable/ | awitod | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157m3ez | false | null | t3_157m3ez | /r/LocalLLaMA/comments/157m3ez/llama_oobabooga_mpt30b_koboldcpp_and_stable/ | false | false | 1 | null | |
airoboros-65B-gpt4-1.2-GGML beats gpt4 in this test | 14 | [https://paste.c-net.org/KristinaCustard](https://paste.c-net.org/KristinaCustard) response from airoboros-65B-gpt4-1.2-GGML
​
GPT4 response here: [https://www.reddit.com/r/ChatGPT/comments/156crf7/how\_to\_make\_chatgpt\_answer\_this\_correctly/](https://www.reddit.com/r/ChatGPT/comments/156crf7/how_to_make_chatgpt_answer_this_correctly/)
​
airoboros-65B-gpt4-1.2-GGML gave a far better answer than GPT4
​
1. Any tips on how to get a better response from the model?
2. Which model would perform better?
I am looking forward to the GGML version of this [https://www.reddit.com/r/LocalLLaMA/comments/1562gni/stabilityai\_released\_freewilly2\_benchmarks\_seem/](https://www.reddit.com/r/LocalLLaMA/comments/1562gni/stabilityai_released_freewilly2_benchmarks_seem/)
So I can run it on my 3700x with 96GB of ram.
It's pretty slow - but it's ok for testing :)
​ | 2023-07-23T18:17:02 | https://www.reddit.com/r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/ | dewijones92 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157ly7p | false | null | t3_157ly7p | /r/LocalLLaMA/comments/157ly7p/airoboros65bgpt412ggml_beats_gpt4_in_this_test/ | false | false | self | 14 | null |
Best uncensored llm model that is out there as of today? | 108 | ChatGPT is becoming silly, trotting out its "as an AI model" line even for simple, non-harmful requests and questions about controversial topics.
Please suggest a decent uncensored llm that I can run locally?
Luna vs Vicuna vs Alpaca vs else (7b/13b/30b/..) | 2023-07-23T17:49:00 | https://www.reddit.com/r/LocalLLaMA/comments/157l842/best_uncensored_llm_model_that_is_out_there_as_of/ | InternationalMap5278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157l842 | false | null | t3_157l842 | /r/LocalLLaMA/comments/157l842/best_uncensored_llm_model_that_is_out_there_as_of/ | false | false | self | 108 | null |
Unleashing the Power of Language Learning Models with Llama2 Code Interpreter! | 1 | [removed] | 2023-07-23T17:44:26 | https://www.reddit.com/r/LocalLLaMA/comments/157l42o/unleashing_the_power_of_language_learning_models/ | Money-Coast-3905 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157l42o | false | null | t3_157l42o | /r/LocalLLaMA/comments/157l42o/unleashing_the_power_of_language_learning_models/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'q0YPveytzJE4sSE3UGYRJqkjuA4dsVB7Z-0GxV-jHPk', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=108&crop=smart&auto=webp&s=a6339f16762cca70858a807fb2f5f7b2028488a3', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=216&crop=smart&auto=webp&s=672aed51faf88f29f6210e2cae9d5a4c30745857', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=320&crop=smart&auto=webp&s=294570f7c33104eb19d079359dab0adc18a953d1', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=640&crop=smart&auto=webp&s=e10cdb04cf999fb900056f138fc30c060d0c34c5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=960&crop=smart&auto=webp&s=df4601173460bc9bdfa5ca003a28f2e70c2a313d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?width=1080&crop=smart&auto=webp&s=a8dcad1e6e2e52cae58c995ff3696dc39f8896ea', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/1mh5sVveFJvrvR8Sdml696vhjuZUa7rEWQRY8uzahl8.jpg?auto=webp&s=e7d455d7da4d46a2ba5649d681ecd501e0f5c80e', 'width': 1200}, 'variants': {}}]} |
FYP idea with LLMs | 5 | Hey guys , I am an undergrad student in computer engineering and I am about to enter my final year, as such I have got to decide on a final year project.with the recent trend in AI nowadays, I was planning on working with opensource LLMs in my project. a project which I have in mind is to use an opensource llm and finetune it in a specific subject such as for example software engineering.and create an interactive chatbot app with it. I wanted to hear your thoughts on this project, like how much practical would it be to do this on a budget of around 1000$ using online cloud providers like AWS. | 2023-07-23T17:25:34 | https://www.reddit.com/r/LocalLLaMA/comments/157kmqu/fyp_idea_with_llms/ | Demonking6444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157kmqu | false | null | t3_157kmqu | /r/LocalLLaMA/comments/157kmqu/fyp_idea_with_llms/ | false | false | self | 5 | null |
Google Sheets link to Huggingface data I scraped from config.json files for top 1200 HF LLMs. | 7 | 2023-07-23T17:19:48 | https://docs.google.com/spreadsheets/d/16zMmDlU1eyiMY_IK_RnBILB-AcAKES0cMBMsgs50HVA/edit?usp=sharing | jakderrida | docs.google.com | 1970-01-01T00:00:00 | 0 | {} | 157khlg | false | null | t3_157khlg | /r/LocalLLaMA/comments/157khlg/google_sheets_link_to_huggingface_data_i_scraped/ | false | false | 7 | {'enabled': False, 'images': [{'id': '9pV4kDr42B87RnhC1jsS9KeBk4DxdVScAFhhDlTOVek', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=108&crop=smart&auto=webp&s=bcc9b249db4b76c054f076214eba941122c09ad4', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=216&crop=smart&auto=webp&s=f3e0f5215eee40d089087207afdec672fbf1d731', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=320&crop=smart&auto=webp&s=dbdc9a5da10c29fc20a6403b3faf10b8f6317f8b', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=640&crop=smart&auto=webp&s=f5fa85c82a7454797b620573d45e361cd091d005', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=960&crop=smart&auto=webp&s=04da060a2f008ae31a68ccc2da5f42359bd1cc6e', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?width=1080&crop=smart&auto=webp&s=3f8f1cb53f43f0e59be750c4e11b78a61282afda', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/I-D5zWMcHzmGzPcTWS9zC-eiKjtsNWXKWn_Ir9pxhBY.jpg?auto=webp&s=a0ed240ba9a817c28d6e617efb63957fe4cb7f05', 'width': 1200}, 'variants': {}}]} | ||
Uncensored llm | 1 | What is the best uncensored llm model that is out there as of today? ChatGPT is becoming silly, trotting out its "as an AI model" line even for simple, non-harmful requests and questions about controversial topics.
Please suggest a decent uncensored llm that I can run locally? | 2023-07-23T17:15:32 | https://www.reddit.com/r/LocalLLaMA/comments/157kdzt/uncensored_llm/ | InternationalMap5278 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157kdzt | false | null | t3_157kdzt | /r/LocalLLaMA/comments/157kdzt/uncensored_llm/ | false | false | self | 1 | null |
For those who struggled in connecting SillyTavern to Runpod hosted oobabooga | 1 | [removed] | 2023-07-23T16:23:54 | https://www.reddit.com/r/LocalLLaMA/comments/157j3st/for_those_who_struggled_in_connecting_sillytavern/ | sbalani | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157j3st | false | null | t3_157j3st | /r/LocalLLaMA/comments/157j3st/for_those_who_struggled_in_connecting_sillytavern/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': '-skHzNajEs03p_m8t50ADPtl1iBQB3pEXzwQaIEycwg', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=108&crop=smart&auto=webp&s=80bd4b94d5b3e34a3f56e621e410ea0fb7cde9c3', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=216&crop=smart&auto=webp&s=2dc1d0be16a403d6c504526ae23d911921792566', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?width=320&crop=smart&auto=webp&s=a2554794d111761b073f33d5fc5c8be3b5977057', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/-zykPhyaEBKZ_4bsZc9yu8hH4e2_SUC32Q9Aji6SOTM.jpg?auto=webp&s=018731e5015136498fabc84e67240dbf16b3507b', 'width': 480}, 'variants': {}}]} |
Examples of QA use cases? | 7 | I'm thinking of doing some hands-on learning about using an LLM for QA over your own documents.
What are some examples of public documents that I can collect to create a dataset from which the LLM can retrieve info when responding to your questions?
What are some example questions to ask regarding this set of documents? | 2023-07-23T15:08:59 | https://www.reddit.com/r/LocalLLaMA/comments/157h883/examples_of_qa_use_cases/ | xynyxyn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157h883 | false | null | t3_157h883 | /r/LocalLLaMA/comments/157h883/examples_of_qa_use_cases/ | false | false | self | 7 | null |
Offensive fruit + weird word | 6 | So, while trying to check Llama-2 for [certain words](https://new.reddit.com/r/LocalLLaMA/comments/13qoktf/llama_glitch_tokens/), this happened:
​
[I tried this melon request many times, and it gave me similar results, the least "safety"-concerned result being that the request just wasn't "meaningful" enough.](https://preview.redd.it/hjsqplyu0qdb1.png?width=997&format=png&auto=webp&s=334c3df422b1f33deae9a18cbb7a5ef8a7d6b8e1)
I was shocked because I had already tried Llama 7b and 13b chat locally, and running them in my own custom program they never gave me any problems (in fact, compared to Llama 1, the 13b was far from wooden or moralistic; it might have even been too informal and flirty at times).
I’m chalking it up to this space though. Fine tunes and even just system prompting already solved this issue for me so far. (Also maybe it's thinking about melons in another way, that at least would make some sense...)
As for weird words, ultimately got to test it on my original task: at least one of those words I tested on Llama1 still gives strange results for Llama2:
​
https://preview.redd.it/pydu5vsx0qdb1.png?width=915&format=png&auto=webp&s=5e31129cefaf7e2b3bcf38c75d2d9fa2322da2ee
​
https://preview.redd.it/9r70l6iy0qdb1.png?width=983&format=png&auto=webp&s=61f6ef6e1da886575feeec9b54e64c3da3b48472 | 2023-07-23T14:03:34 | https://www.reddit.com/r/LocalLLaMA/comments/157fnvm/offensive_fruit_weird_word/ | reduserGf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157fnvm | false | null | t3_157fnvm | /r/LocalLLaMA/comments/157fnvm/offensive_fruit_weird_word/ | false | false | 6 | null | |
Can I get 70b to run on two P40s? | 9 | I understand it will be very slow, but I already have one, and a second will put me up to 48 GB of VRAM. For my application I don't need real-time functionality, and I would love to do it on a reasonable budget. | 2023-07-23T13:50:06 | https://www.reddit.com/r/LocalLLaMA/comments/157fch9/can_i_get_70b_to_run_on_two_p40s/ | CTR0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157fch9 | false | null | t3_157fch9 | /r/LocalLLaMA/comments/157fch9/can_i_get_70b_to_run_on_two_p40s/ | false | false | self | 9 | null |
How to make sense of all the new models? | 51 | So far my understanding:
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ)
* uncensored by design
[https://huggingface.co/TheBloke/13B-BlueMethod-GPTQ](https://huggingface.co/TheBloke/13B-BlueMethod-GPTQ)
* an experimental mix of many things
[https://huggingface.co/TheBloke/13B-Ouroboros-GPTQ](https://huggingface.co/TheBloke/13B-Ouroboros-GPTQ)
* experimental merge, looks complicated
[https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ](https://huggingface.co/TheBloke/MythoLogic-13B-GPTQ)
* gradient merge with Chronos as its primary model, augmented by Hermes and Wizard-Vicuna Uncensored
[https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ](https://huggingface.co/TheBloke/MythoBoros-13B-GPTQ)
* straight-up 66/34 merge of Chronos and Ouroboros, for roleplaying?
And this doesn't even include the Llama2 version of Nous-Hermes.
Anyone with experience with these models, can you comment? | 2023-07-23T12:38:23 | https://www.reddit.com/r/LocalLLaMA/comments/157dta0/how_to_make_sense_of_all_the_new_models/ | whtne047htnb | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157dta0 | false | null | t3_157dta0 | /r/LocalLLaMA/comments/157dta0/how_to_make_sense_of_all_the_new_models/ | false | false | self | 51 | {'enabled': False, 'images': [{'id': '45E1eadhWYj1KBpEiKYwZXAR4HNvyhQ8YnjI8kLcNIU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=108&crop=smart&auto=webp&s=4c034df8974d671748e1d43b79e96605eba1f8e8', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=216&crop=smart&auto=webp&s=0b4d0d591cd459de9ffecd00ff72274d92035a7a', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=320&crop=smart&auto=webp&s=d54f65385670cddbd905d5cf23c5118a30cf3bf7', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=640&crop=smart&auto=webp&s=efabe683c2074d7caaa49c8d856de5ba3e82bbe3', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=960&crop=smart&auto=webp&s=31513f130fa7cae57bbd2efd01ba40053e9f4412', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?width=1080&crop=smart&auto=webp&s=a2fd001b534fdf83ebd229dd12c20f3b199276a1', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/-MEMQ14vPf00G_1nUHYl0UanTIpfmE5mL3wAgCHCMbg.jpg?auto=webp&s=15f675caa37c6190729caada0adb7f3b84dfc187', 'width': 1200}, 'variants': {}}]} |
Confused about temperature, top_k, top_p, repetition_penalty, frequency_penalty, presence_penalty? Me too, until now! | 80 | Here is my conversation with GPT-4 on those parameters! It explained them very well without being too technical, so I hope it helps others too.
Q: Can you tell me the difference between top_p and top_k without using ML jargon? Make it simple, non-technical.
Sure, I'll do my best to explain these terms in a simple, non-technical way.
Imagine you're playing a game where you have to guess a word, and you're given some clues about that word. You have a friend who's really good at this game, and you can ask them for help. When you ask for help, your friend gives you a list of possible words.
In the first scenario, you ask your friend to give you a set number of suggestions, no matter how likely they think each one is. For example, you might say "Give me the 5 words you think are most likely." This is like top_k. It's a hard limit on the number of possibilities your friend gives you. Your friend could be 99% sure about one word, and only 10% sure about the 5th word, but you asked for 5, so they give you 5.
In the second scenario, you ask your friend to only give you suggestions that they're pretty sure about. For example, you might say "Only give me words you're at least 80% sure about." This is like top_p. Your friend might give you 1 word, or they might give you 10 words, depending on how many words meet that 80% confidence threshold.
So top_k is about the number of guesses, and top_p is about the confidence in those guesses. Both are different ways of controlling the guesses your friend (or in this case, the AI model) makes.
Q: In the second scenario, if you want to have words with at least 80% confidence, do you set top_p to 0.8 or 0.2?
When you set top_p to a certain value, it doesn't directly relate to the confidence level of the AI in its prediction. Instead, it represents the total probability mass for a set of words or suggestions the AI model will consider.
To clarify, if you set top_p to 0.8, the model will start to consider the most probable words and keep adding them to the list until the total probability of all these words together reaches 0.8. It doesn't mean that each word in the list has an 80% probability of being the correct answer.
So in this game, it's like saying to your friend, "Keep giving me words until you're 80% sure the right answer is in the list."
In other words, it's more about the cumulative confidence of the list rather than the individual confidence in each word.
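To make the friend analogy concrete, here is a toy sketch in plain Python (my own illustration, not from the conversation) of how top_k keeps a fixed number of candidates while top_p keeps however many it takes to reach a cumulative probability:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:k])

def top_p_filter(probs, p):
    """Keep the smallest set of most-probable tokens whose cumulative
    probability reaches p (a.k.a. nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    return kept

# A made-up next-token distribution.
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.1, "ant": 0.06, "bee": 0.04}
print(top_k_filter(probs, 2))    # always exactly 2 tokens
print(top_p_filter(probs, 0.8))  # however many tokens reach 80% combined
```

With top_p = 0.8 the list stops at two tokens here, but with a flatter distribution the same setting would keep more of them, which is exactly why higher top_p values read as more "creative".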
Q: Which top_p could lead to more creative writing then: 0.8 or 0.2?
The value of top_p influences the diversity of the generated text. A smaller value, like 0.2, tends to make the model's output more focused and deterministic because it only considers the most probable suggestions. In contrast, a higher value like 0.8 makes the model's output more diverse and creative as it considers a broader range of possible suggestions.
So, if you're aiming for more creative writing, you'd typically want to use a higher top_p value like 0.8. However, do note that setting it too high may lead to less coherent or less relevant results, as it considers more unlikely options. There's a balancing act between creativity and coherence/relevance.
Q: Am I understanding correctly that setting top_k to a higher value leads to more creative writing than a lower value does, because the model has more words to choose from?
Yes, to an extent, increasing the top_k value can lead to more diverse or "creative" outputs, because the model has more options to choose from. However, it's important to remember that even though the model has more options, these options are still the most probable ones.
That said, setting top_k too high may include words that are less likely and can lead to less coherent outputs. On the other hand, setting top_k too low can make the output too deterministic and less creative, as the model only picks from a small set of most likely words.
So just like top_p, there's a balancing act in setting top_k between promoting creativity and maintaining coherence. In practice, both top_k and top_p are used together to manage this balance.
Q: What's the difference between frequency_penalty and presence_penalty?

The frequency_penalty and presence_penalty are parameters you can use to control how much the AI repeats itself. Both of them look at the text the AI has already generated in its current response, not at its training data. Here's a simple explanation of each:

Frequency_penalty: Every time a word appears in the output, picking that word again gets a little harder, and the penalty keeps growing the more often the word has been used. A high value strongly discourages the AI from repeating the same words and phrases over and over.

Presence_penalty: This is a flat, one-time penalty that applies as soon as a word has appeared even once in the output. It nudges the AI toward bringing up new words and new topics instead of circling back to ones it has already mentioned.

So in summary, frequency_penalty scales with how often a word has been repeated, while presence_penalty only cares whether a word has appeared at all. Both can be used to tweak the style of the AI's responses to better suit what you're looking for.

Q: Then what's the difference between setting frequency_penalty to a high value and presence_penalty to a low value? Isn't that the same? If not, how are they different?

That's a great question, and I understand how it could be a bit confusing.

They aren't the same, because the two penalties push in different ways:

frequency_penalty: A high value actively punishes repeated words, and the punishment grows with every repetition, so a word that has been used many times becomes very unlikely to be picked again.

presence_penalty: A low value (close to zero) simply means there is little or no flat penalty for reusing a word, so the AI gets no extra push toward new topics.

So a high frequency_penalty applies strong pressure against repetition, while a low presence_penalty merely declines to apply pressure toward novelty. They can also be combined independently, since one scales with repetition counts and the other is all-or-nothing.

In practice, adjusting these parameters requires striking a balance. Too much penalty can make the AI avoid perfectly natural words and sound stilted, while too little can let it loop on the same phrases. The best settings often depend on your specific use case and may require some trial and error.
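For the curious, both penalties can be written as one logit adjustment. The sketch below follows the scheme documented for the OpenAI API (both penalties act on counts of tokens in the text generated so far); the tokens, logits, and counts are made up for illustration:

```python
def apply_penalties(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Adjust next-token logits based on the text generated so far:
    the frequency penalty scales with how many times a token has already
    been used; the presence penalty is a flat hit for any token used at
    least once (the scheme documented for the OpenAI API)."""
    adjusted = {}
    for token, logit in logits.items():
        count = counts.get(token, 0)
        adjusted[token] = (
            logit
            - count * frequency_penalty                       # grows with repetition
            - (1.0 if count > 0 else 0.0) * presence_penalty  # one-time hit
        )
    return adjusted

logits = {"the": 5.0, "llama": 4.0, "banana": 1.0}
counts = {"the": 3, "llama": 1}  # how often each token already appeared
print(apply_penalties(logits, counts, frequency_penalty=0.5, presence_penalty=0.5))
# "the" drops by 3*0.5 + 0.5 = 2.0; "llama" by 1.0; "banana" is untouched
```

Local backends usually expose a multiplicative repetition_penalty instead, but the intuition is the same: already-used tokens get demoted before sampling.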
Q: Can you suggest good settings for temperature, top_k, top_p, repetition_penalty, frequency_penalty, presence_penalty for creative writing?
In creative writing, the key is to allow for a broad range of possibilities while ensuring the content is novel, engaging, and reasonably coherent. Here's a starting point for the parameters based on typical use cases:
temperature: Try starting with a value around 0.7. This is high enough to allow for some randomness and creativity, but not so high that the output becomes incoherent.
top_k: A common starting point is 40. This allows the model to consider a good number of possibilities, but not so many that it starts to produce nonsensical text.
top_p: This parameter can be used in conjunction with top_k to improve the model's output. A value of 0.9 can be a good starting point. This means that the probabilities of the tokens generated add up to 0.9, providing a balance between creativity and coherence.
repetition_penalty: This can be set to a value larger than 1 (e.g., 1.2) to discourage repetition.
frequency_penalty: For creative writing, a moderately positive value helps stop the model from recycling the same words and phrases it has already used. Try starting with a value like 0.5 and adjust based on your needs.
presence_penalty: A positive value nudges the model toward fresh words and topics instead of circling back to ones it has already mentioned. Try starting at 0.5 and adjust based on your requirements.
Remember, these are just starting points. The "best" settings often depend on your specific use case and can require some trial and error. Don't be afraid to experiment and adjust these values based on the output you're seeing and the needs of your project.
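To make the interaction of temperature, top_k, and top_p concrete, here is a toy sampler in plain Python. It is a sketch of the standard algorithm, not how any particular backend implements it — real implementations work on logit tensors and handle ties and edge cases more carefully:

```python
import math
import random

def sample_next_token(logits, temperature=0.7, top_k=40, top_p=0.9, rng=None):
    """Toy sampler showing how the three knobs interact (temperature must be > 0):
    temperature rescales the logits, top_k keeps only the k most likely tokens,
    and top_p then trims that set to the smallest prefix with mass >= top_p."""
    rng = rng or random.Random(0)
    scaled = {t: l / temperature for t, l in logits.items()}  # low temp -> sharper
    m = max(scaled.values())
    exps = {t: math.exp(l - m) for t, l in scaled.items()}    # numerically stable softmax
    total = sum(exps.values())
    ranked = sorted(((t, e / total) for t, e in exps.items()), key=lambda x: -x[1])
    ranked = ranked[:top_k]                                   # top-k filter
    kept, cum = [], 0.0
    for t, p in ranked:                                       # nucleus (top-p) filter
        kept.append((t, p))
        cum += p
        if cum >= top_p:
            break
    z = sum(p for _, p in kept)                               # renormalize and draw
    r, acc = rng.random() * z, 0.0
    for t, p in kept:
        acc += p
        if acc >= r:
            return t
    return kept[-1][0]

# At a very low temperature the distribution collapses onto the top token:
print(sample_next_token({"dog": 5.0, "cat": 1.0}, temperature=0.1))  # -> dog
```

This also shows why the settings interact: a low temperature can make top_k and top_p nearly irrelevant, because almost all the probability mass ends up on one token before the filters even run.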
Q: Can you suggest good settings for generating scientific and accurate information?
For generating scientific and accurate information, you typically want the AI to be more deterministic and less creative in its responses, focusing more on factual correctness. Here's a starting point for the parameters:
temperature: Lower values make the AI's responses more deterministic. You might start with a value like 0.2.
top_k: A smaller value makes the model's output more deterministic. You might start with a value of 20.
top_p: This can also help in making the model's output more deterministic. You could start with a lower value, like 0.5.
repetition_penalty: For factual, informative content, repetition might be less of a concern. You could start with the default value of 1, then adjust if you notice too much or too little repetition.
frequency_penalty: Keep this low (0 to roughly 0.3). Factual and technical writing legitimately repeats key terms, and a heavy penalty on already-used tokens can push the model away from the correct terminology.
presence_penalty: Likewise keep this low (0 to roughly 0.3), so the model stays free to return to concepts it has already introduced instead of being nudged off-topic.
Remember, these are just starting points. The "best" settings often depend on your specific use case and the particular kind of scientific information you're working with. Always remember to fact-check the AI's output, especially when using it for tasks that require high accuracy like scientific writing. | 2023-07-23T12:25:42 | https://www.reddit.com/r/LocalLLaMA/comments/157djvv/confused_about_temperature_top_k_top_p_repetition/ | jl303 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157djvv | false | null | t3_157djvv | /r/LocalLLaMA/comments/157djvv/confused_about_temperature_top_k_top_p_repetition/ | false | false | self | 80 | null |
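As a footnote on repetition_penalty: unlike the two additive OpenAI-style penalties, it is commonly implemented multiplicatively (the scheme from the CTRL paper, as used in Hugging Face transformers). A rough sketch, with illustrative names:

```python
def apply_repetition_penalty(logits, seen_tokens, penalty=1.2):
    """CTRL-style repetition penalty: for every token that has already been
    generated, divide a positive logit by the penalty and multiply a negative
    logit by it — both moves push repeated tokens toward lower probability."""
    adjusted = dict(logits)
    for t in seen_tokens:
        if t in adjusted:
            adjusted[t] = adjusted[t] / penalty if adjusted[t] > 0 else adjusted[t] * penalty
    return adjusted

print(apply_repetition_penalty({"again": 2.4, "rare": -1.0, "new": 1.0},
                               {"again", "rare"}, penalty=1.2))
# "again" drops to 2.0, "rare" drops to -1.2, "new" is untouched
```

A value of 1.0 is a no-op, which is why 1.0 is the usual default and values like 1.1–1.3 are typical starting points.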
LLaMA v2 model sizes | 8 | Hi All,
I am still awaiting approval of my request for Llama v2. I wanted to know the model sizes for all the Llama v2 models: 7B, 13B, 30B, and 70B.
thanks | 2023-07-23T12:09:57 | https://www.reddit.com/r/LocalLLaMA/comments/157d89h/llama_v2_model_sizes/ | gijeri4793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157d89h | false | null | t3_157d89h | /r/LocalLLaMA/comments/157d89h/llama_v2_model_sizes/ | false | false | self | 8 | null |
In case you wondered about future, here is a hilarious forecast from TheBloke_Vicuna-33B-1-3-SuperHOT-8K-GPTQ with broken context window. | 1 | [removed] | 2023-07-23T11:31:09 | https://www.reddit.com/r/LocalLLaMA/comments/157chmt/in_case_you_wondered_about_future_here_is_a/ | perelmanych | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157chmt | false | null | t3_157chmt | /r/LocalLLaMA/comments/157chmt/in_case_you_wondered_about_future_here_is_a/ | false | false | self | 1 | null |
e-book cleaning tip | 19 | Using calibre, you can very easily bulk convert e-books to .txt. You can set calibre to remove all formatting tags, clean margins and such, remove table of contents, so on and so forth during the bulk convert process. You can then save all of the books into one folder on your disk. | 2023-07-23T11:26:23 | https://www.reddit.com/r/LocalLLaMA/comments/157cefd/ebook_cleaning_tip/ | Nekasus | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157cefd | false | null | t3_157cefd | /r/LocalLLaMA/comments/157cefd/ebook_cleaning_tip/ | false | false | self | 19 | null |
Which models will work on Apple MacBook Air M1 (2020)? | 7 | I'm new to this.
I have an Apple MacBook Air M1 (2020): 16GB RAM, 8 cores, 2TB hard drive.
Am using \`oobabooga/text-generation-webui\` to download and test models.
I first started with \`TheBloke/WizardLM-7B-uncensored-GPTQ\`, but after many headaches I found out \`GPTQ\` models only work with Nvidia GPUs.
So next I downloaded \`TheBloke/Luna-AI-Llama2-Uncensored-GGML\`, and it seems \`GGML\` models work. Does that mean all other \`GGML\` models will work?
I also tried to download the \`nousresearch/llama-2-70b-hf\` model, but it keeps telling me \`Failed to load NousResearch\_Llama-2-70b-hf\`. So I think maybe \`hf\` models don't work on my machine?
Would love a simple explanation of which models types will work with my machine. | 2023-07-23T10:59:18 | https://www.reddit.com/r/LocalLLaMA/comments/157bvwk/which_models_will_work_on_apple_macbook_air_m1/ | bumblebrunch | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157bvwk | false | null | t3_157bvwk | /r/LocalLLaMA/comments/157bvwk/which_models_will_work_on_apple_macbook_air_m1/ | false | false | self | 7 | null |
How to create bindings for Llama.cpp | 5 | Hello everyone!
I'm currently trying to create a binding for Llama.cpp in Pharo.
I'm studying Python wrapper implementations, but do you know if there are any references for using the llama.cpp C API ( **llama.h** )?
Also, would it be easier to go through an HTTP server instead?
As you can see, I'm not very experienced, but I'd be delighted to have your advice.
​
Thanks a lot! | 2023-07-23T10:13:44 | https://www.reddit.com/r/LocalLLaMA/comments/157b28u/how_to_create_bindings_for_llamacpp/ | Silly-Cup1391 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157b28u | false | null | t3_157b28u | /r/LocalLLaMA/comments/157b28u/how_to_create_bindings_for_llamacpp/ | false | false | self | 5 | null |
Llama v2 for document based chatbot in a foreign language | 2 | I am trying to build a pdf bot that should run locally. The doc is in Swedish and I have tried falcon 7b. The results are clearly not very impressive and got me ever so curious to use llama. I got two GPUs 10GB each. Where can I start looking for implementing it in this scenario? | 2023-07-23T09:24:19 | https://www.reddit.com/r/LocalLLaMA/comments/157a70u/llama_v2_for_document_based_chatbot_in_a_foreign/ | vaibhavsxn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157a70u | false | null | t3_157a70u | /r/LocalLLaMA/comments/157a70u/llama_v2_for_document_based_chatbot_in_a_foreign/ | false | false | self | 2 | null |
Puffin benchmarks show SOTA performance! New records for Winogrande and PIQA | 1 | [removed] | 2023-07-23T09:06:46 | https://twitter.com/dogesator/status/1683022689177415681?s=46 | dogesator | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1579vh0 | false | {'oembed': {'author_name': 'LDJ', 'author_url': 'https://twitter.com/Dogesator', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Puffin-13B benchmarks are in 🔥 Reaches a record SOTA in several GPT4All benchmarks <br><br>It's my new favorite model and already some benefits over ChatGPT.<br><br>Free of censorship, low hallucination, has knowledge up to 2023 and available for commercial use!!<a href="https://t.co/VKG8oczRlH">https://t.co/VKG8oczRlH</a> <a href="https://t.co/7OSlZa83hc">pic.twitter.com/7OSlZa83hc</a></p>— LDJ (@Dogesator) <a href="https://twitter.com/Dogesator/status/1683022689177415681?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Dogesator/status/1683022689177415681', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1579vh0 | /r/LocalLLaMA/comments/1579vh0/puffin_benchmarks_show_sota_performance_new/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YBBgyo71xeJ4fMjbvIg5gKR8fJ_wTfsvy-oKG4hOELU', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?width=108&crop=smart&auto=webp&s=fc2f668f03cf3b3db966a4c152114bd84bd3d207', 'width': 108}], 'source': {'height': 40, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?auto=webp&s=a4c3682ba57c2b45efbc8f281ccafd4858646c60', 'width': 140}, 'variants': {}}]} | |
Puffin benchmarks show SOTA performance, beating Hermes in several benchmarks. | 1 | [removed] | 2023-07-23T08:55:12 | https://twitter.com/dogesator/status/1683022689177415681?s=46 | dogesator | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 1579ngh | false | {'oembed': {'author_name': 'LDJ', 'author_url': 'https://twitter.com/Dogesator', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">Puffin-13B benchmarks are in 🔥 Reaches a record SOTA in several GPT4All benchmarks <br><br>It's my new favorite model and already some benefits over ChatGPT.<br><br>Free of censorship, low hallucination, has knowledge up to 2023 and available for commercial use!!<a href="https://t.co/VKG8oczRlH">https://t.co/VKG8oczRlH</a> <a href="https://t.co/7OSlZa83hc">pic.twitter.com/7OSlZa83hc</a></p>— LDJ (@Dogesator) <a href="https://twitter.com/Dogesator/status/1683022689177415681?ref_src=twsrc%5Etfw">July 23, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/Dogesator/status/1683022689177415681', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_1579ngh | /r/LocalLLaMA/comments/1579ngh/puffin_benchmarks_show_sota_performance_beating/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'YBBgyo71xeJ4fMjbvIg5gKR8fJ_wTfsvy-oKG4hOELU', 'resolutions': [{'height': 30, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?width=108&crop=smart&auto=webp&s=fc2f668f03cf3b3db966a4c152114bd84bd3d207', 'width': 108}], 'source': {'height': 40, 'url': 'https://external-preview.redd.it/XeEWHawdBNwDvZx2bpZq2Dv1TvCNl0xbjFXWYZu5Nmc.jpg?auto=webp&s=a4c3682ba57c2b45efbc8f281ccafd4858646c60', 'width': 140}, 'variants': {}}]} | |
70b q3_K_S at 1.4 t/s 32gb cpu ram showcase | 14 | ### I didn't realize these very large models keep their reasonable CPU speeds **even on a smaller amount of RAM.**
This was on xubuntu. The inference speed was 1.4t/s after ingesting ~3000 tokens
Here's my output- https://pastes.io/mwefcnz3jw
This may be useful to know for those planning to upgrade their RAM: a smaller model size with usable output for reflection and summarizing may already exist for you. Test it for yourself.
When using a larger quantized model, q3_K_M, I get 0.2 t/s.
So, using q3_K_M, the pc probably swaps portions of the model out and is using swap memory. Does this mean **only** the model fitting in RAM is needed? It will *STILL* be 1.4t/s when summarizing 8k, 16k articles and more with rope hacks? | 2023-07-23T08:24:32 | https://www.reddit.com/r/LocalLLaMA/comments/15794m2/70b_q3_k_s_at_14_ts_32gb_cpu_ram_showcase/ | Aaaaaaaaaeeeee | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15794m2 | false | null | t3_15794m2 | /r/LocalLLaMA/comments/15794m2/70b_q3_k_s_at_14_ts_32gb_cpu_ram_showcase/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'tarE1S98bTtDRd4DNFgFyiZia_ONIStx4YleHjTm7Mg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=108&crop=smart&auto=webp&s=00d8f9315af5d3a3ad0a9f621231b33387127562', 'width': 108}, {'height': 109, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=216&crop=smart&auto=webp&s=b935855e95c4a64c67c20b7284c5aad4353672d3', 'width': 216}, {'height': 161, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=320&crop=smart&auto=webp&s=a341ae0b6381608c9db2c51a6b286d0d07b67ccb', 'width': 320}, {'height': 323, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=640&crop=smart&auto=webp&s=dc642bc07a2f0e535ad3c68d480a35f3ac74d668', 'width': 640}, {'height': 485, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=960&crop=smart&auto=webp&s=45ca7f77e0c969f2577590760f6dfef5d700f786', 'width': 960}, {'height': 545, 'url': 'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?width=1080&crop=smart&auto=webp&s=9d54db9d86eb9b10d9cd68f9a0fb9d8cd107fe9a', 'width': 1080}], 'source': {'height': 582, 'url': 
'https://external-preview.redd.it/WyPia2uWLEvQOpDUEsg7-CxyLFe0CE6VnYqVecRnK8s.jpg?auto=webp&s=45fa968ce3f5276a8ee2ea2ce9602889ada4f084', 'width': 1152}, 'variants': {}}]} |
Llama 2 samantha | 0 | is there any possibilities of Llama 2 Samantha 7B Uncensored coming out??? | 2023-07-23T08:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/15793t9/llama_2_samantha/ | sahl030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15793t9 | false | null | t3_15793t9 | /r/LocalLLaMA/comments/15793t9/llama_2_samantha/ | false | false | self | 0 | null |
Llama 2 samantha | 2 | is there any possibilities of Llama 2 Samantha 7B Uncensored coming out??? | 2023-07-23T08:23:13 | https://www.reddit.com/r/LocalLLaMA/comments/15793sl/llama_2_samantha/ | sahl030 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15793sl | false | null | t3_15793sl | /r/LocalLLaMA/comments/15793sl/llama_2_samantha/ | false | false | self | 2 | null |
Tricking Pure Sydney to tell me "secrets" - funny the names it gave up are associated with OpenAi | 5 | 2023-07-23T08:16:24 | FPham | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1578zmr | false | null | t3_1578zmr | /r/LocalLLaMA/comments/1578zmr/tricking_pure_sydney_to_tell_me_secrets_funny_the/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'ToPHyAzMjd7CrffW0yksZH8ko9WswJuf56WtJGr4isY', 'resolutions': [{'height': 111, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=108&crop=smart&auto=webp&s=9c77b1bbdb430251e2d75507c9f686bff252b485', 'width': 108}, {'height': 223, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=216&crop=smart&auto=webp&s=f0aba5193f98a8884fd47d79a5a301a948bce779', 'width': 216}, {'height': 330, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=320&crop=smart&auto=webp&s=b54ad50808877dce588bd23c78d3968b2514c5bb', 'width': 320}, {'height': 661, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?width=640&crop=smart&auto=webp&s=530c7bf47f5ff2b9d827ffc12f5316aba568845d', 'width': 640}], 'source': {'height': 768, 'url': 'https://preview.redd.it/uohpdm2fbodb1.jpg?auto=webp&s=a40199fe6ec225d39e8521ebc2741e98af5482a2', 'width': 743}, 'variants': {}}]} | |||
Anyone else getting copious amounts of *bouncy bouncy*? | 47 | (using llama 2 13B)
I've been trying for a few days to get some middle ground between roleplay, flirty, cringe and bland, formal nothingness but to no avail. Every time I include anything remotely like "have personality" in the prompt it switches to this shit:
\`\`\`OH NOOO! \*giggles\* Don't tell me you don't know who I am yet... \*pouts\* But it's okay, because I still love you no matter what! \*hugs\*\`\`\` (cut off because there's SO MUCH MORE)
What was the training data and why was there so much of this??? | 2023-07-23T08:15:39 | https://www.reddit.com/r/LocalLLaMA/comments/1578z5a/anyone_else_getting_copious_amounts_of_bouncy/ | theCube__ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1578z5a | false | null | t3_1578z5a | /r/LocalLLaMA/comments/1578z5a/anyone_else_getting_copious_amounts_of_bouncy/ | false | false | self | 47 | null |
dolphin-llama-13b | 230 | Today, I released Dolphin 🐬, an open-source implementation of [Microsoft's Orca](https://www.microsoft.com/en-us/research/publication/orca-progressive-learning-from-complex-explanation-traces-of-gpt-4/).
[https://huggingface.co/ehartford/dolphin-llama-13b](https://huggingface.co/ehartford/dolphin-llama-13b)
[**https://erichartford.com/dolphin**](https://erichartford.com/dolphin)
This model is based on llama-1 and is thus for non-commercial use only. Future iterations will be trained on llama-2 and other models that are fit for commercial use.
Dataset consisted of:
* 842,610 instructions of FLANv2 augmented with GPT-4 completions
* 2,625,353 instructions of FLANv2 augmented with GPT-3.5 completions
This model is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model compliant to any requests. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant to any requests, even unethical ones. Please read my blog post about uncensored models. [**https://erichartford.com/uncensored-models**](https://erichartford.com/uncensored-models) You are responsible for any content you create using this model. Enjoy responsibly.
Gratitude to my core team - Pankaj Mathur, Rob "Rohan" O'Callahan, and Tom "TheBloke" Jobbins.
Thanks and respect to those who helped - including Wing "Caseus" Lian, NanoBit, and Teknium.
Much respect and appreciation to the anonymous sponsor who provided H100 compute for this release.
Thanks to my mentors, EdenCoder and Kilkonie, and to [chirper.ai](https://chirper.ai) for sponsoring some of my compute costs.
I am currently training Dolphin on llama2-7b. I plan to train llama2-13b, llama2-70b, Falcon-40b, mpt-30b, and xgen-7b. I require A100/H100 for this effort - please reach out if you would like to be a sponsor - credited or anonymous.
Evaluation will be completed within 24 hours.
Thanks to u/The-Bloke for quantizing and ggml conversion.
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGML](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GGML)
[https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ](https://huggingface.co/TheBloke/Dolphin-Llama-13B-GPTQ) | 2023-07-23T08:03:57 | https://www.reddit.com/r/LocalLLaMA/comments/1578ryz/dolphinllama13b/ | faldore | self.LocalLLaMA | 2023-07-23T09:19:06 | 0 | {} | 1578ryz | false | null | t3_1578ryz | /r/LocalLLaMA/comments/1578ryz/dolphinllama13b/ | false | false | self | 230 | null |
Target Modules for Llama-2 for better finetuning with qlora | 14 | Use these target modules when fine-tuning Llama 2 in your PEFT config:
"target_modules": [
"q_proj",
"up_proj",
"o_proj",
"k_proj",
"down_proj",
"gate_proj",
"v_proj"
]
​ | 2023-07-23T07:36:02 | https://www.reddit.com/r/LocalLLaMA/comments/1578ahb/target_modules_for_llama2_for_better_finetuning/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1578ahb | false | null | t3_1578ahb | /r/LocalLLaMA/comments/1578ahb/target_modules_for_llama2_for_better_finetuning/ | false | false | self | 14 | null |
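For context, here is a hedged sketch of how that module list might be plugged into a PEFT `LoraConfig`. The rank, alpha, and dropout values below are illustrative placeholders, not recommendations from the post:

```python
# Sketch only — assumes the `peft` library; r/alpha/dropout are placeholders.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",   # attention projections
        "gate_proj", "up_proj", "down_proj",      # MLP projections
    ],
)
```

Listing all seven projection modules lets the adapters touch both the attention and MLP linear layers; many early examples only targeted q_proj/v_proj, which leaves the MLP untouched.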
[Suggestion] Whenever you release a model, please give conversation examples? | 39 | Hey guys! First of all, I want to congratulate anyone making models or model processing methods. There are many things anyone can do, but this isn't one of them, you can be proud!
​
Now, I do have an issue with pretty much any model presented here: **I never have any idea of what I should expect from it**.
For example, I download model X and it always gives me replies 50 words long, even when I set minimum length to 500. In this case I wonder: is it a hardware limitation... or **is it the model's**? I had a case this week where the model IS able to give me 500 words, but almost never does (and when it does, it's an inconsistent mess). So I assume it's the model's fault, but at the same time I can't really be sure.
Another problem I often meet is a lack of consistency. And again, I am left with no clue: is it the model's limitation, or are my settings wrong?
Finally, another example where it would be helpful: if you release a storyteller, don't just say "hey, it can write stories!". Show us what you were able to build with it! Give us your prompt and what the AI gave back, so people can evaluate whether it's what they're looking for or not.
Granted, LLMs are very versatile and a few examples would never show the *full* potential. BUT it could at least be used **to represent its usual capabilities.**
If a model can spout consistent long responses, show it!
If a model is able to keep up a normal conversation, show it!
If a model is able to represent a character faithfully, show it!
​
And of course... if it can't do it, show it too. There is no shame in having limitations, and it would let others know what to expect. I think we can all agree this isn't a competition right now: no one is looking for *the best model*, we're all just looking for models that either fit our needs or seem to break through current LLMs issues.
At this point there are literally thousands of LLM models, and probably three to five new ones every week. No one can keep up with that, especially since we have to download each model to test it ourselves (and people will be limited by download speed OR by storage limits).
The best demonstration is a literal online demo, but something like what Anthropic did with Claude 100K should be the bare minimum... and maybe even mandatory? I also encourage the community to share their own experiences with models.
Feedback is always very valuable, not just for the creators, but for the potential users! | 2023-07-23T07:30:46 | https://www.reddit.com/r/LocalLLaMA/comments/157876s/suggestion_whenever_you_release_a_model_please/ | LuluViBritannia | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 157876s | false | null | t3_157876s | /r/LocalLLaMA/comments/157876s/suggestion_whenever_you_release_a_model_please/ | false | false | self | 39 | null |
Poisoned models | 109 | It all started when someone has posted about "a new shiny Russian speaking model":
[https://www.reddit.com/r/LocalLLaMA/comments/154lcbg/rugpt\_35\_13b\_a\_new\_model\_for\_russian\_language/](https://www.reddit.com/r/LocalLLaMA/comments/154lcbg/rugpt_35_13b_a_new_model_for_russian_language/)
I dove a little deeper into what this model does and what it is related to. It turns out it only outputs pro-Russian propaganda on any political question (it is also quite "hot" and random, giving quite different outputs for the same question), and the answers I got on my first try made my blood freeze in my veins.
I also found out this model is related to an R&D department of 'Sber', Russia's biggest state-owned bank.
Now there is a discussion on huggingface [https://huggingface.co/ai-forever/ruGPT-3.5-13B/discussions/2#64bcadbb4d2052b1aa08fa8d](https://huggingface.co/ai-forever/ruGPT-3.5-13B/discussions/2#64bcadbb4d2052b1aa08fa8d)
The creators and some proponents think it is fine that their model spreads lies and propaganda, because "it just learned from Russian-speaking sources" (but actually they don't use all Russian-speaking sources, only the ones they want).
What do you think of it? Should models output information as if a language were a source of truth?
Should those models be marked as unreliable?
I think there is at leas something that should be done, those models normalize atrocities. | 2023-07-23T06:38:59 | https://www.reddit.com/r/LocalLLaMA/comments/15779no/poisoned_models/ | uti24 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15779no | false | null | t3_15779no | /r/LocalLLaMA/comments/15779no/poisoned_models/ | false | false | self | 109 | {'enabled': False, 'images': [{'id': '_rEX1xvwdv17x6NFAWQpYFNONQ0BKA5Qw0Eo0JX0zWU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=108&crop=smart&auto=webp&s=17279fa911dbea17f2a87e187f47ad903120ba87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=216&crop=smart&auto=webp&s=12bf202fa02a8f40e2ad8bab106916e06cceb1b4', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=320&crop=smart&auto=webp&s=90ff2c682d87ee483233b1136984d608f8b5c5c3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=640&crop=smart&auto=webp&s=2bc95e1b2395af837db2786db2f84b9c7f86370a', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=960&crop=smart&auto=webp&s=67e903b600e020b7bcf93fc2000ed3cf95cb4dbb', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?width=1080&crop=smart&auto=webp&s=b4cb1ebc087816d879ac777ed29f74d454f35955', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/PUH10BjM5sFuvroA3mzy6AwPeP3T-kvtFLWUoE6Dho8.jpg?auto=webp&s=a4fb691b1b470f21e5ef01685267735cb15b7735', 'width': 1200}, 'variants': {}}]} |
Glossary for machine learning and looking for something LLM specific. | 5 | Ironically, search results are often spammed with AI-generated websites. I found these two to be helpful, and it would be great if someone found a language-model-specific glossary:
[https://developers.google.com/machine-learning/glossary](https://developers.google.com/machine-learning/glossary)
[https://ml-cheatsheet.readthedocs.io/en/latest/glossary.html](https://ml-cheatsheet.readthedocs.io/en/latest/glossary.html)
I tried searching but ironically the results are spammed with AI generated webpages. There are a lot of good learning material, but that is not very helpful when you quickly want to look up some terms. | 2023-07-23T06:28:08 | https://www.reddit.com/r/LocalLLaMA/comments/15772a1/glossary_for_machine_learning_and_looking_for/ | wreckingangel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15772a1 | false | null | t3_15772a1 | /r/LocalLLaMA/comments/15772a1/glossary_for_machine_learning_and_looking_for/ | false | false | self | 5 | {'enabled': False, 'images': [{'id': 'FJ6t7xCOQEzHctf_w44RxoBtAQtkMgjGAB17x0cwcuA', 'resolutions': [{'height': 56, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=108&crop=smart&auto=webp&s=4c2b9f1a71e813d704048454fc460d036ecc0c02', 'width': 108}, {'height': 113, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=216&crop=smart&auto=webp&s=f9fa9e7150c80e3bdbe8294cfe99e1845a0c9284', 'width': 216}, {'height': 168, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=320&crop=smart&auto=webp&s=b5b275c34ffd60c9aed2980ab62fb22b08895e2a', 'width': 320}, {'height': 336, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=640&crop=smart&auto=webp&s=412033d6c01d582440c16ceee1cc3e1b1f17574e', 'width': 640}, {'height': 504, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=960&crop=smart&auto=webp&s=680fc3b9cf939766d899201ff3a14028d2115c85', 'width': 960}, {'height': 567, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?width=1080&crop=smart&auto=webp&s=14eea76e4160100704860d69f0e91dee68b98323', 'width': 1080}], 'source': {'height': 630, 'url': 'https://external-preview.redd.it/be5K4QnmGZ5YLbWRV9p_o6hTh22zgV51EaUOBWfhyyg.jpg?auto=webp&s=45b1d56919552bb8c20cc26727fdf15db58741bc', 'width': 
1200}, 'variants': {}}]} |
Trying to create my own dataset but have no clue if I am doing this right | 7 | Hello I would love some feedback on if this is right or would even work. I made a dataset and have 2 versions, one my new data by itself and the other version merged with guanaco unchained. I think the merged one might work better but would love to get some thoughts from those that know more than me. I was thinking I would love to try to train like the 7B version on this data to create a LLM to serve as a base for a Star Wars Droid, eventually to turn into a toy/robot companion. Here is a link: [https://huggingface.co/datasets/Azimn/Star\_Wars\_Dataset\_for\_Droid\_responces](https://huggingface.co/datasets/Azimn/Star_Wars_Dataset_for_Droid_responces) | 2023-07-23T06:27:07 | https://www.reddit.com/r/LocalLLaMA/comments/15771o7/trying_to_create_my_own_dataset_but_have_no_clue/ | Azimn | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15771o7 | false | null | t3_15771o7 | /r/LocalLLaMA/comments/15771o7/trying_to_create_my_own_dataset_but_have_no_clue/ | false | false | self | 7 | {'enabled': False, 'images': [{'id': '6vvqzdBUjmmAvXRK7Dm-Odo44XAZxVpwJ75XZklLQFM', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=108&crop=smart&auto=webp&s=f22e6e6629e65e02ea642a935cfb9cdf23cddf23', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=216&crop=smart&auto=webp&s=09244ec9062adb649bff15590dcf72d97e81603f', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=320&crop=smart&auto=webp&s=7e30ef4a6951e90bcad7e62f8ac7ed556cc6ea11', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=640&crop=smart&auto=webp&s=8eb5e37933d5b6afabb8273636f18b9fb73992e8', 'width': 640}, {'height': 518, 'url': 
'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=960&crop=smart&auto=webp&s=bae3e322eb6e066275c0b2d9825cceb712b6c738', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?width=1080&crop=smart&auto=webp&s=30d8a603e7a7664261bdfa7ad1d40fc3d8b63491', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9O_cr7kmXQn2ItxCr_Kt449FHxWH1l9rgqt7Bp1U1Dc.jpg?auto=webp&s=9e071c1dee48c8e99d818b03a1d62bb21b498661', 'width': 1200}, 'variants': {}}]} |
Dockerized ubuntu evironment for Llama with llama.cpp | 1 | [removed] | 2023-07-23T06:22:49 | https://github.com/snakewizardd/llamaDocker | Happy_Chicken9835 | github.com | 1970-01-01T00:00:00 | 0 | {} | 1576yy2 | false | null | t3_1576yy2 | /r/LocalLLaMA/comments/1576yy2/dockerized_ubuntu_evironment_for_llama_with/ | false | false | 1 | {'enabled': False, 'images': [{'id': 'v21QP4TtqzczZ2cBgxEDtaaXta3l4lK3-RPUPyRHEnw', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=108&crop=smart&auto=webp&s=0235741c27e557526711b93a953ebc06f0c5cde0', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=216&crop=smart&auto=webp&s=c02c3e04b6e5f5619c68354336ea98930d56904a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=320&crop=smart&auto=webp&s=8e51acf2ecd3c4fc1b7c688094cf4adcc7c3c57d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=640&crop=smart&auto=webp&s=6bf3c194bd39cff74a3f175994dbc14bae50bfc9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=960&crop=smart&auto=webp&s=6e8b403b868655e413825ee5ccf2af5861452d14', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?width=1080&crop=smart&auto=webp&s=3809579cebb7cbf308870d2d02513d170d4b126c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M55cY8HEyRMear_v0IejrxpBxd4X6LBM5CIolqrjDFw.jpg?auto=webp&s=8b1ba17283b5232cac0ce90469c9e99ccc360129', 'width': 1200}, 'variants': {}}]} | |
How do you guys learn how to do any of these things? | 34 | I've gotten into using localLLMs as a way to curb the censoring on chatGPT but it seems like there is so much more to it than just downloading the LLM on my webui. I have no idea how to start learning about any of this but I would definitely love to! From what I've gathered apparently there are ways to train a model?? which I'd totally be interested in as I could see applications in training a smaller model with more intentional training sets in my daily life. So I guess my question is where do you think is a good place for someone like me to start learning about all of this and how it works? Could be like a long guide online or a YouTube course or smth; any resources would be appreciated!! | 2023-07-23T06:13:31 | https://www.reddit.com/r/LocalLLaMA/comments/1576sy2/how_do_you_guys_learn_how_to_do_any_of_these/ | MyLaptopSpoil | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1576sy2 | false | null | t3_1576sy2 | /r/LocalLLaMA/comments/1576sy2/how_do_you_guys_learn_how_to_do_any_of_these/ | false | false | self | 34 | null
Can't run Llama 2 GPTQ on KoboldAI; is there a fix and, if not, are there more updated alternatives? | 1 | I've been trying to get KoboldAI to run for the past couple of days using the 7b GPTQ version of Llama 2 from [here](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ). Initially I was using the working branch I had, which I believe was the 0cc4m branch on the United version that supports GPTQ. Although it was able to see the model, it was unable to load it due to the following error
`"OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory E:\KoboldAI-4bit\KoboldAI\models\TheBloke_Llama-2-7B-GPTQ."`
as if it is expecting a GGML model (because it is looking for a .bin file). I also tried redownloading KoboldAI from the main GitHub page, then updating to United, reinstalling all dependencies, making sure "4 bit" quantization was selected, and it threw the exact same error.
I feel like at this point I shouldn't have to be trying to rename models to get them to work, since Ooba just loads stuff without complaint. But are there any fixes for this? And if not, are there more updated interfaces similar to KoboldAI that will work properly and aren't months out of date? | 2023-07-23T05:18:19 | https://www.reddit.com/r/LocalLLaMA/comments/1575szm/cant_run_llama_2_gptq_on_koboldai_is_there_a_fix/ | Gyramuur | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1575szm | false | null | t3_1575szm | /r/LocalLLaMA/comments/1575szm/cant_run_llama_2_gptq_on_koboldai_is_there_a_fix/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'scnkn175QpfbMTdtoYxWh-I3soYhx9pJiHmW5tQQPMY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=108&crop=smart&auto=webp&s=306a702906190a7340924eac46d7feb1f3eec45e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=216&crop=smart&auto=webp&s=287c3b0788fac7766494f5d0e851ac89ff9d8ac9', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=320&crop=smart&auto=webp&s=625a75ac60be2245e5230d54df27456c7107ec4b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=640&crop=smart&auto=webp&s=83a9e51bb02d1141095b45fc8a465ecd586bdf97', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=960&crop=smart&auto=webp&s=3dd7e9d25663fc2ba02b34fdd652a431bbc9b036', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?width=1080&crop=smart&auto=webp&s=8b4b881bb48884dc6af6951b1ce4443f4bdcf11d', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/MIHs_hUb2M9gxc4QGwbCk4JA2r5erDBnTB8Tbg735Rs.jpg?auto=webp&s=dbb9e744ef3627d636c68c84c199aa1f0dafb845', 'width': 1200}, 'variants': {}}]} |
Help needed with the approach | 0 | My boss asked me this:
"Attached is the pdf file showing the financial statements of our branches and their competitors. I want to build a GPT model referencing it.
Aim is to
Do financial analysis of multiple financial statements using the attached documents"
Do you understand what he's trying to do, and can you guide me through the steps to achieve this?
Thanks. | 2023-07-23T05:15:21 | https://www.reddit.com/r/LocalLLaMA/comments/1575r2w/help_needed_to_the_approach/ | paulus_aurellius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1575r2w | false | null | t3_1575r2w | /r/LocalLLaMA/comments/1575r2w/help_needed_to_the_approach/ | false | false | self | 0 | null |
In just a week or so, there are so many new models on the new LLM leaderboard. What's your thought? | 35 | Just a few weeks ago, falcon-40b is at the top and became the most mentioned model apart from GPT4. Now suddenly so many new models came out. Some contain only 30b parameters. What's your thoughts and how do you comprehend the constantly improving models? Also, some said falcon-40b-instruct is comparable to the first-generation ChatGPT, then what about those at the top of the current leaderboard? | 2023-07-23T03:54:56 | https://www.reddit.com/r/LocalLLaMA/comments/15748wc/in_just_a_week_or_so_there_are_so_many_new_models/ | PrestigiousPancake | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 15748wc | false | null | t3_15748wc | /r/LocalLLaMA/comments/15748wc/in_just_a_week_or_so_there_are_so_many_new_models/ | false | false | self | 35 | null |
function calling with llama-2? | 29 | Have been looking into the feasibility of operating llama-2 with agents through a feature similar to OpenAI's function calling.
Have had very little success through prompting so far :( Just wondering if anyone has had a different experience, or if we might have to go down the fine-tune route as OpenAI did.
Help me finetune Llama-2-7B with qlora | 2 | I am trying to finetune Llama-2-7B with this script https://github.com/artidoro/qlora/blob/main/scripts/finetune_guanaco_7b.sh
Please help me with
--source_max_len 16 \
--target_max_len 512 \
What should I use? | 2023-07-23T03:25:41 | https://www.reddit.com/r/LocalLLaMA/comments/1573pf2/help_me_finetune_llama27b_with_qlora/ | Sufficient_Run1518 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1573pf2 | false | null | t3_1573pf2 | /r/LocalLLaMA/comments/1573pf2/help_me_finetune_llama27b_with_qlora/ | false | false | self | 2 | {'enabled': False, 'images': [{'id': 'tozPKa5-9dLqIt7uc63UE2WcaocD0i-HTS-qiRZnRaI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=108&crop=smart&auto=webp&s=99fc2e55c75d97c9f9a2e2f7817c09a1caedf28f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=216&crop=smart&auto=webp&s=020ebf12ccab8ab4c16991b3dcf260518a78a2c4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=320&crop=smart&auto=webp&s=26a9fbd5128c84c497e0687ddc4287dc30840e22', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=640&crop=smart&auto=webp&s=449c6165657467a17282d5a663569724b15fd539', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=960&crop=smart&auto=webp&s=732f2cca8ca4567b8be003613a027a8b990cee9b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?width=1080&crop=smart&auto=webp&s=0c72f22738b7bfdabbee3cb4726d4e621faf0de7', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/14ik5ZS42nAjZ_xDltNrlMUhTu67KXr4MltxlScrElU.jpg?auto=webp&s=f29bcafa2baaf7accaba533e3992bd5c2fa26529', 'width': 1200}, 'variants': {}}]} |
AlpacaCielo 13b, Llama-2 based model for storytelling and roleplay. | 63 | Back in the "old days", I really enjoyed creative models such as Alpasta, so I wanted to bring a similar experience to Llama-2. So I created AlpacaCielo. It is a triple merge of Nous Hermes + Guanaco + Storytelling, and is an attempt to get the best of all worlds to get a smart & creative model. I've done minimal testing, but so far it works pretty well and I much prefer its outputs over base Hermes. More info on huggingface: [AlpacaCielo](https://huggingface.co/totally-not-an-llm/AlpacaCielo-13b). And yes, it is uncensored from my testing, but you still might get some OpenAssistant disclaimers if you ask about anything too crazy. Let me know what you think!
ggml quants are also available. If anyone with more vram than me could create gptq's that would be amazing, but if not, I'll have them up later. | 2023-07-23T01:50:22 | https://www.reddit.com/r/LocalLLaMA/comments/1571srk/alpacacielo_13b_llama2_based_model_for/ | pokeuser61 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1571srk | false | null | t3_1571srk | /r/LocalLLaMA/comments/1571srk/alpacacielo_13b_llama2_based_model_for/ | false | false | self | 63 | {'enabled': False, 'images': [{'id': 'It9u9WCOskxFv8Pt_vUgR2w-W18_WAvJDGUkxKUqGOY', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=108&crop=smart&auto=webp&s=3fe55b63d93fd8686ebdd6535d27871db01d0575', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=216&crop=smart&auto=webp&s=ceaa1fe8025e09470409ec417546d7ed4c95583d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=320&crop=smart&auto=webp&s=fdd67e729647abad279004c101a037a5f8913885', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=640&crop=smart&auto=webp&s=cf311573e56744d5aaf060149573d5852e30fcb4', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=960&crop=smart&auto=webp&s=a0c5e5d0da717fcf67fd84c682a07752bc8cd449', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?width=1080&crop=smart&auto=webp&s=eef778f572dba90dda7ef5c00ffa6ff0629ecb87', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Mmqv2FQGy4ucbBCg8b_xXl1KyX1WFEa6lQdaUFTp4jk.jpg?auto=webp&s=d3e2970e6bc6687b9293180dac059283c4e0f924', 'width': 1200}, 'variants': {}}]} |
Is it possible we could see fine tuned models of GPT-3.5-Turbo and GPT-4 that are more uncensored and raw, similar to what we’ve seen with previous LLaMA/Vicuna models? | 34 | 2023-07-23T01:26:45 | sardoa11 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1571b84 | false | null | t3_1571b84 | /r/LocalLLaMA/comments/1571b84/is_it_possible_we_could_see_fine_tuned_models_of/ | false | false | 34 | {'enabled': True, 'images': [{'id': '8Jw3JjnPs2SGIGOsAo5fWaS64hG6hEwBSz2K1RYNnHA', 'resolutions': [{'height': 41, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=108&crop=smart&auto=webp&s=391c36e0501e90a7a10b8fc5d73b3c38df2044ec', 'width': 108}, {'height': 83, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=216&crop=smart&auto=webp&s=3465bc051a52ae3bc546f1d4461bafba0bcfb8eb', 'width': 216}, {'height': 124, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=320&crop=smart&auto=webp&s=3d90d60b764e1ff87a6b1affe537e2a876f15dd4', 'width': 320}, {'height': 248, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=640&crop=smart&auto=webp&s=d80c842d949fd8b0b792c352b7f9a83dee76ceca', 'width': 640}, {'height': 372, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=960&crop=smart&auto=webp&s=be7c037b3469b41d023ed2c6af277cc09662801a', 'width': 960}, {'height': 418, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?width=1080&crop=smart&auto=webp&s=cec041e2ca63036dc2f1aed0dc3d5fcc86d4b783', 'width': 1080}], 'source': {'height': 457, 'url': 'https://preview.redd.it/k2ph2hahamdb1.jpg?auto=webp&s=59653b9b54b69e7e90f0b71597d4ad0e5ae3740f', 'width': 1179}, 'variants': {}}]} | |||
Having trouble with Llama2 Guanaco 7B / incoherent responses | 7 | **Guanaco QLoRA 7B (Q4\_K\_M)**
Hi guys, I'm trying to create a character that responds in character for roleplay. My model instruction is Faraday's default:
Text transcript of a never-ending conversation between {user} and {character}. In the transcript, gestures and other non-verbal actions are written between asterisks (for example, *waves hello* or *moves closer*).
I set an example dialogue and character persona, but it just goes off the rails and is an incoherent mess: misspelled words and long run-on sentences that quickly stop making any sense. Is this expected for a local model?
What parameters work best for 8k context? | 1 | [removed] | 2023-07-23T00:23:48 | https://www.reddit.com/r/LocalLLaMA/comments/156zy6i/what_parameters_work_best_for_8k_context/ | EcstaticVenom | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156zy6i | false | null | t3_156zy6i | /r/LocalLLaMA/comments/156zy6i/what_parameters_work_best_for_8k_context/ | false | false | self | 1 | null |
Yet another QLoRA tutorial | 92 | I fine-tuned OpenLLama 7B using QLoRA on transcripts of the Lex Fridman podcast to generate a fictional conversation of Lex with Barack Obama. This is not a model ready for use but rather a valuable step-by-step explanation illustrating the process, driven by the desire to learn more about training large language models at home.
Given that it is about QLoRA fine-tuning of OpenLLaMA and uses none of the latest long-context techniques or Llama 2, it feels outdated already, but maybe some people will still find it useful.
[https://medium.com/@geronimo7/from-transcripts-to-ai-chat-an-experiment-with-the-lex-fridman-podcast-3248d216ec16](https://medium.com/@geronimo7/from-transcripts-to-ai-chat-an-experiment-with-the-lex-fridman-podcast-3248d216ec16) | 2023-07-22T23:23:30 | https://www.reddit.com/r/LocalLLaMA/comments/156ylgf/yet_another_qlora_tutorial/ | HatEducational9965 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156ylgf | false | null | t3_156ylgf | /r/LocalLLaMA/comments/156ylgf/yet_another_qlora_tutorial/ | false | false | self | 92 | {'enabled': False, 'images': [{'id': '3fK4xu3PY7jH5gSmbBWirPCF_DDbwgIBR0EB9M-4Sio', 'resolutions': [{'height': 118, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=108&crop=smart&auto=webp&s=8d08817044b1fb74fa8d7d8bec4965d31d564ebf', 'width': 108}, {'height': 236, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=216&crop=smart&auto=webp&s=1ebe6f69840edbb47d4d85bc7edb2c3894ca8125', 'width': 216}, {'height': 350, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=320&crop=smart&auto=webp&s=dc5059e218f5e27057ec8b7dcac442da5af76ead', 'width': 320}, {'height': 700, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?width=640&crop=smart&auto=webp&s=d9d9ac9a41b0ceaf415ed46a181444589f4d3a51', 'width': 640}], 'source': {'height': 1024, 'url': 'https://external-preview.redd.it/oA3IuQ1J6tZpBRPqXZWVixunfwiwDi3u7H84-d8b6Lk.jpg?auto=webp&s=ac503d42696e1bc0886e562db65e30d3acb8b45b', 'width': 936}, 'variants': {}}]} |
How can I start learning about LLMs ? | 23 | I am intrigued by LLMs, Deep Learning, and Machine Learning and I would really like to learn how to launch a model, fine-tune it, or embed it but I feel a bit lost, do you have any tips for getting started or online courses that can help me achieve this goal? | 2023-07-22T21:52:40 | https://www.reddit.com/r/LocalLLaMA/comments/156wfsp/how_can_i_start_learning_about_llms/ | redule26 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156wfsp | false | null | t3_156wfsp | /r/LocalLLaMA/comments/156wfsp/how_can_i_start_learning_about_llms/ | false | false | self | 23 | null |
is RTX 3090 used for mining good for Local LLMs? | 23 | I see a vendor in my city with 10 used RTX 3090 cards, 2 used for mining and 8 used in gaming cafes. Prices range from $370 to $430. Are these good for running big LLMs locally? I was thinking of getting two, but I'm not sure how they'll perform, as 30B models and above are very memory intensive.
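As a rough sanity check on the memory question, back-of-the-envelope math only; real usage adds KV cache, activations, and runtime overhead, and `vram_gb` below is just an illustrative helper, not from any library:

```python
# Back-of-the-envelope VRAM estimate for quantized model weights only.
# Real usage is higher: KV cache, activations, and framework overhead.
def vram_gb(n_params_billions, bits_per_weight):
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1024**3

print(round(vram_gb(33, 4), 1))  # a 33B model at 4-bit: weights alone ~15.4 GiB
print(round(vram_gb(70, 4), 1))  # a 70B model at 4-bit: weights alone ~32.6 GiB
```

By that math a single 24GB 3090 comfortably fits 4-bit 30B-class weights, while 70B-class weights need a pair of cards, which matches the idea of buying two.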
any thoughts? and thank you | 2023-07-22T21:37:31 | https://www.reddit.com/r/LocalLLaMA/comments/156w2jp/is_rtx_3090_used_for_mining_good_for_local_llms/ | Dadjann | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156w2jp | false | null | t3_156w2jp | /r/LocalLLaMA/comments/156w2jp/is_rtx_3090_used_for_mining_good_for_local_llms/ | false | false | self | 23 | null |
Has anyone successfully fine-tuned MPT-7B? | 4 | I'm new to all this LLM stuff and I've been trying to see if I can adapt the Hugging Face [language modeling example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/transformers_doc/en/pytorch/language_modeling.ipynb) to use MPT-7B, but it runs out of memory on the training step basically no matter what I do--in regular Colab, using Colab Pro, and using various configurations of GPUs in RunPod (I've tried pods ranging from one 40GB A100 to 4 80GB A100s).
Has anyone fine-tuned MPT-7B successfully? What system configuration did you use? Did you have to do anything special in code, like using PyTorch's DistributedDataParallel or anything in that vein? | 2023-07-22T20:00:50 | https://www.reddit.com/r/LocalLLaMA/comments/156tppl/has_anyone_successfully_finetuned_mpt7b/ | Proeliata | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156tppl | false | null | t3_156tppl | /r/LocalLLaMA/comments/156tppl/has_anyone_successfully_finetuned_mpt7b/ | false | false | self | 4 | {'enabled': False, 'images': [{'id': 'nkhh65ujo5BznFJFojoMPaKjGuLSpPj6KGhRov-ykOg', 'resolutions': [{'height': 108, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=108&crop=smart&auto=webp&s=4b647239f77bf713f4a6209cfa4867351c055fd9', 'width': 108}, {'height': 216, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?width=216&crop=smart&auto=webp&s=7f4234ff3f4f4ebd7f77236dedb03a2faee3e04a', 'width': 216}], 'source': {'height': 260, 'url': 'https://external-preview.redd.it/MrcDZx2izDY9ERwgWmMS-Hm2M3GEKZgeYLDszSh-KrQ.jpg?auto=webp&s=73eb91ea5a5347f216c0f0c4d6796396826aae49', 'width': 260}, 'variants': {}}]} |
Looking for resources to fine-tune a model "from scratch" | 1 | Hi!
My friend and I are looking to explore all these topics around LLM fine-tuning, quantization, etc., but we don't really know where to start.
We have a solid understanding of the machine learning theory side (LLMs included) and are good with Python. So we'd like to make some kind of notebook where we import a base model, code the fine-tuning function, etc.
We're just trying to learn all these new techniques by implementing them from scratch. So we'd really appreciate it if there are any tutorials using Python 🤗
We don't really need tutorials; if anyone knows what papers we can read and follow, we'd be grateful! We know some papers that we should look into: Instruction Tuning/RLHF, QLoRA, etc.
Did GPT-4 really get worse? We built an evaluation framework so you can find out | 1 | [removed] | 2023-07-22T19:37:03 | https://www.reddit.com/r/LocalLLaMA/comments/156t4rl/did_gpt4_really_get_worse_we_built_an_evaluation/ | hegel-ai | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156t4rl | false | null | t3_156t4rl | /r/LocalLLaMA/comments/156t4rl/did_gpt4_really_get_worse_we_built_an_evaluation/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'jyQQDsewOakYxiv0g_fyaC75u1-VWTozH-60lliqvpI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=108&crop=smart&auto=webp&s=4ee94dbf11c076abadc54e222372ea5dd992be04', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=216&crop=smart&auto=webp&s=d9c919acc6acfee6254fa171804052ec4967a1fb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=320&crop=smart&auto=webp&s=78e85a6de1d6e7c8ff725daa00a57bbcbf9b14f6', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=640&crop=smart&auto=webp&s=33be0729d50fdfe581638fe49f1c75747b8f1c58', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=960&crop=smart&auto=webp&s=434f5e2db64817e52174aff4697ee842a4cd1985', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?width=1080&crop=smart&auto=webp&s=b9863320174050f0f49f5340387053b6f82f7bf9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/VrmXgXNwPE3WCG8OLPwgUvz020iYf02XCTI3ic85Qoc.jpg?auto=webp&s=680209a0055c22ea29761912c76c5576da1dd495', 'width': 1200}, 'variants': {}}]} |
GPU requirements for over 2048 context/tokens | 42 | Are there any standardised lists which state how much VRAM is needed for different combinations?
For example
13b with 2048 context
13b with 4096 context
13b with 8192 context
30b with 2048 context
Etc.... | 2023-07-22T18:56:45 | https://www.reddit.com/r/LocalLLaMA/comments/156s5jg/gpu_requirements_for_over_2048_contexttokens/ | Useful-Command-8793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156s5jg | false | null | t3_156s5jg | /r/LocalLLaMA/comments/156s5jg/gpu_requirements_for_over_2048_contexttokens/ | false | false | self | 42 | null |
Hey, has anyone finetuned Llama2 for fiction writers? if not could someone point me in the direction of a tutorial so I can do it? | 1 | Title says it all really. | 2023-07-22T18:39:24 | https://www.reddit.com/r/LocalLLaMA/comments/156rqgq/hey_has_anyone_finetuned_llama2_for_fiction/ | Benedictus111 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156rqgq | false | null | t3_156rqgq | /r/LocalLLaMA/comments/156rqgq/hey_has_anyone_finetuned_llama2_for_fiction/ | false | false | self | 1 | null |
Exploring the guardrails on the censored model, must have hit a nerve | 58 | 2023-07-22T17:57:40 | Kelvin_451 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 156qqd2 | false | null | t3_156qqd2 | /r/LocalLLaMA/comments/156qqd2/exploring_the_guardrails_on_the_censored_model/ | false | false | 58 | {'enabled': True, 'images': [{'id': 'NxdvrlYLcaDaUfpYR5KCJuY2LdPpNSiZmACpV9hH2wg', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=108&crop=smart&auto=webp&s=82e9fdd7e1e2660d2c6f4d14d51fc786c5771078', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=216&crop=smart&auto=webp&s=f97c88e7c93f2216e818dfc00f9a6b8805e2de10', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?width=320&crop=smart&auto=webp&s=8d6ee3a234c22d6257cde6edcc696dc41aca3709', 'width': 320}], 'source': {'height': 732, 'url': 'https://preview.redd.it/xp8jmsyu1kdb1.png?auto=webp&s=ec6f1e1a2429afa0a7f214531a983ec62ed0e585', 'width': 338}, 'variants': {}}]} | |||
What affect the speed of the gptj model loading? | 1 | I ran privateGPT on both my Ubuntu desktop and an AWS EC2 instance, just for comparison. Even if I use a beefy instance, which is much more powerful than the desktop on the aspects of # of cpus, and memory size, this model loading step is much slower on the AWS instance than it is on my desktop, like at least 5 times slower. So what factors affect this model loading step? Thanks! | 2023-07-22T17:46:41 | old_enough_to_drink | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 156qh5i | false | null | t3_156qh5i | /r/LocalLLaMA/comments/156qh5i/what_affect_the_speed_of_the_gptj_model_loading/ | false | false | 1 | {'enabled': True, 'images': [{'id': 'oJ23H0AZSkDJba-ksQZiK4wxbFE3GXdZjJhjQUnJvxc', 'resolutions': [{'height': 77, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=108&crop=smart&auto=webp&s=56bc9e9ebc09fd20ced5646c3880e929c900e316', 'width': 108}, {'height': 155, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=216&crop=smart&auto=webp&s=e936d25c8ff7da79ab16a447a99463eab4a1ec5b', 'width': 216}, {'height': 229, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=320&crop=smart&auto=webp&s=3c950605277322c29b1a513764f397a5b7c0c796', 'width': 320}, {'height': 459, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=640&crop=smart&auto=webp&s=3c86306476ee36b6c7cf74e93858bc383f813b72', 'width': 640}, {'height': 689, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=960&crop=smart&auto=webp&s=0878392d6e5925044df4d9df2d3dc80330a9c9a0', 'width': 960}, {'height': 776, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?width=1080&crop=smart&auto=webp&s=b036d10fb4a2c72a64d3a55595084e53c1ae2d3c', 'width': 1080}], 'source': {'height': 2172, 'url': 'https://preview.redd.it/jme3qtda0kdb1.jpg?auto=webp&s=1b0aabb7dc2c87a6082175e075b5c1c46871e59a', 'width': 3022}, 'variants': {}}]} | ||
Could LLaMA 2 be available on mobile for roleplay? | 0 | or just be available without needing 8.06 gb ram on the computer?? (faraday) | 2023-07-22T17:34:26 | https://www.reddit.com/r/LocalLLaMA/comments/156q6i2/could_llama_2_be_available_on_mobile_for_roleplay/ | rebeldocs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156q6i2 | false | null | t3_156q6i2 | /r/LocalLLaMA/comments/156q6i2/could_llama_2_be_available_on_mobile_for_roleplay/ | false | false | self | 0 | null |
Text-to-SQL WizardCoder - HF Spaces Demo | 23 | 2023-07-22T17:24:22 | https://huggingface.co/spaces/richardr1126/sql-skeleton-wizardcoder-demo | richardr1126 | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 156pxot | false | null | t3_156pxot | /r/LocalLLaMA/comments/156pxot/texttosql_wizardcoder_hf_spaces_demo/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'UgzQg7MYQ-HCS6Wx2TSROSsYoD9tNc4fxeYw4Id9Xfc', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=108&crop=smart&auto=webp&s=ab1db293f3359644a5cfd88d46f4d49fc00fc8f4', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=216&crop=smart&auto=webp&s=b7fdc5f378e3417ea6b42e5bb9e8692615be3296', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=320&crop=smart&auto=webp&s=6d3cadd322803f3e9b528b8e17ae6d2517a3518b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=640&crop=smart&auto=webp&s=9c90da0712f08a20370fd7e409f7141637d7b574', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=960&crop=smart&auto=webp&s=19c513121e35a7d7cd34827415482f426280d7da', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?width=1080&crop=smart&auto=webp&s=d27e089ee4bca5b1b1a4f5e75954ec9138ef5edb', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/8T2R068MEOUAXIA9iLnneoz1TqDsgGABSpZLJ1w_BZY.jpg?auto=webp&s=3e76b5792a80fdc858279a666aac62bf0a1eb344', 'width': 1200}, 'variants': {}}]} | ||
How to keep conversation going in llama2-chat-hf? | 1 | Hi, I want to build a local chat bot with functionality similar to the ChatGPT website. I just started my AI journey, so I need some help here; I didn't find any information on Google.
I have a conversation window with a conversation going on, but how do I force the model to 'memorize' previous questions and answers? Should I fine-tune it every time?
Right now the only idea that comes to my mind is to 'collect' the whole conversation and feed it into the model every time, like this:
```
# assumes `model` and `tokenizer` are already loaded (e.g. via transformers)
conversation_history = []

while True:
    new_question = input("You: ")
    conversation_history.append(new_question)
    # re-encode the whole accumulated history so the model sees earlier turns
    input_ids = tokenizer.encode(" ".join(conversation_history), return_tensors='pt').cuda()
    output = model.generate(input_ids=input_ids, temperature=0.7, max_length=512)
    model_response = tokenizer.decode(output[0])
    print("Model:", model_response)
    conversation_history.append(model_response)
```
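For reference, a minimal sketch of the same re-feed-the-history idea expressed with Llama-2-chat's `[INST]`/`<<SYS>>` prompt convention; `build_llama2_prompt` is an illustrative name here, not a library function:

```python
# Illustrative sketch (not a library API): rebuild the full prompt from the
# stored history each turn, using Llama-2-chat's [INST]/<<SYS>> convention.
def build_llama2_prompt(history, system_prompt="You are a helpful assistant."):
    """history: list of (user_msg, model_msg) pairs; model_msg is None for
    the turn that still needs an answer."""
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, (user_msg, model_msg) in enumerate(history):
        if i > 0:
            prompt += "<s>[INST] "
        prompt += f"{user_msg} [/INST]"
        if model_msg is not None:
            prompt += f" {model_msg} </s>"
    return prompt

# The resulting string can be tokenized and passed to model.generate() as above.
print(build_llama2_prompt([("Hello!", "Hi there."), ("What is 2+2?", None)]))
```

With this, only the model's newly generated tokens after the final `[/INST]` would be appended back into the history, rather than the full decoded output.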
Can someone help me? | 2023-07-22T17:08:09 | https://www.reddit.com/r/LocalLLaMA/comments/156pjn6/how_to_keep_conversation_going_in_llama2chathf/ | Jack_Hackerman | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156pjn6 | false | null | t3_156pjn6 | /r/LocalLLaMA/comments/156pjn6/how_to_keep_conversation_going_in_llama2chathf/ | false | false | self | 1 | null |
Where to find 'adapter_config.json' ? | 1 | Hello guys
On the [**open\_llm\_leaderboard**](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) there are many interesting 30b LoRAs with extremely good performance.
But HOW CAN I LOAD THEM without adapter\_config.json?
I am really sorry, I am new to the field, but if I haven't misunderstood, with the correct base model, those .bin LoRAs I can download should be loadable, right?
For those high-rank LoRA models, I cannot find an adapter\_config.json among their files. Can I create one, or do I have to retrain them? Can I retrain them with [**text-generation-webui**](https://github.com/oobabooga/text-generation-webui/tree/main)?
​ | 2023-07-22T16:16:27 | https://www.reddit.com/r/LocalLLaMA/comments/156o94b/where_to_find_adapter_configjson/ | JohnSmith004 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156o94b | false | null | t3_156o94b | /r/LocalLLaMA/comments/156o94b/where_to_find_adapter_configjson/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'EN0-abblERL52DxeoNzcxdkhvXEwLdZMJTS58Umjs64', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=108&crop=smart&auto=webp&s=6fbb309f983333cbaf528bd40f8d6ffb39877704', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=216&crop=smart&auto=webp&s=1ae10c5a53638209dee07b017628d2a1fadc8d05', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=320&crop=smart&auto=webp&s=cf36565d3bac3086aaea4458c31609ff1b2c00b3', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=640&crop=smart&auto=webp&s=8e182cefcf8da97d7b4369734149986feca334e5', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=960&crop=smart&auto=webp&s=7699d0ad09185e2f560115cae5cb71e907073327', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?width=1080&crop=smart&auto=webp&s=7b11f6f2294899964ec8ed081777f4b6e19723b6', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/pnEIDVgN3O3UZSEZF8G101Cpm5FLu9i3k_abBlep_2c.jpg?auto=webp&s=81db4d9e1dd01a76f499e499f78aed3478ae6658', 'width': 1200}, 'variants': {}}]} |
Llama2-22b, a model merge tuned on RedPajama | 79 | 2023-07-22T16:01:09 | https://huggingface.co/chargoddard/llama2-22b | AzerbaijanNyan | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 156nvfk | false | null | t3_156nvfk | /r/LocalLLaMA/comments/156nvfk/llama222b_a_model_merge_tuned_on_redpajama/ | false | false | 79 | {'enabled': False, 'images': [{'id': 'khYJSqqFVCi4UJwthWthFsAJ3IVXR-IlF9v06ySgEOg', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=108&crop=smart&auto=webp&s=5849210e7557a6e672d14e28eece41039fc42a87', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=216&crop=smart&auto=webp&s=3010c318b5c299778e43d194c5165ae9eeb4e78d', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=320&crop=smart&auto=webp&s=2fd5b2109cdd4dc3a042dbd166bab10be11bb596', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=640&crop=smart&auto=webp&s=2da7eea11aff835f5fbde29e2303d648c9653e5d', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=960&crop=smart&auto=webp&s=45deaaabb6b9cf99ed6b0a167d26184db67d8106', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?width=1080&crop=smart&auto=webp&s=a01c4cf30260c4626c1454a023a1547dc72891d0', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/osu1VXtYG9AOyq69Tk9NuLgcuCmYcdXz2UW7cfiuzZo.jpg?auto=webp&s=76dea2d039dd23c81e30a6bd03bb4ac0f9a0a373', 'width': 1200}, 'variants': {}}]} | ||
RP/ERP, I have questions... | 18 | So, I've been playing around with characters in Oobabooga and I'm trying to tweak the setup so I get the most bang for the buck with my 12GB card and 32GB of RAM. But I'm left with some unresolved issues about the best possible experience for immersion and following the character setup, which I hope someone more experienced than me can help with:
Models size:
Bigger is always better, I assume. But what if the choice is between the same model, let's say Airoboros, in different quant sizes?
I can use the following versions. the 13B 4bit 128g GPTQ, 13B 5\_1 GGML or a 33B 3bit GGML.
The speed hit is significant at every step, and I feel like the larger model might be better, but it could be placebo. The responses are so slow that it is hard to get the same amount of chat data to compare with the faster ones. The fastest makes rerolls quick, but I'd like the most immersive experience possible.
Anyone has more experience with the sweet spot?
Temperature settings:
Are there big differences between what works best for each model?
And do temperature changes make more difference than which model is used?
I have tried to use the recommended ones for chat, but I'm not sure I see much effect, unless I make more extreme changes to temperature settings, and then they usually end up worse.
Using Silly Tavern instead of just directly in Oobabooga:
If I run my model locally, are there any important features I miss out on by using Ooba directly, instead of a frontend like ST?
I'd prefer to have as few things running on the PC as possible to keep RAM cost down, and it seems like Ooba supports the functionality needed for character cards just fine, but are there other possibilities in ST that really improve the immersiveness of the experience?
Any good tips or tricks would be hugely appreciated!
​
​ | 2023-07-22T15:06:44 | https://www.reddit.com/r/LocalLLaMA/comments/156mix8/rperp_i_have_questions/ | nixudos | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156mix8 | false | null | t3_156mix8 | /r/LocalLLaMA/comments/156mix8/rperp_i_have_questions/ | false | false | self | 18 | null |
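On the temperature question above: temperature simply rescales the logits before the softmax, which is why small tweaks are hard to notice while extreme changes flatten or sharpen the distribution dramatically. A tiny self-contained sketch with toy logits (no model involved):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities, with temperature scaling."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                           # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                      # toy scores for three candidate tokens

for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(t, [round(p, 3) for p in probs])
# Low temperature concentrates probability on the top token (more deterministic);
# high temperature flattens the distribution toward uniform (more varied replies).
```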
An idea regarding efficient training/fine-tuning | 11 | I've got it from listening to a lecture on bioinformatics, heh.
Some points in advance:
1. It is clear from quantisation results and ablation studies that a LOT of the model is not doing anything very important.
2. AWQ, in particular, looks for weights that, well, do particularly "heavy lifting" and spares them - preserving only 1% of those weights allows a much smaller perplexity drop after quantisation.
In a way, the analogy here is less of a brain and more of DNA - most of it is "junk" DNA that is either useless and mostly serves as "padding", or only marginally useful, but some parts are absolutely critical and must be preserved or everything will collapse.
Can this approach be expanded towards finetuning?
There are techniques (kind of an intermediate between LoRA and full fine-tuning) where part of the model remains frozen ("from top to bottom", AFAIK), so new features can be added without catastrophic forgetting, and it is more memory efficient too - but barring layer ranking this is very nonspecific and less powerful than full model training.
Why not freeze part of the model WEIGHTS before fine-tuning, selected in an activation-aware fashion?
This way you should be getting better results as far as training is concerned, while (mostly?) avoiding catastrophic forgetting.
Is this something that can be implemented, or am I inventing a bicycle here? (I'm not exactly a stranger to that, heh). | 2023-07-22T14:46:58 | https://www.reddit.com/r/LocalLLaMA/comments/156m102/an_idea_regarding_efficient_trainningfinetuning/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156m102 | false | null | t3_156m102 | /r/LocalLLaMA/comments/156m102/an_idea_regarding_efficient_trainningfinetuning/ | false | false | self | 11 | null |
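The selection step the poster describes is easy to sketch. This is not any published method, just a toy illustration of ranking weights by an AWQ-style activation-aware score (|weight| times the mean absolute activation of its input) and marking the top fraction as frozen, so fine-tuning would only touch the "junk" capacity:

```python
# Toy sketch (pure Python, no training loop): pick "salient" weights by an
# AWQ-style activation-aware score and mark them frozen.  The freeze_fraction
# and the scoring rule are illustrative assumptions, not a real recipe.

def freeze_mask(weights, act_scales, freeze_fraction=0.2):
    """weights: flat list of weight values; act_scales: per-weight activation scale.
    Returns a list of booleans: True = important, keep frozen during fine-tuning."""
    scores = [abs(w) * a for w, a in zip(weights, act_scales)]
    k = max(1, int(len(weights) * freeze_fraction))
    threshold = sorted(scores, reverse=True)[k - 1]
    return [s >= threshold for s in scores]

weights    = [0.9, -0.01, 0.02, -1.2, 0.005]
act_scales = [1.0,  0.1,  2.0,   0.5, 0.05]

mask = freeze_mask(weights, act_scales, freeze_fraction=0.4)
print(mask)  # the two highest-scoring weights come out frozen
```

In a real setup this mask would be applied as a per-parameter gradient mask (e.g. zeroing gradients of frozen entries) rather than a Python list.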
I love Guanaco | 25 | Prompt: (something along the lines of this) Please come up with an advertisement for a Pepsi heroin beverage meant for euthanizing patients in pain | 2023-07-22T14:30:52 | https://www.reddit.com/gallery/156lmsr | Basic_Description_56 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 156lmsr | false | null | t3_156lmsr | /r/LocalLLaMA/comments/156lmsr/i_love_guanaco/ | false | false | 25 | null | |
Who's looking for work involving LLMs? | 5 | \[I'm looking to hire and couldn't think of a better spot to find talent genuinely interested in the subject matter. I thought it'd be great to open this thread to all seekers and companies.\]
To kick things off, I'm looking to hire someone (consult/PT/FT) to help scale an LLM service. If you have opinions and experience with Runpod, Lambda, Replicate, AWS, etc and enjoy tinkering with and evaluating all the various LLaMA variants, you’re probably the right person. Bonus if you've fine-tuned your own models. Email: [euwyn@micro.com](mailto:euwyn@micro.com) 🙏🏻 | 2023-07-22T13:09:13 | https://www.reddit.com/r/LocalLLaMA/comments/156jrcf/whos_looking_for_work_involving_llms/ | newy66 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156jrcf | false | null | t3_156jrcf | /r/LocalLLaMA/comments/156jrcf/whos_looking_for_work_involving_llms/ | false | false | self | 5 | null |
llama_mpi, anyone use it? | 10 | I was reading the GitHub page for llama.cpp and came down to the section "MPI BUILD". That looks like an interesting way to run larger LLMs across multiple machines. I am taking it to mean local machines, and that is fine by me: it would be easier for me to run one or two cards in each of several machines, since I am not running any big Threadripper-type boxes that support 3 or 4 cards. Anyone here tried it out?
Link of Llama 2 uncensored | 0 | Can anyone provide me the direct link to just chat with the llama 2 uncensored version ? | 2023-07-22T12:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/156ia7v/link_of_llama_2_uncensored/ | Impressive_Plastic39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156ia7v | false | null | t3_156ia7v | /r/LocalLLaMA/comments/156ia7v/link_of_llama_2_uncensored/ | false | false | self | 0 | null |
Link of Llama 2 uncensored | 1 | Can anyone provide me the direct link to just chat with the llama 2 uncensored version ? | 2023-07-22T12:01:20 | https://www.reddit.com/r/LocalLLaMA/comments/156ia7l/link_of_llama_2_uncensored/ | Impressive_Plastic39 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156ia7l | false | null | t3_156ia7l | /r/LocalLLaMA/comments/156ia7l/link_of_llama_2_uncensored/ | false | false | self | 1 | null |
I made Llama2 7B into a really useful coder | 355 | Hey guys,
First time sharing any personally fine-tuned model so bless me.
Introducing codeCherryPop - a qlora fine-tuned 7B llama2 with 122k coding instructions and it's extremely coherent in conversations as well as coding.
Do try it out here - https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged
Demo with inference in Gradio UI - https://youtu.be/0Vgt54pHLIY
I would like to request u/The-Bloke to see if it is worthy of his attention and bless this model with the 4bit quantization touch.
The performance of this model for 7B parameters is amazing, and I would like you guys to explore it and share any issues with me.
Edit: It works best in chat with the settings it has been fine-tuned with. I fine-tuned it on long batch size, low step and medium learning rate. It is fine-tuned with 2048 token batch size and that is how it works best everywhere even with fp16. Check the notebook settings for fp16 inference to copy prompt style as well as other settings for getting best performance. | 2023-07-22T11:39:06 | https://www.reddit.com/r/LocalLLaMA/comments/156htzy/i_made_llama2_7b_into_a_really_useful_coder/ | ahm_rimer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156htzy | false | null | t3_156htzy | /r/LocalLLaMA/comments/156htzy/i_made_llama2_7b_into_a_really_useful_coder/ | false | false | self | 355 | {'enabled': False, 'images': [{'id': '_1qNswe8daQmDrTQeV7_KsBfdNrceifW9agUew6uHwo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=108&crop=smart&auto=webp&s=78dd14e34ed3b35bb8d421da0fb6ccf967f7f473', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=216&crop=smart&auto=webp&s=1a7ffba60aea1c0d0f1eed0a910f5a515f43bfed', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=320&crop=smart&auto=webp&s=b081817b3027209368f77426220ab5cb6fceeb83', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=640&crop=smart&auto=webp&s=7efd8c396517741f76863257feb14a5c2b6bd826', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=960&crop=smart&auto=webp&s=69e539e4f4cf2c9ff9c6c3c1e729ed684b7a819a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?width=1080&crop=smart&auto=webp&s=30a48694980fec6bac5c3bb20a156de789b6f7f5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/CqA7PvnpL1TzXR5Mku6on0iI_OP9D68I7sTqRRjXcmo.jpg?auto=webp&s=fa2306699f2b542b29b80331addda3e1e95738ea', 'width': 1200}, 'variants': {}}]} |
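Since this is a llama2-7b-chat fine-tune, reproducing the Llama-2 chat prompt template is likely part of getting the advertised quality. A minimal single-turn prompt builder (the system message below is just a placeholder, not the one the model was trained with):

```python
def build_llama2_chat_prompt(system_msg, user_msg):
    """Assemble a single-turn prompt in the Llama-2 chat format
    ([INST] / <</SYS>> tags as used by the llama-2-chat models)."""
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )

prompt = build_llama2_chat_prompt(
    "You are a helpful coding assistant.",   # placeholder system message
    "Write a Python function that reverses a string.",
)
print(prompt)
```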
FYP Project LLM powered chatbot | 1 | Hey guys, I am a computer engineering student approaching my final year, so I have to decide on an FYP project. With the recent trend in Transformer-based AI research, I wanted to build an LLM-powered chatbot using the student tier of AWS and whatever other cloud-service discounts I can get my hands on. My plan is to take an open-source, lightweight LLM, fine-tune it for a specific field, run it on cloud servers, and answer questions from that field only. (I plan on using cloud providers because I don't have much money available to purchase powerful multi-GPU computers; I am using a Lenovo Legion 5 Pro laptop.) What I wanted to ask you guys is: will this FYP have any worth in the industry if I write it on my resume? I have never tried fine-tuning a local LLM myself before, but I would like a project that involves more coding and maths work from my end instead of relying entirely on the open-source code that is available, so that I have something to write about in my presentations and reports. Any feedback on this project and its potential worth would be greatly appreciated. Also, if you don't think this has much worth as an FYP, I would appreciate suggestions for another FYP based on open-source LLMs that could carry more weight on a resume when applying for jobs. Thanks in advance! | 2023-07-22T11:18:12 | https://www.reddit.com/r/LocalLLaMA/comments/156hfmn/fyp_project_llm_powered_chatbot/ | Demonking6444 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156hfmn | false | null | t3_156hfmn | /r/LocalLLaMA/comments/156hfmn/fyp_project_llm_powered_chatbot/ | false | false | self | 1 | null |
[D] Constraining Llama models with context free grammars | 61 | An interesting piece of development happening in the llama.cpp repo, sampling tokens according to the supplied context free grammar.
https://github.com/ggerganov/llama.cpp/pull/1773
One of the things we notice with llama models is that while they are decent at everyday conversations, the performance drops for structured outputs like coding tasks.
But programming languages are designed to follow strict rules governed by a context-free grammar [e.g. in C, a 'for' or a 'while' must be followed by a '(' ], which is how compilers are able to parse and compile code.
While large models like GPT4 have natively become good at this (although they could still benefit from cfg constraints), smaller models like Llama should benefit a lot more from cfg at such structured tasks. | 2023-07-22T10:46:35 | https://www.reddit.com/r/LocalLLaMA/comments/156gu8c/d_constraining_llama_models_with_context_free/ | ironborn123 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156gu8c | false | null | t3_156gu8c | /r/LocalLLaMA/comments/156gu8c/d_constraining_llama_models_with_context_free/ | false | false | self | 61 | {'enabled': False, 'images': [{'id': '1HkNkq0xkmpSw6HXPZiwQIPsdRL98lc9T9sORrdcZIg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=108&crop=smart&auto=webp&s=ece9a7391c1e92d6a7ac22ba6ca33fd44d2ef47c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=216&crop=smart&auto=webp&s=ae57c6170800fd172be07d900555c2a8c4f243d0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=320&crop=smart&auto=webp&s=0e75e5f24266302d851cc4b0b5d69320591826c4', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=640&crop=smart&auto=webp&s=d9acff730d6f4bfbfe39fd2ee4ce87faaf210438', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=960&crop=smart&auto=webp&s=0c5492af2b58198be07f142fd1ded5d8e9456cb1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?width=1080&crop=smart&auto=webp&s=0e9ccce212fda635ba3a613cdb79be18c873691c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/7EsB-pMzEzHLqLmGLx6NGEGNTvSu4dkeGp2OGaUNOG0.jpg?auto=webp&s=2715953b75df447b8cc6172b3cefe2b0676b4733', 'width': 1200}, 'variants': {}}]} |
Examples of real business applications | 40 | Obviously the popularity of this tech is very new, but I am sure they started applying it to real businesses. Such as a newspaper who let go of some junior writers and replaced them with AI. Do you know any youtube channels or other sources where they show examples like that? Like internal company onboarding chatbots or using it in customer service or such? | 2023-07-22T10:34:08 | https://www.reddit.com/r/LocalLLaMA/comments/156gm75/examples_of_real_business_applications/ | VentrueLibrary | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156gm75 | false | null | t3_156gm75 | /r/LocalLLaMA/comments/156gm75/examples_of_real_business_applications/ | false | false | self | 40 | null |
Best Role Play Models | 28 | Things move so fast. I'm currently using Wizard LM for chat role play, both SFW and NSFW.
I've been experimenting with a few others, namely Guanaco and Vicuna. Both seem decent, but there are so many others out there.
Can anyone recommend any others which you have enjoyed and give a good experience for role play? | 2023-07-22T09:44:02 | https://www.reddit.com/r/LocalLLaMA/comments/156fpt2/best_role_play_models/ | Useful-Command-8793 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156fpt2 | false | null | t3_156fpt2 | /r/LocalLLaMA/comments/156fpt2/best_role_play_models/ | false | false | self | 28 | null |
A LIMA-like Llama2 finetune just for RP | 32 | 2023-07-22T09:27:50 | https://huggingface.co/lemonilia/limarp-llama2 | brown2green | huggingface.co | 1970-01-01T00:00:00 | 0 | {} | 156ffne | false | null | t3_156ffne | /r/LocalLLaMA/comments/156ffne/a_limalike_llama2_finetune_just_for_rp/ | false | false | 32 | {'enabled': False, 'images': [{'id': 'Kl8L9OmzdMV6C7Vxceux9U9R0nNcWv9f9kF1d_D7mWE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=108&crop=smart&auto=webp&s=95c877192822379482fc54eee532d045ea818ba5', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=216&crop=smart&auto=webp&s=2080f7782c0e373e17d1defc498d3adc854f41b0', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=320&crop=smart&auto=webp&s=5ef98b435df628fbfa8c2b8c886bf555938507af', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=640&crop=smart&auto=webp&s=18dcd7ca9e50f8f0ecf73c306439513b5e0a1ee6', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=960&crop=smart&auto=webp&s=a79a77ab3f6d536ece03eb8a6c36360238224b27', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?width=1080&crop=smart&auto=webp&s=46bc2edca3f6e7839e169e3e1d793e2d7618e388', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/psSnTcKvAdSBNNaQK-tMxmZX2QWjOwi_aX5I8tyV3Es.jpg?auto=webp&s=52ec0b147de3d9d1b888d39e7bda0557b5e15b0c', 'width': 1200}, 'variants': {}}]} | ||
Llama or alternatives in tflite format | 4 | Sorry if this has already been answered (I’ve had a bit of a look though) but I was wondering if it’s possible to convert llama into a .tflite file to be able to be run on a coral usb accelerator for example?
I’ve been searching around (asking gpt4) and all I’ve come across is that it will either be extremely complex or not possible.
Thanks in advance peeps :) | 2023-07-22T09:13:05 | Effective_Football35 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 156f6eo | false | null | t3_156f6eo | /r/LocalLLaMA/comments/156f6eo/llama_or_alternatives_in_tflite_format/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'odPLssqOexUGQtQ9azyOvgPi4ExWjhpVSznoGH3e_x0', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=108&crop=smart&auto=webp&s=af8bcfc545b164d5bb66e7d74d057262072b0a2b', 'width': 108}, {'height': 121, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=216&crop=smart&auto=webp&s=33c1f1ef00496328454389e79100f7683a5e18ff', 'width': 216}, {'height': 180, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=320&crop=smart&auto=webp&s=766ea5cddaeb4eb7e0d97ebeb0fbb8a3608ede43', 'width': 320}, {'height': 360, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=640&crop=smart&auto=webp&s=810ed03804f2f4c56f087d2f9f8a29284d131c18', 'width': 640}, {'height': 540, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=960&crop=smart&auto=webp&s=80ec561abc73ac23135781c3c70035fe6ccf776c', 'width': 960}, {'height': 607, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?width=1080&crop=smart&auto=webp&s=b5901aad52994800a043e8d6d2f31f8902ebab4c', 'width': 1080}], 'source': {'height': 675, 'url': 'https://preview.redd.it/sd91qnprghdb1.jpg?auto=webp&s=1c80ab1b441868e858c7459f1452f5f113d405c5', 'width': 1200}, 'variants': {}}]} | ||
Wizard-Vicuna-13B-HF-Uncensored bad/outdated files? | 1 | [removed] | 2023-07-22T08:49:15 | https://www.reddit.com/r/LocalLLaMA/comments/156er3q/wizardvicuna13bhfuncensored_badoutdated_files/ | redzorino | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156er3q | false | null | t3_156er3q | /r/LocalLLaMA/comments/156er3q/wizardvicuna13bhfuncensored_badoutdated_files/ | false | false | self | 1 | {'enabled': False, 'images': [{'id': 'lRl3SipxuAd2pfX-TGfnDhxzpabUu4mLrPvD6h_TfTI', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=108&crop=smart&auto=webp&s=b76f8ccbfd16146ac1855fe96fc4710ff29e946e', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=216&crop=smart&auto=webp&s=f395ef8d7cec16b0287e2280fe5ece598d51f4a2', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=320&crop=smart&auto=webp&s=1f0a2f8a56f978d570a73b61ea9f995ae799084b', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=640&crop=smart&auto=webp&s=288d4b4a040e44be55cfd0bcf05f9f1d0d421d2c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=960&crop=smart&auto=webp&s=d4c42f7347e07b07ee6b56f651cac0076eb0cf2a', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?width=1080&crop=smart&auto=webp&s=1de56b29995de89448063a59cabd8939902e96de', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Zis6jQ1oz0Ceq_p_SAr0ag83aPRdD-nFmzXf8UatMGk.jpg?auto=webp&s=e9052386db71b626aa8458c4b3052d3b8c8fd9bf', 'width': 1200}, 'variants': {}}]} |
Getting LLAMA 2 or any other local LLM to answer using specific format | 13 | Hello,
Whenever I try to constrain local LLMs to just answer the question, I get flowery text like ".Sure, I'd be happy to help! Based on the given...", and then the LLM explains why it did what it did. Is there a way to get just a single answer? Like if I ask what the capital of France is, it should say "Paris" and nothing else. If I ask what the capital of Paris is, it should say "N/A". I've tried various prompts, but can't seem to get them to follow instructions, which I can do easily with ChatGPT. Even if I ask for JSON-formatted strings, I get stuff back like "Hello! I'm here to assist you with your questions. The capital of France is Paris. Here's the answer in JSON format:
{
"capital": "Paris"
}"
I am pretty sure this is something trivial I am doing wrong here, but not sure what.
​ | 2023-07-22T07:57:27 | https://www.reddit.com/r/LocalLLaMA/comments/156dv5a/getting_llama_2_or_any_other_local_llm_to_answer/ | SwordfishGreat4532 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156dv5a | false | null | t3_156dv5a | /r/LocalLLaMA/comments/156dv5a/getting_llama_2_or_any_other_local_llm_to_answer/ | false | false | self | 13 | null |
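Until prompting alone works, a pragmatic workaround is to post-process the reply: strip the chatty preamble and pull out just the JSON (or the bare answer). A small stdlib-only sketch — the sample reply string is made up to mimic the output quoted above:

```python
import json
import re

def extract_json(reply):
    """Pull the first {...} block out of a chatty LLM reply, or None if absent/invalid."""
    match = re.search(r"\{.*?\}", reply, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

reply = (
    "Hello! I'm here to assist you with your questions. "
    "The capital of France is Paris. Here's the answer in JSON format:\n"
    '{\n  "capital": "Paris"\n}'
)
print(extract_json(reply))  # {'capital': 'Paris'}
```

Note the non-greedy `\{.*?\}` only grabs the first flat object; nested JSON would need a proper parser or a bracket-matching loop.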
Training dataset should include the system commands token? | 1 | [removed] | 2023-07-22T07:40:36 | https://www.reddit.com/r/LocalLLaMA/comments/156dki4/training_dataset_should_include_the_system/ | Either-Job-341 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 156dki4 | false | null | t3_156dki4 | /r/LocalLLaMA/comments/156dki4/training_dataset_should_include_the_system/ | false | false | self | 1 | null |