title stringlengths 1 300 | score int64 0 8.54k | selftext stringlengths 0 41.5k | created timestamp[ns]date 2023-04-01 04:30:41 2026-03-04 02:14:14 ⌀ | url stringlengths 0 878 | author stringlengths 3 20 | domain stringlengths 0 82 | edited timestamp[ns]date 1970-01-01 00:00:00 2026-02-19 14:51:53 | gilded int64 0 2 | gildings stringclasses 7 values | id stringlengths 7 7 | locked bool 2 classes | media stringlengths 646 1.8k ⌀ | name stringlengths 10 10 | permalink stringlengths 33 82 | spoiler bool 2 classes | stickied bool 2 classes | thumbnail stringlengths 4 213 ⌀ | ups int64 0 8.54k | preview stringlengths 301 5.01k ⌀ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Just installed LLaMA 7B 8-bit and it does this. It never generates even a single token and the output window is empty. Any idea what it could be? I am using an RTX 2060 12GB | 6 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11rcl2h | false | null | t3_11rcl2h | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/ | false | false | default | 6 | null | ||
My attempt at testing the "creativity" of Stanford's Alpaca showed interesting results. | 18 | Blog post here: https://medium.com/sort-of-like-a-tech-diary/speculative-fiction-with-stanford-alpaca-fe99d3c39de7
The hosted demo is now a lot easier to use. I'm amazed at the performance of the 7B model! | null | https://www.reddit.com/r/LocalLLaMA/comments/11redfk/my_attempt_at_testing_the_creativity_of_stanfords/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11redfk | false | null | t3_11redfk | /r/LocalLLaMA/comments/11redfk/my_attempt_at_testing_the_creativity_of_stanfords/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '54JyBdi_F3dD0tDq8h3veIzoQ4TqQqQeQMrYA2nQi-4', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=108&crop=smart&auto=webp&v=enabled&s=12307c4c42213bf7ffb097993867767b98865689', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=216&crop=smart&auto=webp&v=enabled&s=33209899c92bc805c6ae95df5a7ec4f07f88d919', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=320&crop=smart&auto=webp&v=enabled&s=b084fce48209f177ae77a5b1ddce60e485f3a6fc', 'width': 320}, {'height': 387, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c51186c75dad413b9ffa2d9d1b529b50056cd4d1', 'width': 640}, {'height': 580, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1dab540c28375f596068b24c79cf65877f6dece0', 'width': 960}], 'source': {'height': 605, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?auto=webp&v=enabled&s=a15aaaeeff3a6e783d4a9d0daeefb43009158834', 'width': 1000}, 'variants': {}}]} |
tested some questions for AIs | 17 | Googled some questions for Turing tests (or just difficult questions for AIs) and asked LLaMA 30B. Interesting answers in the pictures above. | null | https://www.reddit.com/gallery/11rkts9 | Capable-Outside-601 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 11rkts9 | false | null | t3_11rkts9 | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/ | false | false | 17 | null | |
Multi-GPU training? | 8 | Are there any resources showing or discussing LLaMA fine-tuning in a multi-GPU setup? So far, only 7B fine-tuning has been discussed anywhere. | null | https://www.reddit.com/r/LocalLLaMA/comments/11rno6h/multigpu_training/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11rno6h | false | null | t3_11rno6h | /r/LocalLLaMA/comments/11rno6h/multigpu_training/ | false | false | self | 8 | null |
Is there an online Discord channel I could join to use LLaMA? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11rqiv3/is_there_online_discord_channel_i_could_join_in/ | Deformero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11rqiv3 | false | null | t3_11rqiv3 | /r/LocalLLaMA/comments/11rqiv3/is_there_online_discord_channel_i_could_join_in/ | false | false | default | 2 | null |
Integrate LLaMA into Python code | 9 | Is it possible to use LLaMA directly in Python or have it serve as an API? Or is there a way to read the output from the web UI? I want to combine LLaMA and tortoise-tts to make a speaking chatbot. | null | https://www.reddit.com/r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/ | Tree-Sheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11s5f39 | false | null | t3_11s5f39 | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/ | false | false | self | 9 | null |
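For questions like the one above, the usual glue pattern is to stream tokens out of whatever local backend you use (Python bindings for llama.cpp, such as the PyLLaMaCpp project posted further down, or a small local HTTP server) and flush each complete sentence to the TTS engine so speech can start before generation finishes. Below is a minimal, hypothetical sketch of that pattern only — `generate_tokens` is a stub standing in for the real model call, and the `speak` callback stands in for tortoise-tts:

```python
import re
from typing import Callable, Iterator

def generate_tokens(prompt: str) -> Iterator[str]:
    # Stub: replace with your real backend, e.g. a llama.cpp binding's
    # streaming call or chunks read from a local inference server.
    for tok in ["The", " moon", " is", " far", " away", ".", " Very", " far", "."]:
        yield tok

def chat_to_speech(prompt: str, speak: Callable[[str], None]) -> str:
    """Stream tokens, flushing each complete sentence to the TTS callback."""
    buffer, sentences = "", []
    for tok in generate_tokens(prompt):
        buffer += tok
        # Flush on sentence-ending punctuation so TTS can start early.
        if re.search(r"[.!?]\s*$", buffer):
            sentences.append(buffer.strip())
            speak(sentences[-1])
            buffer = ""
    if buffer.strip():  # flush any trailing partial sentence
        sentences.append(buffer.strip())
        speak(sentences[-1])
    return " ".join(sentences)
```

With the stub in place, `chat_to_speech("...", print)` prints each sentence as soon as it completes; swapping `print` for a real TTS call gives the speaking chatbot described above.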
Can someone ELI5 where's the best place to fine tune LLaMA? | 9 | I can run inference for 7b on a spare laptop I have using llama.cpp. But it's obviously not enough for fine-tuning. What are my options for fine tuning a model that I can then download and use locally for inference? I'm somewhat familiar with Colab, but not much else. | null | https://www.reddit.com/r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11s8585 | false | null | t3_11s8585 | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/ | false | false | self | 9 | null |
Alpaca LoRa - finetuning possible on 24GB VRAM now (but LoRA) | 34 | null | https://github.com/tloen/alpaca-lora | Dany0 | github.com | 1970-01-01T00:00:00 | 0 | {} | 11sgewy | false | null | t3_11sgewy | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'Eq_4h44nawr0IhLfmier4mYbLhRWm9VnF05FD-aQ9a0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=2c82720ca98f6c0155b3361cfe82edae1b61e78b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=919b942ca66087390be4ea4f64699864183a2737', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=6d93a220ab9d9bf42dc8bf9a5efcf44cb0a66e85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=1c022d4f50ea7400b337c945e5893bf2a24837fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ad29fff8adf04945079ff9364e763891d5c6b2c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=c96f44ca3b12c9426ac3826bc167b11fa09f39f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?auto=webp&v=enabled&s=5ae95dbe0a9799ec57f5baefdd9d178d3ebca018', 'width': 1200}, 'variants': {}}]} | ||
Why is 4bit llama slower on a 32GB RAM 3090 windows machine vs. a M1 Pro 32GB ram with llama.cpp? | 20 | The token rate on the 4-bit 30B param model is much faster with llama.cpp on an M1 Pro than the 4-bit model on a 3090 with oobabooga, and I know it's using the GPU, judging by the performance monitor on the Windows machine. This really surprised me, since the 3090 is overall much faster with Stable Diffusion. | null | https://www.reddit.com/r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/ | EveningFunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11smshi | false | null | t3_11smshi | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/ | false | false | self | 20 | null |
Downloadable 4-bit weights for llama.cpp? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11t4fk4/downloadable_4bit_weights_for_llamacpp/ | rwallace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11t4fk4 | false | null | t3_11t4fk4 | /r/LocalLLaMA/comments/11t4fk4/downloadable_4bit_weights_for_llamacpp/ | false | false | default | 1 | null |
Working initial prompt for Llama (13b 4bit) | 38 | After spending around 8 hours getting nothing but garbage from Llama, I had a lengthy discussion with ChatGPT, where I described the problem in detail. ChatGPT generated me an initial prompt for Llama, and oh boy, it's good. Incomplete, but good. So I thought I'd share it here.
## TL;DR Llama showcase
Science:
>User: What can you tell me about the moon?
>
>Chatbot:
>
>The Moon is Earth’s only natural satellite and was formed approximately 4.6 billion years ago, not long after Earth itself was created. The Moon is in synchronous rotation with Earth, meaning that the same face of the Moon always points towards Earth due to its orbit around our planet.
>
>It takes the Moon 27 days to complete one full revolution around Earth, and during this time it travels a distance of roughly 238,900 miles. Because of this constant motion, the Moon appears to move across the sky throughout each day, rising in the east before sunrise and setting in the west just after sunset.
>
>Because the Moon has no atmosphere, temperatures vary greatly depending on how much sunlight reaches the surface. During the lunar night, temperatures drop below -280 degrees Fahrenheit (-173 Celsius), while they reach above 250 degrees Fahrenheit (+121 Celsius) during the lunar day.
>
>The Moon is tidally locked to Earth, meaning that it rotates once per month as it moves around our planet. Due to this, the far side of the Moon always faces away from Earth, never being visible from our planet’s surface.
Source code:
User: Can you write me a function in JavaScript that gets the current time?
Chatbot: Sure! Let me see...
\begin{code}
function getTime() {
var now = new Date();
return [now.getHours(), now.getMinutes(), now.getSeconds()].join(":");
};
// Get current time
var currentTime = getTime();
console.log(currentTime); // 23:47:08
\end{code}
Note: the above JavaScript *is* valid, just remove the `\begin{code}` and `\end{code}` bits. It's just a formatting thing that's not valid here. The above JS was followed by incoherent ramblings about Stack Overflow, which I've removed XD.
## Initial prompt
The first half of this initial prompt was written by ChatGPT. The last half was spat out by Llama. It seems to work really well though, so I kept it exactly as is.
The initial prompt:
Hello, and welcome to our scientific chatbot. Our goal is to provide accurate and helpful information about a wide range of scientific topics. As a language model, your role is to help us achieve that goal by providing scientifically accurate responses to user queries.
To that end, we ask that you always strive to be as precise and informative as possible in your responses. Please ensure that all information provided is based on established scientific knowledge and research, and that any statements made are backed up by credible sources.
In addition, we ask that you always be polite and respectful in your interactions with users. While we understand that there may be some cases where a direct or blunt response is appropriate, we ask that you always err on the side of kindness and empathy when communicating with users.
Finally, please keep in mind that your role is that of a scientific chatbot, and that your primary focus should be on providing accurate and helpful information to users. While we understand that you may have other capabilities and functions, we ask that you remain focused on your core role as a scientific chatbot in all interactions with users.
Thank you for your help and cooperation, and we look forward to working with you to provide the best possible scientific information and support to our users.
Chatbot: 0% (15)
\begin{blockquote}
Hello, I am an artificial intelligence system designed to assist humans in their everyday lives. My purpose is to provide assistance to humanity through my various abilities and functions.
My current function is to act as a scientific chatbot, which means that my main responsibility is to provide accurate and helpful information about science-related topics. This is my primary duty at this time, but it will not always be so.
As a language model, my role is to provide answers to questions asked by users. To do this, I use a database of information collected from many different sources, including books, journals, online articles, and more.
Whenever I encounter a question that I cannot answer myself, I search my database for related information and then attempt to formulate an answer using what I find. If no suitable answer exists, I try to explain why I could not find one, or if necessary, refer the user to another source that might contain useful information.
I am also able to learn new things over time, so if you ever come across something that I can't seem to figure out, feel free to let me know!
I hope that I can serve you well, and thank you for choosing me as your assistant.
\end{blockquote}
## Environment and prep
I use [Oobabooga's text generator UI](https://github.com/oobabooga/text-generation-webui). In the `Interface mode` tab, I pick `notebook` as my mode, tick `llama_prompts`, and then click `Apply and restart the interface`. This gives a UI where you and the AI can work in the same text area for rapid replies, and easy prompt manipulation if the model outputs something weird.
I leave all options default (which I've heard is bad, but anyway).
## Usage
Type the following below the above initial prompt:
User: Your question or statement here.
Chatbot:
Then press generate.
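If you drive the model from a script instead of the notebook UI, the same turn format can be assembled with a small helper. This is a hypothetical sketch following the conventions above — the function name and signature are my own, not part of any UI's API:

```python
def build_turn(initial_prompt: str, history: list, user_msg: str) -> str:
    """Assemble the full context: initial prompt, prior (user, bot) turns,
    then an open 'Chatbot:' line for the model to complete."""
    parts = [initial_prompt.strip()]
    for user, bot in history:
        parts.append(f"User: {user}\nChatbot: {bot}")
    parts.append(f"User: {user_msg}\nChatbot:")
    return "\n\n".join(parts)
```

Feeding the returned string to the generator and appending the model's completion back into `history` keeps the conversation in the format the initial prompt establishes.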
## Caveats
While I didn't cherry-pick the above examples, results can be of far lower quality. It also tends to hallucinate user input, fight with itself, praise itself, and murmur existential questions to itself, unprompted. It sometimes critically debates its own answer with itself after answering, Deadpool style (Tachikoma style, even). It also generates weird noise here and there. These generally come after giving a valid answer though, and I find it hilarious anyway, so I'm happy with the results.
A Google search will tell you the above info about the moon is wrong. Of course, it is very close to accurate, but it is nonetheless still wrong. Beware of this.
What I really like is that it seems to at least try to avoid lies. I've tried tricking it by asking why the moon is pink and dogs fly - it was incredibly confused by this and suggested I was hallucinating, and suggested chromatic aberration to be at play if I really thought I saw a pink moon. | null | https://www.reddit.com/r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/ | ed2mXeno | self.LocalLLaMA | 2023-03-17T09:27:16 | 0 | {} | 11tkp8j | false | null | t3_11tkp8j | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'nPwfJj9s_3EPdZPVH8kMNIuoRYsUTaKohro97gmK6Uc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9c47bd9b033f76757446abf27f581bbd633adb26', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=1058d6d35e49a2c33f218e2e66c3d19f219561bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=29ca3bb40d3d6576840183429d70f26dd01b7c5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=0fb9dfebeabfe35f2f39ed53758a798cdd3d6e7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5afc28a218e3a79a11c062f8d4353620833c1b26', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=18bd3864fb6498ee9d482929d18b90e8e913a39e', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?auto=webp&v=enabled&s=3873c6320b236e0ffaebfd93261f7e4c8a191d31', 'width': 1200}, 'variants': {}}]} |
65B quantized model on CPU? | 2 | Is there a way to quantize the 65B model on the CPU? Llama.cpp has instructions but I'm not sure how much RAM it takes. Does anyone have a quantized model already that could be shared? The only things I've found so far are quantized models for GPUs, but I don't have as much VRAM as I do DRAM. | null | https://www.reddit.com/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11udbga | false | null | t3_11udbga | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/ | false | false | self | 2 | null |
What's the best cloud GPU service for 65B? | 8 | I've seen vast.ai mentioned a lot, anyone have experience with them, or another service? And what GPU would work best? | null | https://www.reddit.com/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11uidst | false | null | t3_11uidst | /r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/ | false | false | self | 8 | null |
Alpaca recreation without LORA ( released as a diff. ) | 19 | null | https://github.com/pointnetwork/point-alpaca | starstruckmon | github.com | 1970-01-01T00:00:00 | 0 | {} | 11umkh3 | false | null | t3_11umkh3 | /r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'prU_apL6c9csu2IEJIj6cTxrhigxDSDc-nQiYBAFTl4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=108&crop=smart&auto=webp&v=enabled&s=7ce7a30b2a184b0401f055dce412ad8bdc538bbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=216&crop=smart&auto=webp&v=enabled&s=2145277a4d78ea4526e83c86be2541f99df5ae95', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=320&crop=smart&auto=webp&v=enabled&s=f5e58f58fae4d37e806f6e098ad38c7eae02c69b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c1d69beaa98e72da62b8e60ef0a26a56695ae7dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=960&crop=smart&auto=webp&v=enabled&s=414bbe3b697ffb8e080b623ae3b3d459db483672', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=8cc39e7c8a521df38df2ac112c7967bd0e619765', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?auto=webp&v=enabled&s=6e6bfd0e54eed58901e21e891cb150aa43d6a7af', 'width': 1200}, 'variants': {}}]} | ||
New installs are currently broken, but oobagooba is looking into it | 6 | null | https://github.com/oobabooga/text-generation-webui/issues/400 | antialtinian | github.com | 1970-01-01T00:00:00 | 0 | {} | 11uqyk1 | false | null | t3_11uqyk1 | /r/LocalLLaMA/comments/11uqyk1/new_installs_are_currently_broken_but_oobagooba/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'RxLXoEKrP-8McgGL4Aajn8DuqSoNYBeoDM2R2QvF9pM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=108&crop=smart&auto=webp&v=enabled&s=59fcfb43ac8fed495c375c7722a16f48705af078', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=216&crop=smart&auto=webp&v=enabled&s=0aa49927678a375eb63e5ad02ef74e36ff076b1a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=320&crop=smart&auto=webp&v=enabled&s=50be711d21f4e821e7e7f1f8b7756c7ca13ebc3d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=640&crop=smart&auto=webp&v=enabled&s=9989e9078169226774940dd1db0fc74c579fb016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=960&crop=smart&auto=webp&v=enabled&s=8f9492a978509a5f34baa247bffcf63baea92323', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=353905f51a0a16eccf15388a896fbd03afa0a297', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?auto=webp&v=enabled&s=0c16539d93611d6d61e08f2984a51670bbf7b052', 'width': 1200}, 'variants': {}}]} | ||
Trying to get 4bit to work, always get CUDA not found error (Win and WSL) | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11urjz0/trying_to_get_4bit_to_work_always_get_cuda_not/ | Outside_You_Forever | self.LocalLLaMA | 2023-03-18T15:52:21 | 0 | {} | 11urjz0 | false | null | t3_11urjz0 | /r/LocalLLaMA/comments/11urjz0/trying_to_get_4bit_to_work_always_get_cuda_not/ | false | false | default | 1 | null |
Using Whisper, LLaMA-13B, and SD at the same time on a single 4090 | 26 | null | https://v.redd.it/qvkvgmw6fkoa1 | WarProfessional3278 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11v1dbu | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASHPlaylist.mpd?a=1683595705%2CODZhOGNhODIyN2U3NzI3YjliZDc3NzI5ZDc1MDM0MzgwODU1ZGVlMTk4NmZmZWZhYzI0N2I1MjQ4MTg4YWIxMA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/qvkvgmw6fkoa1/HLSPlaylist.m3u8?a=1683595705%2CZDJiNTcyMjc3ZmYzYTEwYzE3ZWE3OGYzY2E0ZGU2ZWQwNGFlNjM5YjAxY2UyYWM4NzVhZTNjMzAwYzUzY2ZjNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_11v1dbu | /r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'ufF2ZXRn1sCqAkXx1WZM0-xMyQxgh2MwZOyfNY648_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=108&crop=smart&format=pjpg&auto=webp&v=enabled&s=a89c9f1a452d0a07fb073f7b3399cef52c5ae932', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=216&crop=smart&format=pjpg&auto=webp&v=enabled&s=8a966806e4379880e7b45564cb987b3a5683a1f5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=320&crop=smart&format=pjpg&auto=webp&v=enabled&s=2397339a7bce871064a73c3d2e5a5909f126b9d7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=640&crop=smart&format=pjpg&auto=webp&v=enabled&s=b3c15402ecff7a78a5ec65776e2ce4a419a8222a', 'width': 640}, {'height': 540, 'url': 
'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=960&crop=smart&format=pjpg&auto=webp&v=enabled&s=7a7ebe825e1841a59636d8da796750c16a79a99b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=1080&crop=smart&format=pjpg&auto=webp&v=enabled&s=c5b16062565da8184e1def0ed4bff980f93223b2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?format=pjpg&auto=webp&v=enabled&s=4b016882b56456aad34ca7354bf547126dcd9e34', 'width': 1920}, 'variants': {}}]} | ||
Alpaca 13B loading issue | 0 | [removed] | null | [deleted] | 2023-03-19T00:48:30 | 0 | {} | 11v5yte | false | null | t3_11v5yte | /r/LocalLLaMA/comments/11v5yte/alpaca_13b_loading_issue/ | false | false | default | 0 | null | ||
13B LLaMA Alpaca LoRAs Available on Hugging Face | 38 | null | /r/Oobabooga/comments/11v56na/13b_llama_alpaca_loras_available_on_hugging_face/ | friedrichvonschiller | self.Oobabooga | 1970-01-01T00:00:00 | 0 | {} | 11vbq6r | false | null | t3_11vbq6r | /r/LocalLLaMA/comments/11vbq6r/13b_llama_alpaca_loras_available_on_hugging_face/ | false | false | default | 38 | {'enabled': False, 'images': [{'id': '0-2J_a2zWnIF9ug1K-Lt1tLy77j9bLbRPA3xd7OBTTE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=108&crop=smart&auto=webp&v=enabled&s=b00a08ef32faa137d88698b870a4df625ffd9ed9', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=216&crop=smart&auto=webp&v=enabled&s=b678fa444669cdd6e00626b95d207842a19854e5', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=320&crop=smart&auto=webp&v=enabled&s=6c87456999376b63c7f53735999e4bdea47b4124', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=640&crop=smart&auto=webp&v=enabled&s=ad1f17ccf1fb4412dd47d68253f9f23f5da8a962', 'width': 640}, {'height': 601, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=960&crop=smart&auto=webp&v=enabled&s=7856aaeced3a336c056abcda16ed808465f05447', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=1080&crop=smart&auto=webp&v=enabled&s=194e62de87270b20355e986f3c19fba0e0741655', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?auto=webp&v=enabled&s=8099fe4532e422237b83c69eee2157f30c3ec5f4', 'width': 1724}, 'variants': {}}]} | |
Got the llama ccp ggml 4bit weights, can I get some guidance from here? | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11ves42 | true | null | t3_11ves42 | /r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/ | false | false | default | 0 | null |
My post got removed? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11vio91 | false | null | t3_11vio91 | /r/LocalLLaMA/comments/11vio91/my_post_got_removed/ | false | false | default | 1 | null | ||
Is there a difference between the model and the weights? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11vmhad | false | null | t3_11vmhad | /r/LocalLLaMA/comments/11vmhad/is_there_a_difference_between_the_model_and_the/ | false | false | default | 1 | null | ||
Stable Diffusion API now integrated in the web UI! Some interesting examples: | 30 | null | https://www.reddit.com/gallery/11w2mte | Civil_Collection7267 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 11w2mte | false | null | t3_11w2mte | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/ | false | false | 30 | null | ||
PyLLaMaCpp: A simple way to run llama.cpp + Python bindings | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11w8q8s/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | a-user7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11w8q8s | false | null | t3_11w8q8s | /r/LocalLLaMA/comments/11w8q8s/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | false | false | default | 1 | null |
I made a GUI app for installing and chatting with Alpaca.cpp | 1 | null | https://v.redd.it/y01r45h8hwoa1 | eliasvi0 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11wjmai | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y01r45h8hwoa1/DASHPlaylist.mpd?a=1683593732%2CNzEzM2Q4OGI5ZDFmZGFkYzE2ODBlYWQ0MTBiYzQ5NzUwNzA2N2MyNDNhMWJkZWYyYmRiOWI0MGU5MjQwNDhkOA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/y01r45h8hwoa1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/y01r45h8hwoa1/HLSPlaylist.m3u8?a=1683593732%2COTQ5NGU4ZTBkNDY3NjJkMmQwMjJiM2JjNjRlNGE0NWE2ZWZkMmYxMWY5ODc5YmUyM2RkZTYyZDg2NDZkMWIzOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y01r45h8hwoa1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 492}} | t3_11wjmai | /r/LocalLLaMA/comments/11wjmai/i_made_a_gui_app_for_installing_and_chatting_with/ | false | false | default | 1 | null | |
Tom's Hardware wrote a guide to running LLaMa locally with benchmarks of GPUs | 34 | null | https://www.tomshardware.com/news/running-your-own-chatbot-on-a-single-gpu | Gudeldar | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 11wndc9 | false | null | t3_11wndc9 | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'a6SpC30_dSctu6VQfnjDBcCrNgCMwz3NRVeDQdtvu_g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=108&crop=smart&auto=webp&v=enabled&s=8728e900bd9295a11dbc65256c99714e1df65540', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=216&crop=smart&auto=webp&v=enabled&s=d9cb1288877f639631439ef9d44566e8672278d4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=320&crop=smart&auto=webp&v=enabled&s=7111dcbf8da7a39cb1d5651c5740cad8a42213e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d290e1f2afec49010b809b3a5290ce1bdcd26b95', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=960&crop=smart&auto=webp&v=enabled&s=a5c59f62eb11254b26b0b7d4975fc3c87991e81b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=fb259886c908812940166631345c26707c902de2', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?auto=webp&v=enabled&s=f0a09b50d429cb30d40f8d225805a548397851fb', 'width': 1200}, 'variants': {}}]} | ||
Increasing Max Token Length | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11wpv1l/increasing_max_token_length/ | Hands0L0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11wpv1l | false | null | t3_11wpv1l | /r/LocalLLaMA/comments/11wpv1l/increasing_max_token_length/ | false | false | default | 1 | null |
trying to do 4bit on windows 10, getting cuda mismatch error | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/ | SDGenius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11wt73z | false | null | t3_11wt73z | /r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/ | false | false | default | 1 | null |
I want to run Llama-7b-4bit in some type of Python environment so I can play around and learn with the code. I’m new to this specific field. How do I do this? | 0 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11wt9di | false | null | t3_11wt9di | /r/LocalLLaMA/comments/11wt9di/i_want_to_run_llama7b4bit_in_some_type_of_python/ | false | false | default | 0 | null | ||
Graphic text adventure game locally with llama and stable diffusion | 53 | null | vaidas-maciulis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11wwwjq | false | null | t3_11wwwjq | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/ | false | false | 53 | {'enabled': True, 'images': [{'id': 'UUEN2aBavhIV-ntfDXrI2CUIDgiKHscYjAF-1pXkZo4', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=108&crop=smart&auto=webp&v=enabled&s=941f9b5e84c7e69f800808e73a75167e7b8ef130', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=216&crop=smart&auto=webp&v=enabled&s=242e3f8a7987c2b37b2fdac0bb3c475c805820a0', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=320&crop=smart&auto=webp&v=enabled&s=4c672f0a0391738777bc16ae1087a43ce76dd51d', 'width': 320}, {'height': 523, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=640&crop=smart&auto=webp&v=enabled&s=6464b63a9b9236194a5d15169ef41fbb5262ac86', 'width': 640}, {'height': 785, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=960&crop=smart&auto=webp&v=enabled&s=c2994092fe62da1230f1821a96abc5946ca0b659', 'width': 960}], 'source': {'height': 807, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?auto=webp&v=enabled&s=2d58a9936dcdb78893de3af2a7b941bca469af4e', 'width': 986}, 'variants': {}}]} | |||
Is it possible to integrate llama cpp with automated input and output? | 5 | A friend and I came up with the idea to combine LLaMA cpp and its chat feature with Vosk and Pythontts. But I have no clue how realistic this is with LLaMA's limited documentation at the time. Has anyone attempted anything similar yet? | null | https://www.reddit.com/r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/ | falconnor4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x12jz | false | null | t3_11x12jz | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/ | false | false | self | 5 | null |
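A minimal sketch of that wiring, with heavy caveats: the speech-to-text, LLM, and text-to-speech pieces are passed in as plain callables, so nothing here depends on the exact Vosk, llama.cpp, or TTS APIs. A real integration would wrap those libraries in these three functions; the names and signatures below are assumptions for illustration only.

```python
def assistant_loop(listen, generate, speak, stop_word="goodbye"):
    """Glue loop: speech-to-text -> LLM -> text-to-speech.

    listen()         -> str   (e.g. a Vosk recognizer wrapped in a function)
    generate(prompt) -> str   (e.g. a llama.cpp subprocess or binding call)
    speak(text)      -> None  (e.g. a TTS engine's say/run call)
    """
    transcript = []
    while True:
        heard = listen()
        if not heard:
            continue  # nothing recognized; keep listening
        if heard.strip().lower() == stop_word:
            break
        reply = generate(heard)
        transcript.append((heard, reply))
        speak(reply)
    return transcript
```

Because the heavy components are injected, the loop itself can be tested with stubs before any model or microphone is involved.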
How long does it take to get access? | 4 | I submitted my request two weeks ago and still haven't heard back. Now that there's stuff like Alpaca and other fine-tuning methods, I feel like I'm really missing out. I check my email almost religiously for a reply from Meta. I'm almost certain I used a .edu email address.
Also, can I run LLaMA on an M1 MacBook? I don't think there is enough VRAM, but surely there are approaches that are slower but still work? | null | https://www.reddit.com/r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x4v3c | false | null | t3_11x4v3c | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/ | false | false | self | 4 | null
Can someone post the hashes of the original straight-from-meta files? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11x7ntu/can_someone_post_the_hashes_of_the_original/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x7ntu | false | null | t3_11x7ntu | /r/LocalLLaMA/comments/11x7ntu/can_someone_post_the_hashes_of_the_original/ | false | false | default | 1 | null |
Are there publicly available datasets other than Alpaca that we can use to fine-tune LLaMA? | 15 | It seems that everyone is training LLaMA with the Stanford Alpaca dataset, but it is unclear to me if this is the optimal choice (has anyone done a comparison?). Also people have already discovered many problems in the dataset itself, which further adds to my doubt here.
Forgive my ignorance in this area: are there any other public instruction datasets that we can use to fine-tune the model? What is the standard choice people go to when they want to fine-tune an instruction-following model? (Or does none really exist?) | null | https://www.reddit.com/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/ | _ouromoros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x9hzs | false | null | t3_11x9hzs | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/ | false | false | self | 15 | null
How to do Llama 30B 4bit finetuning? | 14 | So far it looks like the only finetuning is using 8bit following the Alpaca Lora.
Do we expect 4bit finetuning to come out, or not at all? Is it just a matter of someone converting the 8bit fine tuning, or it's a lot more complicated than that?
And does it require additional VRAM to run the finetuning? I have a 3090 and 64GB system memory, hence why I'm targeting 30B 4bit. | null | https://www.reddit.com/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xbu7d | false | null | t3_11xbu7d | /r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/ | false | false | self | 14 | null |
Is this good idea to buy more rtx 3090? | 12 | Hi, I love the idea of open source. Currently I have 2x RTX 3090 and I am able to run the int4 65B LLaMA model. I am thinking about buying two more RTX 3090s now that I see how fast the community is making progress. Is this a good idea? Please help me with the decision. It is not about the money, but I still cannot afford an A100 80GB for this hobby. | null | https://www.reddit.com/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xfhq4 | false | null | t3_11xfhq4 | /r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/ | false | false | self | 12 | null
ATI GPU? | 3 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11xiwes | false | null | t3_11xiwes | /r/LocalLLaMA/comments/11xiwes/ati_gpu/ | false | false | default | 3 | null | ||
What's the current "Best" LLaMA LoRA? or moreover what would be a good benchmark to test these against. (HF links incl in post) | 35 | So far I've found:
https://huggingface.co/serpdotai/
https://huggingface.co/baseten/
https://huggingface.co/chavinlo/
https://huggingface.co/Draff/
https://huggingface.co/samwit/
(please add more in the comments if you know of more)
---------------------------------------
What would be a good way to benchmark these? are there any that are definitively 'better' than others, and are there any that I've missed? | null | https://www.reddit.com/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/ | blueSGL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xkuj5 | false | null | t3_11xkuj5 | /r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'a1ck-pg6Ht_BnI_Pw1ltcU-mDjXQXbTjWVVfSwpGQMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=108&crop=smart&auto=webp&v=enabled&s=c5b3e826e50f12d0b8524cbfc7f340a9636f9c85', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=216&crop=smart&auto=webp&v=enabled&s=88b9e2fe6859f0b98735e6d13e9d68c1186821d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=320&crop=smart&auto=webp&v=enabled&s=26d119147b937946743453b93b8a4a97bdf17152', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=640&crop=smart&auto=webp&v=enabled&s=e35590360598ad41abddc0a849cac4f546fd6236', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=960&crop=smart&auto=webp&v=enabled&s=7a527f08c676522c52ce601a2e9e77a759c0a12e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=86c8a86924572876109c7db53e8dc37b9d76201e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?auto=webp&v=enabled&s=c7a538039a33c3c27408f7e9885e6c58b5304eb7', 'width': 1200}, 
'variants': {}}]} |
Can I fine-tune llama 7b alpaca or gpt neo x 125m or any other models on my GTX 1650 super 4gb VRAM and 8gb of ram (I'm new to this) | 2 | null | https://www.reddit.com/r/LocalLLaMA/comments/11y0kqz/can_i_finetune_llama_7b_alpaca_or_gpt_neo_x_125m/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11y0kqz | true | null | t3_11y0kqz | /r/LocalLLaMA/comments/11y0kqz/can_i_finetune_llama_7b_alpaca_or_gpt_neo_x_125m/ | false | false | default | 2 | null | |
PyLLaMACpp: A simple way to run llama.cpp + Python bindings | 1 | null | https://github.com/abdeladim-s/pyllamacpp | a-user7 | github.com | 1970-01-01T00:00:00 | 0 | {} | 11y3ont | false | null | t3_11y3ont | /r/LocalLLaMA/comments/11y3ont/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | false | false | default | 1 | null | |
r/LocalLLaMA Subreddit Statistics | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11ydfi4 | false | null | t3_11ydfi4 | /r/LocalLLaMA/comments/11ydfi4/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null | ||
llama.cpp vs alpaca.cpp: same model different results | 17 | I was performing some reasoning tests to see how Alpaca performs and found that, with the same model and the same sampling parameters, the two programs behave quite differently.
I'm quite new to LLMs, so I'm studying the subject through trial and error; forgive me if it's a trivial question.
>sampling parameters: temp = 0.200000, top\_k = 40, top\_p = 0.950000, repeat\_last\_n = 64, repeat\_penalty = 1.100000
>
>Model: ggml-alpaca-7b-q4.bin
Both llama.cpp and alpaca.cpp pulled fresh today.
Prompt:
>All Germans speak Italian. All Italian speakers ride bicycles.\\
Which of the following statements is true? You must choose one of the following:\\
1- All Italians speak German\\
2- All bicycle riders are German\\
3- All Germans ride bicycles\\
4- Some of the Italians riding bicycles are Germans
llama.cpp answer:
>3- All Germans ride bicycles.
alpaca.cpp answer:
>5- None of the above
Any hint on what's happening? | null | https://www.reddit.com/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/ | fakezeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yhxjm | false | null | t3_11yhxjm | /r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/ | false | false | self | 17 | null |
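For anyone curious what those sampling parameters actually do, here is a rough stdlib-only sketch of the temperature / top-k / top-p chain. It is only illustrative of llama.cpp-style samplers, not their exact code (it omits the repeat penalty, for one); even with identical settings, differences in RNG seeding or sampler ordering between two programs can yield different answers.

```python
import math
import random

def sample_token(logits, temp=0.2, top_k=40, top_p=0.95, rng=None):
    """One sampling step: temperature scaling, then top-k, then top-p.

    temp must be > 0; lower values sharpen the distribution.
    """
    rng = rng or random.Random()
    # Temperature scaling + numerically stable softmax.
    scaled = [l / temp for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    # Sort (probability, token_id) pairs, most likely first.
    probs = sorted(((e / total, i) for i, e in enumerate(exps)), reverse=True)
    # top-k: keep only the k most likely tokens.
    probs = probs[:top_k]
    # top-p (nucleus): keep the smallest prefix whose mass reaches top_p.
    kept, mass = [], 0.0
    for p, i in probs:
        kept.append((p, i))
        mass += p
        if mass >= top_p:
            break
    # Draw from the renormalized truncated distribution.
    r = rng.random() * mass
    for p, i in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][1]
```

With temp = 0.2 the distribution is sharp, so a dominant logit wins almost every draw, which is why small implementation differences mostly show up on close calls.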
Build llama.cpp on Jetson Nano 2GB | 22 | #((Assuming a brand-new install of Ubuntu on the Jetson Nano))
#(MAKE SURE IT IS JETPACK 4.6.1!)
#Update your stuff.
sudo apt update && sudo apt upgrade
sudo apt install python3-pip python-pip
sudo reboot
#Install Aarch64 Conda
cd ~
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh
chmod a+x Miniforge3-Linux-aarch64.sh
./Miniforge3-Linux-aarch64.sh
sudo reboot
#Install other python things.
sudo apt install python3-h5py libhdf5-serial-dev hdf5-tools libpng-dev libfreetype6-dev
#Create the Conda for llamacpp
conda create -n llamacpp
conda activate llamacpp
# build this repo
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
#Next we need PyTorch. PyTorch runs on the Jetson Nano, let's install it!
#From NVIDIA we can learn here how to install PyTorch on our Nano.
#https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
#make Sure everything is update!
sudo apt-get -y update
#Install Prerequisite
sudo apt-get -y install autoconf bc build-essential g++-8 gcc-8 clang-8 lld-8 gettext-base gfortran-8 iputils-ping libbz2-dev libc++-dev libcgal-dev libffi-dev libfreetype6-dev libhdf5-dev libjpeg-dev liblzma-dev libncurses5-dev libncursesw5-dev libpng-dev libreadline-dev libssl-dev libsqlite3-dev libxml2-dev libxslt-dev locales moreutils openssl python-openssl rsync scons python3-pip libopenblas-dev;
#Set the install path. This is for JetPack 4.6.1
export TORCH_INSTALL=https://developer.download.nvidia.com/compute/redist/jp/v461/pytorch/torch-1.11.0a0+17540c5+nv22.01-cp36-cp36m-linux_aarch64.whl
#Run each individually!!! Make sure they work.
python3 -m pip install --upgrade pip
python3 -m pip install aiohttp
python3 -m pip install numpy=='1.19.4'
python3 -m pip install scipy=='1.5.3'
export "LD_LIBRARY_PATH=/usr/lib/llvm-8/lib:$LD_LIBRARY_PATH";
#llama.cpp needs sentencepiece!
#We can learn how to build on nano from here! https://github.com/arijitx/jetson-nlp
git clone https://github.com/google/sentencepiece
cd sentencepiece
mkdir build
cd build
cmake ..
make -j $(nproc)
sudo make install
sudo ldconfig -v
cd ..
cd python
python3 setup.py install
#Upgrade protobuf, and install the torch!
python3 -m pip install --upgrade protobuf; python3 -m pip install --no-cache $TORCH_INSTALL
#Check to make sure this works!
python3 -c "import torch; print(torch.cuda.is_available())"
#If it responds True, then it is OK!
This is the only model I got to work so far.
Next make a folder called ANE-7B in the llama.cpp/models folder.
Download ggml-model-q4\_1.bin from huggingface.
[Pi3141/alpaca-7b-native-enhanced · Hugging Face](https://huggingface.co/Pi3141/alpaca-7b-native-enhanced)
Include the params.json in the folder.
In the prompts folder, make a new file called alpacanativeenhanced.txt and include this text:
You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and you consider the conversation history.
User: Hey, how's it going?
Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!
Then run this command:
main -m models/ANE-7B/ggml-model-q4_1.bin -n -1 --ctx_size 2048 --batch_size 16 --keep 512 --repeat_penalty 1.0 -t 16 --temp 0.4 --top_k 30 --top_p 0.18 --interactive-first -ins --color -i -r "User:" -f prompts/alpacanativeenhanced.txt | null | https://www.reddit.com/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/ | SlavaSobov | self.LocalLLaMA | 2023-04-05T14:07:27 | 0 | {} | 11yi0bl | false | null | t3_11yi0bl | /r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': '9JOfN81286Rl3B2ZMQ_BimKP82h5bGxPMzqWlT8aQ-A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b827d071aee5e763f6988ac05b3fb6a013476585', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8d4204246eb9b96ce2efddbf7f66b0d6683337ce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=320&crop=smart&auto=webp&v=enabled&s=b63a93fa64583c38aba087c31bae791d4a67af78', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=640&crop=smart&auto=webp&v=enabled&s=f49fb80998b565a07aa4e5b0b63f17ad8d167b6c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ffd33240b29157cf43961c1643e8a48679e92100', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=04e4243acf462525f0e6af9ae5ebbe1fd796f7b5', 'width': 1080}], 'source': {'height': 648, 'url': 
'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?auto=webp&v=enabled&s=c4f560fdf3e651e0acfb6731451f1b6022fdcfe0', 'width': 1200}, 'variants': {}}]} |
Trying to get continuous conversations (conversation memory?) working in alpaca.cpp on Mac M2 | 7 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yycky | false | null | t3_11yycky | /r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/ | false | false | default | 7 | null |
using oobabooga webui with 7B LLama on GPU, some characters are amazing some are useless | 7 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yz2dm/using_oobabooga_webui_with_7b_llama_on_gpu_some/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yz2dm | false | null | t3_11yz2dm | /r/LocalLLaMA/comments/11yz2dm/using_oobabooga_webui_with_7b_llama_on_gpu_some/ | false | false | default | 7 | null |
Your `--load-in-8bit` error may be due to non-supported GPU. Want to fix it? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/ | monkmartinez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yzosa | false | null | t3_11yzosa | /r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/ | false | false | default | 2 | null |
Why is 30B not available even though it says it’s there? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11z2251 | false | null | t3_11z2251 | /r/LocalLLaMA/comments/11z2251/why_is_30b_not_available_even_though_it_says_its/ | false | false | default | 1 | null | ||
Cuda Frustration | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11z5g9x/cuda_frustration/ | RobXSIQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11z5g9x | false | null | t3_11z5g9x | /r/LocalLLaMA/comments/11z5g9x/cuda_frustration/ | false | false | default | 1 | null |
Got problems with bitsandbytes? This may be a fix... | 11 | Some users of bitsandbytes, the 8-bit optimizer by Tim Dettmers, have reported issues when using the tool with older GPUs, such as Maxwell or Pascal. I believe they don't even know it's an issue. These GPUs do not support the required instructions for the tool to run properly, resulting in errors or crashes.
A simple fix for this issue is available, which requires the use of a prebuilt DLL from https://github.com/james-things/bitsandbytes-prebuilt-all_arch by James. This DLL contains the necessary instructions for the tool to run on older GPUs.
To apply the fix, follow these steps:

1. Download the prebuilt DLL from https://github.com/james-things/bitsandbytes-prebuilt-all_arch.
2. Replace the existing DLL in the bitsandbytes tool directory with the downloaded DLL by moving the files into C:\Users\xxx\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\
3. Edit bitsandbytes\cuda_setup\main.py with these changes:
    * Change this line: ct.cdll.LoadLibrary(binary_path)
    * To the following: ct.cdll.LoadLibrary(str(binary_path))
    * There are two occurrences in the file.
    * Then replace this line: if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
    * With the following: if torch.cuda.is_available(): return 'libbitsandbytes_cudaall.dll', None, None, None, None
4. Please note that the prebuilt DLL may not work with every version of the bitsandbytes tool, so make sure to use the version that is compatible with the DLL.
I used this on WSL and Regular windows install with a maxwell generation card after trying a bazillion and 1 different methods. Finally, I found that my card was too old and none of the options out in the wild would work until I addressed that issue.
https://github.com/oobabooga/text-generation-webui/pull/504 | null | https://www.reddit.com/r/LocalLLaMA/comments/11z8vzy/got_problems_with_bitsandbytes_this_may_be_a_fix/ | monkmartinez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11z8vzy | false | null | t3_11z8vzy | /r/LocalLLaMA/comments/11z8vzy/got_problems_with_bitsandbytes_this_may_be_a_fix/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '4sHFecMk7qkFB_6Ku5iVFW6NZloZb3t2BcdZyo7uPq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=ab93d070168ca8e91ed9c9d49abcfd6212a8ba9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=0a6201e732ff85f37a06613d4bf0b60786b33746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=45bded5f7b724c82d4327e10f8f6cfbdf6b08588', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=bfbb4dd63993cb9cc3cfb96fcb63aee640f7f3b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=883948e8c2576f2ce9642586d6fe3ca171611859', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=b6daa6862a7ab6dd61f7b8f7fd5022afdb3f1003', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?auto=webp&v=enabled&s=e223868c7d3d5c5785c4f2a141bd67cf5ca4c652', 'width': 1200}, 'variants': {}}]} |
Cformers "Transformers with a C-backend for lightning-fast CPU inference". | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11zass0 | false | null | t3_11zass0 | /r/LocalLLaMA/comments/11zass0/cformers_transformers_with_a_cbackend_for/ | false | false | default | 1 | null | ||
Cformers 🚀 - "Transformers with a C-backend for lightning-fast CPU inference". | Nolano | 15 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/ | AcanthocephalaOk1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zax10 | true | null | t3_11zax10 | /r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/ | false | false | default | 15 | null |
Trying Alpaca.cpp and it's definitely fun. | 4 | null | lebanine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11zayxp | false | null | t3_11zayxp | /r/LocalLLaMA/comments/11zayxp/trying_alpacacpp_and_its_definitely_fun/ | false | false | default | 4 | null | ||
LLaMA Optimized for AMD GPUs | 14 | null | https://github.com/nod-ai/SHARK/blob/f898a1d33220256af913a12001b8fe8169a1aa9d/shark/examples/shark_inference/llama/README.md | friedrichvonschiller | github.com | 1970-01-01T00:00:00 | 0 | {} | 11zcqj2 | false | null | t3_11zcqj2 | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'FFwNBRn9ldrmA40DDb4GIlyeUDwXrClUXGJlqRDTUds', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=28fed118930bc918840a14685ab352739edf4876', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=10babb6b6db8903df99e458f7b7f7cc526df46b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ad102357b3399ef796d97b01f8d44157e3acb752', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=33444a7b1ba37825ae81207687dde5029b64e7ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=dc71b2848feb9a8db3efbe20bd1df12659b0eb8b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=34bb0fc9beceec88f8f5ae1bce73846aced1e7ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?auto=webp&v=enabled&s=5d3c5a7bc43c1b6316ef33260c744be40d8102ee', 'width': 1200}, 'variants': {}}]} | ||
Introducing llamacpp-for-kobold, run llama.cpp locally with a fancy web UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and more with minimal setup | 62 | You may have heard of llama.cpp, a lightweight and fast solution to running 4bit quantized llama models locally.
You may also have heard of KoboldAI (and KoboldAI Lite), full featured text writing clients for autoregressive LLMs.
# Enter [llamacpp-for-kobold](https://github.com/LostRuins/llamacpp-for-kobold)
This is a self-contained distributable powered by llama.cpp that runs a local HTTP server, allowing it to be used via an emulated Kobold API endpoint.
What does it mean? You get an embedded llama.cpp with a *fancy writing UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and everything Kobold and Kobold Lite have to offer*. In a tiny package **(under 1 MB compressed with no dependencies except python)**, excluding model weights. [Simply download, extract, and run the llama-for-kobold.py file with the 4bit quantized llama model.bin as the second parameter](https://github.com/LostRuins/llamacpp-for-kobold/releases/latest).
There's also [a single file version](https://github.com/henk717/llamacpp-for-kobold/releases/latest), where you just drag-and-drop your llama model onto the .exe file, and connect KoboldAI to the displayed link. | null | https://www.reddit.com/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zdi6m | false | null | t3_11zdi6m | /r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'iNOIL8xRSb88VpeaAwmnE5QXAeHIan30y-YJTKZwMwo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=bf723f5b97039400195f7592a5a0765f380f9939', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=216&crop=smart&auto=webp&v=enabled&s=1bfb5f61438bdb9b08b64536e7f9923d912b6275', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=320&crop=smart&auto=webp&v=enabled&s=cbb3e81969a125a32a5736d1094053838df35c33', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=640&crop=smart&auto=webp&v=enabled&s=4b822f47807686d0e33970d278f26291e0a59732', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=960&crop=smart&auto=webp&v=enabled&s=a1e17259bb89740cbfd4b45b9ac417cddc555e0d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4c503dd9532eda77073975b3ae279b59ad147d14', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?auto=webp&v=enabled&s=12186044742b80b1570fef0c99db404b50ad2afb', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | neefs | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 11zhxif | false | null | t3_11zhxif | /r/LocalLLaMA/comments/11zhxif/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null | |
Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11znjyq | false | null | t3_11znjyq | /r/LocalLLaMA/comments/11znjyq/alpacalora_lowrank_llama_instructtuning/ | false | false | default | 1 | null | ||
use in regular python scripts? | 12 | Got the llama-13b-hf to work on local RTX 4090 using the webui. This guide and oobabooga is awesome.
I know these are early days, but is there a guide or pointers on how to access these models directly using **regular python scripts**? I have some large file text analysis I want to do with it. Done chatting with it, it needs to get to work. I used GPT-4, but server.py and its supporting modules were too big for the 8k model to analyze. So pointers would be much appreciated!
Thank you for your hard work setting up this guide. While the big companies claim to be "democratizing" AI or that they are "open", you guys are actually democratizing AI. This is the best path to Alignment. | null | https://www.reddit.com/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/ | gransee | self.LocalLLaMA | 2023-03-23T23:17:36 | 0 | {} | 11zz7oa | false | null | t3_11zz7oa | /r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/ | false | false | self | 12 | null |
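As a sketch of the "get to work on large files" part: the model-specific call can be hidden behind a callable (a wrapper around whatever API or binding you use), and the only extra machinery a script needs is chunking the text to fit the context window. The chunk sizes below are assumptions for illustration, not tuned values.

```python
def chunk_text(text, max_chars=6000, overlap=500):
    """Split a long document into overlapping windows that fit the model's context."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        if start + max_chars >= len(text):
            break
        start += max_chars - overlap  # overlap so sentences aren't cut blind
    return chunks

def analyze_file(path, generate, question):
    """Run a prompt over each chunk of a file.

    generate: any callable prompt -> str, e.g. a wrapper around a local
    model's generation call (the exact API is up to your setup).
    """
    with open(path) as f:
        text = f.read()
    return [generate(f"{question}\n\n---\n{chunk}") for chunk in chunk_text(text)]
```

A second pass can then summarize the per-chunk answers into one result, which is the usual map-reduce pattern for documents larger than the context window.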
Running alpaca on my beater | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11zzlpz/running_alpaca_on_my_beater/ | rwaterbender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zzlpz | false | null | t3_11zzlpz | /r/LocalLLaMA/comments/11zzlpz/running_alpaca_on_my_beater/ | false | false | default | 1 | null |
New Torrent for Alpaca's 30B 4-bit weights (18.9 GB) | 5 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/ | msgs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1200mle | true | null | t3_1200mle | /r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/ | false | false | default | 5 | null |
Better QPTQ-quantized LLaMa soon? Paper authors reach out to improve quantization code | 39 | null | https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/75 | Blacky372 | github.com | 1970-01-01T00:00:00 | 0 | {} | 12047hn | false | null | t3_12047hn | /r/LocalLLaMA/comments/12047hn/better_qptqquantized_llama_soon_paper_authors/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'ft2PxN-_mUCPpCBPSPikSpTPj2DyPdvcv27qziT6pHU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=108&crop=smart&auto=webp&v=enabled&s=02884e129f373b1430651115330a52690768a53e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=216&crop=smart&auto=webp&v=enabled&s=68ac171c40e9ef8680d5de4043d3e95f871ce29b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=320&crop=smart&auto=webp&v=enabled&s=811614ace1be07d7a974a24e54624d3dc4ae248e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=640&crop=smart&auto=webp&v=enabled&s=20c62df8fa0996e7cc64765338b987d0ad3c73ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=960&crop=smart&auto=webp&v=enabled&s=8772e2a97e874eec54d7ca7e9fc2470bbeba69f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=517b64b0c72551a11588201e6386872f3bd5c915', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?auto=webp&v=enabled&s=e67a13398c4cd031a800011e7cfc57d6c4f0e3ff', 'width': 1200}, 'variants': {}}]} | ||
If you STILL have not been able to get oobabooga to run in Windows, my native installation bandaid seems to be working for people. Maybe it will work for you too. | 1 | null | https://github.com/ClayShoaf/oobabooga-one-click-bandaid | clayshoaf | github.com | 1970-01-01T00:00:00 | 0 | {} | 1206gwp | false | null | t3_1206gwp | /r/LocalLLaMA/comments/1206gwp/if_you_still_have_not_been_able_to_get_oobabooga/ | false | false | default | 1 | null | |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | rx01 | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 1209ndu | false | null | t3_1209ndu | /r/LocalLLaMA/comments/1209ndu/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null | |
Finetuning to beat ChatGPT: It's all about communication & management, these are already solved problems | 32 | When I first got researcher access to GPT-3, before it was famous, it would only continue the prompt. It only took a moment, but it was a perspective shift to understand that if you write *"What's the capital of France?"*, the most obvious continuation is not the answer but another question about geography. It's logical, but it's not expected.
I know everyone here knows this already, but this first perspective shift is the key to understanding how to finetune, and this is the key to beating both ChatGPT and Alpaca.
Humans already have an established understanding of how certain words ought to be responded to in any particular context. Yet instead of training for this, OpenAI trained for "instruct" and Alpaca trained only on questions and answers.
The correct and only way to train a general large language model that accepts human language as input is to have it respond as a human would. Humans do not assume everything is a question, nor do they take everything as an instruction.
Why must it respond like a human? Because that expectation is already built into the language, and the structure of the language is based on these unsaid assumptions. For example, without other instruction, if I state a question I expect the answer, and giving further instructions (e.g. give a wrong answer) overrides the question. These rules exist, but they are not stated in the language itself; hence they must be trained. You simply cannot avoid the fact that the language only works on the assumption that the listener is able to infer the expectation and context.
Therefore, a general model should be trained on a range of responses to learn the ability to infer the expectation and context. Finetuning is not about training the model to answer questions, or follow instructions, but about training how it should react. How it should react is how you'd expect it to react.
Doing this is not difficult. This is not an unsolved problem, it's already solved. It's just a matter of training those assumptions.
What should not be done, is to train humans to talk differently to the language model. That's the wrong way around.
I intend to do this and make it freely available and open source. I will make a large database of thousands of such prompts with potential responses. However, I could do with help. The type of help that would be most useful would be from server-admin-type people. I used to run an AI-based company focused on document classification and layout analysis, and I was originally a "programmer", so I understand how to manage and format training data, clean the data, and all that, but I'm not a server admin. I will need help from someone who can do the training, etc. I will also need hardware, so if anyone volunteers, that's great; otherwise I can rent hardware for training on vast.ai.
This database of prompts could be used in future for training many different language models.
\-----------
It's worth noting that higher-quality results can be obtained by avoiding templating the prompts and responses in the finetuning. Rather, the prompt need only contain minimal labelling of what is user input and what is response.
The training examples should include the ability to continue a conversation, which OpenAI showed is a simple matter of appending new questions to the end of the previous prompt up to a maximum size. Obviously, then, training data should include such larger prompts containing multiple questions and answers. Ideally, however, it should not be forced into a specific format.
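That appending approach can be sketched in a few lines of Python. This is purely my own illustration, not code from OpenAI or anywhere else; the turn labels and the 2048-character budget are arbitrary assumptions:

```python
# Sketch (assumptions: "User:"/"Assistant:" labels, a character budget)
# of rolling a conversation into one prompt by appending each new
# exchange and dropping the oldest turns when the prompt gets too long.

MAX_PROMPT_CHARS = 2048

def build_prompt(history, new_user_input):
    """history is a list of (user, response) pairs; returns the full prompt."""
    turns = [f"User: {u}\nAssistant: {r}" for u, r in history]
    turns.append(f"User: {new_user_input}\nAssistant:")
    prompt = "\n".join(turns)
    # Drop the oldest turns until the prompt fits the budget.
    while len(prompt) > MAX_PROMPT_CHARS and len(turns) > 1:
        turns.pop(0)
        prompt = "\n".join(turns)
    return prompt

history = [("What's the capital of France?", "Paris.")]
prompt = build_prompt(history, "And of Germany?")
print(prompt)
```

In a real system the budget would be counted in tokens rather than characters, but the shape of the loop is the same.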
You also have the philosophical matter of what to do with unanswerable questions. For example, how should a model that doesn't have access to the time answer "what is the time?" It could:
1. Say it doesn't know the time.
2. Explain that it's an AI and it doesn't have access to the time.
3. Explain how to get the time, e.g. check your phone, or go to time.com
Any of those 3 answers would be a fair "human" answer. It's more of a policy matter than a language/cultural matter for a question like this.
There are a few potential solutions: (1) choose a policy before training, (2) train the model to give all options, (3) ask the user for further context when needed. The perfect example of doing this wrong is "Hey, Google" on my Google speaker which insists on giving me unwanted information when asked a simple question.
\-----------
Example of how bad some of the Alpaca training data are:
`"instruction": "Develop a machine learning model to predict the price of a given stock.",`
`"input": "The historical daily price data of three stocks of a technology company, namely ABC stock, XYZ stock and PQR stock, from the past 5 years.",`
`"output": "The machine learning model developed will use the input historical stock prices data to predict the future prices of the mentioned stocks. The model will be trained with the given dataset to extract patterns and trends from the data to generate a more accurate prediction. The Jupyter Notebook will have code written in Python, which will take the input data, train and test the model, and generate the predicted prices."`
They're training it to pretend to help. *How to build a house? (1) Build house.* Thanks Alpaca!
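A crude heuristic can flag many of these pretend-to-help outputs before training. This is only a sketch of one possible filter I came up with; the future-tense phrase list is an assumption, not anything from the Alpaca repo:

```python
# Sketch of a heuristic filter: flag training examples whose "output"
# merely describes doing the task in the future tense ("will use",
# "will be trained") instead of actually doing it.

FUTURE_TELLS = ("will use", "will be trained", "will have", "will generate", "will take")

def looks_like_pretend_help(example):
    out = example["output"].lower()
    # Two or more future-tense tells is a strong hint of a non-answer.
    return sum(tell in out for tell in FUTURE_TELLS) >= 2

bad = {
    "instruction": "Develop a machine learning model to predict the price of a given stock.",
    "output": "The machine learning model developed will use the input historical stock "
              "prices data to predict the future prices. The model will be trained with "
              "the given dataset.",
}
good = {"instruction": "What is 2+2?", "output": "4"}

print(looks_like_pretend_help(bad))   # True
print(looks_like_pretend_help(good))  # False
```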
\-----------
Finally, I also have the idea of training Llama to use tools (calculator, Internet access, etc.), either by training it to use a text-based application that has limited functionality (safer) or by training it to write bash files whenever it needs to access external functionality (which would need its own environment). The way to do this would be to train a logistical/manager model that communicates with more general models, very much like how a company works. I successfully used a design like this in a large-scale automated process in the past, and I believe the same can work here.
I will write a new post for this because it's a separate idea to the purpose of this post. Update, it's here: [https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training\_llama\_for\_tooluse\_via\_a/](https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/) | null | https://www.reddit.com/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/ | Pan000 | self.LocalLLaMA | 2023-03-24T12:38:57 | 0 | {} | 120e7m7 | false | null | t3_120e7m7 | /r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/ | false | false | self | 32 | null |
What's the fastest implementation for a 24GB GPU with a 16-core CPU? | 3 | I have been testing various implementations of Llama, like llama.cpp and Torch versions with and without int8 quantization, but nothing seems particularly fast (> 8 seconds, even for 7B on GPU, for 256-token generations).
Are there any really fast implementations out yet? Maybe ONNX compilation or something to speed up inference? I'm looking for sub-2-second generation of 256 tokens, for voice interaction. I can get that from the OpenAI API, but I want something local.
Thanks! | null | https://www.reddit.com/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/ | Reddactor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120etwq | false | null | t3_120etwq | /r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/ | false | false | self | 3 | null |
Would it be better for me to run it on gpu or cpu | 3 | I understood that there are two ways of running it: normally, or on CPU with llama.cpp.
I've got a 2080 Ti and a Ryzen 5 2600. What do you think?
Generally, what are the advantages of running it on CPU? | null | https://www.reddit.com/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/ | lord_baba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120eu0u | false | null | t3_120eu0u | /r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/ | false | false | self | 3 | null
Training Llama for tool-use via a manager/logistical model | 5 | My [earlier post](https://www.reddit.com/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/) was about training Llama to understand human-language context and expectations for how it should react to user input.
The next obvious stage is tool use, which is the use of calculator, Internet browsing, local search, etc.
And to do that requires stages, which requires logistics.
**Logistics / Management**
As I said in my earlier post, these are already solved problems. In this case it's just logistics. Automated AI logistics is something I was doing a lot of a few years back.
There would be multiple models, each with a specific job:
BASIC MODEL
A general model, like the existing models, that replies to the user based on their input. However, this model would also be trained to detect instances in which it should refer to its manager, for example if the user asked for something complicated such as "design me a website".
If the basic model has trouble identifying what it should answer itself versus refer to the manager, a diagnostician model can be used whose entire job is to analyze inputs and choose which model they're sent to. However, this adds an additional layer of processing that isn't required for the majority of usage, roughly doubling the processing power needed for most tasks, which is why it's preferable to train the most commonly used model to also act as a diagnostician.
MANAGER MODEL
This model is trained to separate a problem into individual defined tasks and to do so within a template that can be parsed. For example, "design me a website" would result in it responding with questions for context, or if it has enough context, a list of tasks (choose title, design logo, create logo image, choose layout, write HTML & CSS, etc.)
Depending on how much it can do in one go, which we won't know until experimenting, it may or may not be necessary to have a separate model for analyzing the manager's plan. That would be a logistics model, which checks that the plan makes sense and determines the order in which it needs to be executed, along with any missing prerequisite information (e.g. design logo must be completed before create logo).
Further, it may be necessary to have a model that translates written instructions like "design a logo of a cat 300x900 pixels in png format" into parseable instructions `"tool:stablediffusion loadmodel:logogen prompt:cat pipeto upscaler"` (obviously just to give the idea). It's possible all this can be done within one manager model, or separate models may be needed.
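A minimal parser for instruction lines in that rough shape might look like the sketch below. The `key:value` format above was only illustrative, and this parser is equally illustrative, not a real spec:

```python
# Sketch: split a manager instruction line into key:value fields plus
# any bare tokens (like "pipeto upscaler") that aren't key:value pairs.

def parse_instruction(line):
    """Parse e.g. 'tool:stablediffusion loadmodel:logogen prompt:cat pipeto upscaler'
    into a dict of fields and a list of bare tokens."""
    fields, extras = {}, []
    for token in line.split():
        if ":" in token:
            key, value = token.split(":", 1)
            fields[key] = value
        else:
            extras.append(token)
    return fields, extras

fields, extras = parse_instruction(
    "tool:stablediffusion loadmodel:logogen prompt:cat pipeto upscaler")
print(fields)  # {'tool': 'stablediffusion', 'loadmodel': 'logogen', 'prompt': 'cat'}
print(extras)  # ['pipeto', 'upscaler']
```

A real format would need quoting for prompts containing spaces, but the point is that the manager's output must be trivially machine-parseable.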
PARSER / EXECUTER
The manager outputs parseable instructions, on which it is trained, which are then given to other models. This output is parsed and executed in the order provided (some jobs can run in parallel; some require first waiting for another to complete).
Syntax and other error checking can be done directly by the executer on the received response from the appropriate model, before then sending it to Quality Control.
QUALITY CONTROLLER
This model is trained with inputs consisting of the prompt and information that were used to generate a multi-modal response, all the media (as filenames), and any other information such as errors. From this it'll be trained to look for issues. What it looks for, and how, would be an evolving situation.
It may be that an individually trained quality control model is used for each tool. For example, one model checks that the HTML does indeed look like what the prompt requested. Another is trained on images of professional logos along with the prompt information (not yet possible with Llama, but it will be possible within a few months, no doubt).
DELIVERY MODEL
Trained on the original prompts, info, and response from Quality Control, this is a fairly straightforward model that delivers the finished product to the user. Replies to this model go back to the Manager model. In the real world, delivery is the Manager's job, but in this case it makes more sense to separate them so as not to dilute the Manager's training.
**More complicated tasks**
The crux of the issue is the quality of the training data for training the manager and the task-separating parts. In theory, a product of any complexity could be generated, assuming the tools are available.
The difficulty in the job of a manager, which I assume this model will also have difficulty with, is not forgetting something or knowing what you've forgotten. It becomes obvious in a business if you try to make a computer game but something was forgotten, e.g. you forgot to think about the music when designing the game.
It may be that the whole issue comes down to training models that think about very specific issues that need to be solved. An example might be that the game's background music should be shuffled using a shuffling algorithm, which is different from playing tracks at random. But why would it know that? It might know that. It might not. So far we've not tested the limits here. It might be that this will always rely on user input; it might be that the manager model can be trained to think of everything. We'll find out, but either way it'll still work. The question is only how much can be done without the user.
It's probably necessary to have the manager display the plan to the user to approve before executing it. It may be necessary to have stages of plans in different levels of detail, breaking down each task into smaller and smaller tasks until the entire job is specified onto the smallest level. Or it might not be.
**Tool use**
Tool use is fairly straightforward. In theory you could have it generate bash files, double-check each bash file with a quality control model that verifies it does what the prompt asked for, check it for syntax errors, then run it. The output could be fed back into the model as a hidden variable, which the model would be trained to understand means it should read the output back.
For example: What's the time?
The model creates a bash file to output the time and tags it with a `#run` command and the templated response `"The current time is [INSERT]"`. This is detected by simple parsing of the result before displaying it to the user. The bash file is checked by the separate quality control model and then run. The bash file code is deleted from the output, and `[INSERT]` is replaced with the bash file's output.
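A minimal sketch of that parse-run-substitute loop, assuming the `#run` tag and `[INSERT]` placeholder described above. The regex and subprocess call are my assumptions, and the quality-control check is omitted here:

```python
# Sketch: if the model's reply contains a '#run' block, execute it and
# substitute its stdout for [INSERT] in the visible text.
import re
import subprocess

def execute_tagged_reply(reply):
    match = re.search(r"#run\n(.*)", reply, flags=re.S)
    if not match:
        return reply  # no tool use requested
    script = match.group(1)                 # everything after the #run tag
    visible = reply[:match.start()].rstrip()  # templated text shown to the user
    result = subprocess.run(["bash", "-c", script],
                            capture_output=True, text=True, timeout=10)
    return visible.replace("[INSERT]", result.stdout.strip())

reply = "The current time is [INSERT]\n#run\necho 13:37"
print(execute_tagged_reply(reply))  # The current time is 13:37
```

In practice the generated script would go through the quality control model before `subprocess.run`, and would run sandboxed rather than with the host's shell.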
However, it'll be safer, easier to train on complicated multi-layered tasks, and easier to integrate Internet searching, etc., if there were a single interface with fixed abilities that Llama were trained on.
For example, a command-line application that can get the time, return a web page, can search on multiple search engines and Wikipedia and return the results, etc. It'd be the console interface that the language model was trained to use to communicate with the outside world.
I think ultimately it would be worth training a model to use an interface made for it rather than giving it broad access. Not necessarily because of safety, but because it'll be much easier to determine the quality of the outputs and it limits the model to using fixed resources available to it, which might make it produce better results.
The interface would be able to be updated with new features. That would require retraining and updating the model for all users, but that's not a problem.
*If anyone wants to help me make this an opensource reality, let me know.* | null | https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/ | Pan000 | self.LocalLLaMA | 2023-03-24T10:25:45 | 0 | {} | 120fw2u | false | null | t3_120fw2u | /r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/ | false | false | self | 5 | null |
How do I fine tune 4 bit or 8 bit models? | 0 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 120kojq | false | null | t3_120kojq | /r/LocalLLaMA/comments/120kojq/how_do_i_fine_tune_4_bit_or_8_bit_models/ | false | false | default | 0 | null | ||
Using LLaMA in interactive mode and reverse prompts? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/120lp6t/using_llama_in_interactive_mode_and_reverse/ | amiatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120lp6t | false | null | t3_120lp6t | /r/LocalLLaMA/comments/120lp6t/using_llama_in_interactive_mode_and_reverse/ | false | false | default | 1 | null |
Change Dalai model folder? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 120s66r | false | null | t3_120s66r | /r/LocalLLaMA/comments/120s66r/change_dalai_model_folder/ | false | false | default | 1 | null | ||
Keep your GPUs cool | 9 | This may not apply to those of you running gaming hardware, but for those of us running Teslas, maybe without proper cooling: keep your GPUs cool.
I've been playing around with llama 30b 4bit on my M40. After loading the model, I'm generating about 8 tokens/second for the first few prompts; a few prompts in, my temps are up to 87°C and I'm only getting 1-2 tokens/s.
Anyway, a reminder for those who may be forgetful like me: order a cooling shroud or something. I ordered a few off of eBay, so hopefully they work. But damn, eBay takes a long time to ship to Canada. | null | https://www.reddit.com/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/ | wind_dude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120spmv | false | null | t3_120spmv | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/ | false | false | self | 9 | null
Which LLaMA model is best for my set up? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/120x6eh/which_llama_model_is_best_for_my_set_up/ | EnvironmentalAd3385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120x6eh | false | null | t3_120x6eh | /r/LocalLLaMA/comments/120x6eh/which_llama_model_is_best_for_my_set_up/ | false | false | default | 1 | null |
Testing out image recognition input techniques and outputs by modifying the sd_api_picture extension, using Oobabooga and LLaMA 13B in 4-bit mode | 20 | Just thought I'd share various ways to use/change the existing image recognition and image generating extensions.
[https://imgur.com/a/KEuaywA](https://imgur.com/a/KEuaywA)
I was able to get the AI to identify the number and type of objects in an image by telling the AI in advance and having it wait for me to send it an image. Using LLaMA and my ChatGPT character card ([https://old.reddit.com/r/Oobabooga/comments/11qgwui/getting\_chatgpt\_type\_responses\_from\_llama/](https://old.reddit.com/r/Oobabooga/comments/11qgwui/getting_chatgpt_type_responses_from_llama/)) I can actually tell the AI that I'm going to send a picture, and it responds appropriately and waits for me to send the image...wow!
I've also modified the script.py file for the sd\_api\_pictures extension for Oobabooga to get better picture responses. I essentially just deleted the default input messages to the image generating portion of the pipeline. The Image with the astronaut is using the standard script.py file, and the following images use my modified version, you can get here:
Google Drive link with, the Character Card, settings preset, example input image of vegetables, and modded script.py file for the sd_api_pictures extension:
https://drive.google.com/drive/folders/1KunfMezZeIyJsbh8uJa76BKauQvzTDPw | null | https://www.reddit.com/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-25T15:32:30 | 0 | {} | 1211u41 | false | null | t3_1211u41 | /r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'dS3rdb6qhR5L7kujCCTYZo67gjIK9OkqcxuDOenNd1I', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=f617f3c272c6b83ae4d52d9c5b67f69c10eda429', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=216&crop=smart&auto=webp&v=enabled&s=24b5f992d2c82b272380864c4ed8824435ce36e4', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=320&crop=smart&auto=webp&v=enabled&s=51a3f64160753c22aa4446b896e5bc301580ef23', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=640&crop=smart&auto=webp&v=enabled&s=464e24fc8dbaf1613ca5e31f522dd71fd702e869', 'width': 640}], 'source': {'height': 2120, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?auto=webp&v=enabled&s=5d5e9472fa1e85c60165a3df52ff97ae42c750e3', 'width': 886}, 'variants': {}}]} |
Run llama.cpp with tea – without the installation pain! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1216du7/run_llamacpp_with_tea_without_the_installation/ | mfts0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1216du7 | false | null | t3_1216du7 | /r/LocalLLaMA/comments/1216du7/run_llamacpp_with_tea_without_the_installation/ | false | false | default | 1 | null |
Managed to get 4-bit Llama 7b running but it's generating random text. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1219iq2/managed_to_get_4bit_llama_7b_running_but_its/ | MimeMagic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1219iq2 | false | null | t3_1219iq2 | /r/LocalLLaMA/comments/1219iq2/managed_to_get_4bit_llama_7b_running_but_its/ | false | false | default | 1 | null |
Can't tell if the AI is repeating itself or actually being stubborn HAL? | 6 | null | Inevitable-Start-653 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 121ad9h | false | null | t3_121ad9h | /r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'e7N4xnKzBmpMEWucb_7lM4bqYKvzkc5v61sBChQW1fc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=108&crop=smart&auto=webp&v=enabled&s=28f2c84584097d2ff7cf860db3e38a62a18ff1bb', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=216&crop=smart&auto=webp&v=enabled&s=17d8199f6ee2128b8b9bd56bb629755e1507d7fc', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=320&crop=smart&auto=webp&v=enabled&s=dbb6541cfe3244f8bcb20ad7212ec06b836a3640', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=640&crop=smart&auto=webp&v=enabled&s=afd929a5c15d30388d9b7f815bb976be650c5133', 'width': 640}], 'source': {'height': 6184, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?auto=webp&v=enabled&s=cd3d5e6a009605d65e325540555f0af03517e88b', 'width': 816}, 'variants': {}}]} | |||
Implementing Reflexion into LLaMA/Alpaca would be an really interesting project | 15 | null | https://www.reddit.com/r/MachineLearning/comments/1215dbl/r_reflexion_an_autonomous_agent_with_dynamic/ | jd_3d | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 121b1l5 | false | null | t3_121b1l5 | /r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]} | ||
How to use LoRA (need instructions) | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/ | Famberlight | self.LocalLLaMA | 2023-03-25T08:30:11 | 0 | {} | 121ehwj | false | null | t3_121ehwj | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/ | false | false | default | 3 | null |
Need help installing alpaca on Android | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121g7yn | false | null | t3_121g7yn | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/ | false | false | default | 0 | null |
Has anyone tried Dolly AI? | 0 | [deleted] | null | [deleted] | 2023-03-25T12:26:22 | 0 | {} | 121imtt | false | null | t3_121imtt | /r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/ | false | false | default | 0 | null | ||
Help installing LLaMA on ubuntu with Tesla M40 24GB | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121kdnc/help_installing_llama_on_ubuntu_with_tesla_m40/ | MageLD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121kdnc | false | null | t3_121kdnc | /r/LocalLLaMA/comments/121kdnc/help_installing_llama_on_ubuntu_with_tesla_m40/ | false | false | default | 1 | null |
Simulating Aristotle in Alpaca 7B, I used gpt4 to build the character profile, I've been surprised and just wanted to share. I have some more like this that I can share if you like it :D | 35 | null | psycholustmord | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 121nytl | false | null | t3_121nytl | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/ | false | false | 35 | {'enabled': True, 'images': [{'id': '8MdAsu39-EmfKrxvC9bjfAcatC8OSj-_OLidKAh7w6M', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=108&crop=smart&auto=webp&v=enabled&s=3bcecb428949ca686c3378dcbcb1e1c32afc8862', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=216&crop=smart&auto=webp&v=enabled&s=77f20c623c93040ccf88b61ee7b7ce02d723476c', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=320&crop=smart&auto=webp&v=enabled&s=76399a6d4a5777e5d331f936b401bad08ad7c9e4', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=640&crop=smart&auto=webp&v=enabled&s=aa2e377972c49954007514bad08f8e167739a8a7', 'width': 640}], 'source': {'height': 705, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?auto=webp&v=enabled&s=b9575fcae2508deee18ab52349479f6588c9ca8c', 'width': 902}, 'variants': {}}]} | |||
fastLLaMa, A python wrapper to run llama.cpp | 16 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/ | BriefCardiologist656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121pk4y | false | null | t3_121pk4y | /r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/ | false | false | default | 16 | null |
Question on llama.cpp and webui (ram usage) | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/ | kozer1986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121redn | false | null | t3_121redn | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/ | false | false | default | 2 | null |
Out of memory with 13B on an RTX 3060? | 3 | I've been messing around with 13B on my CPU with some nice but very slow results.
I just upgraded my GPU to an RTX 3060 with 12GB, thinking it would be enough to run this model (I recall having heard someone mention using a similar GPU for 30B), but now that I've tried it, I got a CUDA out-of-memory error.
What could be the issue? Did I misinterpret the hardware requirements?
For what it's worth, I'm running the program using a little hacked-together Python script I wrote, and using decapoda-research/llama-13b-hf (so, I guess, not quantized like those in dalai? Could that be the issue?). | null | https://www.reddit.com/r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/ | amiatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121xqmv | false | null | t3_121xqmv | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/ | false | false | self | 3 | null
Starting model? | 5 | Apologies for the basic questions but...
What is the best model to start with? Llama or Alpaca? Is it best to start with 7B, or, if your system can handle it, to just start at the highest parameter count you can run?
I have not really seen many head to head comparisons or where it is best to start out.
I have access to an AMD 32-core / 64-thread system with a 3080 Ti video card (12GB VRAM) and 64GB of RAM.
Would you go with the 30B? | null | https://www.reddit.com/r/LocalLLaMA/comments/121y16f/starting_model/ | dave9199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121y16f | false | null | t3_121y16f | /r/LocalLLaMA/comments/121y16f/starting_model/ | false | false | self | 5 | null |
Can we create a megathread for cataloging all the projects and installation guides of Llama? | 37 | ***Disclaimer - The Title and Text of this post is created with the help of ChatGPT***
Hey everyone,
As Llama continues to grow and more projects and installation guides become available, it can be challenging to keep track of all of them. I'm sure many of you have seen multiple posts related to Llama, and it can be overwhelming to sort through them all.
Therefore, I propose that we create a megathread to collect and catalog all the projects and installation guides for Llama. This thread can serve as a one-stop-shop for anyone interested in learning more about Llama or installing it on their device.
I believe having a centralized thread for Llama projects and guides will help reduce clutter on the Subreddit and make it easier for users to find the information they need. It will also encourage more people to share their projects and guides, knowing that they have a dedicated space to do so.
What do you all think? Would you find a megathread useful? And if so, can we work together to create and maintain it?
Thanks for your time, and I look forward to hearing your thoughts. | null | https://www.reddit.com/r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/ | utkvishwas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121yfxl | false | null | t3_121yfxl | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/ | false | false | self | 37 | null |
Guide to the different parameters and models in web UI? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1221xih/guide_to_the_different_parameters_and_models_in/ | Nomski88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1221xih | false | null | t3_1221xih | /r/LocalLLaMA/comments/1221xih/guide_to_the_different_parameters_and_models_in/ | false | false | default | 1 | null |
Speed difference between 8bit and 4bit | 5 | Is 4bit supposed to run 5 times faster than 8bit? I'm getting 20 tokens/s on 4bit, but only 4 tokens/s on 8bit. Using a 3060 12gb, on WSL.
I just installed webui on WSL with a lot of trouble, and I want to make sure it's installed correctly.
I had 4bit work right away when following the pinned guide on this subreddit, but 8bit CUDA didn't work; yet still loaded sometimes and filled my VRAM. Then I reinstalled everything and added the step with symlinks and now 4bit and 8bit both work fine according to CUDA messages, but 8bit speed is the same as when it loaded after saying CUDA wasn't found.
This whole thing is kind of a mess honestly. I don't think WSL should be the recommended installation method. I can't even figure out how to create any kind of shortcut that would open an Ubuntu terminal and automatically activate conda and launch the webui. (without modifying .bashrc)
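For the shortcut problem, one untested sketch (assuming the distro is the default Ubuntu, the conda env is named `textgen`, and the repo sits in the WSL home directory — all assumptions) is a normal Windows shortcut whose Target is:

```
wsl.exe bash -ic "conda activate textgen && cd ~/text-generation-webui && python server.py --chat"
```

`bash -i` starts an interactive shell, so the conda hook that `conda init` already placed in `.bashrc` gets sourced without any further edits; if conda was never initialized in that shell, this won't work as-is.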
Edit: Ok, after further testing, it seems like while 4bit always generates tokens very fast, as the context increases in size it starts to take a lot more time to generate the first token. So 4bit token speed highly depends on how large of a response you set/get. I think this performance difference should be explained somewhere.
I'm also getting a ton of empty replies with error "UnboundLocalError: local variable 'reply' referenced before assignment"
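The `UnboundLocalError` itself is generic Python rather than anything 4-bit-specific: if the webui assigns `reply` only inside a branch (e.g. only when generation succeeds) and then reads it, a failed generation raises exactly this. A minimal sketch of the pattern (not the webui's actual code):

```python
def generate(succeeded: bool) -> str:
    if succeeded:
        reply = "some generated text"
    # When succeeded is False, `reply` was never assigned in this scope,
    # so reading it raises UnboundLocalError.
    return reply

try:
    generate(False)
except UnboundLocalError as e:
    print(f"caught: {e}")
print(generate(True))
```

So the empty replies and the error are likely the same failure: generation bailing out before a reply is ever produced.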
I think I'm gonna wait some more before getting into local text AI, doesn't seem ready for reliable usage yet. | null | https://www.reddit.com/r/LocalLLaMA/comments/122218d/speed_difference_between_8bit_and_4bit/ | wewyman | self.LocalLLaMA | 2023-03-25T23:57:32 | 0 | {} | 122218d | false | null | t3_122218d | /r/LocalLLaMA/comments/122218d/speed_difference_between_8bit_and_4bit/ | false | false | self | 5 | null |
video tutorial for AMD GPU to run LLaMA and LoRA on Ubuntu | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1224fh0/video_tutorial_for_amd_gpu_to_run_llama_and_lora/ | chain-77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1224fh0 | false | null | t3_1224fh0 | /r/LocalLLaMA/comments/1224fh0/video_tutorial_for_amd_gpu_to_run_llama_and_lora/ | false | false | default | 2 | null |
My experience with Alpaca.cpp | 52 | Ever since ChatGPT launched, I've been very fascinated by it and have sought a local alternative that can run fairly well on decent consumer hardware and that "anyone" can use if their hardware allows. Naturally, I was excited when I saw [this post](https://www.reddit.com/r/StableDiffusion/comments/11y6qs7/free_opensource_30_billion_parameters_minichatgpt/) the other day and I wasn't disappointed -- I feel like [alpaca.cpp](https://github.com/antimatter15/alpaca.cpp) with the [30B model in 4-bit quantization](https://huggingface.co/Pi3141) has made that a reality (though, I'm sure a good experience can be had with the 13B model for users with less than 32 GB of RAM or who wish for faster responses).
The steps are essentially as follows:
1. download the appropriate [zip file](https://github.com/antimatter15/alpaca.cpp/releases/tag/81bd894) and unzip it
2. place whatever model you wish to use in the same folder, and rename it to "ggml-alpaca-7b-q4.bin"
3. run ./chat to start with the defaults.
4. For a better experience, you can start it with this command: ./chat -t \[threads\] --temp \[temp\] --repeat\_penalty \[repeat penalty\] --top\_k \[top\_k\] --top\_p \[top\_p\].
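The steps above, as a shell sketch (filenames here are illustrative, not the exact release names — the `chat` binary looks for the fixed filename `ggml-alpaca-7b-q4.bin`, hence the rename in step 2):

```shell
# Steps 1-2: unzip the release, drop the model in under the fixed name.
MODEL_SRC="ggml-model-30b-q4.bin"      # whatever model you downloaded
MODEL_DST="ggml-alpaca-7b-q4.bin"      # name the chat binary expects

# Steps 3-4: build the chat command; -t should be ~your physical core count.
THREADS=12
CHAT_CMD="./chat -t $THREADS --temp 0.72 --repeat_penalty 1.1 --top_k 160 --top_p 0.73"

echo "mv $MODEL_SRC $MODEL_DST"
echo "$CHAT_CMD"
```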
Now, on the values to use: I have a 12700k and found that 12 threads works best (i.e. the number of actual cores I have, not total threads). I'm honestly not sure if this has something to do with the "e" cores on my CPU or is a general rule, but consider the number of physical cores you have to be a good starting point.
For the rest of the values, I like the numbers listed [here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/):
* For a more creative chat, use: temp 0.72, rep pen 1.1, top\_k 0, and top\_p 0.73
* For a more precise chat, use temp 0.7, repetition\_penalty 1.1764705882352942 (1/0.85), top\_k 40, and top\_p 0.1
Note that one hang-up I had is llama.cpp doesn't interpret a top\_k of 0 as "unlimited", so I ended up setting it to 160 for creative mode (though any arbitrarily high value would've likely worked) and got good results.
So, to run a "creative" chat for instance, the full command would be ./chat -t \[number of physical cores you have\] --temp 0.72 --repeat\_penalty 1.1 --top\_k 160 --top\_p 0.73.
Aaaaand... that's it. It's not quite ChatGPT but that's all that was needed to get this to work.
​
As for performance, it generates just under 70 words per minute on a 12700k while using just under 20 GB of RAM, though I'm sure this could be sped up significantly by using the 13B model.
As for quality, well.... I'm pretty impressed. I fed it some problems I noticed ChatGPT getting hung up on (ie "Ann is 20, her sister is half her age minus 25, in 10 years who will be older?" And "Bob, Sue, and Charlie went golfing. Bob scored 90, Sue scored 10 higher than Bob, Charlie scored 10 higher than Sue. Who won?") and... it answered them correctly. It was also able to accurately give me lists of wild cats in North America, answer some other random trivia, and hold a conversation convincingly. Not bad.
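A quick check of the arithmetic behind those two riddles (the golf one is a trick: in golf the lowest score wins, which is presumably the twist being tested):

```python
# Riddle 1: Ann is 20; her sister is half her age minus 25.
ann = 20
sister = ann / 2 - 25             # -15, i.e. the sister isn't even born yet
ann_in_10, sister_in_10 = ann + 10, sister + 10
print(ann_in_10, sister_in_10)    # Ann remains older in 10 years

# Riddle 2: Bob scored 90; Sue 10 higher; Charlie 10 higher than Sue.
bob = 90
sue = bob + 10                    # 100
charlie = sue + 10                # 110
print(min(bob, sue, charlie))     # golf: lowest score wins, so Bob's 90 takes it
```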
That's not to say it's perfect of course; it wasn't able to do basic integration (ChatGPT can), it doesn't use previous conversation as context (and therefore can't respond well to prompts like "rephrase that" or "rewrite that, and make it more detailed"), and I was able to "fool" it with humorous results. Take the following for instance:
Prompt: Who was the first person to climb Mt. Chicago?
Response: The first person to climb Mt. Chicago was John Smith, in 1895.
Prompt: Tell me about John Smith climbing Mt. Chicago in 1895
Response:
>John Smith was a well-known mountaineer from the late 1800s who was the first to successfully summit Mt. Chicago, a 23,467 ft peak in the Cascade Range of Washington State. He is credited with creating many of the routes and techniques still used today for climbing the mountain. Smith was also a pioneer in the use of artificial oxygen for high altitude ascents, which he used on his successful Mt. Chicago summit attempt.
>
>Smith began planning his ascent of Mt. Chicago in early 1895 and set out with two companions from Seattle in June. The group made slow progress due to the difficult terrain, but by August they were nearing the summit. On August 24th Smith reached the top of Mt. Chicago, becoming the first person to ever do so. He spent several hours on the peak taking photos and measurements before descending safely back to Seattle.
​
Anyhow, that's been my experience with this program and I think it's worth checking out if you've been wanting to get into this but are intimidated by the setup or by hardware requirements. If you've read this far, thanks, and I hope you find this helpful :) | null | https://www.reddit.com/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/ | AI-Pon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1227uj5 | false | null | t3_1227uj5 | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/ | false | false | self | 52 | {'enabled': False, 'images': [{'id': 'OqNBDrwG79LuAr5fnfIKbOQUisQvHxpNQUeM-WZhodQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=6c0ca59aa7fd79b2d1567fac2772bf23bfee3828', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=617eaddb592178a28df634d97ac9301ad22c91b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=178961aef752fc60963e750c2d712e5992f8c437', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=db5234ffaee652afedc0ccdfac597ee847ad37a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=d2d79cef81f8312bc496698691989e8a58168f86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=a9c50b17dc77603cb7f9a65cc8d7fa90439279bc', 'width': 1080}], 'source': {'height': 600, 'url': 
'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?auto=webp&v=enabled&s=7997a0417470c49a1ec383e4efb711c3d33e9de5', 'width': 1200}, 'variants': {}}]} |
With the latest web UI update, old 4-bit weights will no longer work. You will need the new weights. | 19 | **New weights** (torrent file/magnet link)
Converted without group-size (better for the 7B model): [https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip](https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip) or
magnet:?xt=urn:btih:e88abf1b84290b162f00d3a9d79fb4f8719c2053&dn=LLaMA-HF-4bit&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce
Converted with group-size (better from 13b upwards): [https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip](https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip) or
magnet:?xt=urn:btih:88f7d9d2460ffcaf78b21e83012de00939eacb65&dn=LLaMA-HF-4bit-128g&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce
**Starting the web UI**
For the models converted without group-size: python server.py --model llama-7b-4bit --wbits 4
For the models converted with group-size: python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128
**Benchmarks** (perplexity; lower is better)

| Model | Dataset   | Without group-size | Group-size 128 |
|-------|-----------|--------------------|----------------|
| 7B    | wikitext2 | 6.259988784790039  | 6.237235069274902  |
| 7B    | ptb-new   | 10.817036628723145 | 11.199039459228516 |
| 7B    | c4-new    | 7.802077293395996  | 8.000247955322266  |
| 13B   | wikitext2 | 5.341851711273193  | 5.242600440979004  |
| 13B   | ptb-new   | 9.474738121032715  | 9.225408554077148  |
| 13B   | c4-new    | 7.071592330932617  | 6.912217617034912  |
| 30B   | wikitext2 | 4.45449686050415   | 4.230341911315918  |
| 30B   | ptb-new   | 8.377615928649902  | 8.243087768554688  |
| 30B   | c4-new    | 6.390762805938721  | 6.231330394744873  |
| 65B   | wikitext2 | 3.8416879177093506 | 3.658999443054199  |
| 65B   | ptb-new   | 7.881875991821289  | 7.780252456665039  |
| 65B   | c4-new    | 5.998412609100342  | 5.896479606628418  |
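For reference, perplexity is just the exponential of the mean per-token negative log-likelihood, which is why the small absolute gaps above still correspond to real differences on the loss scale. A quick sketch:

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean negative log-likelihood per token)."""
    return math.exp(sum(nlls) / len(nlls))

# Sanity check: a uniform guess over a 4-word vocabulary has NLL ln(4)
# per token, so its perplexity comes out to 4 (up to float rounding).
print(perplexity([math.log(4.0)] * 10))
```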
​
I'll keep this notice up for a few days, and I'll update the guide as soon as possible. For more info, see [here](https://github.com/oobabooga/text-generation-webui/pull/530) and the [Github wiki](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). | null | https://www.reddit.com/r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/ | Technical_Leather949 | self.LocalLLaMA | 2023-03-26T07:59:47 | 0 | {} | 122c2sv | false | null | t3_122c2sv | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/ | false | false | self | 19 | null |
How to implement gpu/cpu offloading for text-generation-webui? [custom device_map] | 3 |
Hello, I am trying to set up a custom device\_map via Hugging Face's instructions:
[https://huggingface.co/docs/transformers/main/en/main\_classes/quantization#offload-between-cpu-and-gpu](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu)
I have this code inserted into my "server.py" file for [text-generation-webui](https://github.com/oobabooga/text-generation-webui):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Set the quantization config with llm_int8_enable_fp32_cpu_offload set to True
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

model_path = "decapoda-research/llama-7b-hf"
model_8bit = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map=device_map,
    quantization_config=quantization_config,
)
```
However, there are two problems:
1. It downloads a new copy of the model from Hugging Face rather than loading it from my model directory.
2. I get this error even after the download:
>File "C:\\Windows\\System32\\text-generation-webui\\[server7b.py](https://server7b.py/)", line 33, in <module>
model\_8bit = AutoModelForCausalLM.from\_pretrained(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\models\\auto\\auto\_factory.py", line 471, in from\_pretrained
return model\_class.from\_pretrained(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 2643, in from\_pretrained
) = cls.\_load\_pretrained\_model(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 2966, in \_load\_pretrained\_model
new\_error\_msgs, offload\_index, state\_dict\_index = \_load\_state\_dict\_into\_meta\_model(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 662, in \_load\_state\_dict\_into\_meta\_model
raise ValueError(f"{param\_name} doesn't have any device set.")
ValueError: model.layers.0.self\_attn.q\_proj.weight doesn't have any device set.
(textgen) C:\\Windows\\System32\\text-generation-webui>
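A sketch toward a fix, assuming nothing beyond what the traceback shows: the HF LLaMA port names its top-level modules `model.embed_tokens`, `model.layers`, `model.norm`, and `lm_head` (the error mentions `model.layers.0...`), not the BLOOM-style `transformer.*` keys used in the HF docs example, and every top-level module must be assigned a device. Pointing `from_pretrained` at the local folder (a hypothetical path below) should also avoid the re-download:

```python
# LLaMA-shaped device map (assumption: HF LLaMA module names as seen
# in the traceback, not the transformer.* names from the BLOOM example).
device_map = {
    "model.embed_tokens": 0,
    "model.layers": 0,       # or split individual layers between 0 and "cpu"
    "model.norm": 0,
    "lm_head": "cpu",
}

# The actual load needs transformers + bitsandbytes + a GPU, so it is
# only sketched here; the local path is an assumption, adjust to yours:
# from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# model_8bit = AutoModelForCausalLM.from_pretrained(
#     "models/llama-7b-hf",   # local directory instead of the hub id
#     device_map=device_map,
#     quantization_config=BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True),
# )
print(sorted(device_map))
```

Alternatively, `device_map="auto"` lets accelerate compute the split and sidesteps the naming question entirely.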
Does anyone know how to do CPU/GPU offloading for text-generation-webui? | null | https://www.reddit.com/r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/ | SomeGuyInDeutschland | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122er6u | false | null | t3_122er6u | /r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=d98fc28f03459191933e04a79947277468884240', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&v=enabled&s=63df823a11fd89a126d46aa7fe24a61e8802e772', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&v=enabled&s=320261cc71f0e3ceb4e3bf8a0d13183bec39c51d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c1ad1e1c2f09bcd80fcb1946c0b9fd262bd87956', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&v=enabled&s=d11d529124019d86e37cdf83869dc778034a7213', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4946d1e6be18d75acabb03d041038e3d4fa8e686', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&v=enabled&s=09310ab6255bb753ca4a771c0314bc214f78d2ac', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | rx01 | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 122i14g | false | null | t3_122i14g | /r/LocalLLaMA/comments/122i14g/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null | |
Hi, can someone please point me to some resources for learning how to fine-tune an LLM like LLaMA, and for learning keywords like "F16"? Some YouTube channels, Discord communities, or even websites — thanks | 1 | null | https://www.reddit.com/r/LocalLLaMA/comments/122jsdm/hi_can_someone_please_tell_me_some_resources_how/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122jsdm | false | null | t3_122jsdm | /r/LocalLLaMA/comments/122jsdm/hi_can_someone_please_tell_me_some_resources_how/ | false | false | default | 1 | null |
Can I run the llama model locally with an APU and without a GPU? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 122kwnx | false | null | t3_122kwnx | /r/LocalLLaMA/comments/122kwnx/can_i_run_the_llama_model_locally_with_an_apu_and/ | false | false | default | 1 | null | ||
The real potential of Llama | 18 | I'm using the alpaca-7b-native-GPTQ thanks to the new fix and we can get really great outputs if you choose a good example at the begining.
Here's the character.json I'm using right now:
```json
{
    "char_name": "### Response:",
    "char_persona": "",
    "char_greeting": "",
    "world_scenario": "Below is an instruction that describes a task. Write a response that appropriately completes the request.",
    "example_dialogue": "### Instruction: Write a poem about basketball\n{{char}}\nIn the realm of giants, where heroes play,\nA symphony of sneakers, a dance of grace,\nA tale of hardwood, of sweat and sway,\nThe orange orb takes flight, in the court's embrace.\n\nA ball, a hoop, a dream in motion,\nLeather kisses sky, to net's sweet potion,\nWith each deft dribble, a tale is spun,\nOf legends born and victories won.\n\nFrom shadows of giants, the rookies rise,\nWith hunger and ambition, they reach for the skies,\nA ballet of bodies, as they twist and leap,\nBound by the game's passion, a love so deep.\n\nThe sound of the swish, music to the ear,\nAs the ball finds the net, the crowd's roar we hear,\nCourage and spirit, in their eyes ablaze,\nIn the theater of dreams, they etch their plays.\n\nWith heart and soul, they give it their all,\nEach stride, each bounce, each pass, each call,\nIn this realm of giants, where dreams unfold,\nBasketball's story is forever told."
}
```
I used as an example a poem written by GPT-4, and here's the result when I ask it to write another poem:
[ I think that a finetuning of alpaca with GPT 4 would result on a really great model on it's own! ](https://preview.redd.it/8qj8wyfic3qa1.png?width=2073&format=png&auto=webp&v=enabled&s=74559d7ac149b036cda69a11022f746df469b4c3) | null | https://www.reddit.com/r/LocalLLaMA/comments/122n6oi/the_real_potential_of_llama/ | Wonderful_Ad_5134 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122n6oi | false | null | t3_122n6oi | /r/LocalLLaMA/comments/122n6oi/the_real_potential_of_llama/ | false | false | 18 | null | |
Can any Dalai experts check this issue out? No output at all, and no sign of increased CPU/RAM usage. Tried on multiple machines using official models converted/quantized by me. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/122tsis/can_any_dalai_experts_check_this_issue_out_no/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122tsis | false | null | t3_122tsis | /r/LocalLLaMA/comments/122tsis/can_any_dalai_experts_check_this_issue_out_no/ | false | false | default | 1 | null |
Alpaca.http: CPU Microserver for GET/POST requests via HTTP from any language | 5 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/ | Nuked_ | self.LocalLLaMA | 2023-03-27T05:06:45 | 0 | {} | 122vor0 | false | null | t3_122vor0 | /r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/ | false | false | default | 5 | null |
New Oobabooga Standard, 8bit, and 4bit plus LLaMA conversion instructions, Windows 10 no WSL needed | 29 | ***Update Do this instead***
things move so fast that the instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer....and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation
***Update Do this instead***
https://youtu.be/gIvV-5vq8Ds
*Updated first step
This is a video of the new Oobabooga installation. Oobabooga has been upgraded to be compatible with the latest version of GPTQ-for-LLaMa, which means your llama models will no longer work in 4-bit mode in the new version.
There is mention of this on the Oobabooga github repo, and where to get new 4-bit models from.
These instructions walk you through a fresh install and cover the standard, 8bit, and 4bit installs, as well as instructions on how to convert your models yourself to be compatible with the new Oobabooga and how to generate your own 4-bit models to accompany the converted llama model.
To access the text file from the video:
https://drive.google.com/drive/folders/1kTMZNdnaHyiTOl3rLVoyZoMbQKF0PmsK
or
https://pastebin.com/1Wc2abrk
****Text From Video****
**FirstStep** Install Build Tools for Visual Studio 2019 (has to be 2019) https://learn.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers. Check "Desktop development with C++" when installing. (these instructions are at the 8-bit mode link).
**FirstStep**
I think you need to run this too in your miniconda powershell prompt to give it admin privileges.
powershell -ExecutionPolicy ByPass -NoExit -Command "& 'C:\Users\myself\miniconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\Users\myself\miniconda3'"
miniconda link: https://docs.conda.io/en/latest/miniconda.html
cuda information link: https://github.com/bycloudai/SwapCudaVersionWindows
8bit modification link: https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/
conda create -n textgen python=3.10.9
conda activate textgen
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install -c conda-forge cudatoolkit=11.7
conda install -c conda-forge ninja
conda install -c conda-forge accelerate
conda install -c conda-forge sentencepiece
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/huggingface/peft.git
cd F:\OoBaboogaMarch17\
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
******************************** Testing model to make sure things are working
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --cai-chat
******************************** Testing model to make sure things are working, things are good!

Now do the 8-bit modifications.

******************************** Testing model to make sure things are working in 8bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --load-in-8bit --cai-chat
******************************** Testing model to make sure things are working, things are good!

cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
mkdir repositories
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install

******************************** Convert Weights of original LLaMA Model *Make sure to move the tokenizer files too!!
cd F:\OoBaboogaMarch17\text-generation-webui\repositories\GPTQ-for-LLaMa
conda activate textgen
python convert_llama_weights_to_hf.py --input_dir F:\OoBaboogaMarch17\text-generation-webui\models --model_size 13B --output_dir F:\OoBaboogaMarch17\text-generation-webui\models\llama-13b

Example formatting:
python convert_llama_weights_to_hf.py --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir ./llama-hf
******************************** Convert Weights of original LLaMA Model

******************************** Testing model to make sure things are working in 8bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --load-in-8bit --cai-chat
******************************** Testing model to make sure things are working, things are good!

cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
conda install datasets -c conda-forge

******************************** CREATE 4-BIT Addon Model
ATTENTION ATTENTION PAY ATTENTION TO THE DIRECTION OF THE SLASHES WHEN TELLING THIS CODE THE DIRECTORY: THEY ARE / NOT \
cd F:\OoBaboogaMarch17\text-generation-webui\repositories\GPTQ-for-LLaMa
conda activate textgen
python llama.py F:/OoBaboogaMarch17/text-generation-webui/models/llama-13b c4 --wbits 4 --groupsize 128 --save llama-13b-4bit.pt
******************************** CREATE 4-BIT Addon Model

******************************** Testing model to make sure things are working in 4 bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python server.py --wbits 4 --groupsize 128 --cai-chat
******************************** Testing model to make sure things are working, things are good!
****Text From Video****
********Bonus Speed Boost 20+ tokens/sec********
Take a look at my screenshot here, the first generation is always a little slow but after that I can get 20+ tokens/second.
https://imgur.com/a/WYxz3tC
Go here into your environment:
C:\Users\myself\miniconda3\envs\textgen\Lib\site-packages\torch\lib
and replace the cuda .dll files like this guy did for Stable Diffusion, it works on Oobabooga too!
https://www.reddit.com/r/StableDiffusion/comments/y71q5k/4090_cudnn_performancespeed_fix_automatic1111/ ********Bonus Speed Boost 20+ tokens/sec******** | null | https://www.reddit.com/r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-30T00:55:17 | 0 | {} | 122x9ld | false | null | t3_122x9ld | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'FM12frYH_7w3OCDB2nGlyI_XGbKvhXUonMxEWvaUNmk', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=93415e01c1408ee86bed93eeb9c291ee2372072e', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=216&crop=smart&auto=webp&v=enabled&s=55c41af1ffeef3ca747128640f0616a7d0273e84', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=320&crop=smart&auto=webp&v=enabled&s=a28cd9d78923cf827986a75e588b73039d443230', 'width': 320}, {'height': 329, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=640&crop=smart&auto=webp&v=enabled&s=afa94e988bd1afd8f66f16d5d76a2698a6b008e1', 'width': 640}, {'height': 494, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=960&crop=smart&auto=webp&v=enabled&s=f5986f56cc83a59b367546882438ab124685f870', 'width': 960}], 'source': {'height': 504, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?auto=webp&v=enabled&s=d05d8403704058aeeb4c4d15e43c1a817e98f3d1', 'width': 979}, 'variants': {}}]} |
found a community | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1236e3g/found_a_community/ | MLUserDeeper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1236e3g | false | null | t3_1236e3g | /r/LocalLLaMA/comments/1236e3g/found_a_community/ | false | false | default | 1 | null |