name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jdvf1kt | I'm stuck here for GPTQ:
[https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/4](https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/4)
Card arrives today. | 2 | 0 | 2023-03-27T14:15:37 | friedrichvonschiller | false | null | 0 | jdvf1kt | false | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdvf1kt/ | false | 2 |
t1_jdveiop | I see that as a bit counterintuitive. In the generative state we can add dynamics to an environment that is static. If you put someone in a empty room, soon this person will be trying to make something happen. So, I see generation is an outward activity, but consciousness seems to be so much inwards that we can even ma... | 2 | 0 | 2023-03-27T14:11:53 | Potential_Face_870 | false | null | 0 | jdveiop | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdveiop/ | false | 2 |
t1_jdved7o | It's not quite the same as your analogy. Isn't it more like saying the following (inlining the analogy and the real situation)?
1a) There are ancient pyramids in Egypt and Central America
1b) Both humans and LLMs produce coherent paragraphs of text
2a) We have archaeology that says Egyptian pyramids were built by sla... | 2 | 0 | 2023-03-27T14:10:46 | soundslogical | false | null | 0 | jdved7o | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdved7o/ | false | 2 |
t1_jdvdv8c | https://github.com/lxe/simple-llama-finetuner | 1 | 0 | 2023-03-27T14:07:12 | clayshoaf | false | null | 0 | jdvdv8c | false | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdvdv8c/ | false | 1 |
t1_jdvduu3 | That's the main reason I shared. I appreciate your open mind and challenge to my thoughts. I think we're already beyond our ability to enumerate, much less predict, and I'm excited to see researchers far smarter than me trying to do it. | 1 | 0 | 2023-03-27T14:07:07 | friedrichvonschiller | false | null | 0 | jdvduu3 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvduu3/ | false | 1 |
t1_jdvdt2r | I mean it has the higher number of params. I don't have enough RAM nor VRAM to run that. | 2 | 0 | 2023-03-27T14:06:45 | ckkkckckck | false | null | 0 | jdvdt2r | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvdt2r/ | false | 2 |
t1_jdvdc2s | Very probing question. My thought is that generation is only useful when we're actively interacting with the environment -- the only time when consciousness is engaged.
Training is a deeply, thoroughly introspective process. Perhaps external stimuli that do not present an immediate threat are disruptive to it.
I ca... | 1 | 0 | 2023-03-27T14:03:17 | friedrichvonschiller | false | 2023-03-27T14:10:28 | 0 | jdvdc2s | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvdc2s/ | false | 1 |
t1_jdvbq82 | If the connections you are making are good, then I have a question:
Given that sleep is connected to training, and wake to inference (generation). Then we are establishing a correlation between generation and consciousness.
Why should the only the generation task require consciousness? What is about generation tha... | 2 | 0 | 2023-03-27T13:51:24 | Potential_Face_870 | false | null | 0 | jdvbq82 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvbq82/ | false | 2 |
t1_jdvax9g | Have you gotten it to work on oobabooga? So far I've only gotten the Alpaca-30B 4-bit to work using alpaca.cpp. | 5 | 0 | 2023-03-27T13:45:13 | qrayons | false | null | 0 | jdvax9g | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvax9g/ | false | 5 |
t1_jdvajwa | I guess a Covington Event, a HEMP Attack or just a Normal Blackout will end AI pretty quick.... | 1 | 0 | 2023-03-27T13:42:22 | YoringeTBE | false | 2023-03-27T14:12:02 | 0 | jdvajwa | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvajwa/ | false | 1 |
t1_jdvadba | We won... First: What did we won? Second: Sure if we extend Natures Patience, we get exterminated quick. I look at you, Goon Funghi!!!! | 1 | 0 | 2023-03-27T13:40:59 | YoringeTBE | false | null | 0 | jdvadba | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvadba/ | false | 1 |
t1_jdv9xa4 | You can use poor Grammar on Humans and they still can get what you are saying. Does a Machine too? 🤔 And how do you Think! without using Language? | 0 | 0 | 2023-03-27T13:37:36 | YoringeTBE | false | 2023-03-27T13:45:33 | 0 | jdv9xa4 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdv9xa4/ | false | 0 |
t1_jdv945p | It's certainly an fascinating time for AI, and who knows what the next big leap in capability will be for these models.
While I might not share your conviction, it's interesting to think about. | 3 | 0 | 2023-03-27T13:31:11 | NegHead_ | false | null | 0 | jdv945p | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdv945p/ | false | 3 |
t1_jdv7jwf | Here's some results I got using ggml-alpaca-13b-q4.bin
Using Alpaca Electron and whatever arguments it has set:
"The school bus passed the racecar because it was driving so quickly. In the previous sentence, what was driving so quickly?"
**It can be inferred from this instruction that "the fastest mode of transporta... | 2 | 0 | 2023-03-27T13:18:39 | ambient_temp_xeno | false | 2023-03-27T13:30:43 | 0 | jdv7jwf | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdv7jwf/ | false | 2 |
t1_jdv50np | So is Alpaca-30B using 4-bit weights! | 3 | 0 | 2023-03-27T12:57:34 | spectrachrome | false | null | 0 | jdv50np | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdv50np/ | false | 3 |
t1_jdv17nc | That's a fascinating take. I'd never thought of that. Does clear language convey clear thinking in a general sense, and training on mistakes convey mistaken thinking? Very interesting. It would still be learning, but not things that are biologically necessary. Only reasoning. Which, [yes](https://www.jasonwei.net... | 3 | 0 | 2023-03-27T12:23:46 | friedrichvonschiller | false | null | 0 | jdv17nc | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdv17nc/ | false | 3 |
t1_jdv06ta | [removed] | 1 | 0 | 2023-03-27T12:14:14 | [deleted] | true | null | 0 | jdv06ta | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdv06ta/ | false | 1 |
t1_jdv063x | Alpaca native 4bit is fucking mental. It's honestly really good. | 18 | 0 | 2023-03-27T12:14:03 | ckkkckckck | false | null | 0 | jdv063x | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdv063x/ | false | 18 |
t1_jdv02t0 | >another String Theory
I aspire to that level of proof. | 2 | 0 | 2023-03-27T12:13:10 | friedrichvonschiller | false | null | 0 | jdv02t0 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdv02t0/ | false | 2 |
t1_jduzzxi | the way i see it.. language encodes intelligence
intelligence produces language <--> languages produces intelligence | 6 | 0 | 2023-03-27T12:12:22 | megadonkeyx | false | null | 0 | jduzzxi | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jduzzxi/ | false | 6 |
t1_jduyz6f | * I never called this science. I called this personal conviction. Reddit is not a peer-reviewed journal.
* There is no causality posited here. It's just correlation in results and correlation in methods that I take as very strong evidence.
* Yes, it is interesting that humans like to build monuments no matter where ... | 2 | 0 | 2023-03-27T12:02:23 | friedrichvonschiller | false | 2023-03-27T12:12:31 | 0 | jduyz6f | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jduyz6f/ | false | 2 |
t1_jdux9yu | Cool!! Someone answered the exac thing on github. It might be you lol In case it was you, i'm VivaPeron there. | 1 | 0 | 2023-03-27T11:45:18 | Christ0ph_ | false | null | 0 | jdux9yu | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdux9yu/ | false | 1 |
t1_jdux8sc | Our initiative and our ability to create these in the first place undoubtedly set us a cut above, you're right. But now I find myself wondering whether my initiative comes purely from the world around me.
A very obvious purpose for a neural network in any organism would be to anticipate what is coming next in its env... | 1 | 0 | 2023-03-27T11:44:57 | friedrichvonschiller | false | 2023-03-27T13:09:09 | 0 | jdux8sc | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdux8sc/ | false | 1 |
t1_jdux23c | Just another String Theory...FD: i run on Hersay 🤷🤔 | -1 | 0 | 2023-03-27T11:42:59 | YoringeTBE | false | null | 0 | jdux23c | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdux23c/ | false | -1 |
t1_jdux1bt | That's not how science works. You can't just look to "converging results" and be affirmative about how they converged.
This is like saing "oh, there are ancient pyramids in Egypt and Central America. I'm 100% convinced that aliens did both." Correlation between results does not mean correlation in methods.
What you h... | 6 | 0 | 2023-03-27T11:42:46 | berchielli | false | 2023-03-27T12:05:31 | 0 | jdux1bt | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdux1bt/ | false | 6 |
t1_jduwp0m | I really hope this works because I'm sick of Ngreedia. Let's goo!! 🚀 | 3 | 0 | 2023-03-27T11:39:15 | 2muchnet42day | false | null | 0 | jduwp0m | false | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jduwp0m/ | false | 3 |
t1_jduwnwf | You mistake predictive coding in general with "predicting the next word".
Predictive coding is much more than that... though, admittedly, our own language might indeed work on a similar principle.
Llms are great examples of p-zombies, though I daresay Dennet will say that that without conscious agents to begin with (an... | 2 | 0 | 2023-03-27T11:38:54 | BalorNG | false | null | 0 | jduwnwf | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jduwnwf/ | false | 2 |
t1_jdutzrd | well im running it with [oobabooga](https://github.com/oobabooga)/[**text-generation-webui**](https://github.com/oobabooga/text-generation-webui) and 8 bit now works after i did this fix [Add 8bit threshold for my Pascal card. I use 1.5 or 1.0, otherwise NaN · Ph0rk0z/text-generation-webui-testing@ecad08f (github.com... | 1 | 0 | 2023-03-27T11:09:27 | MageLD | false | null | 0 | jdutzrd | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdutzrd/ | false | 1 |
t1_jdus9zh | It's a great question. The evidence to me comes more from [the converging results](https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/) rather than any underlying biological evidence. I find the capacity of machine learning to acquire diverse skills based on diverse inpu... | 4 | 1 | 2023-03-27T10:49:04 | friedrichvonschiller | false | 2023-03-27T11:33:59 | 0 | jdus9zh | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdus9zh/ | false | 4 |
t1_jduqxw4 | Why are you 100% convinced? There is no biological evidence for predictive coding yet. There is no evidence that the neural network structures/patterns used in ML appear in biology, nor is there evidence that the brain actually does back-propogation during sleep.
Seems like a big 'maybe' to me, with no real evidence. | 24 | 0 | 2023-03-27T10:32:09 | NegHead_ | false | null | 0 | jduqxw4 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jduqxw4/ | false | 24 |
t1_jdunbuz | its not like we really care that much about the inner kitchen. Its a mere alias of long numbers to short numbers that lets numbers fit into smaller vram. Here we are talking about a very specific algorithm of this kind of compression for GPTs called GPTQ(GPT quantization, lol) which is described here [https://arxiv.org... | 1 | 0 | 2023-03-27T09:42:47 | LienniTa | false | null | 0 | jdunbuz | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdunbuz/ | false | 1 |
t1_jdumpia | Did I understand correctly that this model would no longer run if I was to update:
[https://huggingface.co/elinas/alpaca-30b-lora-int4](https://huggingface.co/elinas/alpaca-30b-lora-int4)
If that is the case then I think I'll wait until a similar model with new format is available. | 3 | 0 | 2023-03-27T09:33:55 | rerri | false | null | 0 | jdumpia | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdumpia/ | false | 3 |
t1_jdulsl0 | Managed to get it working by rolling back to commit 841feed. There seems to be an issue with HIP where it doesn't handle fp16 types correctly, but I'm in over my head when it comes to GPU programming APIs so that's all I could infer. | 1 | 0 | 2023-03-27T09:20:38 | xZANiTHoNx | false | 2023-03-27T09:23:41 | 0 | jdulsl0 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdulsl0/ | false | 1 |
t1_jduls5r | Managed to get it working by rolling back to commit 841feed. There seems to be an issue with HIP where it doesn't handle fp16 types correctly, but I'm in over my head when it comes to GPU programming APIs so that's all I could infer. | 1 | 0 | 2023-03-27T09:20:27 | xZANiTHoNx | false | 2023-03-27T09:23:33 | 0 | jduls5r | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jduls5r/ | false | 1 |
t1_jdulrpa | Managed to get it working by rolling back to commit 841feed. There seems to be an issue with HIP where it doesn't handle fp16 types correctly, but I'm in over my head when it comes to GPU programming APIs so that's all I could infer. | 2 | 0 | 2023-03-27T09:20:16 | xZANiTHoNx | false | 2023-03-27T09:23:37 | 0 | jdulrpa | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdulrpa/ | false | 2 |
t1_jdujgab | I think you want alpaca.http you can download it here [https://github.com/Nuked88/alpaca.http](https://github.com/Nuked88/alpaca.http)
I hope they don't delete my post again, i don't even know why my project is so bad for reddit mod :( | 4 | 0 | 2023-03-27T08:45:48 | Nuked_ | false | null | 0 | jdujgab | false | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jdujgab/ | false | 4 |
t1_jdugpms | Take a look at langchain [https://langchain.readthedocs.io/en/latest/](https://langchain.readthedocs.io/en/latest/) | 2 | 0 | 2023-03-27T08:04:32 | reneil1337 | false | null | 0 | jdugpms | false | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jdugpms/ | false | 2 |
t1_jdug7oi | Love it :D thank you! | 2 | 0 | 2023-03-27T07:57:00 | psycholustmord | false | null | 0 | jdug7oi | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdug7oi/ | false | 2 |
t1_jduchg4 | Very cool! I wrote a guide on how to run Alpaca 13B 4bit on a 12GB VRAM GPU via KoboldAI + TavernAI and added this json example to the end of the guide where I explain how to import custom characters. Of course I linked this reddit thread - hope you like it.
[https://www.reddit.com/r/KoboldAI/comments/122zjd0/guid... | 2 | 0 | 2023-03-27T07:01:58 | reneil1337 | false | null | 0 | jduchg4 | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jduchg4/ | false | 2 |
t1_jduaeas | [removed] | 1 | 0 | 2023-03-27T06:33:21 | [deleted] | true | null | 0 | jduaeas | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jduaeas/ | false | 1 |
t1_jduads6 | Created a guide for Alpaca/Llama + Kobold 4bit
[https://www.reddit.com/r/KoboldAI/comments/122zjd0/guide\_alpaca\_13b\_4bit\_via\_koboldai\_in\_tavernai/](https://www.reddit.com/r/KoboldAI/comments/122zjd0/guide_alpaca_13b_4bit_via_koboldai_in_tavernai/) | 1 | 0 | 2023-03-27T06:33:09 | reneil1337 | false | null | 0 | jduads6 | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jduads6/ | false | 1 |
t1_jduacdk | [removed] | 1 | 0 | 2023-03-27T06:32:38 | [deleted] | true | null | 0 | jduacdk | false | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/jduacdk/ | false | 1 |
t1_jduacaj | Followed your steps exactly and got it working! Thank you! I actually hadn't converted the 7B model yet, was just trying with the 30B, and now got the 30B working perfectly. | 2 | 0 | 2023-03-27T06:32:36 | jetpackswasno | false | 2023-03-28T04:28:24 | 0 | jduacaj | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jduacaj/ | false | 2 |
t1_jduabu1 | You can try to run the 4bit. I wrote a guide how to run it with KoboldAI
[https://www.reddit.com/r/KoboldAI/comments/122zjd0/guide\_alpaca\_13b\_4bit\_via\_koboldai\_in\_tavernai/](https://www.reddit.com/r/KoboldAI/comments/122zjd0/guide_alpaca_13b_4bit_via_koboldai_in_tavernai/) | 5 | 0 | 2023-03-27T06:32:26 | reneil1337 | false | null | 0 | jduabu1 | false | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/jduabu1/ | false | 5 |
t1_jdu88ik | This is just doing caching for the speed increase, right? So as soon as you modify your context, you'll be back to slow speeds? | 2 | 0 | 2023-03-27T06:04:42 | AmazinglyObliviouse | false | null | 0 | jdu88ik | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdu88ik/ | false | 2 |
t1_jdu4qex | To convert the model I:
- save the [script](https://gist.githubusercontent.com/eiz/828bddec6162a023114ce19146cb2b82/raw/6b1d2b192815e6d61386a9a8853f2c3293b3f568/gistfile1.txt) as "convert.py"
- created a batch file "convert.bat" in the same folder that contains:
python convert.py %~dp0 tokenizer.model
... | 2 | 0 | 2023-03-27T05:21:46 | MoneyPowerNexis | false | 2023-03-27T08:01:47 | 0 | jdu4qex | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdu4qex/ | false | 2 |
t1_jdu3nx8 | Thank you! It has worked. | 1 | 0 | 2023-03-27T05:09:18 | TomFlatterhand | false | null | 0 | jdu3nx8 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdu3nx8/ | false | 1 |
t1_jdtz9fs | One other thing. There are pinned threads and announcements as well. If a help need is getting many posts, moderators should notice that and pin a helpful guide for it. If the guide is truly helpful, there will be less posts asking for help to the issue. If the guide is not helpful, don't punish users for continuing to... | 2 | 0 | 2023-03-27T04:21:06 | Virtamancer | false | null | 0 | jdtz9fs | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdtz9fs/ | false | 2 |
t1_jdtz0oc | I take the view that forums should be as minimally moderated as possible. No illegal content, and address any spam issues (e.g. the first page being full of product advertisement posts or scams made by bots).
Otherwise, reddit has self-filtering and auto-surfacing built in. That's all the moderation we need. Beyond th... | 3 | 0 | 2023-03-27T04:18:35 | Virtamancer | false | null | 0 | jdtz0oc | false | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/jdtz0oc/ | false | 3 |
t1_jdtwi33 | been trying with [this model](https://huggingface.co/Pi3141/alpaca-lora-30B-ggml) from Pi3141 (using what i believe to be the respective [tokenizer.model](https://huggingface.co/decapoda-research/llama-30b-hf/blob/main/tokenizer.model)) and it executes with no errors or any other console messages, however no .tmp produ... | 1 | 0 | 2023-03-27T03:53:00 | jetpackswasno | false | null | 0 | jdtwi33 | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdtwi33/ | false | 1 |
t1_jdtveia | is there anywhere I can read a description of how exactly the quantization actually works? I'd like to understand what it's doing. | 1 | 0 | 2023-03-27T03:42:12 | Tystros | false | null | 0 | jdtveia | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtveia/ | false | 1 |
t1_jdtvanc | Do you have a 4090? | 2 | 0 | 2023-03-27T03:41:09 | pepe256 | false | null | 0 | jdtvanc | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtvanc/ | false | 2 |
t1_jdtmia5 | I'll try. My video card is running late. I live somewhere beyond where the sidewalk ends.
Thanks for your patience. | 2 | 0 | 2023-03-27T02:23:43 | friedrichvonschiller | false | null | 0 | jdtmia5 | false | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdtmia5/ | false | 2 |
t1_jdthsur | Hi! Yeah.. boost contains alot of file! Luckily it is only needed for the building process otherwise it would have been a problem! | 2 | 0 | 2023-03-27T01:43:27 | Nuked_ | false | null | 0 | jdthsur | false | /r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/jdthsur/ | false | 2 |
t1_jdtfp5e | I have a self-contained linux executable with the model inside of it. You just need to have an input.txt file next to it. I'm thinking on releasing it on github soon. | 2 | 0 | 2023-03-27T01:25:54 | coderguyofficial | false | null | 0 | jdtfp5e | false | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/jdtfp5e/ | false | 2 |
t1_jdtet7q | I think know all the potential issues that could cause that problem.
1. Not checking "Desktop development with C++" when installing Visual stuidio 2019
2. Not having the 11.7 files schootched up in the list for the environmental variables.
3. Not downloading both files to the bits and bytes folder. (also maybe n... | 1 | 0 | 2023-03-27T01:18:28 | Inevitable-Start-653 | false | null | 0 | jdtet7q | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtet7q/ | false | 1 |
t1_jdte8yq | NP, I hope we can figure this out, this was the hardest step for me to figure out.
Did you schootch the two files up in the other path thing you need to change? | 1 | 0 | 2023-03-27T01:13:48 | Inevitable-Start-653 | false | null | 0 | jdte8yq | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdte8yq/ | false | 1 |
t1_jdtdvj6 | https://imgur.com/a/pET89M7
Either I'm trashed or I trashed the env variables but to my knowledge it should see 11.7. And this is where I was stuck a few days ago before the bandaid fix came out so idk. I do appreciate your help and your contribution with your tutorial video. | 2 | 0 | 2023-03-27T01:10:44 | tomobobo | false | null | 0 | jdtdvj6 | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtdvj6/ | false | 2 |
t1_jdtdmzu | Also, yeah it does look like you have 11.7 installed, did you have Desktop development with C++ configured when you installed visual studio? | 1 | 0 | 2023-03-27T01:08:49 | Inevitable-Start-653 | false | null | 0 | jdtdmzu | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtdmzu/ | false | 1 |
t1_jdtddif | Ahh, you know what, it might be your environment settings. Go to this link, it shows you very clearly how to change them and make sure they are set up correctly.
You just want things to say 11.7 where he is changing his to 11.3
https://github.com/bycloudai/SwapCudaVersionWindows | 1 | 0 | 2023-03-27T01:06:39 | Inevitable-Start-653 | false | null | 0 | jdtddif | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtddif/ | false | 1 |
t1_jdtczsk | I mean yeah I am 99% sure it's what I have it installed here: https://imgur.com/a/3AdzAZI, I saw other people successfully setup_cuda.py but there's this bandaid fix here:https://github.com/ClayShoaf/oobabooga-one-click-bandaid that did somehow do the setup_cuda.py successfully on my machine. I'm working through your ... | 2 | 0 | 2023-03-27T01:03:28 | tomobobo | false | null | 0 | jdtczsk | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtczsk/ | false | 2 |
t1_jdtc72y | Do you have visual studio 2019 installed? I'm not sure what the bandaid.bat does.
But as far as I'm aware you can't execute the instructions you are trying to do unless you have visual studio 2019 installed like here:
Install Build Tools for Visual Studio 2019 (has to be 2019) https://learn.microsoft.com/en-us/visu... | 1 | 0 | 2023-03-27T00:56:53 | Inevitable-Start-653 | false | null | 0 | jdtc72y | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtc72y/ | false | 1 |
t1_jdtb3qo | I was having problems with the setup_cuda.py before, and the only thing that seemed to get me through the errors was the bandaid.bat. This is what I get when I try to setup_cuda.py
https://pastebin.com/Kt9BSsc2
I got my cuda_path set in system env. Idk why I get this issue but I guess I will just use the bandaid if I... | 2 | 0 | 2023-03-27T00:47:59 | tomobobo | false | null | 0 | jdtb3qo | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdtb3qo/ | false | 2 |
t1_jdt88ac | Hmm. So that means it could be possible to train 65B on a 2x3090 or 2x4090 system (possibly, not sure how tight it is when they say "possible on a 24 GB card" for 30B). Though, I could feasibly see it taking days or weeks with the reduced power and slowdown (and some purists would argue that it's not the same as a lora... | 2 | 0 | 2023-03-27T00:24:48 | AI-Pon3 | false | null | 0 | jdt88ac | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdt88ac/ | false | 2 |
t1_jdt2hnh | hmm so the torrent seems to be dead for me. Is there anywhere else to get these new weights? | 1 | 0 | 2023-03-26T23:39:40 | bubblyhobo15 | false | null | 0 | jdt2hnh | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdt2hnh/ | false | 1 |
t1_jdt1g4n | I see a lot of people saying WSL is faster in my post over at r/Oobabooga ... I can get 20+ tokens/second running on windows only.
https://imgur.com/a/WYxz3tC
Go here into your enviroment:
C:\Users\myself\miniconda3\envs\textgen\Lib\site-packages\torch\lib
and replace the cuda .dll files like this guy did for Stabl... | 2 | 0 | 2023-03-26T23:31:39 | Inevitable-Start-653 | false | null | 0 | jdt1g4n | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdt1g4n/ | false | 2 |
t1_jdsz23a | There's a [repo](https://github.com/johnsmith0031/alpaca_lora_4bit) for tuning Loras on the 4-bit models. Readme says it can train 30B on a single 24GB card with Gradient Checkpointing enabled (which does slow things down quite a lot). | 3 | 0 | 2023-03-26T23:13:27 | Nextil | false | null | 0 | jdsz23a | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdsz23a/ | false | 3 |
t1_jdsswao | Can you explain this to someone who is using Windows and barley understands how to get stuff from git? | 2 | 0 | 2023-03-26T22:26:37 | Slug_Laton_Rocking | false | null | 0 | jdsswao | false | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jdsswao/ | false | 2 |
t1_jdssip0 | >RuntimeError: probability tensor contains either \`inf\`, \`nan\` or element < 0
How are you running it? I can run it with [tloen](https://github.com/tloen)/[**alpaca-lora**](https://github.com/tloen/alpaca-lora) and [oobabooga](https://github.com/oobabooga)/[**text-generation-webui**](https://github.com/oobabo... | 1 | 0 | 2023-03-26T22:23:45 | wind_dude | false | null | 0 | jdssip0 | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdssip0/ | false | 1 |
t1_jdsr6b6 | Ah, super cool. I'll try it a bit later today.
Thanks for the update! | 1 | 0 | 2023-03-26T22:13:27 | remghoost7 | false | null | 0 | jdsr6b6 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsr6b6/ | false | 1 |
t1_jdsn16w | Ah, got a bit further after doing more commands on that link you mention.
Getting an 'out of memory' error on a PC with 128gig, so will dig into that. Again. Thanks I am making progress after 7+ hours of working on this. | 1 | 0 | 2023-03-26T21:42:49 | thebaldgeek | false | null | 0 | jdsn16w | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsn16w/ | false | 1 |
t1_jdsmtli | [https://github.com/oobabooga/text-generation-webui/commit/49c10c5570b595e9d4fdcb496c456a9982ede070](https://github.com/oobabooga/text-generation-webui/commit/49c10c5570b595e9d4fdcb496c456a9982ede070) | 2 | 0 | 2023-03-26T21:41:19 | Wonderful_Ad_5134 | false | null | 0 | jdsmtli | false | /r/LocalLLaMA/comments/122n6oi/the_real_potential_of_llama/jdsmtli/ | false | 2 |
t1_jdsmea5 | Thank you very much, I'll definitely be attempting to set this up. | 1 | 0 | 2023-03-26T21:38:19 | AI-Pon3 | false | null | 0 | jdsmea5 | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdsmea5/ | false | 1 |
t1_jdsm06q | I tested both and it runs the same speed with 16 threads vs 32 threads despite showing 100% utilization on both so I don't think it's properly utilizing all of them.
I told it to count to 100 words with a new word on each line and I timed it with a stopwatch (not sure if there's a better way):
30B: about 40 words a ... | 4 | 0 | 2023-03-26T21:35:29 | ThePseudoMcCoy | false | 2023-03-27T01:20:12 | 0 | jdsm06q | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdsm06q/ | false | 4 |
t1_jdsl2au | Thanks for answering. My project has about 500 ish people looking for results, I am trying my best for them...
After I ran your steps.... I had to snip the end, it was massive, but similar to what's shown here.
`(llama4bit) C:\Users\tbg\ai\text-generation-webui>python` [`server.py`](https://server.py) `--... | 1 | 0 | 2023-03-26T21:28:35 | thebaldgeek | false | null | 0 | jdsl2au | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsl2au/ | false | 1 |
t1_jdsip1c | what new fix? | 1 | 0 | 2023-03-26T21:11:25 | APUsilicon | false | null | 0 | jdsip1c | false | /r/LocalLLaMA/comments/122n6oi/the_real_potential_of_llama/jdsip1c/ | false | 1 |
t1_jdsiecj | its simple and works great, boost takes a super long time to decompress though | 1 | 0 | 2023-03-26T21:09:20 | APUsilicon | false | null | 0 | jdsiecj | false | /r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/jdsiecj/ | false | 1 |
t1_jdsidkb | [removed] | 1 | 0 | 2023-03-26T21:09:10 | [deleted] | true | null | 0 | jdsidkb | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jdsidkb/ | false | 1 |
t1_jdsicdg | I used a colab from a youtube video and it worked very well: [https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO#scrollTo=OdgRTo5YxyRL](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO#scrollTo=OdgRTo5YxyRL)
I'm just working on a larger dataset but I trained llama-7b usi... | 1 | 0 | 2023-03-26T21:08:56 | Sixhaunt | false | null | 0 | jdsicdg | false | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/jdsicdg/ | false | 1 |
t1_jdsi3re | [removed] | 1 | 0 | 2023-03-26T21:07:15 | [deleted] | true | null | 0 | jdsi3re | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsi3re/ | false | 1 |
t1_jdsi2oy | With the latest web UI update, the ozcur model should now work. I didn't check the page at the time, but the uploader of that model had quantized it with the recent GPTQ update before the web UI updated to meet that standard. From the model card page:
llama.py /output/path c4 --wbits 4 --groupsize 128 --save alpac... | 2 | 0 | 2023-03-26T21:07:02 | Technical_Leather949 | false | null | 0 | jdsi2oy | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsi2oy/ | false | 2 |
t1_jdshl6t | It works well when done with GPTQ. There's effectively no difference in quality, and you can see the [benchmarks here](https://github.com/qwopqwop200/GPTQ-for-LLaMa#result). | 1 | 0 | 2023-03-26T21:03:33 | Technical_Leather949 | false | null | 0 | jdshl6t | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdshl6t/ | false | 1 |
t1_jdshed3 | I have now scraped all the data for artifacts, creatures, factions, flora, gods, npcs, and places for the ElderScrolls Franchise. I should be able to do a lot better this time around | 1 | 0 | 2023-03-26T21:02:11 | Sixhaunt | false | null | 0 | jdshed3 | false | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jdshed3/ | false | 1 |
t1_jdsh6lp | >Found models\\llama-13b-4bit.pt
This is the wrong model. The correct one should be a safetensors file. If you're trying to run 4-bit LLaMA, you want to use the with group-size version.
Make sure both your text-generation-webui and GPTQ repo are up to date. Navigate to the text-generation-webui folder and do this:... | 2 | 0 | 2023-03-26T21:00:38 | Technical_Leather949 | false | null | 0 | jdsh6lp | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsh6lp/ | false | 2 |
t1_jdsh3ze | >I still cannot understand why running the model and fine-tune it have different requirements
that's the case for stuff like StableDiffusion too. Most people need to use googleColab to train it even though they can easily run StableDiffusion locally at over 10X the resolution of the training images | 1 | 0 | 2023-03-26T21:00:07 | Sixhaunt | false | null | 0 | jdsh3ze | false | /r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/jdsh3ze/ | false | 1 |
t1_jdsgaak | absolute life savers! I recommend making an edit to make this clearer in the instructions :) I'm sure a bunch of people would like to push the limit of what their hardware can load | 1 | 0 | 2023-03-26T20:54:20 | SomeGuyInDeutschland | false | null | 0 | jdsgaak | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdsgaak/ | false | 1 |
t1_jdsek6p | You will probably OOM with 10000MiB. Try experimenting with different values until you find the highest that doesn't OOM. | 1 | 0 | 2023-03-26T20:42:13 | oobabooga1 | false | null | 0 | jdsek6p | false | /r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/jdsek6p/ | false | 1 |
t1_jdsdzl9 | very nice! new 4b quantization is with group size and its even better than old 8 bit now lol | 3 | 0 | 2023-03-26T20:38:17 | LienniTa | false | null | 0 | jdsdzl9 | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdsdzl9/ | false | 3 |
t1_jdsbz4n | Let me give this a go | 1 | 0 | 2023-03-26T20:24:31 | APUsilicon | false | null | 0 | jdsbz4n | false | /r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/jdsbz4n/ | false | 1 |
t1_jdsaibl | How fast is it running on your system? Op mentioned getting around 70 words per minute on his 12700K. | 1 | 0 | 2023-03-26T20:14:14 | orick | false | null | 0 | jdsaibl | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdsaibl/ | false | 1 |
t1_jds4a78 | I have a 10gb rtx 3080
Should I use --gpu-memory-10000Mib? | 1 | 0 | 2023-03-26T19:30:39 | SomeGuyInDeutschland | false | null | 0 | jds4a78 | false | /r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/jds4a78/ | false | 1 |
t1_jds2bhd | [deleted] | 1 | 0 | 2023-03-26T19:16:46 | [deleted] | true | null | 0 | jds2bhd | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jds2bhd/ | false | 1 |
t1_jds1zbp | Quick question, I'm having trouble figuring out how to utilize this dataset to train/retrain a model. It'd be a great help if you would point me in the right direction. | 1 | 0 | 2023-03-26T19:14:17 | rancorger | false | null | 0 | jds1zbp | false | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/jds1zbp/ | false | 1 |
t1_jds1nxy | It's already implemented. Just use `--auto-devices --gpu-memory 4000MiB` or `--auto-devices --gpu-memory 4000MiB 5000MiB` for multiple GPUs . There is no need to create a layer by layer device map manually. | 1 | 0 | 2023-03-26T19:12:02 | oobabooga1 | false | null | 0 | jds1nxy | false | /r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/jds1nxy/ | false | 1 |
t1_jdrwa9g | WOOOT 8 Bit working now.... texting 4 bit now =D THANK YOU | 1 | 0 | 2023-03-26T18:34:00 | MageLD | false | null | 0 | jdrwa9g | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdrwa9g/ | false | 1 |
t1_jdrrk44 | I did use that colab but then I just also pulled from a git repo with the ElderScrolls data then I reformatted it with code. Converting the dataset to the proper format and stuff only takes a minute or two to run so it didn't cost much extra to process it in the doc itself. For long documents it will be tough though be... | 2 | 0 | 2023-03-26T18:00:16 | Sixhaunt | false | null | 0 | jdrrk44 | false | /r/LocalLLaMA/comments/121y16f/starting_model/jdrrk44/ | false | 2 |
t1_jdrqnf6 | Yeah, the q is for Quantization if I recall. So that’s quantization 4 bits. | 1 | 0 | 2023-03-26T17:53:49 | EnvironmentalAd3385 | false | null | 0 | jdrqnf6 | false | /r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/jdrqnf6/ | false | 1 |
t1_jdro73i | How does the 4-bit quantization work? I’m interested in applying for computer vision models | 1 | 0 | 2023-03-26T17:36:36 | 4_love_of_Sophia | false | null | 0 | jdro73i | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdro73i/ | false | 1 |
t1_jdrhwgm | Followed the pure windows (11) guide and did not encounter any errors.
Downloaded what I think is the correct model and repository. (Unclear about with and without group size). Trying the 13b 4bit.
When I start the server: \` python server.py --model llama-13b --wbits 4 --no-stream \` I get the following error (n... | 2 | 0 | 2023-03-26T16:52:31 | thebaldgeek | false | null | 0 | jdrhwgm | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdrhwgm/ | false | 2 |
t1_jdrhnep | >RuntimeError: probability tensor contains either \`inf\`, \`nan\` or element < 0
Try this fix and see if it works: [https://github.com/Ph0rk0z/text-generation-webui-testing/commit/ecad08f54c3282356888ee8f4dbf112cb331544a](https://github.com/Ph0rk0z/text-generation-webui-testing/commit/ecad08f54c3282356888ee8f4d... | 1 | 0 | 2023-03-26T16:50:47 | Technical_Leather949 | false | null | 0 | jdrhnep | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdrhnep/ | false | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.