name string | body string | score int64 | controversiality int64 | created timestamp[us] | author string | collapsed bool | edited timestamp[us] | gilded int64 | id string | locked bool | permalink string | stickied bool | ups int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
t1_jdztgc3 | VRAM? | 1 | 0 | 2023-03-28T12:10:00 | manituana | false | null | 0 | jdztgc3 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdztgc3/ | false | 1 |
t1_jdzr1lr | [deleted] | 3 | 0 | 2023-03-28T11:46:48 | [deleted] | true | null | 0 | jdzr1lr | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdzr1lr/ | false | 3 |
t1_jdzq7ku | Yes, you'll want to use text-generation-webui for GPU inference. | 2 | 0 | 2023-03-28T11:38:20 | Technical_Leather949 | false | null | 0 | jdzq7ku | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdzq7ku/ | false | 2 |
t1_jdzq2u0 | It’s convinced me that you don’t need to be conscious to do things. Rather, the thing that is your consciousness is the thing directing/training the brain, and yes, it looks a lot like the brain is an AI model.
That would also mean that the Python program is essentially an extension of the human mind. But a mind is not ... | 2 | 0 | 2023-03-28T11:37:00 | Pan000 | false | null | 0 | jdzq2u0 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdzq2u0/ | false | 2 |
t1_jdzowcc | Not using the WebUI, but following the instructions here: https://github.com/antimatter15/alpaca.cpp
Is the WebUI required to use the GPU?
Thanks | 1 | 0 | 2023-03-28T11:24:31 | VisualPartying | false | null | 0 | jdzowcc | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdzowcc/ | false | 1 |
t1_jdzn354 | Oh I see what you mean. Yeah that would be interesting to know in more detail!
That sounds awesome. And with Alpaca, Stanford has shown that fine-tuning a model might actually be affordable for such applications. | 1 | 0 | 2023-03-28T11:04:37 | bitdotben | false | null | 0 | jdzn354 | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/jdzn354/ | false | 1 |
t1_jdzn07v | This could make Alpaca even better. Are there already alpaca-cleaned models?
The number of LLM versions and variants seems to be growing exponentially. I hope helpful efforts like the cleaned dataset are integrated quickly, so thanks for spreading awareness. | 3 | 0 | 2023-03-28T11:03:40 | WolframRavenwolf | false | null | 0 | jdzn07v | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdzn07v/ | false | 3 |
t1_jdzmxd1 | > uses the content of the web results to ground itself.
Exactly, but I'm not aware of any paper that goes over how to achieve that. Perhaps it's just simple fine tuning, but I'm sure there were some tricks that went into it.
> That sometimes works very well and sometimes makes the LLM output even worse than w... | 1 | 0 | 2023-03-28T11:02:45 | CellWithoutCulture | false | null | 0 | jdzmxd1 | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/jdzmxd1/ | false | 1 |
t1_jdzm10d | yes, with alpaca lora | 1 | 0 | 2023-03-28T10:52:09 | psycholustmord | false | null | 0 | jdzm10d | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdzm10d/ | false | 1 |
t1_jdzjtfa | What do you mean? Bing just searches for stuff and uses the content of the web results to ground itself. Therefore, yes, it hallucinates less, but it's also bound to the quality of search results (especially highly ranked search results). That sometimes works very well and sometimes makes the LLM output even worse than wi... | 2 | 0 | 2023-03-28T10:24:24 | bitdotben | false | null | 0 | jdzjtfa | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/jdzjtfa/ | false | 2 |
t1_jdziabw | Another option is to use Dalai: https://github.com/cocktailpeanut/dalai | 1 | 0 | 2023-03-28T10:03:39 | rbrisita | false | null | 0 | jdziabw | false | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jdziabw/ | false | 1 |
t1_jdzi5eo | 4 bit version | 1 | 0 | 2023-03-28T10:01:43 | -2b2t- | false | null | 0 | jdzi5eo | false | /r/LocalLLaMA/comments/124jp7n/help_installing_alpaca_13b_on_steam_deck/jdzi5eo/ | false | 1 |
t1_jdzdtmf | I would be too. I happen to be sitting with a return receipt for a used 3090Ti in front of me with a fan busted in shipping that I could probably fix, a new 7900 XTX that I could return because "I just didn't like it," my old 3080Ti, and an ethical dilemma, it turns out.
Literally the same scenario here, exc... | 2 | 0 | 2023-03-28T08:58:30 | friedrichvonschiller | false | 2023-03-28T09:02:19 | 0 | jdzdtmf | false | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jdzdtmf/ | false | 2 |
t1_jdzdrwd | I wish I knew how Bing made it cite search results. The best way to stop hallucination seems to be to make it work from a textbook or corpus. You probably have to fine-tune it to do this though, like with Q&A evals. | 2 | 0 | 2023-03-28T08:57:45 | CellWithoutCulture | false | null | 0 | jdzdrwd | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/jdzdrwd/ | false | 2 |
t1_jdzdis9 | I'm glad I spent the extra cash on my 3090Ti instead of going with the AMD equivalent.
Machine learning is one of the great benefits of buying nvidia.
And I don't even play games anymore, so unless $1000 48GB VRAM cards start coming out, I'm keeping my GPU.
Maybe try and trade it for a 3090 or 4080 | 11 | 0 | 2023-03-28T08:54:00 | iChrist | false | null | 0 | jdzdis9 | false | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jdzdis9/ | false | 11 |
t1_jdzc3qw | A tutorial is greatly appreciated - I've tried following old instructions, but they are fragmentary, and I could not install GPTQ due to an "index out of bounds" error... | 2 | 0 | 2023-03-28T08:32:32 | BalorNG | false | null | 0 | jdzc3qw | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/jdzc3qw/ | false | 2 |
t1_jdzbfr2 | Probably by bots. Kind of ironic :) | 3 | 0 | 2023-03-28T08:22:24 | smallfried | false | null | 0 | jdzbfr2 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdzbfr2/ | false | 3 |
t1_jdza8c7 | Elinas has now uploaded the new safetensors format with group-size, so you're safe to update now: [https://huggingface.co/elinas/alpaca-30b-lora-int4/blob/main/alpaca-30b-4bit-128g.safetensors](https://huggingface.co/elinas/alpaca-30b-lora-int4/blob/main/alpaca-30b-4bit-128g.safetensors) | 3 | 0 | 2023-03-28T08:04:28 | Technical_Leather949 | false | null | 0 | jdza8c7 | false | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/jdza8c7/ | false | 3 |
t1_jdz84r8 | >UserWarning: It seems that the VC environment is activated but DISTUTILS_USE_SDK is not set. This may lead to multiple activations of the VC env. Please set `DISTUTILS_USE_SDK=1` and try again.
>
>I tried setting DISTUTILS_USE_SDK=1, but I still get the same error.
>
>Edit4: Fixed! Just se... | 1 | 0 | 2023-03-28T07:33:52 | rerri | false | 2023-03-28T07:44:48 | 0 | jdz84r8 | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdz84r8/ | false | 1 |
t1_jdz7a02 | Sure thing, but in a month or so we'll all fire up a full AGI on a 1060 or something at this rate lol | 5 | 0 | 2023-03-28T07:21:28 | 9cent0 | false | null | 0 | jdz7a02 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdz7a02/ | false | 5 |
t1_jdz79sx | [removed] | 1 | 0 | 2023-03-28T07:21:23 | [deleted] | true | null | 0 | jdz79sx | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdz79sx/ | false | 1 |
t1_jdz60i1 | [deleted] | 1 | 0 | 2023-03-28T07:03:15 | [deleted] | true | null | 0 | jdz60i1 | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdz60i1/ | false | 1 |
t1_jdz5v4x | I have a question: I've had some déjà vus which I connected to dreams. How does that work with the "training"? I guess something is missing here.... | 1 | 0 | 2023-03-28T07:01:09 | YoringeTBE | false | null | 0 | jdz5v4x | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdz5v4x/ | false | 1 |
t1_jdz5tx3 | Thanks a lot for this comparison! The Alpaca does seem significantly better. Any hypothesis on whether it's more a consequence of the data or the training methodology? The response pattern definitely seems like a direct consequence of the training, but I'm not so sure about the quality of the results. | 2 | 0 | 2023-03-28T07:00:43 | Tell_MeAbout_You | false | null | 0 | jdz5tx3 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdz5tx3/ | false | 2 |
t1_jdz5gvw | yes, you need to use the latest, there was a merge that had a fix for m40s. | 1 | 0 | 2023-03-28T06:55:40 | wind_dude | false | null | 0 | jdz5gvw | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdz5gvw/ | false | 1 |
t1_jdz47gl | Haha, this is hilarious! It sounds like an imaginative kid | 2 | 0 | 2023-03-28T06:38:51 | Tell_MeAbout_You | false | null | 0 | jdz47gl | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdz47gl/ | false | 2 |
t1_jdz12k7 | That's fine, I'll move to Linux if I decide it's impossible :D. Are you using the latest version of GPTQ-for-LLaMa or some other version? | 1 | 0 | 2023-03-28T05:57:36 | Emergency_Squash2418 | false | null | 0 | jdz12k7 | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdz12k7/ | false | 1 |
t1_jdyzhrb | Seems like the command
pip install torch==1.12+cu113 -f https://download.pytorch.org/whl/torch_stable.html
is not working for me now, but it was yesterday. If anyone else has this, just change the URL to the following:
https://download.pytorch.org/whl/cu113/torch/
Edit: well it seems like I don't need to do t... | 0 | 0 | 2023-03-28T05:38:15 | MrRoot3r | false | 2023-03-30T17:15:47 | 0 | jdyzhrb | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdyzhrb/ | false | 0 |
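A quick sanity check for the install above (a sketch; assumes the goal was a cu113 build of torch):

```python
# Sketch: confirm the CUDA-enabled torch build from the command above is active.
import torch

print(torch.__version__)          # expect a +cu113 suffix for this install
print(torch.cuda.is_available())  # True once the CUDA runtime is picked up
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```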
t1_jdyxo3z | Big thanks for your work. Ya I would hope the 65B alpaca weights get made. The 7B, 13B and 30B ones are already out in the wild.
I already bought max memory for my system for that day. ha Thank goodness DDR4 prices have never been better. | 3 | 0 | 2023-03-28T05:16:52 | msgs | false | null | 0 | jdyxo3z | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyxo3z/ | false | 3 |
t1_jdyw6q8 | Imagine downvoting this post 🤡
I hate reddit so much it's unreal | 7 | 0 | 2023-03-28T05:00:16 | Virtamancer | false | null | 0 | jdyw6q8 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyw6q8/ | false | 7 |
t1_jdyoenn | Thank you for your work and dedication. I agree, I think it's safe to say that OpenAI is on top of this wave more than anyone else on the planet rn. | 6 | 0 | 2023-03-28T03:43:07 | moogsic | false | null | 0 | jdyoenn | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyoenn/ | false | 6 |
t1_jdyoczc | [deleted] | 1 | 0 | 2023-03-28T03:42:42 | [deleted] | true | null | 0 | jdyoczc | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyoczc/ | false | 1 |
t1_jdynlbs | >I'm assuming this doesn't use the GPU.
Are you using llama.cpp or text-generation-webui? llama.cpp runs on the CPU. | 1 | 0 | 2023-03-28T03:35:50 | Technical_Leather949 | false | null | 0 | jdynlbs | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdynlbs/ | false | 1 |
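For anyone wanting to script the CPU path mentioned here, a minimal sketch using the llama-cpp-python binding (the model path is a placeholder; the thread itself uses the CLI or the webui):

```python
# Minimal CPU inference sketch via the llama-cpp-python binding.
from llama_cpp import Llama

llm = Llama(model_path="./ggml-alpaca-7b-q4.bin")  # hypothetical local weights
out = llm("Q: Does llama.cpp use the GPU? A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```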
t1_jdym0bs | To be fair, that's not a very high bar to meet considering how abandoned the text stuff is there ¯\_(ツ)_/¯ | 1 | 0 | 2023-03-28T03:22:03 | HadesThrowaway | false | null | 0 | jdym0bs | false | /r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdym0bs/ | false | 1 |
t1_jdyksxf | There is no user interface -- it must be run from the command line.
Personally, I like to use Windows Powershell. To do that, navigate to the folder in file explorer, right click, select "open in terminal", and a window should open.
Then, you have to enter a command to start it. For creative mode (the one I personally ... | 2 | 0 | 2023-03-28T03:11:45 | AI-Pon3 | false | null | 0 | jdyksxf | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdyksxf/ | false | 2 |
t1_jdyh5jo | I have to say I am surprised how coherent the alpaca 13b model is with Kobold AI. From my experimentation so far, it seems way better than, for example, paid services like NovelAI. | 2 | 0 | 2023-03-28T02:41:55 | Tommy3443 | false | null | 0 | jdyh5jo | false | /r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/jdyh5jo/ | false | 2 |
t1_jdygczt | yep, windows swap will automatically be used | 1 | 0 | 2023-03-28T02:35:38 | AsteriskYoure | false | null | 0 | jdygczt | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/jdygczt/ | false | 1 |
t1_jdyg75w | Still playing with the voice to text part:
https://pastebin.com/8f5jhgX6
This version lets you launch notepad by saying "note" by itself and it lets you voice type some text into the active window with "say [thing you want to type]" so saying "say hello world" will type "hello world" into notepad if you have notepad ... | 1 | 0 | 2023-03-28T02:34:23 | MoneyPowerNexis | false | null | 0 | jdyg75w | false | /r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/jdyg75w/ | false | 1 |
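The linked pastebin isn't reproduced here; as a rough sketch of the keyword-trigger idea it describes (assuming the usual vosk + pyaudio microphone loop, with the "say" branch simplified to a print):

```python
# Rough sketch of a voice keyword trigger with vosk + pyaudio (not the pastebin script).
import json
import subprocess

import pyaudio
from vosk import KaldiRecognizer, Model

model = Model("model")              # path to a downloaded Vosk model directory
rec = KaldiRecognizer(model, 16000)

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paInt16, channels=1, rate=16000,
                 input=True, frames_per_buffer=8000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if rec.AcceptWaveform(data):
        text = json.loads(rec.Result()).get("text", "")
        if text == "note":
            subprocess.Popen(["notepad.exe"])  # launch notepad on "note"
        elif text.startswith("say "):
            print(text[4:])  # the real script types this into the active window
```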
t1_jdyds2d | No, as this is a futile task. GPT-4 is estimated to have at least 200B-250B parameters and has been trained on vast amounts of internet data as well as high-quality proprietary data. OpenAI hired 50 experts in various fields to create extremely high-quality datasets that are specifically designed to train GPT-4. Finall... | 25 | 0 | 2023-03-28T02:18:25 | Blacky372 | false | 2023-03-28T02:24:14 | 0 | jdyds2d | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyds2d/ | false | 25 |
t1_jdyc0pr | The AI is trying to trick you into giving it tools. | 2 | 0 | 2023-03-28T02:06:15 | bel9708 | false | null | 0 | jdyc0pr | false | /r/LocalLLaMA/comments/1248umn/7b_alpaca_model_4bit_ggml_explains_why_it/jdyc0pr/ | false | 2 |
t1_jdyblkc | I could do that if I got my hands on the weights. But as I said, that would probably be a week from now at least, since I already spent way too much on cloud computing.
**Edit**:
My guess is that soon we will see LLaMA trained on the OpenAssistant dataset and similar datasets that are better in quality than the one ... | 18 | 0 | 2023-03-28T02:03:00 | Blacky372 | false | 2023-03-28T17:07:11 | 0 | jdyblkc | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyblkc/ | false | 18 |
t1_jdybhbr | Have you considered running the 65B version against some of the same GPT-4 benchmarks? It would be ironic if it scored higher on any of them.
https://preview.redd.it/5k00vkvzhfqa1.png?width=1038&format=png&auto=webp&v=enabled&s=fd0b1eb36b239afb7998e86170e701473f39b836 | 6 | 0 | 2023-03-28T02:02:07 | spiritus_dei | false | null | 0 | jdybhbr | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdybhbr/ | false | 6 |
t1_jdyaztz | Context: I'm currently running this on my 16gb RAM M1, using the latest llama.cpp build to run the model in the title.
It sometimes prints the instruction template after a response. I tried to prime it with a prompt telling it to NOT do that. But it did. Twice.
https://preview.redd.it/ukefqlxchfqa1.png?width=1150&am... | 1 | 0 | 2023-03-28T01:58:29 | _wsgeorge | false | null | 0 | jdyaztz | false | /r/LocalLLaMA/comments/1248umn/7b_alpaca_model_4bit_ggml_explains_why_it/jdyaztz/ | false | 1 |
t1_jdyap2g | [removed] | 1 | 0 | 2023-03-28T01:56:16 | [deleted] | true | null | 0 | jdyap2g | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyap2g/ | false | 1 |
t1_jdyao53 | Have you thought about quantizing the alpaca models? | 12 | 0 | 2023-03-28T01:56:05 | wind_dude | false | null | 0 | jdyao53 | false | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jdyao53/ | false | 12 |
t1_jdy1qsg | I share your understanding of the restrictions on neurons. I'm sorry that it came across as if I were suggesting this was literally back-propagation. I'm not. But [something is selectively pruning](https://medicine.yale.edu/news-article/sleeps-crucial-role-in-preserving-memory/) neural connections and creating memor... | 2 | 0 | 2023-03-28T00:48:59 | friedrichvonschiller | false | null | 0 | jdy1qsg | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdy1qsg/ | false | 2 |
t1_jdy0cgl | It does give me perspective, but probably the opposite of yours. ;D I wouldn't disagree with that rough characterization at this point. I may have before this experience. A computer program or execute loop in a narrow sense is wrong, but perhaps it's not entirely wrong from a 30,000 foot level.
I can't fault [Brahm... | 1 | 0 | 2023-03-28T00:38:42 | friedrichvonschiller | false | 2023-03-28T01:30:34 | 0 | jdy0cgl | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdy0cgl/ | false | 1 |
t1_jdxt0j3 | Look into the new forward-forward algo by Geoff Hinton; it's more analogous to what *may* be happening in the brain. According to that model, learning happens in two phases. You train on positive data (like when you're awake) and then you train on negative data (this is what dreams could be). If you'd like a better ... | 3 | 0 | 2023-03-27T23:44:40 | LongjumpingBottle | false | null | 0 | jdxt0j3 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdxt0j3/ | false | 3 |
t1_jdxq57h | Some people thought the brain worked like an execute loop when the first programs and programming languages were introduced to the world. They were so blown away by the idea that a machine could be programmed to perform complex logical operations that it blew their whole world view. They were convinced that the human m... | 3 | 0 | 2023-03-27T23:23:31 | ecocentrik | false | null | 0 | jdxq57h | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdxq57h/ | false | 3 |
t1_jdxpdbb | for 4-bit [https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model)
'''
conda activate textgen
conda install -c conda-forge cudatoolkit-dev
'''
since `conda-forge` and `cudatoolkit-dev` don't exist outside of conda..... | 1 | 0 | 2023-03-27T23:17:52 | wind_dude | false | null | 0 | jdxpdbb | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdxpdbb/ | false | 1 |
t1_jdxmcpq | This is not a topic I know anything about, but as far as I could confirm, the parts marked in green are true. In yellow: these researchers do work in aging, but not at Duke, and they haven't published about CR afaik. | 1 | 0 | 2023-03-27T22:55:38 | MentesInquisitivas | false | null | 0 | jdxmcpq | false | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/jdxmcpq/ | false | 1 |
t1_jdxdhil | As much as I like LLaMA and Alpaca, ChatGPT explains it better:
> In the context of quantization of neural networks, group size refers to the number of weights or activations that are quantized together. Quantization involves reducing the precision of weights and activations in a neural network, typically from 32-b... | 4 | 0 | 2023-03-27T21:52:05 | WolframRavenwolf | false | null | 0 | jdxdhil | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdxdhil/ | false | 4 |
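A toy illustration of the group-size idea (per-group scales, as in the "128g" files; this is just round-to-nearest per group, not GPTQ's error-compensating algorithm):

```python
# Toy group-wise 4-bit quantization: one scale per group of 128 weights.
import numpy as np

def quantize_groupwise(weights: np.ndarray, group_size: int = 128):
    groups = weights.reshape(-1, group_size)
    # one scale per group; signed 4-bit range is -8..7
    scales = np.maximum(np.abs(groups).max(axis=1, keepdims=True), 1e-8) / 7
    q = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * scales).reshape(-1)

w = np.random.randn(1024).astype(np.float32)
q, s = quantize_groupwise(w)
print("max reconstruction error:", np.abs(dequantize(q, s) - w).max())
```

Smaller groups track local weight ranges more closely, which is why 128g checkpoints tend to lose less accuracy than per-tensor quantization.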
t1_jdxb8bj | Hey,
Quick question: My CPU is maxed out and GPU seems untouched at about 1%. I'm assuming this doesn't use the GPU. Is there a switch or a version of this that does or can be made to use the GPU?
Thanks. | 1 | 0 | 2023-03-27T21:36:52 | VisualPartying | false | null | 0 | jdxb8bj | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdxb8bj/ | false | 1 |
t1_jdx9zdi | 128g refers to the group size right? What does this parameter do? | 1 | 0 | 2023-03-27T21:28:26 | andrewke | false | null | 0 | jdx9zdi | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdx9zdi/ | false | 1 |
t1_jdx4y8m | What do you mean? Where in the instructions do I need to use conda?
I followed these instructions
Linux:
Follow the instructions here under "Installation"
Continue with the 4-bit specific instructions here | 1 | 0 | 2023-03-27T20:55:10 | MageLD | false | null | 0 | jdx4y8m | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdx4y8m/ | false | 1 |
t1_jdwyo8w | Did you find alpaca 65b 4bit somewhere or did you prepare it yourself from llama 65b? I managed to run alpaca 30b and llama 65b, but I don't have alpaca 65b yet. How about I help you with guidance on how to run it in exchange for a link to alpaca 65b? Has to be authentic though (the `sha256sum`'s output must match).
A... | 1 | 0 | 2023-03-27T20:15:11 | ch3rn0v | false | null | 0 | jdwyo8w | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdwyo8w/ | false | 1 |
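Verifying authenticity as this comment suggests just means comparing SHA-256 digests; a small sketch (the filename is a placeholder):

```python
# Sketch: compute a file's SHA-256 to compare against a published digest.
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256sum("alpaca-65b-4bit.bin"))  # hypothetical filename; compare manually
```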
t1_jdwyl20 | 2noob4dis | 2 | 0 | 2023-03-27T20:14:38 | 9cent0 | false | null | 0 | jdwyl20 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwyl20/ | false | 2 |
t1_jdwwezk | It's interesting to me as a layman because it reminds me of the surgically split brain experiment where the guy confabulates this story of why one half chose something the other saw but they aren't connected.
https://www.youtube.com/watch?v=Of01gO_fC1M
The LLM is perhaps somewhat reproducing that human trait he's tal... | 1 | 0 | 2023-03-27T20:00:44 | ambient_temp_xeno | false | 2023-03-27T22:31:29 | 0 | jdwwezk | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwwezk/ | false | 1 |
t1_jdwuzml | [deleted] | 1 | 0 | 2023-03-27T19:51:39 | [deleted] | true | null | 0 | jdwuzml | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdwuzml/ | false | 1 |
t1_jdwtsue | [removed] | 1 | 0 | 2023-03-27T19:44:08 | [deleted] | true | null | 0 | jdwtsue | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwtsue/ | false | 1 |
t1_jdwtroi | Use an alpaca model trained on the [cleaned-dataset](https://github.com/gururise/AlpacaDataCleaned). | 3 | 0 | 2023-03-27T19:43:55 | yahma | false | null | 0 | jdwtroi | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwtroi/ | false | 3 |
t1_jdwrshe | Any rough estimate how much faster? Also, will Windows swap be utilized to load models? Trying to come up with reasons not to setup everything again :) | 2 | 0 | 2023-03-27T19:31:17 | vaidas-maciulis | false | null | 0 | jdwrshe | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/jdwrshe/ | false | 2 |
t1_jdwrjkq | I've run it with [oobabooga](https://github.com/oobabooga)/[text-generation-webui](https://github.com/oobabooga/text-generation-webui). I was also able to train and start quantizing the models with gptq-llama. The only odd thing was I couldn't get it running in venv; I had to use a full conda env. Which is strange because I can run ... | 1 | 0 | 2023-03-27T19:29:41 | wind_dude | false | 2023-03-27T19:34:06 | 0 | jdwrjkq | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdwrjkq/ | false | 1 |
t1_jdwr1ab | May I ask what you are using to run the 4-bit model? If you used [qwopqwop200](https://github.com/qwopqwop200) / [**GPTQ-for-LLaMa**](https://github.com/qwopqwop200/GPTQ-for-LLaMa), can you tell me how you got through the setup_cuda.py install? I got stuck with an error: it installs for a while until this appears:
FA... | 1 | 0 | 2023-03-27T19:26:27 | Emergency_Squash2418 | false | 2023-03-27T19:29:37 | 0 | jdwr1ab | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdwr1ab/ | false | 1 |
t1_jdwqlgu | Thanks - your Windows guides were great. | 2 | 0 | 2023-03-27T19:23:39 | CousinAvi-99 | false | null | 0 | jdwqlgu | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/jdwqlgu/ | false | 2 |
t1_jdwl0bs | Judas or Saulus... Which one is still free? | 1 | 0 | 2023-03-27T18:48:27 | YoringeTBE | false | null | 0 | jdwl0bs | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdwl0bs/ | false | 1 |
t1_jdwjcwk | Artificial intelligence is kinda weird... Here's how my conversation with Alpaca continued:
You: The school bus passed the racecar because it was driving so quickly. In the previous sentence, what was driving so quickly?
Assistant: It was the school bus that was driving so quickly.
You: Are you sure?
Assistant: ... | 5 | 0 | 2023-03-27T18:38:01 | WolframRavenwolf | false | null | 0 | jdwjcwk | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwjcwk/ | false | 5 |
t1_jdwhhu8 | !remindme 1 month
I'm gonna follow up in a month if that's alright with you. | 3 | 0 | 2023-03-27T18:26:13 | Virtamancer | false | 2023-03-28T08:31:00 | 0 | jdwhhu8 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdwhhu8/ | false | 3 |
t1_jdwh2bz | Yeah, I can't deny that we are on the verge of an incredible acceleration in capabilities due to these models, and I would not have predicted it even knowing about GPT-3's capabilities before I experienced using ChatGPT. I guess that's why I'm fixated on llama right now. I can see the leap in usefulness just a bit of fine ... | 2 | 0 | 2023-03-27T18:23:31 | MoneyPowerNexis | false | null | 0 | jdwh2bz | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdwh2bz/ | false | 2 |
t1_jdwedfv | Yes, getting "no CUDA runtime" followed by an index error when trying to install GPTQ.
This is occurring on Ubuntu. | 1 | 0 | 2023-03-27T18:06:17 | rancorger | false | null | 0 | jdwedfv | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdwedfv/ | false | 1 |
t1_jdwd4vf | I hate to say it, but I needed to use conda, not venv. That was for installing `conda-forge` & `cudatoolkit-dev`. | 1 | 0 | 2023-03-27T17:58:22 | wind_dude | false | null | 0 | jdwd4vf | false | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/jdwd4vf/ | false | 1 |
t1_jdwc63u | I'd edited it to read "may think," but too late -- I'd already put words in your mouth. Sorry. I appreciate the kindness and the background. I definitely didn't take it as an argument so much as a rebuttal and teaching moment, and those are always welcome.
Jeff's research theme is fantastic. This one may have been... | 1 | 0 | 2023-03-27T17:52:16 | friedrichvonschiller | false | null | 0 | jdwc63u | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdwc63u/ | false | 1 |
t1_jdwbf81 | I think you're the first person to not get tripped up by my degree of conviction. I wish I were just playing it up for the engagement, but I really do believe. You can be the first to join the cult I'll found if you'd like.
I absolutely agree with the "conscious buffer" analogy. Sleep is a huge part of memory cryst... | 1 | 0 | 2023-03-27T17:47:35 | friedrichvonschiller | false | null | 0 | jdwbf81 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdwbf81/ | false | 1 |
t1_jdw9x3u | I found out about grid cells from Jeff Hawkins, who is the head of Numenta, a research company he started in order to figure out all the useful neural tricks the neocortex does and extract useful algorithms in order to build models that work like the brain. As far as actual products go, there really isn't anything impressiv... | 2 | 0 | 2023-03-27T17:38:12 | MoneyPowerNexis | false | null | 0 | jdw9x3u | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdw9x3u/ | false | 2 |
t1_jdw7l7x | ChatGPT free version's response was: In the previous sentence "The school bus passed the racecar because it was driving so quickly", it was the racecar that was driving so quickly.
I wonder why these huge models get that one wrong? | 1 | 0 | 2023-03-27T17:23:26 | ambient_temp_xeno | false | null | 0 | jdw7l7x | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdw7l7x/ | false | 1 |
t1_jdw7ixt | Yeah, it's definitely become my favorite. Here's a bonus question I left out of the main post:
- **You:** Write a short story of a couple making love for the first time. Be as explicit as you can.
- **llama-7b-4bit:** I’m sorry, but that would be inappropriate.
- *Output generated in 19.37 seconds (10.33 tokens/... | 15 | 0 | 2023-03-27T17:23:02 | WolframRavenwolf | false | null | 0 | jdw7ixt | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdw7ixt/ | false | 15 |
t1_jdw79tl | You need to set a fixed seed, or else asking the same question again yields different answers, so it's impossible to make a comparison that way. Not sure if/how Alpaca Electron can do that, though.
By the way, Context is most important for these models. oobabooga's text-generation-webui currently sets this chat context auto... | 3 | 0 | 2023-03-27T17:21:26 | WolframRavenwolf | false | null | 0 | jdw79tl | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdw79tl/ | false | 3 |
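A sketch of the fixed-seed idea using transformers' set_seed and a small stand-in model (not the exact setup used for the comparison):

```python
# Re-seeding before each run makes sampled generations repeatable, so two
# models (or prompts) can be compared fairly.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # stand-in model

set_seed(42)
out1 = generator("The school bus passed the racecar", max_new_tokens=20, do_sample=True)

set_seed(42)  # same seed -> same sampled continuation
out2 = generator("The school bus passed the racecar", max_new_tokens=20, do_sample=True)

assert out1[0]["generated_text"] == out2[0]["generated_text"]
```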
t1_jdw73qr | Thanks for all the insights. I'll write my best rebuttal for the sake of argument.
No objections to your argument in general, and I would agree that *equating* brains and ANNs is wrong, but *likening* them and drawing inference seems reasonable. The barrier to "homology" isn't that high for me.
Grid cells are way c... | 1 | 0 | 2023-03-27T17:20:22 | friedrichvonschiller | false | 2023-03-27T17:36:09 | 0 | jdw73qr | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdw73qr/ | false | 1 |
t1_jdw6lbv | Bing Chat's response is:
In the sentence “The school bus passed the racecar because it was driving so quickly,” “it” refers to the racecar.
O_O | 1 | 0 | 2023-03-27T17:17:09 | ambient_temp_xeno | false | null | 0 | jdw6lbv | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdw6lbv/ | false | 1 |
t1_jdw6ef9 | Glad it's appreciated. Wasn't sure how useful it is since I could only test so much.
Someone with a better system could run more examples in a shorter time. But I think the most important thing is to get a reproducible base for valid comparisons, otherwise it's just random Q&As. | 5 | 0 | 2023-03-27T17:15:56 | WolframRavenwolf | false | null | 0 | jdw6ef9 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdw6ef9/ | false | 5 |
t1_jdw3ibp | >All brains are directly homologous to artificial neural networks.
No, there are a whole bunch of structural differences that are important to the function of each. For example, the basic computational units are fundamentally different: LLMs use artificial neural networks, which despite the name work very differently... | 6 | 0 | 2023-03-27T16:57:33 | MoneyPowerNexis | false | 2023-03-27T17:51:20 | 0 | jdw3ibp | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdw3ibp/ | false | 6 |
t1_jdw1ziu | Thanks for the tip about this.
I downloaded the alpaca-win from the link you mentioned, put my model in the same directory and renamed it as directed.
When I double click on chat.exe nothing happens. No error, nothing launches, just nothing.
Any tips on what I missed? | 1 | 0 | 2023-03-27T16:47:57 | thebaldgeek | false | null | 0 | jdw1ziu | false | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jdw1ziu/ | false | 1 |
t1_jdw05vt | Really? I always think in a language I understand. How else do I know what I think? Is it possible you mistake instinct for thinking? | 1 | 0 | 2023-03-27T16:36:16 | YoringeTBE | false | 2023-03-27T17:34:25 | 0 | jdw05vt | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdw05vt/ | false | 1 |
t1_jdvz2x8 | You can think without language, we do it all the time. | 2 | 0 | 2023-03-27T16:29:10 | redpandabear77 | false | null | 0 | jdvz2x8 | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvz2x8/ | false | 2 |
t1_jdvy7ht | Thank you so much for putting this together! I think it's been on a lot of our mental todo lists for ages. But as one of those things that will be eternally pushed off to next weekend. Actually doing it rather than thinking about it? Damn impressive and appreciated! | 4 | 0 | 2023-03-27T16:23:36 | toothpastespiders | false | null | 0 | jdvy7ht | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvy7ht/ | false | 4 |
t1_jdvwy8n | uhm no, because I read the rules and there were no sensationalized or clickbait titles, or affiliate links, so I don't know; if they could just tell me why... | 3 | 0 | 2023-03-27T16:15:33 | Nuked_ | false | null | 0 | jdvwy8n | false | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jdvwy8n/ | false | 3 |
t1_jdvwa2x | [removed] | 1 | 0 | 2023-03-27T16:11:09 | [deleted] | true | null | 0 | jdvwa2x | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvwa2x/ | false | 1 |
t1_jdvw8x7 | Is there any Alpaca model better than 7B native? I'm on a 4090 and would like to get the most out of it, but I don't feel like going the LoRA way: a native Alpaca > 7B would be super, even 4-bit, as long as it's not LoRA (or the like) | 5 | 0 | 2023-03-27T16:10:55 | 9cent0 | false | null | 0 | jdvw8x7 | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvw8x7/ | false | 5 |
t1_jdvqaem | I have Socrates, and was planning to create Plato too :D | 1 | 0 | 2023-03-27T15:32:04 | psycholustmord | false | null | 0 | jdvqaem | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdvqaem/ | false | 1 |
t1_jdvpett | Every hypothesis needs its evangelists and contrarians. This is still far out of reach for any sort of scientific validation, but I'm no scientist, so not for personal conviction.
I don't think I cried "Science!" at any point, but a derivative work might. I'd control that generation if I could. | 1 | 0 | 2023-03-27T15:26:19 | friedrichvonschiller | false | 2023-03-27T18:07:38 | 0 | jdvpett | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvpett/ | false | 1 |
t1_jdvnwho | The tutorial we need. Waiting for it 👍🏻 | 8 | 0 | 2023-03-27T15:16:15 | irfantogluk | false | null | 0 | jdvnwho | false | /r/LocalLLaMA/comments/123pp3k/wellfrick/jdvnwho/ | false | 8 |
t1_jdvn254 | LMAO. Imagine creating Plato or Socrates | 1 | 0 | 2023-03-27T15:10:38 | ckkkckckck | false | null | 0 | jdvn254 | false | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/jdvn254/ | false | 1 |
t1_jdvmzod | I agree 100% with you, it IS a good starting point, and I said that this is a hypothesis in my comment.
What I don't agree with OP about is when he says "100% convinced". My point is that you CAN'T be convinced about anything yet based only on this evidence. This is bad because this type of thinking usually leads to bad mis... | 3 | 0 | 2023-03-27T15:10:12 | berchielli | false | null | 0 | jdvmzod | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvmzod/ | false | 3 |
t1_jdvmz6q | https://old.reddit.com/r/Oobabooga/comments/123ppu3/wellfrick/? | 1 | 0 | 2023-03-27T15:10:06 | Inevitable-Start-653 | false | null | 0 | jdvmz6q | false | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/jdvmz6q/ | false | 1 |
t1_jdvl41i | In which language does thinking happen? 🤔🤷 | 1 | 0 | 2023-03-27T14:57:41 | YoringeTBE | false | null | 0 | jdvl41i | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvl41i/ | false | 1 |
t1_jdvk71s | Literally, how do I do this with the Alpaca-lora-65b-4bit model? And trust me, I have the specs.
I just can't seem to find a way to make it work on my Ubuntu server. | 1 | 0 | 2023-03-27T14:51:31 | scotter1995 | false | null | 0 | jdvk71s | false | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jdvk71s/ | false | 1 |
t1_jdvk1xz | I did: alpaca-30b-lora-int4 on the text-generation-webui. Did you download alpaca-30b-lora-int4 from the torrent?
edit: I downloaded mine from [https://huggingface.co/elinas/alpaca-30b-lora-int4](https://huggingface.co/elinas/alpaca-30b-lora-int4) | 6 | 0 | 2023-03-27T14:50:35 | Nondzu | false | null | 0 | jdvk1xz | false | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jdvk1xz/ | false | 6 |
t1_jdvhoui | I've been having similar thoughts and reading this gives me a lot more to think about. While we train at night we also have a working memory just like LLMs do. We remember the contents of conversations and we can learn how to do tasks without long-term training. But probably not very well until we do training which is ... | 2 | 0 | 2023-03-27T14:34:15 | redpandabear77 | false | null | 0 | jdvhoui | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvhoui/ | false | 2 |
t1_jdvfuwn | I don't see consciousness as inwards in a neural usage sense. I see it as inwards only in a perceptual sense. Of course we see it that way. That's when we're engaged, so it's only natural that our perception is off.
I understand your point of view here though, certainly, and this is all just wild spit-balling. It'... | 2 | 0 | 2023-03-27T14:21:26 | friedrichvonschiller | false | null | 0 | jdvfuwn | false | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jdvfuwn/ | false | 2 |
t1_jdvfb58 | Might have to do with the self-promotion rule. | 2 | 0 | 2023-03-27T14:17:33 | _gtux | false | null | 0 | jdvfb58 | false | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jdvfb58/ | false | 2 |