name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_je6px9m
Well... that is one of those exceptions, because I used to have 8 + 4 + 4 GB, but one of the modules died :) And yeah, that's one of the rarer 12GB RTX 2060s. Makes little gaming sense, but it's good for dabbling in ML and running GPU CFD with rough but reasonable accuracy, heh.
2
0
2023-03-29T20:20:48
BalorNG
false
null
0
je6px9m
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6px9m/
false
2
t1_je6pg5s
And just after you made your tutorials too 😭 Thanks for the heads up
2
0
2023-03-29T20:17:44
the_real_NordVPN
false
null
0
je6pg5s
false
/r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je6pg5s/
false
2
t1_je6p84j
You almost certainly don't have "12GB of system RAM." If you have an RTX 2060, you should have 6GB of VRAM (the memory on the card) and some amount of system RAM that is almost always a multiple of 8 (8 GB, 32 GB, 64 GB, 128 GB). There are exceptions, but they're not common.
0
0
2023-03-29T20:16:18
the_quark
false
null
0
je6p84j
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6p84j/
false
0
t1_je6oq27
Oh, it finally worked with the new one-click installer. I guess that befits my level of intelligence :) and you are right - my 12GB of system RAM is not enough to load the 13B model, it seems, but 7B loads quickly and runs very fast. I'd need to add some...
2
0
2023-03-29T20:13:04
BalorNG
false
null
0
je6oq27
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6oq27/
false
2
t1_je6o1ds
So I am having a few problems with the `python setup_cuda.py install` part of the installation. Right after I run the command, I get `No CUDA runtime is found, using CUDA_HOME="(path to env)"` followed by a few DeprecationWarnings. After this, it seems like it manages to run bdist_egg and egg_info, and then does the foll...
1
0
2023-03-29T20:08:43
Comptoneffect
false
null
0
je6o1ds
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/je6o1ds/
false
1
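For the "No CUDA runtime is found" failure described above, a minimal preflight sketch, assuming PyTorch is installed: it only checks whether torch can see a GPU and what CUDA_HOME points at, which is usually where this build error originates.

```python
# Preflight check before building GPTQ's CUDA kernel.
# If torch can't see the GPU, or CUDA_HOME points at a CPU-only
# environment, setup_cuda.py will fall back to "No CUDA runtime is found".
import os
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA version torch was built with:", torch.version.cuda)
print("CUDA_HOME:", os.environ.get("CUDA_HOME", "<not set>"))

if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```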
t1_je6nbu2
Interested to see what people think of this one, but since it's Linux and I'm not set up for that, I'll sit this one out until I hear whether it's worth it.
3
0
2023-03-29T20:04:11
ThePseudoMcCoy
false
null
0
je6nbu2
false
/r/LocalLLaMA/comments/125u56p/colossalchat/je6nbu2/
false
3
t1_je6m3pr
Holy smoke! I just finished reading the paper; how the hell did you do that in less than 10 days? PS: A potential 4-bit quantization for the bigger LLaMA models, maybe?
1
0
2023-03-29T19:56:20
assalas23
false
null
0
je6m3pr
false
/r/LocalLLaMA/comments/1259yxg/llamaadapter_efficient_finetuning_of_llama/je6m3pr/
false
1
t1_je6h698
[deleted]
1
0
2023-03-29T19:24:58
[deleted]
true
null
0
je6h698
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je6h698/
false
1
t1_je6h0as
It's definitely much more than 1 token per word for that estimate, unfortunately. A token is used for every period, hyphen, individual quotation mark, parenthesis, and apostrophe, and many words are just oddly split into a bunch of tokens. It even uses a token for every space or tab.
2
0
2023-03-29T19:23:55
Travistyse
false
null
0
je6h0as
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je6h0as/
false
2
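A quick sketch of the word-vs-token overhead described above, assuming the transformers library (plus sentencepiece) and a LLaMA tokenizer on the Hugging Face Hub; the repo name here is an assumption, and any LLaMA-compatible tokenizer would do.

```python
# Count words vs. tokens on a punctuation-heavy string to see the
# overhead: quotes, parentheses, and hyphens typically tokenize separately.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")  # assumed repo

text = 'He said: "Wait - really?!" (Yes, really.)'
tokens = tokenizer.tokenize(text)

print(len(text.split()), "words ->", len(tokens), "tokens")
print(tokens)
```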
t1_je6gjox
> alpaca-7b-nativeEnhanced First time I've heard of this, but that looks very interesting: based on the cleaned Alpaca dataset and including exact instructions on how to get the best out of it! However, apparently the model files aren't online yet, so we'll have to keep an eye on it...
1
0
2023-03-29T19:21:01
WolframRavenwolf
false
null
0
je6gjox
false
/r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je6gjox/
false
1
t1_je6fp37
Yes, this can't be stressed enough: A model's performance is vastly influenced by the prompt, especially the initial prompt or "character" (for chat). Yesterday I updated my [LLaMA and Alpaca comparison](https://www.reddit.com/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/) with responses of a ChatGPT...
4
0
2023-03-29T19:15:35
WolframRavenwolf
false
null
0
je6fp37
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je6fp37/
false
4
t1_je6fo1m
The problem I see with all of these models is that the context size is tiny compared to GPT-3/GPT-4. All the LLaMA models have context windows of 2048 characters, whereas GPT-3.5 has a context of 2048 tokens (and GPT-4 of up to 32k tokens). A token is roughly equivalent to a word, and 2048 words goes a lot farther than 204...
2
0
2023-03-29T19:15:24
Recursive_Descent
false
null
0
je6fo1m
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je6fo1m/
false
2
t1_je6fbhb
I'm not sure what to share in a comment. How about a prompt, a seed, and settings? What do you want to accomplish? Structure is totally dependent on what you'd like to achieve.
1
0
2023-03-29T19:13:09
friedrichvonschiller
false
null
0
je6fbhb
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je6fbhb/
false
1
t1_je6enrz
Yes, I hope so, too. LLaMA home use is only possible because the model got leaked, so either Meta decides to give it a truly open license (not very likely - but who knows, it could help them catch up with ~~Open~~ClosedAI) or we'll need a legit and legal alternative, basically something that's for LLMs what Stable Diff...
1
0
2023-03-29T19:08:57
WolframRavenwolf
false
null
0
je6enrz
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je6enrz/
false
1
t1_je6dq8o
What trouble do you have building GPTQ? And I don't know that WSL is essential; I'm just a Linux guy, so I already do all my work in Windows under WSL.
1
0
2023-03-29T19:02:59
the_quark
false
null
0
je6dq8o
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6dq8o/
false
1
t1_je6d8ka
Hmm, I've tried setting up WSL twice, from scratch, and could not compile GPTQ :( Strange... I'll try the new "one-click installer", I guess.
1
0
2023-03-29T18:59:51
BalorNG
false
null
0
je6d8ka
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6d8ka/
false
1
t1_je6ctdx
I was able to get 4-bit LLaMA 13B working on an RTX 2080 Ti with 11GB of VRAM. Done in Windows 11 in WSL, but I also have 64GB of RAM available to WSL, which is important with small VRAM sizes.
1
0
2023-03-29T18:57:08
the_quark
false
null
0
je6ctdx
false
/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/je6ctdx/
false
1
t1_je6c9ov
Yeah, the only reason I stick with AMD is because the Linux experience is just better than with Nvidia, but this is really embarrassing…
2
0
2023-03-29T18:53:39
Tiefbegabt
false
null
0
je6c9ov
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/je6c9ov/
false
2
t1_je6avf2
Who is "you"? I've donated to Oobabooga and I would love to donate to qwopqwop200, but he's actively uninterested in and apparently uncomfortable with donations. I think this is a labor of love for the most important projects currently in the space, and that's unlikely to change. I would absolutely support the "*act...
3
0
2023-03-29T18:44:45
friedrichvonschiller
false
null
0
je6avf2
false
/r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/je6avf2/
false
3
t1_je6aeuw
2 months beyond their own self-imposed public deadline. They're losing true believers, and that's about all AMD had left. They'll have to really shock the market at this point.
2
0
2023-03-29T18:41:48
friedrichvonschiller
false
null
0
je6aeuw
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/je6aeuw/
false
2
t1_je6a7zy
We're particularly curious about how it would work for 4-bit. The larger L3 cache in the 7900 holds the prospect of superior performance with GPTQ PyTorch. I think AMD is going to be very strong in this field eventually, since they cover the entire GPU/CPU/motherboard ecosystem like no other single company, but they ab...
2
0
2023-03-29T18:40:35
friedrichvonschiller
false
null
0
je6a7zy
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/je6a7zy/
false
2
t1_je69ftf
[deleted]
1
0
2023-03-29T18:35:38
[deleted]
true
2023-03-29T21:16:31
0
je69ftf
false
/r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je69ftf/
false
1
t1_je68akt
[https://cdn1.frocdn.ch/KJJw3ZIHhyqzolX.xz](https://cdn1.frocdn.ch/KJJw3ZIHhyqzolX.xz) [https://files.catbox.moe/jyjrof.xz](https://files.catbox.moe/jyjrof.xz) Someone removed the "ethics" bullshit lines; we can train on those :D https://preview.redd.it/lbkv74lzirqa1.png?width=2534&for...
5
0
2023-03-29T18:28:15
Wonderful_Ad_5134
false
null
0
je68akt
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je68akt/
false
5
t1_je66wzl
I hope someone can do one based on GPT-4. It seems to use those disclaimers more sparingly and appropriately.
1
0
2023-03-29T18:19:32
iJeff
false
null
0
je66wzl
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je66wzl/
false
1
t1_je66eq6
Yeah. One of the things that impressed me with Alpaca 13B was the simple, concise, and opinionated answers. I asked it if whales tasted good and it said no, because "Whales are too big and their meat is too tough."
4
0
2023-03-29T18:16:14
synn89
false
null
0
je66eq6
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je66eq6/
false
4
t1_je65wmm
Can this be done on open-source LLMs like Cerebras?
1
0
2023-03-29T18:13:03
PM_ME_ENFP_MEMES
false
null
0
je65wmm
false
/r/LocalLLaMA/comments/1259yxg/llamaadapter_efficient_finetuning_of_llama/je65wmm/
false
1
t1_je606um
capacity?
1
0
2023-03-29T17:36:58
Sixhaunt
false
null
0
je606um
false
/r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/je606um/
false
1
t1_je5wldl
It has integrated Alpaca?
2
0
2023-03-29T17:14:48
SnooWoofers780
false
null
0
je5wldl
false
/r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je5wldl/
false
2
t1_je5vqkc
[removed]
-1
0
2023-03-29T17:09:24
[deleted]
true
null
0
je5vqkc
false
/r/LocalLLaMA/comments/125q5zt/are_you_accepting_donations/je5vqkc/
false
-1
t1_je5uh8b
Any chance we'll see this as a 13b 4-bit 128g model?
3
0
2023-03-29T17:01:28
whitepapercg
false
null
0
je5uh8b
false
/r/LocalLLaMA/comments/1259yxg/llamaadapter_efficient_finetuning_of_llama/je5uh8b/
false
3
t1_je5tfic
What capacities did you use for training?
1
0
2023-03-29T16:54:51
whitepapercg
false
null
0
je5tfic
false
/r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/je5tfic/
false
1
t1_je5m0iz
[removed]
1
0
2023-03-29T16:07:46
[deleted]
true
null
0
je5m0iz
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je5m0iz/
false
1
t1_je5lcyr
Thanks, I'll give this a go!
2
0
2023-03-29T16:03:31
ThePseudoMcCoy
false
null
0
je5lcyr
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je5lcyr/
false
2
t1_je5i972
8-bit worked out of the box (still had to manually edit the tokenizer JSON file to change "LLama" to "Llama"). Going to try 4-bit later.
4
0
2023-03-29T15:43:42
aerilyn235
false
null
0
je5i972
false
/r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je5i972/
false
4
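The manual edit mentioned above can be scripted; here is a sketch under assumptions: the file name, path, and the `tokenizer_class` key reflect common LLaMA checkpoints of that era (the misspelled class was typically "LLaMATokenizer"), so adjust to whatever your checkpoint actually contains.

```python
# Patch the misspelled tokenizer class name in a LLaMA checkpoint's
# tokenizer config so transformers can load it. Path is hypothetical.
import json
from pathlib import Path

config_path = Path("models/llama-7b/tokenizer_config.json")  # hypothetical path
config = json.loads(config_path.read_text())

if config.get("tokenizer_class") == "LLaMATokenizer":
    config["tokenizer_class"] = "LlamaTokenizer"
    config_path.write_text(json.dumps(config, indent=2))
    print("patched tokenizer_class")
else:
    print("nothing to patch:", config.get("tokenizer_class"))
```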
t1_je5c4u1
I can load 30B on 32GB of RAM with an RTX 4090. I need to limit the max prompt size to prevent performance drops. Thinking about picking up another 32GB of RAM to see how much it helps.
1
0
2023-03-29T15:04:07
KriyaSeeker
false
null
0
je5c4u1
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je5c4u1/
false
1
t1_je5bagu
The dataset is garbage with all those moronic "ethics" answers that are present because they trained with ChatGPT 3.5, a.k.a. "the prude AI". Fortunately a man of culture on 4chan cleaned this shit, and we now have a good dataset that could be trained natively with the LLaMA models. [https://cdn1.frocdn.ch/KJJw3ZIHhyqzol...
9
0
2023-03-29T14:58:32
Wonderful_Ad_5134
false
2023-03-30T02:20:35
0
je5bagu
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je5bagu/
false
9
t1_je5987j
This honestly works worse than the OG Alpaca model, and it also refuses to answer a bunch of questions that don't fit its ethics. Not sure what the hype is here, but I feel like people should just stick to the Cleaned Alpaca Dataset if they're going to fine-tune new models.
5
0
2023-03-29T14:44:56
violent_cat_nap
false
null
0
je5987j
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je5987j/
false
5
t1_je55c4h
I'm sorry, that was supposed to be a joke, and it wasn't clear in the context. There will be better models someday. This is where home LLM users are congregating. It'll be a misnomer someday. :D
2
0
2023-03-29T14:18:23
friedrichvonschiller
false
null
0
je55c4h
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je55c4h/
false
2
t1_je54hxm
Thanks, I had misunderstood that.
1
0
2023-03-29T14:12:32
MentesInquisitivas
false
null
0
je54hxm
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je54hxm/
false
1
t1_je54cv4
Thanks for the clarification!
1
0
2023-03-29T14:11:33
MentesInquisitivas
false
null
0
je54cv4
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je54cv4/
false
1
t1_je534rx
I don't think LLaMA is the "future" of this space; the future will be the OpenAssistant model, which will be fully open source and hopefully just as good as Alpaca.
3
0
2023-03-29T14:02:54
Tystros
false
null
0
je534rx
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je534rx/
false
3
t1_je51o47
It’s a bit weird, because I can load the 13B with 16GB of RAM.
1
0
2023-03-29T13:52:26
Necessary_Ad_9800
false
null
0
je51o47
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je51o47/
false
1
t1_je515nz
It’s so hilarious how one of the maintainers said in October: > prepare to be pleasantly surprised And now it’s almost April and support is still nowhere to be seen.
2
0
2023-03-29T13:48:43
Tiefbegabt
false
null
0
je515nz
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/je515nz/
false
2
t1_je50cz0
I don't think it would be. Like I said, I couldn't get it to work in 64GB with an additional 11GB of VRAM. However, it's possible that I could've messed with the configuration more and gotten it to work.
1
0
2023-03-29T13:42:51
the_quark
false
null
0
je50cz0
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je50cz0/
false
1
t1_je4zcwp
Thanks, I can’t load the model on 16GB of RAM. I wonder if 32 will be enough...
1
0
2023-03-29T13:35:24
Necessary_Ad_9800
false
null
0
je4zcwp
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je4zcwp/
false
1
t1_je4z3nk
I have an RTX 3090 with 24GB of VRAM and 64GB of system RAM. I'm getting six-line responses in about 30 seconds, though I did have to drop the max prompt size from 2048 tokens to 1024 to get reasonable performance out of it (limiting the length of the bot's history and context). I upgraded from an RTX 2080 Ti with 11GB of V...
2
0
2023-03-29T13:33:27
the_quark
false
2023-03-29T13:36:34
0
je4z3nk
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je4z3nk/
false
2
t1_je4y3wn
How much RAM do you have?
1
0
2023-03-29T13:25:51
Necessary_Ad_9800
false
null
0
je4y3wn
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je4y3wn/
false
1
t1_je4xzi5
I don’t really understand; can you share a character or conversation screenshot to provide more context?
2
0
2023-03-29T13:24:56
Necessary_Ad_9800
false
null
0
je4xzi5
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je4xzi5/
false
2
t1_je4w80q
Weird... It theoretically should be faster, but it probably lacks optimization.
1
0
2023-03-29T13:11:05
pkuba208
false
null
0
je4w80q
false
/r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/je4w80q/
false
1
t1_je4vn5j
Believe it or not, if you have experience with Linux, doing things through bash is much faster, easier, and more reliable than using a Windows GUI. You do have to overcome your fear of the command line, tho.
1
0
2023-03-29T13:06:26
GoSouthYoungMan
false
null
0
je4vn5j
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je4vn5j/
false
1
t1_je4vg8g
I tried it with 16B and 30B. It often gets stuck repeating letters and is a lot slower to generate than 4-bit.
2
0
2023-03-29T13:04:53
pearax
false
null
0
je4vg8g
false
/r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/je4vg8g/
false
2
t1_je4scpw
Need some benchmark and comparison results.
1
0
2023-03-29T12:39:38
irfantogluk
false
null
0
je4scpw
false
/r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/je4scpw/
false
1
t1_je4robj
Yeah, I tried to do it in my head and missed a zero. How can I ever recover from this.
5
0
2023-03-29T12:33:52
ckkkckckck
false
null
0
je4robj
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je4robj/
false
5
t1_je4o83b
I haven't tried it yet, but I'm hoping this thing works. I just want everyone to experience this technology ❤️
3
0
2023-03-29T12:02:55
Inevitable-Start-653
false
null
0
je4o83b
false
/r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/je4o83b/
false
3
t1_je4krhm
Don't worry about it too much. You can check the health of an SSD with `brew install smartmontools` and `smartctl -a /dev/disk0`.
6
0
2023-03-29T11:28:38
IonizedRay
false
null
0
je4krhm
false
/r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/je4krhm/
false
6
t1_je4gx09
very nice
2
0
2023-03-29T10:45:02
CellWithoutCulture
false
null
0
je4gx09
false
/r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je4gx09/
false
2
t1_je4gdi2
I love this base prompt https://preview.redd.it/isptsqc57pqa1.png?width=1892&format=png&auto=webp&v=enabled&s=406c11d53217afcc5977ef85be453e749e38924f
2
0
2023-03-29T10:38:19
goatsdontlie
false
null
0
je4gdi2
false
/r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je4gdi2/
false
2
t1_je4fta9
I'd be interested to know how a really good Ryzen CPU with a fast motherboard and memory compares to the Nvidia GPUs for inference, especially for the 30B models.
2
0
2023-03-29T10:31:08
ambient_temp_xeno
false
null
0
je4fta9
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/je4fta9/
false
2
t1_je4fp17
Special tokens could be an option, but it would probably cause some incompatibilities between different local models. For a fully integrated assistant, it would most likely be the best option.
1
0
2023-03-29T10:29:37
goatsdontlie
false
null
0
je4fp17
false
/r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je4fp17/
false
1
t1_je4fcl3
Yeah, that's what I would try. Maybe use uncommon or special tokens if needed. It's a good way to avoid hallucination, imo.
2
0
2023-03-29T10:25:03
CellWithoutCulture
false
null
0
je4fcl3
false
/r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je4fcl3/
false
2
t1_je4dgco
I think it is probably deeply fine-tuned to add sources in a machine-recognizable format (something like markdown URL links), which are then parsed in the UI afterwards. Sources are probably added in a "common" format in the initial prompt. It should not be too hard to implement something similar in langchain and ...
2
0
2023-03-29T09:59:07
goatsdontlie
false
2023-03-29T11:47:09
0
je4dgco
false
/r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/je4dgco/
false
2
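A toy sketch of the parsing step that comment describes: pulling markdown-style source links out of a model response so a UI could render them as citations. The response text here is made up for illustration.

```python
# Extract markdown links of the form [label](url) from model output.
import re

response = (
    "The 7B model was trained on 1T tokens "
    "[source](https://example.com/llama-paper)."
)

MD_LINK = re.compile(r"\[([^\]]+)\]\((https?://[^)]+)\)")

for label, url in MD_LINK.findall(response):
    print(f"citation: {label} -> {url}")
```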
t1_je4dbsl
Oh so that's why I'm so tired: my brain is rewiring some language area to more effectively communicate with LLaMA. This is fine.
5
0
2023-03-29T09:57:23
ambient_temp_xeno
false
null
0
je4dbsl
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je4dbsl/
false
5
t1_je4c4cy
I mean a model that can browse the internet (even without image reading); is this feature hard to implement? It's been two weeks since Alpaca happened, so I think there was enough time to do this, or does the open-source community not care about this?
1
0
2023-03-29T09:40:10
nillouise
false
null
0
je4c4cy
false
/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je4c4cy/
false
1
t1_je4bnmv
How much RAM do you need for native training of either 7B or 13B? Is it doable with one or two 3090s?
3
0
2023-03-29T09:33:22
2muchnet42day
false
null
0
je4bnmv
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je4bnmv/
false
3
t1_je4as3h
Yes. I checked and indeed, despite the error, it loads into VRAM. I will ignore the error then. Thank you!
3
0
2023-03-29T09:20:16
nufeen
false
null
0
je4as3h
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je4as3h/
false
3
t1_je485zu
Really? Now I am scared.
1
0
2023-03-29T08:40:42
ma-2022
false
null
0
je485zu
false
/r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/je485zu/
false
1
t1_je47sqa
Not in the foreseeable future, if benchmarks are any indication - its 13B model is not on par with LLaMA 7B.
2
0
2023-03-29T08:35:17
BalorNG
false
null
0
je47sqa
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je47sqa/
false
2
t1_je47hs4
I've read claims that some sort of phase shift, where the model becomes capable of effective self-reflection (if you ask it to, tho), happens around 20B parameters. But I'm sure that is going to depend on a ton of other settings, like hyperparameters and the dataset.
1
0
2023-03-29T08:30:41
BalorNG
false
null
0
je47hs4
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je47hs4/
false
1
t1_je474h3
It does seem a bit pointless for the average user. The worst of both worlds: try it now!
6
0
2023-03-29T08:25:07
ambient_temp_xeno
false
null
0
je474h3
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je474h3/
false
6
t1_je46lsd
I don't have an exact measure, but I'd estimate something like 0.3 to 0.5 seconds per token. A bit slow to read aloud, but it actually feels pretty fast. The only thing is that it takes like 5 to 10 seconds to start up.
1
0
2023-03-29T08:17:26
spectrachrome
false
null
0
je46lsd
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je46lsd/
false
1
t1_je46j3c
Just a heads up, I still get the same error that you're getting, but the model does load into VRAM and use the GPU now if I wait long enough. As long as you don't see the "The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable." message (which...
3
0
2023-03-29T08:16:19
Zesty-Fruits
false
null
0
je46j3c
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je46j3c/
false
3
t1_je468f8
Careful, you can kill your SSD real quick.
2
0
2023-03-29T08:11:52
BalorNG
false
null
0
je468f8
false
/r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/je468f8/
false
2
t1_je44m5c
I also have this problem, but the fix above did not work for me. It's strange, because a week ago I successfully installed the Oobabooga webui in WSL by following the instructions for WSL/Linux. That installation still works in my other WSL2 distribution. I wanted to update it, but decided to keep the previous version...
2
0
2023-03-29T07:48:10
nufeen
false
null
0
je44m5c
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je44m5c/
false
2
t1_je41b4a
I recently bought a cheap used Dell RTX 3090 for about $680 and put it in my server with a 10900K CPU and 128GB of RAM.
2
0
2023-03-29T07:01:09
Nondzu
false
null
0
je41b4a
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je41b4a/
false
2
t1_je410xz
Yeah, it seems from the specs there that Cerebras 13B *almost* catches up to LLaMa 7B in a couple tests, but otherwise it's a clean sweep -- and that's just 7B. Now, if you *need* one of these models for commercial usage or a project that requires a very permissive license, then it might be your only choice and I'm gl...
4
0
2023-03-29T06:57:19
AI-Pon3
false
null
0
je410xz
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je410xz/
false
4
t1_je40q4y
This sounds good. Could you please release (your changes to) the weights as XOR-encrypted diffs, like point-alpaca did, so that we can try it out? I would prefer to try out the proper model, not an inferior version of it. [https://github.com/pointnetwork/point-alpaca](https://github.com/pointnetwork/point-alpaca)
3
0
2023-03-29T06:53:23
sswam
false
null
0
je40q4y
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je40q4y/
false
3
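A minimal sketch of the XOR-diff idea point-alpaca used: the released file is (original XOR finetuned), so recovering the finetuned weights requires already having the original LLaMA weights. Real checkpoints are multi-GB binaries; short bytes objects stand in for them here.

```python
# XOR two equal-length byte strings; XORing the diff with the original
# recovers the finetuned bytes, since a ^ (a ^ b) == b.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "files must be the same length"
    return bytes(x ^ y for x, y in zip(a, b))

original = b"\x01\x02\x03\x04"   # stand-in for the original weights
finetuned = b"\x11\x12\x13\x14"  # stand-in for the finetuned weights

diff = xor_bytes(original, finetuned)  # what gets distributed
recovered = xor_bytes(original, diff)  # what the end user reconstructs
assert recovered == finetuned
print("recovered finetuned weights from XOR diff")
```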
t1_je40ot8
Locally and readily accessible, but like Bing Chat? There's nothing like that currently.
1
0
2023-03-29T06:52:54
Civil_Collection7267
false
null
0
je40ot8
false
/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je40ot8/
false
1
t1_je40ejr
>Unfortunately, I don't see this as having as much potential as LLaMa-based models for local usage. [https://i.redd.it/dnzh28eg7jqa1.png](https://i.redd.it/dnzh28eg7jqa1.png) I agree, I doubt these models will have widespread use. Reposting what I commented in another sub: >It's great to have smaller models, b...
7
0
2023-03-29T06:49:06
Civil_Collection7267
false
null
0
je40ejr
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je40ejr/
false
7
t1_je3y8j7
Unfortunately, I don't see this as having as much potential as LLaMa-based models for local usage. The data in the article states they're following the rule of 20 tokens per parameter, which is "optimal" in terms of loss achieved per compute -- that assumes of course that increasing the model size isn't a big deal. W...
11
0
2023-03-29T06:20:29
AI-Pon3
false
null
0
je3y8j7
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3y8j7/
false
11
t1_je3xzwo
I can't agree more, at least on LLaMA. I just upgraded my hardware and went from 13B to 30B, and the difference is enormous. So much easier to keep a conversation going.
1
0
2023-03-29T06:17:20
the_quark
false
null
0
je3xzwo
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3xzwo/
false
1
t1_je3xvlr
Does it contain some instructions that are critical for correct deployment? It's good that you're trying to create a guide, but while I'm way too old for "tiktok culture", wasting an hour on what could be explained in a list of bullet points frankly sucks. To be fair, I've wasted more on other wiki guides that didn't work eithe...
2
0
2023-03-29T06:15:48
BalorNG
false
null
0
je3xvlr
false
/r/LocalLLaMA/comments/123pp3k/wellfrick/je3xvlr/
false
2
t1_je3xv00
[removed]
1
0
2023-03-29T06:15:35
[deleted]
true
null
0
je3xv00
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3xv00/
false
1
t1_je3xt8b
I mean a local LLM model, one that I can run locally.
1
0
2023-03-29T06:14:57
nillouise
false
null
0
je3xt8b
false
/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je3xt8b/
false
1
t1_je3xdau
You added a zero; 1.4 trillion / 65 billion = 21.54 On the other hand, their 33B model was trained on ~42 tokens per parameter, and that number increases to ~77 for the 13B model and ~143 for the 7B model.
7
0
2023-03-29T06:09:17
AI-Pon3
false
null
0
je3xdau
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3xdau/
false
7
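The ratios quoted in that correction can be checked in a few lines, using the LLaMA paper's training-token counts (1.4T tokens for the 65B and 33B models, 1T for 13B and 7B):

```python
# Tokens per parameter for each LLaMA size, reproducing the figures
# in the comment above: ~21.5, ~42, ~77, ~143.
training = {
    "65B": (1.4e12, 65e9),
    "33B": (1.4e12, 33e9),
    "13B": (1.0e12, 13e9),
    "7B": (1.0e12, 7e9),
}

for model, (tokens, params) in training.items():
    print(f"{model}: {tokens / params:.1f} tokens per parameter")
```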
t1_je3xc1x
ChatGPT 3.5, ChatGPT 4, and Bing AI do.
1
0
2023-03-29T06:08:50
irfantogluk
false
null
0
je3xc1x
false
/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/je3xc1x/
false
1
t1_je3x08e
Absolutely! It's unfortunate they chose to restrict access to valuable research and data.
3
0
2023-03-29T06:04:40
Tell_MeAbout_You
false
null
0
je3x08e
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/je3x08e/
false
3
t1_je3w2jw
Hmmm, clever!
2
0
2023-03-29T05:53:05
Famberlight
false
null
0
je3w2jw
false
/r/LocalLLaMA/comments/124w1fi/where_can_i_find_characters_that_were_made_by/je3w2jw/
false
2
t1_je3vy9f
It's the opposite, actually: they're following the Chinchilla formula of 20 tokens per parameter. So it's fewer tokens per parameter than LLaMA. LLaMA has an absurdly high token count per parameter, like 10x more than Chinchilla recommends. I just calculated it for LLaMA 65B and it comes to around 214 tokens per parameter. It's so hi...
5
0
2023-03-29T05:51:39
ckkkckckck
false
2023-03-29T12:34:23
0
je3vy9f
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3vy9f/
false
5
t1_je3us0y
They claim to use fewer tokens per parameter, not more. That's why their models are significantly less capable than LLaMA at the same number of parameters.
2
0
2023-03-29T05:37:43
Tystros
false
null
0
je3us0y
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3us0y/
false
2
t1_je3tb4j
I like the part where it claimed that Bert and Ernie were Simpsons characters voiced by Tress MacNeille and Harry Shearer.
3
0
2023-03-29T05:20:34
Dwedit
false
null
0
je3tb4j
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je3tb4j/
false
3
t1_je3t8p4
OK, change of plans! [https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced](https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced) Looks like a better native model is in town. Good luck, soldier!! :D
2
0
2023-03-29T05:19:49
Wonderful_Ad_5134
false
null
0
je3t8p4
false
/r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/je3t8p4/
false
2
t1_je3t68m
Hey, thanks so much for putting together this excellent guide! Unfortunately, I'm running into a CUDA problem when it comes to actually launching the webUI. Getting this message: CUDA SETUP: WARNING! libcuda.so not found! Do you have a CUDA driver installed? If you are on a cluster, make sure you are on a CUD...
3
0
2023-03-29T05:19:00
Zesty-Fruits
false
2023-03-29T05:30:54
0
je3t68m
false
/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/je3t68m/
false
3
t1_je3svzu
From the [GPT4All Technical Report](https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf): >We train several models finetuned from an instance of LLaMA 7B (Touvron et al., 2023). The model associated with our initial public release is trained with LoRA (Hu et al., 2021) on the 437,60...
3
0
2023-03-29T05:15:47
friedrichvonschiller
false
null
0
je3svzu
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/je3svzu/
false
3
t1_je3sgvy
Successively closer approximations, I would say. We're homing in on it.
1
0
2023-03-29T05:11:02
friedrichvonschiller
false
null
0
je3sgvy
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je3sgvy/
false
1
t1_je3saa7
The Victorians used to think the human brain was a steam engine. In the 1970s, people thought the human brain was a computer, with subroutines and tuples. Now we think the human brain is a large language model.
4
0
2023-03-29T05:09:02
noellarkin
false
null
0
je3saa7
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/je3saa7/
false
4
t1_je3s0o0
We have to rename the subreddit already?
2
0
2023-03-29T05:06:09
friedrichvonschiller
false
null
0
je3s0o0
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3s0o0/
false
2
t1_je3ryey
I'm glad you appreciated it! I totally agree: all LLaMA wants is a proper introduction. On the theme, I had never realized how poorly I communicated until I tried chatting with LLaMA 13B. I'm in phraseological rehab right now. Send flowers.
6
0
2023-03-29T05:05:28
friedrichvonschiller
false
null
0
je3ryey
false
/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/je3ryey/
false
6
t1_je3rwhh
They claim to be using far more tokens per parameter, which in theory should allow them to achieve similar performance with fewer parameters.
2
0
2023-03-29T05:04:52
MentesInquisitivas
false
null
0
je3rwhh
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3rwhh/
false
2
t1_je3rbjg
Currently I have filtered and converted the QuAC and SQuAD datasets into the Alpaca format, along with datasets for all the Elder Scrolls books, artifacts, creatures, factions, flora, gods, NPCs, and places. I tested with just the Alpaca data plus the books, and it gave a good Alpaca version that loosely understood the wo...
1
0
2023-03-29T04:58:29
Sixhaunt
false
null
0
je3rbjg
false
/r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/je3rbjg/
false
1
t1_je3r8fr
I hope they’re working on a 30B model. From my limited experience with LLaMA and Alpaca, I feel that's where the magic begins to happen.
9
0
2023-03-29T04:57:33
R009k
false
null
0
je3r8fr
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/je3r8fr/
false
9