name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_jen0zpz
Thank you! Is there any way to generate datasets from the documents I already have? I have the GPT-4 API; is there any way to ask GPT-4 to create a dataset? (A sketch of one approach follows this entry.)
1
0
2023-04-02T08:26:28
mevskonat
false
null
0
jen0zpz
false
/r/LocalLLaMA/comments/129ch6l/is_there_any_way_to_do_embedding_with_localllama/jen0zpz/
false
1
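One possible answer to the question above, sketched with the `openai` Python package; the prompt wording, the `document_to_qa_pairs` helper, and the five-pairs-per-document choice are illustrative assumptions, not anything from the thread:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def document_to_qa_pairs(document_text: str) -> str:
    """Ask GPT-4 to turn one document into finetuning-style Q&A pairs."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Write five question/answer pairs covering the user's "
                        "document, one JSON object per line with 'question' "
                        "and 'answer' keys."},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content
```

Running that over each document and concatenating the outputs gives a rough instruction-tuning dataset.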
t1_jen001w
This is completely wrong. The ".pt" extension is a PyTorch convention for storing "pickled" tensors. It usually contains complete weights and can also carry arbitrary Python code, so it's convenient for the scientific community but a security nightmare for widespread use (a small demo of the risk follows this entry). LLaMA was first "released" in this format. The bin...
10
0
2023-04-02T08:12:40
lacethespace
false
2023-04-02T08:16:37
0
jen001w
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jen001w/
false
10
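The comment above cuts off, but the pickle risk it describes is easy to demonstrate. A minimal, self-contained illustration; the `Exploit` class and the echoed message are invented for the demo:

```python
import os
import pickle

# Unpickling runs whatever callable __reduce__ names, which is why loading
# an untrusted .pt/.pkl file can execute arbitrary code on your machine.
class Exploit:
    def __reduce__(self):
        return (os.system, ("echo this ran just because you loaded a pickle",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # executes os.system(...) as a side effect of loading
```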
t1_jemzhuv
I've heard different things about LoRA and PEFT from different people. Some say they're the same, some even say LoRA is better, and some say it's not as good. Can you definitively say that better results are produced by full finetuning?
1
0
2023-04-02T08:05:49
Pan000
false
null
0
jemzhuv
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jemzhuv/
false
1
t1_jemz56j
>Thanks! 👍

You're welcome!
1
0
2023-04-02T08:01:03
exclaim_bot
false
null
0
jemz56j
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jemz56j/
false
1
t1_jemz48h
Thanks! 👍
1
0
2023-04-02T08:00:43
Pan000
false
null
0
jemz48h
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jemz48h/
false
1
t1_jemxiwd
[deleted]
2
0
2023-04-02T07:39:48
[deleted]
true
null
0
jemxiwd
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jemxiwd/
false
2
t1_jemxed7
Yes, I know that. But do we actually know how related both models are in reality? Nobody has researched how the two models differ from each other. Both have been trained on the same corpus and they have a similar structure. Wouldn't it be possible that there is some kind of correlation between these models so it could...
1
0
2023-04-02T07:38:14
Ok-Scarcity-7875
false
2023-04-02T09:27:18
0
jemxed7
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jemxed7/
false
1
t1_jemwhrp
[deleted]
2
0
2023-04-02T07:26:56
[deleted]
true
2023-06-10T01:10:16
0
jemwhrp
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jemwhrp/
false
2
t1_jemwaut
I installed a 13B 'version' a week+ ago. I've tried to use a 30B but to no avail. My question is whether it's possible (now) to run a 30B model when I have 32GB RAM and 8GB VRAM? The version I have installed is a "docker" desktop version and uses CPU only. Is the solution to somehow ultra-manage the pc to only use absolu...
7
0
2023-04-02T07:24:29
sEi_
false
null
0
jemwaut
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jemwaut/
false
7
t1_jemvfgy
If you're looking to use LLaMA like GPT-4 and effectively read very long legal documents, that's not possible given the 2048-token context length.

>If we are to fine tune llama, are there any law-focused datasets that we can use for QnA training?

None at the moment, but you can create a dataset and use that to finetu...
1
0
2023-04-02T07:13:42
Civil_Collection7267
false
null
0
jemvfgy
false
/r/LocalLLaMA/comments/129ch6l/is_there_any_way_to_do_embedding_with_localllama/jemvfgy/
true
1
t1_jemurkz
Oh man. Thanks :D Edit: I have an Nvidia GPU and I grabbed the CUDA version, the normal one, not ggml. It works in textgen.
1
0
2023-04-02T07:05:37
artificial_genius
false
2023-04-03T05:24:35
0
jemurkz
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jemurkz/
false
1
t1_jemu0sw
When I first set up text-generation-webui with llama-7b, it immediately went on one tangent after another. And honestly, this is exactly why I love this LLM. Tangents like this are just fucking hilarious xD Whether it's LLaMA or Alpaca, they are so much fun to watch become fully derailed. Thanks for sharing, was a goo...
7
0
2023-04-02T06:56:31
IngwiePhoenix
false
null
0
jemu0sw
false
/r/LocalLLaMA/comments/128ror7/ok_this_is_not_really_funny_but_i_couldnt_help/jemu0sw/
false
7
t1_jems6gf
LoRAs are much smaller than full models and can be trained easily.
1
0
2023-04-02T06:33:47
Sixhaunt
false
null
0
jems6gf
false
/r/LocalLLaMA/comments/129ch6l/is_there_any_way_to_do_embedding_with_localllama/jems6gf/
false
1
t1_jemqli5
[deleted]
3
0
2023-04-02T06:13:08
[deleted]
true
null
0
jemqli5
false
/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jemqli5/
false
3
t1_jemnzdt
If you end up doing this I'd love to see a detailed write-up on it. So far the explanations of finetuning have left me a bit confused, since there seem to be competing ways to do it. :)
5
0
2023-04-02T05:40:32
teachersecret
false
null
0
jemnzdt
false
/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jemnzdt/
false
5
t1_jemnx87
This explains why we like music. I can [feel the beat](https://www.researchgate.net/publication/267102366_Rhythmic_complexity_and_predictive_coding_A_novel_approach_to_modeling_rhythm_and_meter_perception_in_music). [Steven Pinker was close, but wrong](https://humanitiescenter.byu.edu/is-music-simply-auditory-cheesec...
1
0
2023-04-02T05:39:49
friedrichvonschiller
false
2023-04-06T04:57:43
0
jemnx87
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jemnx87/
false
1
t1_jemmtum
Huggingface is what you're looking for
1
0
2023-04-02T05:27:03
ckkkckckck
false
null
0
jemmtum
false
/r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jemmtum/
false
1
t1_jemlmsm
Links & param count?
5
0
2023-04-02T05:13:57
Wroisu
false
null
0
jemlmsm
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jemlmsm/
false
5
t1_jemlhx5
Those reviews look faaaaaast
1
0
2023-04-02T05:12:24
claygraffix
false
null
0
jemlhx5
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jemlhx5/
false
1
t1_jeml2n9
awesome, thanks!
2
0
2023-04-02T05:07:37
Sixhaunt
false
null
0
jeml2n9
false
/r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jeml2n9/
false
2
t1_jemjmw8
They probably meant to say use *llama.cpp* to run Alpaca and LLaMA.

>which do you recommend creativity and power wise?

For creativity, use the standard 30B LLaMA. The [Getting Started](https://www.reddit.com/r/LocalLLaMA/wiki/index) page of the wiki has some general tips. If you're trying to use the web UI and you d...
4
0
2023-04-02T04:51:45
Technical_Leather949
false
null
0
jemjmw8
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jemjmw8/
false
4
t1_jemixls
You only have to download one model. The models directory section in this guide shows the links to models.
1
0
2023-04-02T04:44:15
Technical_Leather949
false
null
0
jemixls
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jemixls/
false
1
t1_jemirgt
[https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g) It's the 4-bit version of chavinlo's GPT4 and Alpaca finetune, uploaded by someone else recently.
2
0
2023-04-02T04:42:20
Technical_Leather949
false
null
0
jemirgt
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jemirgt/
false
2
t1_jemid0y
Trust me, what you need to run is GPT4 x Alpaca. It's just far superior to Alpaca, it can code!!! It's the true ChatGPT in your pocket, and the hype is real about this one.
6
0
2023-04-02T04:38:01
PsychologicalSock239
false
null
0
jemid0y
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jemid0y/
false
6
t1_jemh8jw
I forget what the fastest RAID level is, but these are fast. https://www.amazon.com/gp/aw/d/B09F5P2JT8/ref=ox_sc_act_image_1
2
0
2023-04-02T04:25:51
artificial_genius
false
null
0
jemh8jw
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jemh8jw/
false
2
t1_jemfjy3
Literally what I was suggesting! :-)
1
0
2023-04-02T04:08:28
randomqhacker
false
null
0
jemfjy3
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jemfjy3/
false
1
t1_jemffc2
[Datasets](https://huggingface.co/datasets) too. Hugging Face is the place.
6
0
2023-04-02T04:07:12
friedrichvonschiller
false
null
0
jemffc2
false
/r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jemffc2/
false
6
t1_jemezqv
Most models seem to be on Huggingface
3
0
2023-04-02T04:02:53
Yharnam_FM
false
null
0
jemezqv
false
/r/LocalLLaMA/comments/1298rat/is_there_a_good_place_to_post_datasets_for_the/jemezqv/
false
3
t1_jemas7k
I have been considering a Jetson Orin Nano 8GB or an NX 16GB as a contained IoT LLaMA instance, but I'm not sure. Limited money makes me leery to jump. I was trying to get LLaMA on the older Nano I have; llama.cpp built perfectly, but the PyTorch packages did not seem to work anymore. I may not have had Jetpack 4.6.1, but t...
2
0
2023-04-02T03:23:38
SlavaSobov
false
null
0
jemas7k
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jemas7k/
false
2
t1_jem8hdy
Is this the one that got leaked? Do you only need to download one version and not the entire 218 GBs?
1
0
2023-04-02T03:03:21
ScotChattersonz
false
null
0
jem8hdy
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jem8hdy/
false
1
t1_jem88a3
Thank you for this quick rundown. I'm still git'in the model but I grabbed another one (or two) and got it running with the oobabooga one-click. Running very slowly, but I can try and chisel that down now. For anyone else wanting the one-click oobabooga installer it's located here: [https://github.com/oobabooga/text-...
3
0
2023-04-02T03:01:11
nDeconstructed
false
null
0
jem88a3
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jem88a3/
false
3
t1_jem674w
Is there a good install guide for the 65B version?
2
0
2023-04-02T02:43:43
MAXXSTATION
false
null
0
jem674w
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jem674w/
false
2
t1_jem62kv
It's not possible, because each model would have learned different things in its own way; it's like trying to fit a square into a round hole. You will just break stuff.
1
0
2023-04-02T02:42:37
LowSpecDev972
false
null
0
jem62kv
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jem62kv/
false
1
t1_jem5ctq
It's the same amount of RAM, it just gets reported by the OS differently. If you can't fit the whole model into memory, then it's going to have to repeatedly load data from the disk, which will be very slow (some back-of-envelope numbers follow this entry). Basically the same as if you made a big swap file and then tried to load a model bigger than your memory. /u/Fr...
4
0
2023-04-02T02:36:25
KerfuffleV2
false
null
0
jem5ctq
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jem5ctq/
false
4
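Some back-of-envelope numbers behind the comment above (approximations only; real usage adds context and runtime overhead on top of the weights):

```python
def weight_footprint_gib(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight size in GiB: parameter count times bytes per parameter."""
    return params_billion * 1e9 * (bits_per_param / 8) / 2**30

print(weight_footprint_gib(30, 4))  # ~14 GiB: a 4-bit 30B model fits in 32 GB of RAM
print(weight_footprint_gib(65, 4))  # ~30 GiB: too big for 16 GB, hence the constant disk reads
```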
t1_jem5aen
and he's made a good comparison too!
8
0
2023-04-02T02:35:51
sfhsrtjn
false
null
0
jem5aen
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jem5aen/
false
8
t1_jem1wfb
Is there some kind of step-by-step guide for the 30B? Already got gpt4all running with the restricted model, need to test it with the unrestricted one later on.
1
0
2023-04-02T02:07:13
MAXXSTATION
false
null
0
jem1wfb
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jem1wfb/
false
1
t1_jelya2x
https://preview.redd.it/…keep up somehow.
6
0
2023-04-02T01:37:06
sswam
false
null
0
jelya2x
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelya2x/
false
6
t1_jeltjy4
I'm testing it. It sometimes works, but hangs a lot.
1
0
2023-04-02T00:58:08
anotherfakeloginname
false
null
0
jeltjy4
false
/r/LocalLLaMA/comments/125u56p/colossalchat/jeltjy4/
false
1
t1_jelti9k
[deleted]
1
0
2023-04-02T00:57:46
[deleted]
true
null
0
jelti9k
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelti9k/
false
1
t1_jelsf69
>Ada Lovelace A6000 48GB

My current PC is pretty close to what you have in your list, the differences being I only have 32GB at the moment, a 3900x CPU, and the 4090. I'm going to upgrade to the same 128GB 3600 RAM you have listed. I have one of those ASUS Hyper M.2 x16 Gen 4 Card adapters, so I could make a really fast...
2
0
2023-04-02T00:48:57
Unhappy_Donut_8551
false
null
0
jelsf69
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jelsf69/
false
2
t1_jelr1bd
interstellar-mcconaughey-crying.gif
3
0
2023-04-02T00:38:00
tataragato
false
null
0
jelr1bd
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelr1bd/
false
3
t1_jelpnln
I think an easier way to do this would be to store each day's conversation in a log file that has a summary included (a code sketch follows this entry). When you ask your LLM a question, depending on how it's phrased, it can search the summaries of all the log files for the closest match of relevant information, then apply the summary of the most relevant l...
3
0
2023-04-02T00:27:01
Nezarah
false
null
0
jelpnln
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jelpnln/
false
3
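A rough sketch of the log-plus-summary idea in the comment above, under two assumptions it doesn't state: each day's log is a text file whose first line is its summary, and "closest match" is plain word overlap rather than an embedding search:

```python
from pathlib import Path
from typing import Optional

def best_matching_log(question: str, log_dir: str = "logs") -> Optional[Path]:
    """Pick the daily log whose summary shares the most words with the question."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for log in sorted(Path(log_dir).glob("*.txt")):
        summary = log.read_text().splitlines()[0]  # assumed convention: line 1 is the summary
        score = len(q_words & set(summary.lower().split()))
        if score > best_score:
            best, best_score = log, score
    return best  # the winning log's summary (or full text) goes into the LLM's context
```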
t1_jelp6r8
No prob!
2
0
2023-04-02T00:23:14
ThePseudoMcCoy
false
null
0
jelp6r8
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelp6r8/
false
2
t1_jelobpz
Absolutely clutch with the information. Thanks a ton !
2
0
2023-04-02T00:16:17
Wroisu
false
null
0
jelobpz
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelobpz/
false
2
t1_jelo6h0
You must beware, because the answers are a mix of truths, half-truths, and outright fabrications.
15
0
2023-04-02T00:15:11
seastatefive
false
null
0
jelo6h0
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelo6h0/
false
15
t1_jeln5so
The parameters are set per instance, so you always have to set them when running, but there is a shortcut: the best way to do it is to create a batch file called "Creative-8-threads.bat" (you can name it whatever you want, of course) local to the chat.exe file, and then in that bat file put your code just as you would ty...
2
0
2023-04-02T00:07:02
ThePseudoMcCoy
false
null
0
jeln5so
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeln5so/
false
2
t1_jelli0b
Thanks - last question, then I'll be out: when you change the thread count it's using, is that a permanent change or do you need to set this every time?
1
0
2023-04-01T23:53:29
Wroisu
false
null
0
jelli0b
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelli0b/
false
1
t1_jellepk
If this is the "Encarta" of LLMs I'm excited to see Wikipedia.
9
0
2023-04-01T23:52:44
AI-Pon3
false
null
0
jellepk
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jellepk/
false
9
t1_jelk1d7
This is so cool!!! I hope it gets bundled into the Oobabooga install. I got it installed and it looks like it's working!
1
0
2023-04-01T23:41:39
Inevitable-Start-653
false
null
0
jelk1d7
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jelk1d7/
false
1
t1_jelinob
Likewise, my 64GB has been ordered.
3
0
2023-04-01T23:30:26
y___o___y___o
false
null
0
jelinob
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelinob/
false
3
t1_jeli66j
There's a YouTube video on how to install oobabooga.
3
0
2023-04-01T23:26:31
Puzzleheaded_Acadia1
false
null
0
jeli66j
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeli66j/
false
3
t1_jeli5zc
I just use the alpaca [command line](https://preview.redd.it/research-alpaca-7b-language-model-running-on-my-pixel-7-v0-n9ctmf71xioa1.png?width=1080&format=png&auto=webp&s=263546d97e8fa79769b47f9291ddc7e61e2b85e3) prompt from the chat.exe. I use it to tell me stories and answer questions. It's technically a chat form...
4
0
2023-04-01T23:26:28
ThePseudoMcCoy
false
null
0
jeli5zc
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jeli5zc/
false
4
t1_jelhkni
I hadn't heard about GPT4 x Alpaca. How good is it, and can I run it on 8GB of RAM?
1
0
2023-04-01T23:21:51
Puzzleheaded_Acadia1
false
null
0
jelhkni
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelhkni/
false
1
t1_jelhi70
That's unlocked something in my brain I had long forgotten about.
5
0
2023-04-01T23:21:19
y___o___y___o
false
null
0
jelhi70
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelhi70/
false
5
t1_jelhi49
[deleted]
1
0
2023-04-01T23:21:18
[deleted]
true
null
0
jelhi49
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelhi49/
false
1
t1_jelhfse
Yeah, I'm trying to get my family excited and was considering posting that same thing, "what blows me away most is that gpt can fit on 5% of my laptop", but they probably won't understand the significance.
3
0
2023-04-01T23:20:47
y___o___y___o
false
null
0
jelhfse
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelhfse/
false
3
t1_jelh3ah
What UI do you use?
1
0
2023-04-01T23:18:04
uncle-philbanks
false
null
0
jelh3ah
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jelh3ah/
false
1
t1_jelg16f
The one-click installer has been fully revamped and has a high rate of success. I feel you, there were a lot of steps before.
3
0
2023-04-01T23:09:45
Inevitable-Start-653
false
null
0
jelg16f
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelg16f/
false
3
t1_jelfqn7
That's the only tricky part: regardless of which model it is, it always has to have that 7B filename for the program to access it, since that name is hardcoded. So rename the 30B file like this: ggml-alpaca-7b-q4.bin
2
0
2023-04-01T23:07:28
ThePseudoMcCoy
false
null
0
jelfqn7
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelfqn7/
false
2
t1_jelekmp
I tried using oobabooga's one-click install but it always gave me errors; I'm only able to install it by going through a bunch of CMD commands, VisualBasic, and miniconda with edited DLL files for CUDA. I'll try to look more into the CPU version tomorrow; hopefully it's just as easy as installing the right model and setting...
2
0
2023-04-01T22:58:13
Famberlight
false
null
0
jelekmp
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelekmp/
false
2
t1_jele294
Ah okay, then I must have downloaded 30B! Is it normal for the file to say ggml-alpaca-*7b*-q4.bin? Or do I have to change 7b to 30b?
1
0
2023-04-01T22:54:08
Wroisu
false
null
0
jele294
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jele294/
false
1
t1_jeldvpj
I'm sorry, I don't know much about the CPU-only version of running these models. I will say that the oobabooga installation gives you a GPU or CPU option when installing, so you might want to try there; you don't have to do much, so if it doesn't work there is very little time wasted.
2
0
2023-04-01T22:52:41
Inevitable-Start-653
false
null
0
jeldvpj
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeldvpj/
false
2
t1_jeldk66
I downloaded the 20 gig file from here: https://huggingface.co/Pi3141/alpaca-lora-30B-ggml/tree/main The larger file was uploaded in the last day or so and looks to be for llama as it won't load off alpaca, so ignore that file if using alpaca.
2
0
2023-04-01T22:50:12
ThePseudoMcCoy
false
null
0
jeldk66
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeldk66/
false
2
t1_jeldchm
I wrote a simple Python script that checks the current price of GPUs on RunPod per GB of VRAM, since that's the important spec (a hedged completion follows this entry). If it helps, this is the script:

    from bs4 import BeautifulSoup
    import requests

    url = 'https://www.runpod.io/gpu-instance/pricing'
    response = requests.get(url)
    soup = ...
2
0
2023-04-01T22:48:30
Sixhaunt
false
null
0
jeldchm
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeldchm/
false
2
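The script above is cut off at `soup = ...`. A hedged sketch of how it might continue; the `.gpu-card`, `.gpu-name`, `.gpu-vram`, and `.gpu-price` selectors are invented placeholders, since the comment never shows RunPod's actual markup:

```python
import requests
from bs4 import BeautifulSoup

url = 'https://www.runpod.io/gpu-instance/pricing'
response = requests.get(url)
soup = BeautifulSoup(response.text, 'html.parser')

# Hypothetical page structure: one card per GPU offer with a name,
# a VRAM figure like "24 GB", and an hourly price like "$0.44".
for card in soup.select('.gpu-card'):
    name = card.select_one('.gpu-name').get_text(strip=True)
    vram_gb = float(card.select_one('.gpu-vram').get_text(strip=True).rstrip(' GB'))
    price = float(card.select_one('.gpu-price').get_text(strip=True).lstrip('$'))
    # Dollars per GB of VRAM per hour is the spec the commenter cares about.
    print(f'{name}: ${price / vram_gb:.3f}/hr per GB of VRAM')
```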
t1_jelcmyy
I can't find any normal guide on how to run llama on CPU; there are mostly macOS guides. Can you link any instructions? And is it possible to run the CPU version on oobabooga (I don't really want to spend another few days troubleshooting the installation of another GUI)?
2
0
2023-04-01T22:42:51
Famberlight
false
null
0
jelcmyy
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelcmyy/
false
2
t1_jelbldh
I can understand that, but there are CPU-only versions that run LLaMA pretty well.
3
0
2023-04-01T22:34:41
Inevitable-Start-653
false
null
0
jelbldh
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelbldh/
false
3
t1_jelbi8k
While it's great, it also makes a lot of mistakes and errors. It should come with some kind of warning not to take all the answers at face value, and to verify that they are correct using multiple sources.
1
0
2023-04-01T22:33:58
ptitrainvaloin
false
2023-04-03T14:47:18
0
jelbi8k
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelbi8k/
false
1
t1_jelbhp4
You can interface oobabooga with stable diffusion....just sayin...
3
0
2023-04-01T22:33:51
Inevitable-Start-653
false
null
0
jelbhp4
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelbhp4/
false
3
t1_jelbb2h
Hmm, I'm unsure about that. I don't want to give you the wrong information, I'm just not familiar with llama.cpp.
2
0
2023-04-01T22:32:22
Inevitable-Start-653
false
null
0
jelbb2h
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelbb2h/
false
2
t1_jelaut9
Hey, one random question. I followed the 30B 20 GB file but it downloaded the 7B quantization - do I have to follow another link? Thanks for any additional help - and how would I choose between different instances of it? For example, if I have 7B, 13B, and 30B in one folder.
1
0
2023-04-01T22:28:47
Wroisu
false
2023-04-01T22:37:52
0
jelaut9
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelaut9/
false
1
t1_jelabw4
The .pt file is like an accessory model to the .bin model. The .bin files are all bits of one model, the .pt file is a 4 bit accessory model that needs to be in the same folder as your model. For example in my models folder I have a model called llama-13b, the files for which live in a folder called "llama-13b" If I ...
2
0
2023-04-01T22:24:37
Inevitable-Start-653
false
null
0
jelabw4
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jelabw4/
false
2
t1_jel7uy1
Hardware cost and complex installation are not inviting
7
0
2023-04-01T22:04:48
Famberlight
false
null
0
jel7uy1
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel7uy1/
false
7
t1_jel71sg
Put in a waifu's graphical interface and suddenly they will be interested.
3
0
2023-04-01T21:58:22
ptitrainvaloin
false
null
0
jel71sg
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel71sg/
false
3
t1_jel56rz
Llama.cpp is what I've been using.
5
0
2023-04-01T21:43:55
vfx_4478978923473289
false
null
0
jel56rz
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jel56rz/
false
5
t1_jel4yvi
Same, use chatgpt daily for this. Sooooon!
2
0
2023-04-01T21:42:15
claygraffix
false
null
0
jel4yvi
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel4yvi/
false
2
t1_jel4vzj
I'm in a similar boat with a Mac. The code to run GPU-accelerated quantized models is nvidia-specific, so I've been using Alpaca.cpp to run it on CPU-only. I **think** if I convert the models to CoreML, I can run them on the GPU and Neural Engine hardware...
3
0
2023-04-01T21:41:37
GreaterAlligator
false
null
0
jel4vzj
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel4vzj/
false
3
t1_jel2td7
Thanks for your time and guidance!
3
0
2023-04-01T21:25:42
MyVoiceIsElevating
false
null
0
jel2td7
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jel2td7/
false
3
t1_jel20we
I've been having good luck running llama and alpaca with just my CPU.
4
0
2023-04-01T21:19:54
ThePseudoMcCoy
false
null
0
jel20we
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel20we/
false
4
t1_jel1m4d
Yeah, I find it OK to wait for responses to finish generating if it means saving 1000 dollars on a GPU, plus being able to run it locally. I upgraded from 16 to 64 gigs of RAM yesterday just to run LLaMA 65B, and in anticipation of Alpaca and gpt4-x-alpaca 65B.
2
0
2023-04-01T21:16:49
ThatLastPut
false
null
0
jel1m4d
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jel1m4d/
false
2
t1_jel0lrf
Very insightful
-1
0
2023-04-01T21:09:13
ZestyData
false
null
0
jel0lrf
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jel0lrf/
false
-1
t1_jekzpug
On Linux use llama.cpp and CPU inference, it works well!
1
0
2023-04-01T21:02:36
reddiling
false
null
0
jekzpug
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekzpug/
false
1
t1_jekzg75
Can chavinlo model be used with llama.cpp?
2
0
2023-04-01T21:00:34
reddiling
false
null
0
jekzg75
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekzg75/
false
2
t1_jeky0mc
If you have an Nvidia GPU* No one is talking about Linux in the first place outside of very limited and savvy users. I'm sitting here almost getting an aneurysm trying to think about how to install something properly with my 6800xt, even when I'm booted into Linux. -- After malding a bit I just found out that I can ...
5
0
2023-04-01T20:50:00
illyaeater
false
2023-04-01T22:37:21
0
jeky0mc
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeky0mc/
false
5
t1_jekx1cc
For a few months, if you check their Discord. He even promised it would come out in December and run on phones... Well, guess we have that, sorta. I've recently heard they're having a problem with getting enough compute, and it's likely to persist for the next 9 months... Let's hope for the best.
1
0
2023-04-01T20:42:45
Scriptod
false
null
0
jekx1cc
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jekx1cc/
false
1
t1_jekwi0i
From what I can sort of work out, you do need the RAM unless you want to kill your SSD. RAM is cheaper and less traumatic. I have 32GB on order (can't even face dealing with trying to match new and old 16GB; I will just have a spare 16GB).
1
0
2023-04-01T20:38:46
ambient_temp_xeno
false
2023-04-01T20:57:46
0
jekwi0i
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekwi0i/
false
1
t1_jekw3ck
There is no cloud. It's just other people's computers.
2
0
2023-04-01T20:35:54
Scriptod
false
null
0
jekw3ck
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekw3ck/
false
2
t1_jekvpjy
GPU inference implementation is ridiculously unoptimized in general.
2
0
2023-04-01T20:33:05
Scriptod
false
null
0
jekvpjy
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekvpjy/
false
2
t1_jekviu3
That is amazing! But what do you mean by using LLaMA and using it to run Alpaca and LLaMA? I'm still a beginner, so I don't really know what you mean by that.
1
0
2023-04-01T20:31:45
FriendDimension
false
null
0
jekviu3
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekviu3/
false
1
t1_jekucxw
Not exactly six gigs, but since using mmap() the memory usage fell by about half (in the PR author's words). Also not VRAM, just RAM; llama.cpp runs on the CPU. (A small mmap illustration follows this entry.)
6
0
2023-04-01T20:23:19
Scriptod
false
null
0
jekucxw
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekucxw/
false
6
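For anyone wondering why mmap() cuts the reported usage: mapped pages are faulted in on first touch instead of being read into fresh buffers up front, so untouched parts of the file cost nothing. A tiny illustration (the "model.bin" filename is hypothetical):

```python
import mmap

with open("model.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = mm[:16]  # only the pages actually touched get loaded by the OS
    mm.close()
```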
t1_jektoja
Great reference!
14
0
2023-04-01T20:18:27
ThePseudoMcCoy
false
null
0
jektoja
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jektoja/
false
14
t1_jekt4kk
I’d say go with LLamA and use it to run alpaca and llama. They just updated so that 30B can run off of only 6 gigs vr ram !
6
0
2023-04-01T20:14:25
Wroisu
false
null
0
jekt4kk
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekt4kk/
false
6
t1_jeksvq9
It's Encarta all over again.
48
0
2023-04-01T20:12:36
2muchnet42day
false
null
0
jeksvq9
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeksvq9/
false
48
t1_jeksogn
I had issues with CMake, but I code with C# in Visual Studio, so I downloaded the Desktop development with C++ workload in the Visual Studio installer and compiled it that way. By default it compiles the debug version of the executable, so I had to figure out how to compile the release version, which runs much faster. You...
2
0
2023-04-01T20:11:06
ThePseudoMcCoy
false
null
0
jeksogn
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jeksogn/
false
2
t1_jekskc0
Something I don't understand: if you look at this one, [https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/tree/main](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g/tree/main), his file says .pt file. And they are smaller; I am assuming they are smaller because they are 4-bit...
3
0
2023-04-01T20:10:13
FriendDimension
false
null
0
jekskc0
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekskc0/
false
3
t1_jekshxu
thank you, genuinely appreciate it !
2
0
2023-04-01T20:09:43
Wroisu
false
null
0
jekshxu
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekshxu/
false
2
t1_jekrs89
Great, thanks! I'm fine trying it out. Is there a guide you found worthwhile for compiling? I tried the CMake approach and I'm encountering issues (likely user error). There are so many guides floating around, so it's hard to tell what's already behind the curve.
3
0
2023-04-01T20:04:34
MyVoiceIsElevating
false
null
0
jekrs89
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jekrs89/
false
3
t1_jekrllv
can you give me a link to the updated less ram model?
1
0
2023-04-01T20:03:14
FriendDimension
false
null
0
jekrllv
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jekrllv/
false
1
t1_jekqved
How do you get those optimizations? Would gpt4xalpaca have those optimizations?
2
0
2023-04-01T19:57:48
FriendDimension
false
null
0
jekqved
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekqved/
false
2
t1_jekqnsg
Which do you recommend creativity and power wise: LLaMA, Alpaca, or gpt4xalpaca? And for LLaMA and Alpaca I am assuming you are talking about 13B?
6
0
2023-04-01T19:56:15
FriendDimension
false
null
0
jekqnsg
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekqnsg/
false
6
t1_jekqfxy
I'm on mobile so it's hard to cross-reference, and I'm not familiar with the download link you've provided; it may work, but I used this guide to get Alpaca going: https://www.reddit.com/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp, which is basically as simple as downloading an executable file and double-clicki...
3
0
2023-04-01T19:54:39
ThePseudoMcCoy
false
null
0
jekqfxy
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekqfxy/
false
3