name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jeffjk0
Hm, just another clueless guy to chime in. Do you think it would be easy to train an AI - or just fine-tune it - on, say, Colab Pro? They say the max running time is 12 hours, but on the pay-as-you-go model I was always cut off after four hours max. That assumes I would gather enough data to fine-tune it, but let's just assume I wo...
1
0
2023-03-31T16:51:25
szopen76
false
null
0
jeffjk0
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeffjk0/
false
1
t1_jeffhim
I gotta stress that this is far from an ideal setup. But I put together my system for somewhere around $350 by going with used stuff from ebay for everything other than the open air frame. Obviously, there's the luck of the draw there about what deals someone's going to find though. And what someone considers acceptabl...
2
0
2023-03-31T16:51:04
toothpastespiders
false
null
0
jeffhim
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeffhim/
false
2
t1_jefence
This extension probably works like LlamaIndex: it indexes text and saves it into a db, then finds the sentences most relevant to the words in the input, and then puts those sentences along with the prompt into the input as short-term context
2
0
2023-03-31T16:45:38
Liverpool67
false
null
0
jefence
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jefence/
false
2
t1_jefebk3
I love this answer. I would give you one of those Reddit award things if I was not reddit broke. Instead, please accept the heartfelt thanks from a human being.
7
0
2023-03-31T16:43:30
Old_Court9173
false
null
0
jefebk3
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefebk3/
false
7
t1_jefe35q
Runpod, vast.ai. Paperspace has them for per-hour rent too, but the rates aren't as good. They're a good company otherwise though. Flat rate is decent, but you won't get an A100 realistically.
6
0
2023-03-31T16:41:58
kif88
false
null
0
jefe35q
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefe35q/
false
6
t1_jefdmjl
You can try Alpaca.cpp to start with. It'll run off your CPU.
3
0
2023-03-31T16:39:02
iJeff
false
null
0
jefdmjl
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefdmjl/
false
3
t1_jefdktz
Personally, I think it is. And you can even get a surprising amount of bang for your buck if you put something together yourself, and lean in on ebay deals, rather than going with a prebuilt machine. The real fun for me is that you're not getting nickel and dime'd through a million microtransactions from either opena...
25
0
2023-03-31T16:38:44
toothpastespiders
false
2023-03-31T16:55:19
0
jefdktz
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefdktz/
false
25
t1_jefd924
Hah, I have/use a business grade laptop... However, I'm seeing some encouraging results in this thread which make it seem like I will not have to drop an extraordinary amount of money to get my feet wet!
1
0
2023-03-31T16:36:38
Old_Court9173
false
null
0
jefd924
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefd924/
false
1
t1_jefcsic
What hardware do you have now? There's a good chance you'll be able to run something and, at worst, a little slowly. I'd recommend just starting with whatever can run on your existing hardware then consider a custom build. A pre-built PC isn't a good option.
1
0
2023-03-31T16:33:40
iJeff
false
null
0
jefcsic
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefcsic/
false
1
t1_jefcp8h
You can get away with much cheaper equipment! A computer with an RTX 2080 Ti GPU, a decent consumer CPU, and 32GB of RAM will run the LLaMA 7B parameter model with 8-bit precision and even a 13B parameter model using GPTQ 4-bit quantization.
1
0
2023-03-31T16:33:04
Professional-Ad-4149
false
null
0
jefcp8h
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefcp8h/
false
1
t1_jefcebh
What resource would you use to rent a server with an A100?
3
0
2023-03-31T16:31:06
Dany-BurnThemAll
false
null
0
jefcebh
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jefcebh/
false
3
t1_jefbxh7
Sorry, surely a stupid question (not an expert): I thought LLama is currently only trained on text input, how do you feed in images?
2
0
2023-03-31T16:28:05
Soggy-Can221
false
null
0
jefbxh7
false
/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/jefbxh7/
false
2
t1_jefbasb
Wow, that's amazing. I'm currently working on memories through finetuning, but with no scientific background, so it is just trial and error without enough understanding of what I'm actually doing and whether what I'm doing can even work. My question is, can one see somewhere what the database is injecting for each reply, to...
2
0
2023-03-31T16:23:59
Ok-Scarcity-7875
false
2023-03-31T16:32:31
0
jefbasb
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jefbasb/
false
2
t1_jefaj1a
You can get some really nice server features when you go with used business hardware too. You can get things like remote console which feels luxurious when it's in your basement and you can install an OS from another room, better hardware monitoring for failures, and cooling designed for higher heat production. One t...
3
0
2023-03-31T16:19:01
LetMeGuessYourAlts
false
null
0
jefaj1a
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jefaj1a/
false
3
t1_jef9u6i
You could build a rig on the AWS cloud and throttle down the CPU until it's needed, then scale way up until it has enough compute to run your LLM. Why would you put exotic hardware on-prem if you mostly leave it off for the next 4 years? I did it like that, for about $1k per year. But I ended up aborting that and got a use...
1
0
2023-03-31T16:14:28
memberjan6
false
2023-03-31T16:18:39
0
jef9u6i
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef9u6i/
false
1
t1_jef6rat
Wait a month, and use one of the many services that will be available.
3
0
2023-03-31T15:54:34
Pan000
false
null
0
jef6rat
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef6rat/
false
3
t1_jef6q53
This is exactly the sort of response I'm looking for. What has your experience been like if you don't mind sharing?
3
0
2023-03-31T15:54:22
Old_Court9173
false
null
0
jef6q53
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef6q53/
false
3
t1_jef6jnl
Worst case scenario: Nothing better than llama gets released and you're left with an awesome gaming rig.
8
0
2023-03-31T15:53:12
MentesInquisitivas
false
null
0
jef6jnl
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef6jnl/
false
8
t1_jef6cza
As just a person with an EE degree whose mind was blown by ChatGPT, and who then decided to drop a couple thousand on a server tower to play around with AIs... I'd say yeah, if you've got the money, go for it.
7
0
2023-03-31T15:52:00
MentesInquisitivas
false
null
0
jef6cza
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef6cza/
false
7
t1_jef659x
There is never a good reason to purchase Alienware; if you were seriously considering it, you should spend quite a bit of time learning the basics of computers before attempting to build or purchase a specialized setup. From a practical standpoint, any serious work will be done on rented GPU clusters. An A100 is about ...
43
0
2023-03-31T15:50:39
MotionTwelveBeeSix
false
null
0
jef659x
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jef659x/
false
43
t1_jef5icp
That sounds logical. However, it would be nice to see what happens. Is there a method to convert the layers into the correct form so they can be applied to the base model?
2
0
2023-03-31T15:46:33
Ok-Scarcity-7875
false
2023-03-31T15:51:57
0
jef5icp
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jef5icp/
false
2
t1_jef5ce2
So basically I can change this code into any format I want if I would prepare my own data?
2
0
2023-03-31T15:45:29
szopen76
false
null
0
jef5ce2
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jef5ce2/
false
2
t1_jef50us
Maybe try this instead, someone rigged up an extension to store memories in a database and recall them based on relevancy [GitHub - wawawario2/text-generation-webui: A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.](https://github.com/wawawario2/text-generation-w...
1
0
2023-03-31T15:43:24
NDV-Twist-5283
false
null
0
jef50us
false
/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/jef50us/
false
1
t1_jef4dun
I'm really new to this, but my guess is that this won't be possible. All weights in a model are in relation to all other weights, so they would just be wrong when applied to a different set of base weights.
3
0
2023-03-31T15:39:16
NickUnrelatedToPost
false
null
0
jef4dun
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jef4dun/
false
3
t1_jef0aj6
[deleted]
1
0
2023-03-31T15:12:40
[deleted]
true
2023-03-31T15:39:30
0
jef0aj6
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jef0aj6/
false
1
t1_jeewreo
If you had the same results with 4-bit 30B, then there's something wrong with either your setup or how you downloaded the models. I've been using 4-bit 30B LLaMA for days now with no problems. Before redoing anything, I recommend first trying to redownload the models from the torrent and checking to see if you still r...
1
0
2023-03-31T14:49:16
Civil_Collection7267
false
null
0
jeewreo
false
/r/LocalLLaMA/comments/127ldgw/extremely_slow_performance_with_8bit_30b_and/jeewreo/
false
1
t1_jeewo43
You're right that there's no case law specifically about model weights, but I think the chances of bringing a successful infringement suit would be very slim. For example, in the past the courts have ruled against phone book listings being copyrightable because there was no creativity or originality involved in their c...
1
0
2023-03-31T14:48:39
busy_beaver
false
null
0
jeewo43
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jeewo43/
false
1
t1_jeevxtx
Another way to do it would be to send it in chunks of 2048, ask LLaMA to summarize each into 256, then recombine all the small contexts into a 2048 context. At some point information might be lost, but you could even do this iteratively a few times.
1
0
2023-03-31T14:43:49
aerilyn235
false
null
0
jeevxtx
false
/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/jeevxtx/
false
1
t1_jeevk02
How does that tool work?
1
0
2023-03-31T14:41:16
aerilyn235
false
null
0
jeevk02
false
/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/jeevk02/
false
1
t1_jeeui60
Buying a server for the RAM on ebay is about the cheapest way to go. Then sell or give away the server carcass. ASUS has a couple of workstation motherboards (SAGE C422 I think is one) that will work with the RAM. You just have to be careful in picking the processors that will match with the memory and are compatibl...
1
0
2023-03-31T14:34:16
aigoopy
false
null
0
jeeui60
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeeui60/
false
1
t1_jeetxg9
[removed]
1
0
2023-03-31T14:30:22
[deleted]
true
null
0
jeetxg9
false
/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/jeetxg9/
false
1
t1_jeetapw
I was working with the BLOOM model in a similar way but CPU only. It took about 350GB RAM, took a half hour to load and token time was 30+ seconds. Way too slow for it to be usable but it was better than using the GPU shard swapping which was 3+ minutes per token. When I first started playing with that on a laptop,...
2
0
2023-03-31T14:26:06
aigoopy
false
null
0
jeetapw
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeetapw/
false
2
t1_jeet1lu
Seems like it's decently good at answering questions, not great at generating code yet.
3
0
2023-03-31T14:24:24
NegHead_
false
null
0
jeet1lu
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jeet1lu/
false
3
t1_jeeszy3
Thank you for your reply. Forgot to mention in the original post, I had the same results with the 4 bit 30b model as the 65, and that does seem to be the same model.
1
0
2023-03-31T14:24:05
mrredditman2021
false
null
0
jeeszy3
false
/r/LocalLLaMA/comments/127ldgw/extremely_slow_performance_with_8bit_30b_and/jeeszy3/
false
1
t1_jeeq3cr
[https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced](https://huggingface.co/8bit-coder/alpaca-7b-nativeEnhanced) I've tried it, and it somehow feels worse than regular Alpaca Native, but I haven't performed any quantitative tests for this.
1
0
2023-03-31T14:04:03
Civil_Collection7267
false
null
0
jeeq3cr
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jeeq3cr/
false
1
t1_jeepoal
Someone should write an open letter to these tech companies to pause their work... not for the safety of humanity, but so I can catch up on the news!
7
0
2023-03-31T14:01:03
NegHead_
false
null
0
jeepoal
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jeepoal/
false
7
t1_jeepmhv
Re "Any suggestions on how either the performance of the 30b can be improved": use 4-bit LLaMA 30B instead. Re "The 65b model was much faster to output the gibberish response": Are you using a model that was properly quantized? Gibberish can happen otherwise. Check that it's the 4-bit 65B model from here [https://github...
1
0
2023-03-31T14:00:41
Civil_Collection7267
false
null
0
jeepmhv
false
/r/LocalLLaMA/comments/127ldgw/extremely_slow_performance_with_8bit_30b_and/jeepmhv/
false
1
t1_jeemwal
- In that case it could also be trained, say, on a bunch of Wikipedia articles with the 'INPUT' (tagged) as any keyword, and the output being the entire article (leaving aside the token limit for now)? In which case you'd have a keyword-to-Wikipedia-article generator, correct? - Hence to train it, let's say to answer queries l...
2
0
2023-03-31T13:41:01
Puzzleheaded_Acadia1
false
null
0
jeemwal
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jeemwal/
false
2
t1_jeek7uh
Thank you!
1
0
2023-03-31T13:20:39
TyThomson
false
null
0
jeek7uh
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jeek7uh/
false
1
t1_jeegg2x
Might be obvious to some, but in case it's not, you still need the original model when using lora. So if you only plan on using one model, then there really isn't a file size difference. For example, "20gb native fine tuned model vs 20gb original plus 10mb lora". However, if you plan on having something like 10 differe...
6
0
2023-03-31T12:50:57
qrayons
false
null
0
jeegg2x
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jeegg2x/
false
6
t1_jeeg2go
I think it'll be LLaMA trained on OpenAssistant data. LLaMA has quite the quality head start compared to other downloadable models. There won't be an official release, but someone will make it, and I think that'll be better than OpenAssistant's model.
1
0
2023-03-31T12:47:54
TheTerrasque
false
null
0
jeeg2go
false
/r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/jeeg2go/
false
1
t1_jeedgnj
I'm not sure I understood the answer to the question the other redditor asked. Like, how do you handle converting lots of text into a question/answer kind of format for alpaca to learn on?
1
0
2023-03-31T12:25:37
2muchnet42day
false
null
0
jeedgnj
false
/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeedgnj/
false
1
t1_jeed4bq
I have exactly the same question. Say you have an article about dogs; how do you feed it in so that you can ask how many legs a dog has? Because the way I see it, you'd need to combine both the article and chat-like learning
5
0
2023-03-31T12:22:37
2muchnet42day
false
null
0
jeed4bq
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jeed4bq/
false
5
t1_jeecsbb
The lora paper claims it's almost the same (https://arxiv.org/abs/2106.09685)
3
0
2023-03-31T12:19:39
CellWithoutCulture
false
null
0
jeecsbb
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jeecsbb/
false
3
t1_jee9ap4
Thank you.
2
0
2023-03-31T11:46:41
2muchnet42day
false
null
0
jee9ap4
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee9ap4/
false
2
t1_jee81v9
~~This is most likely based on the [Open Assistant](http://reddit.com/r/OpenAssistant) project. You can see the language breakdown [on their website](http://www.open-assistant.io) if you log in.~~ My mistake. I didn't read the article. >LAION’s Open Assistant (OA) project is our efforts to replicate the function...
1
0
2023-03-31T11:34:10
Captain_Pumpkinhead
false
2023-03-31T23:12:39
0
jee81v9
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee81v9/
false
1
t1_jee5blt
Oh, that awkward time in training when the loss function seems to only keep rising and you desperately want to increase the learning rate... We don't know why it happens. They're discussing it in and out over at /r/parenting, but frankly, they all have different models and never seem to have a working solution. The ...
2
0
2023-03-31T11:04:46
NickUnrelatedToPost
false
null
0
jee5blt
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee5blt/
false
2
t1_jee4wqh
That's great. I'm still waiting for a truly open alternative to LLaMA and Alpaca. With the pace things are developing right now, I'm looking forward to experiment with this soon, when a model is available for oobabooga's text-generation-webui...
8
0
2023-03-31T10:59:53
WolframRavenwolf
false
null
0
jee4wqh
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee4wqh/
false
8
t1_jee3tag
They used to use physical reinforcement learning, but I'm afraid it's been deprecated
4
0
2023-03-31T10:47:07
CellWithoutCulture
false
null
0
jee3tag
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee3tag/
false
4
t1_jee3n9t
Thank you for explaining in terms I understand. I seem to be inundated with advice from those who cannot simply frame it in terms of machine learning! ;p
3
0
2023-03-31T10:45:07
CellWithoutCulture
false
null
0
jee3n9t
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee3n9t/
false
3
t1_jee2x5e
I have a 17 year old and a 14 year old. Is there some sort of tech support for the alignment problems I'm having?
5
0
2023-03-31T10:36:16
raika11182
false
null
0
jee2x5e
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee2x5e/
false
5
t1_jee2voh
Where are weights natively trained on the cleaned alpaca set?
1
0
2023-03-31T10:35:44
2muchnet42day
false
null
0
jee2voh
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jee2voh/
false
1
t1_jee2oag
> so you can just swap that in and out I had not thought of it that way, keeping several fine tuned lora with a single base set of weights that work for different specific tasks. Probably even possible to load them all at the same time and share vram?! Wow, I totally gotta check that.
1
0
2023-03-31T10:33:09
2muchnet42day
false
null
0
jee2oag
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jee2oag/
false
1
t1_jee2ebh
Congratulations! Be assured, training one of those NIs is even more fun. Soo many parameters. ;-D But the first two years are hard. The reinforcement algo is badly documented and there is no progress bar. They are learning, from day one! It's just hard to see. Keep the data flow up, then it will work out in the end.
3
0
2023-03-31T10:29:33
NickUnrelatedToPost
false
null
0
jee2ebh
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jee2ebh/
false
3
t1_jee18cz
LoRA in Stable Diffusion works fundamentally differently compared to LLMs
2
0
2023-03-31T10:14:15
FHSenpai
false
null
0
jee18cz
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jee18cz/
false
2
t1_jee0ix6
u/WolframRavenwolf I'm thinking of giving a try to "alpaca-30b-4bit-128g.safetensors" from [here](https://huggingface.co/elinas/alpaca-30b-lora-int4) with your setup for both "LLaMA-Precise" and "ChatGPT-like persona", but I don't know how to save the whole chat history in a single shot between the many history clearin...
1
0
2023-03-31T10:04:44
9cent0
false
2023-03-31T10:09:00
0
jee0ix6
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jee0ix6/
false
1
t1_jedzu3z
I believe the newest cpu versions for llama.cpp are on rentry(dot)org/nur779 now.
3
0
2023-03-31T09:55:13
ambient_temp_xeno
false
2023-03-31T11:27:00
0
jedzu3z
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jedzu3z/
false
3
t1_jedysw9
thank you for this example
1
0
2023-03-31T09:40:43
Vulcan_Grip
false
null
0
jedysw9
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedysw9/
false
1
t1_jedyguh
I'm still fuzzy on everything to be completely honest. In the code it seems to cut or pad everything to the same length but when I got rid of that code it still ran, but it ran out of memory since some of my data is really long and too long for it to handle. The reason it takes longer to train is that longer inputs req...
1
0
2023-03-31T09:36:01
Sixhaunt
false
null
0
jedyguh
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedyguh/
false
1
t1_jedy4ja
Didn't see a breakdown by language? I feel this is very important for non-English support.
5
0
2023-03-31T09:31:08
2muchnet42day
false
null
0
jedy4ja
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jedy4ja/
false
5
t1_jedy18x
I don't really have any guides for modifying it the way that I did; I'm just a software developer, so I hacked it together fairly easily. It depends on your data though. For example, the Elder Scrolls Books data came in this format: { title: "Some book title", game: ["Skyrim", "ESO"], ...
1
0
2023-03-31T09:29:51
Sixhaunt
false
null
0
jedy18x
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedy18x/
false
1
t1_jedxw42
Woah! Amazing! Please message me if you ever figure this out. Do you have a GitHub page or anything like that where I can follow your work?
1
0
2023-03-31T09:27:46
Vulcan_Grip
false
null
0
jedxw42
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedxw42/
false
1
t1_jedxgw7
Are you saying that if I have even one 2000 token data-record that it will take many times longer to train? Surely, if you're training it to do less you're limiting its original ability down to only the shorter size? Or it can still do the 2000 if it wants to, you just don't have training examples for that?
2
0
2023-03-31T09:21:41
Pan000
false
null
0
jedxgw7
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedxgw7/
false
2
t1_jedx5n4
No, it would be done for you. It's one of the models I'm working on. You just upload your notes and it makes it into QA pairs and trains a model for you.
3
0
2023-03-31T09:17:09
Pan000
false
null
0
jedx5n4
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedx5n4/
false
3
t1_jedx59z
Wow, you are doing some amazing and interesting things! And your Elder Scrolls example is something I would love to do with my notes. Do you have any content or videos I can watch to get started doing what you did? The only problem is I have never coded in my life; is there a way to simply feed my note texts into something ...
1
0
2023-03-31T09:17:00
Vulcan_Grip
false
null
0
jedx59z
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedx59z/
false
1
t1_jedwpta
That sounds like a lot of work. I have probably over 2 million words in just 2 years.
1
0
2023-03-31T09:10:43
Vulcan_Grip
false
null
0
jedwpta
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedwpta/
false
1
t1_jedw5yx
In Stable Diffusion (image-generating AI), LoRAs aren't quite as good as a finetuned model, but I don't know about this case, and it seems to work well
2
0
2023-03-31T09:02:48
Sixhaunt
false
null
0
jedw5yx
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jedw5yx/
false
2
t1_jedvzcr
>Alpaca just made up those tags, no? How is it actually going to the transformer, just as one block of text? Yes, they made it up, and yes, in the end it's just the block of text as a whole submitted for training. There are no actual input or output fields as far as the model is concerned, just a list of text files for...
4
0
2023-03-31T09:00:01
Sixhaunt
false
null
0
jedvzcr
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedvzcr/
false
4
t1_jedv6a0
From my understanding, although obviously I'm not sure because I'm asking that myself, you'd need to turn your notes into question-answer pairs to interact with it in that way. I intend to make a system that does this for you, so you'd be able to import your notes and get out a model that you can question about them.
1
0
2023-03-31T08:48:06
Pan000
false
null
0
jedv6a0
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedv6a0/
false
1
t1_jedv5i2
I'm still very new to all of this, so I don't know much in terms of comparisons of the various training methods, but I have tried making LoRAs by modifying the code used to train the Alpaca 7B LoRA. I used [this google colab](https://colab.research.google.com/drive/1rqWABmz2ZfolJOdoy6TRc6YI7d128cQO#scrollTo=OdgRTo5YxyRL) f...
2
0
2023-03-31T08:47:47
Sixhaunt
false
null
0
jedv5i2
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedv5i2/
false
2
t1_jeduy72
Are there any downsides to using the lora method? Seems like faster training and small filesize are 2 wins for it.
1
0
2023-03-31T08:44:48
Pan000
false
null
0
jeduy72
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jeduy72/
false
1
t1_jeduq78
That's Alpaca, but I'm not using Alpaca; I want to train on my own dataset. Alpaca just made up those tags, no? How is it actually going to the transformer, just as one block of text? I don't understand what you mean about training length. I will use the full length that I can, so my question is, what's the maximum siz...
1
0
2023-03-31T08:41:28
Pan000
false
null
0
jeduq78
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jeduq78/
false
1
t1_jedujpg
Hey Six, I'm completely new to this world and want to understand whether it is possible to teach something like gpt4all or an open source model my research notes? As in, feed it my notes so I can interact with my ideas better. Would this be considered fine-tuning, and are there any resources to learn how to do that? I'm in the ne...
1
0
2023-03-31T08:38:49
Vulcan_Grip
false
null
0
jedujpg
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedujpg/
false
1
t1_jedudx5
Oh yes, my bad. I forgot that the colab I used had this code duplicated, with one version unfixed. The first one, where I copied this from, had this typo in it, although it doesn't matter since it was just a data-check thing. The second one, which actually converts the data to be used, has it proper, and I forgot about that ...
1
0
2023-03-31T08:36:29
Sixhaunt
false
null
0
jedudx5
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedudx5/
false
1
t1_jedtlq9
for the filesize part, a finetuned model is the same size as the model it was trained from. As for loras, it depends on the lora file but it's only in the megabytes so tiny in comparison.
1
0
2023-03-31T08:24:48
Sixhaunt
false
null
0
jedtlq9
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jedtlq9/
false
1
t1_jedszds
> if data_point["instruction"]: That should say if data_point["input"]:, right?
3
0
2023-03-31T08:15:35
Disastrous_Elk_6375
false
null
0
jedszds
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedszds/
false
3
t1_jedsk52
The input and output are put together and turned into a single document when training. This is the code for it: if data_point["input"]: return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ...
3
0
2023-03-31T08:09:07
Sixhaunt
false
2023-03-31T08:36:42
0
jedsk52
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jedsk52/
false
3
t1_jedr00m
You do know that part of the T&C is that it's against the terms to post the weights/models that Facebook came up with? It's the reason why they've been actively killing all the posted downloads on Hugging Face, GitHub, and elsewhere.
1
0
2023-03-31T07:46:16
redfoxkiller
false
null
0
jedr00m
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jedr00m/
false
1
t1_jedqpjy
alpaca.cpp is deprecated, use [llama.cpp](https://github.com/ggerganov/llama.cpp) instead. if you need any help with installing llama.cpp, then asking in their [Discussions page](https://github.com/ggerganov/llama.cpp/discussions) would yield the fastest answers. if you have any more questions on LLaMA installation in...
1
0
2023-03-31T07:41:55
Civil_Collection7267
false
null
0
jedqpjy
false
/r/LocalLLaMA/comments/127duh3/problem_with_running_alpacacpp_on_windows_10/jedqpjy/
true
1
t1_jedp4un
I know, we just had a baby, and it's seriously interfering with my alpaca time
4
0
2023-03-31T07:19:25
CellWithoutCulture
false
null
0
jedp4un
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jedp4un/
false
4
t1_jedofq7
It looks intentional and actually misleading. Naturopathy is a scam.
3
0
2023-03-31T07:09:49
TheCastleReddit
false
null
0
jedofq7
false
/r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/jedofq7/
false
3
t1_jedo4ez
If you have 8GB of VRAM, then you can run 4-bit LLaMA 7B. Results aren't great with 7B, so I recommend using [Alpaca Native 4-bit here](https://huggingface.co/ozcur/alpaca-native-4bit) for better outputs. You could use llama.cpp instead and run one of the higher models with that, but I don't recommend it. The web UI ...
1
0
2023-03-31T07:05:31
Civil_Collection7267
false
null
0
jedo4ez
false
/r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/jedo4ez/
false
1
t1_jednqkj
Sorry, a formatting error occurred that I didn't catch. I do not have 32GB of VRAM; I have 8GB of VRAM with 16GB of additional swap/shared RAM, making a total accessible 24GB of VRAM. I have 32GB of normal RAM
1
0
2023-03-31T07:00:16
FairArkExperience
false
null
0
jednqkj
false
/r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/jednqkj/
false
1
t1_jedlyms
To be fair, this isn't so much "technology moving fast" as it is just people scaling up. Kinda like when a paper publishes a damning story on something, and then everyone else publishes all the other stories.
4
0
2023-03-31T06:36:36
MarlinMr
false
null
0
jedlyms
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jedlyms/
false
4
t1_jedldmx
[removed]
1
0
2023-03-31T06:28:51
[deleted]
true
null
0
jedldmx
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jedldmx/
false
1
t1_jedksqy
Re "AMD Ryzen 7 5800H with Radeon Graphics 3.20 GHz, RAM: 32gb, vram: nvidia, 8gb dedi+ 16gb shared": you have 32GB VRAM? You can run 4-bit LLaMA 30B. The guide here has the download links for the model with group-size: [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how\_to\_install\_llama\_8bit...
1
0
2023-03-31T06:21:19
Civil_Collection7267
false
null
0
jedksqy
false
/r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/jedksqy/
true
1
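The 4-bit sizing claims in these comments can be sanity-checked with back-of-the-envelope arithmetic. This is a rough sketch: the `overhead` factor of 1.2 for activations, KV cache, and framework buffers is an assumption, not a measured value.

```python
def vram_gib(n_params_billion, bits_per_weight, overhead=1.2):
    """Rough inference VRAM estimate: quantized weight bytes plus a
    fudge factor for activations, KV cache, and framework buffers."""
    weight_bytes = n_params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

# 4-bit LLaMA 30B: ~14 GiB of raw weights, ~16.8 GiB with overhead,
# so it fits on a 24GB card (shared system RAM is much slower, though)
print(round(vram_gib(30, 4), 1))
# 4-bit LLaMA 7B: ~3.9 GiB with overhead, comfortable on an 8GB card
print(round(vram_gib(7, 4), 1))
```

By this estimate the 30B claim is plausible for the 24GB of "accessible VRAM" described above, but 16GB of it is shared system memory, so generation speed would suffer badly.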
t1_jedjapd
Why does nobody have regard for my limited spare time? I'm calling for an AI moratorium... I can't keep up digesting all this cool stuff ^^
12
0
2023-03-31T06:02:17
NickUnrelatedToPost
false
null
0
jedjapd
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jedjapd/
false
12
t1_jediczs
They're not going to release the weights or the training data; doing so would violate the terms of each of the sources they mentioned holding back on until they could determine whether a release would violate any.
1
1
2023-03-31T05:50:42
FairArkExperience
false
null
0
jediczs
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jediczs/
false
1
t1_jedcbx3
Counterexample for you to consider... If sleep is required to learn things, how come people are able to learn something in less than a day?
2
0
2023-03-31T04:42:46
tvetus
false
null
0
jedcbx3
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jedcbx3/
false
2
t1_jedb6nm
> sliding window buffer That could be useful for summarizing a whole book without having to break it up into lots of chunks.
1
0
2023-03-31T04:30:52
tvetus
false
null
0
jedb6nm
false
/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/jedb6nm/
false
1
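The sliding-window idea mentioned above can be sketched in a few lines. This is a hypothetical helper, not code from any particular extension: each window overlaps the previous one, so a summarizer never loses the thread at a chunk boundary.

```python
def sliding_windows(tokens, window=2048, stride=1536):
    """Yield overlapping slices of `tokens`; consecutive windows share
    window - stride tokens, carrying context across chunk boundaries."""
    if window <= 0 or stride <= 0:
        raise ValueError("window and stride must be positive")
    for start in range(0, max(len(tokens) - window, 0) + 1, stride):
        yield tokens[start:start + window]

# Toy demo with words standing in for tokens
words = "one two three four five six seven eight nine ten".split()
chunks = list(sliding_windows(words, window=4, stride=3))
# 3 chunks; each shares one word with its neighbor, and the whole
# "book" is covered without any gaps
```

A summarizer would then process each window in turn, optionally prepending the running summary so far, instead of splitting the book into disjoint chunks.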
t1_jed8p6m
I had the same feeling last year when the diffusion models craze started.
16
0
2023-03-31T04:06:17
Disastrous_Elk_6375
false
null
0
jed8p6m
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jed8p6m/
false
16
t1_jed8m0k
But Meta knew full well that the model would leak. It was on purpose.
2
0
2023-03-31T04:05:26
polawiaczperel
false
null
0
jed8m0k
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jed8m0k/
false
2
t1_jed7c22
That's not correct. It's still unclear whether model weights can be copyrighted. Meta is actively working on having them taken down, so they certainly think they can. Until a court decides, it's unclear who's right.
3
0
2023-03-31T03:53:22
Tystros
false
null
0
jed7c22
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jed7c22/
false
3
t1_jed6zph
It's getting hard to keep up with all these new releases. I'm not sure I've seen any technology move this fast in my life, and I was born in the early 70s.
33
0
2023-03-31T03:50:13
synn89
false
null
0
jed6zph
false
/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/jed6zph/
false
33
t1_jed5u67
>yikes Wow! Thanks, man. This is moving so fast!
1
0
2023-03-31T03:39:47
Lorenzo9196
false
null
0
jed5u67
false
/r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/jed5u67/
false
1
t1_jed317x
Are you using the web UI or llama.cpp? This is the first result I got with Alpaca Native with the web UI: *Write a comedy script between Joker and Batman, who decide to host a Christmas special.* >Joker and Batman are standing backstage at a sound stage, getting ready to tape their new Christmas special. > ...
4
0
2023-03-31T03:14:50
Civil_Collection7267
false
null
0
jed317x
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jed317x/
false
4
t1_jed30wx
From the short while I played with it last night, it's what I personally expected. I'm lucky enough to have a server that can handle LLaMA with the 65B model. It does run slower than the 7B, but that's to be expected. I'll be running tests this weekend, so I can see the time difference in replies the models have from...
2
0
2023-03-31T03:14:45
redfoxkiller
false
null
0
jed30wx
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jed30wx/
false
2
t1_jed20rt
You can't copyright model weights, and you're not bound by meta's license if you never agreed to it in the first place.
6
0
2023-03-31T03:06:04
busy_beaver
false
null
0
jed20rt
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jed20rt/
false
6
t1_jed1xvh
With a batch size of just 1 on 4x A6000 (196GB VRAM), fine-tuning the 7B model was possible. This was confirmed on a Korean site. Even with this specification, full fine-tuning is not possible for the 13B model. Wake up, bro.
1
0
2023-03-31T03:05:23
Hot_Season152
false
null
0
jed1xvh
false
/r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/jed1xvh/
false
1
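The 4x A6000 claim above lines up with the usual rule of thumb for full fine-tuning with Adam in mixed precision: roughly 16 bytes per parameter (fp16 weights and gradients, fp32 master weights plus the two Adam moments). A rough sketch, assuming that per-parameter breakdown and ignoring activation memory; note 4x A6000 is nominally 4x48 = 192GB, slightly under the 196GB figure quoted:

```python
# fp16 weights + fp16 grads + fp32 master weights + fp32 Adam m + fp32 Adam v
BYTES_PER_PARAM = 2 + 2 + 4 + 4 + 4  # = 16 bytes per parameter

def train_mem_gb(n_params_billion):
    """Weight/gradient/optimizer-state memory only; activations on top."""
    return n_params_billion * BYTES_PER_PARAM  # 1e9 params * 16 B = 16 GB

# 7B needs ~112 GB and fits in 192 GB; 13B needs ~208 GB and does not
print(train_mem_gb(7), train_mem_gb(13))
```

So even before counting activations, 13B full fine-tuning overshoots four A6000s, which matches the comment.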
t1_jed1766
How good is it?
2
0
2023-03-31T02:59:00
Pretend_Jellyfish363
false
null
0
jed1766
false
/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/jed1766/
false
2