name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_jekpm99
Cool! Would you recommend just downloading LLaMA then, or is the process more involved? Do you have to run llama for 30B to work?
4
0
2023-04-01T19:48:37
Wroisu
false
null
0
jekpm99
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekpm99/
false
4
t1_jekpl6f
Yes, this is a known issue and an easy fix if you are comfortable copy-pasting code and recompiling. I also understand this is incredibly intimidating if you don't code, of course. Here is the code to change it: https://github.com/trevtravtrev/alpaca.cpp/commit/47a5e37ba38f69de2c4ab2a5c14bc1adb4ce46c7 You could have s...
2
0
2023-04-01T19:48:24
ThePseudoMcCoy
false
null
0
jekpl6f
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jekpl6f/
false
2
t1_jekpjuh
Thank you so much, I did not know about git clone! I am really excited! :)
6
0
2023-04-01T19:48:08
FriendDimension
false
null
0
jekpjuh
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekpjuh/
false
6
t1_jekp4yi
https://github.com/antimatter15/alpaca.cpp If I download alpaca from this source, will it load from the command line? (Secondary question) Would I have to download the bigger models from elsewhere, namely 30B?
3
0
2023-04-01T19:45:02
Wroisu
false
null
0
jekp4yi
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekp4yi/
false
3
t1_jekon6z
Yup, you download everything and put it in a folder (see the sketch after this record). You can download everything one by one here: https://huggingface.co/chavinlo/gpt4-x-alpaca under the files tab, make a new folder called whatever you want, and just put the files in that folder. Or you can clone the huggingface repo like github with "git clone ht...
28
0
2023-04-01T19:41:23
Inevitable-Start-653
false
null
0
jekon6z
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekon6z/
false
28
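A minimal sketch of the non-git download route described above, assuming the huggingface_hub Python package is installed (local_dir is supported in recent versions); the repo id comes from the comment, and the folder name is arbitrary:

```python
# Download every file of a Hugging Face model repo into a local folder.
# Assumes: pip install huggingface_hub
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="chavinlo/gpt4-x-alpaca",  # model repo named in the comment
    local_dir="gpt4-x-alpaca",         # call the folder whatever you want
)
```

This mirrors the "git clone" route without needing git-lfs; either way you end with all the repo files in one folder.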
t1_jekon3l
Gpt4all is more restricted and less creative, but I see it being extraordinarily useful for when I can't query an internet browser but need information on the go. Alpaca / LLaMA has creativity plus that.
7
0
2023-04-01T19:41:22
Wroisu
false
null
0
jekon3l
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekon3l/
false
7
t1_jekokqd
Yeah, the raw data of Wikipedia was definitely impressive for the compressed file size, but being able to ask questions, get context and summarization, and even ask it about logical fallacies is pretty amazing. Also you can ask it programming questions, of course, but I can't wait until we have ChatGPT-style results...
10
0
2023-04-01T19:40:52
ThePseudoMcCoy
false
null
0
jekokqd
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekokqd/
false
10
t1_jekoce1
Thank you very much, will read it now.
2
0
2023-04-01T19:39:09
FriendDimension
false
null
0
jekoce1
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekoce1/
false
2
t1_jeknp25
Sure, this is the guide that got me going: https://www.reddit.com/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp You could be up and running in 5 to 10 minutes; it's so easy compared to other options, and it does not require a beefy PC. The download link location at huggingface.com has three different file siz...
7
0
2023-04-01T19:34:19
ThePseudoMcCoy
false
null
0
jeknp25
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeknp25/
false
7
t1_jeknin0
I'm interested but don't know how to get things up and running. For instance, I know about the 1-click installer, but I'm confused about what to install from Hugging Face. The gpt4-x-alpaca model has 6 different PyTorch models; do I download all of them, and what about the other files?
11
0
2023-04-01T19:33:00
FriendDimension
false
null
0
jeknin0
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeknin0/
false
11
t1_jekngex
In Windows, alpaca runs through the command prompt by default, so if you open up PowerShell and navigate to the executable location, you can type ./chat.exe and run it through PowerShell no problem (I just tested it). Llama runs through the web interface by default, so maybe that's what you're talking about?
5
0
2023-04-01T19:32:32
ThePseudoMcCoy
false
2023-04-01T19:38:14
0
jekngex
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekngex/
false
5
t1_jekn9j9
Can you recommend any guides? What is the 20 gig file that you are using?
2
0
2023-04-01T19:31:06
FriendDimension
false
null
0
jekn9j9
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekn9j9/
false
2
t1_jekn702
Why is it not up to par?
2
0
2023-04-01T19:30:35
FriendDimension
false
null
0
jekn702
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekn702/
false
2
t1_jekn155
I’ve found for a community like that, r/Singularity is pretty good. The pinned posts in this sub more revolve around running it on local hardware. I love both of these spaces, I can’t wait to see what the future (at this rate, next week) holds!
2
0
2023-04-01T19:29:23
SRSchiavone
false
null
0
jekn155
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekn155/
false
2
t1_jekmv7w
I'm with you; it's absolutely amazing, but nobody I know is interested. Their loss. I have no clue why everyone isn't talking about this all the time.
56
0
2023-04-01T19:28:10
Inevitable-Start-653
false
null
0
jekmv7w
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekmv7w/
false
56
t1_jekmdgw
llama.cpp can run both alpaca and llama models from the command line (see the sketch after this record).
10
0
2023-04-01T19:24:31
space_iio
false
null
0
jekmdgw
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekmdgw/
false
10
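As a sketch of the command-line route this comment describes, assuming a llama.cpp checkout built with make and a quantized model file whose path is a placeholder:

```python
# Launch llama.cpp's main binary on a local GGML model.
# -m selects the model file, -p supplies the prompt (both are llama.cpp flags).
import subprocess

subprocess.run([
    "./main",                              # binary produced by building llama.cpp
    "-m", "models/ggml-alpaca-7b-q4.bin",  # placeholder model path
    "-p", "Tell me about alpacas.",
])
```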
t1_jekm8g5
It works pretty well with CPU inference, no need for a GPU. You don't even need that much RAM anymore with the latest optimizations of llama.cpp.
6
0
2023-04-01T19:23:30
space_iio
false
null
0
jekm8g5
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekm8g5/
false
6
t1_jekm5ze
That's about the size of a compressed offline English text copy of Wikipedia. The copy of Wikipedia is probably more accurate, but Llama can explain things. Best would be to integrate them like an offline Bing Chat, along with other offline references, your personal library, notes, etc!
12
0
2023-04-01T19:23:00
randomqhacker
false
null
0
jekm5ze
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekm5ze/
false
12
t1_jekjwcs
How do I get alpaca running through PowerShell, or what install did you use? Dalai UI is absolute shit for 7B & 13B... I'm currently using gpt4all as a supplement until I figure that out. Is it available on Alpaca.cpp?
2
0
2023-04-01T19:06:18
Wroisu
false
2023-04-01T19:15:10
0
jekjwcs
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jekjwcs/
false
2
t1_jekjh1o
I just want to figure out a reliable way to run alpaca 13B & 30B through PowerShell, instead of having a web UI. I have gpt4all, but it's not quite up to par.
10
0
2023-04-01T19:03:11
Wroisu
false
null
0
jekjh1o
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekjh1o/
false
10
t1_jekj8bn
> There is no biological evidence for predictive coding yet. There's some evidence from brain data: https://www.nature.com/articles/s41562-022-01516-2 > nor is there evidence that the brain actually does back-propagation during sleep. I'd agree; the consensus in neuroscience is that backprop isn't biologically plausi...
5
0
2023-04-01T19:01:22
currentscurrents
false
null
0
jekj8bn
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jekj8bn/
false
5
t1_jekig0y
Sounds like llama.cpp just got updated to use less RAM. Maybe I can run 65b now with 32GB.
1
0
2023-04-01T18:55:34
uncle-philbanks
false
null
0
jekig0y
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jekig0y/
false
1
t1_jekgt8x
llama 65b is the best model I'm aware of that you can run locally, though the improvement between 30b and 65b is small.
5
0
2023-04-01T18:43:47
Gudeldar
false
null
0
jekgt8x
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jekgt8x/
false
5
t1_jekgmi7
I'm having a ton of fun with Alpaca 30B using these parameters for creativity, taken from another post (see the sketch after this record): --temp 0.72 --repeat_penalty 1.1 --top_k 160 --top_p 0.73
13
0
2023-04-01T18:42:24
ThePseudoMcCoy
false
null
0
jekgmi7
false
/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/jekgmi7/
false
13
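A sketch of applying those same sampler settings programmatically, assuming the llama-cpp-python bindings and a placeholder model path; the command-line flags in the comment map onto keyword arguments:

```python
# Mirror --temp 0.72 --repeat_penalty 1.1 --top_k 160 --top_p 0.73 in Python.
# Assumes: pip install llama-cpp-python and a local quantized model file.
from llama_cpp import Llama

llm = Llama(model_path="models/ggml-alpaca-30b-q4.bin")  # placeholder path
out = llm(
    "Write the opening of a short story about a lighthouse keeper.",
    temperature=0.72,
    repeat_penalty=1.1,
    top_k=160,
    top_p=0.73,
    max_tokens=256,
)
print(out["choices"][0]["text"])
```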
t1_jekgfgk
Although 3090s are cheaper used on eBay, if you buy on Amazon direct from Amazon the returns would be exceptionally easy in case a GPU was junk. What a buy, haha.
2
0
2023-04-01T18:40:58
artificial_genius
false
null
0
jekgfgk
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekgfgk/
false
2
t1_jekg4wr
It is! Sadly, my PSU fried, so now I am using a laptop with Iris graphics and an MX330 GPU. I'm saving money to buy myself a PSU.
1
0
2023-04-01T18:38:54
Geralt1168
false
null
0
jekg4wr
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jekg4wr/
false
1
t1_jekf7un
$1.10 per hour for an A100 on LambdaLabs. Alpaca took 1 hour to instruct-tune Llama 7B using 8 A100s. That's $8.80 for DIY instruct-tuning llama 7B. Maybe $50 tops to instruct-tune a LoRA 30B model? Versus, what, $7k to buy a single A100? It's simple economies of scale. DIY compute clusters will never be as cheap ...
1
0
2023-04-01T18:32:29
ZestyData
false
2023-04-01T18:48:25
0
jekf7un
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekf7un/
false
1
t1_jekevjc
Yeah, I understand. I just consider running your own cloud very different to using a service.
1
0
2023-04-01T18:30:07
hanoian
false
null
0
jekevjc
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekevjc/
false
1
t1_jeked6x
Agreed. I just don't see the need to run a cloud instance if I can do so locally on hardware I already own.
2
0
2023-04-01T18:26:39
TyThomson
false
null
0
jeked6x
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeked6x/
false
2
t1_jeke4a5
Agreed. I think the cloud is the way to go for training and whatnot. But for a personal assistant, I personally don't want my data being transmitted anywhere when there's no need for it. It's not even so much about security. Maybe I'm just old. I came up in the world of BBSes and PGP-encrypted emails. I want as little...
2
0
2023-04-01T18:24:56
TyThomson
false
null
0
jeke4a5
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeke4a5/
false
2
t1_jekduxd
[removed]
1
0
2023-04-01T18:23:07
[deleted]
true
null
0
jekduxd
false
/r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/jekduxd/
false
1
t1_jekdoy9
People looking to run this locally wouldn't use ChatGPT either.
1
0
2023-04-01T18:21:56
addandsubtract
false
null
0
jekdoy9
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jekdoy9/
false
1
t1_jek9xwe
You do it through code. The data format of each dataset is different, so I can't just give you code and say "modify X". If your dataset is a list of items in a game and their descriptions, then your text might look like this (as an example for the Gemstone; see also the sketch after this record): >\### instruction > >What is a Gemstone > >\### output > ...
1
0
2023-04-01T17:55:21
Sixhaunt
false
2023-04-01T17:58:28
0
jek9xwe
false
/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jek9xwe/
false
1
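A sketch of generating that training text from structured data, with a hypothetical items list standing in for whatever the dataset actually contains:

```python
# Turn game items into "### instruction / ### output" training examples,
# following the layout quoted in the comment above. `items` is hypothetical.
items = [
    {"name": "Gemstone", "description": "A rare crystal prized by enchanters."},
    {"name": "Iron Sword", "description": "A basic one-handed blade."},
]

examples = [
    f"### instruction\n\nWhat is a {item['name']}\n\n"
    f"### output\n\n{item['description']}"
    for item in items
]
print(examples[0])
```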
t1_jek9l5n
Jetson Orin?
3
0
2023-04-01T17:52:55
Joe-Repliko
false
null
0
jek9l5n
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jek9l5n/
false
3
t1_jek774a
Interesting. My interpretation of the subreddit was that it's for developing Llama locally/yourself rather than using a premade black-box API from some big tech, i.e., a sub dedicated to the ML behind Llama and its advances, and resources on getting into this kind of development. Not strictly the niche that it has to run o...
1
0
2023-04-01T17:36:44
ZestyData
false
null
0
jek774a
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jek774a/
false
1
t1_jek6wg0
Data safety really isn't a problem with cloud compute. There's a world of difference between modern software development, which all involves cloud deployments, and giving your data over to OpenAI by using their APIs.
1
0
2023-04-01T17:34:43
ZestyData
false
2023-04-01T17:57:47
0
jek6wg0
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jek6wg0/
false
1
t1_jek6sed
[deleted]
1
0
2023-04-01T17:33:58
[deleted]
true
null
0
jek6sed
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jek6sed/
false
1
t1_jek68lu
[deleted]
3
0
2023-04-01T17:30:14
[deleted]
true
null
0
jek68lu
false
/r/LocalLLaMA/comments/128ror7/ok_this_is_not_really_funny_but_i_couldnt_help/jek68lu/
false
3
t1_jek59yt
[removed]
1
0
2023-04-01T17:23:38
[deleted]
true
null
0
jek59yt
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jek59yt/
false
1
t1_jek53s8
overdone
1
0
2023-04-01T17:22:29
Disastrous_Elk_6375
false
null
0
jek53s8
false
/r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/jek53s8/
false
1
t1_jek50d7
How comparable is it to vicuna-13b?
1
0
2023-04-01T17:21:49
Scriptod
false
null
0
jek50d7
false
/r/LocalLLaMA/comments/125u56p/colossalchat/jek50d7/
false
1
t1_jek4zb8
[deleted]
1
0
2023-04-01T17:21:37
[deleted]
true
null
0
jek4zb8
false
/r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/jek4zb8/
false
1
t1_jek4izn
[deleted]
0
0
2023-04-01T17:18:33
[deleted]
true
null
0
jek4izn
false
/r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/jek4izn/
false
0
t1_jek2935
Really waiting for that 30B version. Or at least this one's weights...
2
0
2023-04-01T17:03:03
Scriptod
false
null
0
jek2935
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jek2935/
false
2
t1_jek1hv2
The minimum you will need to run 65B 4-bit llama (no alpaca or other fine-tunes for this yet, but I expect we will have a few in a month) is about 40GB of RAM and some CPU. The cheapest way of getting it to run slow but manageable is to pack something like an i5-13400 and 48/64GB of RAM. My plan for running it is an 11400F wit...
6
0
2023-04-01T16:57:54
ThatLastPut
false
2023-04-02T15:28:39
0
jek1hv2
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jek1hv2/
false
6
t1_jek1gb9
Vastai is also a good option 👌
5
0
2023-04-01T16:57:35
H0PEN1K
false
null
0
jek1gb9
false
/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jek1gb9/
false
5
t1_jek0q4n
It's because the alpaca fine-tuning (and LoRAs) used ChatGPT to generate the instructions and the responses to the questions it was trained on. I'm sure if you filtered out all of the "As an AI trained by OpenAI" responses from the dataset and retrained Llama, it wouldn't exhibit this behavior (see the sketch after this record).
2
0
2023-04-01T16:52:43
disarmyouwitha
false
null
0
jek0q4n
false
/r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/jek0q4n/
false
2
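A sketch of the filtering step suggested above; the dataset layout (a list of dicts with an "output" field) is an assumption:

```python
# Drop ChatGPT-style refusal boilerplate from an instruct dataset before retraining.
dataset = [
    {"instruction": "Who are you?", "output": "As an AI trained by OpenAI, I cannot..."},
    {"instruction": "Name a gemstone.", "output": "Ruby."},
]
bad_markers = ["as an ai", "trained by openai"]

cleaned = [
    row for row in dataset
    if not any(marker in row["output"].lower() for marker in bad_markers)
]
print(cleaned)  # only the "Ruby." example survives
```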
t1_jejzxor
I love this idea so much. Can I train a CSS bot to do my website, too? xD
2
0
2023-04-01T16:47:22
disarmyouwitha
false
null
0
jejzxor
false
/r/LocalLLaMA/comments/1269knq/chat_emojis_from_a_character_card_click_for_json/jejzxor/
false
2
t1_jejzi33
Hey um, I think you forgot which sub you’re in. It’s quite literally in the name: r/**Local**LLaMA
7
0
2023-04-01T16:44:28
SRSchiavone
false
null
0
jejzi33
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejzi33/
false
7
t1_jejz4qg
Bump!
2
0
2023-04-01T16:41:57
SRSchiavone
false
null
0
jejz4qg
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejz4qg/
false
2
t1_jejyw2c
To have a similar setup with a GPU and decent RAM plus the server storage, how much are you going to pay each month? I would guess probably a few k$/month. I would personally prefer to buy hardware. Regarding training, that's another discussion, obviously.
1
0
2023-04-01T16:40:15
pololueco
false
null
0
jejyw2c
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejyw2c/
false
1
t1_jejtddc
If you mean for security, there isn't really any reason to think using the cloud is less secure.
1
0
2023-04-01T16:01:17
hanoian
false
null
0
jejtddc
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejtddc/
false
1
t1_jejo2u4
I think the point is to keep your data local.
5
0
2023-04-01T15:23:37
TyThomson
false
null
0
jejo2u4
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejo2u4/
false
5
t1_jejlcfm
Thanks
1
0
2023-04-01T15:04:20
IonizedRay
false
null
0
jejlcfm
false
/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jejlcfm/
false
1
t1_jejkavh
I started down this rabbit hole a week-plus back and ended up having to research a lot of things before I settled on a build. My use case will be Stable Diffusion and being able to run some of the larger home models like Llama. Video cards: I currently have an Nvidia 3080 with 10GB of VRAM. It runs Stable Diffusion...
12
0
2023-04-01T14:56:39
synn89
false
2023-04-01T15:00:07
0
jejkavh
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jejkavh/
false
12
t1_jejd8gg
I was thinking of exactly this! At 1000 tokens (or whatever), make a request to a different model with the conversation so far and ask it to summarize the conversation, setting a token limit of 256 (a 1/4 reduction), and feed this back into the current conversation =] (see the sketch after this record). In my experiments when prompting it to summarize ...
1
0
2023-04-01T14:03:11
disarmyouwitha
false
2023-04-01T14:06:40
0
jejd8gg
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jejd8gg/
false
1
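A sketch of that rolling-summary loop; count_tokens and summarize stand in for whatever tokenizer and model calls you actually use:

```python
# Once the history passes the token limit, compress it with a second model
# and carry only the summary forward into the conversation.
def compress_history(history, count_tokens, summarize,
                     limit=1000, target=256):
    """Return the history unchanged, or a <=target-token summary of it."""
    if count_tokens(history) <= limit:
        return history
    summary = summarize(history, max_tokens=target)  # roughly a 1/4 reduction
    return f"Summary of the conversation so far: {summary}\n"
```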
t1_jejcd5d
> Is the model size really like 220 GB? No, there are different LLaMA models with different sizes, and you can choose which one to download: 7B, 13B, 30B, and 65B. > Will it run on my device? You can use llama.cpp, and their page explains everything about setting it up. [https://github.com/ggerganov/llama.cpp](https:/...
1
0
2023-04-01T13:56:05
Civil_Collection7267
false
null
0
jejcd5d
false
/r/LocalLLaMA/comments/128mocy/setting_up_llama/jejcd5d/
true
1
t1_jej5w66
Lads, we moved away from DIY desktop servers 15 years ago. Have you heard of our lord and saviour, the CLOUD?
-16
0
2023-04-01T13:00:29
ZestyData
true
null
0
jej5w66
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jej5w66/
false
-16
t1_jeiwr80
visually observable
1
0
2023-04-01T11:25:42
FHSenpai
false
null
0
jeiwr80
false
/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/jeiwr80/
false
1
t1_jeivsiv
The tokenizer is inherently a single-threaded task, so maybe it would make sense from that aspect.
5
0
2023-04-01T11:13:37
[deleted]
false
null
0
jeivsiv
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeivsiv/
false
5
t1_jeiudqh
If you are interested in playing with Python code, you can. But if you are neither familiar with nor interested in Python/torch, open-source options are less attractive than ChatGPT. I'm a hobbyist (albeit with an EE degree and decades of programming experience), so I really enjoy tinkering with oobabooga's textgen codebase. ...
1
0
2023-04-01T10:55:26
__nullgate__
false
null
0
jeiudqh
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeiudqh/
false
1
t1_jeisfm5
What I don't understand is how you can transform a chunk of text into a series of questions (without using AI).
1
0
2023-04-01T10:28:36
2muchnet42day
false
null
0
jeisfm5
false
/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/jeisfm5/
false
1
t1_jeip0b6
Linode is pretty good
1
0
2023-04-01T09:38:34
EnvironmentalAd3385
false
null
0
jeip0b6
false
/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/jeip0b6/
false
1
t1_jeiovsi
[deleted]
1
0
2023-04-01T09:36:46
[deleted]
true
null
0
jeiovsi
false
/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/jeiovsi/
false
1
t1_jeiohyn
Why is that not possible? > No, I'm not talking about reshaping the base models. I'm talking about reshaping the small LoRA models, which are only a few MB in size. For 7B it is like 8MB, or 16MB if trained. But of course the LoRA adapter model for 13B is somehow a little bigger and has a bigger shape. I...
1
0
2023-04-01T09:30:58
Ok-Scarcity-7875
false
null
0
jeiohyn
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeiohyn/
false
1
t1_jeiocaj
[deleted]
1
0
2023-04-01T09:28:33
[deleted]
true
null
0
jeiocaj
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeiocaj/
false
1
t1_jeinmrn
The LLaMA or Alpaca 30B; check this guide: https://www.reddit.com/r/KoboldAI/comments/122zjd0/guide_alpaca_13b_4bit_via_koboldai_in_tavernai/jeh8iay
2
0
2023-04-01T09:17:56
reneil1337
false
null
0
jeinmrn
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeinmrn/
false
2
t1_jeimwsw
I see people talking a lot about the 30B and the 65B. What is the difference between llama 7B from gpt4all and alpaca? I don't have the hardware to experiment with it, and I didn't find any website providing a demo.
5
0
2023-04-01T09:07:13
Puzzleheaded_Acadia1
false
null
0
jeimwsw
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeimwsw/
false
5
t1_jeim2re
[deleted]
1
0
2023-04-01T08:54:36
[deleted]
true
null
0
jeim2re
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jeim2re/
false
1
t1_jeikzll
Yeah, I know that maybe sounds stupid. I just think, if the adapter model for 7B has layers with a size of 1024x1024, for example, and the adapter model for the 13B model has layers sized 2048x2048, can't you just upscale the 7B adapter like you would upscale an image and show it on a 2048x2048 screen...ehm...
1
0
2023-04-01T08:38:24
Ok-Scarcity-7875
false
null
0
jeikzll
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeikzll/
false
1
t1_jeik6qt
🕺
1
0
2023-04-01T08:26:26
QTQRQD
false
null
0
jeik6qt
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeik6qt/
false
1
t1_jeik2gw
No, I'm not talking about reshaping the base models. I'm talking about reshaping the small LoRA models, which are only a few MB in size. For 7B it is like 8MB, or 16MB if trained. But of course the LoRA adapter model for 13B is somehow a little bigger and has a bigger shape. I just want to reshape the 7B LoRA...
1
0
2023-04-01T08:24:39
Ok-Scarcity-7875
false
2023-04-01T08:29:37
0
jeik2gw
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeik2gw/
false
1
t1_jeijnis
You can't just reshape the matrices; they're literally bigger. To suggest we reshape the larger models to fit the matrices into a LoRA for a smaller model is like saying you want to reshape a semi truck to fit into a home garage.
1
0
2023-04-01T08:18:33
QTQRQD
false
null
0
jeijnis
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeijnis/
false
1
t1_jeijiay
What about reshape() of the small alpaca model (**EDIT: adapter, not model, sorry**) so it fits, maybe filling what can't be filled with zeros? (See the sketch after this record.)
1
0
2023-04-01T08:16:22
Ok-Scarcity-7875
false
2023-04-01T08:26:25
0
jeijiay
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jeijiay/
false
1
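A sketch of why the zero-padding idea breaks down, using LLaMA's published hidden sizes (4096 for 7B, 5120 for 13B):

```python
# Pad a 7B-sized LoRA factor out to 13B's width. The shapes then "fit",
# but the learned weights end up aligned to the wrong features.
import torch

lora_a_7b = torch.randn(8, 4096)   # rank-8 LoRA factor for a 7B layer
padded = torch.zeros(8, 5120)      # 13B layer width
padded[:, :4096] = lora_a_7b       # legal matmul shape now...
# ...but 13B's first 4096 hidden dimensions are not the same features as
# 7B's, so the adapter's learned directions are meaningless after padding.
```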
t1_jeih31n
I am the creator of https://github.com/LostRuins/llamacpp-for-kobold. It runs a local HTTP server serving a KoboldAI-compatible API with a built-in web UI (see the sketch after this record). Compatible with all llama.cpp and alpaca.cpp models.
1
0
2023-04-01T07:41:18
HadesThrowaway
false
null
0
jeih31n
false
/r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/jeih31n/
false
1
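A sketch of calling such a server from Python; the port and the KoboldAI-style /api/v1/generate route are assumptions about a default local setup:

```python
# Query a locally running KoboldAI-compatible API (e.g. llamacpp-for-kobold).
import requests

resp = requests.post(
    "http://localhost:5001/api/v1/generate",  # assumed default port and route
    json={"prompt": "Hello, llama!", "max_length": 64},
)
print(resp.json())
```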
t1_jeig772
What would you run on a single RTX 3090 with 32GB of RAM? I have been playing around with Stable Diffusion since it came out but have no experience with llama.
1
0
2023-04-01T07:28:48
sekopasa
false
null
0
jeig772
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeig772/
false
1
t1_jeif1y5
Yeah, this blows my memory extensions away. Hats off to you.
2
0
2023-04-01T07:12:52
theubie
false
null
0
jeif1y5
false
/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/jeif1y5/
false
2
t1_jeiddel
[This](https://www.reddit.com/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/?utm_source=share&utm_medium=ios_app&utm_name=iossmf) post had some great replies by a guy who fed his model Elder Scrolls content. He turned it into a kind of Elder Scrolls wiki/expert. It should give you a vague idea ...
1
0
2023-04-01T06:50:12
Nezarah
false
null
0
jeiddel
false
/r/LocalLLaMA/comments/126k42q/using_twitter_data/jeiddel/
false
1
t1_jeid1om
Llama-65b-4bit should work fine on 2xRTX3090 [https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how\_to\_install\_llama\_8bit\_and\_4bit/](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/)
5
0
2023-04-01T06:45:52
Nondzu
false
null
0
jeid1om
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeid1om/
false
5
t1_jeiarve
> It's not clear whether we're hitting VRAM latency limits, CPU limitations, or something else — probably a combination of factors — but your CPU definitely plays a role. We tested an RTX 4090 on a Core i9-9900K and the 12900K, for example, and the latter was almost twice as fast. > It looks like some of the work a...
9
0
2023-04-01T06:16:00
ChiaraStellata
false
null
0
jeiarve
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeiarve/
false
9
t1_jeiapy6
Wanted to assess the performance you can count on with CPU vs GPU. My goal is to check how viable it is to run those models or similar ones in a production environment in an on-premise deployment. It's easier to get CPUs for that instead of having the clients buy GPUs. 4 disks: it was just cheaper to buy 4x2TB than 2x4...
5
0
2023-04-01T06:15:20
Loose_Historian
false
null
0
jeiapy6
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeiapy6/
false
5
t1_jeiao4z
> Great, thank you! You're welcome!
2
0
2023-04-01T06:14:42
exclaim_bot
false
null
0
jeiao4z
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeiao4z/
false
2
t1_jeianhb
Great, thank you!
3
0
2023-04-01T06:14:28
the_quark
false
null
0
jeianhb
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeianhb/
false
3
t1_jeiahf3
Not at home at the moment, but I will post some numbers in the evening.
8
0
2023-04-01T06:12:18
Loose_Historian
false
null
0
jeiahf3
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jeiahf3/
false
8
t1_jei75te
Speed is apparently [constrained by something](https://www.tomshardware.com/news/running-your-own-chatbot-on-a-single-gpu) other than GPU. I don't think it's been studied in depth.
5
0
2023-04-01T05:31:41
friedrichvonschiller
false
null
0
jei75te
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei75te/
false
5
t1_jei6s9h
Is it important to have a good CPU? I thought most of the computation was done on the GPU. And why is your disk so ridiculously fast; is that needed?
3
0
2023-04-01T05:27:12
ChiaraStellata
false
null
0
jei6s9h
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei6s9h/
false
3
t1_jei6nw4
What's your performance like, if you don't mind? How many tokens per second, and how long do replies generally take? I'm thinking seriously about getting a second 3090 to try this myself.
4
0
2023-04-01T05:25:48
the_quark
false
null
0
jei6nw4
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei6nw4/
false
4
t1_jei5ynj
I am running Llama-65b-4bit locally on a Threadripper 3970X, Aorus TRX40 Extreme, 256GB DDR4, 2x Asus 3090 in an O11D XL, and 4x NVMe SSDs in RAID 0.
20
0
2023-04-01T05:17:41
Loose_Historian
false
null
0
jei5ynj
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei5ynj/
false
20
t1_jei5xtt
48GB VRAM = 65B in 4-bit ([an A6000, for instance](https://aituts.com/llama/)). However, the amount of memory required may change as implementations improve, especially if you're willing to trade speed. This code is all brand new. I don't think anyone knows what the ultimate system requirements are going to be. I would ...
3
0
2023-04-01T05:17:25
friedrichvonschiller
false
2023-04-01T05:42:16
0
jei5xtt
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei5xtt/
false
3
t1_jei5ckx
> I had to chase libraries similarly when I was waiting for Navi 21 support. You have infinitely more faculty in building code than I do. I'm genuinely impressed. I'm proud of myself for rebuilding an SRPM. > The unfortunate reality is that people like us, who use consumer cards for compute, are in the minority in te...
1
0
2023-04-01T05:10:43
friedrichvonschiller
false
2023-04-01T05:13:52
0
jei5ckx
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jei5ckx/
false
1
t1_jei3mrf
Hey, curious what applications do you mean? I'm keen to check them out.
1
0
2023-04-01T04:51:59
crisp_spruce
false
null
0
jei3mrf
false
/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/jei3mrf/
false
1
t1_jei1oiq
A lot of the big data centers like DOE's OLCF and LLNL, Meta, Microsoft, etc. have adopted or are adopting AMD hardware. This is likely where all of the dev hours are going -- the billion-dollar contracts. Unfortunately it means that small-system users have to wait for out-of-the-box solutions. Now, the ISA for gfx1...
3
0
2023-04-01T04:31:32
estebanyelmar
false
null
0
jei1oiq
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jei1oiq/
false
3
t1_jei1fyi
[deleted]
0
0
2023-04-01T04:29:01
[deleted]
true
null
0
jei1fyi
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jei1fyi/
false
0
t1_jei0tnu
Thanks for engaging in good faith with me here. I think it is true over longer periods of time. Being awake is necessary as well. I probably ought to have written "encoding of memory in durable forms" because learning is such a gooey term. Even my rephrasing is bad. I would need years of study to be comfortable ma...
1
0
2023-04-01T04:22:43
friedrichvonschiller
false
2023-04-01T05:12:48
0
jei0tnu
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jei0tnu/
false
1
t1_jehz4p5
> Consolidation represents the processes by which a memory becomes stable Sure. You can say consolidation is similar to training. Maybe I misunderstood your claim that learning in general requires sleep, which is clearly not true.
2
0
2023-04-01T04:05:48
tvetus
false
null
0
jehz4p5
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jehz4p5/
false
2
t1_jehxs6g
> And I'm still a little on the fence as to whether it's something I'd actually recommend. For me, when I was looking into the Teslas and thinking about whether I'd need to buy a fan, power adapter, and 3D-printed fan mount, I was wondering if a 3060 12GB card just made more sense. If an open-air rig means you don't need al...
1
0
2023-04-01T03:52:46
synn89
false
null
0
jehxs6g
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehxs6g/
false
1
t1_jehv5f0
What are beams if not recursive deliberation?
1
0
2023-04-01T03:28:13
friedrichvonschiller
false
null
0
jehv5f0
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jehv5f0/
false
1
t1_jehv0sx
I have no idea whether there is a precise definition, I'm afraid. If neurology is anything like my field, then there is only a savage debate. You people can downvote me on this all you want. If you want to argue it, [go argue with Harvard](http://healthysleep.med.harvard.edu/healthy/matters/benefits-of-sleep/learnin...
1
0
2023-04-01T03:27:03
friedrichvonschiller
false
2023-04-01T03:57:42
0
jehv0sx
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jehv0sx/
false
1
t1_jehuwkg
Question to no one in particular: would a Mac mini or Mac Studio be an alternative to building an RTX 3090 PC to run inference on these local models? Can they train? I have been thinking of getting one as a desktop to complement my Windows desktop and M1 Air, which I have been using to familiarize myself with llama.cpp.
1
0
2023-04-01T03:25:59
stopandwatch
false
null
0
jehuwkg
false
/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/jehuwkg/
false
1
t1_jehuktx
I know. I tried to emphasize "something like" twice. I linked to supporting material. I don't understand why everyone is fixated on equating them.
1
0
2023-04-01T03:23:05
friedrichvonschiller
false
null
0
jehuktx
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jehuktx/
false
1