name
string
body
string
score
int64
controversiality
int64
created
timestamp[us]
author
string
collapsed
bool
edited
timestamp[us]
gilded
int64
id
string
locked
bool
permalink
string
stickied
bool
ups
int64
t1_jezxghl
Yes, it is. It's probably the best way to run it: https://youtu.be/iQ3Lhy-eD1s
5
0
2023-04-05T01:40:53
YuhFRthoYORKonhisass
false
null
0
jezxghl
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezxghl/
false
5
t1_jezse0p
Yes, it's a semantic issue.
1
0
2023-04-05T01:03:17
Art10001
false
null
0
jezse0p
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezse0p/
false
1
t1_jezs4v2
I wonder if anyone has compiled llama.cpp with -O3 and what difference it makes. Or even, I daresay, -Ofast.
1
0
2023-04-05T01:01:25
Art10001
false
null
0
jezs4v2
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jezs4v2/
false
1
t1_jezs4jo
I wonder if anyone has compiled llama.cpp with -O3 and what difference it makes. Or even, I daresay, -Ofast.
1
0
2023-04-05T01:01:21
Art10001
false
null
0
jezs4jo
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jezs4jo/
false
1
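For context on the two comments above: llama.cpp's stock Makefile already compiled with -O3 around this time, so the more interesting experiment is -Ofast. A minimal sketch of one way to try it, assuming the Makefile spells the flag exactly as -O3 (verify against your checkout before running):

```sh
# Rebuild llama.cpp with -Ofast instead of the default -O3.
# Assumes a GNU toolchain; note -Ofast relaxes strict IEEE float semantics.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
sed -i 's/-O3/-Ofast/g' Makefile
make clean && make
```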
t1_jezs2u0
Ah, so it *is* running 30b, but the name is just a semantic issue? Also, I’m running it using the antimatter download if that matters (https://github.com/antimatter15/alpaca.cpp/releases/tag/81bd894)
1
0
2023-04-05T01:00:59
Wroisu
false
null
0
jezs2u0
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezs2u0/
false
1
t1_jezrss4
If they said it's hardcoded then sadly it is.
1
0
2023-04-05T00:58:53
Art10001
false
null
0
jezrss4
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezrss4/
false
1
t1_jezqwtf
The full command is ./chat, and then I presume it's running (ggml-alpaca-7b-04.bin). I downloaded from here (https://huggingface.co/Pi3141/alpaca-lora-30B-ggml/tree/main). The original thread said to rename to your specific model, so I renamed (ggml-model-q4_0.bin) to (ggml-alpaca-30b-04.bin) but this caused a...
1
0
2023-04-05T00:52:13
Wroisu
false
null
0
jezqwtf
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezqwtf/
false
1
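A sketch of how the rename in the comment above could be avoided entirely: binaries derived from llama.cpp, including alpaca.cpp's chat, generally accept a model-path flag. The flag is assumed to be -m here; check ./chat --help on your build.

```sh
# Load the downloaded 30B weights directly instead of renaming them
# to the default filename the binary expects.
./chat -m ./ggml-model-q4_0.bin
```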
t1_jezqo5p
That's curious and indeed interesting.
2
0
2023-04-05T00:50:29
Art10001
false
null
0
jezqo5p
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezqo5p/
false
2
t1_jezqidt
I suggest also trying Koala from the same demo page: https://chat.lmsys.org/
1
0
2023-04-05T00:49:18
polawiaczperel
false
null
0
jezqidt
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezqidt/
false
1
t1_jezq0tm
So far it's quite impressive compared to 13B Alpaca for chatting. More verbose, stays in character and in scene, has OOC prompting. Was able to demonstrate Italian but asked me to stay in English for prompting. Asked me if anything interesting had happened in my life and I mentioned "Well, this one time I met a vampire...
1
0
2023-04-05T00:45:41
synn89
false
null
0
jezq0tm
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezq0tm/
false
1
t1_jezpyjm
For the last question? It was pretty interesting actually - it said something along the lines of “it doesn’t mind because it has time to ruminate & think without the interruptions of the outside world”. Unfortunately I had FN lock on when I tried to screenshot, so it didn’t save.
2
0
2023-04-05T00:45:14
Wroisu
false
2023-04-05T17:11:51
0
jezpyjm
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezpyjm/
false
2
t1_jezpwqx
Yes that's what it's implying. I am not sure which one it wants to use. If there's a file name already you could change it for the 30B. Could you paste the full command you're running here?
1
0
2023-04-05T00:44:52
Art10001
false
null
0
jezpwqx
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezpwqx/
false
1
t1_jezplz9
> in the coming weeks.

...I mean, I don't blame the devs themselves, and I hope this is because it's going to be *just that good* but... um... AMD?
1
0
2023-04-05T00:42:40
friedrichvonschiller
false
null
0
jezplz9
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jezplz9/
false
1
t1_jezpd24
https://github.com/ggerganov/llama.cpp/pull/613
1
0
2023-04-05T00:40:49
jsalsman
false
null
0
jezpd24
false
/r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jezpd24/
false
1
t1_jezp71i
Ah, I see. Thank you for the information. So, is there a flag pointing to 30B that I'm missing? Is that what the other comment is implying? If so, what would I need to do to execute it?
1
0
2023-04-05T00:39:34
Wroisu
false
null
0
jezp71i
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezp71i/
false
1
t1_jeznwj2
Yes. Sadly I can't give instructions. The ANE should let it run faster than anything, give GPUs and other CPUs a run for their money... but nobody has converted it to CoreML *so far*.
3
0
2023-04-05T00:30:03
Art10001
false
null
0
jeznwj2
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeznwj2/
false
3
t1_jezn8ii
Somebody should make convert.cpp. GGML is already C/C++ I heard.
3
0
2023-04-05T00:25:05
Art10001
false
null
0
jezn8ii
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezn8ii/
false
3
t1_jezn35z
> It just seems to have emergent properties the more good data gets shoveled in

Parameters, mainly.
2
0
2023-04-05T00:24:01
Art10001
false
null
0
jezn35z
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jezn35z/
false
2
t1_jezmk8m
Red Magic phones have up to 16 GB RAM. Some other phones brag of having 18, but they may or may not have 16+2 swap instead.
1
0
2023-04-05T00:20:11
Art10001
false
null
0
jezmk8m
false
/r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jezmk8m/
false
1
t1_jezmftw
Only using a portion of the GPU is common. Your [CPU is apparently sometimes the bottleneck](https://www.tomshardware.com/news/running-your-own-chatbot-on-a-single-gpu), but we don't know yet. It won't use more RAM than it needs.
2
0
2023-04-05T00:19:17
friedrichvonschiller
false
null
0
jezmftw
false
/r/LocalLLaMA/comments/12bxgmy/help_d/jezmftw/
false
2
t1_jezlxcj
Has anyone tried the Apple M series on them? Somebody should really convert one to CoreML to allow using the Neural Engine. Nowadays it has 16 cores that can execute trillions of operations.
2
0
2023-04-05T00:15:29
Art10001
false
null
0
jezlxcj
false
/r/LocalLLaMA/comments/12aiv10/single_board_computers_with_64_gb_ram/jezlxcj/
false
2
t1_jezloaa
What did it say?
2
0
2023-04-05T00:13:38
Art10001
false
null
0
jezloaa
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezloaa/
false
2
t1_jezlhwv
Flags, like -v or --file, are basic instructions for a command line program, written after the executable name before pressing Enter. For example, -v prints the version number, and --file file1 loads the file `file1`. Even standard Windows CMD/PowerShell commands have them.
1
0
2023-04-05T00:12:17
Art10001
false
null
0
jezlhwv
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jezlhwv/
false
1
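To make the flags explanation above concrete, a tiny hedged illustration; -v and --file are the comment's own generic examples and are hypothetical for any given program, including the chat binary in this thread:

```sh
# A short boolean flag and a long flag that takes a value
# (both hypothetical: check ./chat --help for real options).
./chat -v            # print the version number and exit
./chat --file file1  # load the file "file1"
```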
t1_jezkdu8
Linux: `python server.py --cai-chat --wbits 4 --groupsize 128`

Windows: ...give me a min, I'll reboot and check^^;
1
0
2023-04-05T00:04:03
disarmyouwitha
false
null
0
jezkdu8
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jezkdu8/
false
1
t1_jezhakv
I thought the weights are available, just in a roundabout way, via diffs: [https://github.com/young-geng/EasyLM/blob/main/docs/koala.md](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)
4
0
2023-04-04T23:40:48
SquishyBrainStick
false
null
0
jezhakv
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jezhakv/
false
4
t1_jezdrdb
I see. Thanks for your answer and for the link. I will try your recommendation "gpt4-x-alpaca" then.
1
0
2023-04-04T23:14:26
Outside_You_Forever
false
null
0
jezdrdb
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jezdrdb/
false
1
t1_jezc6m0
It's the number of tokens in the prompt that are fed into the model at a time. For example, if your prompt is 8 tokens long and the batch size is 4, then it'll send two chunks of 4. It _may_ be more efficient to process in larger chunks. For some models or approaches, sometimes that is the case. It will depend on how ll...
2
0
2023-04-04T23:02:34
KerfuffleV2
false
null
0
jezc6m0
false
/r/LocalLLaMA/comments/12aj0ze/what_is_batchsize_in_llamacpp_also_known_as_n/jezc6m0/
false
2
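In llama.cpp this parameter is exposed as -b / --batch-size; a hedged sketch (model path hypothetical) of the two-chunks-of-4 behavior described above:

```sh
# With -b 4, an 8-token prompt is fed to the model as two chunks of 4.
./main -m ./models/13B/ggml-model-q4_0.bin -b 4 \
    -p "Write one sentence about llamas." -n 64
```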
t1_jezbmpw
> 64GB of RAM

Use [llama.cpp](https://github.com/ggerganov/llama.cpp). The GitHub page explains the complete setup process, and you'd be able to run 13B LLaMA or a larger model. If you'd prefer to use the web UI instead and don't mind something smaller than 13B, then this [4-bit Alpaca Native](https://huggingface.co/o...
1
0
2023-04-04T22:58:25
Technical_Leather949
false
null
0
jezbmpw
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jezbmpw/
false
1
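The setup process referred to above looked roughly like this at the time; the convert and quantize interfaces changed frequently, so treat this as a sketch and follow the README of your actual checkout:

```sh
# Approximate llama.cpp setup circa April 2023 (verify against the README).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && make
# Convert the original PyTorch weights (placed under models/13B) to ggml f16.
python3 convert-pth-to-ggml.py models/13B/ 1
# Quantize to 4 bits; the trailing 2 selected the q4_0 type in this era.
./quantize ./models/13B/ggml-model-f16.bin ./models/13B/ggml-model-q4_0.bin 2
# Run an interactive session on the quantized model.
./main -m ./models/13B/ggml-model-q4_0.bin -n 256 -i
```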
t1_jezafau
[deleted]
1
0
2023-04-04T22:49:30
[deleted]
true
null
0
jezafau
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jezafau/
false
1
t1_jez9yiw
What if you have 8GB of VRAM and 64GB of RAM? Is there a way to run the 13B model using those specs?
1
0
2023-04-04T22:45:59
ninjasaid13
false
null
0
jez9yiw
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jez9yiw/
false
1
t1_jez72ex
Let me know if you get that working. I was looking at the RK3588, but if it doesn't work, I might not get one.
1
0
2023-04-04T22:24:43
DataPhreak
false
null
0
jez72ex
false
/r/LocalLLaMA/comments/129kue0/is_llamacpp_any_good_on_arm_eg_ampere_altra_or/jez72ex/
false
1
t1_jez68k8
Yeah, I've been trying to get llama going, but I've run into some problems. I wonder if there's a Discord server or IRC channel where I can ask for help, as Reddit tends to be a bit slow.
1
0
2023-04-04T22:18:40
lelwanichan
false
null
0
jez68k8
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jez68k8/
false
1
t1_jez5vn2
When I try to load the model on this, I get an error saying (bad magic). Any idea what could be the cause?
2
0
2023-04-04T22:16:02
lelwanichan
false
null
0
jez5vn2
false
/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/jez5vn2/
false
2
t1_jez5rzi
Yeah, sorry, it was either really early or really late when I read the post, tysm.
1
0
2023-04-04T22:15:18
nstevnc77
false
null
0
jez5rzi
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jez5rzi/
false
1
t1_jez3iw8
In the link provided in the post? Vicuna 13B GPTQ 4bit 128g?
2
0
2023-04-04T21:59:04
Nezarah
false
null
0
jez3iw8
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jez3iw8/
false
2
t1_jez0ez5
Download the gpt4-x-alpaca model; this one is totally unfiltered :D
7
0
2023-04-04T21:37:35
Wonderful_Ad_5134
false
null
0
jez0ez5
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jez0ez5/
false
7
t1_jeyyyag
I'll shoot you my command once I get home to look it up. You need to run it with `--wbits 4 --groupsize 128`. I also had a lot of issues with the name of my folder that ooga didn't like.
1
0
2023-04-04T21:27:38
disarmyouwitha
false
null
0
jeyyyag
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jeyyyag/
false
1
t1_jeyydjv
Lmao, serious?
4
0
2023-04-04T21:23:42
RebornZA
false
null
0
jeyydjv
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jeyydjv/
false
4
t1_jeyxuzc
The documentation is extensive, I like it
2
0
2023-04-04T21:20:04
petitponeyrose
false
null
0
jeyxuzc
false
/r/LocalLLaMA/comments/12bo18z/baize_is_an_opensource_chat_model_finetuned_with/jeyxuzc/
false
2
t1_jeyxsu9
I run 30b on 32 gigs, but I'm curious about the larger models myself.
4
0
2023-04-04T21:19:40
ThePseudoMcCoy
false
null
0
jeyxsu9
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeyxsu9/
false
4
t1_jeyxjpw
You think 64gb of system memory would be enough for the large model?
1
0
2023-04-04T21:17:57
lelwanichan
false
null
0
jeyxjpw
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jeyxjpw/
false
1
t1_jeyq8yg
Also: https://huggingface.co/elinas/alpaca-30b-lora-int4

I've gotten the above to work in KoboldAI, but not Oobabooga.
2
0
2023-04-04T20:29:20
synn89
false
null
0
jeyq8yg
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jeyq8yg/
false
2
t1_jeyq8u7
[deleted]
2
0
2023-04-04T20:29:19
[deleted]
true
null
0
jeyq8u7
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jeyq8u7/
false
2
t1_jeyoond
OK, I found a fix. I had to replace llama_inference_offload.py with the new one. Thank you!
2
0
2023-04-04T20:19:28
Famberlight
false
2023-04-04T20:38:51
0
jeyoond
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jeyoond/
false
2
t1_jeyo89g
No, I used the johnsmith repo. It's pretty much the only thing I used.
1
0
2023-04-04T20:16:33
2muchnet42day
false
null
0
jeyo89g
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jeyo89g/
false
1
t1_jeynaml
Are you running it with llama.cpp? Do you need to convert the weights?
1
0
2023-04-04T20:10:39
redboundary
false
null
0
jeynaml
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jeynaml/
false
1
t1_jeykzj0
[deleted]
19
0
2023-04-04T19:56:00
[deleted]
true
null
0
jeykzj0
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jeykzj0/
false
19
t1_jeykwfp
[deleted]
3
0
2023-04-04T19:55:28
[deleted]
true
null
0
jeykwfp
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jeykwfp/
false
3
t1_jeykj1w
And for people that are curious, this is using gpt-3.5-turbo to generate multi-line conversations on various subjects and does so with normal prompting. This means that certain subjects will kick in the GPT AI censor. For a general use release this makes sense and is pointed out in their paper. Personally, I like role...
9
0
2023-04-04T19:53:08
synn89
false
null
0
jeykj1w
false
/r/LocalLLaMA/comments/12bo18z/baize_is_an_opensource_chat_model_finetuned_with/jeykj1w/
false
9
t1_jeyk7zb
I'm definitely being a bit salty, but why don't these researchers just release their papers when they have the model weights available to release? This + the inane "i can't do this bc it's against my ethics" type responses in training sets/models is infuriating. I will literally comp $500 to someone with more skill than me ...
27
0
2023-04-04T19:51:12
violent_cat_nap
false
null
0
jeyk7zb
false
/r/LocalLLaMA/comments/12bsfvg/koala_a_dialogue_model_for_academic_research/jeyk7zb/
false
27
t1_jeyk56d
[deleted]
1
0
2023-04-04T19:50:42
[deleted]
true
null
0
jeyk56d
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jeyk56d/
false
1
t1_jeyirqb
[deleted]
2
0
2023-04-04T19:41:57
[deleted]
true
null
0
jeyirqb
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jeyirqb/
false
2
t1_jeyhoug
[deleted]
1
0
2023-04-04T19:35:06
[deleted]
true
null
0
jeyhoug
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jeyhoug/
false
1
t1_jeyh6eb
Do you have the latest version of GPTQ for LLaMA? That's an old issue that should have been resolved by this PR: [https://github.com/qwopqwop200/GPTQ-for-LLaMa/pull/99](https://github.com/qwopqwop200/GPTQ-for-LLaMa/pull/99). Make sure to switch to the `cuda` branch though, since `triton` doesn't support Windows yet.
2
0
2023-04-04T19:31:48
xZANiTHoNx
false
null
0
jeyh6eb
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jeyh6eb/
false
2
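A hedged sketch of getting onto that `cuda` branch; the clone and checkout are standard git, and the build step assumes the setup_cuda.py script the repo shipped at the time:

```sh
# Fresh clone directly on the cuda branch, then build the CUDA kernel.
git clone -b cuda https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
# In an existing checkout, switching branches is enough:
# git fetch origin && git checkout cuda
```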
t1_jeyg9tg
[removed]
1
0
2023-04-04T19:26:03
[deleted]
true
null
0
jeyg9tg
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jeyg9tg/
false
1
t1_jeyfft7
It's still the same error. I think the problem is in one of those pip modules, or miniconda, or somewhere in the code. It doesn't even start to generate; it just throws the error immediately.
1
0
2023-04-04T19:20:45
Famberlight
false
null
0
jeyfft7
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jeyfft7/
false
1
t1_jeyeki8
Yes, I already have 7b alpaca and it's quite good. But I heard that there is a big gap between 7 and 13b models. Just hoped to try a better one
2
0
2023-04-04T19:15:03
Famberlight
false
null
0
jeyeki8
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jeyeki8/
false
2
t1_jeyeh7l
I am not blaming the dev. They are probably severely understaffed in the ROCM department and/or mismanaged. But yeah super frustrating as a customer.
3
0
2023-04-04T19:14:28
Tiefbegabt
true
null
0
jeyeh7l
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jeyeh7l/
false
3
t1_jeye7wb
So how do I run it on windows?
2
0
2023-04-04T19:12:46
Necessary_Ad_9800
false
null
0
jeye7wb
false
/r/LocalLLaMA/comments/12bo18z/baize_is_an_opensource_chat_model_finetuned_with/jeye7wb/
false
2
t1_jeye16l
I haven't checked this yet. But it does seem to be getting somewhere.

```csharp
using UnityEngine;

public class SolarSystem : MonoBehaviour
{
    public GameObject star;
    public GameObject[] planets;
    public float orbitalDistance = 10f;

    private void Start()
    ...
```
1
0
2023-04-04T19:11:31
Radiant_Dog1937
false
null
0
jeye16l
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jeye16l/
false
1
t1_jeybf33
thanks
1
0
2023-04-04T18:54:23
3deal
false
null
0
jeybf33
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jeybf33/
false
1
t1_jeyaqy1
So:

- 7B, 13B and 30B models. Based on Llama, so non-commercial, but that's pretty much where we're at right now until we have good open source models.
- Included data sets, which also include specific types of chats like medical.
- Clean, simple code to help with your own data sets.
- Full documentation.

This is awesome. Tha...
9
0
2023-04-04T18:50:03
synn89
false
null
0
jeyaqy1
false
/r/LocalLLaMA/comments/12bo18z/baize_is_an_opensource_chat_model_finetuned_with/jeyaqy1/
false
9
t1_jey9347
I thought it would run on CPU or something hehe. There has got to be a 7b version of alpaca native in 4bit too. Llama is really good even at 7b I bet.
1
0
2023-04-04T18:39:10
artificial_genius
false
null
0
jey9347
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jey9347/
false
1
t1_jey7sjz
Set "---pre_layer" to "1" and go up from there.
1
0
2023-04-04T18:30:41
PsychologicalSock239
false
null
0
jey7sjz
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jey7sjz/
false
1
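Spelled out with the other flags used in this thread, that advice comes to something like the sketch below; as I understand the webui flag, --pre_layer sets how many layers go to the GPU, with the rest offloaded to CPU (the model name is the one from the later comment):

```sh
# Start with a single layer on the GPU, then raise --pre_layer until
# VRAM runs out; the remaining layers run on the CPU (slower).
python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128 --pre_layer 1
```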
t1_jey6oxt
I figure this is just a side effect of leveraging ChatGPT for training the model; it learns to respond as ChatGPT does. It's not because the devs set out to create a censored model, and they won't be as committed to fighting workarounds as OpenAI is.
4
0
2023-04-04T18:23:37
the_quark
false
null
0
jey6oxt
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jey6oxt/
false
4
t1_jey55r2
Interesting, I have not played with Vicuna yet, but I hear it is very good, except for the censoring. It was funny to find out that a vicuña is an animal similar to the llama and alpaca. I did not know this.
3
0
2023-04-04T18:13:46
SlavaSobov
false
null
0
jey55r2
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jey55r2/
false
3
t1_jey3sky
[removed]
1
0
2023-04-04T18:04:59
[deleted]
true
null
0
jey3sky
false
/r/LocalLLaMA/comments/12br5qk/i_made_a_script_to_generate_linux_commands_using/jey3sky/
false
1
t1_jey3lh2
Same error
1
0
2023-04-04T18:03:42
Famberlight
false
null
0
jey3lh2
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jey3lh2/
false
1
t1_jey36l8
Did you try adding `--model_type LLaMA`? Edit: probably not this... I had to do this for Vicuna to run.
1
0
2023-04-04T18:00:59
skatardude10
false
null
0
jey36l8
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jey36l8/
false
1
t1_jexz6wd
> https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g

Thank you! I'll definitely try Alpaca if I'm able to run 13B LLaMA on my GPU/CPU. It seems that an 8GB GPU alone is not enough for 13B :(
1
0
2023-04-04T17:35:15
Famberlight
false
null
0
jexz6wd
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jexz6wd/
false
1
t1_jexym6q
I use a gpu and don't know much about running the cpu stuff. I just wanted you to have a better model when you get everything running :) [https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g](https://huggingface.co/anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g)
1
0
2023-04-04T17:31:26
artificial_genius
false
null
0
jexym6q
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jexym6q/
false
1
t1_jexym7z
I tried `python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128 --pre_layer 20` but it still outputs the same error.
1
0
2023-04-04T17:31:26
Famberlight
false
null
0
jexym7z
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jexym7z/
false
1
t1_jexygda
No… you cannot feed proprietary information into ChatGPT.
6
0
2023-04-04T17:30:22
Tilted_reality
false
null
0
jexygda
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jexygda/
false
6
t1_jexxn6v
[deleted]
3
0
2023-04-04T17:25:04
[deleted]
true
null
0
jexxn6v
false
/r/LocalLLaMA/comments/12bovxx/need_help_running_llama_13b4bit128g_with_cpu/jexxn6v/
false
3
t1_jexwchu
Once [this Pull Request](https://github.com/oobabooga/text-generation-webui/pull/767) gets merged, it will be preinstalled in oobabooga's text-generation-webui. Until then, you'll have to set the parameters yourself or create a textfile `LLaMA-Precise.txt` in the presets subdir with this content: do_sample=True top_...
1
0
2023-04-04T17:16:32
WolframRavenwolf
false
null
0
jexwchu
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jexwchu/
false
1
t1_jexw8dc
You want more salt on that wound? One dev responded after all of this shitshow, after angering the community and letting us discover they have already been working on 5.6.0 and 6.0 all this time. **ENJOY!** [https://github.com/RadeonOpenCompute/ROCm/discussions/1836#discussioncomment-5521211](https://github.com/Ra...
4
0
2023-04-04T17:15:46
Notfuckingcannon
false
null
0
jexw8dc
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jexw8dc/
false
4
t1_jexw1de
I don't think even the best AI models have this fixed, with millions at stake. Maybe try running it at a lower temperature and check how that affects the hallucinations. Also, maybe change the prompt to be more strict about factual information; it could work.
1
0
2023-04-04T17:14:29
ThatLastPut
false
null
0
jexw1de
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jexw1de/
false
1
t1_jexuy6j
Where do you get LlaMa-precise?
2
0
2023-04-04T17:06:26
-becausereasons-
false
null
0
jexuy6j
false
/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/jexuy6j/
false
2
t1_jexubkz
So are you asking about making the Gradio interface? After ooga is set up, you run with `--share` and it should make a Gradio link that you can connect to. If you are having issues getting going, here is the wiki: [https://github.com/oobabooga/text-generation-webui/wiki/](https://github.com/oobabooga/text-genera...
1
0
2023-04-04T17:02:22
artificial_genius
false
null
0
jexubkz
false
/r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/jexubkz/
false
1
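The minimal form of what's described there, assuming an already working text-generation-webui install:

```sh
# Launch the webui with a temporary public Gradio share link.
python server.py --share
```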
t1_jextox2
Can this run on an M1 Max MacBook Pro?
3
0
2023-04-04T16:58:19
watchforwaspess
false
null
0
jextox2
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jextox2/
false
3
t1_jexs5y8
Trying to fight the urge to create a bajillion colab notebooks to train the next llama 😩
1
0
2023-04-04T16:48:38
Scriptod
false
null
0
jexs5y8
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jexs5y8/
false
1
t1_jexrbk3
I get coil whine too with a 3060 Ti running LLaMA 4-bit 7B. Interestingly, nothing happens when I run Stable Diffusion.
1
0
2023-04-04T16:43:17
megamell0
false
null
0
jexrbk3
false
/r/LocalLLaMA/comments/12avbud/weird_noise_coming_from_3070_gpu_when_generating/jexrbk3/
false
1
t1_jexr04b
Ick, that's disappointing. I was just using vanilla Sphinx Moth. [Generation parameters matter](https://www.reddit.com/r/LocalLLaMA/comments/12az7ah/comparing_llama_and_alpaca_presets/), but less than I expected. Model size is probably the biggest factor in the different outcomes here. I would get outputs like your...
4
0
2023-04-04T16:41:15
friedrichvonschiller
false
null
0
jexr04b
false
/r/LocalLLaMA/comments/12b9se3/writing_llama_prompts_for_long_custom_stories/jexr04b/
false
4
t1_jexpm12
[deleted]
2
0
2023-04-04T16:32:15
[deleted]
true
null
0
jexpm12
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jexpm12/
false
2
t1_jexp16w
What parameters are you using? This is what I get with LLaMA 7B 4-bit and NovelAI-StoryWriter:

> ### 40 WORD SYNOPSIS of Story Okono
>
> Okono is the newest hero introduced to League of Legends. His ultimate power electrocutes all enemies nearby. After Okono is added to the game, Nate plays a...
4
0
2023-04-04T16:28:31
redboundary
false
null
0
jexp16w
false
/r/LocalLLaMA/comments/12b9se3/writing_llama_prompts_for_long_custom_stories/jexp16w/
false
4
t1_jexo9gn
[deleted]
2
0
2023-04-04T16:23:38
[deleted]
true
null
0
jexo9gn
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jexo9gn/
false
2
t1_jexo84g
Thanks! I just started to wonder whether the goal I have set myself (personalized AI for RPGing) is achievable. After a week or so I have about 1MB of data transformed and checked by hand, and what you are saying is giving me new hope; otherwise I presume it would take me more than half a year to gather and prepare ...
1
0
2023-04-04T16:23:24
szopen76
false
null
0
jexo84g
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jexo84g/
false
1
t1_jexnwyw
I personally have not used it. But that’s because I don’t need to use any of its resources. (I have my own GPUs: 2x A6000, 2x 3090Ti, 4x3080TI)
1
0
2023-04-04T16:21:26
Gohan472
false
null
0
jexnwyw
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jexnwyw/
false
1
t1_jexnt27
I'm just running it from the terminal.
1
0
2023-04-04T16:20:45
2muchnet42day
false
null
0
jexnt27
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jexnt27/
false
1
t1_jexnjn9
There must be something wrong with your configuration. I'm running a 3080 12GB.

Command: `call python server.py --notebook --wbits 4 --groupsize 128 --listen --model vicuna-13b-GPTQ-4bit-128g --model_type llama`

Performance: Output generated in 13.17 seconds (15.11 tokens/s, 199 tokens, context 32) Output ...
3
0
2023-04-04T16:19:03
lineape
false
null
0
jexnjn9
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jexnjn9/
false
3
t1_jexn2a2
Did you get it running in Ooba? I can't seem to get it to work without a ton of errors. Won't load.
1
0
2023-04-04T16:15:55
-becausereasons-
false
null
0
jexn2a2
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jexn2a2/
false
1
t1_jexmz6g
Thanks for the info. Found more info on GitHub: [bigscience-workshop/petals: 🌸 Run 100B+ language models at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading](https://github.com/bigscience-workshop/petals) Sounds exactly like what OP asked for, at least in theory. Any experience with ...
2
0
2023-04-04T16:15:21
WolframRavenwolf
false
null
0
jexmz6g
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jexmz6g/
false
2
t1_jexmrpe
Thank you for your kind words and your engaging conversation. I enjoyed talking with you about intelligence, prediction, time, and physics. I think you have a very interesting and original argument, and I appreciate your willingness to consider alternative points of view. I learned a lot from our conversation, and I ho...
1
0
2023-04-04T16:13:59
friedrichvonschiller
false
null
0
jexmrpe
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jexmrpe/
false
1
t1_jexmdik
Bing's best counterargument to the above: Here is a possible way of writing a genial counterargument with a few points that may invalidate your sweeping generalizations, using known quantities and entities to the extent possible, rather than speculation: I appreciate your argument and the effort you put into it. Howe...
1
0
2023-04-04T16:11:26
friedrichvonschiller
false
2023-04-04T19:43:32
0
jexmdik
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jexmdik/
false
1
t1_jexm4q4
Predictive coding has interesting implications when crossed with the anthropic principle. Bing can't find anyone who's connected them, and we had a very long and deep discussion about it. So, as I have nothing to lose and nobody listening to me, I'll throw some ideas down in bits. Bing is hard-coded for skepticism ...
2
0
2023-04-04T16:09:51
friedrichvonschiller
false
2023-04-04T19:40:41
0
jexm4q4
false
/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/jexm4q4/
false
2
t1_jexltsu
That sounds logical to me again. Are you telling me that each model is unique, even when trained on the same data? That's almost like saying each model is unique like a human being, like the randomness itself is the birth of a new being... haha, just joking. I see now why it does not work.
1
0
2023-04-04T16:07:52
Ok-Scarcity-7875
false
null
0
jexltsu
false
/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/jexltsu/
false
1
t1_jexl6td
You cannot train from scratch using Petals, but you can fine-tune with it. Yeah, hivemind would be the only choice for massive distributed training.
2
0
2023-04-04T16:03:40
Gohan472
false
null
0
jexl6td
false
/r/LocalLLaMA/comments/12b6km8/we_need_to_create_open_and_public_model_and/jexl6td/
false
2
t1_jexl3qx
[deleted]
5
0
2023-04-04T16:03:06
[deleted]
true
null
0
jexl3qx
false
/r/LocalLLaMA/comments/12bgpci/whats_the_minimal_size_of_dataset_for_finetuning/jexl3qx/
false
5
t1_jexksna
But you already have and you're completely okay with it.
2
1
2023-04-04T16:01:05
redpandabear77
false
null
0
jexksna
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jexksna/
false
2
t1_jexkitb
Ah, I've seen that [llama.cpp](https://github.com/oobabooga/text-generation-webui/wiki/llama.cpp-models) has recently been added to oobabooga/text-generation-webui, but the limitations (no presets, smaller context) made me skip it, waiting for an improved implementation.
1
0
2023-04-04T15:59:21
WolframRavenwolf
false
null
0
jexkitb
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jexkitb/
false
1