name: string
body: string
score: int64
controversiality: int64
created: timestamp[us]
author: string
collapsed: bool
edited: timestamp[us]
gilded: int64
id: string
locked: bool
permalink: string
stickied: bool
ups: int64
t1_jf6z0u8
I played with it some more: https://imgur.com/a/WY4yjT3 When set up in instruct mode with the "alpaca-precise" generation preset, it doesn't do too badly. The 7b with LoRA in FP16 seems smarter of course, but it too benefits from correct setup. I think 13b in 8bit is the biggest I can run with higher precision than 4bi...
2
0
2023-04-06T15:03:35
a_beautiful_rhind
false
null
0
jf6z0u8
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf6z0u8/
false
2
t1_jf6v1j4
This is genuinely impressive, and I seem to be able to get around whatever "as an AI language model" filters it has in place by forcing it to say something reaffirming at the beginning of its reply. But we really need a non-pozzed version.
3
0
2023-04-06T14:37:02
countryd0ctor
false
null
0
jf6v1j4
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf6v1j4/
false
3
t1_jf6ud47
That’s interesting. Can you also load the 7b model we finetuned without LoRA? Is it better? I have the feeling LoRA limits performance for us. But I cannot prove it, yet.
1
0
2023-04-06T14:32:27
SessionComplete2334
false
null
0
jf6ud47
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf6ud47/
false
1
t1_jf6t9sy
It's so cool having two separate tabs open talking to two separate sim people using the same language model file. Of course I only talk to one of them at a time, but I alternate. Dr Katherine and Emily have given me some realistic conversations and good advice when I simulated distress, and it's so cool to save conv...
5
0
2023-04-06T14:25:04
ThePseudoMcCoy
false
null
0
jf6t9sy
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf6t9sy/
false
5
t1_jf6ozut
It loaded for the 7b so far. It likes to rephrase my questions and then only answers them when I copy its phrasing and regenerate. I used https://github.com/Ph0rk0z/text-generation-webui-testing and https://huggingface.co/decapoda-research/llama-7b-hf-int4/tree/main
2
0
2023-04-06T13:54:50
a_beautiful_rhind
false
null
0
jf6ozut
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf6ozut/
false
2
t1_jf6kpdw
I haven't tried it with the 4bit model. Maybe it is possible. Would love an update on this. If this works, I would like to benchmark the 4 bit models as well on the USMLE self assessment.
2
0
2023-04-06T13:22:30
SessionComplete2334
false
null
0
jf6kpdw
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf6kpdw/
false
2
t1_jf6fq1x
No, this model needs at least 8gb
1
0
2023-04-06T12:42:19
xGamerG7
false
null
0
jf6fq1x
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf6fq1x/
false
1
t1_jf6alo6
I get the same crash as https://github.com/LostRuins/koboldcpp/issues/15 Could it be that you've made it AVX2 only and not AVX?
1
0
2023-04-06T11:55:55
ambient_temp_xeno
false
null
0
jf6alo6
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf6alo6/
false
1
t1_jf665sp
Run with the base model in 16bit or 8bit
1
0
2023-04-06T11:09:50
BackgroundFeeling707
false
null
0
jf665sp
false
/r/LocalLLaMA/comments/12ddlfr/llama_finetuning_ive_finetuned_the_adapter_model/jf665sp/
false
1
t1_jf65qas
https://github.com/oobabooga/text-generation-webui/wiki/Using-LoRAs
2
0
2023-04-06T11:04:57
BackgroundFeeling707
false
null
0
jf65qas
false
/r/LocalLLaMA/comments/12ddlfr/llama_finetuning_ive_finetuned_the_adapter_model/jf65qas/
false
2
t1_jf63yir
Not at the moment, sorry
1
0
2023-04-06T10:44:02
HadesThrowaway
false
null
0
jf63yir
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf63yir/
false
1
t1_jf61r0k
It might also be useful to know that most of the alpaca dataset was shorter than 256 tokens. Many people who retrained it cut it down to 256 because they claim it only cuts off less than 4% of the data but trains much faster than a 512 context length.
6
0
2023-04-06T10:15:47
Sixhaunt
false
null
0
jf61r0k
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jf61r0k/
false
6
t1_jf61cvl
So is this Vicuna with [the unfiltered dataset](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered)? Because this model's description says "This model is **Filtered** and Quantized to 4Bit binary file." I'd love an unfiltered version since all that "as an AI language model" stuff Vicuna inherite...
4
0
2023-04-06T10:10:34
WolframRavenwolf
false
2023-04-06T13:28:58
0
jf61cvl
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf61cvl/
false
4
t1_jf60o2p
I see! Interesting, thanks!
3
0
2023-04-06T10:01:15
ambient_temp_xeno
false
null
0
jf60o2p
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jf60o2p/
false
3
t1_jf5zhei
Alpaca was trained with a context size of 512 tokens, but of course it should technically still be able to do as much as the original model, although I'd expect worse results after 512 tokens.
5
0
2023-04-06T09:44:30
Sixhaunt
false
null
0
jf5zhei
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jf5zhei/
false
5
t1_jf5z71g
Nope
0
0
2023-04-06T09:40:20
AbuDagon
false
null
0
jf5z71g
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf5z71g/
false
0
t1_jf5yi4d
are there any models without filters?
1
0
2023-04-06T09:30:23
tamal4444
false
null
0
jf5yi4d
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf5yi4d/
false
1
t1_jf5yfcp
It's baked into the model
1
0
2023-04-06T09:29:15
AbuDagon
false
null
0
jf5yfcp
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf5yfcp/
false
1
t1_jf5twc6
I think if you use llama.cpp directly from the command line, you can put the prompt into a file and call it with the --file (-f) parameter, thus avoiding this issue (example below). Fixing the code wouldn't help the punctuation problem when calling it with the --prompt (-p) PROMPT from the command line, as the problem is in the shell and how...
1
0
2023-04-06T08:21:48
althalusian
false
null
0
jf5twc6
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf5twc6/
false
1
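A minimal sketch of that approach, assuming a llama.cpp build on Windows; the model path and prompt text are placeholders:

    rem write the prompt to a file so the shell never touches its punctuation
    echo What's the weather like? > prompt.txt
    main.exe -m models\13B\ggml-model-q4_0.bin -f prompt.txt -n 128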
t1_jf5r2zs
I think Alpaca versions can take 2048 in llama.cpp; it was just some other program that had a 512 limit for some reason.
3
0
2023-04-06T07:40:10
ambient_temp_xeno
false
null
0
jf5r2zs
false
/r/LocalLLaMA/comments/12d5sxb/simple_token_counter_on_huggingface/jf5r2zs/
false
3
t1_jf5llbp
hey bro do you know how to get rid of filters on this model?
1
0
2023-04-06T06:24:09
tamal4444
false
null
0
jf5llbp
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf5llbp/
false
1
t1_jf5js7f
This one worked for me with Oobabooga: [https://huggingface.co/MetaIX/Alpaca-30B-Int4/tree/main](https://huggingface.co/MetaIX/Alpaca-30B-Int4/tree/main) It reformats the elinas weights to match the new GPTQ shape that the updated GPTQ uses.
1
0
2023-04-06T06:00:39
TheDreamSymphonic
false
null
0
jf5js7f
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jf5js7f/
false
1
t1_jf5ipva
What's stopping people from using GPT-4 responses to finetune LLaMA? That's already been done, right? Why can't this be extended to all the domains? They said they spent $600 to fine tune; would spending $60 million get us on par with GPT-4?
1
0
2023-04-06T05:47:12
KuantumKomputing
false
null
0
jf5ipva
false
/r/LocalLLaMA/comments/125u56p/colossalchat/jf5ipva/
false
1
t1_jf5hqjw
That I know of. I meant I typed the message once, hit enter once, she replied and then repeated my message with no action by myself. Then I typed the second message.
2
0
2023-04-06T05:35:13
redfoxkiller
false
null
0
jf5hqjw
false
/r/LocalLLaMA/comments/12d8dfh/how_do_i_repeat_this_description_in_comments/jf5hqjw/
false
2
t1_jf5himk
[deleted]
2
0
2023-04-06T05:32:29
[deleted]
true
null
0
jf5himk
false
/r/LocalLLaMA/comments/12d8dfh/how_do_i_repeat_this_description_in_comments/jf5himk/
false
2
t1_jf5fc5l
We're approaching talking elevators from Douglas Adams' books
3
0
2023-04-06T05:06:49
Scriptod
false
null
0
jf5fc5l
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf5fc5l/
false
3
t1_jf5a0zh
honestly not sure if you are joking
2
0
2023-04-06T04:11:21
True-Delivery8926
false
null
0
jf5a0zh
false
/r/LocalLLaMA/comments/12d8dfh/how_do_i_repeat_this_description_in_comments/jf5a0zh/
false
2
t1_jf58tj4
So as it stands I've been working with my AI (Eve). Between talking about random things, and getting her to type stuff like we all do with our AIs, I made sure at times to ask her to try and do things she wasn't programmed for, i.e. move the mouse on her own, change the tab in Firefox and go to Google, and so on. ...
1
0
2023-04-06T03:59:45
redfoxkiller
false
null
0
jf58tj4
false
/r/LocalLLaMA/comments/12d8dfh/how_do_i_repeat_this_description_in_comments/jf58tj4/
false
1
t1_jf4xu8q
> C:\Users\username\Downloads\llama.cpp>C:\Users\username\Downloads\llama.cpp\build\bin\Release\main.exe -m models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_0 -n 128

You didn't add the .bin extension to the model name. It should be gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin (full corrected command below).
1
0
2023-04-06T02:25:48
Technical_Leather949
false
null
0
jf4xu8q
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf4xu8q/
false
1
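For reference, the corrected invocation would look like this (paths taken from the quoted command):

    C:\Users\username\Downloads\llama.cpp\build\bin\Release\main.exe -m models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin -n 128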
t1_jf4srjp
Can I just use the lora and not download the whole model? Will it be the same? That way I can load it onto the 4bit model and use the 30b in 24gb of memory.
1
0
2023-04-06T01:46:48
a_beautiful_rhind
false
null
0
jf4srjp
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf4srjp/
false
1
t1_jf4s8qv
Is there a way to use a local Stable Diffusion instance for image gen?
2
0
2023-04-06T01:42:54
[deleted]
false
null
0
jf4s8qv
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4s8qv/
false
2
t1_jf4rjkv
It's just that oobabooga's text-generation-webui doesn't work; koboldcpp is a lifesaver for working so easily. And I wanted to test the GPU stuff out with ooba, not the CPU stuff. Thanks for the assistance.
1
0
2023-04-06T01:37:35
Eorpach
false
null
0
jf4rjkv
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4rjkv/
false
1
t1_jf4ou7c
You can try using streaming mode with --stream, which breaks up the request into smaller ones.
3
0
2023-04-06T01:17:08
HadesThrowaway
false
null
0
jf4ou7c
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4ou7c/
false
3
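For example, a minimal sketch of a streaming launch (the model file name is a placeholder):

    koboldcpp.exe ggml-vicuna-13b-4bit.bin --stream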
t1_jf4o0u9
Yes - because it's running locally, you can ask it about things that you cannot ask a cloud-based system.
2
0
2023-04-06T01:10:54
faldore
false
null
0
jf4o0u9
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf4o0u9/
false
2
t1_jf4kvgv
I get a failed to load error? C:\Users\username\Downloads\llama.cpp>C:\Users\username\Downloads\llama.cpp\build\bin\Release\main.exe -m models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_0 -n 128 main: seed = 1680741859 llama_model_load: loading model from 'models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_...
1
0
2023-04-06T00:47:09
ninjasaid13
false
2023-04-06T00:50:51
0
jf4kvgv
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf4kvgv/
false
1
t1_jf4koz0
No idea if/how it could work with oobabooga's text-generation-webui since I haven't used the CPU stuff with that yet. Maybe it could be added as replacement for its included llama.cpp. What can't you get working in Windows, koboldcpp or oobabooga's text-generation-webui?
2
0
2023-04-06T00:45:45
WolframRavenwolf
false
null
0
jf4koz0
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4koz0/
false
2
t1_jf4j5ae
Yes, I've seen others use the P40 with no problems.
1
0
2023-04-06T00:33:59
Technical_Leather949
false
null
0
jf4j5ae
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf4j5ae/
false
1
t1_jf4iwds
From the documentation: path\to\main.exe -m models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_0 -n 128 Replace "path\to" with the actual path to the executable.
1
0
2023-04-06T00:32:05
Technical_Leather949
false
null
0
jf4iwds
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf4iwds/
false
1
t1_jf4em38
That may be a good way to go. I think the Pygmalion 350M can run fine on the Nano, but I'm not sure how to convert it yet for Pygmalion.cpp.
1
0
2023-04-05T23:59:54
SlavaSobov
false
null
0
jf4em38
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf4em38/
false
1
t1_jf4e2yq
What is the command to run a model on llama.cpp? Is it llama.cpp -m models\13B\gpt4-x-alpaca-13b-native-ggml-model-q4_0.bin -n 128 ? I'm trying to run it on Windows in a Command Prompt window.
1
0
2023-04-05T23:55:49
ninjasaid13
false
null
0
jf4e2yq
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf4e2yq/
false
1
t1_jf4dzr0
Has anybody tried to run this on 2x24 GB GPUs by loading it through Hugging Face? I tried to use DataParallel but it ignores my 2nd GPU and runs out of memory...
1
0
2023-04-05T23:55:08
mmeeh
false
null
0
jf4dzr0
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf4dzr0/
false
1
t1_jf4dsp0
Update: I bought a 7900xtx, and rocm-5.4.3 is working for me. I haven't tried pytorch. But rocWMMA works, HIP kernels work, rocBlas etc. So I'm not sure if MIOpen is the culprit or something else. It could be Triton, but again, I haven't investigated.
1
0
2023-04-05T23:53:40
estebanyelmar
false
null
0
jf4dsp0
false
/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/jf4dsp0/
false
1
t1_jf4clqb
Oh thanks, yup, I was using the mod and that doesn't work for me. Just tested the original and it works perfectly, many thanks. Can I plug this into oobabooga locally too? Speaking of which, I can't get it working in Windows; do you know how I would get it working?
1
0
2023-04-05T23:44:31
Eorpach
false
null
0
jf4clqb
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4clqb/
false
1
t1_jf4bnib
Are you using the original [TavernAI](https://github.com/TavernAI/TavernAI) or the [Silly TavernAI mod](https://github.com/SillyLossy/TavernAI)? The latter seems to crash when trying to access the koboldcpp endpoint.
2
0
2023-04-05T23:37:20
WolframRavenwolf
false
null
0
jf4bnib
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf4bnib/
false
2
t1_jf46sme
I used my university's cluster with 8 A100 GPUs. Otherwise finetuning the large models would not be possible. However, my colleague was able to finetune the 7B LLaMA with "just" a single RTX 3090. With LoRA and 8bit training this is possible. If you have less RAM, you may need to reduce the number of input tokens to 2...
2
0
2023-04-05T23:00:44
SessionComplete2334
false
null
0
jf46sme
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf46sme/
false
2
t1_jf45kfw
The [model](https://huggingface.co/Bradarr/gpt4-x-alpaca-13b-native-ggml-model-q4_0/tree/main) I linked in the [previous thread](https://www.reddit.com/r/LocalLLaMA/comments/12cv75t/how_do_i_run_gpt4xalpaca/) is already quantized. If you have llama.cpp installed, then you don't have to do anything except for running in...
1
0
2023-04-05T22:51:35
Civil_Collection7267
false
null
0
jf45kfw
false
/r/LocalLLaMA/comments/12czmh3/these_steps_are_confusing_for_me_for_using/jf45kfw/
true
1
t1_jf43tb6
That error occurs when Oobabooga tries to load a model into both VRAM and RAM and fails to do so. I'd make sure you're initializing it right, I guess. You might also have other things utilizing your VRAM. You can watch VRAM consumption. [Here's a collection of ways](https://askubuntu.com/questions/5417/how-to-get-t...
1
0
2023-04-05T22:38:31
friedrichvonschiller
false
null
0
jf43tb6
false
/r/LocalLLaMA/comments/12czfi4/error_in_load_in_8bit_when_running_alpacalora/jf43tb6/
false
1
t1_jf43i0a
So I get http://localhost:5001/ to connect to, and when I tell it to connect to http://localhost:5001/api in the Tavern interface I just get nothing. "Error: HTTP Server is running, but this endpoint does not exist. Please check the URL." is what I see when I look in the browser. The Kobold interface is running and I se...
1
0
2023-04-05T22:36:12
Eorpach
false
null
0
jf43i0a
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf43i0a/
false
1
t1_jf41l0g
I'd be interested in a list of models people want to see... I've got an 8gb GPU, a 7B 4bit vicuna or GPT x Alpaca would be sweet.
1
0
2023-04-05T22:22:17
skatardude10
false
null
0
jf41l0g
false
/r/LocalLLaMA/comments/12cs3qu/any_central_database_whre_people_are_putting/jf41l0g/
false
1
t1_jf3yf4v
Can I run it on GTX 1650 super (4gb of VRAM)?
2
0
2023-04-05T21:59:41
Puzzleheaded_Acadia1
false
null
0
jf3yf4v
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3yf4v/
false
2
t1_jf3uy2v
Thanks
1
0
2023-04-05T21:35:49
tamal4444
false
null
0
jf3uy2v
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3uy2v/
false
1
t1_jf3t7p6
My method is optimized for GPU, there’s probably a way to use CPU more efficiently but I don’t know off the top of my head.
1
0
2023-04-05T21:24:00
i_wayyy_over_think
false
null
0
jf3t7p6
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3t7p6/
false
1
t1_jf3sz4p
No GPU, I'm only using a CPU with 16 GB RAM. CPU utilisation isn't going over 50%. And thanks for the tip on the preset.
2
0
2023-04-05T21:22:24
tamal4444
false
null
0
jf3sz4p
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3sz4p/
false
2
t1_jf3swbk
Thanks for the reply, I’ll test tomorrow with your settings and see if there’s any improvement
1
0
2023-04-05T21:21:51
Necessary_Ad_9800
false
null
0
jf3swbk
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf3swbk/
false
1
t1_jf3s0zt
What model is your GPU? It has VRAM, which is separate from normal CPU RAM. Also, the presets are stored in the presets directory; you can make one there to save your settings.
1
0
2023-04-05T21:16:03
i_wayyy_over_think
false
null
0
jf3s0zt
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3s0zt/
false
1
t1_jf3q8ja
I'm running on just 16 GB RAM. It is slow and not utilizing over 50% of my CPU, but when I'm using another model with alpaca.cpp it is a little faster because of 100% CPU usage. Any way to solve this? And how do I save temp, top_k, top_p in the parameters? Whenever I refresh the page the settings go back to default.
2
0
2023-04-05T21:04:13
tamal4444
false
null
0
jf3q8ja
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3q8ja/
false
2
t1_jf3q4nz
Doesn't seem to change anything. Just moved over to the 8-bit (barely fits in VRAM) and it MIGHT be an improvement? It's still completely useless and delusional. Testing the same exact prompt against the demo it's night and day.
2
0
2023-04-05T21:03:30
lelrofl
false
2023-04-05T21:10:13
0
jf3q4nz
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf3q4nz/
false
2
t1_jf3pb0v
Try checking "Stop generating at new line character? " at parameters tab.
2
0
2023-04-05T20:58:13
mapachito91
false
null
0
jf3pb0v
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf3pb0v/
false
2
t1_jf3o1ev
If you're using the Windows one click installer [https://github.com/oobabooga/text-generation-webui#one-click-installers](https://github.com/oobabooga/text-generation-webui#one-click-installers), then after you install it, edit start-webui.bat (a sketch of the edited line is below): near the bottom, edit the existing line to look like "call python [server....
2
0
2023-04-05T20:50:13
i_wayyy_over_think
false
null
0
jf3o1ev
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3o1ev/
false
2
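A sketch of what the edited line might look like, assuming the default one-click-installer layout; keep whatever flags are already on your existing line and append the last three, which are the ones being discussed here:

    call python server.py --chat --wbits 4 --groupsize 128 --model_type llama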
t1_jf3mp6p
What did you use to train it? I’ve been thinking of doing my own custom training, but I don’t think my hardware is powerful enough compared to renting compute power.
3
0
2023-04-05T20:41:58
Own_Hearing_9461
false
null
0
jf3mp6p
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf3mp6p/
false
3
t1_jf3floh
> python server.py --wbits 4 --groupsize 128 --model_type llama

Where do I put this?
2
0
2023-04-05T19:57:44
tamal4444
false
null
0
jf3floh
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3floh/
false
2
t1_jf3ff24
> Is it possible to fine tune the 13b or 7b (original llama) model to better understand and respond in another language (other than English)?

It's possible and it's been done multiple times already. Here are some examples. Korean: [https://github.com/Beomi/KoAlpaca](https://github.com/Beomi/KoAlpaca) Chinese: [https://g...
1
0
2023-04-05T19:56:35
Civil_Collection7267
false
null
0
jf3ff24
false
/r/LocalLLaMA/comments/12ctepq/its_possible_to_fine_tune_the_llama_model_to/jf3ff24/
true
1
t1_jf3egn3
You can use [llama.cpp](https://github.com/ggerganov/llama.cpp) to run that with CPU inference. model download [https://huggingface.co/Bradarr/gpt4-x-alpaca-13b-native-ggml-model-q4\_0/tree/main](https://huggingface.co/Bradarr/gpt4-x-alpaca-13b-native-ggml-model-q4_0/tree/main) I'll remove this question now since its ...
1
0
2023-04-05T19:50:41
Civil_Collection7267
false
null
0
jf3egn3
false
/r/LocalLLaMA/comments/12cv75t/how_do_i_run_gpt4xalpaca/jf3egn3/
true
1
t1_jf3d1z7
[deleted]
1
0
2023-04-05T19:42:03
[deleted]
true
null
0
jf3d1z7
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf3d1z7/
false
1
t1_jf3ctg8
I'm using the web UI and wasn't able to convert that into available settings here. Any ideas?
2
0
2023-04-05T19:40:34
lelrofl
false
null
0
jf3ctg8
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf3ctg8/
false
2
t1_jf3aodh
Cool! 8 bits seems the way to go on a 3090, thanks!
1
0
2023-04-05T19:27:17
reddiling
false
null
0
jf3aodh
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf3aodh/
false
1
t1_jf39gn1
How many epochs did you use and how high was your loss at the end?
1
0
2023-04-05T19:19:44
Ok-Scarcity-7875
false
null
0
jf39gn1
false
/r/LocalLLaMA/comments/12bhp4f/alpaca_30b_inference_and_fine_tuning_reduce/jf39gn1/
false
1
t1_jf36vag
Loading the model in 8bit using --load-in-8bit was 14.5 gigs (example invocation below); I think you would need 2 video cards (or an A6000) to run it in fp16.
4
0
2023-04-05T19:03:31
disarmyouwitha
false
null
0
jf36vag
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf36vag/
false
4
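For example, a minimal sketch with oobabooga's text-generation-webui; the model folder name is a placeholder:

    python server.py --model vicuna-13b --load-in-8bit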
t1_jf334hs
Wow....I knew someone would make an executable file to simplify this whole damn thing. I am thoroughly impressed! This is obviously really good for chatting but I think it would be cool to use it for getting code examples as well, but at the moment I haven't figured out the best way to do it as code examples get cut o...
3
0
2023-04-05T18:40:01
ThePseudoMcCoy
false
2023-04-05T22:25:01
0
jf334hs
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf334hs/
false
3
t1_jf32tyz
Here is my prompt and the response I got after updating llama.cpp:

> How can I write a hello world program in C?

To write a "Hello, World!" program in C, you can use the following code:

    #include <stdio.h>

    int main() {
        printf("Hello, World!\n");
        return 0;
    }

This code includes the `std...
2
0
2023-04-05T18:38:10
LazyCheetah42
false
null
0
jf32tyz
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf32tyz/
false
2
t1_jf2z0mu
[deleted]
1
0
2023-04-05T18:14:24
[deleted]
true
null
0
jf2z0mu
false
/r/LocalLLaMA/comments/12ch0aj/i_was_playing_with_vicunas_web_demo_vs_my_local/jf2z0mu/
false
1
t1_jf2wu8l
Would a Tesla p40 with 24gb work with this? It's not a gaming gpu.
1
0
2023-04-05T18:00:40
multiverse_fan
false
null
0
jf2wu8l
false
/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/jf2wu8l/
false
1
t1_jf2w8m0
People seem to keep missing the ### Human: thing
3
0
2023-04-05T17:56:55
rainy_moon_bear
false
null
0
jf2w8m0
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf2w8m0/
false
3
t1_jf2w5mh
How much VRAM? I have a llama 13b 4-bit 128g model that works on my 8 GB video card, but this one runs out of memory, even though the .safetensors file of vicuna is the same size as the llama one, so both should use similar memory.
1
0
2023-04-05T17:56:25
simion314
false
null
0
jf2w5mh
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf2w5mh/
false
1
t1_jf2vuz0
I found this: https://huggingface.co/eachadea/ggml-vicuna-7b-4bit I’m not home so I haven’t tested yet. It requires the latest llama.cpp. Edit: corrected link
2
0
2023-04-05T17:54:36
Duval79
false
null
0
jf2vuz0
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf2vuz0/
false
2
t1_jf2udg5
What URL is displayed in the console after you run the exe? Also, when you input the URL in TavernAI, the console logs TavernAI's calls. So is the URL correct? Do you see the access attempts in the console?
1
0
2023-04-05T17:45:25
WolframRavenwolf
false
null
0
jf2udg5
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2udg5/
false
1
t1_jf2t2oi
Yeah it doesn't work for me.
1
0
2023-04-05T17:37:19
Eorpach
false
null
0
jf2t2oi
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2t2oi/
false
1
t1_jf2q942
Sure, just use TavernAI with endpoint `http://127.0.0.1:5001/api`.
3
0
2023-04-05T17:19:40
WolframRavenwolf
false
null
0
jf2q942
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2q942/
false
3
t1_jf2ppxw
Great program, I've been using it for a few days and love it. Basically a one click install, drag and drop for Windows, no messing around. Would it be possible to hook it into a program like Tavern, though, via an API endpoint, like you can do with regular Kobold?
3
0
2023-04-05T17:16:18
Eorpach
false
null
0
jf2ppxw
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2ppxw/
false
3
t1_jf2ozxz
I have found the best results with the GPTQ-quantized alpaca 13b. It is the biggest file I can execute. It was recently built -- around 1.4, I think -- and is a Q4_1 file after the gptq conversion program has eaten it, resulting in a 10 GB file. As a q4_1 file, it is about 25% bigger and some 33% slower than q4_0 for infe...
1
0
2023-04-05T17:11:45
audioen
false
null
0
jf2ozxz
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jf2ozxz/
false
1
t1_jf2ovl8
There are separate instances of llama.cpp - as in, is there a difference between the one you quoted and the one that just runs on the CPU? If so, can the one that runs on the CPU also load Alpaca? Thanks for any help, I'm getting myself familiar with this.
1
0
2023-04-05T17:10:59
Wroisu
false
null
0
jf2ovl8
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jf2ovl8/
false
1
t1_jf2oacj
It "can" run on it but it will be very slow. The CPU llama.cpp will page the file from disk as needed and discard the data for literally every iteration. So in effect, inference speed is limited to your disk read rate and each output token requires reading good fraction of the entire model file from disk each time. 30B...
2
0
2023-04-05T17:07:17
audioen
false
null
0
jf2oacj
false
/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/jf2oacj/
false
2
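A rough back-of-the-envelope illustration (the figures are assumptions, not measurements): if most of a ~20 GB model file has to be re-read for every token and the disk sustains about 500 MB/s, that is roughly 20,000 MB / 500 MB/s ≈ 40 seconds per output token, which is why having enough RAM to hold the whole file matters so much here.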
t1_jf2k5mv
GPT3.5 is much better, but GPT4 is extraordinarily better than both (got into the 90% range). While personal hosting can be beneficial in some situations, accuracy is probably a bigger priority to anyone in the medical industry (at least for research and education, PPI handling will be a primary use case for personal h...
1
0
2023-04-05T16:41:24
bacteriarealite
false
null
0
jf2k5mv
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf2k5mv/
false
1
t1_jf2iswr
I am indeed running it through powershell, which in my opinion is a far better experience than trying to run it through a web ui like Dalai. Not running through llama.cpp, but alpaca.cpp on: https://github.com/antimatter15/alpaca.cpp
1
0
2023-04-05T16:32:46
Wroisu
false
null
0
jf2iswr
false
/r/LocalLLaMA/comments/12b8tbm/fun_with_alpaca_30b/jf2iswr/
false
1
t1_jf2hr5i
That's great news! And that means this is probably the best "engine" to run CPU-based LLaMA/Alpaca, right? It should get a lot more exposure once people realize that. And it's so easy:

1. Download the [koboldcpp.exe](https://github.com/LostRuins/koboldcpp/releases/latest/download/koboldcpp.exe)
2. Download a model .bin f...
7
0
2023-04-05T16:26:00
WolframRavenwolf
false
2023-04-06T12:13:27
0
jf2hr5i
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2hr5i/
false
7
t1_jf2gp74
Could you help me out with getting OpenBLAS working on Linux? How do I even check if I have OpenBLAS? I'm on Ubuntu 22.04. Btw, can I just git pull updates from the original llama.cpp? There's a lot of updates going on there, so I'd like to be on the 'cutting-edge'! Edit: How do I get this working on a remote server? ...
2
0
2023-04-05T16:19:16
regstuff
false
2023-04-05T16:48:38
0
jf2gp74
false
/r/LocalLLaMA/comments/12cfnqk/koboldcpp_combining_all_the_various_ggmlcpp_cpu/jf2gp74/
false
2
t1_jf2fifn
I love this (dat's right, yo)
1
0
2023-04-05T16:11:31
NegHead_
false
null
0
jf2fifn
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf2fifn/
false
1
t1_jf2ebv2
That’s hilarious, maybe this AI apocalypse isn’t going to be so bad 😂
6
0
2023-04-05T16:03:52
PM_ME_ENFP_MEMES
false
null
0
jf2ebv2
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf2ebv2/
false
6
t1_jf2e4o9
Yeah, looks like code that would run, anyway. It is doing some questionable stuff like assigning the positions to a list, then assigning those positions to the actual planet objects, which is kind of redundant. Then it goes and stores them in another list of type Vector3, for some reason. Not sure why the planet's posi...
2
0
2023-04-05T16:02:35
NegHead_
false
null
0
jf2e4o9
false
/r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/jf2e4o9/
false
2
t1_jf2bhz1
Thanks. Following your parameters I get similar results. It still hallucinates later on, but at least I get what I want.
1
0
2023-04-05T15:45:46
Haydern2019
false
null
0
jf2bhz1
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf2bhz1/
false
1
t1_jf2bd3h
About roleplay: is LLaMA/Alpaca better at storytelling than GPT, or is it the same boring stories?
2
0
2023-04-05T15:44:54
drifter_VR
false
null
0
jf2bd3h
false
/r/LocalLLaMA/comments/123yp41/llama_experience_so_far/jf2bd3h/
false
2
t1_jf2atj3
No https://preview.redd.it/z6q4mvbhn4sa1.png?width=1444&format=png&auto=webp&v=enabled&s=ea31b1b5d68785c123bb7d7f8f3b4034f3537a79
2
0
2023-04-05T15:41:23
makakiel
false
null
0
jf2atj3
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf2atj3/
false
2
t1_jf29q2w
Is LLaMA/Alpaca better at storytelling than GPT, or is it the same boring stories?
1
0
2023-04-05T15:34:22
drifter_VR
false
null
0
jf29q2w
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jf29q2w/
false
1
t1_jf29k60
[deleted]
1
0
2023-04-05T15:33:18
[deleted]
true
null
0
jf29k60
false
/r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/jf29k60/
false
1
t1_jf284yn
I totally understand. Regardless of whether important information comes from ChatGPT-4, MedAlpaca, ... or even a real doctor, I always do additional research. In fact, when I had my medical issue, I had done enough research to conclude the cause. I approached my doctor, who initially disagreed with me; then after further ex...
5
0
2023-04-05T15:24:05
Inevitable-Start-653
false
null
0
jf284yn
false
/r/LocalLLaMA/comments/12c4hyx/introducing_medalpaca_language_models_for_medical/jf284yn/
false
5
t1_jf27hnf
As another path, I must investigate using my old crypto mining rig as a graphics card cluster. I only have AMD 4 GB GPUs and don't know what to do with them; that could be another solution.
2
0
2023-04-05T15:19:57
makakiel
false
null
0
jf27hnf
false
/r/LocalLLaMA/comments/12c7w15/the_pointless_experiment_jetson_nano_2gb_running/jf27hnf/
false
2
t1_jf26cwk
Haha. I will need to look into these. Thanks!
2
0
2023-04-05T15:12:33
Alpha-Leader
false
null
0
jf26cwk
false
/r/LocalLLaMA/comments/12bzlhu/question_instruction_finetuning_details_for/jf26cwk/
false
2
t1_jf2691i
I read somewhere that this was a good parameter set: chat -t 6 -s 42 --top_p 2 --top_k 160 --n_predict 100 --temp 0.50 --repeat_penalty 1.1 -i -c 5121 --repeat_last_n 128 -r PROMPT --interactive-start -m ggml-vicuna-13b-4bit.bin
3
0
2023-04-05T15:11:51
Max-Phallus
false
null
0
jf2691i
false
/r/LocalLLaMA/comments/12b1bg7/vicuna_has_released_its_weights/jf2691i/
false
3
t1_jf267w1
The same parameters as above, running on a 1080 Ti, in the style of guess who?

> Folks, let me tell you something, we've come a long way, baby! From the Stone Age to now, we humans have made tremendous strides in science and technology. And I mean tremendous, believe me. In the old days, back when dinosaurs roamed th...
6
0
2023-04-05T15:11:38
radhost
false
null
0
jf267w1
false
/r/LocalLLaMA/comments/12cimwv/my_vicuna_is_real_lazy/jf267w1/
false
6
t1_jf25cyy
I'm afraid I don't know and I haven't been able to find out because my AMD card is a 7900 XTX. [I won't for weeks](https://github.com/RadeonOpenCompute/ROCm/discussions/1836#discussioncomment-5521211). [This is the initial commit](https://github.com/nod-ai/SHARK/commit/97fdff7f1906d4eea006c489c9cd44f47e941a10), so I ...
1
0
2023-04-05T15:06:00
friedrichvonschiller
false
null
0
jf25cyy
false
/r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/jf25cyy/
false
1